insights-core

Red Hat Insights

Version: 3.0
Date: Mar 04, 2020

Introduction

The Red Hat Insights application is built upon four somewhat independent subsystems:

  1. the collection component;

  2. the processing engine component;

  3. the plugin components; and

  4. the customer interface component.

The collection component is called Red Hat Insights Client (“Client”) and is part of the Red Hat Enterprise Linux distribution. It is installed via RPM onto a host system where it collects information to send to the infrastructure engine for analysis. The processing engine component is called Insights Core (“Engine”) and runs on Red Hat internal systems to process the information collected by the Client and provides results to the customer interface component. The Engine processes the information by extracting each unique set of data, parsing the data into facts, and analyzing the facts via algorithms. Parsing and analysis of the information is performed in a collection of plugin components. The parsing and combining is performed by the Parser and Combiner plugins, and the analysis is performed by Rule plugins (collectively “Plugins”). The results of the analysis are presented to the user via the user interface component (“UI”). The figure below provides a graphical overview of the components and the information flow.

Figure: Overview of the Insights Components and Information Flow

Insights Client - Collection

Collection of information is performed by the Client component of Red Hat Insights. The Client RPM is installed on one or more host systems where data collection is to be performed. A host system may be a physical system or a virtual machine. Information collected by the Client is filtered to remove sensitive information and then sent to the Engine for analysis. Collection is typically performed daily but may be configured to run on other schedules. It may also be disabled/enabled by the system administrator.

More information about the Client component can be found at the Red Hat Insights Portal, and source code is available in the Red Hat Insights Client GitHub Project.

Red Hat Insights Core - Data Analysis Engine

Once host information has been collected by the Client it is transferred to the Engine for processing. The Engine is a SaaS application hosted at Red Hat. The Engine processes information as it is received and provides the results to the Customer Interface for review by the customer.

The Engine begins processing by unarchiving the information and identifying each type of information included in the Client upload. The Engine then configures the Plugins into a map/reduce network and executes them to analyze the data. The workflow consists of the following steps:

  1. parsing the data into facts specific to the system (such as RPMS installed, CPU details, storage details, etc.);

  2. combining certain facts where there are multiple sources, or differences across platforms (this provides a more consistent set of facts for the analysis step); and

  3. analyzing the facts to determine the results.

Each Plugin provides results which are all collected by the Engine and upon job completion the results are made available to the Customer Interface for review. The Engine evaluates the input data and only invokes Plugins that are necessary to process the data that is present. The Engine also optimizes the Rules so that they are only invoked in the workflow if the necessary facts are present.
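The three workflow steps above can be sketched in plain Python. This is a toy illustration only, not the insights-core API; every name in it is invented for the example:

```python
# Toy illustration of the Engine workflow: parse raw data into
# facts, combine facts from multiple sources, then analyze them.
# None of these names are part of the insights-core API.

def parse(raw_uptime):
    # step 1: turn raw text into a structured fact
    days = int(raw_uptime.split()[0])
    return {"uptime_days": days}

def combine(primary_facts, secondary_facts):
    # step 2: merge facts from two sources, preferring the primary
    merged = dict(secondary_facts)
    merged.update(primary_facts)
    return merged

def analyze(facts):
    # step 3: apply a condition to the combined facts
    if facts["uptime_days"] > 365:
        return "FAIL: system has not been rebooted in over a year"
    return "PASS"

facts = combine(parse("400 days"), {"uptime_days": 10})
result = analyze(facts)
```

In the real Engine, Parsers, Combiners, and Rules play these three roles, and the framework wires them together from their declared dependencies.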

More information about the Red Hat Insights Core component can be found on Red Hat Insights Core GitHub Project.

Plugin Components - Parsing and Fact Analysis

The Engine coordinates analysis of the information via the Plugins. There are three types of plugins used in the analysis workflow: Parsers, Combiners, and Rules. Parsers parse data into facts. Combiners aggregate facts into higher-level facts. Rules analyze facts.

Parser Plugins

Parser plugins are responsible for analyzing the raw data and converting it into usable facts that can be evaluated by the Combiners and Rules. Each Parser plugin is typically responsible for parsing a specific set of data. For instance, the Mount parser plugin (insights.parsers.mount.Mount) parses the output of the mount command, and the FSTab parser plugin (insights.parsers.fstab.FSTab) parses the contents of the /etc/fstab file.

Combiner Plugins

Combiner plugins aggregate facts to present a more consistent view to Rules. For instance, the Red Hat Enterprise Linux release number (e.g. 6.8 or 7.3) is available in the file /etc/redhat-release and may also be derived from the output of the command uname -a. The redhat_release Combiner plugin (insights.combiners.redhat_release()) looks at the facts from both Parsers (insights.parsers.redhat_release.RedhatRelease and insights.parsers.uname.Uname) to determine the major and minor release numbers. The Combiner uses the best source of information first, falling back to the second source if the first is not available. This allows Rules to rely on this Combiner as the single source of the fact instead of having to examine the facts from two different Parsers.
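The fallback behavior a combiner provides can be sketched as ordinary Python. This is an illustrative sketch only: a real combiner is written with the insights @combiner decorator and receives actual parser instances, whereas here the inputs are plain (major, minor) tuples:

```python
def combined_release(redhat_release=None, uname=None):
    """Prefer the fact from the /etc/redhat-release parser; fall back
    to the fact derived from uname if the first is unavailable.

    Inputs are plain (major, minor) tuples standing in for parser
    results; both names are illustrative, not the insights API.
    """
    if redhat_release is not None:
        return redhat_release  # preferred source
    if uname is not None:
        return uname           # fallback source
    raise ValueError("release version not available from any source")
```

A Rule can then depend on this single function instead of checking two different parsers itself.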

Rule Plugins

Rule plugins perform the analysis of the facts made available by the Parsers and Combiners. Rules may examine any number of facts to determine whether a symptom or condition is present in a system, or whether one is likely to occur in the future. For instance, if a particular ssh vulnerability is present when using Red Hat Enterprise Linux 7.1 with a particular setting in the file /etc/ssh/sshd_config, a Rule could look at the facts from the Red Hat Release Combiner to determine whether the system is running 7.1, and then check facts from the sshd_config file to determine whether the setting is present. If both conditions hold, the Rule reports the result, which is displayed with information regarding the vulnerability and how it can be resolved on the specific system. The results from all Rules are accumulated and consolidated by the Engine and provided to the Customer Interface.
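The decision logic of such a Rule can be sketched as follows. This is a hypothetical example: the setting name PermitRootLogin and the error key are invented for illustration, and a real rule would use the @rule decorator with the actual Combiner and Parser classes rather than plain arguments:

```python
def report(release, sshd_options):
    """Return a finding if the release is RHEL 7.1 and a risky
    sshd_config setting is present; otherwise return None.

    release is a (major, minor) tuple and sshd_options a dict of
    sshd_config settings -- both stand-ins for real parser facts.
    """
    if release == (7, 1) and sshd_options.get("PermitRootLogin") == "yes":
        return {"error_key": "SSHD_VULNERABLE",
                "release": release,
                "setting": "PermitRootLogin yes"}
    return None
```

Only when both facts match does the rule produce a result; otherwise the Engine records nothing for it.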

Customer Interface - Analysis Results

The Customer Interface provides views of the Insights results via the Red Hat Customer Portal. Multiple views are provided covering all of a customer's systems reporting to Insights. Information is provided regarding the results, including metadata related to the findings, an explanation of the findings, and information related to correction of identified conditions and/or problems. The Customer Interface provides many customization options to meet each customer's specific needs.

Quickstart Insights Development

Insights-core is the framework upon which Red Hat Insights rules are built and delivered. The basic purpose is to apply “rules” to a set of files collected from a system at a given point in time.

Insights-core rule “plugins” are written in Python. The rules follow a “MapReduce” approach, dividing the logic between “mapping” and “reducing” methods. This is a convenient approach where a rule’s logic takes place in two steps. First, there is a “gathering of facts” (the map phase) followed by logic being applied to the facts (the reduce phase).

Prerequisites

All Plugin code is written in Python and all Insights libraries and framework code necessary for development and execution are stored in Git repositories. Before you begin make sure you have the following installed:

  • Python 3 (recommended Python 3.6)

  • Git

  • Python Virtualenv

  • Python PIP

Further requirements can be found in the README.rst file associated with the insights-core project.

Hint

You might also need to install gcc to be able to build some python modules, unzip to be able to run pytest on the insights-core repo, and pandoc to build Insights Core documentation.

Rule Development Setup

In order to develop rules to run in Red Hat Insights you’ll need Insights Core (http://github.com/RedHatInsights/insights-core) as well as your own rules code. The commands below assume the following sample project directory structure containing the insights-core project repo and your directory and files for rule development:

project_dir
├── insights-core
└── myrules
    ├── hostname_rel.py
    └── bash_version.py

Insights Core Setup

Clone the project:

[userone@hostone project_dir]$ git clone git@github.com:RedHatInsights/insights-core.git

Or, alternatively, using HTTPS:

[userone@hostone project_dir]$ git clone https://github.com/RedHatInsights/insights-core.git

Initialize a virtualenv (depending on your python installation you may need to specify a specific version of python using the -p option and the name or path of the python you want to use. For example -p python3 or -p /usr/bin/python3.6):

[userone@hostone project_dir]$ cd insights-core
[userone@hostone project_dir/insights-core]$ virtualenv -p python3.6 .

Verify that you have the desired version of python by enabling the virtualenv:

[userone@hostone project_dir/insights-core]$ source bin/activate
(insights-core)[userone@hostone project_dir/insights-core]$ python --version

You can also ensure that your virtualenv is set by checking which python is being used:

(insights-core)[userone@hostone project_dir/insights-core]$ which python
project_dir/insights-core/bin/python

Depending upon your environment, your prompt may also indicate you are using a virtualenv. In the example above it is indicated by (insights-core).

Next install the insights-core project and its dependencies into your virtualenv:

(insights-core)[userone@hostone project_dir/insights-core]$ bin/pip install -e .[develop]

Now check that your environment is set up correctly by invoking the insights-run command. You should see the help information for insights-run:

(insights-core)[userone@hostone project_dir/insights-core]$ insights-run --help

When you are finished working on this project you can deactivate your virtualenv using the deactivate command.

Tip

If you don’t plan on digging into the Insights Core code you can skip cloning the insights-core repo and follow these steps:

  1. Create a virtualenv in your myrules directory.

  2. Activate and check your virtualenv as specified above

  3. Install insights-core using the command pip install insights-core

  4. Check insights installation with insights-run --help

If you use this method make sure you periodically update insights core in your virtualenv with the command pip install --upgrade insights-core.

Rule Development

From your project root directory create a directory for your rules:

(insights-core)[userone@hostone project_dir/insights-core]$ cd ..
(insights-core)[userone@hostone project_dir]$ mkdir myrules
(insights-core)[userone@hostone project_dir]$ cd myrules
(insights-core)[userone@hostone project_dir/myrules]$

Create an empty file named __init__.py in the myrules directory to make it a python package. This allows you to use insights-run to run multiple components in the package. If you create subdirectories, add an empty __init__.py to each subdir that contains components you want to run.

(insights-core)[userone@hostone project_dir/myrules]$ touch __init__.py

Create a sample rule called hostname_rel.py in the myrules directory:

#!/usr/bin/env python
from insights.core.plugins import make_fail, make_pass, rule
from insights.parsers.hostname import Hostname
from insights.parsers.redhat_release import RedhatRelease

ERROR_KEY_1 = "RELEASE_IS_RHEL"
ERROR_KEY_2 = "RELEASE_IS_NOT_RECOGNIZED"
ERROR_KEY_3 = "RELEASE_CANNOT_BE_DETERMINED"

CONTENT = {
    ERROR_KEY_1: "This release is RHEL\nHostname: {{ hostname }}\nRelease: {{ release }}",
    ERROR_KEY_2: "This release is not RHEL\nHostname: {{ hostname }}\nRelease: {{ release }}",
    ERROR_KEY_3: "This release is not RHEL\nHostname: {{ hostname }}\nRelease: not present"
}


@rule(Hostname, [RedhatRelease])
def report(hostname, release):
    if release and release.is_rhel:
        return make_pass(ERROR_KEY_1,
                         hostname=hostname.fqdn,
                         release=release.version)
    elif release:
        return make_fail(ERROR_KEY_2,
                         hostname=hostname.fqdn,
                         release=release.raw)
    else:
        return make_fail(ERROR_KEY_3, hostname=hostname.fqdn)


if __name__ == "__main__":
    from insights import run
    run(report, print_summary=True)

Hint

You can download the code for hostname_rel.py

Now you can use Insights to evaluate your rule by running your rule script:

(insights-core)[userone@hostone project_dir/myrules]$ python hostname_rel.py

Depending upon the system you are using you will see several lines of output ending with your rule results that should look something like this:

---------
Progress:
---------
F

--------------
Rules Executed
--------------
[FAIL] __main__.report
---------------
This release is not RHEL
Hostname: hostone
Release: Fedora release 29 (Twenty Nine)


----------------------
Rule Execution Summary
----------------------
Missing Deps: 0
Passed      : 0
Fingerprint : 0
Failed      : 1
Metadata    : 0
Metadata Key: 0
Exceptions  : 0

Depending on your system you may also be able to make this file executable (chmod +x hostname_rel.py) and run like this: ./hostname_rel.py.

Now create a second rule named bash_version.py and include the following code:

from insights.core.plugins import make_pass, rule
from insights.parsers.installed_rpms import InstalledRpms

KEY = "BASH_VERSION"

CONTENT = "Bash RPM Version: {{ bash_version }}"


@rule(InstalledRpms)
def report(rpms):
    bash_ver = rpms.get_max('bash')
    return make_pass(KEY, bash_version=bash_ver)

Hint

You can download the code for bash_version.py

You’ll notice that this file does not include the #!/usr/bin/env python and the run(report…) lines. You can still run this rule easily from the command line using insights-run. Here’s how you can run each rule individually with insights-run:

(insights-core)[userone@hostone project_dir/myrules]$ insights-run -p bash_version
(insights-core)[userone@hostone project_dir/myrules]$ insights-run -p hostname_rel

Finally, you can run multiple rules at once. First, you can specify a comma-separated list of rules as the argument to -p:

(insights-core)[userone@hostone project_dir/myrules]$ insights-run -p bash_version,hostname_rel

The second way to do this is by taking advantage of the fact that all of your rules are in one package (remember the empty __init__.py file we created in the myrules dir to make it a python package). Just provide the name of the package to run all rules in the package:

(insights-core)[userone@hostone project_dir/myrules]$ cd ..
(insights-core)[userone@hostone project_dir]$ insights-run -p myrules

You can run one module in the package using either dot notation, myrules.bash_version, or simply using bash tab completion to specify the path name myrules/bash_version.py:

(insights-core)[userone@hostone project_dir]$ insights-run -p myrules.bash_version
(insights-core)[userone@hostone project_dir]$ insights-run -p myrules/bash_version.py

Tip

If you don’t see the results you expect when using insights-run, try adding the -t flag to show python exception tracebacks and look for exceptions in your rule code. You can expect to see some exceptions from parsers if the data is not accessible due to permissions or is missing from your system or the data source.

Evaluating Archive Files and Directories

By default Insights will collect information from your computer for evaluation of your rules. You can also evaluate a sosreport archive, an Insights archive, or a directory by specifying it as the last argument on the command line:

(insights-core)[userone@hostone project_dir/myrules]$ insights-run -p bash_version sosreport.tar.xz
(insights-core)[userone@hostone project_dir/myrules]$ insights-run -p bash_version sosreport_dir

For a more detailed description of how to develop your own rules see the Rule tutorial section in the Insights Core Tutorials.

Insights Core Contributor Setup

If you wish to contribute to the insights-core project you’ll need to create a fork in GitHub. See Fork a repo on Github for help on forking a repo. After you have created your fork continue with these steps to setup your development environment.

  1. Clone your fork:

    [userone@hostone project_dir]$ git clone git@github.com:your-user/insights-core.git
    
  2. Reference the original project as “upstream”:

    [userone@hostone project_dir]$ cd insights-core
    [userone@hostone project_dir/insights-core]$ git remote add upstream git@github.com:RedHatInsights/insights-core.git
    

At this point you can synchronize your fork with the upstream project using the following commands:

[userone@hostone project_dir/insights-core]$ git pull upstream master
[userone@hostone project_dir/insights-core]$ git push origin master

You should synchronize your fork with the upstream project regularly to ensure you have the most recent Insights Core code.

For more detailed steps on contributing to Insights Core, see CONTRIBUTING.md.

Insights API

Input Data Formats

Before any data reaches the rules framework, it obviously has to be generated. There are currently several input data formats that can be processed by insights-core:

SOSReports

SOSReport is a command-line tool for Red Hat Enterprise Linux (and other systems) that collects configuration and diagnostic information from the system; the archives it produces can be processed by insights-core.

Insights Archives

These archives have been designed from the ground up to fit the Insights use case -- that is, to be automatically uploaded on a daily basis. This means that the data contained in the archive is exactly what is required to effectively process all currently-developed rules -- and nothing more.

In addition, there are several features built in to the insights-client package (which is the tool that creates Insights archives) to better meet Red Hat customers’ security and privacy concerns.

Blacklists

A list of files or commands to never upload.

Filters

A set of simple strings used to filter files before adding them to the archive.

Dynamic Uploader Configuration

The client will download a configuration file from Red Hat (by default) every time it’s executed that will list out every file to collect and command to run, including filters to apply to each file, specified by rule plugins. The configuration is signed, which should be verified by the client before using it to collect data.

These features allow these archives to be processed quickly and more securely in the Insights production environment. On the other hand, the reduced data set narrows the scope of uses to be Insights-specific.

OCP 4 Archives

OCP 4 can generate diagnostic archives with a component called the insights-operator. They are automatically uploaded to Red Hat for analysis.

The openshift-must-gather CLI tool produces more comprehensive archives than the operator. insights-core recognizes them as well.

Execution Model

To build rules effectively, one should have a general idea of how data is executed against each rule. At a high level:

  • Each unit of input data is mapped to a symbolic name

  • Each parser that “subscribes” to that symbolic name is executed with the given content as part of a Context object.

  • The outputs of all parsers are sorted by host

  • For each host, every rule is invoked with the local context, populated by parsers from the same plugin, and the shared context, the combination of all shared parser outputs for this particular host.

  • The outputs of all rules are returned, along with various other bits of metadata, to the client, depending on what invoked the rules framework.
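The first two steps above — mapping each unit of input to a symbolic name and executing the parsers that subscribe to that name — can be modeled with a small registry. This is a toy model of the subscription idea only, not the insights-core implementation:

```python
# Toy model: parsers subscribe to a symbolic name; inputs keyed by
# that name are fed to every subscribed parser.

PARSERS = {}

def parser(symbolic_name):
    """Register the decorated function as a parser for one symbolic name."""
    def register(fn):
        PARSERS.setdefault(symbolic_name, []).append(fn)
        return fn
    return register

@parser("hostname")
def parse_hostname(content):
    # trivial "parser": strip the raw content into a fact
    return content.strip()

def process(inputs):
    """Run every parser subscribed to each input's symbolic name."""
    facts = {}
    for name, content in inputs.items():
        for fn in PARSERS.get(name, []):
            facts[name] = fn(content)
    return facts
```

In insights-core the registry, dependency resolution, and per-host grouping are handled by the framework; this sketch only shows why a parser is never invoked for data it does not subscribe to.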

Contexts

The term Context refers to the context of the information that is collected and evaluated by Insights. Examples of contexts are Host Context (directly collected from a host), Host Archive Context (uploaded Insights archive), SOS Report Context (uploaded SOSReport archive), and Docker Image Context (directly collected from a Docker image). The context determines which data sources are collected, and that in turn determines the hierarchy of parsers, combiners, and rules that are executed. Contexts enable different collection methods for each unique context, and also provide a default set of data sources that are common among one or more contexts. All available contexts are defined in the module insights.core.context.

Data Sources

Data Sources define how data processed by Insights is collected. Each data source is specific to a unique set of data. For example, one data source is defined for the contents of the file /etc/hosts and another for the output of the command /sbin/fdisk -l. The default data sources provide the primary data collection specifications for all contexts and are located in insights.specs.default.DefaultSpecs.

Each specific Context may override a default data source to provide a different collection specification. For instance, when the Insights client collects the fdisk -l information, it uses the default data source and executes the command on the target machine. This is the insights.core.context.HostContext. The Insights client stores that information as a file in an archive.

When the client uploads that information to the Red Hat Insights service it is processed in the insights.core.context.HostArchiveContext. Because the fdisk -l data is now in a file in the archive the data sources defined in insights.specs.insights_archive.InsightsArchiveSpecs are used instead. In this case Insights will collect the data from a file named insights_commands/fdisk_-l.

The command type datasources in default.py (simple_command and foreach_execute) only target HostContext. File based datasources fire for any context annotated with @fs_root (HostContext, InsightsArchiveContext, SosArchiveContext, DockerImageContext, etc.). That's why the *_archive.py files need a definition for every command, but only for the files that differ from default.py.

Also, the order in which spec modules load matters. Say we have two classes containing specs, A and B, where B loads after A and both have entries for hostname. Which one fires depends on the context each targets. For example, if A.hostname targets HostContext and B.hostname targets InsightsArchiveContext, then each fires for its own context. But if both A.hostname and B.hostname target HostContext, the datasource in the class that loads last wins for that context.
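The load-order behavior amounts to later registrations overwriting earlier ones for the same (spec name, context) pair, as this toy model shows. It is illustrative only; the real framework tracks registrations differently:

```python
# Toy model of "the class that loads last wins" for datasources.
# Keys are (spec_name, context); a later registration for the same
# key replaces the earlier one, while different contexts coexist.

registry = {}

def register(spec_name, context, datasource):
    registry[(spec_name, context)] = datasource

register("hostname", "HostContext", "A.hostname")
register("hostname", "InsightsArchiveContext", "B.hostname")  # different context: both kept
register("hostname", "HostContext", "C.hostname")             # same context: overwrites A
```

After these three registrations, HostContext resolves hostname to C.hostname, while InsightsArchiveContext still resolves it to B.hostname.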

While data sources are specific to the context, the purpose of the data source hierarchy is to provide a consistent set of input to Parsers. For this reason Parsers should generally depend upon insights.specs.Specs data sources.

This hierarchy allows a developer to override a particular datasource. For instance, if a developer found a bug in a sos_archive datasource, she could create her own class inheriting from insights.core.spec_factory.SpecSet, create the datasource in it, and have the datasource target SosArchiveContext. So long as the module containing her class loads after default.py, insights_archive.py, and sos_archive.py, her definition will win for that datasource when running under a SosArchiveContext.

Specification Factories

Data sources may utilize various methods called spec factories for collection of information. Collection from a file (/etc/hosts) and from a command (/sbin/fdisk -l) are two of the most common. These are implemented by the insights.core.spec_factory.simple_file() and insights.core.spec_factory.simple_command() spec factories respectively. All of the spec factories currently available for the creation of data sources are listed below.

insights.core.spec_factory.simple_file()

simple_file collects the contents of files, for example:

auditd_conf = simple_file("/etc/audit/auditd.conf")
audit_log = simple_file("/var/log/audit/audit.log")
insights.core.spec_factory.simple_command()

simple_command collects the output from a command, for example:

blkid = simple_command("/sbin/blkid -c /dev/null")
brctl_show = simple_command("/usr/sbin/brctl show")
insights.core.spec_factory.glob_file()

glob_file collects the contents of each file matching the glob pattern(s). glob_file can also take a list of patterns, as well as an ignore keyword argument, a regular expression telling it which of the matching files to throw out, for example:

httpd_conf = glob_file(["/etc/httpd/conf/httpd.conf", "/etc/httpd/conf.d/*.conf"])
ifcfg = glob_file("/etc/sysconfig/network-scripts/ifcfg-*")
rabbitmq_logs = glob_file("/var/log/rabbitmq/rabbit@*.log", ignore=".*rabbit@.*(?<!-sasl).log$")
insights.core.spec_factory.first_file()

first_file collects the contents of the first readable file from a list of files, for example:

meminfo = first_file(["/proc/meminfo", "/meminfo"])
postgresql_conf = first_file([
                             "/var/lib/pgsql/data/postgresql.conf",
                             "/opt/rh/postgresql92/root/var/lib/pgsql/data/postgresql.conf",
                             "database/postgresql.conf"
                             ])
insights.core.spec_factory.listdir()

listdir collects a simple directory listing of all the files and directories in a path, for example:

block_devices = listdir("/sys/block")
ethernet_interfaces = listdir("/sys/class/net", context=HostContext)
insights.core.spec_factory.foreach_execute()

foreach_execute executes a command for each element in provider. Provider is the output of a different datasource that returns a list of single elements or a list of tuples. This spec factory is typically utilized in combination with a simple_file, simple_command or listdir spec factory to generate the input elements, for example:

ceph_socket_files = listdir("/var/run/ceph/ceph-*.*.asok", context=HostContext)
ceph_config_show = foreach_execute(ceph_socket_files, "/usr/bin/ceph daemon %s config show")
ethernet_interfaces = listdir("/sys/class/net", context=HostContext)
ethtool = foreach_execute(ethernet_interfaces, "/sbin/ethtool %s")
insights.core.spec_factory.foreach_collect()

foreach_collect substitutes each element in provider into path and collects the files at the resulting paths. This spec factory is typically utilized in combination with a simple_command or listdir spec factory to generate the input elements, for example:

httpd_pid = simple_command("/usr/bin/pgrep -o httpd")
httpd_limits = foreach_collect(httpd_pid, "/proc/%s/limits")
block_devices = listdir("/sys/block")
scheduler = foreach_collect(block_devices, "/sys/block/%s/queue/scheduler")
insights.core.spec_factory.first_of()

first_of returns the first of a list of dependencies that exists. At least one must be present, or this component won’t fire. This spec factory is typically utilized in combination with other spec factories to generate the input list, for example:

postgresql_log = first_of([glob_file("/var/lib/pgsql/data/pg_log/postgresql-*.log"),
                           glob_file("/opt/rh/postgresql92/root/var/lib/pgsql/data/pg_log/postgresql-*.log"),
                           glob_file("/database/postgresql-*.log")])
systemid = first_of([simple_file("/etc/sysconfig/rhn/systemid"),
                     simple_file("/conf/rhn/sysconfig/rhn/systemid")])

Custom Data Source

If greater control over data source content is required than provided by the existing specification factories, it is possible to write a custom data source. This is accomplished by decorating a function with the @datasource decorator and returning a list type. Here’s an example:

import os

from insights.core.context import HostContext
from insights.core.plugins import datasource


@datasource(HostContext)
def block(broker):
    # list block devices under /dev, skipping ram, dm-, and loop devices
    remove = (".", "ram", "dm-", "loop")
    tmp = "/dev/%s"
    return [(tmp % f) for f in os.listdir("/sys/block") if not f.startswith(remove)]

Custom datasources can also return insights.core.spec_factory.CommandOutputProvider, insights.core.spec_factory.TextFileProvider, or insights.core.spec_factory.RawFileProvider instances.

Parsers

A Parser takes the raw content of a particular Data Source such as file contents or command output, parses it, and then provides a small API for plugins to query. The parsed data and computed facts available via the API are also serialized to be used in downstream processes.

Choosing a Module

Currently all shared parsers are defined in the package insights.parsers. From there, the parsers are separated into modules based on the command or file that the parser consumes. Commands or files that are logically grouped together can go in the same module, e.g. the ethtool based commands and ps based commands.

Defining Parsers

There are a couple things that make a function a parser:

  1. The function is decorated with the @parser decorator

  2. The function can take multiple parameters, the first is always expected to be of type Context. Any additional parameters will normally represent a component with a sole purpose of determining if the parser will fire.

Registration and Symbolic Names

Parsers are registered with the framework by use of the @parser decorator. This decorator will add the function object to the list of parsers associated with the given data source name. Without the decorator, the parser will never be found by the framework.

Data source names represent all the possible file content types that can be analyzed by parsers. The rules framework uses the data source name mapping defined in insights.specs.Specs to map a symbolic name to a command, a single file, or multiple files. More detail on this mapping is provided in the section Specification Factories.

The same mapping is used to create the uploader.json file consumed by Insights Client to collect data from customer systems. The Client RPM is developed and distributed with Red Hat Enterprise Linux as part of the base distribution. Updates to the Client RPM occur less frequently than to the Insights Core application. Additionally customers may not update the Client RPM on their systems. So developers need to check both the Insights Core and the Client applications to determine what information is available for processing in Insights.

class insights.core.plugins.parser(*args, **kwargs)[source]

Decorates a component responsible for parsing the output of a datasource. @parser should accept multiple arguments, the first will ALWAYS be the datasource the parser component should handle. Any subsequent argument will be a component used to determine if the parser should fire. @parser should only decorate subclasses of insights.core.Parser.

Warning

If a Parser component handles a datasource that returns a list, a Parser instance will be created for each element of the list. Combiners or rules that depend on the Parser will be passed the list of instances and not a single parser instance. By default, if any parser in the list succeeds, those parsers are passed on to dependents, even if others fail. If all parsers should succeed or fail together, pass continue_on_error=False.

Parser Contexts

Each parser may take multiple parameters. Order is important: the first parameter is always expected to be of type Context. All information available to a parser is found in the insights.core.context.Context object. Please refer to the Context API documentation in insights.core.context for more details. Any additional parameters will not be of type Context but will normally represent a component whose sole purpose is to determine whether the parser will fire.

Parser Outputs

Parsers can return any value, as long as it’s serializable.

Parser developers are encouraged to wrap output data in a Parser class. This enables plugin developers to query for higher-level facts about a particular file, while also exporting those facts for use outside of Insights plugins.

class insights.core.Parser(context)[source]

Base class designed to be subclassed by parsers.

The framework will construct your object with a Context that will provide at least the content as an iterable of lines and the path that the content was retrieved from.

Facts should be exposed as instance members where applicable. For example:

self.fact = "123"

Examples

>>> class MyParser(Parser):
...     def parse_content(self, content):
...         self.facts = []
...         for line in content:
...             if 'fact' in line:
...                 self.facts.append(line)
>>> content = '''
... # Comment line
... fact=fact 1
... fact=fact 2
... fact=fact 3
... '''.strip()
>>> my_parser = MyParser(context_wrap(content, path='/etc/path_to_content/content.conf'))
>>> my_parser.facts
['fact=fact 1', 'fact=fact 2', 'fact=fact 3']
>>> my_parser.file_path
'/etc/path_to_content/content.conf'
>>> my_parser.file_name
'content.conf'
file_name = None

Filename portion of the input file.

Type

str

file_path = None

Full context path of the input file.

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.

Rule Plugins

The purpose of Rule plugins is to identify a particular problem in a given system based on certain facts about that system. Each Rule plugin consists of a module with:

  • One @rule-decorated function

  • An ERROR_KEY member (recommended)

  • A docstring for the module that includes
    • A summary of the plugin

    • A longer description of what the plugin identifies

    • Links to Red Hat solutions

class insights.core.plugins.rule(*args, **kwargs)[source]

Decorator for components that encapsulate some logic that depends on the data model of a system. Rules can depend on datasource instances, parser instances, combiner instances, or anything else.

For example:

@rule(SshDConfig, InstalledRpms, [ChkConfig, UnitFiles], optional=[IPTables, IpAddr])
def report(sshd_config, installed_rpms, chk_config, unit_files, ip_tables, ip_addr):
    # ...
    # ... some complicated logic
    # ...
    bash = installed_rpms.newest("bash")
    return make_pass("BASH", bash=bash)

Notice that the arguments to report correspond to the dependencies in the @rule decorator and are in the same order.

Parameters to the decorator have these forms:

Criteria        Example Decorator Arguments     Description

Required        SshDConfig, InstalledRpms       Regular arguments

At Least One    [ChkConfig, UnitFiles]          An argument as a list

Optional        optional=[IPTables, IpAddr]     A list following optional=

If a parameter is required, the value provided for it is guaranteed not to be None. In the example above, sshd_config and installed_rpms will not be None.

At least one of the arguments corresponding to an “at least one” list will not be None. In the example, either or both of chk_config and unit_files will not be None.

Any or all arguments for optional parameters may be None.
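
These None-guarantees can be sketched with a plain function that mirrors the report example above. This is an illustrative sketch only: parser objects are replaced by plain dicts and sets, and all keys are invented.

```python
# Sketch of how a rule body guards its parameters. The engine passes
# None for unmet "at least one" and optional dependencies; required
# dependencies are guaranteed to be non-None.
def report(sshd_config, installed_rpms, chk_config, unit_files,
           ip_tables=None, ip_addr=None):
    # Required parameters: safe to use directly.
    facts = {"rpm_count": len(installed_rpms)}
    # "At least one": pick whichever service-manager data is present.
    services = chk_config if chk_config is not None else unit_files
    facts["service_source"] = "chkconfig" if chk_config is not None else "unit_files"
    facts["service_count"] = len(services)
    # Optional parameters: may be None and must always be checked.
    if ip_tables is not None:
        facts["has_iptables"] = True
    return facts

facts = report({"Port": "22"}, {"bash": "4.2"}, None,
               {"sshd.service": "enabled"})
```

Here chk_config is None, so the rule falls back to unit_files, and the absent optional ip_tables is simply skipped.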

The following keyword arguments may be passed to the decorator:

Keyword Arguments
  • requires (list) -- a list of components that all components decorated with this type will require. Instead of using requires=[...], just pass dependencies as variable arguments to @rule as in the example above.

  • optional (list) -- a list of components that all components decorated with this type will implicitly depend on optionally. Additional components passed as optional to the decorator will be appended to this list.

  • metadata (dict) -- an arbitrary dictionary of information to associate with the component you’re decorating. It can be retrieved with get_metadata.

  • tags (list) -- a list of strings that categorize the component. Useful for formatting output or sifting through results for components you care about.

  • group -- GROUPS.single or GROUPS.cluster. Used to organize components into “groups” that run together with insights.core.dr.run().

  • cluster (bool) -- if True will put the component into the GROUPS.cluster group. Defaults to False. Overrides group if True.

  • content (string or dict) -- a jinja2 template or dictionary of jinja2 templates. The Response subclasses rules can return are dictionaries. make_pass, make_fail, and make_response all accept first a key and then a list of arbitrary keyword arguments. If content is a dictionary, the key is used to look up the template that the rest of the keyword arguments will be interpolated into. If content is a string, then it is used for all return values of the rule. If content isn’t defined but a CONTENT variable is declared in the module, it will be used for every rule in the module; it likewise can be a string or a dictionary of templates.

  • links (dict) -- a dictionary with strings as keys and lists of urls as values. The keys categorize the urls, e.g. “kcs” for kcs urls and “bugzilla” for bugzilla urls.

Rule Parameters

The parameters for each rule function mirror the parser or parsers identified in the @rule decorator. This is best demonstrated by an example:

1  @rule(InstalledRpms, Lsof, Netstat)
2  def heartburn(installed_rpms, lsof, netstat):
3      # Rule implementation

Line 1 of this example indicates that the rule depends on 3 parsers, InstalledRpms, Lsof, and Netstat. The signature for the rule function on line 2 contains the parameters that correspond respectively to the parsers specified in the decorator. All three parsers are required so if any are not present in the input data, then the rule will not be called. This also means that all three input parameters will have some value corresponding to the parser objects. It is up to the rule to evaluate the object attributes and methods to determine if the criteria is met to trigger the rule.
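
A minimal body for the heartburn rule above might look like the following sketch. Plain containers stand in for the real InstalledRpms, Lsof, and Netstat parser objects, and the package name and keys are invented for illustration.

```python
# Illustrative body for the heartburn rule: each parameter carries the
# facts parsed by the corresponding parser, and the rule inspects them
# to decide whether its criteria are met.
def heartburn(installed_rpms, lsof, netstat):
    vulnerable = "made-up-pkg" in installed_rpms           # fact from InstalledRpms
    has_open_files = any(e["COMMAND"] == "httpd" for e in lsof)
    listening = "0.0.0.0:80" in netstat                    # fact from Netstat
    if vulnerable and has_open_files and listening:
        return {"error_key": "HEARTBURN", "package": "made-up-pkg"}
    return None  # criteria not met: the rule produces no response

hit = heartburn({"made-up-pkg", "bash"},
                [{"COMMAND": "httpd"}],
                {"0.0.0.0:80"})
```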

Rule Output

Rules can return multiple types of responses. If a rule is detecting some problem and finds it, it should return make_fail. If it is detecting a problem and is sure the problem doesn’t exist, it should return make_pass. If it wants to return information not associated with a failure or success, it should return make_info.

To return a rule “hit”, return the result of make_fail:

class insights.core.plugins.make_fail(key, **kwargs)[source]

Returned by a rule to signal that its conditions have been met.

Example:

# completely made up package
buggy = InstalledRpms.from_package("bash-3.4.23-1.el7")

@rule(InstalledRpms)
def report(installed_rpms):
    bash = installed_rpms.newest("bash")
    if bash == buggy:
        return make_fail("BASH_BUG_123", bash=bash)
    return make_pass("BASH", bash=bash)

To return a rule success, return the result of make_pass:

class insights.core.plugins.make_pass(key, **kwargs)[source]

Returned by a rule to signal that its conditions explicitly have not been met. In other words, the rule has all of the information it needs to determine that the system it’s analyzing is not in the state the rule was meant to catch.

An example rule might check whether a system is vulnerable to a well defined exploit or has a bug in a specific version of a package. If it can say for sure “the system does not have this exploit” or “the system does not have the buggy version of the package installed”, then it should return an instance of make_pass.

Example:

# completely made up package
buggy = InstalledRpms.from_package("bash-3.4.23-1.el7")

@rule(InstalledRpms)
def report(installed_rpms):
    bash = installed_rpms.newest("bash")
    if bash == buggy:
        return make_fail("BASH_BUG_123", bash=bash)
    return make_pass("BASH", bash=bash)

To return system info, return the result of make_info:

class insights.core.plugins.make_info(key, **kwargs)[source]

Returned by a rule to surface information about a system.

Example:

@rule(InstalledRpms)
def report(rpms):
   bash = rpms.newest("bash")
   return make_info("BASH_VERSION", bash=bash.nvra)

Testing

Since a plugin is a fairly simple set of python functions, individual functions can be easily unit tested. Unit tests are required for all plugins and can be found in the rules/tests directory of the source. Unit tests are written in the usual xUnit style using the unittest module, with some helpers from the pytest framework; pytest is the test runner.

To run all unit tests with pytest:

py.test

To run a single unit test:

py.test path/test_plugin_name.py::TestCaseClass::test_method

To get test results with coverage report:

py.test --cov=plugin_package

Feature Deprecation

Parsers and other parts of the framework go through periodic revisions and updates, and sometimes previously used features will be deprecated. This is a three step process:

  1. An issue to deprecate the outdated feature is raised in GitHub. This allows discussion of the plans to deprecate this feature and the proposed replacement functionality.

  2. The outdated function, method or class is marked as deprecated. Code using this feature now generates a warning when the tests are run, but otherwise works. At this stage anyone receiving a warning about pending deprecation SHOULD change over to using the new functionality or at least not using the deprecated version. The deprecation message MUST include information about how to replace the deprecated function.

  3. Once sufficient time has elapsed, the outdated feature is removed. The py.test tests will fail with a fatal error, and any code checked in that uses deprecated features will not be able to be merged because of the tests failing. Anyone receiving a warning about deprecation MUST fix their code so that it no longer warns of deprecation.

The usual time between each step should be two minor versions of the Insights core.

To deprecate code, call the insights.util.deprecated() function from within the code that will be eventually removed, in the following manner:

Functions

from insights.util import deprecated

def old_feature(arguments):
    deprecated(old_feature, "Use the new_feature() function instead")
    ...

Class methods

from insights.util import deprecated

class ThingParser(Parser):
    ...

    def old_method(self, *args, **kwargs):
        deprecated(self.old_method, "Use the new_method() method instead")
        self.new_method(*args, **kwargs)
    ...

Class

from insights.util import deprecated

class ThingParser(Parser):
    def __init__(self, *args, **kwargs):
        deprecated(ThingParser, "Use the new_feature() function instead")
        super(ThingParser, self).__init__(*args, **kwargs)
    ...

The insights.util.deprecated() function takes two arguments:

  • The function or method being deprecated. This is used to tell the user where the deprecated code is. Classes cannot be directly deprecated, and should instead emit a deprecation message in their __init__ method.

  • The solution to using this deprecated code. This is a descriptive string that should tell anyone using the deprecated function what to do in the future. Examples might be:

    • For a replaced parser: “Please use the NewParser parser in the new_parser module.”

    • For a specific method being replaced by a general mechanism: “Please use the search method with the arguments state="LISTEN".”

Components and Exceptions

When parsers parse input data, there are three likely outcomes:

  1. All data is parsed as expected

  2. Data is unparsable due to errors in the data and nothing can be retrieved by the parser

  3. Data is unparsable due to errors in the data but some useful information can be retrieved

    1. The useful information is from the parsable portion of the data

    2. The useful information is the fact that an error is present in the data

In each of these cases the parser should produce a response that is predictable to the insights framework and should produce output that is deterministic in terms of being processed by the rules.

Case 1 All Data is Parsed as Expected

In case 1 the parser should store the information in a representation that is consistent with the input data. For example: log data should generally be stored in a python list, configuration data in a python dictionary, and discrete data items as attributes or properties of the parser.

Exceptions that are raised during parsing of the data are not anticipated by the parser, and if raised should be presumed to be potential errors in either the collection or the parsing of the data. These need to be logged and investigated.
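
These storage conventions can be sketched with a standalone parser-like class. No insights imports are used and all names are illustrative.

```python
# Case 1 sketch: raw lines kept as a list (log-style data), key=value
# pairs stored in a dict (config-style data), and a discrete fact
# exposed as a plain attribute.
class SampleParser:
    def parse_content(self, content):
        self.lines = list(content)               # log-style data
        self.data = {}                           # config-style data
        for line in content:
            if "=" in line:
                k, v = line.split("=", 1)
                self.data[k.strip()] = v.strip()
        self.entry_count = len(self.data)        # discrete fact

sp = SampleParser()
sp.parse_content(["# comment", "mode=fast", "level=3"])
```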

Case 2 Data is Unparsable

In case 2 the parser is expecting to receive parsable data and instead receives data that is corrupt, or not present as expected, in a form that prevents the parser from having a substantial level of confidence in the data. The parser should provide logic to identify known issues in the data (such as error messages indicating the data was not present) and attempt to catch, via standard python mechanisms, issues that could reasonably be expected (conversion of a character to a number, missing values, etc.). When a parser makes the determination that the data is not usable, then it should explicitly raise an insights.parsers.ParseException and provide as much useful information as is possible to help the Insights team and parser developer understand what happened. If any exception is expected to be raised it should be caught, and the insights.parsers.ParseException raised in its place. No data will be made available to other parsers, combiners or rules in this case. It will be as if the data was not present in the input.
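
A standalone sketch of this pattern follows. A local ParseException class stands in for insights.parsers.ParseException so the example runs by itself; the "Error:" marker and key=value format are invented for illustration.

```python
class ParseException(Exception):
    """Stand-in for insights.parsers.ParseException."""

def parse_content(content):
    data = {}
    for line in content:
        # Known bad-data signature: raise explicitly with context.
        if line.startswith("Error:"):
            raise ParseException("input reported an error: %r" % line)
        try:
            key, value = line.split("=", 1)
        except ValueError:
            # Anticipated failure mode: re-raise as ParseException so
            # the engine removes this parser from the dependency tree.
            raise ParseException("unparsable line: %r" % line)
        data[key.strip()] = value.strip()
    return data
```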

Case 3 Unparsable Data Provides Useful Information

Case 3a Parsable Data having Some Errors

In case 3 there are two subcases. The first subcase (a), is that the parser is able to detect errors in the input data but is also able to successfully parse at least some portion of the data. In this subcase the parser must do the following:

  1. Document how partial data will be handled in the module or class documentation so that a rule developer will understand how to determine what data is valid and what data is not valid.

  2. Do not leave any attributes or properties in an unknown state, meaning that all attributes should be initialized to known values and if unparsable they should either be removed or be reset to known values as documented in step 1.

  3. Provide a specific attribute/property to allow rules to determine the quality of the data, rather than, for example, requiring the rule to check every attribute for None.

No exception will be explicitly raised by the parser in this case.
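
A standalone sketch of such a parser follows (illustrative names; no insights imports): it parses what it can, records what it cannot, and exposes a quality attribute as described in step 3.

```python
# Case 3a sketch: all attributes are always initialized, and
# `unparsable_lines` lets rules judge data quality without probing
# every attribute for None.
class PartialParser:
    def parse_content(self, content):
        self.data = {}                 # always initialized: no unknown state
        self.unparsable_lines = []     # quality indicator for rules
        for line in content:
            parts = line.split("=", 1)
            if len(parts) == 2 and parts[0].strip():
                self.data[parts[0].strip()] = parts[1].strip()
            else:
                self.unparsable_lines.append(line)

pp = PartialParser()
pp.parse_content(["good=1", "###bad###", "also=2"])
```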

Case 3b Parsing Data to Find Errors (“Dirty Parser”)

In case 3 (b) the parser is specifically written to identify errors in the data. This is the desired case for known errors/vulnerabilities. For example for a known issue with RPM data one parser will parse the data to return valid information from the input data (“clean parser”), and a second parser will be responsible for identifying any exceptions in the data (“dirty parser”). This allows rules that don’t care about the exceptions to rely on only the first parser, and those rules will not run if valid data is not present. If the dirty parser identifies errors in the data then it will save information regarding the errors for use by rules. If no errors are found in the data then the dirty parser will raise insights.parsers.SkipException to indicate to the engine that it should be removed from the dependency hierarchy.
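
The clean/dirty split can be sketched as two standalone functions. SkipException here is a local stand-in for insights.parsers.SkipException, and the "error:" marker is invented for illustration.

```python
class SkipException(Exception):
    """Stand-in for insights.parsers.SkipException."""

def clean_parse(content):
    # "Clean" parser: return only the valid entries.
    return [line for line in content if not line.startswith("error:")]

def dirty_parse(content):
    # "Dirty" parser: keep only the error lines; if none are found,
    # raise SkipException so the engine drops this parser (and the
    # rules that depend on it) from the dependency hierarchy.
    errors = [line for line in content if line.startswith("error:")]
    if not errors:
        raise SkipException("no errors found in input")
    return errors
```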

Other Exceptions from Parsers

Parsers should not explicitly raise any exceptions that would be raised in a rule-calling context. Problems that could be detected in parse_content should be detected there and not pushed out to the rules. Parser methods and functions should however be prepared to handle common exceptional cases (such as an invalid argument type) via standard python exception handling processes. That is, try something and handle the exception where you can. Parsers probably shouldn’t eagerly check types since there are many cases where strict types aren’t important and such checks may limit expressiveness and flexibility.

Parsers should not use the assert statement in place of error handling code. Asserts are for debugging purposes only.

SkipComponent and SkipException

Any component may raise insights.SkipComponent to signal to the engine that nothing is wrong but that the component should be taken out of dependency resolution. This is useful if a component’s dependencies are met but it’s still unable to produce a meaningful result. insights.parsers.SkipException is a specialization of this for the dirty parser use case above, but it’s treated the same as SkipComponent.

Exception Recognition by the Insights Engine

Exceptions that are raised by parsers and combiners will be collected by the engine in order to determine whether to remove the component from the dependency hierarchy, for data metrics, and to help identify issues with the parsing code or with the data. Specific use of insights.parsers.ParseException, insights.parsers.SkipException, and insights.SkipComponent will make it much easier for the engine to identify and quickly deal with known conditions versus unanticipated conditions (i.e., other exceptions being raised) which could indicate errors in the parsing code, errors in data collection, or data errors.

API Documentation

insights.core

class insights.core.AttributeDict(*args, **kwargs)[source]

Bases: dict

Class that allows each item in a dict to be accessed as an attribute.

Warning

Deprecated class, please set attributes explicitly.

Examples

>>> data = {
...     "fact1": "fact 1",
...     "fact2": "fact 2",
...     "fact3": "fact 3"
... }
>>> d_obj = AttributeDict(data)
>>> d_obj
{'fact1': 'fact 1', 'fact2': 'fact 2', 'fact3': 'fact 3'}
>>> d_obj['fact1']
'fact 1'
>>> d_obj.get('fact1')
'fact 1'
>>> d_obj.fact1
'fact 1'
>>> 'fact2' in d_obj
True
>>> d_obj.get('fact3', default='no fact')
'fact 3'
>>> d_obj.get('fact4', default='no fact')
'no fact'
class insights.core.CommandParser(context, extra_bad_lines=[])[source]

Bases: insights.core.Parser

This class checks output from the command defined in the spec. If context.content contains a single line and that line is included in the bad_lines list, a ContentException is raised.

static validate_lines(results, bad_lines)[source]

If results contains a single line and that line is included in the bad_lines list, this function returns False. If no bad line is found the function returns True

Parameters

results (str) -- The results string of the output from the command defined by the command spec.

Returns

True for no bad lines or False for bad line found.

Return type

(Boolean)
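
The check can be sketched as follows. Matching here is assumed to be a case-insensitive substring test against each bad line; the real insights.core.CommandParser.validate_lines may differ in detail.

```python
# Standalone sketch of the single-bad-line check described above:
# a result consisting of exactly one line that matches any entry in
# bad_lines is rejected.
def validate_lines(results, bad_lines):
    if len(results) == 1:
        line = results[0].lower()
        if any(bad.lower() in line for bad in bad_lines):
            return False  # bad line found
    return True  # no bad lines

ok = validate_lines(["total 20", "file_a"], ["no such file"])
bad = validate_lines(["bash: foo: command not found"], ["command not found"])
```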

class insights.core.ConfigCombiner(confs, main_file, include_finder)[source]

Bases: insights.core.ConfigComponent

Base Insights component class for Combiners of configuration files with include directives for supplementary configuration files. httpd and nginx are examples.

find_matches(confs, pattern)[source]
class insights.core.ConfigComponent[source]

Bases: object

property directives
find(*queries, **kwargs)[source]

Finds matching results anywhere in the configuration

find_all(*queries, **kwargs)

Finds matching results anywhere in the configuration

property sections
select(*queries, **kwargs)[source]

Given a list of queries, executes those queries against the set of Nodes. A Node has three primary attributes: name (str), attrs ([str|int]), and children ([Node]).

Nodes also have a value attribute that is either the first attribute (in the case of simple directives that only have one), or the string representation of all attributes joined by a single space.

Each positional argument to select represents a query against the name and/or attributes of the corresponding level of the configuration tree. The first argument queries root nodes, the second argument queries children of the root nodes, etc.

An individual query is either a single value or a tuple. A single value queries the name of a Node. A tuple queries the name and the attrs.

So: select(name_predicate) or select((name_predicate, attrs_predicate))

In general, select(pred1, pred2, pred3, …)

If a predicate is a simple value (string or int), an exact match is required for names, and an exact match of any attribute is required for attributes.

Examples: select("Directory") queries for all root nodes named Directory.

select("Directory", "Options") queries for all root nodes named Directory that contain at least one child node named Options. Notice the argument positions: Directory is in position 1, and Options is in position 2.

select(("Directory", "/")) queries for all root nodes named Directory that contain an attribute exactly matching "/". Notice this is one argument to select: a 2-tuple with predicates for name and attrs.

If you are only interested in attributes, just pass None for the name predicate in the tuple: select((None, "/")) will return all root nodes with at least one attribute of "/".

In addition to exact matches, the elements of a query can be functions that accept the value corresponding to their position in the query. A handful of useful functions and boolean operators between them are provided.

select(startswith("Dir")) queries for all root nodes with names starting with "Dir".

select(~startswith("Dir")) queries for all root nodes with names not starting with "Dir".

select(startswith("Dir") | startswith("Ali")) queries for all root nodes with names starting with "Dir" or "Ali". The return of | is a single callable passed in the first argument position of select.

select(~startswith("Dir") & ~startswith("Ali")) queries for all root nodes with names starting with neither "Dir" nor "Ali".

If a function is in an attribute position, it is considered True if it returns True for any attribute.

For example, select((None, 80)) often will return the list of one Node [Listen 80]

select(("Directory", startswith("/var"))) will return all root nodes named Directory that also have an attribute starting with "/var"

If you know that your selection will only return one element, or you only want the first or last result of the query, pass one=first or one=last.

select(("Directory", startswith("/")), one=last) will return the single root node for the last Directory entry starting with "/"

If instead of the root nodes that match you want the child nodes that caused the match, pass roots=False.

node = select(("Directory", "/var/www/html"), "Options", one=last, roots=False) might return the Options node if the Directory for "/var/www/html" was defined and contained an Options Directive. You could then access the attributes with node.attrs. If the query didn’t match anything, it would have returned None.

If you want to slide the query down the branches of the config, pass deep=True to select. That allows you to do conf.select("Directory", deep=True, roots=False) and get back all Directory nodes regardless of nesting depth.

conf.select() returns everything.

Available predicates are: & (infix boolean and), | (infix boolean or), ~ (prefix boolean not)

For ints or strings: eq (==), e.g. conf.select("Directory", ("StartServers", eq(4))); ge (>=), e.g. conf.select("Directory", ("StartServers", ge(4))); gt (>); le (<=); lt (<)

For strings: contains, endswith, startswith

class insights.core.ConfigParser(context)[source]

Bases: insights.core.Parser, insights.core.ConfigComponent

Base Insights component class for Parsers of configuration files.

Raises

SkipException -- When input content is empty.

lineat(pos)[source]
parse_content(content)[source]

This method must be implemented by classes based on this class.

parse_doc(content)[source]
class insights.core.FileListing(context)[source]

Bases: insights.core.Parser

Reads a series of concatenated directory listings and turns them into a dictionary of entities by name. Stores all the information for each directory entry for every entry that can be parsed, containing:

  • type (one of [bcdlps-])

  • permission string including ACL character

  • number of links

  • owner and group (as given in the listing)

  • size, or major and minor number for block and character devices

  • date (in the format given in the listing)

  • name

  • name of linked file, if a symlink

In addition, the raw line is always stored, even if the line doesn’t look like a directory entry.

Also provides a number of other conveniences, such as:

  • lists of regular and special files and subdirectory names for each directory, in the order found in the listing

  • total blocks allocated to all the entities in this directory

Note

For listings that only contain one directory, ls does not output the directory name. The directory is reverse engineered from the path given to the parser by Insights - this assumes the translation of spaces to underscores and ‘/’ to ‘.’ in paths. For example, ls -l /var/www/html will be translated to ls_-l_.var.www.html. The reverse translation will make mistakes, for example in translating .etc.yum.repos.d to /etc/yum/repos/d. Use caution in checking the paths when requesting single directories.

Parses the SELinux information if present in the listing. SELinux directory listings contain:

  • the type of file

  • the permissions block

  • the owner and group as given in the directory listing

  • the SELinux user, role, type and MLS

  • the name, and link destination if it’s a symlink

Sample input data looks like this:

/example_dir:
total 20
dr-xr-xr-x. 3 0 0 4096 Mar 4 16:19 .
-rw-r--r--. 1 0 0 123891 Aug 25 2015 config-3.10.0-229.14.1.el7.x86_64
lrwxrwxrwx. 1 0 0 11 Aug 4 2014 menu.lst -> ./grub.conf
brw-rw----. 1 0 6 253, 10 Aug 4 16:56 dm-10
crw-------. 1 0 0 10, 236 Jul 25 10:00 control

Examples

>>> file_listing
<insights.core.FileListing at 0x7f5319407450>
>>> '/example_dir' in file_listing
True
>>> file_listing.dir_contains('/example_dir', 'menu.lst')
True
>>> dir = file_listing.listing_of('/example_dir')
>>> dir['.']['type']
'd'
>>> dir['config-3.10.0-229.14.1.el7.x86_64']['size']
123891
>>> dir['dm-10']['major']
253
>>> dir['menu.lst']['link']
'./grub.conf'
dir_contains(directory, name)[source]

Does this directory contain this entry name?

dir_entry(directory, name)[source]

The parsed data for the given entry name in the given directory.

dirs_of(directory)[source]

The list of subdirectories in the given directory.

files_of(directory)[source]

The list of non-special files (i.e. not block or character files) in the given directory.

listing_of(directory)[source]

The listing of this directory, in a dictionary by entry name. All entries contain the original line as is in the ‘raw_entry’ key. Entries that can be parsed then have fields as described in the class description above.

parse_content(content)[source]

Called automatically to process the directory listing(s) contained in the content.

path_entry(path)[source]

The parsed data given a path, which is separated into its directory and entry name.

specials_of(directory)[source]

The list of block and character special files in the given directory.

total_of(directory)[source]

The total blocks of storage consumed by entries in this directory.

class insights.core.IniConfigFile(context)[source]

Bases: insights.core.ConfigParser

A class specifically for reading configuration files in ‘ini’ format.

The input file format supported by this class is:

[section 1]
key = value
; comment
# comment
[section 2]
key with spaces = value string
[section 3]
# Must implement parse_content in child class
# and pass allow_no_value=True to parent class
# to enable keys with no values
key_with_no_value

References

See Python RawConfigParser documentation for more information https://docs.python.org/2/library/configparser.html#rawconfigparser-objects

Examples

>>> class MyConfig(IniConfigFile):
...     pass
>>> content = '''
... [defaults]
... admin_token = ADMIN
... [program opts]
... memsize = 1024
... delay = 1.5
... [logging]
... log = true
... logging level = verbose
... '''.strip()
>>> my_config = MyConfig(context_wrap(content, path='/etc/myconfig.conf'))
>>> 'program opts' in my_config
True
>>> my_config.sections()
['program opts', 'logging']
>>> my_config.defaults()
{'admin_token': 'ADMIN'}
>>> my_config.items('program opts')
{'memsize': 1024, 'delay': 1.5}
>>> my_config.get('logging', 'logging level')
'verbose'
>>> my_config.getint('program opts', 'memsize')
1024
>>> my_config.getfloat('program opts', 'delay')
1.5
>>> my_config.getboolean('logging', 'log')
True
>>> my_config.has_option('logging', 'log')
True
defaults()[source]

dict: Return a dict of key/value pairs in the [default] section.

get(section, key)[source]

value: Get value for section and key.

getboolean(section, key)[source]

boolean: Get boolean value for section and key.

getfloat(section, key)[source]

float: Get float value for section and key.

getint(section, key)[source]

int: Get int value for section and key.

has_option(section, key)[source]

boolean: Returns True if section is present and has the option key.

items(section)[source]

dict: Return a dictionary of key/value pairs for section.

parse_content(content, allow_no_value=False)[source]

Parses content of the config file.

In a child class, overload and call super to set the flag allow_no_value and allow keys with no value in the config file:

def parse_content(self, content):
    super(YourClass, self).parse_content(content,
                                         allow_no_value=True)
parse_doc(content)[source]
sections()[source]

list: Return a list of section names.

class insights.core.JSONParser(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

A parser class that reads JSON files. Base your own parser on this.

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.core.LegacyItemAccess[source]

Bases: object

Mixin class to provide legacy access to self.data attribute.

Provides expected passthru functionality for classes that still use self.data as the primary data structure for all parsed information. Use this as a mixin on parsers that expect these methods to be present as they were previously.

Examples

>>> class MyParser(LegacyItemAccess, Parser):
...     def parse_content(self, content):
...         self.data = {}
...         for line in content:
...             if 'fact' in line:
...                 k, v = line.split('=')
...                 self.data[k.strip()] = v.strip()
>>> content = '''
... # Comment line
... fact1=fact 1
... fact2=fact 2
... fact3=fact 3
... '''.strip()
>>> my_parser = MyParser(context_wrap(content, path='/etc/path_to_content/content.conf'))
>>> my_parser.data
{'fact1': 'fact 1', 'fact2': 'fact 2', 'fact3': 'fact 3'}
>>> my_parser.file_path
'/etc/path_to_content/content.conf'
>>> my_parser.file_name
'content.conf'
>>> my_parser['fact1']
'fact 1'
>>> 'fact2' in my_parser
True
>>> my_parser.get('fact3', default='no fact')
'fact 3'
get(item, default=None)[source]

Returns value of key item in self.data or default if key is not present.

Parameters
  • item -- Key to get from self.data.

  • default -- Default value to return if key is not present.

Returns

String value of the stored item, or the default if not found.

Return type

(str)

class insights.core.LogFileOutput(context)[source]

Bases: insights.core.Parser

Class for parsing log file content.

Log file content is stored in raw format in the lines attribute.

Assume the log file content is:

Log file line one
Log file line two
Log file line three, and more

Examples

>>> class MyLogger(LogFileOutput):
...     pass
>>> MyLogger.keep_scan('get_one', 'one')
>>> MyLogger.keep_scan('get_three_and_more', ['three', 'more'])
>>> MyLogger.keep_scan('get_one_or_two', ['one', 'two'], check=any)
>>> MyLogger.last_scan('last_line_contains_file', 'file')
>>> MyLogger.keep_scan('last_2_lines_contain_file', 'file', num=2, reverse=True)
>>> MyLogger.keep_scan('last_3_lines_contain_line_and_t', ['line', 't'], num=3, reverse=True)
>>> MyLogger.token_scan('find_more', 'more')
>>> MyLogger.token_scan('find_four_and_more', ['four', 'more'])
>>> MyLogger.token_scan('find_four_or_more', ['four', 'more'], check=any)
>>> my_logger = MyLogger(context_wrap(contents, path='/var/log/mylog'))
>>> my_logger.file_path
'/var/log/mylog'
>>> my_logger.file_name
'mylog'
>>> my_logger.get('two')
[{'raw_message': 'Log file line two'}]
>>> 'line three,' in my_logger
True
>>> my_logger.get(['three', 'more'])
[{'raw_message': 'Log file line three, and more'}]
>>> my_logger.lines[0]
'Log file line one'
>>> my_logger.get_one
[{'raw_message': 'Log file line one'}]
>>> my_logger.get_three_and_more == my_logger.get(['three', 'more'])
True
>>> my_logger.last_line_contains_file
{'raw_message': 'Log file line three, and more'}
>>> len(my_logger.last_2_lines_contain_file)
2
>>> len(my_logger.last_3_lines_contain_line_and_t)  # Only 2 lines contain 'line' and 't'
2
>>> my_logger.find_more
True
>>> my_logger.find_four_and_more
False
>>> my_logger.find_four_or_more
True
lines

List of the lines from the log file content.

Type

list

get(s, check=<built-in function all>, num=None, reverse=False)[source]

Returns all lines that contain s anywhere, wrapped in a list of dictionaries. s can be either a single string or a list of strings. For a list, all keywords in the list must be found in each line.

Parameters
  • s (str or list) -- one or more strings to search for

  • check (func) -- built-in function all or any applied to each line

  • num (int) -- the number of lines to get, None for unlimited

  • reverse (bool) -- scan from the head of the file when False (the default), otherwise scan from the tail

Returns

list of dictionaries corresponding to the parsed lines that contain s.

Return type

(list)

Raises

TypeError -- When s is not a string or a list of strings, or num is not an integer.

get_after(timestamp, s=None)[source]

Find all the (available) logs that are after the given time stamp.

If s is not supplied, all lines are used. Otherwise, only the lines containing s are used. s can be either a single string or a list of strings. For a list, all keywords in the list must be found in each line.

This method then finds all lines which have a time stamp after the given timestamp. Lines that do not contain a time stamp are considered to be part of the previous line and are therefore included if the last log line was included or excluded otherwise.

Time stamps are recognised by converting the time format into a regular expression which matches the time format in the string. This is then searched for in each line in turn. Only lines with a time stamp matching this expression will trigger the decision to include or exclude lines. Therefore, if the log for some reason does not contain a time stamp that matches this format, no lines will be returned.

The time format is given in strptime() format, in the object’s time_format property. Users of the object should not change this property; instead, the parser should subclass LogFileOutput and change the time_format property.

Some logs, regrettably, change time stamps formats across different lines, or change time stamp formats in different versions of the program. In order to accommodate this, the timestamp format can be a list of strptime() format strings. These are combined as alternatives in the regular expression, and are given to strptime in order. These can also be listed as the values of a dict, e.g.:

{'pre_10.1.5': '%y%m%d %H:%M:%S', 'post_10.1.5': '%Y-%m-%d %H:%M:%S'}

Note

Some logs - notably /var/log/messages - do not contain a year in the timestamp. This is detected by the absence of a ‘%y’ or ‘%Y’ in the time format. If the year field is absent, the year is assumed to be the year in the given timestamp being sought. Some attempt is made to handle logs with a rollover from December to January, by finding when the log’s timestamp (with the current year assumed) is more than eleven months (specifically, 330 days) ahead of or behind the timestamp date, and shifting that log date by 365 days so that it is more likely to be in the sought range. This paragraph is sponsored by syslog.

Parameters
  • timestamp (datetime.datetime) -- lines before this time are ignored.

  • s (str or list) -- one or more strings to search for. If not supplied, all available lines are searched.

Yields

dict -- The parsed lines with timestamps after this date, in the same format they were supplied. Each dict contains at least raw_message as a key.

Raises

ParseException -- If the format conversion string contains a format that we don’t recognise. In particular, no attempt is made to recognise or parse the time zone or other obscure values like day of year or week of year.
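The mechanics behind get_after() can be sketched in plain Python: lines with a parseable timestamp decide inclusion, and timestamp-less lines inherit the decision made for the previous timestamped line. A minimal stdlib-only sketch of the idea (not the actual insights-core implementation):

```python
from datetime import datetime

def get_after_sketch(lines, timestamp, time_format='%Y-%m-%d %H:%M:%S'):
    """Yield lines whose timestamp is after `timestamp`.

    Lines without a parseable timestamp inherit the include/exclude
    decision of the preceding timestamped line, mirroring how
    continuation lines are treated.
    """
    including = False
    ts_len = len(datetime.now().strftime(time_format))
    for line in lines:
        try:
            when = datetime.strptime(line[:ts_len], time_format)
            including = when > timestamp
        except ValueError:
            pass  # no timestamp: keep the previous decision
        if including:
            yield {'raw_message': line}

logs = [
    '2020-03-04 10:00:00 service starting',
    '    continuation of the previous line',
    '2020-03-04 11:00:00 service ready',
]
after = list(get_after_sketch(logs, datetime(2020, 3, 4, 10, 30)))
```

Here only the last line is yielded: the first line and its continuation fall before the cutoff, while the final timestamped line falls after it.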

classmethod keep_scan(result_key, token, check=<built-in function all>, num=None, reverse=False)[source]

Define a property that is set to the list of dictionaries of the lines that contain the given token. Uses the get method of the log file.

Parameters
  • result_key (str) -- the scanner key to register

  • token (str or list) -- one or more strings to search for

  • check (func) -- built-in function all or any applied to each line

  • num (int) -- the number of lines to get, None for unlimited

  • reverse (bool) -- scan from the head of the file when False (the default), otherwise scan from the tail

Returns

list of dictionaries corresponding to the parsed lines that contain the token.

Return type

(list)

classmethod last_scan(result_key, token, check=<built-in function all>)[source]

Define a property that is set to the dictionary of the last line that contains the given token. Uses the get method of the log file.

Parameters
  • result_key (str) -- the scanner key to register

  • token (str or list) -- one or more strings to search for

  • check (func) -- built-in function all or any applied to each line

Returns

dictionary corresponding to the last parsed line that contains the token.

Return type

(dict)

parse_content(content)[source]

Use all the defined scanners to search the log file, setting the properties defined in the scanner.

classmethod scan(result_key, func)[source]

Define computed fields based on a string to “grep for”. This is preferred to utilizing raw log lines in plugins because computed fields will be serialized, whereas raw log lines will not.

Raises

ValueError -- When result_key is already a registered scanner key.

scanner_keys = {}
scanners = []
time_format = '%Y-%m-%d %H:%M:%S'

The timestamp format assumed for the log files. A subclass can override this for files that have a different timestamp format. This can be:

  • A string in strptime() format.

  • A list of strptime() strings.

  • A dictionary with each item’s value being a strptime() string. This allows the item keys to provide some form of documentation.
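When a list (or the values of a dict) of formats is supplied, they are treated as alternatives. The idea of trying each strptime() format in order can be sketched with the stdlib alone (the formats below are illustrative, echoing the pre/post-10.1.5 example above; this is not the parser's actual code):

```python
from datetime import datetime

# Hypothetical alternative formats, e.g. across program versions.
TIME_FORMATS = ['%y%m%d %H:%M:%S', '%Y-%m-%d %H:%M:%S']

def parse_stamp(text):
    """Try each candidate strptime format in order; return the first match."""
    for fmt in TIME_FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    return None

old_style = parse_stamp('200304 10:00:00')      # matches the first format
new_style = parse_stamp('2020-03-04 10:00:00')  # matches the second format
```

Both calls resolve to the same instant even though the lines use different stamp styles, which is exactly why a list of formats is useful for logs that change format across versions.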

classmethod token_scan(result_key, token, check=<built-in function all>)[source]

Define a property that is set to True if the given token is found in the log file. Uses the __contains__ method of the log file.

Parameters
  • result_key (str) -- the scanner key to register

  • token (str or list) -- one or more strings to search for

  • check (func) -- built-in function all or any applied to each line

Returns

the property will contain True if any line contained (any or all of) the tokens given.

Return type

(bool)

class insights.core.Parser(context)[source]

Bases: object

Base class designed to be subclassed by parsers.

The framework will construct your object with a Context that will provide at least the content as an iterable of lines and the path that the content was retrieved from.

Facts should be exposed as instance members where applicable. For example:

self.fact = "123"

Examples

>>> class MyParser(Parser):
...     def parse_content(self, content):
...         self.facts = []
...         for line in content:
...             if 'fact' in line:
...                 self.facts.append(line)
>>> content = '''
... # Comment line
... fact=fact 1
... fact=fact 2
... fact=fact 3
... '''.strip()
>>> my_parser = MyParser(context_wrap(content, path='/etc/path_to_content/content.conf'))
>>> my_parser.facts
['fact=fact 1', 'fact=fact 2', 'fact=fact 3']
>>> my_parser.file_path
'/etc/path_to_content/content.conf'
>>> my_parser.file_name
'content.conf'
file_name = None

Filename portion of the input file.

Type

str

file_path = None

Full context path of the input file.

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.core.ScanMeta[source]

Bases: type

class insights.core.Scannable(context)[source]

Bases: insights.core.Parser

A class to enable early and easy collection of data in a file.

The Scannable class makes it easy to collect two common types of information from a data file:

  • A flag to indicate that the data contains one or more lines with a given string.

  • a list of lines containing a given string.

To create a parser from the Scannable parser class, the main job is to override the parse() method, returning your choice of data structure to represent the information in the file. This takes the form of a generator that yields structures for users of your parser. You can yield more than one object per line, or you can condense multiple lines into one object. Each object is then scanned with all the defined scanners for this class.

How does that work? Well, the individual rules using your parser will use the any() and collect() methods on the class object itself to set up new attributes of the class that will be given values based on the results of a function that checks each object from your parser for the properties it’s looking for. That’s pretty vague, so let’s give some examples - imagine a parser defined as:

class AnacondaLog(Scannable):
    pass

(Which uses the default parse() method that simply yields each line in turn.) A rule using this parser then does:

def warnings(line):
    return line if 'WARNING' in line else None

def has_fcoe_edd(line):
    return '/usr/libexec/fcoe/fcoe_edd.sh' in line

AnacondaLog.any('has_fcoe', has_fcoe_edd)
AnacondaLog.collect('warnings', warnings)

These then act in the following way:

  • When an object is instantiated from the AnacondaLog class, it will have the ‘has_fcoe’ attribute. This will be set to True if ‘/usr/libexec/fcoe/fcoe_edd.sh’ was found in any line in the file, or False otherwise.

  • When an object is instantiated from the AnacondaLog class, it will have the ‘warnings’ attribute. This will be a list containing all the lines found.

Users of your class can supply any function to either any() or collect(). Functions given to collect() can return anything they want to be collected - if they return something that evaluates to False then nothing is collected (so avoid returning empty lists, empty dicts, empty strings or False).

classmethod any(result_key, func)[source]

Sets the result_key to the output of func if func ever returns truthy

classmethod collect(result_key, func)[source]

Sets the result_key to an iterable of objects for which func(obj) returns True

parse(content)[source]

Default ‘parsing’ method. Subclasses should override this method with their own custom parsing as necessary.

parse_content(content)[source]

This method must be implemented by classes based on this class.

scanner_keys = {}
scanners = []
class insights.core.StreamParser(context)[source]

Bases: insights.core.Parser

Parsers that don’t have to store lines or look back in the data stream should implement StreamParser instead of Parser as it is more memory efficient. The only difference between StreamParser and Parser is that StreamParser.parse_content will receive a generator instead of a list.
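The memory benefit can be illustrated with a stdlib-only sketch: a stream-style parser consumes its input generator once, keeping only the derived facts, instead of materializing every line in a list (this is illustrative of the idea, not the insights-core classes themselves):

```python
def count_errors(lines):
    """Consume a line generator once, keeping only an aggregate fact.

    Nothing forces the whole input into memory, which is the point of
    a stream-style parser.
    """
    count = 0
    for line in lines:
        if 'ERROR' in line:
            count += 1
    return count

def fake_log(n):
    """Generator standing in for a large log file."""
    for i in range(n):
        yield ('ERROR line %d' % i) if i % 10 == 0 else ('INFO line %d' % i)

errors = count_errors(fake_log(1000))  # 100 lines match
```

A Parser subclass would receive the same content as a fully realized list; a StreamParser subclass receives a generator like fake_log(1000) and can process it without ever holding all the lines at once.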

class insights.core.SysconfigOptions(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

A parser to handle the standard ‘keyword=value’ format of files in the /etc/sysconfig directory. These are provided in the standard ‘data’ dictionary.

Examples

>>> 'OPTIONS' in ntpconf
True
>>> 'NOT_SET' in ntpconf
False
>>> 'COMMENTED_OUT' in ntpconf
False
>>> ntpconf['OPTIONS']
'-x -g'

For common variables such as OPTIONS, it is recommended to set a specific property in the subclass that fetches this option with a fallback to a default value.

Example subclass:

class DirsrvSysconfig(SysconfigOptions):

    @property
    def options(self):
        return self.data.get('OPTIONS', '')
keys()[source]

Return the list of keys (in no order) in the underlying dictionary.

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.core.Syslog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing syslog file content.

The important method is get(s), which finds all lines containing the string s and parses them into dictionaries with the following keys:

  • timestamp - the time the log line was written

  • procname - the process or facility that wrote the line

  • hostname - the host that generated the log line

  • message - the rest of the message (after the process name)

  • raw_message - the raw message before being split.

It is best to use filters and/or scanners with the messages log, to speed up parsing. These work on the raw message, before being parsed.

Sample log lines:

May  9 15:13:34 lxc-rhel68-sat56 jabberd/sm[11057]: session started: jid=rhn-dispatcher-sat@lxc-rhel6-sat56.redhat.com/superclient
May  9 15:13:36 lxc-rhel68-sat56 wrapper[11375]: --> Wrapper Started as Daemon
May  9 15:13:36 lxc-rhel68-sat56 wrapper[11375]: Launching a JVM...
May 10 15:24:28 lxc-rhel68-sat56 yum[11597]: Installed: lynx-2.8.6-27.el6.x86_64
May 10 15:36:19 lxc-rhel68-sat56 yum[11954]: Updated: sos-3.2-40.el6.noarch

Examples

>>> Syslog.token_scan('daemon_start', 'Wrapper Started as Daemon')
>>> Syslog.token_scan('yum_updated', ['yum', 'Updated'])
>>> Syslog.keep_scan('yum_lines', 'yum')
>>> Syslog.keep_scan('yum_installed_lines', ['yum', 'Installed'])
>>> syslog.get('wrapper')[0]
{'timestamp': 'May  9 15:13:36', 'hostname': 'lxc-rhel68-sat56',
 'procname': 'wrapper[11375]', 'message': '--> Wrapper Started as Daemon',
 'raw_message': 'May  9 15:13:36 lxc-rhel68-sat56 wrapper[11375]: --> Wrapper Started as Daemon'
}
>>> syslog.daemon_start
True
>>> syslog.yum_updated
True
>>> len(syslog.yum_lines)
2
>>> len(syslog.yum_installed_lines)
1

Note

Because syslog timestamps by default have no year, the year of the logs will be inferred from the year in your timestamp. This will also work around December/January crossovers.

get_logs_by_procname(proc)[source]
Parameters

proc (str) -- The process or facility that you’re looking for

Yields

(dict) -- The parsed syslog messages produced by that process or facility
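The field split performed on each syslog line can be sketched with a stdlib regex. This is an approximation of the format shown in the sample lines above, not the parser's actual code:

```python
import re

# Approximate syslog line shape: "timestamp hostname procname: message".
SYSLOG_RE = re.compile(
    r'^(?P<timestamp>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) '
    r'(?P<hostname>\S+) (?P<procname>[^:]+): (?P<message>.*)$'
)

def parse_line(line):
    """Split one syslog line into the documented keys."""
    m = SYSLOG_RE.match(line)
    fields = m.groupdict() if m else {}
    fields['raw_message'] = line
    return fields

entry = parse_line(
    'May 10 15:24:28 lxc-rhel68-sat56 yum[11597]: '
    'Installed: lynx-2.8.6-27.el6.x86_64'
)
```

The procname group stops at the first colon, so 'yum[11597]' is separated from the message even though the message itself contains a colon.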

scanner_keys = {}
scanners = []
time_format = '%b %d %H:%M:%S'
class insights.core.XMLParser(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

A parser class that reads XML files. Base your own parser on this.

Examples

>>> content = '''
... <?xml version="1.0"?>
... <data xmlns:fictional="http://characters.example.com"
...       xmlns="http://people.example.com">
...     <country name="Liechtenstein">
...         <rank updated="yes">2</rank>
...         <year>2008</year>
...         <gdppc>141100</gdppc>
...         <neighbor name="Austria" direction="E"/>
...         <neighbor name="Switzerland" direction="W"/>
...     </country>
...     <country name="Singapore">
...         <rank updated="yes">5</rank>
...         <year>2011</year>
...         <gdppc>59900</gdppc>
...         <neighbor name="Malaysia" direction="N"/>
...     </country>
...     <country name="Panama">
...         <rank>68</rank>
...         <year>2011</year>
...         <gdppc>13600</gdppc>
...         <neighbor name="Costa Rica" direction="W"/>
...     </country>
... </data>
... '''.strip()
>>> xml_parser = XMLParser(context_wrap(content))
>>> xml_parser.xmlns
'http://people.example.com'
>>> xml_parser.get_elements(".")[0].tag # Top-level elements
'data'
>>> len(xml_parser.get_elements("./country/neighbor", None)) # All 'neighbor' grand-children of 'country' children of the top-level elements
3
>>> len(xml_parser.get_elements(".//year/..[@name='Singapore']")[0]) # Nodes with name='Singapore' that have a 'year' child
1
>>> xml_parser.get_elements(".//*[@name='Singapore']/year")[0].text # 'year' nodes that are children of nodes with name='Singapore'
'2011'
>>> xml_parser.get_elements(".//neighbor[2]", "http://people.example.com")[0].get('name') # All 'neighbor' nodes that are the second child of their parent
'Switzerland'
raw

raw XML content

Type

str

dom

Root element of parsed XML file

Type

Element

xmlns

The default XML namespace, an empty string when no namespace is declared.

Type

str

data

All required specific properties can be included in data.

Type

dict

get_elements(element, xmlns=None)[source]

Return a list of elements that match the search condition. If the XML input has namespaces, tags and attributes in the form prefix:sometag get expanded to {uri}sometag, where the prefix is replaced by the full URI. Also, if there is a default namespace, that full URI gets prepended to all of the non-prefixed tags. Element names can contain letters, digits, hyphens, underscores, and periods, but must start with a letter or underscore. The while-clause converts a search condition such as /element1/element2 to /{namespace}element1/{namespace}element2.

Parameters
  • element -- Searching condition to search certain elements in an XML file. For more details about how to set searching condition, refer to section 19.7.2.1. Example and 19.7.2.2. Supported XPath syntax in https://docs.python.org/2/library/xml.etree.elementtree.html

  • xmlns -- XML namespace, defaults to None. None means that xmlns is taken from self.xmlns (the default namespace) rather than being "". Only a string parameter (including "") is regarded as a valid XML namespace.

Returns

List of elements that match the search condition

Return type

(list)

parse_content(content)[source]

All child classes inherit this method to parse XML files automatically. It calls parse_dom() by default to parse all necessary data into data, and the xmlns (the default namespace) is already available to that function.

parse_dom()[source]

If self.data is required, child classes need to override this method to set it
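The default-namespace expansion that get_elements() relies on can be sketched with the stdlib ElementTree directly (the XML content and URI below are illustrative, echoing the example above):

```python
import xml.etree.ElementTree as ET

CONTENT = '''<data xmlns="http://people.example.com">
    <country name="Singapore"><year>2011</year></country>
</data>'''

root = ET.fromstring(CONTENT)

# ElementTree expands default-namespace tags to {uri}tag, so a search
# condition like ./country/year becomes ./{uri}country/{uri}year.
ns = 'http://people.example.com'
path = './{%s}country/{%s}year' % (ns, ns)
years = [e.text for e in root.findall(path)]
```

A plain findall('./country/year') would match nothing here, because every tag carries the default namespace; prepending the full URI, as get_elements() does for you, is what makes the search succeed.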

class insights.core.YAMLParser(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

A parser class that reads YAML files. Base your own parser on this.

parse_content(content)[source]

This method must be implemented by classes based on this class.

insights.core.default_parser_deserializer(_type, data)[source]
insights.core.default_parser_serializer(obj)[source]
insights.core.find_main(confs, name)[source]
insights.core.flatten(docs, pred)[source]

Replace include nodes with their config trees. Allows the same files to be included more than once so long as they don’t induce a recursion.

insights.core.context

class insights.core.context.ClusterArchiveContext(root='/', timeout=None, all_files=None)[source]

Bases: insights.core.context.ExecutionContext

class insights.core.context.Context(**kwargs)[source]

Bases: object

product()[source]
stream()[source]
class insights.core.context.Docker(role=None)[source]

Bases: insights.core.context.MultiNodeProduct

name = 'docker'
parent_type = 'host'
class insights.core.context.DockerImageContext(root='/', timeout=None, all_files=None)[source]

Bases: insights.core.context.ExecutionContext

class insights.core.context.ExecutionContext(root='/', timeout=None, all_files=None)[source]

Bases: object

check_output(cmd, timeout=None, keep_rc=False, env=None)[source]

Subclasses can override to provide special environment setup, command prefixes, etc.

connect(*args, **kwargs)[source]
classmethod handles(files)[source]
locate_path(path)[source]
marker = None
shell_out(cmd, split=True, timeout=None, keep_rc=False, env=None)[source]
stream(*args, **kwargs)[source]
class insights.core.context.ExecutionContextMeta(name, bases, dct)[source]

Bases: type

classmethod identify(files)[source]
registry = [<class 'insights.core.context.HostContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.JBossContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.OpenStackContext'>, <class 'insights.core.context.OpenShiftContext'>]
class insights.core.context.HostArchiveContext(root='/', timeout=None, all_files=None)[source]

Bases: insights.core.context.ExecutionContext

marker = 'insights_commands'
class insights.core.context.HostContext(root='/', timeout=30, all_files=None)[source]

Bases: insights.core.context.ExecutionContext

class insights.core.context.InsightsOperatorContext(root='/', timeout=None, all_files=None)[source]

Bases: insights.core.context.ExecutionContext

Recognizes insights-operator archives

marker = 'config/featuregate'
class insights.core.context.JBossContext(root='/', timeout=30, all_files=None)[source]

Bases: insights.core.context.HostContext

class insights.core.context.JDRContext(root='/', timeout=None, all_files=None)[source]

Bases: insights.core.context.ExecutionContext

locate_path(path)[source]
marker = 'JBOSS_HOME'
class insights.core.context.MultiNodeProduct(role=None)[source]

Bases: object

is_parent()[source]
class insights.core.context.MustGatherContext(root='/', timeout=None, all_files=None)[source]

Bases: insights.core.context.ExecutionContext

Recognizes must-gather archives

marker = 'namespaces'
class insights.core.context.OSP(role=None)[source]

Bases: insights.core.context.MultiNodeProduct

name = 'osp'
parent_type = 'Director'
class insights.core.context.OpenShiftContext(hostname)[source]

Bases: insights.core.context.ExecutionContext

class insights.core.context.OpenStackContext(hostname)[source]

Bases: insights.core.context.ExecutionContext

class insights.core.context.RHEL(version=['-1', '-1'], release=None)[source]

Bases: object

classmethod from_metadata(metadata, processor_obj)[source]
name = 'rhel'
class insights.core.context.RHEV(role=None)[source]

Bases: insights.core.context.MultiNodeProduct

name = 'rhev'
parent_type = 'Manager'
class insights.core.context.SerializedArchiveContext(root='/', timeout=None, all_files=None)[source]

Bases: insights.core.context.ExecutionContext

marker = 'insights_archive.txt'
class insights.core.context.SosArchiveContext(root='/', timeout=None, all_files=None)[source]

Bases: insights.core.context.ExecutionContext

marker = 'sos_commands'
insights.core.context.create_product(metadata, hostname)[source]
insights.core.context.fs_root(thing)[source]
insights.core.context.get_system(metadata, hostname)[source]
insights.core.context.product(klass)[source]

insights.core.dr

This module implements an inversion of control framework. It allows dependencies among functions and classes to be declared with decorators and the resulting dependency graphs to be executed.

A decorator used to declare dependencies is called a ComponentType, a decorated function or class is called a component, and a collection of interdependent components is called a graph.

In the example below, needs is a ComponentType, one, two, and add are components, and the relationship formed by their dependencies is a graph.

from insights import dr

class needs(dr.ComponentType):
    pass

@needs()
def one():
    return 1

@needs()
def two():
    return 2

@needs(one, two)
def add(a, b):
    return a + b

results = dr.run(add)

Once all components have been imported, the graphs they form can be run. To execute a graph, dr sorts its components into an order that guarantees dependencies are tried before dependents. Components that raise exceptions are considered invalid, and their dependents will not be executed. If a component is skipped because of a missing dependency, its dependents also will not be executed.

During evaluation, results are accumulated into an object called a Broker, which is just a fancy dictionary. Brokers can be inspected after a run for results, exceptions, tracebacks, and execution times. You also can register callbacks with a broker that get invoked after the attempted execution of every component, so you can inspect it during an evaluation instead of at the end.
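The evaluation model - try dependencies before dependents, accumulate results, and skip the dependents of anything that failed - can be sketched in a few lines of plain Python. This is a toy model of the idea, not the dr implementation:

```python
def run_graph(graph):
    """Toy evaluation: graph maps component -> list of dependencies.

    Components are plain functions taking their dependencies' results.
    Results accumulate in a dict standing in for the Broker; a component
    whose dependency is missing (e.g. because it raised) is skipped.
    """
    broker = {}
    done = set()

    def visit(comp):
        if comp in done:
            return
        done.add(comp)
        deps = graph.get(comp, [])
        for dep in deps:
            visit(dep)  # dependencies are tried before dependents
        if all(d in broker for d in deps):
            try:
                broker[comp] = comp(*(broker[d] for d in deps))
            except Exception:
                pass  # invalid: dependents will find the result missing

    for comp in graph:
        visit(comp)
    return broker

def one():
    return 1

def two():
    return 2

def add(a, b):
    return a + b

results = run_graph({one: [], two: [], add: [one, two]})
```

After the run, the dict holds 1, 2, and 3 keyed by the component functions, much as a Broker holds component instances after dr.run().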

class insights.core.dr.Broker(seed_broker=None)[source]

Bases: object

The Broker is a fancy dictionary that keeps up with component instances as a graph is evaluated. It’s the state of the evaluation. Once a graph has executed, the broker will contain everything about the evaluation: component instances, timings, exceptions, and tracebacks.

You can either inspect the broker at the end of an evaluation, or you can register callbacks with it, and they’ll get invoked after each component is called.

instances

the component instances with components as keys.

Type

dict

missing_requirements

components that didn’t have their dependencies met. Values are a two-tuple. The first element is the list of required dependencies that were missing. The second element is the list of “at least one” dependencies that were missing. For more information on dependency types, see the ComponentType docs.

Type

dict

exceptions

Components that raise any type of exception except SkipComponent during evaluation. The key is the component, and the value is a list of exceptions. It’s a list because some components produce multiple instances.

Type

defaultdict(list)

tracebacks

keys are exceptions and values are their text tracebacks.

Type

dict

exec_times

component -> float dictionary where values are the number of seconds the component took to execute. Calculated using time.time(). For components that produce multiple instances, the execution time here is the sum of their individual execution times.

Type

dict

add_exception(component, ex, tb=None)[source]
add_observer(o, component_type=<class 'insights.core.dr.ComponentType'>)[source]

Add a callback that will get invoked after each component is called.

Parameters

o (func) -- the callback function

Keyword Arguments

component_type (ComponentType) -- the ComponentType to observe. The callback will fire any time an instance of the class or its subclasses is invoked.

The callback should look like this:

def callback(comp, broker):
    value = broker.get(comp)
    # do something with value
    pass
fire_observers(component)[source]
get(component, default=None)[source]
get_by_type(_type)[source]

Return all of the instances of ComponentType _type.

items()[source]
keys()[source]
observer(component_type=<class 'insights.core.dr.ComponentType'>)[source]

You can use @broker.observer() as a decorator to your callback instead of Broker.add_observer().

print_component(component_type)[source]
values()[source]
exception insights.core.dr.MissingRequirements(requirements)[source]

Bases: Exception

Raised during evaluation if a component’s dependencies aren’t met.

exception insights.core.dr.SkipComponent[source]

Bases: Exception

This class should be raised by components that want to be taken out of dependency resolution.

insights.core.dr.add_dependency(component, dep)[source]
insights.core.dr.add_dependent(component, dep)[source]
insights.core.dr.add_ignore(c, i)[source]
insights.core.dr.add_observer(o, component_type=<class 'insights.core.dr.ComponentType'>)[source]

Add a callback that will get invoked after each component is called.

Parameters

o (func) -- the callback function

Keyword Arguments

component_type (ComponentType) -- the ComponentType to observe. The callback will fire any time an instance of the class or its subclasses is invoked.

The callback should look like this:

def callback(comp, broker):
    value = broker.get(comp)
    # do something with value
    pass
insights.core.dr.first_of(dependencies, broker)[source]
insights.core.dr.generate_incremental(components=None, broker=None)[source]
insights.core.dr.get_base_module_name(obj)[source]
insights.core.dr.get_component(name)[source]

Returns the class or function specified, importing it if necessary.

insights.core.dr.get_component_by_name(name)[source]

Look up a component by its fully qualified name. Return None if the component hasn’t been loaded.

insights.core.dr.get_component_type(component)[source]
insights.core.dr.get_components_of_type(_type)[source]
insights.core.dr.get_delegate(component)[source]
insights.core.dr.get_dependencies(component)[source]
insights.core.dr.get_dependency_graph(component)[source]

Generate a component’s graph of dependencies, which can be passed to run() or run_incremental().

insights.core.dr.get_dependents(component)[source]
insights.core.dr.get_group(component)[source]

Return the dictionary of links associated with the component. Defaults to dict().

insights.core.dr.get_metadata(component)[source]

Return any metadata dictionary associated with the component. Defaults to an empty dictionary.

insights.core.dr.get_missing_requirements(func, requires, d)[source]

Deprecated since version 1.x.

insights.core.dr.get_module_name(obj)[source]
insights.core.dr.get_name(component)[source]

Attempt to get the string name of component, including module and class if applicable.

insights.core.dr.get_simple_name(component)[source]
insights.core.dr.get_subgraphs(graph=None)[source]

Given a graph of possibly disconnected components, generate all graphs of connected components. graph is a dictionary of dependencies. Keys are components, and values are sets of components on which they depend.

insights.core.dr.get_tags(component)[source]

Return the set of tags associated with the component. Defaults to set().

insights.core.dr.hashable(v)[source]
insights.core.dr.is_enabled(component)[source]

Check to see if a component is enabled.

Parameters

component (callable) -- The component to check. The component must already be loaded.

Returns

True if the component is enabled. False otherwise.

insights.core.dr.is_hidden(component)[source]
insights.core.dr.load_components(*paths, **kwargs)[source]

Loads all components on the paths. Each path should be a package or module. All components beneath a path are loaded.

Parameters

paths (str) -- A package or module to load

Keyword Arguments
  • include (str) -- A regular expression of packages and modules to include. Defaults to ‘.*’

  • exclude (str) -- A regular expression of packages and modules to exclude. Defaults to ‘test’

  • continue_on_error (bool) -- If True, continue importing even if something raises an ImportError. If False, raise the first ImportError.

Returns

The total number of modules loaded.

Return type

int

Raises

ImportError --

insights.core.dr.mark_hidden(component)[source]
insights.core.dr.observer(component_type=<class 'insights.core.dr.ComponentType'>)[source]

You can use @broker.observer() as a decorator to your callback instead of add_observer().

insights.core.dr.run(components=None, broker=None)[source]

Executes components in an order that satisfies their dependency relationships.

Keyword Arguments
  • components -- Can be one of a dependency graph, a single component, a component group, or a component type. If it’s anything other than a dependency graph, the appropriate graph is built for you before evaluation.

  • broker (Broker) -- Optionally pass a broker to use for evaluation. One is created by default, but it’s often useful to seed a broker with an initial dependency.

Returns

The broker after evaluation.

Return type

Broker

insights.core.dr.run_all(components=None, broker=None, pool=None)[source]
insights.core.dr.run_incremental(components=None, broker=None)[source]

Executes components in an order that satisfies their dependency relationships. Disjoint subgraphs are executed one at a time and a broker containing the results for each is yielded. If a broker is passed here, its instances are used to seed the broker used to hold state for each sub graph.

Keyword Arguments
  • components -- Can be one of a dependency graph, a single component, a component group, or a component type. If it’s anything other than a dependency graph, the appropriate graph is built for you before evaluation.

  • broker (Broker) -- Optionally pass a broker to use for evaluation. One is created by default, but it’s often useful to seed a broker with an initial dependency.

Yields

Broker -- the broker used to evaluate each subgraph.

insights.core.dr.run_order(graph)[source]

Returns components in an order that satisfies their dependency relationships.

insights.core.dr.set_enabled(component, enabled=True)[source]

Enable a component for evaluation. If set to False, the component is skipped, and all components that require it will not execute. If component is a fully qualified name string of a callable object instead of the callable object itself, the component’s module is loaded as a side effect of calling this function.

Parameters
  • component (str or callable) -- fully qualified name of the component or the component object itself.

  • enabled (bool) -- whether the component is enabled for evaluation.

Returns

None

insights.core.dr.split_requirements(requires)[source]
insights.core.dr.stringify_requirements(requires)[source]
insights.core.dr.walk_dependencies(root, visitor)[source]

Call visitor on root and all dependencies reachable from it in breadth first order.

Parameters
  • root (component) -- component function or class

  • visitor (function) -- signature is func(component, parent). The call on root is visitor(root, None).

insights.core.dr.walk_tree(root, method=<function get_dependencies>)[source]
class insights.core.dr.ComponentType(*deps, **kwargs)[source]

ComponentType is the base class for all component type decorators.

For Example:

class my_component_type(ComponentType):
    pass

@my_component_type(SshDConfig, InstalledRpms, [ChkConfig, UnitFiles], optional=[IPTables, IpAddr])
def my_func(sshd_config, installed_rpms, chk_config, unit_files, ip_tables, ip_addr):
    return installed_rpms.newest("bash")

Notice that the arguments to my_func correspond to the dependencies in the @my_component_type and are in the same order.

When used, a my_component_type instance is created whose __init__ gets passed dependencies and whose __call__ gets passed the component to run if dependencies are met.

Parameters to the decorator have these forms:

Criteria        Example Decorator Arguments      Description
Required        SshDConfig, InstalledRpms        A regular argument
At Least One    [ChkConfig, UnitFiles]           An argument as a list
Optional        optional=[IPTables, IpAddr]      A list following optional=

If a parameter is required, the value provided for it is guaranteed not to be None. In the example above, sshd_config and installed_rpms will not be None.

At least one of the arguments to parameters of an “at least one” list will not be None. In the example, either or both of chk_config and unit_files will not be None.

Any or all arguments for optional parameters may be None.

The following keyword arguments may be passed to the decorator:

requires

a list of components that all components decorated with this type will implicitly require. Additional components passed to the decorator will be appended to this list.

Type

list

optional

a list of components that all components decorated with this type will implicitly depend on optionally. Additional components passed as optional to the decorator will be appended to this list.

Type

list

metadata

an arbitrary dictionary of information to associate with the component you’re decorating. It can be retrieved with get_metadata.

Type

dict

tags

a list of strings that categorize the component. Useful for formatting output or sifting through results for components you care about.

Type

list

group

GROUPS.single or GROUPS.cluster. Used to organize components into “groups” that run together with insights.core.dr.run().

cluster

if True will put the component into the GROUPS.cluster group. Defaults to False. Overrides group if True.

Type

bool

get_missing_dependencies(broker)[source]

Gets required and at-least-one dependencies not provided by the broker.

invoke(results)[source]

Handles invocation of the component. The default implementation invokes it with positional arguments based on order of dependency declaration.

process(broker)[source]

Ensures dependencies have been met before delegating to self.invoke.

insights.core.filters

The filters module allows developers to apply filters to datasources, by adding them directly or through dependent components like parsers and combiners. A filter is a simple string, and it matches if it is contained anywhere within a line.

If a datasource has filters defined, it will return only lines matching at least one of them. If a datasource has no filters, it will return all lines.

Filters can be added to components like parsers and combiners, to apply consistent filtering to multiple underlying datasources that are configured as filterable.

Filters aren’t applicable to “raw” datasources, which are created with kind=RawFileProvider and have RegistryPoint instances with raw=True.

The addition of a single filter can cause a datasource to change from returning all lines to returning just those that match. Therefore, any filtered datasource should have at least one filter in the commit introducing it so downstream components don’t inadvertently change its behavior.

The benefit of this fragility is the ability to drastically reduce in-memory footprint and archive sizes. An additional benefit is the ability to evaluate only lines known to be free of sensitive information.

Filters added to a RegistryPoint will be applied to all datasources that implement it. Filters added to a datasource implementation apply only to that implementation.

For example, a filter added to Specs.ps_auxww will apply to DefaultSpecs.ps_auxww, InsightsArchiveSpecs.ps_auxww, SosSpecs.ps_auxww, etc. But a filter added to DefaultSpecs.ps_auxww will only apply to DefaultSpecs.ps_auxww. See the modules in insights.specs for those classes.

Filtering can be disabled globally by setting the environment variable INSIGHTS_FILTERS_ENABLED=False. This means that no datasources will be filtered even if filters are defined for them.

insights.core.filters.add_filter(component, patterns)[source]

Add a filter or list of filters to a component. When the component is a datasource, the filter will be added directly to that datasource. When the component is a parser or combiner, the filter will be added to the underlying filterable datasources by traversing the dependency graph. A filter is a simple string, and it matches if it is contained anywhere within a line.

Parameters
  • component (component) -- The component to filter, can be datasource, parser or combiner.

  • patterns (str, [str]) -- A string, list of strings, or set of strings to add to the datasource’s filters.

insights.core.filters.apply_filters(target, lines)[source]

Applies filters to the lines of a datasource. This function is used only in integration tests. Filters are applied in an equivalent but more performant way at run time.

insights.core.filters.dump(stream=None)[source]

Dumps a string representation of FILTERS to a stream, normally an open file. If none is passed, FILTERS is dumped to a default location within the project.

insights.core.filters.dumps()[source]

Returns a string representation of the FILTERS dictionary.

insights.core.filters.get_filters(component)[source]

Get the set of filters for the given datasource.

Filters added to a RegistryPoint will be applied to all datasources that implement it. Filters added to a datasource implementation apply only to that implementation.

For example, a filter added to Specs.ps_auxww will apply to DefaultSpecs.ps_auxww, InsightsArchiveSpecs.ps_auxww, SosSpecs.ps_auxww, etc. But a filter added to DefaultSpecs.ps_auxww will only apply to DefaultSpecs.ps_auxww. See the modules in insights.specs for those classes.

Parameters

component (a datasource) -- The target datasource

Returns

The set of filters defined for the datasource

Return type

set

insights.core.filters.load(stream=None)[source]

Loads filters from a stream, normally an open file. If one is not passed, filters are loaded from a default location within the project.

insights.core.filters.loads(string)[source]

Loads the filters dictionary given a string.

insights.core.plugins

The plugins module defines the components used by the rest of insights and specializes their interfaces and execution model where required.

This module includes the following ComponentType subclasses:

It also contains the following Response subclasses that rules may return:

exception insights.core.plugins.ContentException[source]

Bases: insights.core.dr.SkipComponent

Raised whenever a datasource fails to get data.

class insights.core.plugins.PluginType(*deps, **kwargs)[source]

Bases: insights.core.dr.ComponentType

PluginType is the base class of plugin types like datasource, rule, etc. It provides a default invoke method that catches exceptions we don’t want bubbling to the top of the evaluation loop. These exceptions are commonly raised by datasource components but could occur in the context of any component, since most datasource runtime errors are lazy.

It’s possible for a datasource to “succeed” and return an object but for an exception to be raised when the parser tries to access the content of that object. For example, when a command datasource is evaluated, it only checks that the command exists and is executable. Invocation of the command itself is delayed until the parser asks for its value. This helps with performance and memory consumption.

invoke(broker)[source]

Handles invocation of the component. The default implementation invokes it with positional arguments based on order of dependency declaration.

class insights.core.plugins.Response(key, **kwargs)[source]

Bases: dict

Response is the base class of response types that can be returned from rules.

Subclasses must call __init__ of this class via super() and must provide the response_type class attribute.

The key_name class attribute is optional, but if one is specified, the first argument to __init__ must not be None. If key_name is None, then the first argument to __init__ should be None. It’s best to override __init__ in subclasses so users aren’t required to pass None explicitly.

adjust_for_length(key, r, kwargs)[source]

Converts the response to a string and compares its length to a max length specified in settings. If the response is too long, an error is logged, and an abbreviated response is returned instead.

get_key()[source]

Helper function that uses the response’s key_name to look up the response identifier. For a rule, this is like response.get(“error_key”).

key_name = None

key_name is something like ‘error_key’, ‘fingerprint_key’, etc. It is the key downstream systems use to look up the exact response returned by a rule.

response_type = None

response_type is something like ‘rule’, ‘metadata’, ‘fingerprint’, etc. It is how downstream systems identify the type of information returned by a rule.

validate_key(key)[source]

Called if the key_name class attribute is not None.

validate_kwargs(kwargs)[source]

Validates expected subclass attributes and constructor keyword arguments.

exception insights.core.plugins.ValidationException(msg, r=None)[source]

Bases: Exception

class insights.core.plugins.combiner(*deps, **kwargs)[source]

Bases: insights.core.plugins.PluginType

A decorator for a component that composes or “combines” other components.

A typical use case is hiding slight variations in related parser interfaces. Another use case is to combine several related parsers behind a single, cohesive, higher level interface.

class insights.core.plugins.component(*deps, **kwargs)[source]

Bases: insights.core.plugins.PluginType

class insights.core.plugins.condition(*deps, **kwargs)[source]

Bases: insights.core.plugins.PluginType

ComponentType used to encapsulate boolean logic you’d like to have analyzed by a rule analysis system. Conditions should return truthy values. None is also a valid return type for conditions, so rules that depend on conditions that might return None should check their validity.

class insights.core.plugins.datasource(*deps, **kwargs)[source]

Bases: insights.core.plugins.PluginType

Decorates a component that one or more insights.core.Parser subclasses will consume.

filterable = False
invoke(broker)[source]

Handles invocation of the component. The default implementation invokes it with positional arguments based on order of dependency declaration.

multi_output = False
raw = False
class insights.core.plugins.fact(*deps, **kwargs)[source]

Bases: insights.core.plugins.PluginType

ComponentType for a component that surfaces a dictionary or list of dictionaries that will be used later by cluster rules. The data from a fact is converted to a pandas Dataframe

class insights.core.plugins.incident(*deps, **kwargs)[source]

Bases: insights.core.plugins.PluginType

ComponentType for a component used by rules that allows automated statistical analysis.

insights.core.plugins.is_combiner(component)[source]
insights.core.plugins.is_component(obj)[source]
insights.core.plugins.is_datasource(component)[source]
insights.core.plugins.is_parser(component)[source]
insights.core.plugins.is_rule(component)[source]
insights.core.plugins.is_type(component, _type)[source]
class insights.core.plugins.make_fail(key, **kwargs)[source]

Bases: insights.core.plugins.make_response

Returned by a rule to signal that its conditions have been met.

Example:

# completely made up package
buggy = InstalledRpms.from_package("bash-3.4.23-1.el7")

@rule(InstalledRpms)
def report(installed_rpms):
   bash = installed_rpms.newest("bash")
   if bash == buggy:
       return make_fail("BASH_BUG_123", bash=bash)
   return make_pass("BASH", bash=bash)
class insights.core.plugins.make_fingerprint(key, **kwargs)[source]

Bases: insights.core.plugins.Response

key_name = 'fingerprint_key'
response_type = 'fingerprint'
class insights.core.plugins.make_info(key, **kwargs)[source]

Bases: insights.core.plugins.Response

Returned by a rule to surface information about a system.

Example:

@rule(InstalledRpms)
def report(rpms):
   bash = rpms.newest("bash")
   return make_info("BASH_VERSION", bash=bash.nvra)
key_name = 'info_key'
response_type = 'info'
class insights.core.plugins.make_metadata(**kwargs)[source]

Bases: insights.core.plugins.Response

Allows a rule to convey additional metadata about a system to downstream systems. It doesn’t convey success or failure, but purely information that may be aggregated with other make_metadata responses. As such, it has no response key.

response_type = 'metadata'
class insights.core.plugins.make_metadata_key(key, value)[source]

Bases: insights.core.plugins.Response

adjust_for_length(key, r, kwargs)[source]

Converts the response to a string and compares its length to a max length specified in settings. If the response is too long, an error is logged, and an abbreviated response is returned instead.

key_name = 'key'
response_type = 'metadata_key'
class insights.core.plugins.make_pass(key, **kwargs)[source]

Bases: insights.core.plugins.Response

Returned by a rule to signal that its conditions explicitly have not been met. In other words, the rule has all of the information it needs to determine that the system it’s analyzing is not in the state the rule was meant to catch.

An example rule might check whether a system is vulnerable to a well defined exploit or has a bug in a specific version of a package. If it can say for sure “the system does not have this exploit” or “the system does not have the buggy version of the package installed”, then it should return an instance of make_pass.

Example:

# completely made up package
buggy = InstalledRpms.from_package("bash-3.4.23-1.el7")

@rule(InstalledRpms)
def report(installed_rpms):
   bash = installed_rpms.newest("bash")
   if bash == buggy:
       return make_fail("BASH_BUG_123", bash=bash)
   return make_pass("BASH", bash=bash)
key_name = 'pass_key'
response_type = 'pass'
class insights.core.plugins.make_response(key, **kwargs)[source]

Bases: insights.core.plugins.Response

Returned by a rule to signal that its conditions have been met.

Example:

# completely made up package
buggy = InstalledRpms.from_package("bash-3.4.23-1.el7")

@rule(InstalledRpms)
def report(installed_rpms):
   bash = installed_rpms.newest("bash")
   if bash == buggy:
       return make_response("BASH_BUG_123", bash=bash)
   return make_pass("BASH", bash=bash)

Deprecated: use make_fail instead.

key_name = 'error_key'
response_type = 'rule'
class insights.core.plugins.metadata(*args, **kwargs)[source]

Bases: insights.core.plugins.parser

Used for old cluster uber-archives.

Deprecated since version 1.x.

Warning

Do not use this component type.

requires = ['metadata.json']
class insights.core.plugins.parser(*args, **kwargs)[source]

Bases: insights.core.plugins.PluginType

Decorates a component responsible for parsing the output of a datasource. @parser should accept multiple arguments, the first will ALWAYS be the datasource the parser component should handle. Any subsequent argument will be a component used to determine if the parser should fire. @parser should only decorate subclasses of insights.core.Parser.

Warning

If a Parser component handles a datasource that returns a list, a Parser instance will be created for each element of the list. Combiners or rules that depend on the Parser will be passed the list of instances and not a single parser instance. By default, if any parser in the list succeeds, those parsers are passed on to dependents, even if others fail. If all parsers should succeed or fail together, pass continue_on_error=False.

invoke(broker)[source]

Handles invocation of the component. The default implementation invokes it with positional arguments based on order of dependency declaration.

class insights.core.plugins.remoteresource(*deps, **kwargs)[source]

Bases: insights.core.plugins.PluginType

ComponentType for a component for remote web resources.

class insights.core.plugins.rule(*args, **kwargs)[source]

Bases: insights.core.plugins.PluginType

Decorator for components that encapsulate some logic that depends on the data model of a system. Rules can depend on datasource instances, parser instances, combiner instances, or anything else.

For example:

@rule(SshDConfig, InstalledRpms, [ChkConfig, UnitFiles], optional=[IPTables, IpAddr])
def report(sshd_config, installed_rpms, chk_config, unit_files, ip_tables, ip_addr):
    # ...
    # ... some complicated logic
    # ...
    bash = installed_rpms.newest("bash")
    return make_pass("BASH", bash=bash)

Notice that the arguments to report correspond to the dependencies in the @rule decorator and are in the same order.

Parameters to the decorator have these forms:

Criteria        Example Decorator Arguments      Description
Required        SshDConfig, InstalledRpms        Regular arguments
At Least One    [ChkConfig, UnitFiles]           An argument as a list
Optional        optional=[IPTables, IpAddr]      A list following optional=

If a parameter is required, the value provided for it is guaranteed not to be None. In the example above, sshd_config and installed_rpms will not be None.

At least one of the arguments to parameters of an “at least one” list will not be None. In the example, either or both of chk_config and unit_files will not be None.

Any or all arguments for optional parameters may be None.

The following keyword arguments may be passed to the decorator:

Keyword Arguments
  • requires (list) -- a list of components that all components decorated with this type will require. Instead of using requires=[...], just pass dependencies as variable arguments to @rule as in the example above.

  • optional (list) -- a list of components that all components decorated with this type will implicitly depend on optionally. Additional components passed as optional to the decorator will be appended to this list.

  • metadata (dict) -- an arbitrary dictionary of information to associate with the component you’re decorating. It can be retrieved with get_metadata.

  • tags (list) -- a list of strings that categorize the component. Useful for formatting output or sifting through results for components you care about.

  • group -- GROUPS.single or GROUPS.cluster. Used to organize components into “groups” that run together with insights.core.dr.run().

  • cluster (bool) -- if True will put the component into the GROUPS.cluster group. Defaults to False. Overrides group if True.

  • content (string or dict) -- a jinja2 template or dictionary of jinja2 templates. The Response subclasses rules can return are dictionaries. make_pass, make_fail, and make_response all accept first a key and then arbitrary keyword arguments. If content is a dictionary, the key is used to look up the template into which the rest of the keyword arguments are interpolated. If content is a string, it is used for all return values of the rule. If content isn’t defined but a CONTENT variable is declared in the module, that variable is used for every rule in the module; it may likewise be a string or a dictionary of templates.

  • links (dict) -- a dictionary with strings as keys and lists of urls as values. The keys categorize the urls, e.g. “kcs” for kcs urls and “bugzilla” for bugzilla urls.

content = None
process(broker)[source]

Ensures dependencies have been met before delegating to self.invoke.

insights.core.remote_resource

class insights.core.remote_resource.CachedRemoteResource[source]

Bases: insights.core.remote_resource.RemoteResource

RemoteResource subclass that sets up caching for subsequent Web resource requests.

expire_after

Amount of time in seconds that the cache will expire

Type

float

backend

Type of storage for cache: DictCache, FileCache or RedisCache

Type

str

redis_host

Hostname of redis instance if RedisCache backend is specified

Type

str

redis_port

Port used to contact the redis instance if RedisCache backend is specified

Type

int

file_cache_path

Path to where file cache will be stored if FileCache backend is specified

Type

string

Examples

>>> from insights.core.remote_resource import CachedRemoteResource
>>> crr = CachedRemoteResource()
>>> rtn = crr.get("http://google.com")
>>> print (rtn.content)
backend = 'DictCache'
expire_after = 180
file_cache_path = '.web_cache'
redis_host = 'localhost'
redis_port = 6379
class insights.core.remote_resource.DefaultHeuristic(expire_after)[source]

Bases: cachecontrol.heuristics.BaseHeuristic

BaseHeuristic subclass that sets the default caching headers if not supplied by the remote service.

default_cache_vars

Warning message indicating that the response from the remote server did not include proper HTTP cache headers, so default cache settings will be used

Type

str

server_cache_headers

Message content warning that we are using cache settings returned by the remote server.

Type

str

default_cache_vars = 'Remote service caching headers not set correctly, using default caching'
server_cache_headers = 'Caching being done based on caching headers returned by remote service'
update_headers(response)[source]

Returns the updated caching headers.

Parameters

response (HttpResponse) -- The response from the remote service

Returns

(HttpResponse.Headers): Http caching headers

Return type

response

warning(response)[source]

Return a valid 1xx warning header value describing the cache adjustments.

The response is provided to allow warnings like 113 http://tools.ietf.org/html/rfc7234#section-5.5.4 where we need to explicitly say the response is over 24 hours old.

class insights.core.remote_resource.RemoteResource(session=None)[source]

Bases: object

RemoteResource class for accessing external Web resources.

timeout

Time in seconds for the requests.get api call to wait before returning a timeout exception

Type

float

Examples

>>> from insights.core.remote_resource import RemoteResource
>>> rr = RemoteResource()
>>> rtn = rr.get("http://google.com")
>>> print (rtn.content)
get(url, params={}, headers={}, auth=(), certificate_path=None)[source]

Returns the response payload from the request to the given URL.

Parameters
  • url (str) -- The URL of the web API that the request is being made to.

  • params (dict) -- Dictionary containing the query string parameters.

  • headers (dict) -- HTTP Headers that may be needed for the request.

  • auth (tuple) -- User ID and password for Basic Auth

  • certificate_path (str) -- Path to the ssl certificate.

Returns

(HttpResponse): Response object from requests.get api request

Return type

response

timeout = 10

insights.core.spec_factory

class insights.core.spec_factory.CommandOutputProvider(cmd, ctx, args=None, split=True, keep_rc=False, ds=None, timeout=None, inherit_env=None)[source]

Bases: insights.core.spec_factory.ContentProvider

Class used in datasources to return output from commands.

create_args()[source]
create_env()[source]
load()[source]
validate()[source]
write(dst)[source]
class insights.core.spec_factory.ContentProvider[source]

Bases: object

property content
load()[source]
property path
stream()[source]

Returns a generator of lines instead of a list of lines.

class insights.core.spec_factory.DatasourceProvider(content, relative_path, root='/', ds=None, ctx=None)[source]

Bases: insights.core.spec_factory.ContentProvider

load()[source]
write(dst)[source]
class insights.core.spec_factory.FileProvider(relative_path, root='/', ds=None, ctx=None)[source]

Bases: insights.core.spec_factory.ContentProvider

validate()[source]
class insights.core.spec_factory.RawFileProvider(relative_path, root='/', ds=None, ctx=None)[source]

Bases: insights.core.spec_factory.FileProvider

Class used in datasources that returns the contents of a file as a single string. The file is not filtered.

load()[source]
write(dst)[source]
class insights.core.spec_factory.RegistryPoint(metadata=None, multi_output=False, raw=False, filterable=False)[source]

Bases: object

insights.core.spec_factory.SAFE_ENV = {'LANG': 'C.UTF-8', 'LC_ALL': 'C', 'PATH': '/bin:/usr/bin:/sbin:/usr/sbin:/usr/share/Modules/bin'}

A minimal set of environment variables for use in subprocess calls

class insights.core.spec_factory.SerializedOutputProvider(relative_path, root='/', ds=None, ctx=None)[source]

Bases: insights.core.spec_factory.TextFileProvider

create_args()[source]
class insights.core.spec_factory.SerializedRawOutputProvider(relative_path, root='/', ds=None, ctx=None)[source]

Bases: insights.core.spec_factory.RawFileProvider

class insights.core.spec_factory.SpecDescriptor(func)[source]

Bases: object

class insights.core.spec_factory.SpecSet[source]

Bases: object

The base class for all spec declarations. Extend this class and define your datasources directly or with a SpecFactory.

context_handlers = {}
registry = {}
class insights.core.spec_factory.SpecSetMeta(name, bases, dct)[source]

Bases: type

The metaclass that converts RegistryPoint markers to registry point datasources and hooks implementations for them into the registry.

class insights.core.spec_factory.TextFileProvider(relative_path, root='/', ds=None, ctx=None)[source]

Bases: insights.core.spec_factory.FileProvider

Class used in datasources that returns the contents of a file as a list of lines. Each line is filtered if filters are defined for the datasource.

create_args()[source]
load()[source]
write(dst)[source]
class insights.core.spec_factory.command_with_args(cmd, provider, context=<class 'insights.core.context.HostContext'>, deps=None, split=True, keep_rc=False, timeout=None, inherit_env=None, **kwargs)[source]

Bases: object

Execute a command that has dynamic arguments

Parameters
  • cmd (list of lists) -- the command to execute, broken apart from a command string so that arguments from the provider can be substituted into it.

  • provider (str or tuple) -- argument string or a tuple of argument strings.

  • context (ExecutionContext) -- the context under which the datasource should run.

  • split (bool) -- whether the output of the command should be split into a list of lines

  • keep_rc (bool) -- whether to return the error code returned by the process executing the command. If False, any return code other than zero will raise a CalledProcessError. If True, the return code and output are always returned.

  • timeout (int) -- Number of seconds to wait for the command to complete. If the timeout is reached before the command returns, a CalledProcessError is raised. If None, timeout is infinite.

  • inherit_env (list) -- The list of environment variables to inherit from the calling process when the command is invoked.

Returns

A datasource that returns the output of a command that takes

specified arguments passed by the provider.

Return type

function

insights.core.spec_factory.deserialize_command_output(_type, data, root)[source]
insights.core.spec_factory.deserialize_datasource_provider(_type, data, root)[source]
insights.core.spec_factory.deserialize_raw_file_provider(_type, data, root)[source]
insights.core.spec_factory.deserialize_text_provider(_type, data, root)[source]
insights.core.spec_factory.enc(s)[source]
insights.core.spec_factory.escape(s)[source]
class insights.core.spec_factory.find(spec, pattern)[source]

Bases: object

Helper class for extracting specific lines from a datasource for direct consumption by a rule.

service_starts = find(Specs.audit_log, "SERVICE_START")

@rule(service_starts)
def report(starts):
    return make_info("SERVICE_STARTS", num_starts=len(starts))
Parameters
  • spec (datasource) -- some datasource, ideally filterable.

  • pattern (string / list) -- a string or list of strings to match (no patterns supported)

Returns

A dict where each key is a command, path, or spec name, and each value is a non-empty list of matching lines. Only paths with matching lines are included.

Raises

dr.SkipComponent -- if no paths have matching lines.

class insights.core.spec_factory.first_file(paths, context=None, deps=[], kind=<class 'insights.core.spec_factory.TextFileProvider'>, **kwargs)[source]

Bases: object

Creates a datasource that returns the first existing and readable file from paths.

Parameters
  • paths (list) -- the list of paths to find and read

  • context (ExecutionContext) -- the context under which the datasource should run.

  • kind (FileProvider) -- One of TextFileProvider or RawFileProvider.

Returns

A datasource that returns the first file in paths that exists and is readable

Return type

function

class insights.core.spec_factory.first_of(deps)[source]

Bases: object

Given a list of dependencies, returns the first of the list that exists in the broker. At least one must be present, or this component won’t fire.

class insights.core.spec_factory.foreach_collect(provider, path, ignore=None, context=<class 'insights.core.context.HostContext'>, deps=[], kind=<class 'insights.core.spec_factory.TextFileProvider'>, **kwargs)[source]

Bases: object

Substitutes each element in provider into path and collects the files at the resulting paths.

Parameters
  • provider (list) -- a list of elements or tuples.

  • path (str) -- a path template with substitution parameters.

  • context (ExecutionContext) -- the context under which the datasource should run.

  • kind (FileProvider) -- one of TextFileProvider or RawFileProvider

Returns

A datasource that returns a list of file contents created by

substituting each element of provider into the path template.

Return type

function

class insights.core.spec_factory.foreach_execute(provider, cmd, context=<class 'insights.core.context.HostContext'>, deps=[], split=True, keep_rc=False, timeout=None, inherit_env=[], **kwargs)[source]

Bases: object

Execute a command for each element in provider. Provider is the output of a different datasource that returns a list of single elements or a list of tuples. The command should have %s substitution parameters equal to the number of elements in each tuple of the provider.

Parameters
  • provider (list) -- a list of elements or tuples.

  • cmd (list of lists) -- a command with substitution parameters. A command string that might contain multiple commands separated by a pipe is broken apart and made ready for subprocess operations, i.e. a command with filters applied.

  • context (ExecutionContext) -- the context under which the datasource should run.

  • split (bool) -- whether the output of the command should be split into a list of lines

  • keep_rc (bool) -- whether to return the error code returned by the process executing the command. If False, any return code other than zero will raise a CalledProcessError. If True, the return code and output are always returned.

  • timeout (int) -- Number of seconds to wait for the command to complete. If the timeout is reached before the command returns, a CalledProcessError is raised. If None, timeout is infinite.

  • inherit_env (list) -- The list of environment variables to inherit from the calling process when the command is invoked.

Returns

A datasource that returns a list of outputs for each command created by substituting each element of provider into the cmd template.

Return type

function
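The tuple case can be sketched the same way: each tuple fills as many %s slots as it has elements (hypothetical example values, not the library's internals):

```python
# Two-element tuples fill a command template with two %s slots.
provider = [("4", "eth0"), ("6", "lo")]     # hypothetical tuples from another datasource
cmd_template = "ip -%s addr show %s"        # hypothetical command template

commands = [cmd_template % t for t in provider]
# commands == ['ip -4 addr show eth0', 'ip -6 addr show lo']
```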

class insights.core.spec_factory.glob_file(patterns, ignore=None, context=None, deps=[], kind=<class 'insights.core.spec_factory.TextFileProvider'>, max_files=1000, **kwargs)[source]

Bases: object

Creates a datasource that reads all files matching the glob pattern(s).

Parameters
  • patterns (str or [str]) -- glob pattern(s) of paths to read.

  • ignore (regex) -- a regular expression that is used to filter the paths matched by pattern(s).

  • context (ExecutionContext) -- the context under which the datasource should run.

  • kind (FileProvider) -- One of TextFileProvider or RawFileProvider.

  • max_files (int) -- Maximum number of glob files to process.

Returns

A datasource that reads all files matching the glob patterns.

Return type

function
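A plain-Python sketch of the matching behavior (simplified; the real class returns FileProvider instances and skips, rather than raising, when too many files match):

```python
import glob
import os
import re
import tempfile

def collect_paths(patterns, ignore=None, max_files=1000):
    """Expand glob pattern(s), drop paths matching the ignore regex,
    and guard against matching too many files."""
    if isinstance(patterns, str):
        patterns = [patterns]
    results = []
    for pattern in patterns:
        results.extend(glob.glob(pattern))
    if ignore:
        rx = re.compile(ignore)
        results = [p for p in results if not rx.search(p)]
    if len(results) > max_files:
        raise RuntimeError("too many files matched")
    return sorted(set(results))

d = tempfile.mkdtemp()
for name in ("a.conf", "b.conf", "b.conf.bak"):
    open(os.path.join(d, name), "w").close()
paths = collect_paths(os.path.join(d, "*.conf*"), ignore=r"\.bak$")
# keeps a.conf and b.conf, filters out b.conf.bak
```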

class insights.core.spec_factory.head(dep, **kwargs)[source]

Bases: object

Return the first element of any datasource that produces a list.

class insights.core.spec_factory.listdir(path, context=None, ignore=None, deps=[])[source]

Bases: object

Execute a simple directory listing of all the files and directories in path.

Parameters
  • path (str) -- directory or glob pattern to list.

  • context (ExecutionContext) -- the context under which the datasource should run.

  • ignore (str) -- regular expression defining paths to ignore.

Returns

A datasource that returns the list of files and directories in the directory specified by path.

Return type

function
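The listing logic is essentially os.listdir plus an ignore filter (a sketch, not the actual implementation):

```python
import os
import re
import tempfile

def list_dir(path, ignore=None):
    """Sorted names in a directory, minus anything matching the ignore regex."""
    entries = sorted(os.listdir(path))
    if ignore:
        rx = re.compile(ignore)
        entries = [e for e in entries if not rx.search(e)]
    return entries

d = tempfile.mkdtemp()
for name in ("one", "two", "skip_me"):
    open(os.path.join(d, name), "w").close()
entries = list_dir(d, ignore="^skip")
# entries == ['one', 'two']
```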

insights.core.spec_factory.mangle_command(command, name_max=255)[source]

Mangle a command line string into something suitable for use as the basename of a filename. At minimum this function must remove slashes, but it also does other things to clean the basename: removing directory names from the command name and replacing many non-filename characters with underscores, in addition to replacing slashes with dots.

By default, curly braces, ‘{‘ and ‘}’, are replaced with underscore, set ‘has_variables’ to leave curly braces alone.

This function was copied from the function that insights-client uses to create the name under which it captures the output of the command.

Here, server side, it is used to figure out which file in the archive contains the output of a command. Server side, the command may contain references to variables (names within curly braces) that will be expanded before the name is actually used as a file name.

To completely mimic the insights-client behavior, curly braces need to be replaced with underscores. If the command has variable references, the curly braces must be left alone. Set has_variables to leave curly braces alone.

This implementation of ‘has_variables’ assumes that variable names only contain characters that are not replaced by mangle_command.
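The general idea can be illustrated with a simplified sketch; the replacement rules below are illustrative, not the library's exact rules, so the shown output is specific to this sketch:

```python
import re

def mangle(command, name_max=255):
    """Illustrative mangling: strip the directory from the executable,
    turn slashes into dots, and collapse other non-filename characters
    into underscores. Not the library's exact rules."""
    parts = command.split(None, 1)
    exe = parts[0].rsplit("/", 1)[-1]            # drop directory names
    mangled = exe + ("_" + parts[1] if len(parts) > 1 else "")
    mangled = mangled.replace("/", ".")          # slashes become dots
    mangled = re.sub(r"[^\w.]", "_", mangled)    # other characters become underscores
    return mangled[:name_max]

name = mangle("/sbin/ip -4 addr show eth0")
# name == 'ip__4_addr_show_eth0' for this sketch; safe to use as a basename
```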

insights.core.spec_factory.serialize_command_output(obj, root)[source]
insights.core.spec_factory.serialize_datasource_provider(obj, root)[source]
insights.core.spec_factory.serialize_raw_file_provider(obj, root)[source]
insights.core.spec_factory.serialize_text_file_provider(obj, root)[source]
class insights.core.spec_factory.simple_command(cmd, context=<class 'insights.core.context.HostContext'>, deps=[], split=True, keep_rc=False, timeout=None, inherit_env=[], **kwargs)[source]

Bases: object

Execute a simple command that has no dynamic arguments

Parameters
  • cmd (list of lists) -- the command(s) to execute. A command string that might contain multiple commands separated by a pipe is broken apart and made ready for subprocess operations, i.e. a command with filters applied.

  • context (ExecutionContext) -- the context under which the datasource should run.

  • split (bool) -- whether the output of the command should be split into a list of lines

  • keep_rc (bool) -- whether to return the error code returned by the process executing the command. If False, any return code other than zero will raise a CalledProcessError. If True, the return code and output are always returned.

  • timeout (int) -- Number of seconds to wait for the command to complete. If the timeout is reached before the command returns, a CalledProcessError is raised. If None, timeout is infinite.

  • inherit_env (list) -- The list of environment variables to inherit from the calling process when the command is invoked.

Returns

A datasource that returns the output of a command that takes no arguments.

Return type

function
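The execution semantics (split, keep_rc, timeout) can be sketched with the standard subprocess module (a simplified stand-in, not the Engine's executor):

```python
import subprocess

def run_simple(cmd, split=True, keep_rc=False, timeout=None):
    """Run a fixed command; optionally split output into lines and
    either raise on non-zero exit or return the return code alongside."""
    proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    if not keep_rc and proc.returncode != 0:
        raise subprocess.CalledProcessError(proc.returncode, cmd)
    out = proc.stdout.splitlines() if split else proc.stdout
    return (proc.returncode, out) if keep_rc else out

lines = run_simple(["echo", "hello"])
# lines == ['hello']
rc, lines2 = run_simple(["echo", "hello"], keep_rc=True)
```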

class insights.core.spec_factory.simple_file(path, context=None, deps=[], kind=<class 'insights.core.spec_factory.TextFileProvider'>, **kwargs)[source]

Bases: object

Creates a datasource that reads the file at path when evaluated.

Parameters
  • path (str) -- path to the file to read

  • context (ExecutionContext) -- the context under which the datasource should run.

  • kind (FileProvider) -- One of TextFileProvider or RawFileProvider.

Returns

A datasource that reads the file at path.

Return type

function

insights.core.taglang

Simple language for defining predicates against a list or set of strings.

Operator Precedence:
  • ! high - opposite truth value of its predicate

  • / high - starts a regex that continues until whitespace unless quoted

  • & medium - “and” of two predicates

  • | low - “or” of two predicates

  • , low - “or” of two predicates. Synonym for |.

It supports grouping with parentheses and quoted strings/regexes surrounded with either single or double quotes.

Examples

>>> pred = parse("a | b & !c")  # means (a or (b and (not c)))
>>> pred(["a"])
True
>>> pred(["b"])
True
>>> pred(["b", "c"])
False
>>> pred = parse("/net | apache")
>>> pred(["networking"])
True
>>> pred(["mynetwork"])
True
>>> pred(["apache"])
True
>>> pred(["security"])
False
>>> pred = parse("(a | b) & c")
>>> pred(["a", "c"])
True
>>> pred(["b", "c"])
True
>>> pred(["a"])
False
>>> pred(["b"])
False
>>> pred(["c"])
False

Regular expressions start with a forward slash / and continue until whitespace unless they are quoted with either single or double quotes. This means that they can consume what would normally be considered an operator or a closing parenthesis if you aren’t careful.

For example, this is a parse error because the regex consumes the comma:
>>> pred = parse("/net, apache")
Exception
Instead, do this:
>>> pred = parse("/net , apache")
or this:
>>> pred = parse("/net | apache")
or this:
>>> pred = parse("'/net', apache")
class insights.core.taglang.And(left, right)[source]

Bases: insights.core.taglang.Predicate

The values must satisfy both the left and the right condition.

test(value)[source]
class insights.core.taglang.Eq(value)[source]

Bases: insights.core.taglang.Predicate

The value must be in the set of values.

test(values)[source]
class insights.core.taglang.Not(pred)[source]

Bases: insights.core.taglang.Predicate

The values must not satisfy the wrapped condition.

test(value)[source]
class insights.core.taglang.Or(left, right)[source]

Bases: insights.core.taglang.Predicate

The values must satisfy either the left or the right condition.

test(value)[source]
class insights.core.taglang.Predicate[source]

Bases: object

Provides __call__ for invoking the Predicate like a function without having to explicitly call its test method.
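The pattern is a two-line __call__ that delegates to test; a minimal standalone sketch (not the taglang classes themselves):

```python
class Predicate(object):
    """Callable wrapper: pred(values) delegates to pred.test(values)."""
    def __call__(self, values):
        return self.test(values)

class Eq(Predicate):
    """True when the wrapped value appears in the tested collection."""
    def __init__(self, value):
        self.value = value

    def test(self, values):
        return self.value in values

pred = Eq("apache")
result = pred(["apache", "networking"])  # invoked like a function, no .test() needed
```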

class insights.core.taglang.Regex(value)[source]

Bases: insights.core.taglang.Predicate

The regex must match at least one of the values.

test(values)[source]
insights.core.taglang.negate(args)[source]
insights.core.taglang.oper(args)[source]

insights.parsers

exception insights.parsers.ParseException[source]

Bases: Exception

Exception that should be thrown from parsers that encounter exceptions they recognize while parsing. When this exception is thrown, the exception message and data are logged and no parser output data is saved.

exception insights.parsers.SkipException[source]

Bases: insights.core.dr.SkipComponent

Exception that should be thrown from parsers that are explicitly written to look for errors in input data. If the expected error is not found then the parser should throw this exception to signal to the infrastructure that the parser’s output should not be retained.

insights.parsers.calc_offset(lines, target, invert_search=False)[source]

Function to search for a line in a list starting with a target string. If target is None or an empty string then 0 is returned. This allows checking target here instead of having to check for an empty target in the calling function. Each line is stripped of leading spaces prior to comparison with each target; however, target is not stripped. See parse_fixed_table in this module for sample usage.

Parameters
  • lines (list) -- List of strings.

  • target (list) -- List of strings to search for at the beginning of any line in lines.

  • invert_search (boolean) -- If True this flag causes the search to continue until the first line is found not matching anything in target. An empty line is implicitly included in target. Default is False. This would typically be used if trimming trailing lines off of a file by passing reversed(lines) as the lines argument.

Returns

index into the lines indicating the location of target. If target is None or an empty string 0 is returned as the offset. If invert_search is True the index returned will point to the line after the last target was found.

Return type

int

Raises

ValueError -- Exception is raised if target string is specified and it was not found in the input lines.

Examples

>>> lines = [
... '#   ',
... 'Warning line',
... 'Error line',
... '    data 1 line',
... '    data 2 line']
>>> target = ['data']
>>> calc_offset(lines, target)
3
>>> target = ['#', 'Warning', 'Error']
>>> calc_offset(lines, target, invert_search=True)
3
insights.parsers.get_active_lines(lines, comment_char='#')[source]

Returns lines, or parts of lines, from content that are not commented out or completely empty. The resulting lines are all individually stripped.

This is useful for parsing many config files such as ifcfg.

Parameters
  • lines (list) -- List of strings to parse.

  • comment_char (str) -- String indicating that all chars following are part of a comment and will be removed from the output.

Returns

List of valid lines remaining in the input.

Return type

list

Examples

>>> lines = [
... 'First line',
... '   ',
... '# Comment line',
... 'Inline comment # comment',
... '          Whitespace          ',
... 'Last line']
>>> get_active_lines(lines)
['First line', 'Inline comment', 'Whitespace', 'Last line']

insights.parsers.keyword_search(rows, **kwargs)[source]

Takes a list of dictionaries and finds all the dictionaries where the keys and values match those found in the keyword arguments.

Keys in the row data have ‘ ‘ and ‘-‘ replaced with ‘_’, so they can match the keyword argument parsing. For example, the keyword argument ‘fix_up_path’ will match a key named ‘fix-up path’.

In addition, several suffixes can be added to the key name to do partial matching of values:

  • ‘__contains’ will test whether the data value contains the given value.

  • ‘__startswith’ tests if the data value starts with the given value

  • ‘__lower_value’ compares the lower-case version of the data and given values.

Parameters
  • rows (list) -- A list of dictionaries representing the data to be searched.

  • **kwargs (dict) -- keyword-value pairs corresponding to the fields that need to be found and their required values in the data rows.

Returns

The list of rows that match the search keywords. If no keyword arguments are given, no rows are returned.

Return type

(list)

Examples

>>> rows = [
...     {'domain': 'oracle', 'type': 'soft', 'item': 'nofile', 'value': 1024},
...     {'domain': 'oracle', 'type': 'hard', 'item': 'nofile', 'value': 65536},
...     {'domain': 'oracle', 'type': 'soft', 'item': 'stack', 'value': 10240},
...     {'domain': 'oracle', 'type': 'hard', 'item': 'stack', 'value': 3276},
...     {'domain': 'root', 'type': 'soft', 'item': 'nproc', 'value': -1}]
...
>>> keyword_search(rows, domain='root')
[{'domain': 'root', 'type': 'soft', 'item': 'nproc', 'value': -1}]
>>> keyword_search(rows, item__contains='c')
[{'domain': 'oracle', 'type': 'soft', 'item': 'stack', 'value': 10240},
 {'domain': 'oracle', 'type': 'hard', 'item': 'stack', 'value': 3276},
 {'domain': 'root', 'type': 'soft', 'item': 'nproc', 'value': -1}]
>>> keyword_search(rows, domain__startswith='r')
[{'domain': 'root', 'type': 'soft', 'item': 'nproc', 'value': -1}]
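The suffix handling can be sketched in a few lines (a simplified stand-in for illustration; it covers only __contains and __startswith and skips the key-name normalization):

```python
def simple_keyword_search(rows, **kwargs):
    """Rows match when every keyword matches; '__contains' and
    '__startswith' suffixes trigger partial matching."""
    matches = []
    for row in rows:
        ok = bool(kwargs)  # no keyword arguments means no rows returned
        for key, want in kwargs.items():
            if key.endswith("__contains"):
                ok = ok and want in str(row.get(key[:-len("__contains")], ""))
            elif key.endswith("__startswith"):
                ok = ok and str(row.get(key[:-len("__startswith")], "")).startswith(want)
            else:
                ok = ok and row.get(key) == want
        if ok:
            matches.append(row)
    return matches

rows = [{"domain": "oracle", "item": "stack"}, {"domain": "root", "item": "nproc"}]
hits = simple_keyword_search(rows, domain__startswith="r")
# hits == [{'domain': 'root', 'item': 'nproc'}]
```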
insights.parsers.optlist_to_dict(optlist, opt_sep=', ', kv_sep='=', strip_quotes=False)[source]

Parse an option list into a dictionary.

Takes a list of options separated by opt_sep and places them into a dictionary with the default value of True. If kv_sep option is specified then key/value options key=value are parsed. Useful for parsing options such as mount options in the format rw,ro,rsize=32168,xyz.

Parameters
  • optlist (str) -- String of options to parse.

  • opt_sep (str) -- Separator used to split options.

  • kv_sep (str) -- If not None then optlist includes key=value pairs to be split, and this str is used to split them.

  • strip_quotes (bool) -- If set, will remove matching double quote characters from the start and end of the value. No quotes are removed from inside the string and mismatched quotes are not removed.

Returns

Returns a dictionary of names present in the list. If kv_sep is not None then the values will be the str on the right-hand side of kv_sep. If kv_sep is None then each key will have a default value of True.

Return type

dict

Examples

>>> optlist = 'rw,ro,rsize=32168,xyz'
>>> optlist_to_dict(optlist)
{'rw': True, 'ro': True, 'rsize': '32168', 'xyz': True}
insights.parsers.parse_delimited_table(table_lines, delim=None, max_splits=-1, strip=True, header_delim='same as delimiter', heading_ignore=None, header_substitute=None, trailing_ignore=None, raw_line_key=None)[source]

Parses table-like text. Uses the first (non-ignored) row as the list of column names, which cannot contain the delimiter. Fields cannot contain the delimiter but can be blank if a printable delimiter is used.

Parameters
  • table_lines (list) -- List of strings with the first line containing column headings separated by spaces, and the remaining lines containing table data.

  • delim (str) -- String used in the content to separate fields. If left as None (the default), white space is used as the field separator.

  • max_splits (int) -- Maximum number of fields to create by splitting the line. After this number of fields has been found, the rest of the line is left un-split and may contain the delimiter. Lines may contain less than this number of fields.

  • strip (bool) -- If set to True, fields and headings will be stripped of leading and trailing space. If set to False, fields and headings will be left as is. The delimiter is always removed, so strip need not be set if delim is set to None (but will not change output in that case).

  • header_delim (str) -- When set, uses a different delimiter to the content for splitting the header into keywords. Set to None, this will split on white space. When left at the special value of ‘same as delimiter’, the content delimiter will be used to split the header line as well.

  • heading_ignore (list) -- Optional list of strings to search for at beginning of line. All lines before this line will be ignored. If specified then it must be present in the file or ValueError will be raised.

  • header_substitute (list) -- Optional list of tuples containing (old_string_value, new_string_value) to be used to modify header values. If whitespace is present in a column it must be replaced with non-whitespace characters in order for the table to be parsed correctly.

  • trailing_ignore (list) -- Optional list of strings to look for at the end rows of the content. Lines starting with these strings will be ignored, thereby truncating the rows of data.

  • raw_line_key (str) -- Key under which to save the raw line. If None, line is not saved.

Returns

Returns a list of dictionaries for each row of column data, keyed on the column headings in the same case as input.

Return type

list
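At its core the function does header-driven splitting; a minimal whitespace-delimited sketch (ignoring the heading_ignore, header_substitute, and trailing_ignore options):

```python
def parse_table(lines, delim=None):
    """First line supplies column names; each remaining non-blank line
    becomes a dict keyed on those names. delim=None splits on whitespace."""
    header = lines[0].split(delim)
    return [dict(zip(header, line.split(delim))) for line in lines[1:] if line.strip()]

lines = [
    "NAME   STATE    KERNEL",
    "web01  running  5.4.0",
    "db01   stopped  5.4.0",
]
rows = parse_table(lines)
# rows[0] == {'NAME': 'web01', 'STATE': 'running', 'KERNEL': '5.4.0'}
```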

insights.parsers.parse_fixed_table(table_lines, heading_ignore=[], header_substitute=[], trailing_ignore=[], empty_exception=False)[source]

Function to parse table data containing column headings in the first row and data in fixed positions in each remaining row of table data. Table columns must not contain spaces within the column name. Column headings are assumed to be left justified and the column data width is the width of the heading label plus all whitespace to the right of the label. This function will handle blank columns.

Parameters
  • table_lines (list) -- List of strings with the first line containing column headings separated by spaces, and the remaining lines containing table data in left justified format.

  • heading_ignore (list) -- Optional list of strings to search for at beginning of line. All lines before this line will be ignored. If specified then it must be present in the file or ValueError will be raised.

  • header_substitute (list) -- Optional list of tuples containing (old_string_value, new_string_value) to be used to modify header values. If whitespace is present in a column it must be replaced with non-whitespace characters in order for the table to be parsed correctly.

  • trailing_ignore (list) -- Optional list of strings to look for at the end rows of the content. Lines starting with these strings will be ignored, thereby truncating the rows of data.

  • empty_exception (bool) -- If True, raise a ParseException when a value is empty. False by default.

Returns

Returns a list of dicts for each row of column data. Dict keys are the column headings in the same case as input.

Return type

list

Raises
  • ValueError -- Raised if heading_ignore is specified and not found in table_lines.

  • ParseException -- Raised if there are empty values when empty_exception is True

Sample input:

Column1    Column2    Column3
data1      data 2     data   3
data4      data5      data6

Examples

>>> table_data = parse_fixed_table(table_lines)
>>> table_data
[{'Column1': 'data1', 'Column2': 'data 2', 'Column3': 'data   3'},
 {'Column1': 'data4', 'Column2': 'data5', 'Column3': 'data6'}]
insights.parsers.split_kv_pairs(lines, comment_char='#', filter_string=None, split_on='=', use_partition=False, ordered=False)[source]

Split lines of a list into key/value pairs

Use this function to filter and split all lines of a list of strings into a dictionary. Named arguments may be used to control how the line is split, how lines are filtered and the type of output returned. See parameters for more information. When splitting key/value, the first occurrence of the split character is used; other occurrences of the split char in the line will be ignored. get_active_lines() is called to strip comments and blank lines from the data.

Parameters
  • lines (list of str) -- List of the strings to be split.

  • comment_char (str) -- Char that when present in the line indicates all following chars are part of a comment. If this is present, all comments and all blank lines are removed from list before further processing. The default comment char is the # character.

  • filter_string (str) -- If the filter string is present, then only lines containing the filter will be processed, other lines will be ignored.

  • split_on (str) -- Character to use when splitting a line. Only the first occurrence of the char is used when splitting, so only one split is performed at the first occurrence of split_on. The default string is =.

  • use_partition (bool) -- If this parameter is True then the python partition function will be used to split the line. If False then the python split function will be used. The difference is that when False, if the split character is not present in the line then the line is ignored, and when True the line will be parsed regardless. Set use_partition to True if you have valid lines that do not contain the split_on character. Set use_partition to False if you want to ignore lines that do not contain the split_on character. The default value is False.

  • ordered (bool) -- If this parameter is True then the resulting dictionary will be in the same order as in the original file, a python OrderedDict type is used. If this parameter is False then the resulting dictionary is in no particular order, a base python dict type is used. The default is False.

Returns

Return value is a dictionary of the key/value pairs. If parameter ordered is True then an OrderedDict is returned, otherwise a dict is returned.

Return type

dict

Examples

>>> from .. import split_kv_pairs
>>> for line in lines:
...     print(line)
# Comment line
# Blank lines will also be removed
keyword1 = value1   # Inline comments
keyword2 = value2a=True, value2b=100M
keyword3     # Key with no separator
>>> split_kv_pairs(lines)
{'keyword2': 'value2a=True, value2b=100M', 'keyword1': 'value1'}
>>> split_kv_pairs(lines, comment_char='#')
{'keyword2': 'value2a=True, value2b=100M', 'keyword1': 'value1'}
>>> split_kv_pairs(lines, filter_string='keyword2')
{'keyword2': 'value2a=True, value2b=100M'}
>>> split_kv_pairs(lines, use_partition=True)
{'keyword3': '', 'keyword2': 'value2a=True, value2b=100M', 'keyword1': 'value1'}
>>> split_kv_pairs(lines, use_partition=True, ordered=True)
OrderedDict([('keyword1', 'value1'), ('keyword2', 'value2a=True, value2b=100M'), ('keyword3', '')])
insights.parsers.unsplit_lines(lines, cont_char='\\', keep_cont_char=False)[source]

Recombine lines having a continuation character at end.

Generator that recombines lines in the list that have the char cont_char at the end of a line. If cont_char is found in a line then the next line will be appended to the current line; this continues across multiple continuation lines until a line is found with no continuation character at the end. All lines found will be combined and returned.

If the keep_cont_char option is set to True, the continuation character will be left on the end of the line. Otherwise, by default, it is removed.

Parameters
  • lines (list) -- List of strings to be evaluated.

  • cont_char (char) -- Char to search for at end of line. Default is \.

  • keep_cont_char (bool) -- Whether to keep the continuation on the end of the line. Defaults to False, which causes the continuation character to be removed.

Yields

line (str) -- Yields unsplit lines

Examples

>>> lines = ['Line one \', '     line one part 2', 'Line two']
>>> list(unsplit_lines(lines))
['Line one      line one part 2', 'Line two']
>>> list(unsplit_lines(lines, cont_char='2'))
['Line one \', '     line one part Line two']
>>> list(unsplit_lines(lines, keep_cont_char=True))
['Line one \     line one part 2', 'Line two']
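The recombination loop can be sketched as a small generator (simplified; the keep_cont_char option is omitted):

```python
def unsplit(lines, cont_char="\\"):
    """Join each line ending in cont_char with the lines that follow it."""
    buf = ""
    for line in lines:
        if line.endswith(cont_char):
            buf += line[:-len(cont_char)]  # drop the continuation character
        else:
            yield buf + line
            buf = ""
    if buf:  # trailing continuation with no following line
        yield buf

joined = list(unsplit(["Line one \\", "    part two", "Line three"]))
# joined == ['Line one     part two', 'Line three']
```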

insights.parsr

parsr is a library for building parsers based on parsing expression grammars or PEGs.

You build a parser by making subparsers to match simple building blocks like numbers, strings, symbols, etc. and then composing them to reflect the higher level structure of your language.

Some means of combination are like those of regular expressions: sequences, alternatives, repetition, optional matching, etc. However, matching is always greedy. parsr also allows recursive definitions and the ability to transform the match of any subparser with a function. The parser can recognize and interpret its input at the same time.

Here’s an example that evaluates arithmetic expressions.

from insights.parsr import Char, EOF, Forward, InSet, Many, Number, WS

def op(args):
    ans, rest = args
    for op, arg in rest:
        if op == "+":
            ans += arg
        elif op == "-":
            ans -= arg
        elif op == "*":
            ans *= arg
        else:
            ans /= arg
    return ans


LP = Char("(")
RP = Char(")")

expr = Forward()  # Forward declarations allow recursive structure
factor = WS >> (Number | (LP >> expr << RP)) << WS
term = (factor + Many(InSet("*/") + factor)).map(op)

# Notice the funny assignment of Forward definitions.
expr <= (term + Many(InSet("+-") + term)).map(op)

evaluate = expr << EOF
class insights.parsr.Char(char)[source]

Bases: insights.parsr.Parser

Char matches a single character.

a = Char("a")     # parses a single "a"
val = a("a")      # produces an "a" from the data.
val = a("b")      # raises an exception
process(pos, data, ctx)[source]
class insights.parsr.Choice(children)[source]

Bases: insights.parsr.Parser

A Choice requires at least one of its children to succeed, and it returns the value of the one that matched. Alternatives in a choice are tried left to right, so they have a definite priority. This a feature of PEGs over context free grammars.

Additional uses of | on the parser will cause it to accumulate parsers onto itself instead of creating new Choices. This has the desirable effect of increasing efficiency, but it can also have unintended consequences if a choice is used in multiple parts of a grammar as the initial element of another choice. Use a Wrapper to prevent that from happening.

abc = a | b | c   # alternation or choice.
val = abc("a")    # parses a single "a"
val = abc("b")    # parses a single "b"
val = abc("c")    # parses a single "c"
val = abc("d")    # raises an exception
process(pos, data, ctx)[source]
class insights.parsr.Context(lines, src=None)[source]

Bases: object

An instance of Context is threaded through the process call to every parser. It stores an indentation stack to track hanging indents, a tag stack for grammars like xml or apache configuration, the active parser stack for error reporting, and accumulated errors for the farthest position reached.

col(pos)[source]
line(pos)[source]
set(pos, msg)[source]

Every parser that encounters an error calls set with the current position and a message. If the error is at the farthest position reached by any other parser, the active parser stack and message are accumulated onto a list of errors for that position. If the position is beyond any previous errors, the error list is cleared before the active stack and new error are recorded. This is the “farthest failure heuristic.”

class insights.parsr.EnclosedComment(s, e)[source]

Bases: insights.parsr.Parser

EnclosedComment matches a start literal, an end literal, and all characters between. It returns the content between the start and end.

Comment = EnclosedComment("/*", "*/")
process(pos, data, ctx)[source]
class insights.parsr.EndTagName(parser, ignore_case=False)[source]

Bases: insights.parsr.Wrapper

Wraps a parser that represents an end tag for grammars like xml, html, etc. The result is captured and compared to the last tag on the tag stack in the Context object. The tags must match for the parse to be successful.

process(pos, data, ctx)[source]
class insights.parsr.FollowedBy(child, follow)[source]

Bases: insights.parsr.Parser

FollowedBy takes a parser and a predicate parser. The initial parser matches only if the predicate matches the input after it. On success, input for the predicate is left unread, and the result of the first parser is returned.

ab = Char("a") & Char("b") # matches an "a" followed by a "b", but
                           # the "b" isn't consumed from the input.
val = ab("ab")             # returns "a" and leaves "b" to be
                           # consumed.
val = ab("ac")             # raises an exception and doesn't
                           # consume "a".
process(pos, data, ctx)[source]
class insights.parsr.Forward[source]

Bases: insights.parsr.Parser

Forward allows recursive grammars where a nonterminal’s definition includes itself directly or indirectly. You initially create a Forward nonterminal with regular assignment.

expr = Forward()

You later give it its real definition with the <= operator.

expr <= (term + Many(LowOps + term)).map(op)
process(pos, data, ctx)[source]
class insights.parsr.HangingString(chars, echars=None, min_length=1)[source]

Bases: insights.parsr.Parser

HangingString matches lines with indented continuations like in ini files.

Key = WS >> PosMarker(String(key_chars)) << WS
Sep = InSet(sep_chars, "Sep")
Value = WS >> (Boolean | HangingString(value_chars))
KVPair = WithIndent(Key + Opt(Sep >> Value))
process(pos, data, ctx)[source]
class insights.parsr.InSet(s, name=None)[source]

Bases: insights.parsr.Parser

InSet matches any single character from a set.

vowel = InSet("aeiou")  # or InSet(set("aeiou"))
val = vowel("a")  # okay
val = vowel("e")  # okay
val = vowel("i")  # okay
val = vowel("o")  # okay
val = vowel("u")  # okay
val = vowel("y")  # raises an exception
process(pos, data, ctx)[source]
class insights.parsr.KeepLeft(left, right)[source]

Bases: insights.parsr.Parser

KeepLeft takes two parsers. It requires them both to succeed but only returns results for the first one. It consumes input for both.

a = Char("a")
q = Char('"')

aq = a << q      # like a + q except only the result of a is
                 # returned
val = aq('a"')   # returns "a". Keeps the thing on the left of the
                 # <<
process(pos, data, ctx)[source]
class insights.parsr.KeepRight(left, right)[source]

Bases: insights.parsr.Parser

KeepRight takes two parsers. It requires them both to succeed but only returns results for the second one. It consumes input for both.

q = Char('"')
a = Char("a")

qa = q >> a      # like q + a except only the result of a is
                 # returned
val = qa('"a')   # returns "a". Keeps the thing on the right of the
                 # >>
process(pos, data, ctx)[source]
class insights.parsr.Lift(func)[source]

Bases: insights.parsr.Parser

Lift wraps a function of multiple arguments. Use it with the multiplication operator on as many parsers as function arguments, and the results of those parsers will be passed to the function. The result of a Lift parser is the result of the wrapped function.

Example:

    def comb(a, b, c):
        return "".join([a, b, c])

    # You'd normally invoke comb like comb("x", "y", "z"), but you can
    # "lift" it for use with parsers like this:

    x = Char("x")
    y = Char("y")
    z = Char("z")
    p = Lift(comb) * x * y * z

    # The * operator separates parsers whose results will go into the
    # arguments of the lifted function. I've used Char above, but x, y,
    # and z can be arbitrarily complex.

    val = p("xyz")  # would return "xyz"
    val = p("xyx")  # raises an exception. nothing would be consumed
process(pos, data, ctx)[source]
class insights.parsr.Literal(chars, value=<object object>, ignore_case=False)[source]

Bases: insights.parsr.Parser

Match a literal string. The value keyword lets you return a python value instead of the matched input. The ignore_case keyword makes the match case insensitive.

lit = Literal("true")
val = lit("true")  # returns "true"
val = lit("True")  # raises an exception
val = lit("one")   # raises an exception

lit = Literal("true", ignore_case=True)
val = lit("true")  # returns "true"
val = lit("TRUE")  # returns "TRUE"
val = lit("one")   # raises an exception

t = Literal("true", value=True)
f = Literal("false", value=False)
val = t("true")  # returns the boolean True
val = t("True")  # raises an exception

val = f("false") # returns the boolean False
val = f("False") # raises and exception

t = Literal("true", value=True, ignore_case=True)
f = Literal("false", value=False, ignore_case=True)
val = t("true")  # returns the boolean True
val = t("True")  # returns the boolean True

val = f("false") # returns the boolean False
val = f("False") # returns the boolean False
process(pos, data, ctx)[source]
class insights.parsr.Many(parser, lower=0)[source]

Bases: insights.parsr.Parser

Many wraps another parser and requires it to match a certain number of times.

When Many matches zero occurrences (lower=0), it always succeeds. Keep this in mind when using it in a list of alternatives or with FollowedBy or NotFollowedBy.

The results are returned as a list.

x = Char("x")
xs = Many(x)      # parses many (or no) x's in a row
val = xs("")      # returns []
val = xs("a")     # returns []
val = xs("x")     # returns ["x"]
val = xs("xxxxx") # returns ["x", "x", "x", "x", "x"]
val = xs("xxxxb") # returns ["x", "x", "x", "x"]

a = Char("a")
b = Char("b")
ab = Many(a + b)  # parses "abab..."
val = ab("")      # produces []
val = ab("ab")    # produces [["a", "b"]]
val = ab("ba")    # produces []
val = ab("ababab")# produces [["a", "b"], ["a", "b"], ["a", "b"]]

ab = Many(a | b)  # parses any combination of "a" and "b" like
                  # "aababbaba..."
val = ab("aababb")# produces ["a", "a", "b", "a", "b", "b"]
bs = Many(Char("b"), lower=1) # requires at least one "b"
process(pos, data, ctx)[source]
class insights.parsr.Map(child, func)[source]

Bases: insights.parsr.Parser

Map wraps a parser and a function. It returns the result of using the function to transform the wrapped parser’s result.

Example:

.. code-block:: python

    Digit = InSet("0123456789")
    Digits = Many(Digit, lower=1)
    Number = Digits.map(lambda x: int("".join(x)))
process(pos, data, ctx)[source]
class insights.parsr.Mark(lineno, col, value)[source]

Bases: object

An object created by PosMarker to capture a value at a position in the input. Marks can give more context to a value transformed by mapped functions.

class insights.parsr.Node[source]

Bases: object

Node is the base class of all parsers. It’s a generic tree structure with each instance containing a list of its children. Its main purpose is to simplify pretty printing.

add_child(child)[source]
set_children(children)[source]
class insights.parsr.NotFollowedBy(child, follow)[source]

Bases: insights.parsr.Parser

NotFollowedBy takes a parser and a predicate parser. The initial parser matches only if the predicate parser fails to match the input after it. On success, input for the predicate is left unread, and the result of the first parser is returned.

anb = Char("a") / Char("b") # matches an "a" not followed by a "b".
val = anb("ac")             # returns "a" and leaves "c" to be
                            # consumed
val = anb("ab")             # raises an exception and doesn't
                            # consume "a".
process(pos, data, ctx)[source]
class insights.parsr.OneLineComment(s)[source]

Bases: insights.parsr.Parser

OneLineComment matches everything from a literal to the end of a line, excluding the end of line characters themselves. It returns the content between the start literal and the end of the line.

Comment = OneLineComment("#") | OneLineComment("//")
process(pos, data, ctx)[source]
class insights.parsr.Opt(p, default=None)[source]

Bases: insights.parsr.Parser

Opt wraps a single parser and returns its value if it succeeds. It returns a default value otherwise. The input pointer is advanced only if the wrapped parser succeeds.

a = Char("a")
o = Opt(a)      # matches an "a" if it's available. Still succeeds
                # otherwise but doesn't advance the read pointer.
val = o("a")    # returns "a"
val = o("b")    # returns None. Read pointer is not advanced.

o = Opt(a, default="x") # matches an "a" if it's available. Returns
                        # "x" otherwise.
val = o("a")    # returns "a"
val = o("b")    # returns "x". Read pointer is not advanced.
process(pos, data, ctx)[source]
class insights.parsr.Parser[source]

Bases: insights.parsr.Node

Parser is the common base class of all Parsers.

debug(d=True)[source]

Set to True to enable diagnostic messages before and after the parser is invoked.

map(func)[source]

Return a Map parser that transforms the results of the current parser with the function func.

process(pos, data, ctx)[source]
sep_by(sep)[source]

Return a parser that matches zero or more instances of the current parser separated by instances of the parser sep.
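The sep_by semantics can be sketched with a toy combinator. This is illustrative plain Python, not the insights.parsr implementation: here a parser is modeled as a function that takes a string and returns a (value, remaining input) pair, raising ValueError on failure.

```python
def char(c):
    def parse(s):
        if s and s[0] == c:
            return c, s[1:]
        raise ValueError("expected %r" % c)
    return parse

def sep_by(item, sep):
    def parse(s):
        results = []
        try:
            first, s = item(s)
        except ValueError:
            return results, s  # zero matches still succeed
        results.append(first)
        while True:
            try:
                _, rest = sep(s)   # separator must be followed by
                nxt, rest = item(rest)  # another item to count
            except ValueError:
                break
            results.append(nxt)
            s = rest
        return results, s
    return parse

xs = sep_by(char("x"), char(","))
print(xs("x,x,x"))  # (['x', 'x', 'x'], '')
print(xs(""))       # ([], '')
```

Note that a trailing separator is left unread rather than consumed, since it is not followed by another item.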

until(pred)[source]

Return an Until parser that matches zero or more instances of the current parser until the pred parser succeeds.

class insights.parsr.PosMarker(parser)[source]

Bases: insights.parsr.Wrapper

Save the line number and column of a subparser by wrapping it in a PosMarker. The value of the parser that handled the input as well as the initial input position will be returned as a Mark.

process(pos, data, ctx)[source]
class insights.parsr.Sequence(children)[source]

Bases: insights.parsr.Parser

A Sequence requires all of its children to succeed. It returns a list of the values they matched.

Additional uses of + on the parser will cause it to accumulate parsers onto itself instead of creating new Sequences. This has the desirable effect of causing sequence results to be represented as flat lists instead of trees, but it can also have unintended consequences if a sequence is used in multiple parts of a grammar as the initial element of another sequence. Use a Wrapper to prevent that from happening.

a = Char("a")     # parses a single "a"
b = Char("b")     # parses a single "b"
c = Char("c")     # parses a single "c"

ab = a + b        # parses a single "a" followed by a single "b"
                  # (a + b) creates a "Sequence" object. Using `ab`
                  # as an element in a later sequence would modify
                  # its original definition.

abc = a + b + c   # parses "abc"
                  # (a + b) creates a "Sequence" object to which c
                  # is appended

val = ab("ab")    # produces a list ["a", "b"]
val = ab("a")     # raises an exception
val = ab("b")     # raises an exception
val = ab("ac")    # raises an exception
val = ab("cb")    # raises an exception

val = abc("abc")  # produces ["a", "b", "c"]
process(pos, data, ctx)[source]
class insights.parsr.StartTagName(parser)[source]

Bases: insights.parsr.Wrapper

Wraps a parser that represents a starting tag for grammars like xml, html, etc. The tag result is captured and put onto a tag stack in the Context object.

process(pos, data, ctx)[source]
class insights.parsr.String(chars, echars=None, min_length=1)[source]

Bases: insights.parsr.Parser

Match one or more characters in a set. Matching is greedy.

vowels = String("aeiou")
val = vowels("a")            # returns "a"
val = vowels("u")            # returns "u"
val = vowels("aaeiouuoui")   # returns "aaeiouuoui"
val = vowels("uoiea")        # returns "uoiea"
val = vowels("oouieaaea")    # returns "oouieaaea"
val = vowels("ga")           # raises an exception
process(pos, data, ctx)[source]
class insights.parsr.Until(parser, predicate)[source]

Bases: insights.parsr.Parser

Until wraps a parser and a terminal parser. It accumulates matches of the first parser until the terminal parser succeeds. Input for the terminal parser is left unread, and the results of the first parser are returned as a list.

Since Until can match zero occurrences, it always succeeds. Keep this in mind when using it in a list of alternatives or with FollowedBy or NotFollowedBy.

cs = AnyChar.until(Char("y")) # parses many (or no) characters
                              # until a "y" is encountered.

val = cs("")                  # returns []
val = cs("a")                 # returns ["a"]
val = cs("x")                 # returns ["x"]
val = cs("ccccc")             # returns ["c", "c", "c", "c", "c"]
val = cs("abcdycc")           # returns ["a", "b", "c", "d"]
process(pos, data, ctx)[source]
class insights.parsr.WithIndent(parser)[source]

Bases: insights.parsr.Wrapper

Consumes whitespace until a non-whitespace character is encountered, pushes the column position onto an indentation stack in the Context, and then calls the parser it’s wrapping. The wrapped parser and any of its children can make use of the saved indentation. Returns the value of the wrapped parser.

WithIndent allows HangingString to work by giving a way to mark how indented following lines must be to count as continuations.

Key = WS >> PosMarker(String(key_chars)) << WS
Sep = InSet(sep_chars, "Sep")
Value = WS >> (Boolean | HangingString(value_chars))
KVPair = WithIndent(Key + Opt(Sep >> Value))
process(pos, data, ctx)[source]
class insights.parsr.Wrapper(parser)[source]

Bases: insights.parsr.Parser

Parser that wraps another parser. This can be used to prevent sequences and choices from accidentally accumulating other parsers when used in multiple parts of a grammar.

process(pos, data, ctx)[source]
insights.parsr.render(tree)[source]

Pretty prints a PEG.

insights.parsr.skip_none(x)[source]

Filters None values from a list. Often used with map.
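Its behavior can be sketched in a line of plain Python (illustrative, not the actual implementation):

```python
def skip_none(x):
    # drop None entries, keep everything else in order
    return [item for item in x if item is not None]

print(skip_none(["a", None, "b", None]))  # ['a', 'b']
```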

insights.parsr.text_format(tree)[source]

Converts a PEG into a pretty printed string.

insights.parsr.query

insights.parsr.query defines a common data model and query language for parsers created with insights.parsr to target.

The model allows duplicate keys, and it allows values with unnamed attributes and recursive substructure. This is a common model for many kinds of configuration.

Simple key/value pairs can be represented as a key with a value that has a single attribute. Most dictionary shapes used to represent configuration are made of keys with simple values (key/single attr), lists of simple values (key/multiple attrs), or nested dictionaries (key/substructure).

Something like XML allows duplicate keys, and it allows values to have named attributes and substructure. This module doesn’t cover that case.

Entry, Directive, Section, and Result have overloaded __getitem__ functions that respond to queries. This allows their instances to be accessed like simple dictionaries, but the key passed to [] is converted to a query of immediate child instances instead of a simple lookup.
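A minimal sketch of this model and its query-style [] access follows. The Entry class here is a hypothetical stand-in with only name, attrs, children, and parent; the real insights.parsr.query classes carry more machinery.

```python
class Entry:
    def __init__(self, name=None, attrs=None, children=None):
        self.name = name
        self.attrs = attrs or []
        self.children = children or []
        for c in self.children:
            c.parent = self

    def __getitem__(self, name):
        # a query over immediate children, not a single-key lookup;
        # duplicate names are allowed, so a list comes back
        return [c for c in self.children if c.name == name]

# simple key/value pairs become keys with a single attribute,
# and duplicate keys are fine
root = Entry(children=[
    Entry("listen", ["80"]),
    Entry("listen", ["443"]),
    Entry("server_name", ["example.com"]),
])
print([e.attrs[0] for e in root["listen"]])  # ['80', '443']
```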

class insights.parsr.query.ChildQuery(expr)[source]

Bases: insights.parsr.query._EntryQuery

Returns True if any child node passes the query.

test(node)[source]
class insights.parsr.query.Directive(name=None, attrs=None, children=None, lineno=None, src=None)[source]

Bases: insights.parsr.query.Entry

A Directive is an Entry that represents a single option or named value. They are normally found in Section instances.

attrs
children
lineno
parent
property section
property section_name
src
class insights.parsr.query.Entry(name=None, attrs=None, children=None, lineno=None, src=None)[source]

Bases: object

Entry is the base class for the data model, which is a tree of Entry instances. Each instance has a name, attributes, a parent, and children.

attrs
children
property directives

Returns all immediate children that are instances of Directive.

find(*queries, **kwargs)[source]

Finds matching results anywhere in the configuration. The arguments are the same as those accepted by compile_queries(), and it accepts a keyword called roots that will return the ultimate root nodes of any results.

get_crumbs()[source]

Get the unique name from the current entry to the root.

get_keys()[source]

Returns the unique names of all the children as a list.

property grandchildren

Returns a flattened list of all grandchildren.

property line

Returns the original first line of text that generated the Entry.

lineno
parent
property root

Returns the furthest ancestor Entry. If the node is already the furthest ancestor, None is returned.

property section
property section_name
property sections

Returns all immediate children that are instances of Section.

select(*queries, **kwargs)[source]

select uses compile_queries() to compile queries into a query function and then passes the function, the current Entry instance's children, and kwargs on to select().

src
property string_value

Returns the string representation of all attributes separated by a single space.

upto(query)[source]

Go up from the current node to the first node that matches query.

property value

Returns None if no attributes exist, the first attribute if only one exists, or the string_value if more than one exists.

where(name, value=None)[source]

Selects current nodes based on name and value queries of child nodes. If any immediate children match the queries, the parent is included in the results. The where_query() function can be used to construct queries that act on the children as a whole instead of one at a time.

Example:

>>> from insights.parsr.query import where_query as q
>>> from insights.parsr.query import from_dict
>>> conf = from_dict(load_config())
>>> r = conf.status.conditions.where(q("status", "False") | q("type", "Progressing"))
>>> r.message
>>> r.lastTransitionTime.values
['2019-08-04T23:17:08Z', '2019-08-04T23:32:14Z']

class insights.parsr.query.NameQuery(expr)[source]

Bases: insights.parsr.query._EntryQuery

A query against the name of an Entry.

test(n)[source]
class insights.parsr.query.Result(children=None)[source]

Bases: insights.parsr.query.Entry

Result is an Entry whose children are the results of a query.

attrs
children
get_crumbs()[source]

Get the unique names from the current locations to the roots.

get_keys()[source]

Returns the unique names of all the grandchildren as a list.

lineno
nth(n)[source]

If the results are from a list beneath a node, get the nth element of the results for each unique parent.

Example: conf.status.conditions.nth(0) will get the 0th condition of each status.

parent
property parents

Returns all of the deduplicated parents as a list. If a child has no parent, the child itself is treated as the parent.

property roots

Returns the furthest ancestor Entry instances of all children. If a child has no furthest ancestor, the child itself is treated as a root.

select(*queries, **kwargs)[source]

select uses compile_queries() to compile queries into a query function and then passes the function, the current Entry instance's children, and kwargs on to select().

src
property string_value

Returns the string value of the child if only one child exists. This helps queries behave more like dictionaries when you know only one result should exist.

property unique_values

Returns the unique values of all the children as a list.

upto(query)[source]

Go up from the current results to the first nodes that match query.

property value

Returns the value of the child if only one child exists. This helps queries behave more like dictionaries when you know only one result should exist.

property values

Returns the values of all the children as a list.

where(name, value=None)[source]

Selects current nodes based on name and value queries of child nodes. If any immediate children match the queries, the parent is included in the results. The where_query() function can be used to construct queries that act on the children as a whole instead of one at a time.

Example:

>>> from insights.parsr.query import where_query as q
>>> from insights.parsr.query import from_dict
>>> conf = from_dict(load_config())
>>> r = conf.status.conditions.where(q("status", "False") | q("type", "Progressing"))
>>> r.message
>>> r.lastTransitionTime.values
['2019-08-04T23:17:08Z', '2019-08-04T23:32:14Z']

class insights.parsr.query.Section(name=None, attrs=None, children=None, lineno=None, src=None)[source]

Bases: insights.parsr.query.Entry

A Section is an Entry composed of other Sections and Directive instances.

attrs
children
lineno
parent
property section

Returns the name of the section.

property section_name

Returns the value of the section.

src
class insights.parsr.query.SimpleQuery(expr)[source]

Bases: insights.parsr.query._EntryQuery

Automatically used in Entry.where or Result.where. SimpleQuery wraps a function or a lambda that will be passed each Entry of the current result. The passed function should return True or False.

test(node)[source]
insights.parsr.query.all_(expr)[source]

Use to express that expr must succeed on all attributes for the query to be successful. Only works against Entry attributes.

insights.parsr.query.any_(expr)[source]

Use to express that expr can succeed on any attribute for the query to be successful. Only works against Entry attributes.

insights.parsr.query.child_query(name, value=None)[source]

Converts a query into a ChildQuery that works on all child nodes at once to determine if the current node is accepted.

insights.parsr.query.compile_queries(*queries)[source]

compile_queries returns a function that will execute a list of query expressions against an Entry. The first query is run against the current entry’s children, the second query is run against the children of the children remaining from the first query, and so on.

If a query is a single object, it matches against the name of an Entry. If it's a tuple, the first element matches against the name, and subsequent elements are tried against each individual attribute. The attribute results are ORed together, and that result is ANDed with the name query. Any query that raises an exception is treated as False.
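The name/tuple matching rule can be sketched in plain Python. This is an illustration of the rule just described, not the actual compile_queries implementation; it supports literal equality and callable predicates as query elements:

```python
def matches(query, name, attrs):
    if not isinstance(query, tuple):
        return query == name          # bare query: match the name only
    name_q, attr_qs = query[0], query[1:]
    if name_q != name:
        return False                  # name query is ANDed in

    def attr_ok(q, a):
        try:
            return q == a or (callable(q) and q(a))
        except Exception:
            return False              # failing queries are treated as False

    # attribute results are ORed together
    return any(attr_ok(q, a) for q in attr_qs for a in attrs)

print(matches("listen", "listen", ["80"]))                 # True
print(matches(("listen", "443"), "listen", ["80"]))        # False
print(matches(("listen", "80", "443"), "listen", ["80"]))  # True
```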

insights.parsr.query.from_dict(orig)[source]

from_dict is a helper function that does its best to convert a python dict into a tree of Entry instances that can be queried.

insights.parsr.query.make_child_query(name, value=None)

Converts a query into a ChildQuery that works on all child nodes at once to determine if the current node is accepted.

insights.parsr.query.pretty_format(root, indent=4)[source]

pretty_format generates a text representation of a model as a list of lines.

insights.parsr.query.select(query, nodes, deep=False, roots=False)[source]

select runs query, a function returned by compile_queries(), against a list of Entry instances. If you pass deep=True, select recursively walks each entry in the list and accumulates the results of running the query against it. If you pass roots=True, select returns the deduplicated set of final ancestors of all successful queries. Otherwise, it returns the matching entries.

insights.parsr.query.boolean

The boolean module allows delayed evaluation of boolean expressions. You wrap predicates in objects that have overloaded operators so they can be connected symbolically to express and, or, and not. This is useful if you want to build up a complicated predicate and pass it to something else for evaluation, in particular insights.parsr.query.Entry instances.

def is_even(n):
    return (n % 2) == 0

def is_positive(n):
    return n > 0

even_and_positive = pred(is_even) & pred(is_positive)

even_and_positive(6) == True
even_and_positive(-2) == False
even_and_positive(3) == False

You can also wrap two-parameter functions to which you want to partially apply an argument. The partially applied argument binds to the second parameter; the first parameter is filled by the value the predicate is evaluated against once it's fully applied.

import operator
lt = pred2(operator.lt)  # operator.lt is lt(a, b) == (a < b)
gt = pred2(operator.gt)  # operator.gt is gt(a, b) == (a > b)

gt_five = gt(5)  # creates a function of one argument that when called
                 # returns operator.gt(x, 5)

lt_ten = lt(10)  # creates a function of one argument that when called
                 # returns operator.lt(x, 10)

gt_five_and_lt_ten = gt(5) & lt(10)
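The delayed-evaluation mechanism above can be sketched with a minimal Predicate class. This is an illustration of the idea, assuming a simplified shape; the real insights.parsr.query.boolean classes (Boolean, All, Any, Not) differ in structure:

```python
import operator

class Predicate:
    def __init__(self, func):
        self.func = func

    def __call__(self, value):
        return self.test(value)

    def test(self, value):
        return bool(self.func(value))

    # overloaded operators connect predicates symbolically;
    # nothing is evaluated until the combined predicate is called
    def __and__(self, other):
        return Predicate(lambda v: self.test(v) and other.test(v))

    def __or__(self, other):
        return Predicate(lambda v: self.test(v) or other.test(v))

    def __invert__(self):
        return Predicate(lambda v: not self.test(v))

def pred(func):
    return Predicate(func)

def pred2(func):
    # partially apply the second argument; the first arrives at call time
    def apply(arg):
        return Predicate(lambda v: func(v, arg))
    return apply

gt = pred2(operator.gt)
lt = pred2(operator.lt)
between = gt(5) & lt(10)
print(between(7))   # True
print(between(12))  # False
```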
class insights.parsr.query.boolean.All(*exprs)[source]

Bases: insights.parsr.query.boolean.Boolean

test(value)[source]
insights.parsr.query.boolean.And

alias of insights.parsr.query.boolean.All

class insights.parsr.query.boolean.Any(*exprs)[source]

Bases: insights.parsr.query.boolean.Boolean

test(value)[source]
class insights.parsr.query.boolean.Boolean[source]

Bases: object

test(value)[source]
class insights.parsr.query.boolean.CaselessPredicate(func, *args)[source]

Bases: insights.parsr.query.boolean.Predicate

test(lhs)[source]
class insights.parsr.query.boolean.Not(query)[source]

Bases: insights.parsr.query.boolean.Boolean

test(value)[source]
insights.parsr.query.boolean.Or

alias of insights.parsr.query.boolean.Any

class insights.parsr.query.boolean.Predicate(func, *args)[source]

Bases: insights.parsr.query.boolean.Boolean

test(value)[source]
insights.parsr.query.boolean.pred(func, ignore_case=False)[source]
insights.parsr.query.boolean.pred2(func, ignore_case=False)[source]

insights.specs

class insights.specs.Openshift[source]

Bases: insights.core.spec_factory.SpecSet

cluster_operators = insights.specs.Openshift.cluster_operators
crds = insights.specs.Openshift.crds
crs = insights.specs.Openshift.crs
machine_configs = insights.specs.Openshift.machine_configs
machine_id = insights.specs.Openshift.machine_id
machines = insights.specs.Openshift.machines
namespaces = insights.specs.Openshift.namespaces
nodes = insights.specs.Openshift.nodes
pods = insights.specs.Openshift.pods
pvcs = insights.specs.Openshift.pvcs
storage_classes = insights.specs.Openshift.storage_classes
class insights.specs.Specs[source]

Bases: insights.core.spec_factory.SpecSet

amq_broker = insights.specs.Specs.amq_broker
audit_log = insights.specs.Specs.audit_log
auditctl_status = insights.specs.Specs.auditctl_status
auditd_conf = insights.specs.Specs.auditd_conf
autofs_conf = insights.specs.Specs.autofs_conf
avc_cache_threshold = insights.specs.Specs.avc_cache_threshold
avc_hash_stats = insights.specs.Specs.avc_hash_stats
aws_instance_id_doc = insights.specs.Specs.aws_instance_id_doc
aws_instance_id_pkcs7 = insights.specs.Specs.aws_instance_id_pkcs7
aws_instance_type = insights.specs.Specs.aws_instance_type
azure_instance_type = insights.specs.Specs.azure_instance_type
bios_uuid = insights.specs.Specs.bios_uuid
blkid = insights.specs.Specs.blkid
bond = insights.specs.Specs.bond
bond_dynamic_lb = insights.specs.Specs.bond_dynamic_lb
boot_loader_entries = insights.specs.Specs.boot_loader_entries
branch_info = insights.specs.Specs.branch_info
brctl_show = insights.specs.Specs.brctl_show
candlepin_error_log = insights.specs.Specs.candlepin_error_log
candlepin_log = insights.specs.Specs.candlepin_log
catalina_out = insights.specs.Specs.catalina_out
catalina_server_log = insights.specs.Specs.catalina_server_log
cciss = insights.specs.Specs.cciss
cdc_wdm = insights.specs.Specs.cdc_wdm
ceilometer_central_log = insights.specs.Specs.ceilometer_central_log
ceilometer_collector_log = insights.specs.Specs.ceilometer_collector_log
ceilometer_compute_log = insights.specs.Specs.ceilometer_compute_log
ceilometer_conf = insights.specs.Specs.ceilometer_conf
ceph_conf = insights.specs.Specs.ceph_conf
ceph_config_show = insights.specs.Specs.ceph_config_show
ceph_df_detail = insights.specs.Specs.ceph_df_detail
ceph_health_detail = insights.specs.Specs.ceph_health_detail
ceph_insights = insights.specs.Specs.ceph_insights
ceph_log = insights.specs.Specs.ceph_log
ceph_osd_df = insights.specs.Specs.ceph_osd_df
ceph_osd_dump = insights.specs.Specs.ceph_osd_dump
ceph_osd_ec_profile_get = insights.specs.Specs.ceph_osd_ec_profile_get
ceph_osd_ec_profile_ls = insights.specs.Specs.ceph_osd_ec_profile_ls
ceph_osd_log = insights.specs.Specs.ceph_osd_log
ceph_osd_tree = insights.specs.Specs.ceph_osd_tree
ceph_osd_tree_text = insights.specs.Specs.ceph_osd_tree_text
ceph_report = insights.specs.Specs.ceph_report
ceph_s = insights.specs.Specs.ceph_s
ceph_v = insights.specs.Specs.ceph_v
certificates_enddate = insights.specs.Specs.certificates_enddate
cgroups = insights.specs.Specs.cgroups
checkin_conf = insights.specs.Specs.checkin_conf
chkconfig = insights.specs.Specs.chkconfig
chrony_conf = insights.specs.Specs.chrony_conf
chronyc_sources = insights.specs.Specs.chronyc_sources
cib_xml = insights.specs.Specs.cib_xml
cinder_api_log = insights.specs.Specs.cinder_api_log
cinder_conf = insights.specs.Specs.cinder_conf
cinder_volume_log = insights.specs.Specs.cinder_volume_log
cloud_init_custom_network = insights.specs.Specs.cloud_init_custom_network
cloud_init_log = insights.specs.Specs.cloud_init_log
cluster_conf = insights.specs.Specs.cluster_conf
cmdline = insights.specs.Specs.cmdline
cobbler_modules_conf = insights.specs.Specs.cobbler_modules_conf
cobbler_settings = insights.specs.Specs.cobbler_settings
corosync = insights.specs.Specs.corosync
corosync_conf = insights.specs.Specs.corosync_conf
cpe = insights.specs.Specs.cpe
cpu_cores = insights.specs.Specs.cpu_cores
cpu_siblings = insights.specs.Specs.cpu_siblings
cpu_smt_active = insights.specs.Specs.cpu_smt_active
cpu_smt_control = insights.specs.Specs.cpu_smt_control
cpu_vulns = insights.specs.Specs.cpu_vulns
cpu_vulns_meltdown = insights.specs.Specs.cpu_vulns_meltdown
cpu_vulns_spec_store_bypass = insights.specs.Specs.cpu_vulns_spec_store_bypass
cpu_vulns_spectre_v1 = insights.specs.Specs.cpu_vulns_spectre_v1
cpu_vulns_spectre_v2 = insights.specs.Specs.cpu_vulns_spectre_v2
cpuinfo = insights.specs.Specs.cpuinfo
cpuinfo_max_freq = insights.specs.Specs.cpuinfo_max_freq
cpupower_frequency_info = insights.specs.Specs.cpupower_frequency_info
cpuset_cpus = insights.specs.Specs.cpuset_cpus
crt = insights.specs.Specs.crt
crypto_policies_bind = insights.specs.Specs.crypto_policies_bind
crypto_policies_config = insights.specs.Specs.crypto_policies_config
crypto_policies_opensshserver = insights.specs.Specs.crypto_policies_opensshserver
crypto_policies_state_current = insights.specs.Specs.crypto_policies_state_current
current_clocksource = insights.specs.Specs.current_clocksource
date = insights.specs.Specs.date
date_iso = insights.specs.Specs.date_iso
date_utc = insights.specs.Specs.date_utc
db2licm_l = insights.specs.Specs.db2licm_l
dcbtool_gc_dcb = insights.specs.Specs.dcbtool_gc_dcb
df__al = insights.specs.Specs.df__al
df__alP = insights.specs.Specs.df__alP
df__li = insights.specs.Specs.df__li
dig = insights.specs.Specs.dig
dig_dnssec = insights.specs.Specs.dig_dnssec
dig_edns = insights.specs.Specs.dig_edns
dig_noedns = insights.specs.Specs.dig_noedns
dirsrv = insights.specs.Specs.dirsrv
dirsrv_access = insights.specs.Specs.dirsrv_access
dirsrv_errors = insights.specs.Specs.dirsrv_errors
display_java = insights.specs.Specs.display_java
display_name = insights.specs.Specs.display_name
dmesg = insights.specs.Specs.dmesg
dmesg_log = insights.specs.Specs.dmesg_log
dmidecode = insights.specs.Specs.dmidecode
dmsetup_info = insights.specs.Specs.dmsetup_info
dnf_module_info = insights.specs.Specs.dnf_module_info
dnf_module_list = insights.specs.Specs.dnf_module_list
dnf_modules = insights.specs.Specs.dnf_modules
dnsmasq_config = insights.specs.Specs.dnsmasq_config
docker_container_inspect = insights.specs.Specs.docker_container_inspect
docker_host_machine_id = insights.specs.Specs.docker_host_machine_id
docker_image_inspect = insights.specs.Specs.docker_image_inspect
docker_info = insights.specs.Specs.docker_info
docker_list_containers = insights.specs.Specs.docker_list_containers
docker_list_images = insights.specs.Specs.docker_list_images
docker_network = insights.specs.Specs.docker_network
docker_storage = insights.specs.Specs.docker_storage
docker_storage_setup = insights.specs.Specs.docker_storage_setup
docker_sysconfig = insights.specs.Specs.docker_sysconfig
dumpe2fs_h = insights.specs.Specs.dumpe2fs_h
engine_config_all = insights.specs.Specs.engine_config_all
engine_log = insights.specs.Specs.engine_log
etc_journald_conf = insights.specs.Specs.etc_journald_conf
etc_journald_conf_d = insights.specs.Specs.etc_journald_conf_d
etc_machine_id = insights.specs.Specs.etc_machine_id
etcd_conf = insights.specs.Specs.etcd_conf
ethernet_interfaces = insights.specs.Specs.ethernet_interfaces
ethtool = insights.specs.Specs.ethtool
ethtool_S = insights.specs.Specs.ethtool_S
ethtool_T = insights.specs.Specs.ethtool_T
ethtool_a = insights.specs.Specs.ethtool_a
ethtool_c = insights.specs.Specs.ethtool_c
ethtool_g = insights.specs.Specs.ethtool_g
ethtool_i = insights.specs.Specs.ethtool_i
ethtool_k = insights.specs.Specs.ethtool_k
exim_conf = insights.specs.Specs.exim_conf
facter = insights.specs.Specs.facter
fc_match = insights.specs.Specs.fc_match
fcoeadm_i = insights.specs.Specs.fcoeadm_i
fdisk_l = insights.specs.Specs.fdisk_l
fdisk_l_sos = insights.specs.Specs.fdisk_l_sos
findmnt_lo_propagation = insights.specs.Specs.findmnt_lo_propagation
firewalld_conf = insights.specs.Specs.firewalld_conf
foreman_production_log = insights.specs.Specs.foreman_production_log
foreman_proxy_conf = insights.specs.Specs.foreman_proxy_conf
foreman_proxy_log = insights.specs.Specs.foreman_proxy_log
foreman_rake_db_migrate_status = insights.specs.Specs.foreman_rake_db_migrate_status
foreman_satellite_log = insights.specs.Specs.foreman_satellite_log
foreman_ssl_access_ssl_log = insights.specs.Specs.foreman_ssl_access_ssl_log
foreman_tasks_config = insights.specs.Specs.foreman_tasks_config
freeipa_healthcheck_log = insights.specs.Specs.freeipa_healthcheck_log
fstab = insights.specs.Specs.fstab
galera_cnf = insights.specs.Specs.galera_cnf
getcert_list = insights.specs.Specs.getcert_list
getconf_page_size = insights.specs.Specs.getconf_page_size
getenforce = insights.specs.Specs.getenforce
getsebool = insights.specs.Specs.getsebool
glance_api_conf = insights.specs.Specs.glance_api_conf
glance_api_log = insights.specs.Specs.glance_api_log
glance_cache_conf = insights.specs.Specs.glance_cache_conf
glance_registry_conf = insights.specs.Specs.glance_registry_conf
gluster_peer_status = insights.specs.Specs.gluster_peer_status
gluster_v_info = insights.specs.Specs.gluster_v_info
gluster_v_status = insights.specs.Specs.gluster_v_status
gnocchi_conf = insights.specs.Specs.gnocchi_conf
gnocchi_metricd_log = insights.specs.Specs.gnocchi_metricd_log
grub1_config_perms = insights.specs.Specs.grub1_config_perms
grub2_cfg = insights.specs.Specs.grub2_cfg
grub2_efi_cfg = insights.specs.Specs.grub2_efi_cfg
grub_conf = insights.specs.Specs.grub_conf
grub_config_perms = insights.specs.Specs.grub_config_perms
grub_efi_conf = insights.specs.Specs.grub_efi_conf
grubby_default_index = insights.specs.Specs.grubby_default_index
grubby_default_kernel = insights.specs.Specs.grubby_default_kernel
hammer_ping = insights.specs.Specs.hammer_ping
hammer_task_list = insights.specs.Specs.hammer_task_list
haproxy_cfg = insights.specs.Specs.haproxy_cfg
heat_api_log = insights.specs.Specs.heat_api_log
heat_conf = insights.specs.Specs.heat_conf
heat_crontab = insights.specs.Specs.heat_crontab
heat_crontab_container = insights.specs.Specs.heat_crontab_container
heat_engine_log = insights.specs.Specs.heat_engine_log
hostname = insights.specs.Specs.hostname
hostname_default = insights.specs.Specs.hostname_default
hostname_short = insights.specs.Specs.hostname_short
hosts = insights.specs.Specs.hosts
hponcfg_g = insights.specs.Specs.hponcfg_g
httpd24_httpd_error_log = insights.specs.Specs.httpd24_httpd_error_log
httpd_M = insights.specs.Specs.httpd_M
httpd_V = insights.specs.Specs.httpd_V
httpd_access_log = insights.specs.Specs.httpd_access_log
httpd_conf = insights.specs.Specs.httpd_conf
httpd_conf_scl_httpd24 = insights.specs.Specs.httpd_conf_scl_httpd24
httpd_conf_scl_jbcs_httpd24 = insights.specs.Specs.httpd_conf_scl_jbcs_httpd24
httpd_conf_sos = insights.specs.Specs.httpd_conf_sos
httpd_error_log = insights.specs.Specs.httpd_error_log
httpd_limits = insights.specs.Specs.httpd_limits
httpd_on_nfs = insights.specs.Specs.httpd_on_nfs
httpd_pid = insights.specs.Specs.httpd_pid
httpd_ssl_access_log = insights.specs.Specs.httpd_ssl_access_log
httpd_ssl_error_log = insights.specs.Specs.httpd_ssl_error_log
ifcfg = insights.specs.Specs.ifcfg
ifcfg_static_route = insights.specs.Specs.ifcfg_static_route
ifconfig = insights.specs.Specs.ifconfig
imagemagick_policy = insights.specs.Specs.imagemagick_policy
init_ora = insights.specs.Specs.init_ora
init_process_cgroup = insights.specs.Specs.init_process_cgroup
initscript = insights.specs.Specs.initscript
installed_rpms = insights.specs.Specs.installed_rpms
interrupts = insights.specs.Specs.interrupts
ip6tables = insights.specs.Specs.ip6tables
ip6tables_permanent = insights.specs.Specs.ip6tables_permanent
ip_addr = insights.specs.Specs.ip_addr
ip_addresses = insights.specs.Specs.ip_addresses
ip_neigh_show = insights.specs.Specs.ip_neigh_show
ip_netns_exec_namespace_lsof = insights.specs.Specs.ip_netns_exec_namespace_lsof
ip_route_show_table_all = insights.specs.Specs.ip_route_show_table_all
ipaupgrade_log = insights.specs.Specs.ipaupgrade_log
ipcs_m = insights.specs.Specs.ipcs_m
ipcs_m_p = insights.specs.Specs.ipcs_m_p
ipcs_s = insights.specs.Specs.ipcs_s
ipcs_s_i = insights.specs.Specs.ipcs_s_i
iptables = insights.specs.Specs.iptables
iptables_permanent = insights.specs.Specs.iptables_permanent
ipv4_neigh = insights.specs.Specs.ipv4_neigh
ipv6_neigh = insights.specs.Specs.ipv6_neigh
ironic_conf = insights.specs.Specs.ironic_conf
ironic_inspector_log = insights.specs.Specs.ironic_inspector_log
iscsiadm_m_session = insights.specs.Specs.iscsiadm_m_session
jbcs_httpd24_httpd_error_log = insights.specs.Specs.jbcs_httpd24_httpd_error_log
jboss_domain_server_log = insights.specs.Specs.jboss_domain_server_log
jboss_standalone_main_config = insights.specs.Specs.jboss_standalone_main_config
jboss_standalone_server_log = insights.specs.Specs.jboss_standalone_server_log
jboss_version = insights.specs.Specs.jboss_version
journal_since_boot = insights.specs.Specs.journal_since_boot
katello_service_status = insights.specs.Specs.katello_service_status
kdump_conf = insights.specs.Specs.kdump_conf
kerberos_kdc_log = insights.specs.Specs.kerberos_kdc_log
kernel_config = insights.specs.Specs.kernel_config
kexec_crash_loaded = insights.specs.Specs.kexec_crash_loaded
kexec_crash_size = insights.specs.Specs.kexec_crash_size
keystone_conf = insights.specs.Specs.keystone_conf
keystone_crontab = insights.specs.Specs.keystone_crontab
keystone_crontab_container = insights.specs.Specs.keystone_crontab_container
keystone_log = insights.specs.Specs.keystone_log
kpatch_list = insights.specs.Specs.kpatch_list
kpatch_patch_files = insights.specs.Specs.kpatch_patch_files
krb5 = insights.specs.Specs.krb5
ksmstate = insights.specs.Specs.ksmstate
kubepods_cpu_quota = insights.specs.Specs.kubepods_cpu_quota
lastupload = insights.specs.Specs.lastupload
libkeyutils = insights.specs.Specs.libkeyutils
libkeyutils_objdumps = insights.specs.Specs.libkeyutils_objdumps
libvirtd_log = insights.specs.Specs.libvirtd_log
libvirtd_qemu_log = insights.specs.Specs.libvirtd_qemu_log
limits_conf = insights.specs.Specs.limits_conf
locale = insights.specs.Specs.locale
localtime = insights.specs.Specs.localtime
logrotate_conf = insights.specs.Specs.logrotate_conf
lpstat_p = insights.specs.Specs.lpstat_p
ls_R_var_lib_nova_instances = insights.specs.Specs.ls_R_var_lib_nova_instances
ls_boot = insights.specs.Specs.ls_boot
ls_dev = insights.specs.Specs.ls_dev
ls_disk = insights.specs.Specs.ls_disk
ls_docker_volumes = insights.specs.Specs.ls_docker_volumes
ls_edac_mc = insights.specs.Specs.ls_edac_mc
ls_etc = insights.specs.Specs.ls_etc
ls_lib_firmware = insights.specs.Specs.ls_lib_firmware
ls_ocp_cni_openshift_sdn = insights.specs.Specs.ls_ocp_cni_openshift_sdn
ls_origin_local_volumes_pods = insights.specs.Specs.ls_origin_local_volumes_pods
ls_osroot = insights.specs.Specs.ls_osroot
ls_run_systemd_generator = insights.specs.Specs.ls_run_systemd_generator
ls_sys_firmware = insights.specs.Specs.ls_sys_firmware
ls_usr_lib64 = insights.specs.Specs.ls_usr_lib64
ls_usr_sbin = insights.specs.Specs.ls_usr_sbin
ls_var_lib_mongodb = insights.specs.Specs.ls_var_lib_mongodb
ls_var_lib_nova_instances = insights.specs.Specs.ls_var_lib_nova_instances
ls_var_log = insights.specs.Specs.ls_var_log
ls_var_opt_mssql = insights.specs.Specs.ls_var_opt_mssql
ls_var_opt_mssql_log = insights.specs.Specs.ls_var_opt_mssql_log
ls_var_run = insights.specs.Specs.ls_var_run
ls_var_spool_clientmq = insights.specs.Specs.ls_var_spool_clientmq
ls_var_spool_postfix_maildrop = insights.specs.Specs.ls_var_spool_postfix_maildrop
ls_var_tmp = insights.specs.Specs.ls_var_tmp
ls_var_www = insights.specs.Specs.ls_var_www
lsblk = insights.specs.Specs.lsblk
lsblk_pairs = insights.specs.Specs.lsblk_pairs
lscpu = insights.specs.Specs.lscpu
lsinitrd = insights.specs.Specs.lsinitrd
lsinitrd_lvm_conf = insights.specs.Specs.lsinitrd_lvm_conf
lsmod = insights.specs.Specs.lsmod
lsof = insights.specs.Specs.lsof
lspci = insights.specs.Specs.lspci
lssap = insights.specs.Specs.lssap
lsscsi = insights.specs.Specs.lsscsi
lvdisplay = insights.specs.Specs.lvdisplay
lvm_conf = insights.specs.Specs.lvm_conf
lvs = insights.specs.Specs.lvs
lvs_noheadings = insights.specs.Specs.lvs_noheadings
lvs_noheadings_all = insights.specs.Specs.lvs_noheadings_all
mac_addresses = insights.specs.Specs.mac_addresses
machine_id = insights.specs.Specs.machine_id
manila_conf = insights.specs.Specs.manila_conf
mariadb_log = insights.specs.Specs.mariadb_log
max_uid = insights.specs.Specs.max_uid
md5chk_files = insights.specs.Specs.md5chk_files
mdstat = insights.specs.Specs.mdstat
meminfo = insights.specs.Specs.meminfo
messages = insights.specs.Specs.messages
metadata_json = insights.specs.Specs.metadata_json
mistral_executor_log = insights.specs.Specs.mistral_executor_log
mlx4_port = insights.specs.Specs.mlx4_port
modinfo = insights.specs.Specs.modinfo
modinfo_all = insights.specs.Specs.modinfo_all
modinfo_i40e = insights.specs.Specs.modinfo_i40e
modinfo_igb = insights.specs.Specs.modinfo_igb
modinfo_ixgbe = insights.specs.Specs.modinfo_ixgbe
modinfo_veth = insights.specs.Specs.modinfo_veth
modinfo_vmxnet3 = insights.specs.Specs.modinfo_vmxnet3
modprobe = insights.specs.Specs.modprobe
module = insights.specs.Specs.module
mongod_conf = insights.specs.Specs.mongod_conf
mount = insights.specs.Specs.mount
mounts = insights.specs.Specs.mounts
mssql_conf = insights.specs.Specs.mssql_conf
multicast_querier = insights.specs.Specs.multicast_querier
multipath__v4__ll = insights.specs.Specs.multipath__v4__ll
multipath_conf = insights.specs.Specs.multipath_conf
multipath_conf_initramfs = insights.specs.Specs.multipath_conf_initramfs
mysql_log = insights.specs.Specs.mysql_log
mysqladmin_status = insights.specs.Specs.mysqladmin_status
mysqladmin_vars = insights.specs.Specs.mysqladmin_vars
mysqld_limits = insights.specs.Specs.mysqld_limits
named_checkconf_p = insights.specs.Specs.named_checkconf_p
namespace = insights.specs.Specs.namespace
netconsole = insights.specs.Specs.netconsole
netstat = insights.specs.Specs.netstat
netstat_agn = insights.specs.Specs.netstat_agn
netstat_i = insights.specs.Specs.netstat_i
netstat_s = insights.specs.Specs.netstat_s
networkmanager_dispatcher_d = insights.specs.Specs.networkmanager_dispatcher_d
neutron_conf = insights.specs.Specs.neutron_conf
neutron_dhcp_agent_ini = insights.specs.Specs.neutron_dhcp_agent_ini
neutron_l3_agent_ini = insights.specs.Specs.neutron_l3_agent_ini
neutron_l3_agent_log = insights.specs.Specs.neutron_l3_agent_log
neutron_metadata_agent_ini = insights.specs.Specs.neutron_metadata_agent_ini
neutron_metadata_agent_log = insights.specs.Specs.neutron_metadata_agent_log
neutron_ml2_conf = insights.specs.Specs.neutron_ml2_conf
neutron_ovs_agent_log = insights.specs.Specs.neutron_ovs_agent_log
neutron_plugin_ini = insights.specs.Specs.neutron_plugin_ini
neutron_server_log = insights.specs.Specs.neutron_server_log
nfs_exports = insights.specs.Specs.nfs_exports
nfs_exports_d = insights.specs.Specs.nfs_exports_d
nginx_conf = insights.specs.Specs.nginx_conf
nmcli_conn_show = insights.specs.Specs.nmcli_conn_show
nmcli_dev_show = insights.specs.Specs.nmcli_dev_show
nmcli_dev_show_sos = insights.specs.Specs.nmcli_dev_show_sos
nova_api_log = insights.specs.Specs.nova_api_log
nova_compute_log = insights.specs.Specs.nova_compute_log
nova_conf = insights.specs.Specs.nova_conf
nova_crontab = insights.specs.Specs.nova_crontab
nova_crontab_container = insights.specs.Specs.nova_crontab_container
nova_migration_uid = insights.specs.Specs.nova_migration_uid
nova_uid = insights.specs.Specs.nova_uid
nscd_conf = insights.specs.Specs.nscd_conf
nsswitch_conf = insights.specs.Specs.nsswitch_conf
ntp_conf = insights.specs.Specs.ntp_conf
ntpq_leap = insights.specs.Specs.ntpq_leap
ntpq_pn = insights.specs.Specs.ntpq_pn
ntptime = insights.specs.Specs.ntptime
numa_cpus = insights.specs.Specs.numa_cpus
numeric_user_group_name = insights.specs.Specs.numeric_user_group_name
nvme_core_io_timeout = insights.specs.Specs.nvme_core_io_timeout
oc_get_bc = insights.specs.Specs.oc_get_bc
oc_get_build = insights.specs.Specs.oc_get_build
oc_get_clusterrole_with_config = insights.specs.Specs.oc_get_clusterrole_with_config
oc_get_clusterrolebinding_with_config = insights.specs.Specs.oc_get_clusterrolebinding_with_config
oc_get_configmap = insights.specs.Specs.oc_get_configmap
oc_get_dc = insights.specs.Specs.oc_get_dc
oc_get_egressnetworkpolicy = insights.specs.Specs.oc_get_egressnetworkpolicy
oc_get_endpoints = insights.specs.Specs.oc_get_endpoints
oc_get_event = insights.specs.Specs.oc_get_event
oc_get_node = insights.specs.Specs.oc_get_node
oc_get_pod = insights.specs.Specs.oc_get_pod
oc_get_project = insights.specs.Specs.oc_get_project
oc_get_pv = insights.specs.Specs.oc_get_pv
oc_get_pvc = insights.specs.Specs.oc_get_pvc
oc_get_rc = insights.specs.Specs.oc_get_rc
oc_get_role = insights.specs.Specs.oc_get_role
oc_get_rolebinding = insights.specs.Specs.oc_get_rolebinding
oc_get_route = insights.specs.Specs.oc_get_route
oc_get_service = insights.specs.Specs.oc_get_service
odbc_ini = insights.specs.Specs.odbc_ini
odbcinst_ini = insights.specs.Specs.odbcinst_ini
openshift_certificates = insights.specs.Specs.openshift_certificates
openshift_fluentd_environ = insights.specs.Specs.openshift_fluentd_environ
openshift_hosts = insights.specs.Specs.openshift_hosts
openshift_router_environ = insights.specs.Specs.openshift_router_environ
openvswitch_daemon_log = insights.specs.Specs.openvswitch_daemon_log
openvswitch_other_config = insights.specs.Specs.openvswitch_other_config
openvswitch_server_log = insights.specs.Specs.openvswitch_server_log
os_release = insights.specs.Specs.os_release
osa_dispatcher_log = insights.specs.Specs.osa_dispatcher_log
ose_master_config = insights.specs.Specs.ose_master_config
ose_node_config = insights.specs.Specs.ose_node_config
ovirt_engine_boot_log = insights.specs.Specs.ovirt_engine_boot_log
ovirt_engine_confd = insights.specs.Specs.ovirt_engine_confd
ovirt_engine_console_log = insights.specs.Specs.ovirt_engine_console_log
ovirt_engine_server_log = insights.specs.Specs.ovirt_engine_server_log
ovirt_engine_ui_log = insights.specs.Specs.ovirt_engine_ui_log
ovs_appctl_fdb_show_bridge = insights.specs.Specs.ovs_appctl_fdb_show_bridge
ovs_ofctl_dump_flows = insights.specs.Specs.ovs_ofctl_dump_flows
ovs_vsctl_list_bridge = insights.specs.Specs.ovs_vsctl_list_bridge
ovs_vsctl_show = insights.specs.Specs.ovs_vsctl_show
ovs_vswitchd_limits = insights.specs.Specs.ovs_vswitchd_limits
pacemaker_log = insights.specs.Specs.pacemaker_log
package_provides_httpd = insights.specs.Specs.package_provides_httpd
package_provides_java = insights.specs.Specs.package_provides_java
pam_conf = insights.specs.Specs.pam_conf
parted__l = insights.specs.Specs.parted__l
partitions = insights.specs.Specs.partitions
passenger_status = insights.specs.Specs.passenger_status
password_auth = insights.specs.Specs.password_auth
pci_rport_target_disk_paths = insights.specs.Specs.pci_rport_target_disk_paths
pcs_config = insights.specs.Specs.pcs_config
pcs_quorum_status = insights.specs.Specs.pcs_quorum_status
pcs_status = insights.specs.Specs.pcs_status
pluginconf_d = insights.specs.Specs.pluginconf_d
podman_container_inspect = insights.specs.Specs.podman_container_inspect
podman_image_inspect = insights.specs.Specs.podman_image_inspect
podman_list_containers = insights.specs.Specs.podman_list_containers
podman_list_images = insights.specs.Specs.podman_list_images
postgresql_conf = insights.specs.Specs.postgresql_conf
postgresql_log = insights.specs.Specs.postgresql_log
prev_uploader_log = insights.specs.Specs.prev_uploader_log
proc_netstat = insights.specs.Specs.proc_netstat
proc_slabinfo = insights.specs.Specs.proc_slabinfo
proc_snmp_ipv4 = insights.specs.Specs.proc_snmp_ipv4
proc_snmp_ipv6 = insights.specs.Specs.proc_snmp_ipv6
proc_stat = insights.specs.Specs.proc_stat
ps_alxwww = insights.specs.Specs.ps_alxwww
ps_aux = insights.specs.Specs.ps_aux
ps_auxcww = insights.specs.Specs.ps_auxcww
ps_auxww = insights.specs.Specs.ps_auxww
ps_ef = insights.specs.Specs.ps_ef
ps_eo = insights.specs.Specs.ps_eo
pulp_worker_defaults = insights.specs.Specs.pulp_worker_defaults
puppet_ssl_cert_ca_pem = insights.specs.Specs.puppet_ssl_cert_ca_pem
puppetserver_config = insights.specs.Specs.puppetserver_config
pvs = insights.specs.Specs.pvs
pvs_noheadings = insights.specs.Specs.pvs_noheadings
pvs_noheadings_all = insights.specs.Specs.pvs_noheadings_all
qemu_conf = insights.specs.Specs.qemu_conf
qemu_xml = insights.specs.Specs.qemu_xml
qpid_stat_g = insights.specs.Specs.qpid_stat_g
qpid_stat_q = insights.specs.Specs.qpid_stat_q
qpid_stat_u = insights.specs.Specs.qpid_stat_u
qpidd_conf = insights.specs.Specs.qpidd_conf
rabbitmq_env = insights.specs.Specs.rabbitmq_env
rabbitmq_logs = insights.specs.Specs.rabbitmq_logs
rabbitmq_policies = insights.specs.Specs.rabbitmq_policies
rabbitmq_queues = insights.specs.Specs.rabbitmq_queues
rabbitmq_report = insights.specs.Specs.rabbitmq_report
rabbitmq_report_of_containers = insights.specs.Specs.rabbitmq_report_of_containers
rabbitmq_startup_err = insights.specs.Specs.rabbitmq_startup_err
rabbitmq_startup_log = insights.specs.Specs.rabbitmq_startup_log
rabbitmq_users = insights.specs.Specs.rabbitmq_users
rc_local = insights.specs.Specs.rc_local
rdma_conf = insights.specs.Specs.rdma_conf
redhat_release = insights.specs.Specs.redhat_release
resolv_conf = insights.specs.Specs.resolv_conf
rhev_data_center = insights.specs.Specs.rhev_data_center
rhn_charsets = insights.specs.Specs.rhn_charsets
rhn_conf = insights.specs.Specs.rhn_conf
rhn_entitlement_cert_xml = insights.specs.Specs.rhn_entitlement_cert_xml
rhn_hibernate_conf = insights.specs.Specs.rhn_hibernate_conf
rhn_schema_stats = insights.specs.Specs.rhn_schema_stats
rhn_schema_version = insights.specs.Specs.rhn_schema_version
rhn_search_daemon_log = insights.specs.Specs.rhn_search_daemon_log
rhn_server_satellite_log = insights.specs.Specs.rhn_server_satellite_log
rhn_server_xmlrpc_log = insights.specs.Specs.rhn_server_xmlrpc_log
rhn_taskomatic_daemon_log = insights.specs.Specs.rhn_taskomatic_daemon_log
rhosp_release = insights.specs.Specs.rhosp_release
rhsm_conf = insights.specs.Specs.rhsm_conf
rhsm_log = insights.specs.Specs.rhsm_log
rhsm_releasever = insights.specs.Specs.rhsm_releasever
rhv_log_collector_analyzer = insights.specs.Specs.rhv_log_collector_analyzer
rndc_status = insights.specs.Specs.rndc_status
root_crontab = insights.specs.Specs.root_crontab
route = insights.specs.Specs.route
rpm_V_packages = insights.specs.Specs.rpm_V_packages
rsyslog_conf = insights.specs.Specs.rsyslog_conf
running_java = insights.specs.Specs.running_java
samba = insights.specs.Specs.samba
samba_logs = insights.specs.Specs.samba_logs
sap_hdb_version = insights.specs.Specs.sap_hdb_version
sap_host_profile = insights.specs.Specs.sap_host_profile
sapcontrol_getsystemupdatelist = insights.specs.Specs.sapcontrol_getsystemupdatelist
saphostctl_getcimobject_sapinstance = insights.specs.Specs.saphostctl_getcimobject_sapinstance
saphostexec_status = insights.specs.Specs.saphostexec_status
saphostexec_version = insights.specs.Specs.saphostexec_version
sat5_insights_properties = insights.specs.Specs.sat5_insights_properties
satellite_custom_hiera = insights.specs.Specs.satellite_custom_hiera
satellite_enabled_features = insights.specs.Specs.satellite_enabled_features
satellite_mongodb_storage_engine = insights.specs.Specs.satellite_mongodb_storage_engine
satellite_version_rb = insights.specs.Specs.satellite_version_rb
scheduler = insights.specs.Specs.scheduler
scsi = insights.specs.Specs.scsi
scsi_eh_deadline = insights.specs.Specs.scsi_eh_deadline
scsi_fwver = insights.specs.Specs.scsi_fwver
sctp_asc = insights.specs.Specs.sctp_asc
sctp_eps = insights.specs.Specs.sctp_eps
sctp_snmp = insights.specs.Specs.sctp_snmp
sealert = insights.specs.Specs.sealert
secure = insights.specs.Specs.secure
selinux_config = insights.specs.Specs.selinux_config
semid = insights.specs.Specs.semid
sestatus = insights.specs.Specs.sestatus
setup_named_chroot = insights.specs.Specs.setup_named_chroot
smartctl = insights.specs.Specs.smartctl
smartpdc_settings = insights.specs.Specs.smartpdc_settings
smbstatus_S = insights.specs.Specs.smbstatus_S
smbstatus_p = insights.specs.Specs.smbstatus_p
sockstat = insights.specs.Specs.sockstat
softnet_stat = insights.specs.Specs.softnet_stat
software_collections_list = insights.specs.Specs.software_collections_list
spfile_ora = insights.specs.Specs.spfile_ora
ss = insights.specs.Specs.ss
ssh_config = insights.specs.Specs.ssh_config
ssh_foreman_config = insights.specs.Specs.ssh_foreman_config
ssh_foreman_proxy_config = insights.specs.Specs.ssh_foreman_proxy_config
sshd_config = insights.specs.Specs.sshd_config
sshd_config_perms = insights.specs.Specs.sshd_config_perms
sssd_config = insights.specs.Specs.sssd_config
sssd_logs = insights.specs.Specs.sssd_logs
subscription_manager_id = insights.specs.Specs.subscription_manager_id
subscription_manager_installed_product_ids = insights.specs.Specs.subscription_manager_installed_product_ids
subscription_manager_list_consumed = insights.specs.Specs.subscription_manager_list_consumed
subscription_manager_list_installed = insights.specs.Specs.subscription_manager_list_installed
subscription_manager_release_show = insights.specs.Specs.subscription_manager_release_show
swift_conf = insights.specs.Specs.swift_conf
swift_log = insights.specs.Specs.swift_log
swift_object_expirer_conf = insights.specs.Specs.swift_object_expirer_conf
swift_proxy_server_conf = insights.specs.Specs.swift_proxy_server_conf
sysconfig_chronyd = insights.specs.Specs.sysconfig_chronyd
sysconfig_httpd = insights.specs.Specs.sysconfig_httpd
sysconfig_irqbalance = insights.specs.Specs.sysconfig_irqbalance
sysconfig_kdump = insights.specs.Specs.sysconfig_kdump
sysconfig_libvirt_guests = insights.specs.Specs.sysconfig_libvirt_guests
sysconfig_memcached = insights.specs.Specs.sysconfig_memcached
sysconfig_mongod = insights.specs.Specs.sysconfig_mongod
sysconfig_network = insights.specs.Specs.sysconfig_network
sysconfig_ntpd = insights.specs.Specs.sysconfig_ntpd
sysconfig_sshd = insights.specs.Specs.sysconfig_sshd
sysconfig_virt_who = insights.specs.Specs.sysconfig_virt_who
sysctl = insights.specs.Specs.sysctl
sysctl_conf = insights.specs.Specs.sysctl_conf
sysctl_conf_initramfs = insights.specs.Specs.sysctl_conf_initramfs
systemctl_cat_rpcbind_socket = insights.specs.Specs.systemctl_cat_rpcbind_socket
systemctl_cinder_volume = insights.specs.Specs.systemctl_cinder_volume
systemctl_httpd = insights.specs.Specs.systemctl_httpd
systemctl_list_unit_files = insights.specs.Specs.systemctl_list_unit_files
systemctl_list_units = insights.specs.Specs.systemctl_list_units
systemctl_mariadb = insights.specs.Specs.systemctl_mariadb
systemctl_nginx = insights.specs.Specs.systemctl_nginx
systemctl_pulp_celerybeat = insights.specs.Specs.systemctl_pulp_celerybeat
systemctl_pulp_resmg = insights.specs.Specs.systemctl_pulp_resmg
systemctl_pulp_workers = insights.specs.Specs.systemctl_pulp_workers
systemctl_qdrouterd = insights.specs.Specs.systemctl_qdrouterd
systemctl_qpidd = insights.specs.Specs.systemctl_qpidd
systemctl_show_all_services = insights.specs.Specs.systemctl_show_all_services
systemctl_show_target = insights.specs.Specs.systemctl_show_target
systemctl_smartpdc = insights.specs.Specs.systemctl_smartpdc
systemd_docker = insights.specs.Specs.systemd_docker
systemd_logind_conf = insights.specs.Specs.systemd_logind_conf
systemd_openshift_node = insights.specs.Specs.systemd_openshift_node
systemd_system_conf = insights.specs.Specs.systemd_system_conf
systemd_system_origin_accounting = insights.specs.Specs.systemd_system_origin_accounting
systemid = insights.specs.Specs.systemid
systool_b_scsi_v = insights.specs.Specs.systool_b_scsi_v
tags = insights.specs.Specs.tags
teamdctl_config_dump = insights.specs.Specs.teamdctl_config_dump
teamdctl_state_dump = insights.specs.Specs.teamdctl_state_dump
thp_enabled = insights.specs.Specs.thp_enabled
thp_use_zero_page = insights.specs.Specs.thp_use_zero_page
tmpfilesd = insights.specs.Specs.tmpfilesd
tomcat_server_xml = insights.specs.Specs.tomcat_server_xml
tomcat_vdc_fallback = insights.specs.Specs.tomcat_vdc_fallback
tomcat_vdc_targeted = insights.specs.Specs.tomcat_vdc_targeted
tomcat_web_xml = insights.specs.Specs.tomcat_web_xml
tuned_adm = insights.specs.Specs.tuned_adm
tuned_conf = insights.specs.Specs.tuned_conf
udev_fc_wwpn_id_rules = insights.specs.Specs.udev_fc_wwpn_id_rules
udev_persistent_net_rules = insights.specs.Specs.udev_persistent_net_rules
uname = insights.specs.Specs.uname
up2date = insights.specs.Specs.up2date
up2date_log = insights.specs.Specs.up2date_log
uploader_log = insights.specs.Specs.uploader_log
uptime = insights.specs.Specs.uptime
usr_journald_conf_d = insights.specs.Specs.usr_journald_conf_d
var_qemu_xml = insights.specs.Specs.var_qemu_xml
vdo_status = insights.specs.Specs.vdo_status
vdsm_conf = insights.specs.Specs.vdsm_conf
vdsm_id = insights.specs.Specs.vdsm_id
vdsm_import_log = insights.specs.Specs.vdsm_import_log
vdsm_log = insights.specs.Specs.vdsm_log
vdsm_logger_conf = insights.specs.Specs.vdsm_logger_conf
version_info = insights.specs.Specs.version_info
vgdisplay = insights.specs.Specs.vgdisplay
vgs = insights.specs.Specs.vgs
vgs_noheadings = insights.specs.Specs.vgs_noheadings
vgs_noheadings_all = insights.specs.Specs.vgs_noheadings_all
virsh_list_all = insights.specs.Specs.virsh_list_all
virt_uuid_facts = insights.specs.Specs.virt_uuid_facts
virt_what = insights.specs.Specs.virt_what
virt_who_conf = insights.specs.Specs.virt_who_conf
virtlogd_conf = insights.specs.Specs.virtlogd_conf
vma_ra_enabled = insights.specs.Specs.vma_ra_enabled
vmcore_dmesg = insights.specs.Specs.vmcore_dmesg
vmware_tools_conf = insights.specs.Specs.vmware_tools_conf
vsftpd = insights.specs.Specs.vsftpd
vsftpd_conf = insights.specs.Specs.vsftpd_conf
woopsie = insights.specs.Specs.woopsie
x86_ibpb_enabled = insights.specs.Specs.x86_ibpb_enabled
x86_ibrs_enabled = insights.specs.Specs.x86_ibrs_enabled
x86_pti_enabled = insights.specs.Specs.x86_pti_enabled
x86_retp_enabled = insights.specs.Specs.x86_retp_enabled
xfs_info = insights.specs.Specs.xfs_info
xinetd_conf = insights.specs.Specs.xinetd_conf
yum_conf = insights.specs.Specs.yum_conf
yum_list_installed = insights.specs.Specs.yum_list_installed
yum_log = insights.specs.Specs.yum_log
yum_repolist = insights.specs.Specs.yum_repolist
yum_repos_d = insights.specs.Specs.yum_repos_d
zdump_v = insights.specs.Specs.zdump_v
zipl_conf = insights.specs.Specs.zipl_conf

insights.specs.default

This module defines all datasources used by standard Red Hat Insights components.

To define data sources that override the components in this file, create an insights.core.spec_factory.SpecFactory with “insights.specs” as the constructor argument. Data sources created with that factory will override the components in this file that share the same name keyword argument. This allows overriding the data sources that standard Insights Parsers resolve against.

class insights.specs.default.DefaultSpecs[source]

Bases: insights.specs.Specs

amq_broker = <insights.core.spec_factory.glob_file object>
audit_log = <insights.core.spec_factory.simple_file object>
auditctl_status = <insights.core.spec_factory.simple_command object>
auditd_conf = <insights.core.spec_factory.simple_file object>
autofs_conf = <insights.core.spec_factory.simple_file object>
avc_cache_threshold = <insights.core.spec_factory.simple_file object>
avc_hash_stats = <insights.core.spec_factory.simple_file object>
aws_instance_id_doc = <insights.core.spec_factory.simple_command object>
aws_instance_id_pkcs7 = <insights.core.spec_factory.simple_command object>
aws_instance_type = <insights.core.spec_factory.simple_command object>
azure_instance_type = <insights.core.spec_factory.simple_command object>
bios_uuid = <insights.core.spec_factory.simple_command object>
blkid = <insights.core.spec_factory.simple_command object>
block()[source]

Path: /sys/block directories starting with . or ram or dm- or loop

block_devices = <insights.core.spec_factory.listdir object>
bond = <insights.core.spec_factory.glob_file object>
bond_dynamic_lb = <insights.core.spec_factory.glob_file object>
boot_loader_entries = <insights.core.spec_factory.glob_file object>
branch_info = <insights.core.spec_factory.simple_file object>
brctl_show = <insights.core.spec_factory.simple_command object>
candlepin_error_log = <insights.core.spec_factory.simple_file object>
candlepin_log = <insights.core.spec_factory.simple_file object>
catalina_out = <insights.core.spec_factory.foreach_collect object>
catalina_server_log = <insights.core.spec_factory.foreach_collect object>
cciss = <insights.core.spec_factory.glob_file object>
cdc_wdm = <insights.core.spec_factory.simple_file object>
ceilometer_central_log = <insights.core.spec_factory.simple_file object>
ceilometer_collector_log = <insights.core.spec_factory.first_file object>
ceilometer_compute_log = <insights.core.spec_factory.first_file object>
ceilometer_conf = <insights.core.spec_factory.first_file object>
ceph_conf = <insights.core.spec_factory.first_file object>
ceph_config_show = <insights.core.spec_factory.foreach_execute object>
ceph_df_detail = <insights.core.spec_factory.simple_command object>
ceph_health_detail = <insights.core.spec_factory.simple_command object>
ceph_insights = <insights.core.spec_factory.simple_command object>
ceph_log = <insights.core.spec_factory.glob_file object>
ceph_osd_df = <insights.core.spec_factory.simple_command object>
ceph_osd_dump = <insights.core.spec_factory.simple_command object>
ceph_osd_ec_profile_get = <insights.core.spec_factory.foreach_execute object>
ceph_osd_ec_profile_ls = <insights.core.spec_factory.simple_command object>
ceph_osd_log = <insights.core.spec_factory.glob_file object>
ceph_osd_tree = <insights.core.spec_factory.simple_command object>
ceph_s = <insights.core.spec_factory.simple_command object>
ceph_socket_files = <insights.core.spec_factory.listdir object>
ceph_v = <insights.core.spec_factory.simple_command object>
certificates_enddate = <insights.core.spec_factory.simple_command object>
cgroups = <insights.core.spec_factory.simple_file object>
checkin_conf = <insights.core.spec_factory.simple_file object>
chkconfig = <insights.core.spec_factory.simple_command object>
chrony_conf = <insights.core.spec_factory.simple_file object>
chronyc_sources = <insights.core.spec_factory.simple_command object>
cib_xml = <insights.core.spec_factory.simple_file object>
cinder_api_log = <insights.core.spec_factory.first_file object>
cinder_conf = <insights.core.spec_factory.first_file object>
cinder_volume_log = <insights.core.spec_factory.simple_file object>
cloud_init_custom_network = <insights.core.spec_factory.simple_file object>
cloud_init_log = <insights.core.spec_factory.simple_file object>
cluster_conf = <insights.core.spec_factory.simple_file object>
cmdline = <insights.core.spec_factory.simple_file object>
cobbler_modules_conf = <insights.core.spec_factory.first_file object>
cobbler_settings = <insights.core.spec_factory.first_file object>
corosync = <insights.core.spec_factory.simple_file object>
corosync_conf = <insights.core.spec_factory.simple_file object>
cpe = <insights.core.spec_factory.simple_file object>
cpu_cores = <insights.core.spec_factory.glob_file object>
cpu_siblings = <insights.core.spec_factory.glob_file object>
cpu_smt_active = <insights.core.spec_factory.simple_file object>
cpu_smt_control = <insights.core.spec_factory.simple_file object>
cpu_vulns = <insights.core.spec_factory.glob_file object>
cpu_vulns_meltdown = <insights.core.spec_factory.simple_file object>
cpu_vulns_spec_store_bypass = <insights.core.spec_factory.simple_file object>
cpu_vulns_spectre_v1 = <insights.core.spec_factory.simple_file object>
cpu_vulns_spectre_v2 = <insights.core.spec_factory.simple_file object>
cpuinfo = <insights.core.spec_factory.simple_file object>
cpuinfo_max_freq = <insights.core.spec_factory.simple_file object>
cpupower_frequency_info = <insights.core.spec_factory.simple_command object>
cpuset_cpus = <insights.core.spec_factory.simple_file object>
crt = <insights.core.spec_factory.simple_command object>
crypto_policies_bind = <insights.core.spec_factory.simple_file object>
crypto_policies_config = <insights.core.spec_factory.simple_file object>
crypto_policies_opensshserver = <insights.core.spec_factory.simple_file object>
crypto_policies_state_current = <insights.core.spec_factory.simple_file object>
current_clocksource = <insights.core.spec_factory.simple_file object>
date = <insights.core.spec_factory.simple_command object>
date_iso = <insights.core.spec_factory.simple_command object>
date_utc = <insights.core.spec_factory.simple_command object>
db2licm_l = <insights.core.spec_factory.simple_command object>
dcbtool_gc_dcb = <insights.core.spec_factory.foreach_execute object>
df__al = <insights.core.spec_factory.simple_command object>
df__alP = <insights.core.spec_factory.simple_command object>
df__li = <insights.core.spec_factory.simple_command object>
dig = <insights.core.spec_factory.simple_command object>
dig_dnssec = <insights.core.spec_factory.simple_command object>
dig_edns = <insights.core.spec_factory.simple_command object>
dig_noedns = <insights.core.spec_factory.simple_command object>
dirsrv = <insights.core.spec_factory.simple_file object>
dirsrv_access = <insights.core.spec_factory.glob_file object>
dirsrv_errors = <insights.core.spec_factory.glob_file object>
display_java = <insights.core.spec_factory.simple_command object>
dmesg = <insights.core.spec_factory.simple_command object>
dmesg_log = <insights.core.spec_factory.simple_file object>
dmidecode = <insights.core.spec_factory.simple_command object>
dmsetup_info = <insights.core.spec_factory.simple_command object>
dnf_module_info = <insights.core.spec_factory.command_with_args object>
dnf_module_list = <insights.core.spec_factory.simple_command object>
dnf_module_names()[source]
dnf_modules = <insights.core.spec_factory.glob_file object>
dnsmasq_config = <insights.core.spec_factory.glob_file object>
docker_container_ids()[source]

Command: docker_container_ids

docker_container_inspect = <insights.core.spec_factory.foreach_execute object>
docker_host_machine_id = <insights.core.spec_factory.simple_file object>
docker_image_ids()[source]

Command: docker_image_ids

docker_image_inspect = <insights.core.spec_factory.foreach_execute object>
docker_info = <insights.core.spec_factory.simple_command object>
docker_installed_rpms()

Command: /bin/rpm -qa --root %s --qf %s

docker_list_containers = <insights.core.spec_factory.simple_command object>
docker_list_images = <insights.core.spec_factory.simple_command object>
docker_network = <insights.core.spec_factory.simple_file object>
docker_storage = <insights.core.spec_factory.simple_file object>
docker_storage_setup = <insights.core.spec_factory.simple_file object>
docker_sysconfig = <insights.core.spec_factory.simple_file object>
dumpdev()
dumpe2fs_h = <insights.core.spec_factory.foreach_execute object>
engine_config_all = <insights.core.spec_factory.simple_command object>
engine_log = <insights.core.spec_factory.simple_file object>
etc_journald_conf = <insights.core.spec_factory.simple_file object>
etc_journald_conf_d = <insights.core.spec_factory.glob_file object>
etc_machine_id = <insights.core.spec_factory.simple_file object>
etcd_conf = <insights.core.spec_factory.simple_file object>
ethernet_interfaces = <insights.core.spec_factory.listdir object>
ethtool = <insights.core.spec_factory.foreach_execute object>
ethtool_S = <insights.core.spec_factory.foreach_execute object>
ethtool_T = <insights.core.spec_factory.foreach_execute object>
ethtool_a = <insights.core.spec_factory.foreach_execute object>
ethtool_c = <insights.core.spec_factory.foreach_execute object>
ethtool_g = <insights.core.spec_factory.foreach_execute object>
ethtool_i = <insights.core.spec_factory.foreach_execute object>
ethtool_k = <insights.core.spec_factory.foreach_execute object>
exim_conf = <insights.core.spec_factory.simple_file object>
facter = <insights.core.spec_factory.simple_command object>
fc_match = <insights.core.spec_factory.simple_command object>
fcoeadm_i = <insights.core.spec_factory.simple_command object>
fdisk_l = <insights.core.spec_factory.simple_command object>
findmnt_lo_propagation = <insights.core.spec_factory.simple_command object>
firewalld_conf = <insights.core.spec_factory.simple_file object>
foreman_production_log = <insights.core.spec_factory.simple_file object>
foreman_proxy_conf = <insights.core.spec_factory.simple_file object>
foreman_proxy_log = <insights.core.spec_factory.simple_file object>
foreman_rake_db_migrate_status = <insights.core.spec_factory.simple_command object>
foreman_satellite_log = <insights.core.spec_factory.simple_file object>
foreman_ssl_access_ssl_log = <insights.core.spec_factory.simple_file object>
foreman_tasks_config = <insights.core.spec_factory.first_file object>
freeipa_healthcheck_log = <insights.core.spec_factory.simple_file object>
fstab = <insights.core.spec_factory.simple_file object>
galera_cnf = <insights.core.spec_factory.first_file object>
getconf_page_size = <insights.core.spec_factory.simple_command object>
getenforce = <insights.core.spec_factory.simple_command object>
getsebool = <insights.core.spec_factory.simple_command object>
glance_api_conf = <insights.core.spec_factory.first_file object>
glance_api_log = <insights.core.spec_factory.first_file object>
glance_cache_conf = <insights.core.spec_factory.first_file object>
glance_registry_conf = <insights.core.spec_factory.simple_file object>
gluster_peer_status = <insights.core.spec_factory.simple_command object>
gluster_v_info = <insights.core.spec_factory.simple_command object>
gluster_v_status = <insights.core.spec_factory.simple_command object>
gnocchi_conf = <insights.core.spec_factory.first_file object>
gnocchi_metricd_log = <insights.core.spec_factory.first_file object>
grub1_config_perms = <insights.core.spec_factory.simple_command object>
grub2_cfg = <insights.core.spec_factory.simple_file object>
grub2_efi_cfg = <insights.core.spec_factory.simple_file object>
grub_conf = <insights.core.spec_factory.simple_file object>
grub_config_perms = <insights.core.spec_factory.simple_command object>
grub_efi_conf = <insights.core.spec_factory.simple_file object>
grubby_default_index = <insights.core.spec_factory.simple_command object>
grubby_default_kernel = <insights.core.spec_factory.simple_command object>
hammer_ping = <insights.core.spec_factory.simple_command object>
hammer_task_list = <insights.core.spec_factory.simple_command object>
haproxy_cfg = <insights.core.spec_factory.first_file object>
heat_api_log = <insights.core.spec_factory.first_file object>
heat_conf = <insights.core.spec_factory.first_file object>
heat_crontab = <insights.core.spec_factory.simple_command object>
heat_crontab_container = <insights.core.spec_factory.simple_command object>
heat_engine_log = <insights.core.spec_factory.first_file object>
host_installed_rpms = <insights.core.spec_factory.simple_command object>
hostname = <insights.core.spec_factory.simple_command object>
hostname_default = <insights.core.spec_factory.simple_command object>
hostname_short = <insights.core.spec_factory.simple_command object>
hosts = <insights.core.spec_factory.simple_file object>
hponcfg_g = <insights.core.spec_factory.simple_command object>
httpd24_httpd_error_log = <insights.core.spec_factory.simple_file object>
httpd_M = <insights.core.spec_factory.foreach_execute object>
httpd_V = <insights.core.spec_factory.foreach_execute object>
httpd_access_log = <insights.core.spec_factory.simple_file object>
httpd_cmd()

Command: httpd_command

httpd_conf = <insights.core.spec_factory.glob_file object>
httpd_conf_scl_httpd24 = <insights.core.spec_factory.glob_file object>
httpd_conf_scl_jbcs_httpd24 = <insights.core.spec_factory.glob_file object>
httpd_error_log = <insights.core.spec_factory.simple_file object>
httpd_limits = <insights.core.spec_factory.foreach_collect object>
httpd_on_nfs()
httpd_pid = <insights.core.spec_factory.simple_command object>
httpd_ssl_access_log = <insights.core.spec_factory.simple_file object>
httpd_ssl_error_log = <insights.core.spec_factory.simple_file object>
ifcfg = <insights.core.spec_factory.glob_file object>
ifcfg_static_route = <insights.core.spec_factory.glob_file object>
ifconfig = <insights.core.spec_factory.simple_command object>
imagemagick_policy = <insights.core.spec_factory.glob_file object>
init_ora = <insights.core.spec_factory.simple_file object>
init_process_cgroup = <insights.core.spec_factory.simple_file object>
initscript = <insights.core.spec_factory.glob_file object>
installed_rpms = <insights.core.spec_factory.first_of object>
interrupts = <insights.core.spec_factory.simple_file object>
ip6tables = <insights.core.spec_factory.simple_command object>
ip6tables_permanent = <insights.core.spec_factory.simple_file object>
ip_addr = <insights.core.spec_factory.simple_command object>
ip_addresses = <insights.core.spec_factory.simple_command object>
ip_netns_exec_namespace_lsof = <insights.core.spec_factory.foreach_execute object>
ip_route_show_table_all = <insights.core.spec_factory.simple_command object>
ipaupgrade_log = <insights.core.spec_factory.simple_file object>
ipcs_m = <insights.core.spec_factory.simple_command object>
ipcs_m_p = <insights.core.spec_factory.simple_command object>
ipcs_s = <insights.core.spec_factory.simple_command object>
ipcs_s_i = <insights.core.spec_factory.foreach_execute object>
iptables = <insights.core.spec_factory.simple_command object>
iptables_permanent = <insights.core.spec_factory.simple_file object>
ipv4_neigh = <insights.core.spec_factory.simple_command object>
ipv6_neigh = <insights.core.spec_factory.simple_command object>
ironic_conf = <insights.core.spec_factory.first_file object>
ironic_inspector_log = <insights.core.spec_factory.simple_file object>
is_aws()
is_azure()
is_ceph_monitor()
is_sat()
iscsiadm_m_session = <insights.core.spec_factory.simple_command object>
jbcs_httpd24_httpd_error_log = <insights.core.spec_factory.simple_file object>
jboss_domain_server_log = <insights.core.spec_factory.foreach_collect object>
jboss_domain_server_log_dir()

Command: JBoss domain server log directory

jboss_home()

Command: JBoss home progress command content paths

jboss_standalone_main_config = <insights.core.spec_factory.foreach_collect object>
jboss_standalone_main_config_files()

Command: JBoss standalone main config files

jboss_version = <insights.core.spec_factory.foreach_collect object>
katello_service_status = <insights.core.spec_factory.simple_command object>
kdump_conf = <insights.core.spec_factory.simple_file object>
kerberos_kdc_log = <insights.core.spec_factory.simple_file object>
kernel_config = <insights.core.spec_factory.glob_file object>
kexec_crash_loaded = <insights.core.spec_factory.simple_file object>
kexec_crash_size = <insights.core.spec_factory.simple_file object>
keystone_conf = <insights.core.spec_factory.first_file object>
keystone_crontab = <insights.core.spec_factory.simple_command object>
keystone_crontab_container = <insights.core.spec_factory.simple_command object>
keystone_log = <insights.core.spec_factory.first_file object>
kpatch_list = <insights.core.spec_factory.simple_command object>
kpatch_patch_files = <insights.core.spec_factory.command_with_args object>
kpatch_patches_running_kernel_dir()
krb5 = <insights.core.spec_factory.glob_file object>
ksmstate = <insights.core.spec_factory.simple_file object>
kubepods_cpu_quota = <insights.core.spec_factory.glob_file object>
last_upload_globs = ['/etc/redhat-access-insights/.lastupload', '/etc/insights-client/.lastupload']
lastupload = <insights.core.spec_factory.glob_file object>
libkeyutils = <insights.core.spec_factory.simple_command object>
libkeyutils_objdumps = <insights.core.spec_factory.simple_command object>
libvirtd_log = <insights.core.spec_factory.simple_file object>
libvirtd_qemu_log = <insights.core.spec_factory.glob_file object>
limits_conf = <insights.core.spec_factory.glob_file object>
locale = <insights.core.spec_factory.simple_command object>
localtime = <insights.core.spec_factory.simple_command object>
logrotate_conf = <insights.core.spec_factory.glob_file object>
lpstat_p = <insights.core.spec_factory.simple_command object>
ls_R_var_lib_nova_instances = <insights.core.spec_factory.simple_command object>
ls_boot = <insights.core.spec_factory.simple_command object>
ls_dev = <insights.core.spec_factory.simple_command object>
ls_disk = <insights.core.spec_factory.simple_command object>
ls_docker_volumes = <insights.core.spec_factory.simple_command object>
ls_edac_mc = <insights.core.spec_factory.simple_command object>
ls_etc = <insights.core.spec_factory.simple_command object>
ls_lib_firmware = <insights.core.spec_factory.simple_command object>
ls_ocp_cni_openshift_sdn = <insights.core.spec_factory.simple_command object>
ls_origin_local_volumes_pods = <insights.core.spec_factory.simple_command object>
ls_osroot = <insights.core.spec_factory.simple_command object>
ls_run_systemd_generator = <insights.core.spec_factory.simple_command object>
ls_sys_firmware = <insights.core.spec_factory.simple_command object>
ls_usr_lib64 = <insights.core.spec_factory.simple_command object>
ls_usr_sbin = <insights.core.spec_factory.simple_command object>
ls_var_lib_mongodb = <insights.core.spec_factory.simple_command object>
ls_var_lib_nova_instances = <insights.core.spec_factory.simple_command object>
ls_var_log = <insights.core.spec_factory.simple_command object>
ls_var_opt_mssql = <insights.core.spec_factory.simple_command object>
ls_var_opt_mssql_log = <insights.core.spec_factory.simple_command object>
ls_var_run = <insights.core.spec_factory.simple_command object>
ls_var_spool_clientmq = <insights.core.spec_factory.simple_command object>
ls_var_spool_postfix_maildrop = <insights.core.spec_factory.simple_command object>
ls_var_tmp = <insights.core.spec_factory.simple_command object>
ls_var_www = <insights.core.spec_factory.simple_command object>
lsblk = <insights.core.spec_factory.simple_command object>
lsblk_pairs = <insights.core.spec_factory.simple_command object>
lscpu = <insights.core.spec_factory.simple_command object>
lsinitrd = <insights.core.spec_factory.simple_command object>
lsinitrd_lvm_conf = <insights.core.spec_factory.first_of object>
lsmod = <insights.core.spec_factory.simple_command object>
lsmod_all_names()
lsmod_only_names()
lsof = <insights.core.spec_factory.simple_command object>
lspci = <insights.core.spec_factory.simple_command object>
lssap = <insights.core.spec_factory.simple_command object>
lsscsi = <insights.core.spec_factory.simple_command object>
lvdisplay = <insights.core.spec_factory.simple_command object>
lvm_conf = <insights.core.spec_factory.simple_file object>
lvs = None
lvs_noheadings = <insights.core.spec_factory.simple_command object>
lvs_noheadings_all = <insights.core.spec_factory.simple_command object>
mac_addresses = <insights.core.spec_factory.glob_file object>
machine_id = <insights.core.spec_factory.first_file object>
manila_conf = <insights.core.spec_factory.first_file object>
mariadb_log = <insights.core.spec_factory.simple_file object>
max_uid = <insights.core.spec_factory.simple_command object>
md5chk_files = <insights.core.spec_factory.foreach_execute object>
mdstat = <insights.core.spec_factory.simple_file object>
meminfo = <insights.core.spec_factory.first_file object>
messages = <insights.core.spec_factory.simple_file object>
metadata_json = <insights.core.spec_factory.simple_file object>
mistral_executor_log = <insights.core.spec_factory.simple_file object>
mlx4_port = <insights.core.spec_factory.glob_file object>
modinfo = <insights.core.spec_factory.foreach_execute object>
modinfo_all = <insights.core.spec_factory.command_with_args object>
modinfo_i40e = <insights.core.spec_factory.simple_command object>
modinfo_igb = <insights.core.spec_factory.simple_command object>
modinfo_ixgbe = <insights.core.spec_factory.simple_command object>
modinfo_veth = <insights.core.spec_factory.simple_command object>
modinfo_vmxnet3 = <insights.core.spec_factory.simple_command object>
modprobe = <insights.core.spec_factory.glob_file object>
mongod_conf = <insights.core.spec_factory.glob_file object>
mount = <insights.core.spec_factory.simple_command object>
mounts = <insights.core.spec_factory.simple_file object>
mssql_conf = <insights.core.spec_factory.simple_file object>
multicast_querier = <insights.core.spec_factory.simple_command object>
multipath__v4__ll = <insights.core.spec_factory.simple_command object>
multipath_conf = <insights.core.spec_factory.simple_file object>
multipath_conf_initramfs = <insights.core.spec_factory.simple_command object>
mysql_log = <insights.core.spec_factory.glob_file object>
mysqladmin_status = <insights.core.spec_factory.simple_command object>
mysqladmin_vars = <insights.core.spec_factory.simple_command object>
mysqld_limits = <insights.core.spec_factory.foreach_collect object>
mysqld_pid = <insights.core.spec_factory.simple_command object>
named_checkconf_p = <insights.core.spec_factory.simple_command object>
namespace = <insights.core.spec_factory.simple_command object>
netconsole = <insights.core.spec_factory.simple_file object>
netstat = <insights.core.spec_factory.simple_command object>
netstat_agn = <insights.core.spec_factory.simple_command object>
netstat_i = <insights.core.spec_factory.simple_command object>
netstat_s = <insights.core.spec_factory.simple_command object>
networkmanager_dispatcher_d = <insights.core.spec_factory.glob_file object>
neutron_conf = <insights.core.spec_factory.first_file object>
neutron_dhcp_agent_ini = <insights.core.spec_factory.first_file object>
neutron_l3_agent_ini = <insights.core.spec_factory.first_file object>
neutron_l3_agent_log = <insights.core.spec_factory.simple_file object>
neutron_metadata_agent_ini = <insights.core.spec_factory.first_file object>
neutron_metadata_agent_log = <insights.core.spec_factory.first_file object>
neutron_ml2_conf = <insights.core.spec_factory.first_file object>
neutron_ovs_agent_log = <insights.core.spec_factory.first_file object>
neutron_plugin_ini = <insights.core.spec_factory.first_file object>
neutron_server_log = <insights.core.spec_factory.first_file object>
nfs_exports = <insights.core.spec_factory.simple_file object>
nfs_exports_d = <insights.core.spec_factory.glob_file object>
nginx_conf = <insights.core.spec_factory.glob_file object>
nmcli_conn_show = <insights.core.spec_factory.simple_command object>
nmcli_dev_show = <insights.core.spec_factory.simple_command object>
nova_api_log = <insights.core.spec_factory.first_file object>
nova_compute_log = <insights.core.spec_factory.first_file object>
nova_conf = <insights.core.spec_factory.first_file object>
nova_crontab = <insights.core.spec_factory.simple_command object>
nova_crontab_container = <insights.core.spec_factory.simple_command object>
nova_migration_uid = <insights.core.spec_factory.simple_command object>
nova_uid = <insights.core.spec_factory.simple_command object>
nscd_conf = <insights.core.spec_factory.simple_file object>
nsswitch_conf = <insights.core.spec_factory.simple_file object>
ntp_conf = <insights.core.spec_factory.simple_file object>
ntpq_leap = <insights.core.spec_factory.simple_command object>
ntpq_pn = <insights.core.spec_factory.simple_command object>
ntptime = <insights.core.spec_factory.simple_command object>
numa_cpus = <insights.core.spec_factory.glob_file object>
numeric_user_group_name = <insights.core.spec_factory.simple_command object>
nvme_core_io_timeout = <insights.core.spec_factory.simple_file object>
oc_get_bc = <insights.core.spec_factory.simple_command object>
oc_get_build = <insights.core.spec_factory.simple_command object>
oc_get_clusterrole_with_config = <insights.core.spec_factory.simple_command object>
oc_get_clusterrolebinding_with_config = <insights.core.spec_factory.simple_command object>
oc_get_configmap = <insights.core.spec_factory.simple_command object>
oc_get_dc = <insights.core.spec_factory.simple_command object>
oc_get_egressnetworkpolicy = <insights.core.spec_factory.simple_command object>
oc_get_endpoints = <insights.core.spec_factory.simple_command object>
oc_get_event = <insights.core.spec_factory.simple_command object>
oc_get_node = <insights.core.spec_factory.simple_command object>
oc_get_pod = <insights.core.spec_factory.simple_command object>
oc_get_project = <insights.core.spec_factory.simple_command object>
oc_get_pv = <insights.core.spec_factory.simple_command object>
oc_get_pvc = <insights.core.spec_factory.simple_command object>
oc_get_rc = <insights.core.spec_factory.simple_command object>
oc_get_role = <insights.core.spec_factory.simple_command object>
oc_get_rolebinding = <insights.core.spec_factory.simple_command object>
oc_get_route = <insights.core.spec_factory.simple_command object>
oc_get_service = <insights.core.spec_factory.simple_command object>
odbc_ini = <insights.core.spec_factory.simple_file object>
odbcinst_ini = <insights.core.spec_factory.simple_file object>
openshift_certificates = <insights.core.spec_factory.foreach_execute object>
openshift_fluentd_environ = <insights.core.spec_factory.foreach_collect object>
openshift_fluentd_pid = <insights.core.spec_factory.simple_command object>
openshift_hosts = <insights.core.spec_factory.simple_file object>
openshift_router_environ = <insights.core.spec_factory.foreach_collect object>
openshift_router_pid = <insights.core.spec_factory.simple_command object>
openvswitch_daemon_log = <insights.core.spec_factory.simple_file object>
openvswitch_other_config = <insights.core.spec_factory.simple_command object>
openvswitch_server_log = <insights.core.spec_factory.simple_file object>
os_release = <insights.core.spec_factory.simple_file object>
osa_dispatcher_log = <insights.core.spec_factory.first_file object>
ose_master_config = <insights.core.spec_factory.simple_file object>
ose_node_config = <insights.core.spec_factory.simple_file object>
ovirt_engine_boot_log = <insights.core.spec_factory.simple_file object>
ovirt_engine_confd = <insights.core.spec_factory.glob_file object>
ovirt_engine_console_log = <insights.core.spec_factory.simple_file object>
ovirt_engine_server_log = <insights.core.spec_factory.simple_file object>
ovirt_engine_ui_log = <insights.core.spec_factory.simple_file object>
ovs_appctl_fdb_show_bridge = <insights.core.spec_factory.foreach_execute object>
ovs_ofctl_dump_flows = <insights.core.spec_factory.foreach_execute object>
ovs_vsctl_list_br = <insights.core.spec_factory.simple_command object>
ovs_vsctl_list_bridge = <insights.core.spec_factory.simple_command object>
ovs_vsctl_show = <insights.core.spec_factory.simple_command object>
ovs_vswitchd_limits = <insights.core.spec_factory.foreach_collect object>
ovs_vswitchd_pid = <insights.core.spec_factory.simple_command object>
pacemaker_log = <insights.core.spec_factory.first_file object>
package_and_httpd()

Command: package_and_httpd

package_and_java()

Command: package_and_java

package_provides_httpd = <insights.core.spec_factory.foreach_execute object>
package_provides_java = <insights.core.spec_factory.foreach_execute object>
pam_conf = <insights.core.spec_factory.simple_file object>
parted__l = <insights.core.spec_factory.simple_command object>
partitions = <insights.core.spec_factory.simple_file object>
passenger_status = <insights.core.spec_factory.simple_command object>
password_auth = <insights.core.spec_factory.simple_file object>
pci_rport_target_disk_paths = <insights.core.spec_factory.simple_command object>
pcs_config = <insights.core.spec_factory.simple_command object>
pcs_quorum_status = <insights.core.spec_factory.simple_command object>
pcs_status = <insights.core.spec_factory.simple_command object>
pluginconf_d = <insights.core.spec_factory.glob_file object>
postgresql_conf = <insights.core.spec_factory.first_file object>
postgresql_log = <insights.core.spec_factory.first_of object>
prev_uploader_log = <insights.core.spec_factory.simple_file object>
proc_netstat = <insights.core.spec_factory.simple_file object>
proc_slabinfo = <insights.core.spec_factory.simple_file object>
proc_snmp_ipv4 = <insights.core.spec_factory.simple_file object>
proc_snmp_ipv6 = <insights.core.spec_factory.simple_file object>
proc_stat = <insights.core.spec_factory.simple_file object>
ps_alxwww = <insights.core.spec_factory.simple_command object>
ps_aux = <insights.core.spec_factory.simple_command object>
ps_auxcww = <insights.core.spec_factory.simple_command object>
ps_auxww = <insights.core.spec_factory.simple_command object>
ps_ef = <insights.core.spec_factory.simple_command object>
ps_eo = <insights.core.spec_factory.simple_command object>
pulp_worker_defaults = <insights.core.spec_factory.simple_file object>
puppetserver_config = <insights.core.spec_factory.simple_file object>
pvs = <insights.core.spec_factory.simple_command object>
pvs_noheadings = <insights.core.spec_factory.simple_command object>
pvs_noheadings_all = <insights.core.spec_factory.simple_command object>
qemu_conf = <insights.core.spec_factory.simple_file object>
qemu_xml = <insights.core.spec_factory.glob_file object>
qpid_stat_g = <insights.core.spec_factory.simple_command object>
qpid_stat_q = <insights.core.spec_factory.simple_command object>
qpid_stat_u = <insights.core.spec_factory.simple_command object>
qpidd_conf = <insights.core.spec_factory.simple_file object>
rabbitmq_env = <insights.core.spec_factory.simple_file object>
rabbitmq_logs = <insights.core.spec_factory.glob_file object>
rabbitmq_policies = <insights.core.spec_factory.simple_command object>
rabbitmq_queues = <insights.core.spec_factory.simple_command object>
rabbitmq_report = <insights.core.spec_factory.simple_command object>
rabbitmq_startup_err = <insights.core.spec_factory.simple_file object>
rabbitmq_startup_log = <insights.core.spec_factory.simple_file object>
rabbitmq_users = <insights.core.spec_factory.simple_command object>
rc_local = <insights.core.spec_factory.simple_file object>
rdma_conf = <insights.core.spec_factory.simple_file object>
redhat_release = <insights.core.spec_factory.simple_file object>
resolv_conf = <insights.core.spec_factory.simple_file object>
rhev_data_center()
rhn_charsets = <insights.core.spec_factory.simple_command object>
rhn_conf = <insights.core.spec_factory.first_file object>
rhn_entitlement_cert_xml = <insights.core.spec_factory.first_of object>
rhn_hibernate_conf = <insights.core.spec_factory.first_file object>
rhn_schema_stats = <insights.core.spec_factory.simple_command object>
rhn_schema_version = <insights.core.spec_factory.simple_command object>
rhn_search_daemon_log = <insights.core.spec_factory.first_file object>
rhn_server_satellite_log = <insights.core.spec_factory.simple_file object>
rhn_server_xmlrpc_log = <insights.core.spec_factory.first_file object>
rhn_taskomatic_daemon_log = <insights.core.spec_factory.first_file object>
rhosp_release = <insights.core.spec_factory.simple_file object>
rhsm_conf = <insights.core.spec_factory.simple_file object>
rhsm_log = <insights.core.spec_factory.simple_file object>
rhsm_releasever = <insights.core.spec_factory.simple_file object>
rhv_log_collector_analyzer = <insights.core.spec_factory.simple_command object>
rndc_status = <insights.core.spec_factory.simple_command object>
root_crontab = <insights.core.spec_factory.simple_command object>
route = <insights.core.spec_factory.simple_command object>
rpm_V_packages = <insights.core.spec_factory.simple_command object>
rpm_format = '\\{"name":"%{NAME}","epoch":"%{EPOCH}","version":"%{VERSION}","release":"%{RELEASE}","arch":"%{ARCH}","installtime":"%{INSTALLTIME:date}","buildtime":"%{BUILDTIME}","vendor":"%{VENDOR}","buildhost":"%{BUILDHOST}","sigpgp":"%{SIGPGP:pgpsig}"\\}\n'
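With this query format, rpm emits one JSON object per installed package, one per line (the braces are backslash-escaped for rpm's `--qf` syntax), so the output can be parsed line by line. A minimal sketch of that parsing, using a made-up sample line rather than real collected output:

```python
import json

# Hypothetical line as rpm would emit for one package with the rpm_format
# query format; all field values here are invented for illustration.
sample = ('{"name":"bash","epoch":"(none)","version":"4.2.46","release":"34.el7",'
          '"arch":"x86_64","installtime":"Tue 10 Apr 2018 10:00:00 AM EDT",'
          '"buildtime":"1523387000","vendor":"Red Hat, Inc.",'
          '"buildhost":"example.build.host","sigpgp":"(none)"}')

pkg = json.loads(sample)
print(pkg["name"], pkg["version"], pkg["arch"])  # bash 4.2.46 x86_64
```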
rsyslog_conf = <insights.core.spec_factory.simple_file object>
samba = <insights.core.spec_factory.simple_file object>
sap_hdb_version = <insights.core.spec_factory.foreach_execute object>
sap_host_profile = <insights.core.spec_factory.simple_file object>
sap_sid()

Get the SID.

Returns: (list) List of SIDs.

sap_sid_nr()

Get the SID and Instance Number.

Typical output of saphostctrl_listinstances:

    # /usr/sap/hostctrl/exe/saphostctrl -function ListInstances
    Inst Info : SR1 - 01 - liuxc-rhel7-hana-ent - 749, patch 418, changelist 1816226

Returns: (list) List of (SID, Instance Number) tuples.
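The "Inst Info" lines in that output can be parsed roughly as follows; this is an illustrative sketch, not the datasource's actual implementation, and the sample line is the one shown in the docstring above.

```python
def parse_instances(output):
    """Extract (SID, instance number) pairs from saphostctrl ListInstances output."""
    results = []
    for line in output.splitlines():
        if not line.strip().startswith("Inst Info"):
            continue
        # 'Inst Info : SR1 - 01 - <hostname> - 749, patch 418, changelist 1816226'
        fields = line.split(":", 1)[1].split(" - ")
        results.append((fields[0].strip(), fields[1].strip()))
    return results

sample = "Inst Info : SR1 - 01 - liuxc-rhel7-hana-ent - 749, patch 418, changelist 1816226"
print(parse_instances(sample))  # [('SR1', '01')]
```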

sapcontrol_getsystemupdatelist = <insights.core.spec_factory.foreach_execute object>
saphostctl_getcimobject_sapinstance = <insights.core.spec_factory.simple_command object>
saphostctrl_listinstances = <insights.core.spec_factory.simple_command object>
saphostexec_status = <insights.core.spec_factory.simple_command object>
saphostexec_version = <insights.core.spec_factory.simple_command object>
sat5_insights_properties = <insights.core.spec_factory.simple_file object>
satellite_custom_hiera = <insights.core.spec_factory.simple_file object>
satellite_enabled_features = <insights.core.spec_factory.simple_command object>
satellite_mongodb_storage_engine = <insights.core.spec_factory.simple_command object>
satellite_version_rb = <insights.core.spec_factory.simple_file object>
scheduler = <insights.core.spec_factory.foreach_collect object>
scsi = <insights.core.spec_factory.simple_file object>
scsi_eh_deadline = <insights.core.spec_factory.glob_file object>
scsi_fwver = <insights.core.spec_factory.glob_file object>
sctp_asc = <insights.core.spec_factory.simple_file object>
sctp_eps = <insights.core.spec_factory.simple_file object>
sctp_snmp = <insights.core.spec_factory.simple_file object>
sealert = <insights.core.spec_factory.simple_command object>
secure = <insights.core.spec_factory.simple_file object>
selinux_config = <insights.core.spec_factory.simple_file object>
semid()

Command: semids

sestatus = <insights.core.spec_factory.simple_command object>
setup_named_chroot = <insights.core.spec_factory.simple_file object>
smartctl = <insights.core.spec_factory.foreach_execute object>
smartpdc_settings = <insights.core.spec_factory.simple_file object>
smbstatus_S = <insights.core.spec_factory.simple_command object>
smbstatus_p = <insights.core.spec_factory.simple_command object>
sockstat = <insights.core.spec_factory.simple_file object>
softnet_stat = <insights.core.spec_factory.simple_file object>
software_collections_list = <insights.core.spec_factory.simple_command object>
spfile_ora = <insights.core.spec_factory.glob_file object>
ss = <insights.core.spec_factory.simple_command object>
ssh_config = <insights.core.spec_factory.simple_file object>
ssh_foreman_config = <insights.core.spec_factory.simple_file object>
ssh_foreman_proxy_config = <insights.core.spec_factory.simple_file object>
sshd_config = <insights.core.spec_factory.simple_file object>
sshd_config_perms = <insights.core.spec_factory.simple_command object>
sssd_config = <insights.core.spec_factory.simple_file object>
subscription_manager_id = <insights.core.spec_factory.simple_command object>
subscription_manager_installed_product_ids = <insights.core.spec_factory.simple_command object>
subscription_manager_release_show = <insights.core.spec_factory.simple_command object>
swift_conf = <insights.core.spec_factory.first_file object>
swift_log = <insights.core.spec_factory.first_file object>
swift_object_expirer_conf = <insights.core.spec_factory.first_file object>
swift_proxy_server_conf = <insights.core.spec_factory.first_file object>
sysconfig_chronyd = <insights.core.spec_factory.simple_file object>
sysconfig_httpd = <insights.core.spec_factory.simple_file object>
sysconfig_irqbalance = <insights.core.spec_factory.simple_file object>
sysconfig_kdump = <insights.core.spec_factory.simple_file object>
sysconfig_libvirt_guests = <insights.core.spec_factory.simple_file object>
sysconfig_memcached = <insights.core.spec_factory.first_file object>
sysconfig_mongod = <insights.core.spec_factory.glob_file object>
sysconfig_network = <insights.core.spec_factory.simple_file object>
sysconfig_ntpd = <insights.core.spec_factory.simple_file object>
sysconfig_sshd = <insights.core.spec_factory.simple_file object>
sysconfig_virt_who = <insights.core.spec_factory.simple_file object>
sysctl = <insights.core.spec_factory.simple_command object>
sysctl_conf = <insights.core.spec_factory.simple_file object>
sysctl_conf_files = <insights.core.spec_factory.listdir object>
sysctl_conf_initramfs = <insights.core.spec_factory.foreach_execute object>
systemctl_cat_rpcbind_socket = <insights.core.spec_factory.simple_command object>
systemctl_cinder_volume = <insights.core.spec_factory.simple_command object>
systemctl_httpd = <insights.core.spec_factory.simple_command object>
systemctl_list_unit_files = <insights.core.spec_factory.simple_command object>
systemctl_list_units = <insights.core.spec_factory.simple_command object>
systemctl_mariadb = <insights.core.spec_factory.simple_command object>
systemctl_nginx = <insights.core.spec_factory.simple_command object>
systemctl_pulp_celerybeat = <insights.core.spec_factory.simple_command object>
systemctl_pulp_resmg = <insights.core.spec_factory.simple_command object>
systemctl_pulp_workers = <insights.core.spec_factory.simple_command object>
systemctl_qdrouterd = <insights.core.spec_factory.simple_command object>
systemctl_qpidd = <insights.core.spec_factory.simple_command object>
systemctl_show_all_services = <insights.core.spec_factory.simple_command object>
systemctl_show_target = <insights.core.spec_factory.simple_command object>
systemctl_smartpdc = <insights.core.spec_factory.simple_command object>
systemd_docker = <insights.core.spec_factory.simple_command object>
systemd_logind_conf = <insights.core.spec_factory.simple_file object>
systemd_openshift_node = <insights.core.spec_factory.simple_command object>
systemd_system_conf = <insights.core.spec_factory.simple_file object>
systemd_system_origin_accounting = <insights.core.spec_factory.simple_file object>
systemid = <insights.core.spec_factory.first_of object>
systool_b_scsi_v = <insights.core.spec_factory.simple_command object>
tags = <insights.core.spec_factory.simple_file object>
teamdctl_config_dump = <insights.core.spec_factory.foreach_execute object>
teamdctl_state_dump = <insights.core.spec_factory.foreach_execute object>
thp_enabled = <insights.core.spec_factory.simple_file object>
thp_use_zero_page = <insights.core.spec_factory.simple_file object>
tmpfilesd = <insights.core.spec_factory.glob_file object>
tomcat_base()[source]

Path: Tomcat base path

tomcat_home_base()[source]

Command: tomcat_home_base_paths

tomcat_server_xml = <insights.core.spec_factory.first_of object>
tomcat_vdc_fallback = <insights.core.spec_factory.simple_command object>
tomcat_vdc_targeted = <insights.core.spec_factory.foreach_execute object>
tomcat_web_xml = <insights.core.spec_factory.first_of object>
tuned_adm = <insights.core.spec_factory.simple_command object>
tuned_conf = <insights.core.spec_factory.simple_file object>
udev_fc_wwpn_id_rules = <insights.core.spec_factory.simple_file object>
udev_persistent_net_rules = <insights.core.spec_factory.simple_file object>
uname = <insights.core.spec_factory.simple_command object>
up2date = <insights.core.spec_factory.simple_file object>
up2date_log = <insights.core.spec_factory.simple_file object>
uploader_log = <insights.core.spec_factory.simple_file object>
uptime = <insights.core.spec_factory.simple_command object>
usr_journald_conf_d = <insights.core.spec_factory.glob_file object>
vdo_status = <insights.core.spec_factory.simple_command object>
vdsm_conf = <insights.core.spec_factory.simple_file object>
vdsm_id = <insights.core.spec_factory.simple_file object>
vdsm_log = <insights.core.spec_factory.simple_file object>
vdsm_logger_conf = <insights.core.spec_factory.simple_file object>
vgdisplay = <insights.core.spec_factory.simple_command object>
vgs = None
vgs_noheadings = <insights.core.spec_factory.simple_command object>
vgs_noheadings_all = <insights.core.spec_factory.simple_command object>
virsh_list_all = <insights.core.spec_factory.simple_command object>
virt_uuid_facts = <insights.core.spec_factory.simple_file object>
virt_what = <insights.core.spec_factory.simple_command object>
virt_who_conf = <insights.core.spec_factory.glob_file object>
virtlogd_conf = <insights.core.spec_factory.simple_file object>
vma_ra_enabled = <insights.core.spec_factory.simple_file object>
vmcore_dmesg = <insights.core.spec_factory.glob_file object>
vmware_tools_conf = <insights.core.spec_factory.simple_file object>
vsftpd = <insights.core.spec_factory.simple_file object>
vsftpd_conf = <insights.core.spec_factory.simple_file object>
woopsie = <insights.core.spec_factory.simple_command object>
x86_ibpb_enabled = <insights.core.spec_factory.simple_file object>
x86_ibrs_enabled = <insights.core.spec_factory.simple_file object>
x86_pti_enabled = <insights.core.spec_factory.simple_file object>
x86_retp_enabled = <insights.core.spec_factory.simple_file object>
xfs_info = <insights.core.spec_factory.foreach_execute object>
xfs_mounts()[source]
xinetd_conf = <insights.core.spec_factory.glob_file object>
yum_conf = <insights.core.spec_factory.simple_file object>
yum_list_installed = <insights.core.spec_factory.simple_command object>
yum_log = <insights.core.spec_factory.simple_file object>
yum_repolist = <insights.core.spec_factory.simple_command object>
yum_repos_d = <insights.core.spec_factory.glob_file object>
zdump_v = <insights.core.spec_factory.simple_command object>
zipl_conf = <insights.core.spec_factory.simple_file object>
insights.specs.default.format_rpm(idx=None)
insights.specs.default.get_cmd_and_package_in_ps(broker, target_command)[source]
insights.specs.default.get_owner(filename)[source]

insights.specs.insights_archive

class insights.specs.insights_archive.InsightsArchiveSpecs[source]

Bases: insights.specs.Specs

all_installed_rpms = <insights.core.spec_factory.glob_file object>
auditctl_status = <insights.core.spec_factory.simple_file object>
aws_instance_id_doc = <insights.core.spec_factory.simple_file object>
aws_instance_id_pkcs7 = <insights.core.spec_factory.simple_file object>
aws_instance_type = <insights.core.spec_factory.simple_file object>
azure_instance_type = <insights.core.spec_factory.simple_file object>
bios_uuid = <insights.core.spec_factory.simple_file object>
blkid = <insights.core.spec_factory.simple_file object>
brctl_show = <insights.core.spec_factory.simple_file object>
ceph_df_detail = <insights.core.spec_factory.first_file object>
ceph_health_detail = <insights.core.spec_factory.first_file object>
ceph_insights = <insights.core.spec_factory.simple_file object>
ceph_osd_df = <insights.core.spec_factory.first_file object>
ceph_osd_dump = <insights.core.spec_factory.first_file object>
ceph_osd_ec_profile_get = <insights.core.spec_factory.simple_file object>
ceph_osd_ec_profile_ls = <insights.core.spec_factory.simple_file object>
ceph_osd_tree = <insights.core.spec_factory.first_file object>
ceph_s = <insights.core.spec_factory.first_file object>
ceph_v = <insights.core.spec_factory.simple_file object>
certificates_enddate = <insights.core.spec_factory.simple_file object>
chkconfig = <insights.core.spec_factory.simple_file object>
chronyc_sources = <insights.core.spec_factory.simple_file object>
cpupower_frequency_info = <insights.core.spec_factory.simple_file object>
crt = <insights.core.spec_factory.simple_file object>
date = <insights.core.spec_factory.simple_file object>
date_iso = <insights.core.spec_factory.simple_file object>
date_utc = <insights.core.spec_factory.simple_file object>
db2licm_l = <insights.core.spec_factory.simple_file object>
df__al = <insights.core.spec_factory.simple_file object>
df__alP = <insights.core.spec_factory.simple_file object>
df__li = <insights.core.spec_factory.simple_file object>
dig = <insights.core.spec_factory.simple_file object>
dig_dnssec = <insights.core.spec_factory.simple_file object>
dig_edns = <insights.core.spec_factory.simple_file object>
dig_noedns = <insights.core.spec_factory.simple_file object>
display_java = <insights.core.spec_factory.simple_file object>
display_name = <insights.core.spec_factory.simple_file object>
dmesg = <insights.core.spec_factory.simple_file object>
dmidecode = <insights.core.spec_factory.simple_file object>
dmsetup_info = <insights.core.spec_factory.simple_file object>
dnf_module_info = <insights.core.spec_factory.glob_file object>
dnf_module_list = <insights.core.spec_factory.simple_file object>
docker_info = <insights.core.spec_factory.simple_file object>
docker_list_containers = <insights.core.spec_factory.simple_file object>
docker_list_images = <insights.core.spec_factory.simple_file object>
engine_config_all = <insights.core.spec_factory.simple_file object>
ethtool = <insights.core.spec_factory.glob_file object>
ethtool_S = <insights.core.spec_factory.glob_file object>
ethtool_T = <insights.core.spec_factory.glob_file object>
ethtool_a = <insights.core.spec_factory.glob_file object>
ethtool_c = <insights.core.spec_factory.glob_file object>
ethtool_g = <insights.core.spec_factory.glob_file object>
ethtool_i = <insights.core.spec_factory.glob_file object>
ethtool_k = <insights.core.spec_factory.glob_file object>
facter = <insights.core.spec_factory.simple_file object>
fc_match = <insights.core.spec_factory.simple_file object>
fcoeadm_i = <insights.core.spec_factory.simple_file object>
fdisk_l = <insights.core.spec_factory.simple_file object>
findmnt_lo_propagation = <insights.core.spec_factory.simple_file object>
foreman_rake_db_migrate_status = <insights.core.spec_factory.simple_file object>
getcert_list = <insights.core.spec_factory.simple_file object>
getconf_page_size = <insights.core.spec_factory.simple_file object>
getenforce = <insights.core.spec_factory.simple_file object>
getsebool = <insights.core.spec_factory.simple_file object>
gluster_peer_status = <insights.core.spec_factory.simple_file object>
gluster_v_info = <insights.core.spec_factory.simple_file object>
gluster_v_status = <insights.core.spec_factory.simple_file object>
grub1_config_perms = <insights.core.spec_factory.simple_file object>
grub_config_perms = <insights.core.spec_factory.simple_file object>
grubby_default_index = <insights.core.spec_factory.simple_file object>
grubby_default_kernel = <insights.core.spec_factory.simple_file object>
hammer_ping = <insights.core.spec_factory.simple_file object>
hammer_task_list = <insights.core.spec_factory.simple_file object>
heat_crontab = <insights.core.spec_factory.simple_file object>
heat_crontab_container = <insights.core.spec_factory.simple_file object>
hostname = <insights.core.spec_factory.simple_file object>
hostname_default = <insights.core.spec_factory.simple_file object>
hostname_short = <insights.core.spec_factory.simple_file object>
hponcfg_g = <insights.core.spec_factory.simple_file object>
httpd_M = <insights.core.spec_factory.glob_file object>
httpd_V = <insights.core.spec_factory.glob_file object>
httpd_on_nfs = <insights.core.spec_factory.simple_file object>
httpd_pid = <insights.core.spec_factory.simple_file object>
ifconfig = <insights.core.spec_factory.simple_file object>
installed_rpms = <insights.core.spec_factory.head object>
ip6tables = <insights.core.spec_factory.simple_file object>
ip_addr = <insights.core.spec_factory.simple_file object>
ip_addresses = <insights.core.spec_factory.simple_file object>
ip_netns_exec_namespace_lsof = <insights.core.spec_factory.glob_file object>
ip_route_show_table_all = <insights.core.spec_factory.simple_file object>
ipcs_m = <insights.core.spec_factory.simple_file object>
ipcs_m_p = <insights.core.spec_factory.simple_file object>
ipcs_s = <insights.core.spec_factory.simple_file object>
iptables = <insights.core.spec_factory.simple_file object>
ipv4_neigh = <insights.core.spec_factory.simple_file object>
ipv6_neigh = <insights.core.spec_factory.simple_file object>
iscsiadm_m_session = <insights.core.spec_factory.simple_file object>
katello_service_status = <insights.core.spec_factory.simple_file object>
keystone_crontab = <insights.core.spec_factory.simple_file object>
keystone_crontab_container = <insights.core.spec_factory.simple_file object>
kpatch_list = <insights.core.spec_factory.simple_file object>
kpatch_patch_files = <insights.core.spec_factory.simple_file object>
libkeyutils = <insights.core.spec_factory.simple_file object>
libkeyutils_objdumps = <insights.core.spec_factory.simple_file object>
locale = <insights.core.spec_factory.simple_file object>
localtime = <insights.core.spec_factory.simple_file object>
lpstat_p = <insights.core.spec_factory.simple_file object>
ls_R_var_lib_nova_instances = <insights.core.spec_factory.simple_file object>
ls_boot = <insights.core.spec_factory.simple_file object>
ls_dev = <insights.core.spec_factory.simple_file object>
ls_disk = <insights.core.spec_factory.simple_file object>
ls_docker_volumes = <insights.core.spec_factory.simple_file object>
ls_edac_mc = <insights.core.spec_factory.simple_file object>
ls_etc = <insights.core.spec_factory.simple_file object>
ls_lib_firmware = <insights.core.spec_factory.simple_file object>
ls_ocp_cni_openshift_sdn = <insights.core.spec_factory.simple_file object>
ls_origin_local_volumes_pods = <insights.core.spec_factory.simple_file object>
ls_osroot = <insights.core.spec_factory.simple_file object>
ls_run_systemd_generator = <insights.core.spec_factory.simple_file object>
ls_sys_firmware = <insights.core.spec_factory.simple_file object>
ls_usr_lib64 = <insights.core.spec_factory.simple_file object>
ls_usr_sbin = <insights.core.spec_factory.simple_file object>
ls_var_lib_mongodb = <insights.core.spec_factory.simple_file object>
ls_var_lib_nova_instances = <insights.core.spec_factory.simple_file object>
ls_var_log = <insights.core.spec_factory.simple_file object>
ls_var_opt_mssql = <insights.core.spec_factory.simple_file object>
ls_var_opt_mssql_log = <insights.core.spec_factory.simple_file object>
ls_var_run = <insights.core.spec_factory.simple_file object>
ls_var_spool_clientmq = <insights.core.spec_factory.simple_file object>
ls_var_spool_postfix_maildrop = <insights.core.spec_factory.simple_file object>
ls_var_tmp = <insights.core.spec_factory.simple_file object>
ls_var_www = <insights.core.spec_factory.simple_file object>
lsblk = <insights.core.spec_factory.simple_file object>
lsblk_pairs = <insights.core.spec_factory.simple_file object>
lscpu = <insights.core.spec_factory.simple_file object>
lsinitrd = <insights.core.spec_factory.simple_file object>
lsmod = <insights.core.spec_factory.simple_file object>
lsof = <insights.core.spec_factory.simple_file object>
lspci = <insights.core.spec_factory.simple_file object>
lssap = <insights.core.spec_factory.simple_file object>
lsscsi = <insights.core.spec_factory.simple_file object>
lvdisplay = <insights.core.spec_factory.simple_file object>
lvs_noheadings = <insights.core.spec_factory.simple_file object>
lvs_noheadings_all = <insights.core.spec_factory.simple_file object>
max_uid = <insights.core.spec_factory.simple_file object>
md5chk_files = <insights.core.spec_factory.glob_file object>
modinfo = <insights.core.spec_factory.glob_file object>
modinfo_i40e = <insights.core.spec_factory.simple_file object>
modinfo_igb = <insights.core.spec_factory.simple_file object>
modinfo_ixgbe = <insights.core.spec_factory.simple_file object>
modinfo_veth = <insights.core.spec_factory.simple_file object>
modinfo_vmxnet3 = <insights.core.spec_factory.simple_file object>
mount = <insights.core.spec_factory.simple_file object>
multicast_querier = <insights.core.spec_factory.simple_file object>
multipath__v4__ll = <insights.core.spec_factory.simple_file object>
multipath_conf_initramfs = <insights.core.spec_factory.simple_file object>
mysqladmin_status = <insights.core.spec_factory.simple_file object>
mysqladmin_vars = <insights.core.spec_factory.simple_file object>
named_checkconf_p = <insights.core.spec_factory.simple_file object>
namespace = <insights.core.spec_factory.simple_file object>
netstat = <insights.core.spec_factory.simple_file object>
netstat_agn = <insights.core.spec_factory.simple_file object>
netstat_i = <insights.core.spec_factory.simple_file object>
netstat_s = <insights.core.spec_factory.simple_file object>
nmcli_conn_show = <insights.core.spec_factory.simple_file object>
nmcli_dev_show = <insights.core.spec_factory.simple_file object>
nova_crontab = <insights.core.spec_factory.simple_file object>
nova_crontab_container = <insights.core.spec_factory.simple_file object>
nova_migration_uid = <insights.core.spec_factory.simple_file object>
nova_uid = <insights.core.spec_factory.simple_file object>
ntpq_leap = <insights.core.spec_factory.simple_file object>
ntpq_pn = <insights.core.spec_factory.simple_file object>
ntptime = <insights.core.spec_factory.simple_file object>
numeric_user_group_name = <insights.core.spec_factory.simple_file object>
oc_get_bc = <insights.core.spec_factory.simple_file object>
oc_get_build = <insights.core.spec_factory.simple_file object>
oc_get_clusterrole_with_config = <insights.core.spec_factory.simple_file object>
oc_get_clusterrolebinding_with_config = <insights.core.spec_factory.simple_file object>
oc_get_configmap = <insights.core.spec_factory.simple_file object>
oc_get_dc = <insights.core.spec_factory.simple_file object>
oc_get_egressnetworkpolicy = <insights.core.spec_factory.simple_file object>
oc_get_endpoints = <insights.core.spec_factory.simple_file object>
oc_get_event = <insights.core.spec_factory.simple_file object>
oc_get_node = <insights.core.spec_factory.simple_file object>
oc_get_pod = <insights.core.spec_factory.simple_file object>
oc_get_project = <insights.core.spec_factory.simple_file object>
oc_get_pv = <insights.core.spec_factory.simple_file object>
oc_get_pvc = <insights.core.spec_factory.simple_file object>
oc_get_rc = <insights.core.spec_factory.simple_file object>
oc_get_role = <insights.core.spec_factory.simple_file object>
oc_get_rolebinding = <insights.core.spec_factory.simple_file object>
oc_get_route = <insights.core.spec_factory.simple_file object>
oc_get_service = <insights.core.spec_factory.simple_file object>
openvswitch_other_config = <insights.core.spec_factory.simple_file object>
ovs_appctl_fdb_show_bridge = <insights.core.spec_factory.glob_file object>
ovs_ofctl_dump_flows = <insights.core.spec_factory.glob_file object>
ovs_vsctl_list_bridge = <insights.core.spec_factory.simple_file object>
ovs_vsctl_show = <insights.core.spec_factory.simple_file object>
parted__l = <insights.core.spec_factory.simple_file object>
passenger_status = <insights.core.spec_factory.simple_file object>
pci_rport_target_disk_paths = <insights.core.spec_factory.simple_file object>
pcs_config = <insights.core.spec_factory.simple_file object>
pcs_quorum_status = <insights.core.spec_factory.simple_file object>
pcs_status = <insights.core.spec_factory.simple_file object>
ps_alxwww = <insights.core.spec_factory.simple_file object>
ps_aux = <insights.core.spec_factory.simple_file object>
ps_auxcww = <insights.core.spec_factory.simple_file object>
ps_auxww = <insights.core.spec_factory.simple_file object>
ps_ef = <insights.core.spec_factory.simple_file object>
ps_eo = <insights.core.spec_factory.simple_file object>
pvs = <insights.core.spec_factory.simple_file object>
pvs_noheadings = <insights.core.spec_factory.simple_file object>
pvs_noheadings_all = <insights.core.spec_factory.simple_file object>
qpid_stat_g = <insights.core.spec_factory.simple_file object>
qpid_stat_q = <insights.core.spec_factory.simple_file object>
qpid_stat_u = <insights.core.spec_factory.simple_file object>
rabbitmq_policies = <insights.core.spec_factory.simple_file object>
rabbitmq_queues = <insights.core.spec_factory.simple_file object>
rabbitmq_report = <insights.core.spec_factory.simple_file object>
rabbitmq_users = <insights.core.spec_factory.simple_file object>
rhev_data_center = <insights.core.spec_factory.simple_file object>
rhn_charsets = <insights.core.spec_factory.simple_file object>
rhn_schema_stats = <insights.core.spec_factory.simple_file object>
rhn_schema_version = <insights.core.spec_factory.simple_file object>
rhv_log_collector_analyzer = <insights.core.spec_factory.simple_file object>
rndc_status = <insights.core.spec_factory.simple_file object>
root_crontab = <insights.core.spec_factory.simple_file object>
route = <insights.core.spec_factory.simple_file object>
rpm_V_packages = <insights.core.spec_factory.simple_file object>
sapcontrol_getsystemupdatelist = <insights.core.spec_factory.simple_file object>
saphostctl_getcimobject_sapinstance = <insights.core.spec_factory.simple_file object>
saphostexec_status = <insights.core.spec_factory.simple_file object>
saphostexec_version = <insights.core.spec_factory.simple_file object>
satellite_enabled_features = <insights.core.spec_factory.simple_file object>
satellite_mongodb_storage_engine = <insights.core.spec_factory.simple_file object>
sealert = <insights.core.spec_factory.simple_file object>
sestatus = <insights.core.spec_factory.simple_file object>
smbstatus_S = <insights.core.spec_factory.simple_file object>
smbstatus_p = <insights.core.spec_factory.simple_file object>
software_collections_list = <insights.core.spec_factory.simple_file object>
ss = <insights.core.spec_factory.simple_file object>
sshd_config_perms = <insights.core.spec_factory.simple_file object>
subscription_manager_id = <insights.core.spec_factory.simple_file object>
subscription_manager_installed_product_ids = <insights.core.spec_factory.simple_file object>
subscription_manager_release_show = <insights.core.spec_factory.simple_file object>
sysctl = <insights.core.spec_factory.simple_file object>
sysctl_conf_initramfs = <insights.core.spec_factory.glob_file object>
systemctl_cat_rpcbind_socket = <insights.core.spec_factory.simple_file object>
systemctl_cinder_volume = <insights.core.spec_factory.simple_file object>
systemctl_httpd = <insights.core.spec_factory.simple_file object>
systemctl_list_unit_files = <insights.core.spec_factory.simple_file object>
systemctl_list_units = <insights.core.spec_factory.simple_file object>
systemctl_mariadb = <insights.core.spec_factory.simple_file object>
systemctl_nginx = <insights.core.spec_factory.simple_file object>
systemctl_pulp_celerybeat = <insights.core.spec_factory.simple_file object>
systemctl_pulp_resmg = <insights.core.spec_factory.simple_file object>
systemctl_pulp_workers = <insights.core.spec_factory.simple_file object>
systemctl_qdrouterd = <insights.core.spec_factory.simple_file object>
systemctl_qpidd = <insights.core.spec_factory.simple_file object>
systemctl_show_all_services = <insights.core.spec_factory.simple_file object>
systemctl_show_target = <insights.core.spec_factory.simple_file object>
systemctl_smartpdc = <insights.core.spec_factory.simple_file object>
systemd_docker = <insights.core.spec_factory.first_file object>
systemd_openshift_node = <insights.core.spec_factory.first_file object>
systool_b_scsi_v = <insights.core.spec_factory.simple_file object>
teamdctl_config_dump = <insights.core.spec_factory.glob_file object>
teamdctl_state_dump = <insights.core.spec_factory.glob_file object>
tomcat_vdc_fallback = <insights.core.spec_factory.simple_file object>
tuned_adm = <insights.core.spec_factory.simple_file object>
uname = <insights.core.spec_factory.simple_file object>
uptime = <insights.core.spec_factory.simple_file object>
vdo_status = <insights.core.spec_factory.simple_file object>
version_info = <insights.core.spec_factory.simple_file object>
vgdisplay = <insights.core.spec_factory.simple_file object>
vgs_noheadings = <insights.core.spec_factory.simple_file object>
vgs_noheadings_all = <insights.core.spec_factory.simple_file object>
virsh_list_all = <insights.core.spec_factory.simple_file object>
virt_what = <insights.core.spec_factory.simple_file object>
woopsie = <insights.core.spec_factory.simple_file object>
yum_list_installed = <insights.core.spec_factory.simple_file object>
yum_repolist = <insights.core.spec_factory.first_file object>
zdump_v = <insights.core.spec_factory.simple_file object>

insights.specs.sos_archive

class insights.specs.sos_archive.SosSpecs[source]

Bases: insights.specs.Specs

auditctl_status = <insights.core.spec_factory.simple_file object>
blkid = <insights.core.spec_factory.first_file object>
candlepin_error_log = <insights.core.spec_factory.first_of object>
candlepin_log = <insights.core.spec_factory.first_of object>
catalina_out = <insights.core.spec_factory.glob_file object>
catalina_server_log = <insights.core.spec_factory.glob_file object>
ceph_health_detail = <insights.core.spec_factory.simple_file object>
ceph_osd_tree_text = <insights.core.spec_factory.simple_file object>
ceph_report = <insights.core.spec_factory.simple_file object>
chkconfig = <insights.core.spec_factory.first_file object>
cpupower_frequency_info = <insights.core.spec_factory.simple_file object>
date = <insights.core.spec_factory.first_of object>
df__al = <insights.core.spec_factory.first_file object>
display_java = <insights.core.spec_factory.simple_file object>
dmesg = <insights.core.spec_factory.first_file object>
dmidecode = <insights.core.spec_factory.simple_file object>
dmsetup_info = <insights.core.spec_factory.simple_file object>
docker_image_inspect = <insights.core.spec_factory.glob_file object>
docker_info = <insights.core.spec_factory.simple_file object>
docker_list_containers = <insights.core.spec_factory.first_file object>
docker_list_images = <insights.core.spec_factory.simple_file object>
dumpe2fs_h = <insights.core.spec_factory.glob_file object>
ethtool = <insights.core.spec_factory.glob_file object>
ethtool_S = <insights.core.spec_factory.glob_file object>
ethtool_T = <insights.core.spec_factory.glob_file object>
ethtool_a = <insights.core.spec_factory.glob_file object>
ethtool_c = <insights.core.spec_factory.glob_file object>
ethtool_g = <insights.core.spec_factory.glob_file object>
ethtool_i = <insights.core.spec_factory.glob_file object>
ethtool_k = <insights.core.spec_factory.glob_file object>
fdisk_l_sos = <insights.core.spec_factory.first_of object>
foreman_production_log = <insights.core.spec_factory.first_of object>
foreman_proxy_conf = <insights.core.spec_factory.first_of object>
foreman_proxy_log = <insights.core.spec_factory.first_of object>
foreman_satellite_log = <insights.core.spec_factory.first_of object>
foreman_ssl_access_ssl_log = <insights.core.spec_factory.first_file object>
getcert_list = <insights.core.spec_factory.first_file object>
gluster_peer_status = <insights.core.spec_factory.simple_file object>
gluster_v_info = <insights.core.spec_factory.simple_file object>
gluster_v_status = <insights.core.spec_factory.simple_file object>
hammer_ping = <insights.core.spec_factory.first_file object>
hostname = <insights.core.spec_factory.first_file object>
hostname_default = <insights.core.spec_factory.first_file object>
hostname_short = <insights.core.spec_factory.first_file object>
httpd_M = <insights.core.spec_factory.simple_file object>
installed_rpms = <insights.core.spec_factory.first_file object>
ip_addr = <insights.core.spec_factory.first_of object>
ip_neigh_show = <insights.core.spec_factory.first_file object>
ip_route_show_table_all = <insights.core.spec_factory.simple_file object>
iptables = <insights.core.spec_factory.first_file object>
journal_since_boot = <insights.core.spec_factory.first_of object>
locale = <insights.core.spec_factory.simple_file object>
ls_boot = <insights.core.spec_factory.simple_file object>
ls_dev = <insights.core.spec_factory.first_file object>
lsblk = <insights.core.spec_factory.first_file object>
lscpu = <insights.core.spec_factory.simple_file object>
lsinitrd = <insights.core.spec_factory.simple_file object>
lsmod = <insights.core.spec_factory.simple_file object>
lsof = <insights.core.spec_factory.simple_file object>
lspci = <insights.core.spec_factory.first_of object>
lsscsi = <insights.core.spec_factory.simple_file object>
lvs = <insights.core.spec_factory.first_file object>
modinfo_all = <insights.core.spec_factory.glob_file object>
mount = <insights.core.spec_factory.simple_file object>
multipath__v4__ll = <insights.core.spec_factory.first_file object>
netstat = <insights.core.spec_factory.first_file object>
netstat_agn = <insights.core.spec_factory.first_of object>
netstat_s = <insights.core.spec_factory.simple_file object>
nmcli_dev_show = <insights.core.spec_factory.simple_file object>
nmcli_dev_show_sos = <insights.core.spec_factory.glob_file object>
ntptime = <insights.core.spec_factory.simple_file object>
openvswitch_other_config = <insights.core.spec_factory.simple_file object>
ovs_vsctl_show = <insights.core.spec_factory.simple_file object>
pcs_config = <insights.core.spec_factory.simple_file object>
pcs_quorum_status = <insights.core.spec_factory.simple_file object>
pcs_status = <insights.core.spec_factory.simple_file object>
podman_image_inspect = <insights.core.spec_factory.glob_file object>
podman_list_containers = <insights.core.spec_factory.first_file object>
podman_list_images = <insights.core.spec_factory.simple_file object>
ps_alxwww = <insights.core.spec_factory.simple_file object>
ps_aux = <insights.core.spec_factory.first_file object>
ps_auxcww = <insights.core.spec_factory.first_file object>
ps_auxww = <insights.core.spec_factory.first_file object>
puppet_ssl_cert_ca_pem = <insights.core.spec_factory.first_file object>
pvs = <insights.core.spec_factory.first_file object>
qpid_stat_q = <insights.core.spec_factory.first_file object>
qpid_stat_u = <insights.core.spec_factory.first_file object>
rabbitmq_report = <insights.core.spec_factory.simple_file object>
rabbitmq_report_of_containers = <insights.core.spec_factory.glob_file object>
rhn_charsets = <insights.core.spec_factory.first_file object>
root_crontab = <insights.core.spec_factory.first_file object>
route = <insights.core.spec_factory.simple_file object>
samba_logs = <insights.core.spec_factory.glob_file object>
sestatus = <insights.core.spec_factory.simple_file object>
sssd_logs = <insights.core.spec_factory.glob_file object>
subscription_manager_list_consumed = <insights.core.spec_factory.first_file object>
subscription_manager_list_installed = <insights.core.spec_factory.first_file object>
sysctl = <insights.core.spec_factory.simple_file object>
systemctl_list_unit_files = <insights.core.spec_factory.simple_file object>
systemctl_list_units = <insights.core.spec_factory.first_file object>
systemctl_show_all_services = <insights.core.spec_factory.simple_file object>
teamdctl_config_dump = <insights.core.spec_factory.glob_file object>
teamdctl_state_dump = <insights.core.spec_factory.glob_file object>
uname = <insights.core.spec_factory.simple_file object>
uptime = <insights.core.spec_factory.first_of object>
var_qemu_xml = <insights.core.spec_factory.glob_file object>
vdsm_import_log = <insights.core.spec_factory.glob_file object>
vgdisplay = <insights.core.spec_factory.first_file object>
vgs = <insights.core.spec_factory.first_file object>
xfs_info = <insights.core.spec_factory.glob_file object>
yum_repolist = <insights.core.spec_factory.simple_file object>

insights.specs.jdr_archive

This module defines all datasources for the JDR report

class insights.specs.jdr_archive.JDRSpecs[source]

Bases: insights.specs.Specs

A class for all the JDR report datasources

jboss_domain_server_log = <insights.core.spec_factory.foreach_collect object>
jboss_domain_servers = <insights.core.spec_factory.listdir object>
jboss_standalone_conf_file()[source]

Determine which JBoss standalone configuration file is in use from the server log

jboss_standalone_main_config = <insights.core.spec_factory.foreach_collect object>
jboss_standalone_server_log = <insights.core.spec_factory.glob_file object>

insights.tests

class insights.tests.InputData(name=None, hostname=None)[source]

Bases: object

Helper class used with integrate. The role of this class is to represent data files sent to parsers and rules in a format similar to what lies on disk.

Example Usage:

input_data = InputData()
input_data.add("messages", "this is some messages content")
input_data.add("uname", "this is some uname content")

If release is specified when InputData is constructed, it is added to every mock record created by add. This is useful for testing parsers that rely on context.release.

If path is specified when calling the add method, the record will contain the specified value in the context.path field. This is useful for testing pattern-like file parsers.

add(spec, content, path=None, do_filter=True)[source]
add_component(comp, obj)[source]

Allow adding arbitrary objects as components. This allows tests to mock components that have external dependencies so their dependents can be integration tested.

clone(name)[source]
get(key, default)[source]
items()[source]
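The record-keeping behavior described above can be sketched with a small stand-in class. This is a hypothetical, simplified mimic, not the insights.tests.InputData implementation; the method names mirror those listed above, but the storage and signatures are assumptions for illustration.

```python
class MiniInputData(object):
    """Hypothetical, simplified stand-in for insights.tests.InputData:
    stores spec-name -> content records the way mock data files would
    lay out on disk. Illustration only, not the real implementation."""

    def __init__(self, name=None, hostname=None):
        self.name = name
        self.hostname = hostname
        self.records = {}

    def add(self, spec, content, path=None):
        # Store one mock data record for the given spec name; an optional
        # path ends up in the record, mirroring the context.path behavior
        # described above.
        self.records[spec] = {"content": content, "path": path}
        return self

    def get(self, key, default=None):
        return self.records.get(key, default)

    def items(self):
        return self.records.items()

    def clone(self, name):
        # Copy the records into a new instance under a different name.
        other = MiniInputData(name, self.hostname)
        other.records = dict(self.records)
        return other


data = MiniInputData("case1")
data.add("messages", "this is some messages content")
data.add("uname", "this is some uname content")
```

The real class is driven by the insights spec machinery; this sketch only shows the shape of the add/get/items/clone interface.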
insights.tests.archive_provider(component, test_func=<function deep_compare>, stride=1)[source]

Decorator used to register generator functions that yield InputData and expected response tuples. These generators will be consumed by py.test such that:

  • Each InputData will be passed into an integrate() function

  • The result will be compared against the expected value from the tuple using deep_compare() (or the supplied test_func).

Parameters
  • component (str) -- The component to be tested.

  • test_func (function) -- A custom comparison function with the parameters (result, expected). This overrides the use of the deep_compare() function.

  • stride (int) -- Yield every stride-th InputData object rather than the full set. This is used to provide a faster execution path in some test setups.


insights.tests.context_wrap(lines, path='path', hostname='hostname.example.com', release='Red Hat Enterprise Linux Server release 7.2 (Maipo)', version='-1.-1', machine_id='machine_id', strip=True, split=True, **kwargs)[source]
insights.tests.create_metadata(system_id, product)[source]
insights.tests.deep_compare(result, expected)[source]

Deep compare rule reducer results when testing.
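The idea behind a deep comparison can be sketched in a few lines. This is a hypothetical illustration of the technique, not the insights.tests.deep_compare implementation: it recurses into nested dicts and sequences so an assertion failure points at the differing leaf value rather than the whole structure.

```python
def deep_compare_sketch(result, expected):
    # Hypothetical sketch, not the insights implementation: recurse
    # into dicts and sequences, asserting equality at every leaf.
    if isinstance(expected, dict):
        assert isinstance(result, dict), result
        assert set(result) == set(expected), (set(result), set(expected))
        for key in expected:
            deep_compare_sketch(result[key], expected[key])
    elif isinstance(expected, (list, tuple)):
        assert len(result) == len(expected), (result, expected)
        for r, e in zip(result, expected):
            deep_compare_sketch(r, e)
    else:
        assert result == expected, (result, expected)
```

A matching pair of nested structures passes silently; any mismatch raises AssertionError carrying the offending values.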

insights.tests.integrate(input_data, component)[source]
insights.tests.redhat_release(major, minor='')[source]

Helper function to construct a redhat-release string for a specific RHEL major and minor version. Only constructs release strings for RHEL major versions 4, 5, 6, and 7.

Parameters
  • major -- RHEL major number. Accepts str, int or float (as major.minor)

  • minor -- RHEL minor number. Optional and accepts str or int

For example, to construct a redhat-release for:

RHEL4U9:  redhat_release('4.9') or (4.9) or (4, 9)
RHEL5 GA: redhat_release('5')   or (5.0) or (5, 0) or (5)
RHEL6.6:  redhat_release('6.6') or (6.6) or (6, 6)
RHEL7.1:  redhat_release('7.1') or (7.1) or (7, 1)

Limitation with float args: a minor of .10 (e.g. 7.10) will be parsed as minor = 1, because the trailing zero is lost when the float is converted to a string.
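The limitation follows from how Python renders floats: 7.10 and 7.1 are the same value, so the trailing zero of the minor number cannot survive conversion to a string.

```python
# Demonstration of the float-argument limitation: the trailing zero of
# ".10" is lost before the helper can see it.
major, _, minor = str(7.10).partition(".")
print(major, minor)  # prints: 7 1
```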

insights.tests.run_input_data(component, input_data)[source]
insights.tests.run_test(component, input_data, expected=None)[source]

insights.tools

The cat module allows you to execute an insights datasource and write its output to stdout. A string representation of the datasource is written to stderr before the output.

>>> insights-cat hostname
CommandOutputProvider("/usr/bin/hostname -f")
alonzo

Pass -q if you want only the datasource information.

>>> insights-cat -q ethtool
CommandOutputProvider("/sbin/ethtool docker0")
CommandOutputProvider("/sbin/ethtool enp0s31f6")
CommandOutputProvider("/sbin/ethtool lo")
CommandOutputProvider("/sbin/ethtool tun0")
CommandOutputProvider("/sbin/ethtool virbr0")
CommandOutputProvider("/sbin/ethtool virbr0-nic")
CommandOutputProvider("/sbin/ethtool wlp3s0")
insights.tools.cat.configure(config)[source]
insights.tools.cat.configure_logging(debug)[source]
insights.tools.cat.create_broker(root=None)[source]
insights.tools.cat.dump_error(spec, broker)[source]
insights.tools.cat.dump_spec(value, quiet=False, no_header=False)[source]
insights.tools.cat.get_spec(fqdn)[source]
insights.tools.cat.load_plugins(raw)[source]
insights.tools.cat.main()[source]
insights.tools.cat.parse_args()[source]
insights.tools.cat.parse_plugins(raw)[source]
insights.tools.cat.run(spec, archive=None, quiet=False, no_header=False)[source]

The inspect module allows you to execute an insights component (parser, combiner, rule, or datasource) and drops you into an IPython session where you can inspect the outcome.

>>> insights-inspect insights.parsers.hostname.Hostname
IPython Console Usage Info:
Enter 'Hostname.' and tab to get a list of properties
Example:
In [1]: Hostname.<property_name>
Out[1]: <property value>
To exit ipython enter 'exit' and hit enter or use 'CTL D'
Python 3.6.6 (default, Jul 19 2018, 14:25:17)
Type "copyright", "credits" or "license" for more information.
IPython 5.8.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.
In [1]: Hostname.fqdn
Out[1]: 'lhuett.usersys.redhat.com'
>>> insights-inspect insights.combiners.hostname.hostname
IPython Console Usage Info:
Enter 'hostname.' and tab to get a list of properties
Example:
In [1]: hostname.<property_name>
Out[1]: <property value>
To exit ipython enter 'exit' and hit enter or use 'CTL D'
Python 3.6.6 (default, Jul 19 2018, 14:25:17)
Type "copyright", "credits" or "license" for more information.
IPython 5.8.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.
In [1]: hostname.fqdn
Out[1]: 'lhuett.usersys.redhat.com'
>>> insights-inspect insights.specs.Specs.hostname
IPython Console Usage Info:
Enter 'hostname.' and tab to get a list of properties
Example:
In [1]: hostname.<property_name>
Out[1]: <property value>
To exit ipython enter 'exit' and hit enter or use 'CTL D'
Python 3.6.6 (default, Jul 19 2018, 14:25:17)
Type "copyright", "credits" or "license" for more information.
IPython 5.8.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.
In [1]: hostname.cmd
Out[1]: '/usr/bin/hostname -f'
>>> insights-inspect examples.rules.bash_version.report
IPython Console Usage Info:
Enter 'report.' and tab to get a list of properties
Example:
In [1]: report.<property_name>
Out[1]: <property value>
To exit ipython enter 'exit' and hit enter or use 'CTL D'
Python 3.6.6 (default, Jul 19 2018, 14:25:17)
Type "copyright", "credits" or "license" for more information.
IPython 5.8.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.
In [1]: report.values()
Out[1]: dict_values([0:bash-4.2.46-31.el7, 'pass', 'BASH_VERSION'])
In [2]: broker.keys()
Out[2]: dict_keys([<class 'insights.core.context.HostArchiveContext'>,
<insights.core.spec_factory.glob_file object at 0x7f0303c93390>,
<insights.core.spec_factory.head object at 0x7f0303cabef0>,
insights.specs.Specs.installed_rpms,
<class 'insights.parsers.installed_rpms.InstalledRpms'>,
<function report at 0x7f0303c679d8>])
In [3]: import insights
In [4]: p = broker[insights.parsers.installed_rpms.InstalledRpms]
In [5]: a = p.get_max('bash')
In [6]: a.nevra
Out[6]: 'bash-0:4.2.46-31.el7.x86_64'
insights.tools.insights_inspect.configure(config)[source]
insights.tools.insights_inspect.configure_logging(debug)[source]
insights.tools.insights_inspect.create_broker(root=None)[source]
insights.tools.insights_inspect.dump_error(component, broker)[source]
insights.tools.insights_inspect.get_component(fqdn)[source]
insights.tools.insights_inspect.get_ipshell()[source]
insights.tools.insights_inspect.main()[source]
insights.tools.insights_inspect.parse_args()[source]
insights.tools.insights_inspect.run(component, archive=None)[source]

Allow users to interrogate components.

insights-info foo bar baz will search for all datasources that might handle files or commands named foo, bar, or baz, along with all components that could be activated if those datasources were present and valid.

insights-info -i insights.specs.Specs.hosts will display dependency information about the hosts datasource.

insights-info -d insights.parsers.hosts.Hosts will display the pydoc information about the Hosts parser.

The script accepts several other options; run insights-info -h for details.

insights.tools.query.apply_configuration(path)[source]
insights.tools.query.create_broker(components)[source]
insights.tools.query.dry_run(graph=defaultdict(<class 'set'>, {...}))[source]

The default value of graph, elided here, is the full component graph: a defaultdict mapping each datasource in insights.specs to the set of spec_factory providers (simple_file, simple_command, glob_file, first_file, etc.) registered for it.
{<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.iscsiadm_m_session: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.jboss_domain_server_log: {<insights.core.spec_factory.foreach_collect object>, <insights.core.spec_factory.foreach_collect object>}, insights.specs.Specs.jboss_standalone_server_log: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.jboss_standalone_main_config: {<insights.core.spec_factory.foreach_collect object>, <insights.core.spec_factory.foreach_collect object>}, insights.specs.Specs.jboss_version: {<insights.core.spec_factory.foreach_collect object>}, insights.specs.Specs.journal_since_boot: {<insights.core.spec_factory.first_of object>}, insights.specs.Specs.katello_service_status: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.kdump_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.kerberos_kdc_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.kernel_config: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.kexec_crash_loaded: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.kexec_crash_size: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.keystone_conf: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.keystone_crontab: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.keystone_crontab_container: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.keystone_log: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.kpatch_list: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, 
insights.specs.Specs.kpatch_patch_files: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.command_with_args object>}, insights.specs.Specs.krb5: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.ksmstate: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.kubepods_cpu_quota: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.lastupload: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.libkeyutils_objdumps: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.libkeyutils: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.libvirtd_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.libvirtd_qemu_log: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.limits_conf: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.locale: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.localtime: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.logrotate_conf: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.lpstat_p: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_boot: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ls_dev: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.first_file object>}, insights.specs.Specs.ls_disk: {<insights.core.spec_factory.simple_file object>, 
<insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_docker_volumes: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_edac_mc: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ls_etc: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_lib_firmware: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_ocp_cni_openshift_sdn: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_origin_local_volumes_pods: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ls_osroot: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_run_systemd_generator: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_R_var_lib_nova_instances: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_sys_firmware: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_usr_lib64: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_usr_sbin: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_var_lib_mongodb: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_var_lib_nova_instances: {<insights.core.spec_factory.simple_command 
object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ls_var_log: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_var_opt_mssql: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_var_opt_mssql_log: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_var_run: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_var_spool_clientmq: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_var_spool_postfix_maildrop: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_var_tmp: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ls_var_www: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.lsblk: {<insights.core.spec_factory.first_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.lsblk_pairs: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.lscpu: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.lsinitrd: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.lsinitrd_lvm_conf: {<insights.core.spec_factory.first_of object>}, insights.specs.Specs.lsmod: 
{<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.lsof: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.lspci: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.first_of object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.lssap: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.lsscsi: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.lvdisplay: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.lvm_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.lvs_noheadings: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.lvs_noheadings_all: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.lvs: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.mac_addresses: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.machine_id: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.manila_conf: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.mariadb_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.max_uid: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.md5chk_files: {<insights.core.spec_factory.glob_file object>, <insights.core.spec_factory.foreach_execute 
object>}, insights.specs.Specs.mdstat: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.meminfo: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.messages: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.metadata_json: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.mistral_executor_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.mlx4_port: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.modinfo_i40e: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.modinfo_igb: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.modinfo_ixgbe: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.modinfo_veth: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.modinfo_vmxnet3: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.modinfo: {<insights.core.spec_factory.foreach_execute object>, <insights.core.spec_factory.glob_file object>}, insights.specs.Specs.modinfo_all: {<insights.core.spec_factory.glob_file object>, <insights.core.spec_factory.command_with_args object>}, insights.specs.Specs.modprobe: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.module: set(), insights.specs.Specs.mongod_conf: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.mount: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.mounts: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.mssql_conf: 
{<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.multicast_querier: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.multipath_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.multipath_conf_initramfs: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.multipath__v4__ll: {<insights.core.spec_factory.first_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.mysqladmin_status: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.mysqladmin_vars: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.mysql_log: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.mysqld_limits: {<insights.core.spec_factory.foreach_collect object>}, insights.specs.Specs.named_checkconf_p: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.namespace: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.netconsole: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.netstat_agn: {<insights.core.spec_factory.first_of object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.netstat_i: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.netstat: {<insights.core.spec_factory.first_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.netstat_s: 
{<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.networkmanager_dispatcher_d: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.neutron_conf: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.neutron_dhcp_agent_ini: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.neutron_l3_agent_ini: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.neutron_l3_agent_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.neutron_metadata_agent_ini: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.neutron_metadata_agent_log: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.neutron_ml2_conf: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.neutron_ovs_agent_log: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.neutron_plugin_ini: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.neutron_server_log: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.nfnetlink_queue: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.nfs_exports_d: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.nfs_exports: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.nginx_conf: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.nmcli_conn_show: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.nmcli_dev_show: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.nmcli_dev_show_sos: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.nova_api_log: 
{<insights.core.spec_factory.first_file object>}, insights.specs.Specs.nova_compute_log: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.nova_conf: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.nova_crontab: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.nova_crontab_container: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.nova_uid: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.nova_migration_uid: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.nscd_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.nsswitch_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ntp_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ntpq_leap: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ntpq_pn: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ntptime: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.numa_cpus: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.numeric_user_group_name: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.nvme_core_io_timeout: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.oc_get_bc: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.oc_get_build: 
{<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.oc_get_clusterrole_with_config: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.oc_get_clusterrolebinding_with_config: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.oc_get_dc: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.oc_get_egressnetworkpolicy: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.oc_get_endpoints: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.oc_get_event: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.oc_get_node: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.oc_get_pod: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.oc_get_project: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.oc_get_pvc: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.oc_get_pv: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.oc_get_rc: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.oc_get_rolebinding: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.oc_get_role: 
{<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.oc_get_route: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.oc_get_service: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.oc_get_configmap: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.odbc_ini: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.odbcinst_ini: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.openvswitch_other_config: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.core.spec_factory.RegistryPoint: set(), insights.specs.Specs.openshift_certificates: {<insights.core.spec_factory.foreach_execute object>}, insights.specs.Specs.openshift_fluentd_environ: {<insights.core.spec_factory.foreach_collect object>}, insights.specs.Specs.openshift_hosts: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.openshift_router_environ: {<insights.core.spec_factory.foreach_collect object>}, insights.specs.Specs.openvswitch_daemon_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.openvswitch_server_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.osa_dispatcher_log: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.ose_master_config: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ose_node_config: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.os_release: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ovirt_engine_boot_log: {<insights.core.spec_factory.simple_file object>}, 
insights.specs.Specs.ovirt_engine_confd: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.ovirt_engine_console_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ovirt_engine_server_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ovirt_engine_ui_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ovs_appctl_fdb_show_bridge: {<insights.core.spec_factory.glob_file object>, <insights.core.spec_factory.foreach_execute object>}, insights.specs.Specs.ovs_ofctl_dump_flows: {<insights.core.spec_factory.foreach_execute object>, <insights.core.spec_factory.glob_file object>}, insights.specs.Specs.ovs_vsctl_list_bridge: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ovs_vsctl_show: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.ovs_vswitchd_limits: {<insights.core.spec_factory.foreach_collect object>}, insights.specs.Specs.pacemaker_log: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.package_provides_java: {<insights.core.spec_factory.foreach_execute object>}, insights.specs.Specs.package_provides_httpd: {<insights.core.spec_factory.foreach_execute object>}, insights.specs.Specs.pam_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.parted__l: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.partitions: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.passenger_status: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.password_auth: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.pci_rport_target_disk_paths: 
{<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.pcs_config: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.pcs_quorum_status: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.pcs_status: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.pluginconf_d: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.podman_container_inspect: set(), insights.specs.Specs.podman_image_inspect: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.podman_list_containers: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.podman_list_images: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.postgresql_conf: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.postgresql_log: {<insights.core.spec_factory.first_of object>}, insights.specs.Specs.prev_uploader_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.proc_netstat: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.proc_slabinfo: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.proc_snmp_ipv4: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.proc_snmp_ipv6: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.proc_stat: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ps_alxwww: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ps_aux: 
{<insights.core.spec_factory.first_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ps_auxcww: {<insights.core.spec_factory.first_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ps_auxww: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.first_file object>}, insights.specs.Specs.ps_ef: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ps_eo: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.pulp_worker_defaults: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.puppet_ssl_cert_ca_pem: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.puppetserver_config: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.pvs_noheadings: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.pvs_noheadings_all: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.pvs: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.first_file object>}, insights.specs.Specs.qemu_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.qemu_xml: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.qpid_stat_g: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.qpid_stat_q: {<insights.core.spec_factory.first_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command 
object>}, insights.specs.Specs.qpid_stat_u: {<insights.core.spec_factory.first_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.qpidd_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rabbitmq_env: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rabbitmq_logs: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.rabbitmq_policies: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.rabbitmq_queues: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.rabbitmq_report: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.rabbitmq_report_of_containers: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.rabbitmq_startup_err: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rabbitmq_startup_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rabbitmq_users: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rc_local: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rdma_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.readlink_e_etc_mtab: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.redhat_release: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.resolv_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rhev_data_center: {<insights.core.spec_factory.simple_file object>, <function DefaultSpecs.rhev_data_center>}, 
insights.specs.Specs.rhv_log_collector_analyzer: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.rhn_charsets: {<insights.core.spec_factory.first_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rhn_conf: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.rhn_entitlement_cert_xml: {<insights.core.spec_factory.first_of object>}, insights.specs.Specs.rhn_hibernate_conf: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.rhn_schema_stats: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rhn_schema_version: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.rhn_search_daemon_log: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.rhn_server_satellite_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rhn_server_xmlrpc_log: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.rhn_taskomatic_daemon_log: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.rhosp_release: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rhsm_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rhsm_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rhsm_releasever: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rndc_status: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.root_crontab: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.first_file object>}, insights.specs.Specs.route: 
{<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.rpm_V_packages: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.rsyslog_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.running_java: set(), insights.specs.Specs.samba: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sap_hdb_version: {<insights.core.spec_factory.foreach_execute object>}, insights.specs.Specs.sap_host_profile: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sapcontrol_getsystemupdatelist: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.foreach_execute object>}, insights.specs.Specs.saphostctl_getcimobject_sapinstance: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.saphostexec_status: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.saphostexec_version: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sat5_insights_properties: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.satellite_mongodb_storage_engine: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.satellite_version_rb: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.satellite_custom_hiera: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.scheduler: {<insights.core.spec_factory.foreach_collect object>}, insights.specs.Specs.scsi: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sctp_asc: {<insights.core.spec_factory.simple_file object>}, 
insights.specs.Specs.sctp_eps: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sctp_snmp: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.scsi_eh_deadline: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.scsi_fwver: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.sealert: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.secure: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.selinux_config: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.semid: {<function DefaultSpecs.semid>}, insights.specs.Specs.sestatus: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.setup_named_chroot: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.smartctl: {<insights.core.spec_factory.foreach_execute object>}, insights.specs.Specs.smartpdc_settings: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.smbstatus_p: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.smbstatus_S: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.sockstat: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.softnet_stat: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.software_collections_list: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.spfile_ora: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.ssh_config: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ssh_foreman_config: {<insights.core.spec_factory.simple_file object>}, 
insights.specs.Specs.ssh_foreman_proxy_config: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sshd_config_perms: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.sshd_config: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.ss: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sssd_config: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sssd_logs: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.samba_logs: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.subscription_manager_id: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.subscription_manager_list_consumed: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.subscription_manager_list_installed: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.subscription_manager_installed_product_ids: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.subscription_manager_release_show: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.swift_conf: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.swift_log: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.swift_object_expirer_conf: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.swift_proxy_server_conf: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.sysconfig_chronyd: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sysconfig_httpd: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sysconfig_irqbalance: 
{<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sysconfig_kdump: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sysconfig_libvirt_guests: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sysconfig_memcached: {<insights.core.spec_factory.first_file object>}, insights.specs.Specs.sysconfig_mongod: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.sysconfig_network: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sysconfig_ntpd: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sysconfig_prelink: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sysconfig_sshd: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sysconfig_virt_who: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sysctl_conf_initramfs: {<insights.core.spec_factory.foreach_execute object>, <insights.core.spec_factory.glob_file object>}, insights.specs.Specs.sysctl_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.sysctl: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.systemctl_cat_rpcbind_socket: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.systemctl_cinder_volume: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.systemctl_httpd: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.systemctl_nginx: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.systemctl_list_unit_files: {<insights.core.spec_factory.simple_file object>, 
<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.systemctl_list_units: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.first_file object>}, insights.specs.Specs.systemctl_mariadb: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.systemctl_pulp_workers: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.systemctl_pulp_resmg: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.systemctl_pulp_celerybeat: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.systemctl_qpidd: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.systemctl_qdrouterd: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.systemctl_show_all_services: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.systemctl_show_target: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.systemctl_smartpdc: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.systemd_docker: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.first_file object>}, insights.specs.Specs.systemd_logind_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.systemd_openshift_node: {<insights.core.spec_factory.simple_command object>, 
<insights.core.spec_factory.first_file object>}, insights.specs.Specs.systemd_system_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.systemd_system_origin_accounting: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.systemid: {<insights.core.spec_factory.first_of object>}, insights.specs.Specs.systool_b_scsi_v: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.tags: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.teamdctl_config_dump: {<insights.core.spec_factory.foreach_execute object>, <insights.core.spec_factory.glob_file object>, <insights.core.spec_factory.glob_file object>}, insights.specs.Specs.teamdctl_state_dump: {<insights.core.spec_factory.glob_file object>, <insights.core.spec_factory.glob_file object>, <insights.core.spec_factory.foreach_execute object>}, insights.specs.Specs.thp_enabled: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.thp_use_zero_page: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.tmpfilesd: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.tomcat_server_xml: {<insights.core.spec_factory.first_of object>}, insights.specs.Specs.tomcat_vdc_fallback: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.tomcat_vdc_targeted: {<insights.core.spec_factory.foreach_execute object>}, insights.specs.Specs.tomcat_web_xml: {<insights.core.spec_factory.first_of object>}, insights.specs.Specs.tuned_adm: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.tuned_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.udev_persistent_net_rules: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.udev_fc_wwpn_id_rules: 
{<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.uname: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.up2date: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.up2date_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.uploader_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.uptime: {<insights.core.spec_factory.first_of object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.usr_journald_conf_d: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.var_qemu_xml: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.vdsm_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.vdsm_id: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.vdsm_import_log: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.vdsm_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.vdsm_logger_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.version_info: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.vdo_status: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.vgdisplay: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.first_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.vgs_noheadings: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.vgs_noheadings_all: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.vgs: 
{<insights.core.spec_factory.first_file object>}, insights.specs.Specs.virsh_list_all: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.virt_what: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.virt_who_conf: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.virtlogd_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.vma_ra_enabled: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.vmcore_dmesg: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.vmware_tools_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.vsftpd_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.vsftpd: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.woopsie: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.x86_pti_enabled: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.x86_ibpb_enabled: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.x86_ibrs_enabled: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.x86_retp_enabled: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.xfs_info: {<insights.core.spec_factory.foreach_execute object>, <insights.core.spec_factory.glob_file object>}, insights.specs.Specs.xinetd_conf: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.yum_conf: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.yum_list_installed: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.yum_log: {<insights.core.spec_factory.simple_file object>}, insights.specs.Specs.yum_repolist: 
{<insights.core.spec_factory.first_file object>, <insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_file object>}, insights.specs.Specs.yum_repos_d: {<insights.core.spec_factory.glob_file object>}, insights.specs.Specs.zdump_v: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_command object>}, insights.specs.Specs.zipl_conf: {<insights.core.spec_factory.simple_file object>}, <class 'insights.parsers.mount.Mount'>: {insights.specs.Specs.mount}, <class 'insights.parsers.mount.ProcMounts'>: {insights.specs.Specs.mounts}, <class 'insights.parsers.dnf_module.DnfModuleList'>: {insights.specs.Specs.dnf_module_list}, <class 'insights.parsers.dnf_module.DnfModuleInfo'>: {insights.specs.Specs.dnf_module_info}, <class 'insights.parsers.uname.Uname'>: {insights.specs.Specs.uname}, <class 'insights.parsers.installed_rpms.InstalledRpms'>: {insights.specs.Specs.installed_rpms}, <class 'insights.parsers.dmidecode.DMIDecode'>: {insights.specs.Specs.dmidecode}, <class 'insights.parsers.yum.YumRepoList'>: {insights.specs.Specs.yum_repolist}, <class 'insights.combiners.cloud_provider.CloudProvider'>: {<class 'insights.parsers.yum.YumRepoList'>, <class 'insights.parsers.dmidecode.DMIDecode'>, <class 'insights.parsers.installed_rpms.InstalledRpms'>}, <class 'insights.parsers.satellite_version.Satellite6Version'>: {insights.specs.Specs.satellite_version_rb}, <class 'insights.combiners.satellite_version.SatelliteVersion'>: {<class 'insights.parsers.satellite_version.Satellite6Version'>, <class 'insights.parsers.installed_rpms.InstalledRpms'>}, <class 'insights.combiners.satellite_version.CapsuleVersion'>: {<class 'insights.parsers.installed_rpms.InstalledRpms'>}, <class 'insights.parsers.redhat_release.RedhatRelease'>: {insights.specs.Specs.redhat_release}, <function redhat_release>: {<class 'insights.parsers.redhat_release.RedhatRelease'>, <class 'insights.parsers.uname.Uname'>}, <class 
'insights.combiners.redhat_release.RedHatRelease'>: {<class 'insights.parsers.redhat_release.RedhatRelease'>, <class 'insights.parsers.uname.Uname'>}, <class 'insights.components.rhel_version.IsRhel6'>: {<class 'insights.combiners.redhat_release.RedHatRelease'>}, <class 'insights.components.rhel_version.IsRhel7'>: {<class 'insights.combiners.redhat_release.RedHatRelease'>}, <class 'insights.components.rhel_version.IsRhel8'>: {<class 'insights.combiners.redhat_release.RedHatRelease'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 
'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <function 
DefaultSpecs.is_aws>: {<class 'insights.combiners.cloud_provider.CloudProvider'>}, <insights.core.spec_factory.simple_command object>: {<function DefaultSpecs.is_aws>, <class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<function DefaultSpecs.is_aws>, <class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<function DefaultSpecs.is_aws>, <class 'insights.core.context.HostContext'>}, <function DefaultSpecs.is_azure>: {<class 'insights.combiners.cloud_provider.CloudProvider'>}, <insights.core.spec_factory.simple_command object>: {<function DefaultSpecs.is_azure>, <class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 
'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 
'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 
'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <function DefaultSpecs.tomcat_base>: {<insights.core.spec_factory.simple_command object>}, <insights.core.spec_factory.foreach_collect object>: {<class 'insights.core.context.HostContext'>, <function DefaultSpecs.tomcat_base>}, <insights.core.spec_factory.foreach_collect object>: {<class 'insights.core.context.HostContext'>, <function DefaultSpecs.tomcat_base>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 
'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>},
... (remaining registry entries elided for brevity: each spec_factory datasource, such as simple_file, first_file, glob_file, listdir, simple_command, foreach_execute, and command_with_args, maps to the set of contexts and components it depends on. File-based datasources depend on the full set of host and archive contexts, while command-based datasources depend on HostContext, in some cases together with another component such as DefaultSpecs.is_ceph_monitor or insights.components.rhel_version.IsRhel8) ...
<insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 
'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 
'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, 
<insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, 
<insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, 
<insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, 
<insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 
'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, 
<insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.foreach_collect object>: {<insights.core.spec_factory.simple_command object>, <class 'insights.core.context.HostContext'>}, <function DefaultSpecs.is_sat>: {<class 
'insights.combiners.satellite_version.SatelliteVersion'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>, <function DefaultSpecs.is_sat>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <function DefaultSpecs.httpd_cmd>: {<insights.core.spec_factory.simple_command object>}, <function DefaultSpecs.httpd_on_nfs>: {<class 'insights.parsers.mount.Mount'>}, <insights.core.spec_factory.foreach_execute object>: {<class 'insights.core.context.HostContext'>, <function DefaultSpecs.httpd_cmd>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 
'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.foreach_execute object>: {<class 'insights.core.context.HostContext'>, <function DefaultSpecs.httpd_cmd>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 
'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 
'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <function DefaultSpecs.semid>: {<insights.core.spec_factory.simple_command object>}, <insights.core.spec_factory.foreach_execute object>: {<function DefaultSpecs.semid>, <class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, 
<insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, 
<insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>},
<insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>},
<function DefaultSpecs.kpatch_patches_running_kernel_dir>: {<class 'insights.parsers.uname.Uname'>},
<insights.core.spec_factory.command_with_args object>: {<class 'insights.core.context.HostContext'>, <function DefaultSpecs.kpatch_patches_running_kernel_dir>},
<function DefaultSpecs.lsmod_only_names>: {<insights.core.spec_factory.simple_command object>},
<insights.core.spec_factory.foreach_execute object>: {<class 'insights.core.context.HostContext'>, <function DefaultSpecs.lsmod_only_names>},
<insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_command object>, <insights.core.spec_factory.simple_command object>},
<insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>},
...

[Output elided for readability. The full map contains one entry per default datasource. The pattern is consistent throughout: file-based datasources (simple_file, glob_file, first_file) depend on the full set of host and archive contexts (DockerImageContext, ClusterArchiveContext, SosArchiveContext, MustGatherContext, HostArchiveContext, InsightsOperatorContext, SerializedArchiveContext, JDRContext, HostContext, JBossContext); command-based datasources (simple_command, foreach_execute, command_with_args) depend on HostContext (or, for a few, OpenShiftContext) because commands can only run on a live host; and derived datasources such as first_of, foreach_collect, and functions like DefaultSpecs.lsmod_all_names depend on other datasources or parsers rather than directly on a context.]
object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.OpenShiftContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 
'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.foreach_execute object>: {<insights.core.spec_factory.simple_command object>, <class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.foreach_collect object>: {<class 'insights.core.context.HostContext'>, <insights.core.spec_factory.simple_command object>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.foreach_collect object>: {<class 
'insights.core.context.HostContext'>, <insights.core.spec_factory.simple_command object>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 
'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 
'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 
'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.foreach_execute object>: {<insights.core.spec_factory.simple_command object>, <class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.foreach_execute object>: {<insights.core.spec_factory.simple_command object>, <class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.foreach_collect object>: {<class 'insights.core.context.HostContext'>, <insights.core.spec_factory.simple_command object>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <function 
DefaultSpecs.package_and_java>: {<insights.core.spec_factory.simple_command object>}, <insights.core.spec_factory.foreach_execute object>: {<function DefaultSpecs.package_and_java>, <class 'insights.core.context.HostContext'>}, <function DefaultSpecs.package_and_httpd>: {<insights.core.spec_factory.simple_command object>}, <insights.core.spec_factory.foreach_execute object>: {<function DefaultSpecs.package_and_httpd>, <class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, 
<class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 
'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.glob_file object>, <insights.core.spec_factory.glob_file object>, <insights.core.spec_factory.glob_file object>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 
'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 
'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, 
<class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 
'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 
<class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.foreach_execute object>: {<insights.core.spec_factory.listdir object>, <class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.foreach_execute object>: {<insights.core.spec_factory.listdir object>, <class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 
'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 
'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.glob_file object>, <insights.core.spec_factory.glob_file object>}, <insights.core.spec_factory.foreach_collect object>: {<class 'insights.core.context.HostContext'>, <function DefaultSpecs.tomcat_base>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.glob_file object>, <insights.core.spec_factory.foreach_collect object>}, <function DefaultSpecs.tomcat_home_base>: {<insights.core.spec_factory.simple_command object>}, <insights.core.spec_factory.foreach_execute object>: {<function DefaultSpecs.tomcat_home_base>, <class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 
'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 
'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 
'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 
'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 
'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 
'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, 
<insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <function DefaultSpecs.xfs_mounts>: {<class 'insights.parsers.mount.Mount'>}, <insights.core.spec_factory.foreach_execute object>: {<class 'insights.core.context.HostContext'>, <function DefaultSpecs.xfs_mounts>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, 
<insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_command object>: {<class 'insights.core.context.HostContext'>}, <function 
DefaultSpecs.docker_installed_rpms>: {<class 'insights.core.context.DockerImageContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_command object>, <function DefaultSpecs.docker_installed_rpms>}, <function DefaultSpecs.jboss_home>: {<insights.core.spec_factory.simple_command object>}, <insights.core.spec_factory.foreach_collect object>: {<class 'insights.core.context.HostContext'>, <function DefaultSpecs.jboss_home>}, <function DefaultSpecs.jboss_domain_server_log_dir>: {<insights.core.spec_factory.simple_command object>}, <insights.core.spec_factory.foreach_collect object>: {<class 'insights.core.context.HostContext'>, <function DefaultSpecs.jboss_domain_server_log_dir>}, <function DefaultSpecs.jboss_standalone_main_config_files>: {<insights.core.spec_factory.simple_command object>}, <insights.core.spec_factory.foreach_collect object>: {<class 'insights.core.context.HostContext'>, <function DefaultSpecs.jboss_standalone_main_config_files>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 
'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 
'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 
'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 
'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, 
<insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.head object>: {<insights.core.spec_factory.glob_file object>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: 
{<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: 
{<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 
'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 
'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, 
<insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file 
object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 
'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, 
<insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file 
object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 
'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 
'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, 
<insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.DockerImageContext'>, <class 'insights.core.context.ClusterArchiveContext'>, <class 'insights.core.context.SosArchiveContext'>, <class 'insights.core.context.MustGatherContext'>, <class 'insights.core.context.HostArchiveContext'>, <class 'insights.core.context.InsightsOperatorContext'>, <class 'insights.core.context.SerializedArchiveContext'>, <class 'insights.core.context.JDRContext'>, <class 'insights.core.context.HostContext'>, <class 'insights.core.context.JBossContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.HostArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, 
<insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, 
<insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.glob_file object>, <insights.core.spec_factory.glob_file object>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, <insights.core.spec_factory.simple_file object>: {<class 
'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_of 
object>: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, 
<insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 
'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, 
<insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_of object>: {<insights.core.spec_factory.simple_file object>, <insights.core.spec_factory.simple_file object>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, 
<insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.first_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.simple_file object>: {<class 'insights.core.context.SosArchiveContext'>}, <insights.core.spec_factory.glob_file object>: {<class 'insights.core.context.JDRContext'>}, <function JDRSpecs.jboss_standalone_conf_file>: {<insights.core.spec_factory.glob_file object>}, <insights.core.spec_factory.foreach_collect object>: {<function JDRSpecs.jboss_standalone_conf_file>, <class 'insights.core.context.JDRContext'>}, <insights.core.spec_factory.listdir object>: {<class 'insights.core.context.JDRContext'>}, <insights.core.spec_factory.foreach_collect object>: {<class 'insights.core.context.JDRContext'>, <insights.core.spec_factory.listdir object>}, <class 'insights.parsers.alternatives.JavaAlternatives'>: {insights.specs.Specs.display_java}, <class 'insights.parsers.amq_broker.AMQBroker'>: {insights.specs.Specs.amq_broker}, <class 'insights.parsers.audit_log.AuditLog'>: {insights.specs.Specs.audit_log}, <class 'insights.parsers.auditctl_status.AuditctlStatus'>: {insights.specs.Specs.auditctl_status}, <class 'insights.parsers.auditd_conf.AuditdConf'>: {insights.specs.Specs.auditd_conf}, <class 'insights.parsers.autofs_conf.AutoFSConf'>: {insights.specs.Specs.autofs_conf}, <class 'insights.parsers.avc_cache_threshold.AvcCacheThreshold'>: {insights.specs.Specs.avc_cache_threshold}, <class 'insights.parsers.avc_hash_stats.AvcHashStats'>: {insights.specs.Specs.avc_hash_stats}, <class 'insights.parsers.aws_instance_id.AWSInstanceIdDoc'>: {insights.specs.Specs.aws_instance_id_doc}, <class 'insights.parsers.aws_instance_id.AWSInstanceIdPkcs7'>: 
{insights.specs.Specs.aws_instance_id_pkcs7}, <class 'insights.parsers.aws_instance_type.AWSInstanceType'>: {insights.specs.Specs.aws_instance_type}, <class 'insights.parsers.azure_instance_type.AzureInstanceType'>: {insights.specs.Specs.azure_instance_type}, <class 'insights.parsers.blkid.BlockIDInfo'>: {insights.specs.Specs.blkid}, <class 'insights.parsers.bond.Bond'>: {insights.specs.Specs.bond}, <class 'insights.parsers.bond_dynamic_lb.BondDynamicLB'>: {insights.specs.Specs.bond_dynamic_lb}, <class 'insights.parsers.branch_info.BranchInfo'>: {insights.specs.Specs.branch_info}, <class 'insights.parsers.brctl_show.BrctlShow'>: {insights.specs.Specs.brctl_show}, <class 'insights.parsers.catalina_log.CatalinaServerLog'>: {insights.specs.Specs.catalina_server_log}, <class 'insights.parsers.catalina_log.CatalinaOut'>: {insights.specs.Specs.catalina_out}, <class 'insights.parsers.cciss.Cciss'>: {insights.specs.Specs.cciss}, <class 'insights.parsers.ceilometer_conf.CeilometerConf'>: {insights.specs.Specs.ceilometer_conf}, <class 'insights.parsers.ceilometer_log.CeilometerCentralLog'>: {insights.specs.Specs.ceilometer_central_log}, <class 'insights.parsers.ceilometer_log.CeilometerCollectorLog'>: {insights.specs.Specs.ceilometer_collector_log}, <class 'insights.parsers.ceilometer_log.CeilometerComputeLog'>: {insights.specs.Specs.ceilometer_compute_log}, <class 'insights.parsers.ceph_cmd_json_parsing.CephOsdDump'>: {insights.specs.Specs.ceph_osd_dump}, <class 'insights.parsers.ceph_cmd_json_parsing.CephOsdDf'>: {insights.specs.Specs.ceph_osd_df}, <class 'insights.parsers.ceph_cmd_json_parsing.CephS'>: {insights.specs.Specs.ceph_s}, <class 'insights.parsers.ceph_cmd_json_parsing.CephDfDetail'>: {insights.specs.Specs.ceph_df_detail}, <class 'insights.parsers.ceph_cmd_json_parsing.CephHealthDetail'>: {insights.specs.Specs.ceph_health_detail}, <class 'insights.parsers.ceph_cmd_json_parsing.CephECProfileGet'>: {insights.specs.Specs.ceph_osd_ec_profile_get}, <class 
'insights.parsers.ceph_cmd_json_parsing.CephCfgInfo'>: {insights.specs.Specs.ceph_config_show}, <class 'insights.parsers.ceph_cmd_json_parsing.CephOsdTree'>: {insights.specs.Specs.ceph_osd_tree}, <class 'insights.parsers.ceph_cmd_json_parsing.CephReport'>: {insights.specs.Specs.ceph_report}, <class 'insights.parsers.ceph_conf.CephConf'>: {insights.specs.Specs.ceph_conf}, <class 'insights.parsers.ceph_insights.CephInsights'>: {insights.specs.Specs.ceph_insights}, <class 'insights.parsers.ceph_log.CephLog'>: {insights.specs.Specs.ceph_log}, <class 'insights.parsers.ceph_osd_log.CephOsdLog'>: {insights.specs.Specs.ceph_osd_log}, <class 'insights.parsers.ceph_osd_tree_text.CephOsdTreeText'>: {insights.specs.Specs.ceph_osd_tree_text}, <class 'insights.parsers.ceph_version.CephVersion'>: {insights.specs.Specs.ceph_v}, <class 'insights.parsers.certificates_enddate.CertificatesEnddate'>: {insights.specs.Specs.certificates_enddate}, <class 'insights.parsers.cgroups.Cgroups'>: {insights.specs.Specs.cgroups}, <class 'insights.parsers.checkin_conf.CheckinConf'>: {insights.specs.Specs.checkin_conf}, <class 'insights.parsers.chkconfig.ChkConfig'>: {insights.specs.Specs.chkconfig}, <class 'insights.parsers.cib.CIB'>: {insights.specs.Specs.cib_xml}, <class 'insights.parsers.cinder_conf.CinderConf'>: {insights.specs.Specs.cinder_conf}, <class 'insights.parsers.cinder_log.CinderApiLog'>: {insights.specs.Specs.cinder_api_log}, <class 'insights.parsers.cinder_log.CinderVolumeLog'>: {insights.specs.Specs.cinder_volume_log}, <class 'insights.parsers.cloud_init_custom_network.CloudInitCustomNetworking'>: {insights.specs.Specs.cloud_init_custom_network}, <class 'insights.parsers.cloud_init_log.CloudInitLog'>: {insights.specs.Specs.cloud_init_log}, <class 'insights.parsers.cluster_conf.ClusterConf'>: {insights.specs.Specs.cluster_conf}, <class 'insights.parsers.cmdline.CmdLine'>: {insights.specs.Specs.cmdline}, <class 'insights.parsers.cobbler_modules_conf.CobblerModulesConf'>: 
{insights.specs.Specs.cobbler_modules_conf}, <class 'insights.parsers.cobbler_settings.CobblerSettings'>: {insights.specs.Specs.cobbler_settings}, <class 'insights.parsers.corosync.CoroSyncConfig'>: {insights.specs.Specs.corosync}, <class 'insights.parsers.corosync.CorosyncConf'>: {insights.specs.Specs.corosync_conf}, <class 'insights.parsers.cpu_vulns.CpuVulns'>: {insights.specs.Specs.cpu_vulns}, <class 'insights.parsers.cpuinfo.CpuInfo'>: {insights.specs.Specs.cpuinfo}, <class 'insights.parsers.cpupower_frequency_info.CpupowerFrequencyInfo'>: {insights.specs.Specs.cpupower_frequency_info}, <class 'insights.parsers.cpuset_cpus.CpusetCpus'>: {insights.specs.Specs.cpuset_cpus}, <class 'insights.parsers.crontab.HeatCrontab'>: {insights.specs.Specs.heat_crontab}, <class 'insights.parsers.crontab.KeystoneCrontab'>: {insights.specs.Specs.keystone_crontab}, <class 'insights.parsers.crontab.NovaCrontab'>: {insights.specs.Specs.nova_crontab}, <class 'insights.parsers.crontab.RootCrontab'>: {insights.specs.Specs.root_crontab}, <class 'insights.parsers.crypto_policies.CryptoPoliciesConfig'>: {insights.specs.Specs.crypto_policies_config}, <class 'insights.parsers.crypto_policies.CryptoPoliciesStateCurrent'>: {insights.specs.Specs.crypto_policies_state_current}, <class 'insights.parsers.crypto_policies.CryptoPoliciesOpensshserver'>: {insights.specs.Specs.crypto_policies_opensshserver}, <class 'insights.parsers.crypto_policies.CryptoPoliciesBind'>: {insights.specs.Specs.crypto_policies_bind}, <class 'insights.parsers.current_clocksource.CurrentClockSource'>: {insights.specs.Specs.current_clocksource}, <class 'insights.parsers.date.Date'>: {insights.specs.Specs.date}, <class 'insights.parsers.date.DateUTC'>: {insights.specs.Specs.date_utc}, <class 'insights.parsers.db2licm.DB2Info'>: {insights.specs.Specs.db2licm_l}, <class 'insights.parsers.dcbtool_gc_dcb.Dcbtool'>: {insights.specs.Specs.dcbtool_gc_dcb}, <class 'insights.parsers.df.DiskFree_LI'>: {insights.specs.Specs.df__li}, 
<class 'insights.parsers.df.DiskFree_ALP'>: {insights.specs.Specs.df__alP}, <class 'insights.parsers.df.DiskFree_AL'>: {insights.specs.Specs.df__al}, <class 'insights.parsers.dirsrv_logs.DirSrvAccessLog'>: {insights.specs.Specs.dirsrv_access}, <class 'insights.parsers.dirsrv_logs.DirSrvErrorsLog'>: {insights.specs.Specs.dirsrv_errors}, <class 'insights.parsers.dirsrv_sysconfig.DirsrvSysconfig'>: {insights.specs.Specs.dirsrv}, <class 'insights.parsers.dmesg.DmesgLineList'>: {insights.specs.Specs.dmesg}, <class 'insights.parsers.dmesg_log.DmesgLog'>: {insights.specs.Specs.dmesg_log}, <class 'insights.parsers.dmsetup.DmsetupInfo'>: {insights.specs.Specs.dmsetup_info}, <class 'insights.parsers.dnf_modules.DnfModules'>: {insights.specs.Specs.dnf_modules}, <class 'insights.parsers.dnsmasq_config.DnsmasqConf'>: {insights.specs.Specs.dnsmasq_config}, <function docker_host_machineid_parser>: {insights.specs.Specs.docker_host_machine_id}, <class 'insights.parsers.docker_inspect.DockerInspectImage'>: {insights.specs.Specs.docker_image_inspect}, <class 'insights.parsers.docker_inspect.DockerInspectContainer'>: {insights.specs.Specs.docker_container_inspect}, <class 'insights.parsers.docker_list.DockerListImages'>: {insights.specs.Specs.docker_list_images}, <class 'insights.parsers.docker_list.DockerListContainers'>: {insights.specs.Specs.docker_list_containers}, <class 'insights.parsers.docker_storage_setup.DockerStorageSetup'>: {insights.specs.Specs.docker_storage_setup}, <class 'insights.parsers.dockerinfo.DockerInfo'>: {insights.specs.Specs.docker_info}, <class 'insights.parsers.dumpe2fs_h.DumpE2fs'>: {insights.specs.Specs.dumpe2fs_h}, <class 'insights.parsers.engine_config.EngineConfigAll'>: {insights.specs.Specs.engine_config_all}, <class 'insights.parsers.engine_log.EngineLog'>: {insights.specs.Specs.engine_log}, <class 'insights.parsers.etcd_conf.EtcdConf'>: {insights.specs.Specs.etcd_conf}, <class 'insights.parsers.ethtool.Driver'>: {insights.specs.Specs.ethtool_i}, 
<class 'insights.parsers.ethtool.Features'>: {insights.specs.Specs.ethtool_k}, <class 'insights.parsers.ethtool.Pause'>: {insights.specs.Specs.ethtool_a}, <class 'insights.parsers.ethtool.CoalescingInfo'>: {insights.specs.Specs.ethtool_c}, <class 'insights.parsers.ethtool.Ring'>: {insights.specs.Specs.ethtool_g}, <class 'insights.parsers.ethtool.Statistics'>: {insights.specs.Specs.ethtool_S}, <class 'insights.parsers.ethtool.TimeStamp'>: {insights.specs.Specs.ethtool_T}, <class 'insights.parsers.ethtool.Ethtool'>: {insights.specs.Specs.ethtool}, <class 'insights.parsers.facter.Facter'>: {insights.specs.Specs.facter}, <class 'insights.parsers.fc_match.FCMatch'>: {insights.specs.Specs.fc_match}, <class 'insights.parsers.fcoeadm_i.FcoeadmI'>: {insights.specs.Specs.fcoeadm_i}, <class 'insights.parsers.findmnt.FindmntPropagation'>: {insights.specs.Specs.findmnt_lo_propagation}, <class 'insights.parsers.firewall_config.FirewallDConf'>: {insights.specs.Specs.firewalld_conf}, <class 'insights.parsers.foreman_log.ProxyLog'>: {insights.specs.Specs.foreman_proxy_log}, <class 'insights.parsers.foreman_log.SatelliteLog'>: {insights.specs.Specs.foreman_satellite_log}, <class 'insights.parsers.foreman_log.ProductionLog'>: {insights.specs.Specs.foreman_production_log}, <class 'insights.parsers.foreman_log.CandlepinLog'>: {insights.specs.Specs.candlepin_log}, <class 'insights.parsers.foreman_log.CandlepinErrorLog'>: {insights.specs.Specs.candlepin_error_log}, <class 'insights.parsers.foreman_log.ForemanSSLAccessLog'>: {insights.specs.Specs.foreman_ssl_access_ssl_log}, <class 'insights.parsers.foreman_proxy_conf.ForemanProxyConf'>: {insights.specs.Specs.foreman_proxy_conf}, <class 'insights.parsers.foreman_rake_db_migrate_status.Sat6DBMigrateStatus'>: {insights.specs.Specs.foreman_rake_db_migrate_status}, <class 'insights.parsers.foreman_tasks_config.ForemanTasksConfig'>: {insights.specs.Specs.foreman_tasks_config}, <class 
'insights.parsers.freeipa_healthcheck_log.FreeIPAHealthCheckLog'>: {insights.specs.Specs.freeipa_healthcheck_log}, <class 'insights.parsers.fstab.FSTab'>: {insights.specs.Specs.fstab}, <class 'insights.parsers.galera_cnf.GaleraCnf'>: {insights.specs.Specs.galera_cnf}, <class 'insights.parsers.getcert_list.CertList'>: {insights.specs.Specs.getcert_list}, <class 'insights.parsers.getconf_pagesize.GetconfPageSize'>: {insights.specs.Specs.getconf_page_size}, <function getenforcevalue>: {insights.specs.Specs.getenforce}, <class 'insights.parsers.getsebool.Getsebool'>: {insights.specs.Specs.getsebool}, <class 'insights.parsers.glance_log.GlanceApiLog'>: {insights.specs.Specs.glance_api_log}, <class 'insights.parsers.gluster_peer_status.GlusterPeerStatus'>: {insights.specs.Specs.gluster_peer_status}, <class 'insights.parsers.gluster_vol.GlusterVolInfo'>: {insights.specs.Specs.gluster_v_info}, <class 'insights.parsers.gluster_vol.GlusterVolStatus'>: {insights.specs.Specs.gluster_v_status}, <class 'insights.parsers.gnocchi.GnocchiConf'>: {insights.specs.Specs.gnocchi_conf}, <class 'insights.parsers.gnocchi.GnocchiMetricdLog'>: {insights.specs.Specs.gnocchi_metricd_log}, <class 'insights.parsers.grub_conf.Grub1Config'>: {<class 'insights.components.rhel_version.IsRhel6'>, <class 'insights.components.rhel_version.IsRhel7'>, insights.specs.Specs.grub_conf}, <class 'insights.parsers.grub_conf.Grub1EFIConfig'>: {<class 'insights.components.rhel_version.IsRhel6'>, insights.specs.Specs.grub_efi_conf, <class 'insights.components.rhel_version.IsRhel7'>}, <class 'insights.parsers.grub_conf.Grub2Config'>: {<class 'insights.components.rhel_version.IsRhel6'>, <class 'insights.components.rhel_version.IsRhel7'>, insights.specs.Specs.grub2_cfg}, <class 'insights.parsers.grub_conf.Grub2EFIConfig'>: {<class 'insights.components.rhel_version.IsRhel6'>, insights.specs.Specs.grub2_efi_cfg, <class 'insights.components.rhel_version.IsRhel7'>}, <class 
'insights.parsers.grub_conf.BootLoaderEntries'>: {<class 'insights.components.rhel_version.IsRhel8'>, insights.specs.Specs.boot_loader_entries}, <class 'insights.parsers.grubby.GrubbyDefaultIndex'>: {insights.specs.Specs.grubby_default_index}, <class 'insights.parsers.grubby.GrubbyDefaultKernel'>: {insights.specs.Specs.grubby_default_kernel}, <class 'insights.parsers.hammer_ping.HammerPing'>: {insights.specs.Specs.hammer_ping}, <class 'insights.parsers.hammer_task_list.HammerTaskList'>: {insights.specs.Specs.hammer_task_list}, <class 'insights.parsers.haproxy_cfg.HaproxyCfg'>: {insights.specs.Specs.haproxy_cfg}, <class 'insights.parsers.heat_conf.HeatConf'>: {insights.specs.Specs.heat_conf}, <class 'insights.parsers.heat_log.HeatApiLog'>: {insights.specs.Specs.heat_api_log}, <class 'insights.parsers.heat_log.HeatEngineLog'>: {insights.specs.Specs.heat_engine_log}, <class 'insights.parsers.host_vdsm_id.VDSMId'>: {insights.specs.Specs.vdsm_id}, <class 'insights.parsers.hostname.Hostname'>: {insights.specs.Specs.hostname}, <class 'insights.parsers.hostname.HostnameDefault'>: {insights.specs.Specs.hostname_default}, <class 'insights.parsers.hostname.HostnameShort'>: {insights.specs.Specs.hostname_short}, <class 'insights.parsers.hosts.Hosts'>: {insights.specs.Specs.hosts}, <class 'insights.parsers.hponcfg.HponConf'>: {insights.specs.Specs.hponcfg_g}, <class 'insights.parsers.httpd_M.HttpdM'>: {insights.specs.Specs.httpd_M}, <class 'insights.parsers.httpd_V.HttpdV'>: {insights.specs.Specs.httpd_V}, <class 'insights.parsers.httpd_conf.HttpdConf'>: {insights.specs.Specs.httpd_conf}, <class 'insights.parsers.httpd_log.HttpdSSLErrorLog'>: {insights.specs.Specs.httpd_ssl_error_log}, <class 'insights.parsers.httpd_log.HttpdErrorLog'>: {insights.specs.Specs.httpd_error_log}, <class 'insights.parsers.httpd_log.Httpd24HttpdErrorLog'>: {insights.specs.Specs.httpd24_httpd_error_log}, <class 'insights.parsers.httpd_log.JBCSHttpd24HttpdErrorLog'>: 
{insights.specs.Specs.jbcs_httpd24_httpd_error_log}, <class 'insights.parsers.httpd_log.HttpdSSLAccessLog'>: {insights.specs.Specs.httpd_ssl_access_log}, <class 'insights.parsers.httpd_log.HttpdAccessLog'>: {insights.specs.Specs.httpd_access_log}, <class 'insights.parsers.httpd_open_nfs.HttpdOnNFSFilesCount'>: {insights.specs.Specs.httpd_on_nfs}, <class 'insights.parsers.ifcfg.IfCFG'>: {insights.specs.Specs.ifcfg}, <class 'insights.parsers.init_process_cgroup.InitProcessCgroup'>: {insights.specs.Specs.init_process_cgroup}, <class 'insights.parsers.initscript.InitScript'>: {insights.specs.Specs.initscript}, <class 'insights.parsers.installed_product_ids.InstalledProductIDs'>: {insights.specs.Specs.subscription_manager_installed_product_ids}, <class 'insights.parsers.interrupts.Interrupts'>: {insights.specs.Specs.interrupts}, <class 'insights.parsers.ip.IpAddr'>: {insights.specs.Specs.ip_addr}, <class 'insights.parsers.ip.RouteDevices'>: {insights.specs.Specs.ip_route_show_table_all}, <class 'insights.parsers.ip.Ipv4Neigh'>: {insights.specs.Specs.ipv4_neigh}, <class 'insights.parsers.ip.Ipv6Neigh'>: {insights.specs.Specs.ipv6_neigh}, <class 'insights.parsers.ip.IpNeighShow'>: {insights.specs.Specs.ip_neigh_show}, <class 'insights.parsers.ip.IpLinkInfo'>: {insights.specs.Specs.ip_s_link}, <class 'insights.parsers.ip_netns_exec_namespace_lsof.IpNetnsExecNamespaceLsofI'>: {insights.specs.Specs.ip_netns_exec_namespace_lsof}, <class 'insights.parsers.ipaupgrade_log.IpaupgradeLog'>: {insights.specs.Specs.ipaupgrade_log}, <class 'insights.parsers.ipcs.IpcsM'>: {insights.specs.Specs.ipcs_m}, <class 'insights.parsers.ipcs.IpcsMP'>: {insights.specs.Specs.ipcs_m_p}, <class 'insights.parsers.ipcs.IpcsS'>: {insights.specs.Specs.ipcs_s}, <class 'insights.parsers.ipcs.IpcsSI'>: {insights.specs.Specs.ipcs_s_i}, <class 'insights.parsers.ipcs_sem.IpcsS'>: {insights.specs.Specs.ipcs_s}, <class 'insights.parsers.ipcs_sem.IpcsSI'>: {insights.specs.Specs.ipcs_s_i}, <class 
'insights.parsers.iptables.IPTables'>: {insights.specs.Specs.iptables}, <class 'insights.parsers.iptables.IP6Tables'>: {insights.specs.Specs.ip6tables}, <class 'insights.parsers.iptables.IPTabPermanent'>: {insights.specs.Specs.iptables_permanent}, <class 'insights.parsers.iptables.IP6TabPermanent'>: {insights.specs.Specs.ip6tables_permanent}, <class 'insights.parsers.ironic_conf.IronicConf'>: {insights.specs.Specs.ironic_conf}, <class 'insights.parsers.ironic_inspector_log.IronicInspectorLog'>: {insights.specs.Specs.ironic_inspector_log}, <class 'insights.parsers.iscsiadm_mode_session.IscsiAdmModeSession'>: {insights.specs.Specs.iscsiadm_m_session}, <class 'insights.parsers.jboss_domain_log.JbossDomainServerLog'>: {insights.specs.Specs.jboss_domain_server_log}, <class 'insights.parsers.jboss_domain_log.JbossStandaloneServerLog'>: {insights.specs.Specs.jboss_standalone_server_log}, <class 'insights.parsers.jboss_standalone_main_conf.JbossStandaloneConf'>: {insights.specs.Specs.jboss_standalone_main_config}, <class 'insights.parsers.jboss_version.JbossVersion'>: {insights.specs.Specs.jboss_version}, <class 'insights.parsers.journal_since_boot.JournalSinceBoot'>: {insights.specs.Specs.journal_since_boot}, <class 'insights.parsers.journald_conf.EtcJournaldConf'>: {insights.specs.Specs.etc_journald_conf}, <class 'insights.parsers.journald_conf.EtcJournaldConfD'>: {insights.specs.Specs.etc_journald_conf_d}, <class 'insights.parsers.journald_conf.UsrJournaldConfD'>: {insights.specs.Specs.usr_journald_conf_d}, <class 'insights.parsers.katello_service_status.KatelloServiceStatus'>: {insights.specs.Specs.katello_service_status}, <class 'insights.parsers.kdump.KDumpConf'>: {insights.specs.Specs.kdump_conf}, <class 'insights.parsers.kdump.KexecCrashLoaded'>: {insights.specs.Specs.kexec_crash_loaded}, <class 'insights.parsers.kdump.KexecCrashSize'>: {insights.specs.Specs.kexec_crash_size}, <class 'insights.parsers.kernel_config.KernelConf'>: 
{insights.specs.Specs.kernel_config}, <class 'insights.parsers.keystone.KeystoneConf'>: {insights.specs.Specs.keystone_conf}, <class 'insights.parsers.keystone_log.KeystoneLog'>: {insights.specs.Specs.keystone_log}, <class 'insights.parsers.kpatch_list.KpatchList'>: {insights.specs.Specs.kpatch_list}, <class 'insights.parsers.kpatch_patches.KpatchPatches'>: {insights.specs.Specs.kpatch_patch_files}, <class 'insights.parsers.krb5.Krb5Configuration'>: {insights.specs.Specs.krb5}, <class 'insights.parsers.krb5kdc_log.KerberosKDCLog'>: {insights.specs.Specs.kerberos_kdc_log}, <function is_running>: {insights.specs.Specs.ksmstate}, <class 'insights.parsers.ksmstate.KSMState'>: {insights.specs.Specs.ksmstate}, <class 'insights.parsers.kubepods_cpu_quota.KubepodsCpuQuota'>: {insights.specs.Specs.kubepods_cpu_quota}, <class 'insights.parsers.libkeyutils.Libkeyutils'>: {insights.specs.Specs.libkeyutils}, <class 'insights.parsers.libkeyutils.LibkeyutilsObjdumps'>: {insights.specs.Specs.libkeyutils_objdumps}, <class 'insights.parsers.libvirtd_log.LibVirtdLog'>: {insights.specs.Specs.libvirtd_log}, <class 'insights.parsers.libvirtd_log.LibVirtdQemuLog'>: {insights.specs.Specs.libvirtd_qemu_log}, <class 'insights.parsers.limits_conf.LimitsConf'>: {insights.specs.Specs.limits_conf}, <class 'insights.parsers.logrotate_conf.LogrotateConf'>: {insights.specs.Specs.logrotate_conf}, <class 'insights.parsers.lpstat.LpstatPrinters'>: {insights.specs.Specs.lpstat_p}, <class 'insights.parsers.ls_boot.LsBoot'>: {insights.specs.Specs.ls_boot}, <class 'insights.parsers.ls_dev.LsDev'>: {insights.specs.Specs.ls_dev}, <class 'insights.parsers.ls_disk.LsDisk'>: {insights.specs.Specs.ls_disk}, <class 'insights.parsers.ls_docker_volumes.DockerVolumesDir'>: {insights.specs.Specs.ls_docker_volumes}, <class 'insights.parsers.ls_edac_mc.LsEdacMC'>: {insights.specs.Specs.ls_edac_mc}, <class 'insights.parsers.ls_etc.LsEtc'>: {insights.specs.Specs.ls_etc}, <class 
'insights.parsers.ls_lib_firmware.LsLibFW'>: {insights.specs.Specs.ls_lib_firmware}, <class 'insights.parsers.ls_ocp_cni_openshift_sdn.LsOcpCniOpenshiftSdn'>: {insights.specs.Specs.ls_ocp_cni_openshift_sdn}, <class 'insights.parsers.ls_origin_local_volumes_pods.LsOriginLocalVolumePods'>: {insights.specs.Specs.ls_origin_local_volumes_pods}, <class 'insights.parsers.ls_osroot.LsOsroot'>: {insights.specs.Specs.ls_osroot}, <class 'insights.parsers.ls_run_systemd_generator.LsRunSystemdGenerator'>: {insights.specs.Specs.ls_run_systemd_generator}, <class 'insights.parsers.ls_sys_firmware.LsSysFirmware'>: {insights.specs.Specs.ls_sys_firmware}, <class 'insights.parsers.ls_usr_lib64.LsUsrLib64'>: {insights.specs.Specs.ls_usr_lib64}, <class 'insights.parsers.ls_usr_sbin.LsUsrSbin'>: {insights.specs.Specs.ls_usr_sbin}, <class 'insights.parsers.ls_var_lib_mongodb.LsVarLibMongodb'>: {insights.specs.Specs.ls_var_lib_mongodb}, <class 'insights.parsers.ls_var_lib_nova_instances.LsRVarLibNovaInstances'>: {insights.specs.Specs.ls_R_var_lib_nova_instances}, <class 'insights.parsers.ls_var_lib_nova_instances.LsVarLibNovaInstances'>: {insights.specs.Specs.ls_var_lib_nova_instances}, <class 'insights.parsers.ls_var_log.LsVarLog'>: {insights.specs.Specs.ls_var_log}, <class 'insights.parsers.ls_var_opt_mssql.LsDVarOptMSSql'>: {insights.specs.Specs.ls_var_opt_mssql}, <class 'insights.parsers.ls_var_opt_mssql_log.LsVarOptMssqlLog'>: {insights.specs.Specs.ls_var_opt_mssql_log}, <class 'insights.parsers.ls_var_run.LsVarRun'>: {insights.specs.Specs.ls_var_run}, <class 'insights.parsers.ls_var_spool_clientmq.LsVarSpoolClientmq'>: {insights.specs.Specs.ls_var_spool_clientmq}, <class 'insights.parsers.ls_var_spool_postfix_maildrop.LsVarSpoolPostfixMaildrop'>: {insights.specs.Specs.ls_var_spool_postfix_maildrop}, <class 'insights.parsers.ls_var_tmp.LsVarTmp'>: {insights.specs.Specs.ls_var_tmp}, <class 'insights.parsers.lsblk.LSBlock'>: {insights.specs.Specs.lsblk}, <class 
'insights.parsers.lsblk.LSBlockPairs'>: {insights.specs.Specs.lsblk_pairs}, <class 'insights.parsers.lscpu.LsCPU'>: {insights.specs.Specs.lscpu}, <class 'insights.parsers.lsinitrd.Lsinitrd'>: {insights.specs.Specs.lsinitrd}, <class 'insights.parsers.lsmod.LsMod'>: {insights.specs.Specs.lsmod}, <class 'insights.parsers.lsof.Lsof'>: {insights.specs.Specs.lsof}, <class 'insights.parsers.lspci.LsPci'>: {insights.specs.Specs.lspci}, <class 'insights.parsers.lssap.Lssap'>: {insights.specs.Specs.lssap}, <class 'insights.parsers.lsscsi.LsSCSI'>: {insights.specs.Specs.lsscsi}, <class 'insights.parsers.lvdisplay.LvDisplay'>: {insights.specs.Specs.lvdisplay}, <class 'insights.parsers.lvm.Pvs'>: {insights.specs.Specs.pvs_noheadings}, <class 'insights.parsers.lvm.PvsAll'>: {insights.specs.Specs.pvs_noheadings_all}, <class 'insights.parsers.lvm.PvsHeadings'>: {insights.specs.Specs.pvs}, <class 'insights.parsers.lvm.Vgs'>: {insights.specs.Specs.vgs_noheadings}, <class 'insights.parsers.lvm.VgsAll'>: {insights.specs.Specs.vgs_noheadings_all}, <class 'insights.parsers.lvm.VgsHeadings'>: {insights.specs.Specs.vgs}, <class 'insights.parsers.lvm.Lvs'>: {insights.specs.Specs.lvs_noheadings}, <class 'insights.parsers.lvm.LvsAll'>: {insights.specs.Specs.lvs_noheadings_all}, <class 'insights.parsers.lvm.LvsHeadings'>: {insights.specs.Specs.lvs}, <class 'insights.parsers.lvm.LvmConf'>: {insights.specs.Specs.lvm_conf}, <class 'insights.parsers.manila_conf.ManilaConf'>: {insights.specs.Specs.manila_conf}, <class 'insights.parsers.mariadb_log.MariaDBLog'>: {insights.specs.Specs.mariadb_log}, <class 'insights.parsers.max_uid.MaxUID'>: {insights.specs.Specs.max_uid}, <class 'insights.parsers.md5check.NormalMD5'>: {insights.specs.Specs.md5chk_files}, <class 'insights.parsers.mdstat.Mdstat'>: {insights.specs.Specs.mdstat}, <class 'insights.parsers.meminfo.MemInfo'>: {insights.specs.Specs.meminfo}, <class 'insights.parsers.messages.Messages'>: {insights.specs.Specs.messages}, <class 
'insights.parsers.metadata.MetadataJson'>: {insights.specs.Specs.metadata_json}, <class 'insights.parsers.mistral_log.MistralExecutorLog'>: {insights.specs.Specs.mistral_executor_log}, <class 'insights.parsers.mlx4_port.Mlx4Port'>: {insights.specs.Specs.mlx4_port}, <class 'insights.parsers.modinfo.ModInfoAll'>: {insights.specs.Specs.modinfo_all}, <class 'insights.parsers.modinfo.ModInfoEach'>: {insights.specs.Specs.modinfo}, <class 'insights.parsers.modinfo.ModInfoI40e'>: {insights.specs.Specs.modinfo_i40e}, <class 'insights.parsers.modinfo.ModInfoVmxnet3'>: {insights.specs.Specs.modinfo_vmxnet3}, <class 'insights.parsers.modinfo.ModInfoIgb'>: {insights.specs.Specs.modinfo_igb}, <class 'insights.parsers.modinfo.ModInfoIxgbe'>: {insights.specs.Specs.modinfo_ixgbe}, <class 'insights.parsers.modinfo.ModInfoVeth'>: {insights.specs.Specs.modinfo_veth}, <class 'insights.parsers.modprobe.ModProbe'>: {insights.specs.Specs.modprobe}, <class 'insights.parsers.mongod_conf.MongodbConf'>: {insights.specs.Specs.mongod_conf}, <class 'insights.parsers.mssql_conf.MsSQLConf'>: {insights.specs.Specs.mssql_conf}, <class 'insights.parsers.multicast_querier.MulticastQuerier'>: {insights.specs.Specs.multicast_querier}, <class 'insights.parsers.multipath_conf.MultipathConf'>: {insights.specs.Specs.multipath_conf}, <class 'insights.parsers.multipath_conf.MultipathConfInitramfs'>: {insights.specs.Specs.multipath_conf_initramfs}, <class 'insights.parsers.multipath_conf.MultipathConfTree'>: {insights.specs.Specs.multipath_conf}, <class 'insights.parsers.multipath_conf.MultipathConfTreeInitramfs'>: {insights.specs.Specs.multipath_conf_initramfs}, <class 'insights.parsers.multipath_v4_ll.MultipathDevices'>: {insights.specs.Specs.multipath__v4__ll}, <function get_multipath_v4_ll>: {insights.specs.Specs.multipath__v4__ll}, <class 'insights.parsers.mysql_log.MysqlLog'>: {insights.specs.Specs.mysql_log}, <class 'insights.parsers.mysqladmin.MysqladminStatus'>: 
{insights.specs.Specs.mysqladmin_status}, <class 'insights.parsers.mysqladmin.MysqladminVars'>: {insights.specs.Specs.mysqladmin_vars}, <class 'insights.parsers.net_namespace.NetworkNamespace'>: {insights.specs.Specs.namespace}, <class 'insights.parsers.netconsole.NetConsole'>: {insights.specs.Specs.netconsole}, <class 'insights.parsers.netstat.NetstatS'>: {insights.specs.Specs.netstat_s}, <class 'insights.parsers.netstat.NetstatAGN'>: {insights.specs.Specs.netstat_agn}, <class 'insights.parsers.netstat.Netstat'>: {insights.specs.Specs.netstat}, <class 'insights.parsers.netstat.Netstat_I'>: {insights.specs.Specs.netstat_i}, <class 'insights.parsers.netstat.SsTULPN'>: {insights.specs.Specs.ss}, <class 'insights.parsers.netstat.SsTUPNA'>: {insights.specs.Specs.ss}, <class 'insights.parsers.netstat.ProcNsat'>: {insights.specs.Specs.proc_netstat}, <class 'insights.parsers.neutron_conf.NeutronConf'>: {insights.specs.Specs.neutron_conf}, <class 'insights.parsers.neutron_dhcp_agent_conf.NeutronDhcpAgentIni'>: {insights.specs.Specs.neutron_dhcp_agent_ini}, <class 'insights.parsers.neutron_l3_agent_conf.NeutronL3AgentIni'>: {insights.specs.Specs.neutron_l3_agent_ini}, <class 'insights.parsers.neutron_l3_agent_log.NeutronL3AgentLog'>: {insights.specs.Specs.neutron_l3_agent_log}, <class 'insights.parsers.neutron_metadata_agent_conf.NeutronMetadataAgentIni'>: {insights.specs.Specs.neutron_metadata_agent_ini}, <class 'insights.parsers.neutron_metadata_agent_log.NeutronMetadataAgentLog'>: {insights.specs.Specs.neutron_metadata_agent_log}, <class 'insights.parsers.neutron_ml2_conf.NeutronMl2Conf'>: {insights.specs.Specs.neutron_ml2_conf}, <class 'insights.parsers.neutron_ovs_agent_log.NeutronOVSAgentLog'>: {insights.specs.Specs.neutron_ovs_agent_log}, <class 'insights.parsers.neutron_plugin.NeutronPlugin'>: {insights.specs.Specs.neutron_plugin_ini}, <class 'insights.parsers.neutron_server_log.NeutronServerLog'>: {insights.specs.Specs.neutron_server_log}, <class 
'insights.parsers.nfnetlink_queue.NfnetLinkQueue'>: {insights.specs.Specs.nfnetlink_queue}, <class 'insights.parsers.nfs_exports.NFSExports'>: {insights.specs.Specs.nfs_exports}, <class 'insights.parsers.nfs_exports.NFSExportsD'>: {insights.specs.Specs.nfs_exports_d}, <class 'insights.parsers.nginx_conf.NginxConf'>: {insights.specs.Specs.nginx_conf}, <class 'insights.parsers.nmcli.NmcliDevShow'>: {insights.specs.Specs.nmcli_dev_show}, <class 'insights.parsers.nmcli.NmcliDevShowSos'>: {insights.specs.Specs.nmcli_dev_show_sos}, <class 'insights.parsers.nmcli.NmcliConnShow'>: {insights.specs.Specs.nmcli_conn_show}, <class 'insights.parsers.nova_conf.NovaConf'>: {insights.specs.Specs.nova_conf}, <class 'insights.parsers.nova_log.NovaApiLog'>: {insights.specs.Specs.nova_api_log}, <class 'insights.parsers.nova_log.NovaComputeLog'>: {insights.specs.Specs.nova_compute_log}, <class 'insights.parsers.nova_user_ids.NovaUID'>: {insights.specs.Specs.nova_uid}, <class 'insights.parsers.nova_user_ids.NovaMigrationUID'>: {insights.specs.Specs.nova_migration_uid}, <class 'insights.parsers.nscd_conf.NscdConf'>: {insights.specs.Specs.nscd_conf}, <class 'insights.parsers.nsswitch_conf.NSSwitchConf'>: {insights.specs.Specs.nsswitch_conf}, <class 'insights.parsers.ntp_sources.ChronycSources'>: {insights.specs.Specs.chronyc_sources}, <class 'insights.parsers.ntp_sources.NtpqLeap'>: {insights.specs.Specs.ntpq_leap}, <class 'insights.parsers.ntp_sources.NtpqPn'>: {insights.specs.Specs.ntpq_pn}, <class 'insights.parsers.numa_cpus.NUMACpus'>: {insights.specs.Specs.numa_cpus}, <class 'insights.parsers.numeric_user_group_name.NumericUserGroupName'>: {insights.specs.Specs.numeric_user_group_name}, <class 'insights.parsers.nvme_core_io_timeout.NVMeCoreIOTimeout'>: {insights.specs.Specs.nvme_core_io_timeout}, <class 'insights.parsers.odbc.ODBCIni'>: {insights.specs.Specs.odbc_ini}, <class 'insights.parsers.odbc.ODBCinstIni'>: {insights.specs.Specs.odbcinst_ini}, <class 
'insights.parsers.openshift_configuration.OseNodeConfig'>: {insights.specs.Specs.ose_node_config}, <class 'insights.parsers.openshift_configuration.OseMasterConfig'>: {insights.specs.Specs.ose_master_config}, <class 'insights.parsers.openshift_get.OcGetBc'>: {insights.specs.Specs.oc_get_bc}, <class 'insights.parsers.openshift_get.OcGetBuild'>: {insights.specs.Specs.oc_get_build}, <class 'insights.parsers.openshift_get.OcGetDc'>: {insights.specs.Specs.oc_get_dc}, <class 'insights.parsers.openshift_get.OcGetEgressNetworkPolicy'>: {insights.specs.Specs.oc_get_egressnetworkpolicy}, <class 'insights.parsers.openshift_get.OcGetEndPoints'>: {insights.specs.Specs.oc_get_endpoints}, <class 'insights.parsers.openshift_get.OcGetEvent'>: {insights.specs.Specs.oc_get_event}, <class 'insights.parsers.openshift_get.OcGetNode'>: {insights.specs.Specs.oc_get_node}, <class 'insights.parsers.openshift_get.OcGetPod'>: {insights.specs.Specs.oc_get_pod}, <class 'insights.parsers.openshift_get.OcGetProject'>: {insights.specs.Specs.oc_get_project}, <class 'insights.parsers.openshift_get.OcGetPv'>: {insights.specs.Specs.oc_get_pv}, <class 'insights.parsers.openshift_get.OcGetPvc'>: {insights.specs.Specs.oc_get_pvc}, <class 'insights.parsers.openshift_get.OcGetRc'>: {insights.specs.Specs.oc_get_rc}, <class 'insights.parsers.openshift_get.OcGetRole'>: {insights.specs.Specs.oc_get_role}, <class 'insights.parsers.openshift_get.OcGetRolebinding'>: {insights.specs.Specs.oc_get_rolebinding}, <class 'insights.parsers.openshift_get.OcGetRoute'>: {insights.specs.Specs.oc_get_route}, <class 'insights.parsers.openshift_get.OcGetService'>: {insights.specs.Specs.oc_get_service}, <class 'insights.parsers.openshift_get.OcGetConfigmap'>: {insights.specs.Specs.oc_get_configmap}, <class 'insights.parsers.openshift_get_with_config.OcGetClusterRoleWithConfig'>: {insights.specs.Specs.oc_get_clusterrole_with_config}, <class 'insights.parsers.openshift_get_with_config.OcGetClusterRoleBindingWithConfig'>: 
{insights.specs.Specs.oc_get_clusterrolebinding_with_config}, <class 'insights.parsers.openshift_hosts.OpenShiftHosts'>: {insights.specs.Specs.openshift_hosts}, <class 'insights.parsers.openvswitch_logs.OVSDB_Server_Log'>: {insights.specs.Specs.openvswitch_server_log}, <class 'insights.parsers.openvswitch_logs.OVS_VSwitchd_Log'>: {insights.specs.Specs.openvswitch_daemon_log}, <class 'insights.parsers.openvswitch_other_config.OpenvSwitchOtherConfig'>: {insights.specs.Specs.openvswitch_other_config}, <class 'insights.parsers.oracle.OraclePfile'>: {insights.specs.Specs.init_ora}, <class 'insights.parsers.oracle.OracleSpfile'>: {insights.specs.Specs.spfile_ora}, <class 'insights.parsers.os_release.OsRelease'>: {insights.specs.Specs.os_release}, <class 'insights.parsers.osa_dispatcher_log.OSADispatcherLog'>: {insights.specs.Specs.osa_dispatcher_log}, <class 'insights.parsers.ovirt_engine_confd.OvirtEngineConfd'>: {insights.specs.Specs.ovirt_engine_confd}, <class 'insights.parsers.ovirt_engine_log.BootLog'>: {insights.specs.Specs.ovirt_engine_boot_log}, <class 'insights.parsers.ovirt_engine_log.ConsoleLog'>: {insights.specs.Specs.ovirt_engine_console_log}, <class 'insights.parsers.ovirt_engine_log.EngineLog'>: {insights.specs.Specs.engine_log}, <class 'insights.parsers.ovirt_engine_log.ServerLog'>: {insights.specs.Specs.ovirt_engine_server_log}, <class 'insights.parsers.ovirt_engine_log.UILog'>: {insights.specs.Specs.ovirt_engine_ui_log}, <class 'insights.parsers.ovs_appctl_fdb_show_bridge.OVSappctlFdbShowBridge'>: {insights.specs.Specs.ovs_appctl_fdb_show_bridge}, <class 'insights.parsers.ovs_ofctl_dump_flows.OVSofctlDumpFlows'>: {insights.specs.Specs.ovs_ofctl_dump_flows}, <class 'insights.parsers.ovs_vsctl_list_bridge.OVSvsctlListBridge'>: {insights.specs.Specs.ovs_vsctl_list_bridge}, <class 'insights.parsers.ovs_vsctl_show.OVSvsctlshow'>: {insights.specs.Specs.ovs_vsctl_show}, <class 'insights.parsers.pacemaker_log.PacemakerLog'>: 
{insights.specs.Specs.pacemaker_log}, <class 'insights.parsers.package_provides_httpd.PackageProvidesHttpd'>: {insights.specs.Specs.package_provides_httpd}, <class 'insights.parsers.package_provides_java.PackageProvidesJava'>: {insights.specs.Specs.package_provides_java}, <class 'insights.parsers.pam.PamConf'>: {insights.specs.Specs.pam_conf}, <class 'insights.parsers.parted.PartedL'>: {insights.specs.Specs.parted__l}, <class 'insights.parsers.partitions.Partitions'>: {insights.specs.Specs.partitions}, <class 'insights.parsers.passenger_status.PassengerStatus'>: {insights.specs.Specs.passenger_status}, <class 'insights.parsers.password.PasswordAuthPam'>: {insights.specs.Specs.password_auth}, <class 'insights.parsers.pci_rport_target_disk_paths.PciRportTargetDiskPaths'>: {insights.specs.Specs.pci_rport_target_disk_paths}, <class 'insights.parsers.pcs_config.PCSConfig'>: {insights.specs.Specs.pcs_config}, <class 'insights.parsers.pcs_quorum_status.PcsQuorumStatus'>: {insights.specs.Specs.pcs_quorum_status}, <class 'insights.parsers.pcs_status.PCSStatus'>: {insights.specs.Specs.pcs_status}, <class 'insights.parsers.pluginconf_d.PluginConfD'>: {insights.specs.Specs.pluginconf_d}, <class 'insights.parsers.pluginconf_d.PluginConfDIni'>: {insights.specs.Specs.pluginconf_d}, <class 'insights.parsers.podman_inspect.PodmanInspectImage'>: {insights.specs.Specs.podman_image_inspect}, <class 'insights.parsers.podman_inspect.PodmanInspectContainer'>: {insights.specs.Specs.podman_container_inspect}, <class 'insights.parsers.podman_list.PodmanListImages'>: {insights.specs.Specs.podman_list_images}, <class 'insights.parsers.podman_list.PodmanListContainers'>: {insights.specs.Specs.podman_list_containers}, <class 'insights.parsers.postgresql_conf.PostgreSQLConf'>: {insights.specs.Specs.postgresql_conf}, <class 'insights.parsers.postgresql_log.PostgreSQLLog'>: {insights.specs.Specs.postgresql_log}, <class 'insights.parsers.proc_environ.OpenshiftFluentdEnviron'>: 
{insights.specs.Specs.openshift_fluentd_environ}, <class 'insights.parsers.proc_environ.OpenshiftRouterEnviron'>: {insights.specs.Specs.openshift_router_environ}, <class 'insights.parsers.proc_limits.HttpdLimits'>: {insights.specs.Specs.httpd_limits}, <class 'insights.parsers.proc_limits.MysqldLimits'>: {insights.specs.Specs.mysqld_limits}, <class 'insights.parsers.proc_limits.OvsVswitchdLimits'>: {insights.specs.Specs.ovs_vswitchd_limits}, <class 'insights.parsers.proc_stat.ProcStat'>: {insights.specs.Specs.proc_stat}, <class 'insights.parsers.ps.PsAuxww'>: {insights.specs.Specs.ps_auxww}, <class 'insights.parsers.ps.PsEf'>: {insights.specs.Specs.ps_ef}, <class 'insights.parsers.ps.PsAuxcww'>: {insights.specs.Specs.ps_auxcww}, <class 'insights.parsers.ps.PsAux'>: {insights.specs.Specs.ps_aux}, <class 'insights.parsers.ps.PsEo'>: {insights.specs.Specs.ps_eo}, <class 'insights.parsers.ps.PsAlxwww'>: {insights.specs.Specs.ps_alxwww}, <class 'insights.parsers.pulp_worker_defaults.PulpWorkerDefaults'>: {insights.specs.Specs.pulp_worker_defaults}, <class 'insights.parsers.puppetserver_config.PuppetserverConfig'>: {insights.specs.Specs.puppetserver_config}, <class 'insights.parsers.qemu_conf.QemuConf'>: {insights.specs.Specs.qemu_conf}, <class 'insights.components.openstack.IsOpenStackCompute'>: {<class 'insights.parsers.ps.PsAuxcww'>}, <class 'insights.parsers.qemu_xml.QemuXML'>: {insights.specs.Specs.qemu_xml}, <class 'insights.parsers.qemu_xml.VarQemuXML'>: {insights.specs.Specs.var_qemu_xml}, <class 'insights.parsers.qemu_xml.OpenStackInstanceXML'>: {insights.specs.Specs.qemu_xml, <class 'insights.components.openstack.IsOpenStackCompute'>}, <class 'insights.parsers.qpid_stat.QpidStatQ'>: {insights.specs.Specs.qpid_stat_q}, <class 'insights.parsers.qpid_stat.QpidStatU'>: {insights.specs.Specs.qpid_stat_u}, <class 'insights.parsers.qpid_stat.QpidStatG'>: {insights.specs.Specs.qpid_stat_g}, <class 'insights.parsers.qpidd_conf.QpiddConf'>: 
{insights.specs.Specs.qpidd_conf}, <class 'insights.parsers.rabbitmq.RabbitMQReport'>: {insights.specs.Specs.rabbitmq_report}, <class 'insights.parsers.rabbitmq.RabbitMQReportOfContainers'>: {insights.specs.Specs.rabbitmq_report_of_containers}, <class 'insights.parsers.rabbitmq.RabbitMQUsers'>: {insights.specs.Specs.rabbitmq_users}, <class 'insights.parsers.rabbitmq.RabbitMQQueues'>: {insights.specs.Specs.rabbitmq_queues}, <class 'insights.parsers.rabbitmq.RabbitMQEnv'>: {insights.specs.Specs.rabbitmq_env}, <class 'insights.parsers.rabbitmq_log.RabbitMQStartupLog'>: {insights.specs.Specs.rabbitmq_startup_log}, <class 'insights.parsers.rabbitmq_log.RabbitMQStartupErrLog'>: {insights.specs.Specs.rabbitmq_startup_err}, <class 'insights.parsers.rabbitmq_log.RabbitMQLogs'>: {insights.specs.Specs.rabbitmq_logs}, <class 'insights.parsers.rc_local.RcLocal'>: {insights.specs.Specs.rc_local}, <class 'insights.parsers.rdma_config.RdmaConfig'>: {insights.specs.Specs.rdma_conf}, <class 'insights.parsers.readlink_e_mtab.ReadLinkEMtab'>: {insights.specs.Specs.readlink_e_etc_mtab}, <class 'insights.parsers.resolv_conf.ResolvConf'>: {insights.specs.Specs.resolv_conf}, <class 'insights.parsers.rhev_data_center.RhevDataCenter'>: {insights.specs.Specs.rhev_data_center}, <class 'insights.parsers.rhn_charsets.RHNCharSets'>: {insights.specs.Specs.rhn_charsets}, <class 'insights.parsers.rhn_conf.RHNConf'>: {insights.specs.Specs.rhn_conf}, <class 'insights.parsers.rhn_entitlement_cert_xml.RHNCertConf'>: {insights.specs.Specs.rhn_entitlement_cert_xml}, <class 'insights.parsers.rhn_hibernate_conf.RHNHibernateConf'>: {insights.specs.Specs.rhn_hibernate_conf}, <class 'insights.parsers.rhn_logs.TaskomaticDaemonLog'>: {insights.specs.Specs.rhn_taskomatic_daemon_log}, <class 'insights.parsers.rhn_logs.ServerXMLRPCLog'>: {insights.specs.Specs.rhn_server_xmlrpc_log}, <class 'insights.parsers.rhn_logs.SearchDaemonLog'>: {insights.specs.Specs.rhn_search_daemon_log}, <class 
'insights.parsers.rhn_logs.SatelliteServerLog'>: {insights.specs.Specs.rhn_server_satellite_log}, <class 'insights.parsers.rhn_schema_stats.DBStatsLog'>: {insights.specs.Specs.rhn_schema_stats}, <function rhn_schema_version>: {insights.specs.Specs.rhn_schema_version}, <class 'insights.parsers.rhosp_release.RhospRelease'>: {insights.specs.Specs.rhosp_release}, <class 'insights.parsers.rhsm_conf.RHSMConf'>: {insights.specs.Specs.rhsm_conf}, <class 'insights.parsers.rhsm_log.RhsmLog'>: {insights.specs.Specs.rhsm_log}, <class 'insights.parsers.rhsm_releasever.RhsmReleaseVer'>: {insights.specs.Specs.rhsm_releasever}, <class 'insights.parsers.rhv_log_collector_analyzer.RhvLogCollectorJson'>: {insights.specs.Specs.rhv_log_collector_analyzer}, <class 'insights.parsers.rndc_status.RndcStatus'>: {insights.specs.Specs.rndc_status}, <class 'insights.parsers.route.Route'>: {insights.specs.Specs.route}, <class 'insights.parsers.rsyslog_conf.RsyslogConf'>: {insights.specs.Specs.rsyslog_conf}, <class 'insights.parsers.samba.SambaConfig'>: {insights.specs.Specs.samba}, <class 'insights.parsers.samba_logs.SAMBALog'>: {insights.specs.Specs.samba_logs}, <class 'insights.parsers.sap_hdb_version.HDBVersion'>: {insights.specs.Specs.sap_hdb_version}, <class 'insights.parsers.sap_host_profile.SAPHostProfile'>: {insights.specs.Specs.sap_host_profile}, <class 'insights.parsers.sapcontrol.SAPControlSystemUpdateList'>: {insights.specs.Specs.sapcontrol_getsystemupdatelist}, <class 'insights.parsers.saphostctrl.SAPHostCtrlInstances'>: {insights.specs.Specs.saphostctl_getcimobject_sapinstance}, <class 'insights.parsers.saphostexec.SAPHostExecStatus'>: {insights.specs.Specs.saphostexec_status}, <class 'insights.parsers.saphostexec.SAPHostExecVersion'>: {insights.specs.Specs.saphostexec_version}, <class 'insights.parsers.sat5_insights_properties.Sat5InsightsProperties'>: {insights.specs.Specs.sat5_insights_properties}, <class 'insights.parsers.satellite_enabled_features.SatelliteEnabledFeatures'>: 
{insights.specs.Specs.satellite_enabled_features}, <class 'insights.parsers.satellite_installer_configurations.CustomHiera'>: {insights.specs.Specs.satellite_custom_hiera}, <class 'insights.parsers.satellite_mongodb.MongoDBStorageEngine'>: {insights.specs.Specs.satellite_mongodb_storage_engine}, <class 'insights.parsers.scheduler.Scheduler'>: {insights.specs.Specs.scheduler}, <class 'insights.parsers.scsi.SCSI'>: {insights.specs.Specs.scsi}, <class 'insights.parsers.scsi_eh_deadline.SCSIEhDead'>: {insights.specs.Specs.scsi_eh_deadline}, <class 'insights.parsers.scsi_fwver.SCSIFWver'>: {insights.specs.Specs.scsi_fwver}, <class 'insights.parsers.sctp.SCTPEps'>: {insights.specs.Specs.sctp_eps}, <class 'insights.parsers.sctp.SCTPAsc'>: {<class 'insights.components.rhel_version.IsRhel6'>, insights.specs.Specs.sctp_asc}, <class 'insights.parsers.sctp.SCTPAsc7'>: {<class 'insights.components.rhel_version.IsRhel7'>, insights.specs.Specs.sctp_asc}, <class 'insights.parsers.sctp.SCTPSnmp'>: {insights.specs.Specs.sctp_snmp}, <class 'insights.parsers.sealert.Sealert'>: {insights.specs.Specs.sealert}, <class 'insights.parsers.secure.Secure'>: {insights.specs.Specs.secure}, <class 'insights.parsers.selinux_config.SelinuxConfig'>: {insights.specs.Specs.selinux_config}, <class 'insights.parsers.sestatus.SEStatus'>: {insights.specs.Specs.sestatus}, <class 'insights.parsers.setup_named_chroot.SetupNamedChroot'>: {insights.specs.Specs.setup_named_chroot}, <class 'insights.parsers.slabinfo.SlabInfo'>: {insights.specs.Specs.proc_slabinfo}, <class 'insights.parsers.smartctl.SMARTctl'>: {insights.specs.Specs.smartctl}, <class 'insights.parsers.smartpdc_settings.SmartpdcSettings'>: {insights.specs.Specs.smartpdc_settings}, <class 'insights.parsers.smbstatus.SmbstatusS'>: {insights.specs.Specs.smbstatus_S}, <class 'insights.parsers.smbstatus.Smbstatusp'>: {insights.specs.Specs.smbstatus_p}, <class 'insights.parsers.smt.CpuSMTActive'>: {insights.specs.Specs.cpu_smt_active}, <class 
'insights.parsers.smt.CpuCoreOnline'>: {insights.specs.Specs.cpu_cores}, <class 'insights.parsers.smt.CpuSiblings'>: {insights.specs.Specs.cpu_siblings}, <class 'insights.parsers.snmp.TcpIpStats'>: {insights.specs.Specs.proc_snmp_ipv4}, <class 'insights.parsers.snmp.TcpIpStatsIPV6'>: {insights.specs.Specs.proc_snmp_ipv6}, <class 'insights.parsers.sockstat.SockStats'>: {insights.specs.Specs.sockstat}, <class 'insights.parsers.softnet_stat.SoftNetStats'>: {insights.specs.Specs.softnet_stat}, <class 'insights.parsers.software_collections_list.SoftwareCollectionsListInstalled'>: {<class 'insights.components.rhel_version.IsRhel6'>, insights.specs.Specs.software_collections_list, <class 'insights.components.rhel_version.IsRhel7'>}, <class 'insights.parsers.ssh.SshDConfig'>: {insights.specs.Specs.sshd_config}, <class 'insights.parsers.ssh_client_config.EtcSshConfig'>: {insights.specs.Specs.ssh_config}, <class 'insights.parsers.ssh_client_config.ForemanSshConfig'>: {insights.specs.Specs.ssh_foreman_config}, <class 'insights.parsers.ssh_client_config.ForemanProxySshConfig'>: {insights.specs.Specs.ssh_foreman_proxy_config}, <class 'insights.parsers.sssd_conf.SSSD_Config'>: {insights.specs.Specs.sssd_config}, <class 'insights.parsers.sssd_logs.SSSDLog'>: {insights.specs.Specs.sssd_logs}, <class 'insights.parsers.subscription_manager_list.SubscriptionManagerListConsumed'>: {insights.specs.Specs.subscription_manager_list_consumed}, <class 'insights.parsers.subscription_manager_list.SubscriptionManagerListInstalled'>: {insights.specs.Specs.subscription_manager_list_installed}, <class 'insights.parsers.subscription_manager_release.SubscriptionManagerReleaseShow'>: {insights.specs.Specs.subscription_manager_release_show}, <class 'insights.parsers.swift_conf.SwiftProxyServerConf'>: {insights.specs.Specs.swift_proxy_server_conf}, <class 'insights.parsers.swift_conf.SwiftObjectExpirerConf'>: {insights.specs.Specs.swift_object_expirer_conf}, <class 
'insights.parsers.swift_conf.SwiftConf'>: {insights.specs.Specs.swift_conf}, <class 'insights.parsers.swift_log.SwiftLog'>: {insights.specs.Specs.swift_log}, <class 'insights.parsers.sys_bus.CdcWDM'>: {insights.specs.Specs.cdc_wdm}, <class 'insights.parsers.sysconfig.CorosyncSysconfig'>: {insights.specs.Specs.corosync}, <class 'insights.parsers.sysconfig.ChronydSysconfig'>: {insights.specs.Specs.sysconfig_chronyd}, <class 'insights.parsers.sysconfig.DirsrvSysconfig'>: {insights.specs.Specs.dirsrv}, <class 'insights.parsers.sysconfig.DockerStorageSetupSysconfig'>: {insights.specs.Specs.docker_storage_setup}, <class 'insights.parsers.sysconfig.DockerSysconfig'>: {insights.specs.Specs.docker_sysconfig}, <class 'insights.parsers.sysconfig.DockerSysconfigStorage'>: {insights.specs.Specs.docker_storage}, <class 'insights.parsers.sysconfig.ForemanTasksSysconfig'>: {insights.specs.Specs.foreman_tasks_config}, <class 'insights.parsers.sysconfig.HttpdSysconfig'>: {insights.specs.Specs.sysconfig_httpd}, <class 'insights.parsers.sysconfig.IrqbalanceSysconfig'>: {insights.specs.Specs.sysconfig_irqbalance}, <class 'insights.parsers.sysconfig.KdumpSysconfig'>: {insights.specs.Specs.sysconfig_kdump}, <class 'insights.parsers.sysconfig.LibvirtGuestsSysconfig'>: {insights.specs.Specs.sysconfig_libvirt_guests}, <class 'insights.parsers.sysconfig.MemcachedSysconfig'>: {insights.specs.Specs.sysconfig_memcached}, <class 'insights.parsers.sysconfig.MongodSysconfig'>: {insights.specs.Specs.sysconfig_mongod}, <class 'insights.parsers.sysconfig.NetconsoleSysconfig'>: {insights.specs.Specs.netconsole}, <class 'insights.parsers.sysconfig.NetworkSysconfig'>: {insights.specs.Specs.sysconfig_network}, <class 'insights.parsers.sysconfig.NtpdSysconfig'>: {insights.specs.Specs.sysconfig_ntpd}, <class 'insights.parsers.sysconfig.PrelinkSysconfig'>: {insights.specs.Specs.sysconfig_prelink}, <class 'insights.parsers.sysconfig.SshdSysconfig'>: {insights.specs.Specs.sysconfig_sshd}, <class 
'insights.parsers.sysconfig.PuppetserverSysconfig'>: {insights.specs.Specs.puppetserver_config}, <class 'insights.parsers.sysconfig.Up2DateSysconfig'>: {insights.specs.Specs.up2date}, <class 'insights.parsers.sysconfig.VirtWhoSysconfig'>: {insights.specs.Specs.sysconfig_virt_who}, <class 'insights.parsers.sysconfig.IfCFGStaticRoute'>: {insights.specs.Specs.ifcfg_static_route}, <class 'insights.parsers.sysctl.SysctlConf'>: {insights.specs.Specs.sysctl_conf}, <class 'insights.parsers.sysctl.Sysctl'>: {insights.specs.Specs.sysctl}, <class 'insights.parsers.sysctl.SysctlConfInitramfs'>: {insights.specs.Specs.sysctl_conf_initramfs}, <class 'insights.parsers.system_time.ChronyConf'>: {insights.specs.Specs.chrony_conf}, <class 'insights.parsers.system_time.NTPConf'>: {insights.specs.Specs.ntp_conf}, <class 'insights.parsers.system_time.LocalTime'>: {insights.specs.Specs.localtime}, <class 'insights.parsers.system_time.NtpTime'>: {insights.specs.Specs.ntptime}, <class 'insights.parsers.systemctl_show.SystemctlShowServiceAll'>: {insights.specs.Specs.systemctl_show_all_services}, <class 'insights.parsers.systemctl_show.SystemctlShowTarget'>: {insights.specs.Specs.systemctl_show_target}, <class 'insights.parsers.systemctl_show.SystemctlShowCinderVolume'>: {insights.specs.Specs.systemctl_cinder_volume}, <class 'insights.parsers.systemctl_show.SystemctlShowMariaDB'>: {insights.specs.Specs.systemctl_mariadb}, <class 'insights.parsers.systemctl_show.SystemctlShowPulpWorkers'>: {insights.specs.Specs.systemctl_pulp_workers}, <class 'insights.parsers.systemctl_show.SystemctlShowPulpResourceManager'>: {insights.specs.Specs.systemctl_pulp_resmg}, <class 'insights.parsers.systemctl_show.SystemctlShowPulpCelerybeat'>: {insights.specs.Specs.systemctl_pulp_celerybeat}, <class 'insights.parsers.systemctl_show.SystemctlShowHttpd'>: {insights.specs.Specs.systemctl_httpd}, <class 'insights.parsers.systemctl_show.SystemctlShowNginx'>: {insights.specs.Specs.systemctl_nginx}, <class 
'insights.parsers.systemctl_show.SystemctlShowQpidd'>: {insights.specs.Specs.systemctl_qpidd}, <class 'insights.parsers.systemctl_show.SystemctlShowQdrouterd'>: {insights.specs.Specs.systemctl_qdrouterd}, <class 'insights.parsers.systemctl_show.SystemctlShowSmartpdc'>: {insights.specs.Specs.systemctl_smartpdc}, <class 'insights.parsers.systemd.config.SystemdDocker'>: {insights.specs.Specs.systemd_docker}, <class 'insights.parsers.systemd.config.SystemdSystemConf'>: {insights.specs.Specs.systemd_system_conf}, <class 'insights.parsers.systemd.config.SystemdOriginAccounting'>: {insights.specs.Specs.systemd_system_origin_accounting}, <class 'insights.parsers.systemd.config.SystemdOpenshiftNode'>: {insights.specs.Specs.systemd_openshift_node}, <class 'insights.parsers.systemd.config.SystemdLogindConf'>: {insights.specs.Specs.systemd_logind_conf}, <class 'insights.parsers.systemd.config.SystemdRpcbindSocketConf'>: {insights.specs.Specs.systemctl_cat_rpcbind_socket}, <class 'insights.parsers.systemd.unitfiles.UnitFiles'>: {insights.specs.Specs.systemctl_list_unit_files}, <class 'insights.parsers.systemd.unitfiles.ListUnits'>: {insights.specs.Specs.systemctl_list_units}, <class 'insights.parsers.systemid.SystemID'>: {insights.specs.Specs.systemid}, <class 'insights.parsers.systool.SystoolSCSIBus'>: {insights.specs.Specs.systool_b_scsi_v}, <class 'insights.parsers.tags.Tags'>: {insights.specs.Specs.tags}, <class 'insights.parsers.teamdctl_config_dump.TeamdctlConfigDump'>: {insights.specs.Specs.teamdctl_config_dump}, <class 'insights.parsers.teamdctl_state_dump.TeamdctlStateDump'>: {insights.specs.Specs.teamdctl_state_dump}, <class 'insights.parsers.tmpfilesd.TmpFilesD'>: {insights.specs.Specs.tmpfilesd}, <class 'insights.parsers.tomcat_virtual_dir_context.TomcatVirtualDirContextFallback'>: {insights.specs.Specs.tomcat_vdc_fallback}, <class 'insights.parsers.tomcat_virtual_dir_context.TomcatVirtualDirContextTargeted'>: {insights.specs.Specs.tomcat_vdc_targeted}, <class 
'insights.parsers.tomcat_xml.TomcatWebXml'>: {insights.specs.Specs.tomcat_web_xml}, <class 'insights.parsers.tomcat_xml.TomcatServerXml'>: {insights.specs.Specs.tomcat_server_xml}, <class 'insights.parsers.transparent_hugepage.ThpUseZeroPage'>: {insights.specs.Specs.thp_use_zero_page}, <class 'insights.parsers.transparent_hugepage.ThpEnabled'>: {insights.specs.Specs.thp_enabled}, <class 'insights.parsers.tuned.Tuned'>: {insights.specs.Specs.tuned_adm}, <class 'insights.parsers.tuned_conf.TunedConfIni'>: {insights.specs.Specs.tuned_conf}, <class 'insights.parsers.udev_rules.UdevRulesFCWWPN'>: {insights.specs.Specs.udev_fc_wwpn_id_rules}, <class 'insights.parsers.up2date.Up2Date'>: {insights.specs.Specs.up2date}, <class 'insights.parsers.up2date_log.Up2dateLog'>: {insights.specs.Specs.up2date_log}, <class 'insights.parsers.uptime.Uptime'>: {insights.specs.Specs.uptime}, <class 'insights.parsers.vdo_status.VDOStatus'>: {insights.specs.Specs.vdo_status}, <class 'insights.parsers.vdsm_conf.VDSMConfIni'>: {insights.specs.Specs.vdsm_conf}, <class 'insights.parsers.vdsm_conf.VDSMLoggerConf'>: {insights.specs.Specs.vdsm_logger_conf}, <class 'insights.parsers.vdsm_log.VDSMLog'>: {insights.specs.Specs.vdsm_log}, <class 'insights.parsers.vdsm_log.VDSMImportLog'>: {insights.specs.Specs.vdsm_import_log}, <class 'insights.parsers.vgdisplay.VgDisplay'>: {insights.specs.Specs.vgdisplay}, <class 'insights.parsers.virsh_list_all.VirshListAll'>: {insights.specs.Specs.virsh_list_all}, <class 'insights.parsers.virt_uuid_facts.VirtUuidFacts'>: {insights.specs.Specs.virt_uuid_facts}, <class 'insights.parsers.virt_what.VirtWhat'>: {insights.specs.Specs.virt_what}, <class 'insights.parsers.virt_who_conf.VirtWhoConf'>: {insights.specs.Specs.virt_who_conf}, <class 'insights.parsers.virtlogd_conf.VirtlogdConf'>: {insights.specs.Specs.virtlogd_conf}, <class 'insights.parsers.vma_ra_enabled_s390x.VmaRaEnabledS390x'>: {insights.specs.Specs.vma_ra_enabled}, <class 
'insights.parsers.vmcore_dmesg.VMCoreDmesg'>: {insights.specs.Specs.vmcore_dmesg}, <class 'insights.parsers.vmware_tools_conf.VMwareToolsConf'>: {insights.specs.Specs.vmware_tools_conf}, <class 'insights.parsers.vsftpd.VsftpdPamConf'>: {insights.specs.Specs.vsftpd}, <class 'insights.parsers.vsftpd.VsftpdConf'>: {insights.specs.Specs.vsftpd_conf}, <class 'insights.parsers.x86_debug.X86IBPBEnabled'>: {insights.specs.Specs.x86_ibpb_enabled}, <class 'insights.parsers.x86_debug.X86IBRSEnabled'>: {insights.specs.Specs.x86_ibrs_enabled}, <class 'insights.parsers.x86_debug.X86PTIEnabled'>: {insights.specs.Specs.x86_pti_enabled}, <class 'insights.parsers.x86_debug.X86RETPEnabled'>: {insights.specs.Specs.x86_retp_enabled}, <class 'insights.parsers.xfs_info.XFSInfo'>: {insights.specs.Specs.xfs_info}, <class 'insights.parsers.xinetd_conf.XinetdConf'>: {insights.specs.Specs.xinetd_conf}, <class 'insights.parsers.yum_conf.YumConf'>: {insights.specs.Specs.yum_conf}, <class 'insights.parsers.yum_list_installed.YumListInstalled'>: {insights.specs.Specs.yum_list_installed}, <class 'insights.parsers.yum_repos_d.YumReposD'>: {insights.specs.Specs.yum_repos_d}, <class 'insights.parsers.yumlog.YumLog'>: {insights.specs.Specs.yum_log}, <class 'insights.parsers.zdump_v.ZdumpV'>: {insights.specs.Specs.zdump_v}, <class 'insights.parsers.zipl_conf.ZiplConf'>: {insights.specs.Specs.zipl_conf}, <class 'insights.combiners.ceph_osd_tree.CephOsdTree'>: {<class 'insights.parsers.ceph_osd_tree_text.CephOsdTreeText'>, <class 'insights.parsers.ceph_insights.CephInsights'>, <class 'insights.parsers.ceph_cmd_json_parsing.CephOsdTree'>}, <class 'insights.combiners.ceph_version.CephVersion'>: {<class 'insights.parsers.ceph_version.CephVersion'>, <class 'insights.parsers.ceph_cmd_json_parsing.CephReport'>, <class 'insights.parsers.ceph_insights.CephInsights'>}, <class 'insights.combiners.cpu_vulns_all.CpuVulnsAll'>: {<class 'insights.parsers.cpu_vulns.CpuVulns'>}, <class 'insights.combiners.dmesg.Dmesg'>: 
{<class 'insights.parsers.dmesg_log.DmesgLog'>, <class 'insights.parsers.dmesg.DmesgLineList'>}, <class 'insights.combiners.dnsmasq_conf_all.DnsmasqConfTree'>: {<class 'insights.parsers.dnsmasq_config.DnsmasqConf'>}, <class 'insights.combiners.grub_conf.BootLoaderEntries'>: {<class 'insights.parsers.grub_conf.BootLoaderEntries'>, <class 'insights.parsers.ls_sys_firmware.LsSysFirmware'>}, <class 'insights.combiners.grub_conf.GrubConf'>: {<class 'insights.parsers.grub_conf.Grub1EFIConfig'>, <class 'insights.combiners.grub_conf.BootLoaderEntries'>, <class 'insights.parsers.grub_conf.Grub2EFIConfig'>, <class 'insights.parsers.grub_conf.Grub1Config'>, <class 'insights.parsers.installed_rpms.InstalledRpms'>, <class 'insights.parsers.ls_sys_firmware.LsSysFirmware'>, <class 'insights.parsers.grub_conf.Grub2Config'>, <class 'insights.combiners.redhat_release.RedHatRelease'>, <class 'insights.parsers.cmdline.CmdLine'>}, <class 'insights.combiners.hostname.Hostname'>: {<class 'insights.parsers.systemid.SystemID'>, <class 'insights.parsers.hostname.Hostname'>, <class 'insights.parsers.hostname.HostnameShort'>, <class 'insights.parsers.hostname.HostnameDefault'>, <class 'insights.parsers.facter.Facter'>}, <function hostname>: {<class 'insights.parsers.systemid.SystemID'>, <class 'insights.parsers.hostname.Hostname'>, <class 'insights.parsers.hostname.HostnameShort'>, <class 'insights.parsers.hostname.HostnameDefault'>, <class 'insights.parsers.facter.Facter'>}, <class 'insights.combiners.httpd_conf.HttpdConfAll'>: {<class 'insights.parsers.httpd_conf.HttpdConf'>}, <class 'insights.combiners.httpd_conf._HttpdConf'>: {insights.specs.Specs.httpd_conf}, <class 'insights.combiners.httpd_conf.HttpdConfTree'>: {<class 'insights.combiners.httpd_conf._HttpdConf'>}, <class 'insights.combiners.httpd_conf._HttpdConfSclHttpd24'>: {insights.specs.Specs.httpd_conf_scl_httpd24}, <class 'insights.combiners.httpd_conf.HttpdConfSclHttpd24Tree'>: {<class 
'insights.combiners.httpd_conf._HttpdConfSclHttpd24'>}, <class 'insights.combiners.httpd_conf._HttpdConfSclJbcsHttpd24'>: {insights.specs.Specs.httpd_conf_scl_jbcs_httpd24}, <class 'insights.combiners.httpd_conf.HttpdConfSclJbcsHttpd24Tree'>: {<class 'insights.combiners.httpd_conf._HttpdConfSclJbcsHttpd24'>}, <class 'insights.combiners.ipcs_semaphores.IpcsSemaphores'>: {<class 'insights.parsers.ipcs.IpcsS'>, <class 'insights.parsers.ipcs.IpcsSI'>, <class 'insights.parsers.ps.PsAuxcww'>}, <class 'insights.combiners.ipcs_shared_memory.IpcsSharedMemory'>: {<class 'insights.parsers.ipcs.IpcsM'>, <class 'insights.parsers.ipcs.IpcsMP'>}, <class 'insights.combiners.ipv6.IPv6'>: {<class 'insights.parsers.lsmod.LsMod'>, <class 'insights.parsers.modprobe.ModProbe'>, <class 'insights.parsers.uname.Uname'>, <class 'insights.parsers.sysctl.Sysctl'>, <class 'insights.parsers.cmdline.CmdLine'>}, <class 'insights.combiners.journald_conf.JournaldConfAll'>: {<class 'insights.parsers.journald_conf.UsrJournaldConfD'>, <class 'insights.parsers.journald_conf.EtcJournaldConf'>, <class 'insights.parsers.journald_conf.EtcJournaldConfD'>}, <class 'insights.combiners.krb5.AllKrb5Conf'>: {<class 'insights.parsers.krb5.Krb5Configuration'>}, <class 'insights.combiners.limits_conf.AllLimitsConf'>: {<class 'insights.parsers.limits_conf.LimitsConf'>}, <class 'insights.combiners.logrotate_conf.LogrotateConfAll'>: {<class 'insights.parsers.logrotate_conf.LogrotateConf'>}, <class 'insights.combiners.logrotate_conf._LogRotateConf'>: {insights.specs.Specs.logrotate_conf}, <class 'insights.combiners.logrotate_conf.LogRotateConfTree'>: {<class 'insights.combiners.logrotate_conf._LogRotateConf'>}, <class 'insights.combiners.lvm.Lvm'>: {<class 'insights.parsers.lvm.PvsHeadings'>, <class 'insights.parsers.lvm.Pvs'>, <class 'insights.parsers.lvm.VgsHeadings'>, <class 'insights.parsers.lvm.LvsHeadings'>, <class 'insights.parsers.lvm.Lvs'>, <class 'insights.parsers.lvm.Vgs'>}, <class 
'insights.combiners.lvm.LvmAll'>: {<class 'insights.parsers.lvm.VgsAll'>, <class 'insights.parsers.lvm.LvsAll'>, <class 'insights.parsers.lvm.PvsAll'>}, <class 'insights.combiners.md5check.NormalMD5'>: {<class 'insights.parsers.md5check.NormalMD5'>}, <class 'insights.combiners.mlx4_port.Mlx4Port'>: {<class 'insights.parsers.mlx4_port.Mlx4Port'>}, <class 'insights.combiners.modinfo.ModInfo'>: {<class 'insights.parsers.modinfo.ModInfoEach'>, <class 'insights.parsers.modinfo.ModInfoAll'>}, <class 'insights.combiners.modprobe.AllModProbe'>: {<class 'insights.parsers.modprobe.ModProbe'>}, <function multinode_product>: {<class 'insights.parsers.metadata.MetadataJson'>, <function hostname>, insights.specs.Specs.machine_id}, <function docker>: {<function multinode_product>}, <function OSP>: {<function multinode_product>}, <function RHEV>: {<function multinode_product>}, <function RHEL>: {<function multinode_product>}, <class 'insights.combiners.netstat.NetworkStats'>: {<class 'insights.parsers.ip.IpLinkInfo'>, <class 'insights.parsers.netstat.Netstat_I'>}, <class 'insights.combiners.nfs_exports.AllNFSExports'>: {<class 'insights.parsers.nfs_exports.NFSExports'>, <class 'insights.parsers.nfs_exports.NFSExportsD'>}, <class 'insights.combiners.nginx_conf._NginxConf'>: {insights.specs.Specs.nginx_conf}, <class 'insights.combiners.nginx_conf.NginxConfTree'>: {<class 'insights.combiners.nginx_conf._NginxConf'>}, <class 'insights.combiners.nmcli_dev_show.AllNmcliDevShow'>: {<class 'insights.parsers.nmcli.NmcliDevShow'>, <class 'insights.parsers.nmcli.NmcliDevShowSos'>}, <class 'insights.combiners.package_provides_httpd.PackageProvidesHttpdAll'>: {<class 'insights.parsers.package_provides_httpd.PackageProvidesHttpd'>}, <class 'insights.combiners.package_provides_java.PackageProvidesJavaAll'>: {<class 'insights.parsers.package_provides_java.PackageProvidesJava'>}, <class 'insights.combiners.ps.Ps'>: {<class 'insights.parsers.ps.PsEf'>, <class 'insights.parsers.ps.PsEo'>, <class 
'insights.parsers.ps.PsAux'>, <class 'insights.parsers.ps.PsAlxwww'>, <class 'insights.parsers.ps.PsAuxww'>, <class 'insights.parsers.ps.PsAuxcww'>}, <class 'insights.combiners.rhsm_release.RhsmRelease'>: {<class 'insights.parsers.rhsm_releasever.RhsmReleaseVer'>, <class 'insights.parsers.subscription_manager_release.SubscriptionManagerReleaseShow'>}, <class 'insights.combiners.sap.Sap'>: {<function hostname>, <class 'insights.parsers.lssap.Lssap'>, <class 'insights.parsers.saphostctrl.SAPHostCtrlInstances'>}, <class 'insights.combiners.selinux.SELinux'>: {<class 'insights.parsers.sestatus.SEStatus'>, <class 'insights.parsers.selinux_config.SelinuxConfig'>, <class 'insights.combiners.grub_conf.GrubConf'>}, <class 'insights.combiners.services.Services'>: {<class 'insights.parsers.systemd.unitfiles.UnitFiles'>, <class 'insights.parsers.chkconfig.ChkConfig'>}, <class 'insights.combiners.smt.CpuTopology'>: {<class 'insights.parsers.smt.CpuCoreOnline'>, <class 'insights.parsers.smt.CpuSiblings'>}, <class 'insights.combiners.tmpfilesd.AllTmpFiles'>: {<class 'insights.parsers.tmpfilesd.TmpFilesD'>}, <class 'insights.combiners.tomcat_virtual_dir_context.TomcatVirtualDirContextCombined'>: {<class 'insights.parsers.tomcat_virtual_dir_context.TomcatVirtualDirContextFallback'>, <class 'insights.parsers.tomcat_virtual_dir_context.TomcatVirtualDirContextTargeted'>}, <function uptime>: {<class 'insights.parsers.facter.Facter'>, <class 'insights.parsers.uptime.Uptime'>}, <class 'insights.combiners.user_namespaces.UserNamespaces'>: {<class 'insights.parsers.grub_conf.Grub2Config'>, <class 'insights.parsers.cmdline.CmdLine'>}, <class 'insights.combiners.virt_what.VirtWhat'>: {<class 'insights.parsers.virt_what.VirtWhat'>, <class 'insights.parsers.dmidecode.DMIDecode'>}, <class 'insights.combiners.virt_who_conf.AllVirtWhoConf'>: {<class 'insights.parsers.virt_who_conf.VirtWhoConf'>, <class 'insights.parsers.sysconfig.VirtWhoSysconfig'>}, <class 
'insights.combiners.x86_page_branch.X86PageBranch'>: {<class 'insights.parsers.x86_debug.X86IBPBEnabled'>, <class 'insights.parsers.x86_debug.X86RETPEnabled'>, <class 'insights.parsers.x86_debug.X86IBRSEnabled'>, <class 'insights.parsers.x86_debug.X86PTIEnabled'>}}), broker=None)[source]
insights.tools.query.dump_ds(d, space='')[source]
insights.tools.query.dump_info(comps)[source]
insights.tools.query.get_components(comps, default_module)[source]
insights.tools.query.get_datasources()[source]
insights.tools.query.get_matching_datasources(paths)[source]
insights.tools.query.get_pydoc(spec)[source]
insights.tools.query.get_source(spec)[source]
insights.tools.query.glob2re(pat)[source]

Translate a shell PATTERN to a regular expression. There is no way to quote meta-characters.

Stolen from https://stackoverflow.com/a/29820981/1451664

insights.tools.query.load_default_components()[source]
insights.tools.query.load_obj(spec)[source]
insights.tools.query.main()[source]
insights.tools.query.matches(d, path)[source]
insights.tools.query.parse_args()[source]
insights.tools.query.preload_components(comps)[source]
insights.tools.query.print_component(comp, verbose=False, specs=False)[source]
insights.tools.query.print_results(results, types, verbose, specs)[source]

insights.util

class insights.util.KeyPassingDefaultDict(*args, **kwargs)[source]

Bases: collections.defaultdict

A default dict that passes the key to its factory function.

insights.util.case_variants(*elements)[source]

For configs which take case-insensitive options, it is necessary to extend the list with various common case variants (all combinations are not practical). In the future, this should be removed, when parser filters are made case-insensitive.

Parameters

*elements (str) -- list of elements which need case-sensitive expansion, you should use default case such as Ciphers, MACs, UsePAM, MaxAuthTries

Returns

list of all expanded elements

Return type

list
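As a sketch of the idea (not the library's exact expansion), the common variants could be generated like this:

```python
def case_variants(*elements):
    # Sketch only: expand each element into common case variants
    # (as-is, lower, UPPER, Capitalized); the real helper may differ.
    out = []
    for e in elements:
        for v in (e, e.lower(), e.upper(), e.capitalize()):
            if v not in out:
                out.append(v)
    return out
```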

insights.util.check_path(path)[source]
insights.util.defaults(default=None)[source]

Catches any exception thrown by the wrapped function and returns default instead.

Parameters

default (object) -- The default value to return if the wrapped function throws an exception
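The behavior can be sketched as a simple decorator (an illustration, not the library's exact implementation):

```python
import functools

def defaults(default=None):
    # Sketch: wrap a function so any exception yields `default` instead.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                return default
        return wrapper
    return decorator

@defaults(default=-1)
def to_int(s):
    return int(s)
```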

insights.util.deprecated(func, solution)[source]

Mark a parser or combiner as deprecated, and give a message of how to fix this. This will emit a warning in the logs when the function is used. When combined with modifications to conftest, this causes deprecations to become fatal errors when testing, so they get fixed.

Parameters
  • func (function) -- the function or method being deprecated.

  • solution (str) -- a string describing the replacement class, method or function that replaces the thing being deprecated. For example, “use the fnord() function” or “use the search() method with the parameter name=’(value)’”.

insights.util.ensure_dir(path, dirname=False)[source]
insights.util.enum(*e)[source]
insights.util.get_addr()[source]
insights.util.get_path_for_system_id(category, system_id)[source]
insights.util.keys_in(items, *args)[source]

Use this utility function to ensure multiple keys are in one or more dicts. Returns True if all keys are present in at least one of the given dicts, otherwise returns False.

Parameters
  • items: Iterable of required keys

  • Variable number of subsequent arguments, each one being a dict to check.
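A minimal sketch of this check, reading "present" as: each required key appears in at least one of the given dicts:

```python
def keys_in(items, *dicts):
    # Sketch: True only if every required key appears in at least one dict.
    return all(any(key in d for d in dicts) for key in items)
```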

insights.util.logging_level(logger, level)[source]
insights.util.make_iter(item)[source]
class insights.util.objectview(dict_)[source]

Bases: object

insights.util.parse_bool(s, default=False)[source]

Return the boolean value of an English string or default if it can’t be determined.

insights.util.parse_keypair_lines(content, delim='|', kv_sep='=')[source]

Parses a set of entities, where each entity is a set of key-value pairs contained all on one line. Each entity is parsed into a dictionary and added to the list returned from this function.
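For example, with the default delimiters a line like `a=1|b=2` parses into one dictionary per line (a sketch of the described behavior, not the library's code):

```python
def parse_keypair_lines(content, delim="|", kv_sep="="):
    # Sketch: one entity per line; fields split on `delim`,
    # each field split into key/value on `kv_sep`.
    entities = []
    for line in content:
        if not line.strip():
            continue
        entity = {}
        for pair in line.split(delim):
            key, _, value = pair.partition(kv_sep)
            entity[key.strip()] = value.strip()
        entities.append(entity)
    return entities
```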

insights.util.rsplit(_str, seps)[source]

Splits _str by the first sep in seps that is found from the right side. Returns a tuple without the separator.

insights.util.which(cmd, env=None)[source]
insights.util.word_wrap(line, wrap_len=72)[source]
class insights.util.file_permissions.FilePermissions(line)[source]

Bases: object

Class for parsing a single ls -l line targeted at a concrete file and handling the parsed properties.

It is useful for checking file permissions and owner.

perms_owner

Owner permissions, e.g. ‘rwx’

Type

str

perms_group

Group permissions

Type

str

perms_other

Other permissions

Type

str

owner

Owner user name

Type

str

group

Owner group name

Type

str

path

Full path to file

Type

str

Note

This class does not support Access Control Lists (ACLs). If that is needed in the future, it would be preferable to create another class than extend this one. Advanced File Permissions - SUID, SGID and Sticky Bit - are not yet correctly parsed.

all_zero()[source]

Checks that all permissions are zero ('---------' in ls -l) - nobody but root can read, write, exec.

Returns

True if all permissions are zero ('---------')

Return type

bool

classmethod from_dict(dirent)[source]

Create a new FilePermissions object from the given dictionary. This works with the FileListing parser class, which has already done the hard work of pulling many of these fields out. We create an object with all the dictionary keys available as properties, and also split the perms string up into owner, group, and other permissions.

group_can_only_read()[source]

Checks if group has read-only permissions for the file. Therefore, write and execute bits for group must be unset and read bit must be set.

Returns

True if group can only read the file.

Return type

bool

group_can_read()[source]

Checks if group can read the file. Write and execute bits are not evaluated.

Returns

True if group can read the file.

Return type

bool

group_can_write()[source]

Checks if group can write the file. Read and execute bits are not evaluated.

Returns

True if group can write the file.

Return type

bool

only_root_can_read(root_group_can_read=True)[source]

Checks if only root is allowed to read the file (and anyone else is forbidden from reading). Write and execute bits are not checked. The read bits for root user/group are not checked because root can read/write anything regardless of the read/write permissions.

When called with root_group_can_read = True:

  • owner must be root

  • and ‘others’ permissions must not contain read

  • and if group owner is not root, the ‘group’ permissions must not contain read

Valid cases:

rwxrwxrwx    owner   ownergroup
-------------------------------
???-??-??    root    nonroot
??????-??    root    root
r--r-----    root    root
r--------    root    nonroot
rwxrwx---    root    root
rwxrwx-wx    root    root

Specifically, these cases are NOT valid because the owner can chmod permissions and grant themselves permissions without root’s knowledge:

rwxrwxrwx    owner   ownergroup
-------------------------------
-??-??-??    nonroot nonroot
-??r??-??    nonroot root
---------    nonroot nonroot

When called with root_group_can_read = False:

  • owner must be root

  • and ‘group’ and ‘others’ permissions must not contain read

Valid cases:

rwxrwxrwx    owner   ownergroup
-------------------------------
???-??-??    root    ?
r--------    root    root
r--------    root    nonroot
rwx-wx---    root    root
rwx-wx---    root    nonroot
rwx-wxrwx    root    nonroot

Specifically, these cases are NOT valid because the owner can chmod permissions and grant themselves permissions without root’s knowledge:

rwxrwxrwx    owner   ownergroup
-------------------------------
-??-??-??    nonroot nonroot
---------    nonroot nonroot
Parameters

root_group_can_read (bool) -- if set to True, the root group is also allowed to read the file.

Returns

True if only root user (or optionally root group) can read the file.

Return type

bool
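The rules above can be sketched as a standalone check (hypothetical argument names; the real method reads these fields from the parsed ls -l line):

```python
def only_root_can_read(owner, group, perms_group, perms_other,
                       root_group_can_read=True):
    # Sketch of the rules above; perms_* are 3-char strings like 'r--'.
    if owner != "root":
        return False          # a non-root owner could chmod the file
    if "r" in perms_other:
        return False          # 'others' must not be able to read
    if root_group_can_read and group == "root":
        return True           # the root group may read
    return "r" not in perms_group
```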

only_root_can_write(root_group_can_write=True)[source]

Checks if only root is allowed to write the file (and anyone else is barred from writing). Read and execute bits are not checked. The write bits for root user/group are not checked because root can read/write anything regardless of the read/write permissions.

When called with root_group_can_write = True:

  • owner must be root

  • and ‘others’ permissions must not contain write

  • and if group owner is not root, the ‘group’ permissions must not contain write

Valid cases:

rwxrwxrwx    owner   ownergroup
-------------------------------
????-??-?    root    nonroot
???????-?    root    root
-w--w----    root    root
-w-------    root    root
rwxrwx---    root    root
rwxrwxr-x    root    root

Specifically, these cases are NOT valid because the owner can chmod permissions and grant themselves permissions without root’s knowledge:

rwxrwxrwx    owner   ownergroup
-------------------------------
?-??-??-?    nonroot nonroot
?-??w??-?    nonroot root
---------    nonroot nonroot

When called with root_group_can_write = False:

  • owner must be root

  • and ‘group’ and ‘others’ permissions must not contain write

Valid cases:

rwxrwxrwx    owner   ownergroup
-------------------------------
????-??-?    root    ?
-w-------    root    root
-w-------    root    nonroot
rwxr-x---    root    root
rwxr-x---    root    nonroot
rwxr-xrwx    root    nonroot

Specifically, these cases are NOT valid because the owner can chmod permissions and grant themselves permissions without root’s knowledge:

rwxrwxrwx    owner   ownergroup
-------------------------------
?-??-??-?    nonroot nonroot
---------    nonroot nonroot
Parameters

root_group_can_write (bool) -- if set to True, the root group is also allowed to write to the file.

Returns

True if only root user (or optionally root group) can write the file.

Return type

bool

others_can_only_read()[source]

Checks if ‘others’ has read-only permissions for the file. Therefore, write and execute bits for ‘others’ must be unset and read bit must be set. (‘others’ in the sense of unix permissions that know about user, group, others.)

Returns

True if ‘others’ can only read the file.

Return type

bool

others_can_read()[source]

Checks if ‘others’ can read the file. Write and execute bits are not evaluated. (‘others’ in the sense of unix permissions that know about user, group, others.)

Returns

True if ‘others’ can read the file.

Return type

bool

others_can_write()[source]

Checks if ‘others’ can write the file. Read and execute bits are not evaluated. (‘others’ in the sense of unix permissions that know about user, group, others.)

Returns

True if ‘others’ can write the file.

Return type

bool

owned_by(owner, also_check_group=False)[source]

Checks if the specified user or user and group own the file.

Parameters
  • owner (str) -- the user (or group) name for which we ask about ownership

  • also_check_group (bool) -- if set to True, both the user owner and group owner are checked; if set to False, only the user owner is checked

Returns

True if owner of the file is the specified owner

Return type

bool

owner_can_only_read()[source]

Checks if owner has read-only permissions for the file. Therefore, write and execute bits for owner must be unset and read bit must be set.

Returns

True if owner can only read the file.

Return type

bool

owner_can_read()[source]

Checks if owner can read the file. Write and execute bits are not evaluated.

Returns

True if owner can read the file.

Return type

bool

owner_can_write()[source]

Checks if owner can write the file. Read and execute bits are not evaluated.

Returns

True if owner can write the file.

Return type

bool

Shared Parsers Catalog

Contents:

Alternatives - command /usr/bin/alternatives output

class insights.parsers.alternatives.AlternativesOutput(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Read the output of /usr/sbin/alternatives --display *program* and convert into information about the given program’s alternatives.

Typical input is:

java - status is auto.
 link currently points to /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b15.el7_2.x86_64/jre/bin/java
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.111-2.6.7.2.el7_2.x86_64/jre/bin/java - priority 1700111
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b15.el7_2.x86_64/jre/bin/java - priority 1800111
/usr/lib/jvm/jre-1.6.0-ibm.x86_64/bin/java - priority 16091
 slave ControlPanel: /usr/lib/jvm/jre-1.6.0-ibm.x86_64/bin/ControlPanel
 slave keytool: /usr/lib/jvm/jre-1.6.0-ibm.x86_64/bin/keytool
 slave policytool: /usr/lib/jvm/jre-1.6.0-ibm.x86_64/bin/policytool
 slave rmid: /usr/lib/jvm/jre-1.6.0-ibm.x86_64/bin/rmid
Current `best' version is /usr/lib/jvm/jre-1.6.0-ibm.x86_64/bin/java.

Lines are interpreted this way:

  • Program lines are of the form ‘name - status is status’, and start the information for a program. Lines before this are ignored.

  • The current link to this program is found on lines starting with ‘link currently points to’.

  • Lines starting with ‘/’ and with ‘ - priority ‘ in them record an alternative path and its priority.

  • Lines starting with ‘slave program: path’ are recorded against the alternative path.

  • Lines starting with ‘Current `best’ version is’ indicate the default choice of an ‘auto’ status alternative.

The output of alternatives --display *program* can only ever list one program, so as long as one ‘status is’ line is found (as described above), the content of the object displays that program.

program

The name of the program found in the ‘status is’ line. This attribute is set to None if a status line is not found.

Type

str

status

The status of the program, or None if not found.

Type

str

link

The link to this program, or None if the 'link currently points to' line is not found.

Type

str

best

The ‘best choice’ path that alternatives would use, or None if the ‘best choice’ line is not found.

Type

str

paths

The alternative paths for this program. Each path is a dictionary containing the following keys:

  • path: the actual path of this alternative for the program

  • priority: the priority, as an integer (e.g. 1700111)

  • slave: a dictionary of programs dependent on this alternative - the key is the program name (e.g. ‘ControlPanel’) and the value is the path to that program for this alternative path.

Type

dict

Examples

>>> java = AlternativesOutput(context_wrap(JAVA_ALTERNATIVES))
>>> java.program
'java'
>>> java.link
'/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b15.el7_2.x86_64/jre/bin/java'
>>> len(java.paths)
3
>>> java.paths[0]['path']
'/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.111-2.6.7.2.el7_2.x86_64/jre/bin/java'
>>> java.paths[0]['priority']
1700111
>>> java.paths[2]['slave']['ControlPanel']
'/usr/lib/jvm/jre-1.6.0-ibm.x86_64/bin/ControlPanel'
parse_content(content)[source]

Parse the output of the alternatives command.

class insights.parsers.alternatives.JavaAlternatives(context, extra_bad_lines=[])[source]

Bases: insights.parsers.alternatives.AlternativesOutput

Class to read the /usr/sbin/alternatives --display java output.

Uses the AlternativesOutput base class to get information about the alternatives for java available and which one is currently in use.

Examples

>>> java = shared[JavaAlternatives]
>>> java.program
'java'
>>> java.link
'/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b15.el7_2.x86_64/jre/bin/java'
>>> len(java.paths)
3
>>> java.paths[0]['path']
'/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.111-2.6.7.2.el7_2.x86_64/jre/bin/java'
>>> java.paths[0]['priority']
1700111
>>> java.paths[2]['slave']['ControlPanel']
'/usr/lib/jvm/jre-1.6.0-ibm.x86_64/bin/ControlPanel'

AMQBroker - file /var/opt/amq-broker/*/etc/broker.xml

Configuration of Active MQ Artemis brokers.

class insights.parsers.amq_broker.AMQBroker(context)[source]

Bases: insights.core.XMLParser

Provides access to broker.xml files that are stored in the conventional location for Active MQ Artemis.

Examples

>>> doc.get_elements(".//journal-pool-files", "urn:activemq:core")[0].text
'10'
>>> doc.get_elements(".//journal-type", "urn:activemq:core")[0].text
'NIO'

audit_log - File /var/log/audit/audit.log

class insights.parsers.audit_log.AuditLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing /var/log/audit/audit.log file.

Sample log lines:

type=CRYPTO_KEY_USER msg=audit(1506046832.641:53584): pid=16865 uid=0 auid=0 ses=7247 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=destroy kind=session fp=? direction=both spid=16865 suid=0 rport=59296 laddr=192.0.2.1 lport=22  exe="/usr/sbin/sshd" hostname=? addr=192.0.2.3 terminal=? res=success'
type=LOGIN msg=audit(1506047401.407:53591): pid=482 uid=0 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 old-auid=4294967295 auid=993 old-ses=4294967295 ses=7389 res=1
type=AVC msg=audit(1506487181.009:32794): avc:  denied  { create } for  pid=27960 comm="mongod" scontext=system_u:system_r:mongod_t:s0 tcontext=system_u:system_r:mongod_t:s0 tclass=unix_dgram_socket

Examples

>>> log = shared[AuditLog]
>>> log.get('type=AVC')
[{
    'is_valid': True,
    'timestamp': '1506487181.009',
    'unparsed': 'avc:  denied  { create } for',
    'msg_ID': '32794',
    'pid': '27960',
    'raw_message': 'type=AVC msg=audit(1506487181.009:32794): avc:  denied  { create } for  pid=27960 comm="mongod" scontext=system_u:system_r:mongod_t:s0 tcontext=system_u:system_r:mongod_t:s0 tclass=unix_dgram_socket',
    'comm': 'mongod',
    'scontext': 'system_u:system_r:mongod_t:s0',
    'tclass': 'unix_dgram_socket',
    'type': 'AVC',
    'tcontext': 'system_u:system_r:mongod_t:s0'
}]
>>> assert len(list(log.get_after(timestamp=datetime.fromtimestamp(1506047401.407)))) == 3
get_after(timestamp, s=None)[source]

Find all the (available) logs that are after the given time stamp. Override this function in class LogFileOutput.

Parameters
  • timestamp (datetime.datetime) -- lines before this time are ignored.

  • s (str or list) -- one or more strings to search for. If not supplied, all available lines are searched.

Yields

(dict) -- the parsed data of lines with timestamps after this date in the same format they were supplied.
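The fixed `type=... msg=audit(timestamp:id):` prefix shown in the sample lines can be pulled apart with a small regular expression (a sketch; the parser extracts more fields than this):

```python
import re

def parse_audit_line(line):
    # Sketch: pull type, timestamp and message ID out of a line like
    #   type=AVC msg=audit(1506487181.009:32794): ...
    m = re.match(r"type=(\S+) msg=audit\(([\d.]+):(\d+)\):", line)
    if not m:
        return {"is_valid": False, "raw_message": line}
    return {
        "is_valid": True,
        "type": m.group(1),
        "timestamp": m.group(2),
        "msg_ID": m.group(3),
        "raw_message": line,
    }
```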

AuditctlStatus - Report auditd status

class insights.parsers.auditctl_status.AuditctlStatus(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

Module for parsing the output of the auditctl -s command.

Typical output on RHEL6 looks like:

AUDIT_STATUS: enabled=1 flag=1 pid=1483 rate_limit=0 backlog_limit=8192 lost=3 backlog=0

while on RHEL7 the output changes to:

enabled 1
failure 1
pid 947
rate_limit 0
backlog_limit 320
lost 0
backlog 0
loginuid_immutable 0 unlocked

Example

>>> type(auds)
<class 'insights.parsers.auditctl_status.AuditctlStatus'>
>>> "enabled" in auds
True
>>> auds['enabled']
1
parse_content(content)[source]

This method must be implemented by classes based on this class.

AuditdConf - file /etc/audit/auditd.conf

The auditd.conf file is a standard key = value file with hash comments. Active settings are provided using the get_active_setting_value method or by using the dictionary contains functionality.
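Such a key = value file with hash comments can be parsed in a few lines (a generic sketch, not the parser's code):

```python
def parse_key_value_conf(lines):
    # Sketch: drop '#' comments and blank lines, split on the first '='.
    settings = {}
    for line in lines:
        line = line.split("#", 1)[0].strip()
        if "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings
```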

Example

>>> conf = shared[AuditdConf]
>>> conf.get_active_setting_value('log_group')
'root'
>>> 'log_file' in conf
True
class insights.parsers.auditd_conf.AuditdConf(*args, **kwargs)[source]

Bases: insights.core.Parser

A parser for accessing /etc/audit/auditd.conf.

get_active_setting_value(setting_name)[source]

Access active setting value by setting name.

Parameters

setting_name (string) -- Setting name

parse_content(content)[source]

Main parsing class method which stores all interesting data from the content.

Parameters

content (context.content) -- Parser context content

AutoFSConf - file /etc/autofs.conf

The /etc/autofs.conf file is in a standard ‘.ini’ format, and this parser uses the IniConfigFile base class to read this.

Example

>>> config = shared[AutoFSConf]
>>> config.sections()
['autofs', 'amd']
>>> config.items('autofs')
['timeout', 'browse_mode', 'mount_nfs_default_protocol']
>>> config.has_option('amd', 'map_type')
True
>>> config.get('amd', 'map_type')
'file'
>>> config.getint('autofs', 'timeout')
300
>>> config.getboolean('autofs', 'browse_mode')
False
class insights.parsers.autofs_conf.AutoFSConf(context)[source]

Bases: insights.core.IniConfigFile

/etc/autofs.conf is a standard INI style config file.

AvcCacheThreshold - File /sys/fs/selinux/avc/cache_threshold

This parser reads the content of /sys/fs/selinux/avc/cache_threshold.

class insights.parsers.avc_cache_threshold.AvcCacheThreshold(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class AvcCacheThreshold parses the content of the /sys/fs/selinux/avc/cache_threshold.

cache_threshold

It is used to show the value of cache threshold.

Type

int

A typical sample of the content of this file looks like:

512

Examples

>>> type(avc_cache_threshold)
<class 'insights.parsers.avc_cache_threshold.AvcCacheThreshold'>
>>> avc_cache_threshold.cache_threshold
512
parse_content(content)[source]

This method must be implemented by classes based on this class.

AvcHashStats - File /sys/fs/selinux/avc/hash_stats

This parser reads the content of /sys/fs/selinux/avc/hash_stats.

class insights.parsers.avc_hash_stats.AvcHashStats(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LegacyItemAccess

Class AvcHashStats parses the content of the /sys/fs/selinux/avc/hash_stats.

entries

It is used to show the count of avc hash entries.

Type

int

buckets

It is used to show the total count of buckets.

Type

int

buckets_used

It is used to show the count of used buckets.

Type

int

longest_chain

It is used to show the longest chain.

Type

int

A typical sample of the content of this file looks like:

entries: 509
buckets used: 290/512
longest chain: 7

Examples

>>> type(avc_hash_stats)
<class 'insights.parsers.avc_hash_stats.AvcHashStats'>
>>> avc_hash_stats.entries
509
>>> avc_hash_stats.buckets
512
>>> avc_hash_stats.buckets_used
290
>>> avc_hash_stats.longest_chain
7
parse_content(content)[source]

This method must be implemented by classes based on this class.
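The three hash_stats lines shown above can be parsed roughly like this (a sketch of how the attributes are derived, not the parser's code):

```python
def parse_hash_stats(lines):
    # Sketch: 'buckets used: 290/512' yields both buckets_used and buckets.
    stats = {}
    for line in lines:
        key, _, value = line.partition(":")
        value = value.strip()
        if "/" in value:
            used, total = value.split("/")
            stats["buckets_used"] = int(used)
            stats["buckets"] = int(total)
        else:
            stats[key.strip().replace(" ", "_")] = int(value)
    return stats
```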

AWSInstanceID

These parsers read the output of commands to collect identity information from AWS instances.

  • curl -s http://169.254.169.254/latest/dynamic/instance-identity/document and

  • curl -s http://169.254.169.254/latest/dynamic/instance-identity/pkcs7

class insights.parsers.aws_instance_id.AWSInstanceIdDoc(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, dict

Class for parsing the AWS Instance Identity Document returned by the command:

curl -s http://169.254.169.254/latest/dynamic/instance-identity/document

Typical output of this command is:

{
    "devpayProductCodes" : null,
    "marketplaceProductCodes" : [ "1abc2defghijklm3nopqrs4tu" ],
    "availabilityZone" : "us-west-2b",
    "privateIp" : "10.158.112.84",
    "version" : "2017-09-30",
    "instanceId" : "i-1234567890abcdef0",
    "billingProducts" : [ "bp-6ba54002" ],
    "instanceType" : "t2.micro",
    "accountId" : "123456789012",
    "imageId" : "ami-5fb8c835",
    "pendingTime" : "2016-11-19T16:32:11Z",
    "architecture" : "x86_64",
    "kernelId" : null,
    "ramdiskId" : null,
    "region" : "us-west-2"
}
Raises

SkipException -- When content is empty or cannot be parsed.

dict

Parser object is a dictionary that is a direct translation of the input key:value pairs.

json

Input in JSON string format.

Type

str

Examples

>>> print(aws_id_doc['billingProducts'][0])
bp-6ba54002
>>> 'version' in aws_id_doc
True
>>> print(aws_id_doc['version'])
2017-09-30
parse_content(content)[source]

Parse output of command.

class insights.parsers.aws_instance_id.AWSInstanceIdPkcs7(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class for parsing the AWS Instance Identity PKCS7 signature returned by the command:

curl -s http://169.254.169.254/latest/dynamic/instance-identity/pkcs7

Typical output of this command is:

MIICiTCCAfICCQD6m7oRw0uXOjANBgkqhkiG9w0BAQUFADCBiDELMAkGA1UEBhMC
VVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6
b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAd
BgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5jb20wHhcNMTEwNDI1MjA0NTIxWhcN
MTIwNDI0MjA0NTIxWjCBiDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYD
VQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6b24xFDASBgNVBAsTC0lBTSBDb25z
b2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAdBgkqhkiG9w0BCQEWEG5vb25lQGFt
YXpvbi5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMaK0dn+a4GmWIWJ
21uUSfwfEvySWtC2XADZ4nB+BLYgVIk60CpiwsZ3G93vUEIO3IyNoH/f0wYK8m9T
rDHudUZg3qX4waLG5M43q7Wgc/MbQITxOUSQv7c7ugFFDzQGBzZswY6786m86gpE
Ibb3OhjZnzcvQAaRHhdlQWIMm2nrAgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAtCu4
nUhVVxYUntneD9+h8Mg9q6q+auNKyExzyLwaxlAoo7TJHidbtS4J5iNmZgXL0Fkb
FFBjvSfpJIlJ00zbhNYS5f6GuoEDmFJl0ZxBHjJnyp378OD8uTs7fLvjx79LjSTb
NYiytVbZPQUQ5Yaxu2jXnimvw3rrszlaEXAMPLE
Raises

SkipException -- When content is empty or cannot be parsed.

signature

PKCS7 signature string including header and footer.

Type

str

Examples

>>> aws_id_sig.signature.startswith('-----BEGIN PKCS7-----\nMIICiTCCAfICCQD6m7oRw0uXOjANBgkqhkiG9w0BAQUFADCBiDELMAkGA1UEBhMC\n')
True
>>> aws_id_sig.signature.endswith('NYiytVbZPQUQ5Yaxu2jXnimvw3rrszlaEXAMPLE\n-----END PKCS7-----')
True
parse_content(content)[source]

Parse output of command.

AWSInstanceType

This parser simply reads the output of command curl -s http://169.254.169.254/latest/meta-data/instance-type, which is used to check the type of the AWS instance on the host.

class insights.parsers.aws_instance_type.AWSInstanceType(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class for parsing the AWS Instance type returned by command curl -s http://169.254.169.254/latest/meta-data/instance-type

Typical output of this command is:

r3.xlarge
Raises

SkipException -- When content is empty or cannot be parsed.

type

The name of AWS instance type in all uppercase letters. E.g. R3, R4, R5, or X1.

Type

str

raw

The full type string returned by the curl command.

Type

str

Examples

>>> aws_inst.type
'R3'
>>> aws_inst.raw
'r3.xlarge'
parse_content(content)[source]

This method must be implemented by classes based on this class.
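The type/raw split shown in the attributes above can be sketched outside the framework. `parse_instance_type` below is a hypothetical helper mirroring the documented behaviour, not part of insights-core:

```python
def parse_instance_type(raw):
    """Split an AWS instance-type string such as 'r3.xlarge' into the
    uppercased family name ('R3') and the raw value."""
    raw = raw.strip()
    if not raw:
        raise ValueError("empty instance-type content")
    return {"type": raw.split(".")[0].upper(), "raw": raw}
```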

AzureInstanceType

This parser reads the output of the command curl -s -H Metadata:true http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2018-10-01&format=text, which is used to check the type of the Azure instance on the host.

For more details, See: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes

class insights.parsers.azure_instance_type.AzureInstanceType(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class for parsing the Azure instance type returned by the command curl -s -H Metadata:true http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2018-10-01&format=text

Typical output of this command is:

Standard_L64s_v2
Raises
type

The type of the VM instance in Azure, e.g. Standard

Type

str

size

The size of the VM instance in Azure, e.g. L64s, NC12s

Type

str

version

The version of the VM instance in Azure, e.g. v2, v3; None when the type string carries no version

Type

str

raw

The full type string returned by the curl command

Type

str

Examples

>>> azure_inst.type
'Standard'
>>> azure_inst.size
'L64s'
>>> azure_inst.version
'v2'
>>> azure_inst.raw
'Standard_L64s_v2'
parse_content(content)[source]

This method must be implemented by classes based on this class.
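The type/size/version split described in the attributes above follows directly from the underscore-separated vmSize string. `parse_vm_size` is a hypothetical standalone sketch of that logic, not part of insights-core:

```python
def parse_vm_size(raw):
    """Split an Azure vmSize string such as 'Standard_L64s_v2' into
    type, size and version; version is None when absent."""
    parts = raw.strip().split("_")
    return {
        "type": parts[0],
        "size": parts[1] if len(parts) > 1 else None,
        "version": parts[2] if len(parts) > 2 else None,
        "raw": raw.strip(),
    }
```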

BlockIDInfo - command blkid

This module provides the class BlockIDInfo which processes blkid command output. Typical output looks like:

/dev/sda1: UUID="3676157d-f2f5-465c-a4c3-3c2a52c8d3f4" TYPE="xfs"
/dev/sda2: UUID="UVTk76-UWOc-vk7s-galL-dxIP-4UXO-0jG4MH" TYPE="LVM2_member"
/dev/mapper/rhel_hp--dl160g8--3-root: UUID="11124c1d-990b-4277-9f74-c5a34eb2cd04" TYPE="xfs"
/dev/mapper/rhel_hp--dl160g8--3-swap: UUID="c7c45f2d-1d1b-4cf0-9d51-e2b0046682f8" TYPE="swap"
/dev/mapper/rhel_hp--dl160g8--3-home: UUID="c7116820-f2de-4aee-8ea6-0b23c6491598" TYPE="xfs"
/dev/mapper/rhel_hp--dl160g8--3-lv_test: UUID="d403bcbd-0eea-4bff-95b9-2237740f5c8b" TYPE="ext4"
/dev/cciss/c0d1p3: LABEL="/u02" UUID="004d0ca3-373f-4d44-a085-c19c47da8b5e" TYPE="ext3"
/dev/cciss/c0d1p2: LABEL="/u01" UUID="ffb8b27e-5a3d-434c-b1bd-16cb17b0e325" TYPE="ext3"
/dev/loop0: LABEL="Satellite-5.6.0 x86_64 Disc 0" TYPE="iso9660"
/dev/block/253:1: UUID="f8508c37-eeb1-4598-b084-5364d489031f" TYPE="ext3"

The class has one attribute data which is a list representing each line of the input data as a dict with keys corresponding to the keys in the output.

Examples

>>> block_id = shared[BlockIDInfo]
>>> block_id.data[0]
{'NAME': '/dev/sda1', 'UUID': '3676157d-f2f5-465c-a4c3-3c2a52c8d3f4', 'TYPE': 'xfs'}
>>> block_id.data[0]['TYPE']
'xfs'
>>> block_id.filter_by_type('ext3')
[{'NAME': '/dev/cciss/c0d1p3', 'LABEL': '/u02', 'UUID': '004d0ca3-373f-4d44-a085-c19c47da8b5e',
  'TYPE': 'ext3'},
 {'NAME': '/dev/block/253:1', 'UUID': 'f8508c37-eeb1-4598-b084-5364d489031f','TYPE': 'ext3'},
 {'NAME': '/dev/cciss/c0d1p2', 'LABEL': '/u01', 'UUID': 'ffb8b27e-5a3d-434c-b1bd-16cb17b0e325',
  'TYPE': 'ext3'}]
class insights.parsers.blkid.BlockIDInfo(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class to process the blkid command output.

data

A list containing a dictionary for each line of the output in the form:

[
    {
        'NAME': "/dev/sda1"
        'UUID': '3676157d-f2f5-465c-a4c3-3c2a52c8d3f4',
        'TYPE': 'xfs'
    },
    {
        'NAME': "/dev/cciss/c0d1p3",
        'LABEL': '/u02',
        'UUID': '004d0ca3-373f-4d44-a085-c19c47da8b5e',
        'TYPE': 'ext3'
    }
]
Type

list

filter_by_type(fs_type)[source]

list: Returns a list of all entries where TYPE = fs_type.

parse_content(content)[source]

This method must be implemented by classes based on this class.
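Each blkid line is a device path followed by KEY="value" attributes, which maps naturally onto the per-line dicts shown above. `parse_blkid_line` is a hypothetical sketch of that per-line parse, not the insights-core implementation:

```python
import re

def parse_blkid_line(line):
    """Parse one line of `blkid` output into a dict: the device path is
    stored under 'NAME', and each KEY="value" attribute becomes an entry."""
    name, _, attrs = line.partition(":")
    entry = {"NAME": name.strip()}
    entry.update(re.findall(r'(\S+)="([^"]*)"', attrs))
    return entry
```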

Bond - file /proc/net/bonding

Provides plugins with access to the network bonding information gathered from all files starting with “bond.” located in the /proc/net/bonding directory.

Typical content of bond.* file is:

Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 500
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Active Aggregator Info:
        Aggregator ID: 3
        Number of ports: 1
        Actor Key: 17
        Partner Key: 1
        Partner Mac Address: 00:00:00:00:00:00

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:16:35:5e:42:fc
Aggregator ID: 3

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:16:35:5e:02:7e
Aggregator ID: 2

Data is modeled as an array of Bond objects (bond being a pattern file specification gathering data from files located in /proc/net/bonding).

Examples

>>> type(bond_info)
<class 'insights.parsers.bond.Bond'>
>>> bond_info.bond_mode
'4'
>>> bond_info.partner_mac_address
'00:00:00:00:00:00'
>>> bond_info.slave_interface
['eth1', 'eth2']
>>> bond_info.aggregator_id
['3', '3', '2']
>>> bond_info.xmit_hash_policy
'layer2'
>>> bond_info.active_slave
>>> bond_info.slave_duplex
['full', 'full']
>>> bond_info.slave_speed
['1000 Mbps', '1000 Mbps']
class insights.parsers.bond.Bond(context)[source]

Bases: insights.core.Parser

Models the /proc/net/bonding file.

Currently used information from /proc/net/bonding includes the “bond mode” and “partner mac address”.

property active_slave

Returns the value of the “Currently Active Slave” key in the bond file, if it exists. If the key is not in the bond file, None is returned.

property aggregator_id

Returns all the aggregator IDs in the bond file, wrapped in a list, if the key/value exists. If the key is not in the bond file, [] is returned.

property arp_ip_target

Returns the arp ip target as a string. None is returned if no “ARP IP target/s (n.n.n.n form)” key is found.

property arp_polling_interval

Returns the arp polling interval as a string. None is returned if no “ARP Polling Interval (ms)” key is found.

property bond_mode

Returns the bond mode number as a string, or if there is no known mapping to a number, the raw “Bonding Mode” value. None is returned if no “Bonding Mode” key is found.

property mii_status

Returns the “MII Status” values of the master and all the slaves in the bond file, wrapped in a list, if the key/value exists. If the key is not in the bond file, [] is returned.

parse_content(content)[source]

This method must be implemented by classes based on this class.

property partner_mac_address

Returns the value of the “Partner Mac Address” in the bond file if the key/value exists. If the key is not in the bond file, None is returned.

property primary_slave

Returns the “Primary Slave” in the bond file if key/value exists. If the key is not in the bond file, None is returned.

property slave_duplex

Returns all the slave “Duplex” values in the bond file, wrapped in a list, if the key/value exists. If the key is not in the bond file, [] is returned.

property slave_interface

Returns all the slave interfaces in the bond file, wrapped in a list, if the key/value exists. If the key is not in the bond file, [] is returned.

Returns all the slaves’ “Link Failure Count” values in the bond file, wrapped in a list, if the key/value exists. If the key is not in the bond file, [] is returned.

property slave_speed

Returns all the slaves’ “Speed” values in the bond file, wrapped in a list, if the key/value exists. If the key is not in the bond file, [] is returned.
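Properties like slave_interface above collect every occurrence of a key from the bond file. `slave_interfaces` is a hypothetical standalone sketch of that pattern, operating on the file content split into lines:

```python
def slave_interfaces(lines):
    """Collect the values of every 'Slave Interface' key from
    /proc/net/bonding file content (a list of lines)."""
    return [line.split(":", 1)[1].strip()
            for line in lines
            if line.strip().startswith("Slave Interface:")]
```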

BondDynamicLB - file /sys/class/net/bond[0-9]*/bonding/tlb_dynamic_lb

This file indicates whether transmit load balancing is enabled.

tlb_dynamic_lb=1 mode:

The outgoing traffic is distributed according to the current load.

tlb_dynamic_lb=0 mode:

The load balancing based on current load is disabled and the load is distributed only using the hash distribution.

Typical content of the file is:

1

Data is modeled as an array of BondDynamicLB objects

Examples

>>> type(tlb_bond)
<class 'insights.parsers.bond_dynamic_lb.BondDynamicLB'>
>>> tlb_bond.dynamic_lb_status
1
>>> tlb_bond.bond_name
'bond0'
class insights.parsers.bond_dynamic_lb.BondDynamicLB(context)[source]

Bases: insights.core.Parser

Models the /sys/class/net/bond[0-9]*/bonding/tlb_dynamic_lb file.

0 - Hash based load balancing.

1 - Load based load balancing.

Raises
property bond_name

Name of bonding interface

Type

str

property dynamic_lb_status

Load balancer type

Type

int

parse_content(content)[source]

This method must be implemented by classes based on this class.
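The file holds a single flag, and the bond name comes from the sysfs path. `tlb_status` is a hypothetical sketch of that derivation under the path layout named in the section title:

```python
def tlb_status(path, content):
    """Return the bond name (taken from the sysfs path) and the
    tlb_dynamic_lb flag as an int: 0 = hash based, 1 = load based."""
    bond_name = path.split("/sys/class/net/")[1].split("/")[0]
    return bond_name, int(content.strip())
```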

BrctlShow - command brctl show

This module provides processing for the output of the brctl show command.

Class BrctlShow parses the output of the brctl show command. Sample output of this command looks like:

---
bridge name     bridge id               STP enabled     interfaces
br0             8000.08002731ddfd       no              eth1
                                                        eth2
                                                        eth3
br1             8000.0800278cdb62       no              eth4
                                                        eth5
br2             8000.0800278cdb63       no              eth6
docker0         8000.0242d4cf2112       no
---

Examples

>>> brctl_content = '''
... bridge name     bridge id               STP enabled     interfaces
... br0             8000.08002731ddfd       no              eth1
...                                                         eth2
...                                                         eth3
... br1             8000.0800278cdb62       no              eth4
...                                                         eth5
... br2             8000.0800278cdb63       no              eth6
... docker0         8000.0242d4cf2112       no
... '''.strip()
>>> from insights.parsers.brctl_show import BrctlShow
>>> from insights.tests import context_wrap
>>> shared = {BrctlShow: BrctlShow(context_wrap(brctl_content))}
>>> brctl_info = BrctlShow(context_wrap(brctl_content))
>>> brctl_info.data
[
 {'interfaces': ['eth1', 'eth2', 'eth3'], 'bridge id': '8000.08002731ddfd',
  'STP enabled': 'no', 'bridge name': 'br0'},
 {'interfaces': ['eth4', 'eth5'], 'bridge id': '8000.0800278cdb62',
  'STP enabled': 'no', 'bridge name': 'br1'},
 {'interfaces': ['eth6'], 'bridge id': '8000.0800278cdb63',
  'STP enabled': 'no', 'bridge name': 'br2'},
 {'bridge id': '8000.0242d4cf2112', 'STP enabled': 'no',
  'bridge name': 'docker0'}
]
>>> brctl_info.group_by_iface
{
 'docker0': {'STP enabled': 'no', 'bridge id': '8000.0242d4cf2112'},
 'br2': {'interfaces': ['eth6'], 'STP enabled': 'no',
         'bridge id': '8000.0800278cdb63'},
 'br1': {'interfaces': ['eth4', 'eth5'], 'STP enabled': 'no',
         'bridge id': '8000.0800278cdb62'},
 'br0': {'interfaces': ['eth1', 'eth2', 'eth3'], 'STP enabled': 'no',
         'bridge id': '8000.08002731ddfd'}
}
class insights.parsers.brctl_show.BrctlShow(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse the output of the command “brctl show” to get the bridge interface info table.

property group_by_iface

Returns a dict keyed by the bridge name; each value is a dict with the keys bridge id, STP enabled and interfaces.

Type

dict

parse_content(content)[source]

This method must be implemented by classes based on this class.
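The interesting wrinkle in brctl show output is that extra interfaces continue on indented lines belonging to the previous bridge. `parse_brctl` is a hypothetical sketch of handling those continuation lines, not the insights-core implementation:

```python
def parse_brctl(lines):
    """Parse `brctl show` output. Continuation lines (leading whitespace)
    add interfaces to the most recent bridge entry."""
    bridges = []
    for line in lines[1:]:          # skip the header line
        if line[:1].isspace():
            bridges[-1]["interfaces"].append(line.strip())
        else:
            fields = line.split()
            bridges.append({"bridge name": fields[0],
                            "bridge id": fields[1],
                            "STP enabled": fields[2],
                            "interfaces": fields[3:]})
    return bridges
```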

catalina_log - Log files for Tomcat

Note

The Tomcat log files are collected from the directory specified in the Java command line.

class insights.parsers.catalina_log.CatalinaOut(context)[source]

Bases: insights.core.LogFileOutput

This parser reads all catalina.out files in the /var/log/tomcat* directories.

Note

Please refer to its super-class insights.core.LogFileOutput

Note that the standard format of Catalina log lines spreads the information over two lines:

Nov 10, 2015 8:52:38 AM org.apache.jk.common.MsgAjp processHeader
SEVERE: BAD packet signature 18245
Nov 10, 2015 8:52:38 AM org.apache.jk.common.ChannelSocket processConnection
SEVERE: Error, processing connection
SEVERE: BAD packet signature 18245
Nov 10, 2015 8:52:38 AM org.apache.jk.common.ChannelSocket processConnection
SEVERE: Error, processing connection
Nov 10, 2015 4:55:48 PM org.apache.coyote.http11.Http11Protocol pause
INFO: Pausing Coyote HTTP/1.1 on http-8080

However, this parser only recognises single lines.

When using this parser, consider using a filter or a scan method, e.g.:

CatalinaOut.filters.append('BAD packet signature')
CatalinaOut.keep_scan('bad_signatures', 'BAD packet signature')

Example

>>> type(out)
<class 'insights.parsers.catalina_log.CatalinaOut'>
>>> out.file_path
'/var/log/tomcat/catalina.out'
>>> out.file_name
'catalina.out'
>>> len(out.get('SEVERE'))
4
>>> 'Http11Protocol pause' in out
True
>>> out.lines[0]
'Nov 10, 2015 8:52:38 AM org.apache.jk.common.MsgAjp processHeader'
>>> from datetime import datetime
>>> list(out.get_after(datetime(2015, 11, 10, 12, 00, 00)))[0]['raw_message']
'Nov 10, 2015 4:55:48 PM org.apache.coyote.http11.Http11Protocol pause'
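The get_after() call above relies on the leading timestamp of each Catalina line; continuation lines (like the SEVERE: lines) carry none. A hypothetical sketch of that timestamp parse, independent of insights-core:

```python
from datetime import datetime

def parse_catalina_timestamp(line):
    """Parse the leading 'Nov 10, 2015 8:52:38 AM' timestamp of a
    Catalina log line; returns None for continuation lines."""
    try:
        return datetime.strptime(" ".join(line.split()[:5]),
                                 "%b %d, %Y %I:%M:%S %p")
    except ValueError:
        return None
```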
class insights.parsers.catalina_log.CatalinaServerLog(context)[source]

Bases: insights.core.LogFileOutput

Read the tomcat server log file.

Note that the standard format of Catalina log lines spreads the information over two lines:

INFO: Command line argument: -Djava.io.tmpdir=/var/cache/tomcat/temp
Nov 28, 2017 2:11:20 PM org.apache.catalina.startup.VersionLoggerListener log
INFO: Command line argument: -Djava.util.logging.config.file=/usr/share/tomcat/conf/logging.properties
Nov 28, 2017 2:11:20 PM org.apache.catalina.startup.VersionLoggerListener log
INFO: Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
Nov 28, 2017 2:11:20 PM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Nov 28, 2017 2:11:22 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-bio-18080"]
Nov 28, 2017 2:11:23 PM org.apache.coyote.AbstractProtocol init
SEVERE: Failed to initialize end point associated with ProtocolHandler ["http-bio-18080"]

However, this parser only recognises single lines. Please refer to its super-class insights.core.LogFileOutput.

When using this parser, consider using a filter or a scan method, e.g.:

CatalinaServerLog.filters.append('initializing ProtocolHandler')
CatalinaServerLog.keep_scan('init_pro', 'initializing ProtocolHandler')

Examples

>>> type(log)
<class 'insights.parsers.catalina_log.CatalinaServerLog'>
>>> log.file_path
'/var/log/tomcat/catalina.2017-11-28.log'
>>> log.file_name
'catalina.2017-11-28.log'
>>> log.get('Failed to initialize')[0]['raw_message']
'SEVERE: Failed to initialize end point associated with ProtocolHandler ["http-bio-18080"]'
>>> '/var/cache/tomcat/temp' in log
True
>>> from datetime import datetime
>>> list(log.get_after(datetime(2017, 11, 28, 14, 11, 21)))[0]['raw_message']
'Nov 28, 2017 2:11:22 PM org.apache.coyote.AbstractProtocol init'

Cciss - Files /proc/driver/cciss/cciss*

Reads the /proc/driver/cciss/cciss* files and converts them into a dictionary in the data property.

Example

>>> cciss = shared[Cciss]
>>> cciss.data['Logical drives']
'1'
>>> 'IRQ' in cciss.data
True
>>> cciss.model
'HP Smart Array P220i Controller'
>>> cciss.firmware_version
'3.42'
class insights.parsers.cciss.Cciss(context)[source]

Bases: insights.core.Parser

Class for parsing the content of /proc/driver/cciss/cciss*

Raw Data:

cciss0: HP Smart Array P220i Controller
Board ID: 0x3355103c
Firmware Version: 3.42
IRQ: 82
Logical drives: 1
Sector size: 8192
Current Q depth: 0
Current # commands on controller: 0
Max Q depth since init: 84
Max # commands on controller since init: 111
Max SG entries since init: 128
Sequential access devices: 0

cciss/c0d0:  299.96GB   RAID 1(1+0)

Output:

data = {
    "Sequential access devices": "0",
    "Current Q depth": "0",
    "cciss0": "HP Smart Array P220i Controller",
    "Board ID": "0x3355103c",
    "IRQ": "82",
    "cciss/c0d0": "299.96GB   RAID 1(1+0)",
    "Logical drives": "1",
    "Current # commands on controller": "0",
    "Sector size": "8192",
    "Firmware Version": "3.42",
    "Max # commands on controller since init": "111",
    "Max SG entries since init": "128",
    "Max Q depth since init": "84"
}
property firmware_version

Return the Firmware Version.

property model

Return the full model name of the cciss device.

parse_content(content)[source]

This method must be implemented by classes based on this class.
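The raw data above is a flat set of "key: value" lines, which explains the dictionary shown as output. `parse_cciss` is a hypothetical sketch of that transformation, not the insights-core implementation:

```python
def parse_cciss(lines):
    """Parse /proc/driver/cciss/cciss* content; every non-blank line is
    treated as a 'key: value' pair."""
    data = {}
    for line in lines:
        if ":" in line:
            key, value = line.split(":", 1)
            data[key.strip()] = value.strip()
    return data
```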

CeilometerConf - file /etc/ceilometer/ceilometer.conf

The /etc/ceilometer/ceilometer.conf file is in a standard ‘.ini’ format, and this parser uses the IniConfigFile base class to read this.

Given a file containing the following test data:

[DEFAULT]
#
# From ceilometer
http_timeout = 600
debug = False
verbose = False
log_dir = /var/log/ceilometer
meter_dispatcher=database
event_dispatcher=database
[alarm]
evaluation_interval = 60
evaluation_service=ceilometer.alarm.service.SingletonAlarmService
partition_rpc_topic=alarm_partition_coordination
[api]
port = 8777
host = 192.0.2.10
[central]
[collector]
udp_address = 0.0.0.0
udp_port = 4952
[compute]
[coordination]
backend_url = redis://:chDWmHdH8dyjsmpCWfCEpJR87@192.0.2.7:6379/

Example

>>> config = shared[CeilometerConf]
>>> config.sections()
['DEFAULT', 'alarm', 'api', 'central', 'collector', 'compute', 'coordination']
>>> config.items('api')
['port', 'host']
>>> config.has_option('alarm', 'evaluation_interval')
True
>>> config.get('coordination', 'backend_url')
'redis://:chDWmHdH8dyjsmpCWfCEpJR87@192.0.2.7:6379/'
>>> config.getint('collector', 'udp_port')
4952
>>> config.getboolean('DEFAULT', 'debug')
False
class insights.parsers.ceilometer_conf.CeilometerConf(context)[source]

Bases: insights.core.IniConfigFile

A dict of the content of the ceilometer.conf configuration file.

Example selection of dictionary contents:

{
    "DEFAULT": {
        "http_timeout":"600",
        "debug": "False"
     },
    "api": {
        "port":"8777",
    },
}
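Since ceilometer.conf is a plain ‘.ini’ file, the accessors shown above (get, getint, getboolean) behave much like the standard library’s configparser. A minimal sketch using a shortened sample of the test data above (configparser is Python 3; insights-core’s own IniConfigFile is not shown here):

```python
import configparser

SAMPLE = """\
[DEFAULT]
debug = False
[api]
port = 8777
host = 192.0.2.10
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)
print(config.getint("api", "port"))           # 8777
print(config.getboolean("DEFAULT", "debug"))  # False
```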

Ceilometer logs

Module for parsing the log files for Ceilometer

CeilometerCentralLog - file /var/log/ceilometer/central.log

CeilometerCollectorLog - file /var/log/ceilometer/collector.log

CeilometerComputeLog - file /var/log/ceilometer/compute.log

Note

Please refer to the super-class insights.core.LogFileOutput

class insights.parsers.ceilometer_log.CeilometerCentralLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing /var/log/ceilometer/central.log file.

Typical content of central.log file is:

2016-11-09 14:38:08.484 31654 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
2016-11-09 14:38:09.711 31723 INFO ceilometer.declarative [-] Definitions: {'metric': [{'snmp_inspector': {'oid': '1.3.6.1.4.1.2021.10.1.3.1', 'type': 'lambda x: float(str(x))', 'matching_type': 'type_exact'}, 'type': 'gauge', 'name': 'hardware.cpu.load.1min', 'unit': 'process'}, {'snmp_inspector': {'oid': '1.3.6.1.4.1.2021.10.1.3.2', 'type': 'lambda x: float(str(x))', 'matching_type': 'type_exact'}, 'type': 'gauge', 'name': 'hardware.cpu.load.5min', 'unit': 'process'}, {'snmp_inspector': {'oid': '1.3.6.1.4.1.2021.10.1.3.3', 'type': 'lambda x: float(str(x))', 'matching_type': 'type_exact'}, 'type': 'gauge', 'name': 'hardware.cpu.load.15min', 'unit': 'process'}, {'snmp_inspector': {'oid': '1.3.6.1.4.1.2021.11.9.0', 'type': 'int', 'matching_type': 'type_exact'}, 'type': 'gauge', 'name': 'hardware.cpu.util', 'unit': '%'}, {'snmp_inspector': {'post_op': '_post_op_disk', 'oid': '1.3.6.1.4.1.2021.9.1.6', 'type': 'int', 'matching_type': 'type_prefix', 'metadata': {'device': {'oid': '1.3.6.1.4.1.2021.9.1.3', 'type': 'str'}, 'path': {'oid': '1.3.6.1.4.1.2021.9.1.2', 'type': 'str'}}}, 'type': 'gauge', 'name': 'hardware.disk.size.total', 'unit': 'KB'}, {'snmp_inspector': {'post_op': '_post_op_disk', 'oid': '1.3.6.1.4.1.2021.9.1.8', 'type': 'int', 'matching_type': 'type_prefix', 'metadata': {'device': {'oid': '1.3.6.1.4.1.2021.9.1.3', 'type': 'str'}, 'path': {'oid': '1.3.6.1.4.1.2021.9.1.2', 'type': 'str'}}}, 'type': 'gauge', 'name': 'hardware.disk.size.used', 'unit': 'KB'}, {'snmp_inspector': {'oid': '1.3.6.1.4.1.2021.4.5.0', 'type': 'int', 'matching_type': 'type_exact'}, 'type': 'gauge', 'name': 'hardware.memory.total', 'unit': 'KB'}, {'snmp_inspector': {'oid': '1.3.6.1.4.1.2021.4.6.0', 'type': 'int', 'matching_type': 'type_exact', 'post_op': '_post_op_memory_avail_to_used'}, 'type': 'gauge', 'name': 'hardware.memory.used', 'unit': 'KB'}, {'snmp_inspector': {'oid': '1.3.6.1.4.1.2021.4.3.0', 'type': 'int', 'matching_type': 'type_exact'}, 'type': 'gauge', 'name': 
'hardware.memory.swap.total', 'unit': 'KB'}, {'snmp_inspector': {'oid': '1.3.6.1.4.1.2021.4.4.0', 'type': 'int', 'matching_type': 'type_exact'}, 'type': 'gauge', 'name': 'hardware.memory.swap.avail', 'unit': 'KB'}, {'snmp_inspector': {'oid': '1.3.6.1.4.1.2021.4.14.0', 'type': 'int', 'matching_type': 'type_exact'}, 'type': 'gauge', 'name': 'hardware.memory.buffer', 'unit': 'KB'}, {'snmp_inspector': {'oid': '1.3.6.1.4.1.2021.4.15.0', 'type': 'int', 'matching_type': 'type_exact'}, 'type': 'gauge', 'name': 'hardware.memory.cached', 'unit': 'KB'}, {'snmp_inspector': {'post_op': '_post_op_net', 'oid': '1.3.6.1.2.1.2.2.1.10', 'type': 'int', 'matching_type': 'type_prefix', 'metadata': {'mac': {'oid': '1.3.6.1.2.1.2.2.1.6', 'type': "lambda x: x.prettyPrint().replace('0x', '')"}, 'speed': {'oid': '1.3.6.1.2.1.2.2.1.5', 'type': 'lambda x: int(x) / 8'}, 'name': {'oid': '1.3.6.1.2.1.2.2.1.2', 'type': 'str'}}}, 'type': 'cumulative', 'name': 'hardware.network.incoming.bytes', 'unit': 'B'}, {'snmp_inspector': {'post_op': '_post_op_net', 'oid': '1.3.6.1.2.1.2.2.1.16', 'type': 'int', 'matching_type': 'type_prefix', 'metadata': {'mac': {'oid': '1.3.6.1.2.1.2.2.1.6', 'type': "lambda x: x.prettyPrint().replace('0x', '')"}, 'speed': {'oid': '1.3.6.1.2.1.2.2.1.5', 'type': 'lambda x: int(x) / 8'}, 'name': {'oid': '1.3.6.1.2.1.2.2.1.2', 'type': 'str'}}}, 'type': 'cumulative', 'name': 'hardware.network.outgoing.bytes', 'unit': 'B'}, {'snmp_inspector': {'post_op': '_post_op_net', 'oid': '1.3.6.1.2.1.2.2.1.20', 'type': 'int', 'matching_type': 'type_prefix', 'metadata': {'mac': {'oid': '1.3.6.1.2.1.2.2.1.6', 'type': "lambda x: x.prettyPrint().replace('0x', '')"}, 'speed': {'oid': '1.3.6.1.2.1.2.2.1.5', 'type': 'lambda x: int(x) / 8'}, 'name': {'oid': '1.3.6.1.2.1.2.2.1.2', 'type': 'str'}}}, 'type': 'cumulative', 'name': 'hardware.network.outgoing.errors', 'unit': 'packet'}, {'snmp_inspector': {'oid': '1.3.6.1.2.1.4.10.0', 'type': 'int', 'matching_type': 'type_exact'}, 'type': 'cumulative', 
'name': 'hardware.network.ip.outgoing.datagrams', 'unit': 'datagrams'}, {'snmp_inspector': {'oid': '1.3.6.1.2.1.4.3.0', 'type': 'int', 'matching_type': 'type_exact'}, 'type': 'cumulative', 'name': 'hardware.network.ip.incoming.datagrams', 'unit': 'datagrams'}, {'snmp_inspector': {'oid': '1.3.6.1.4.1.2021.11.11.0', 'type': 'int', 'matching_type': 'type_exact'}, 'type': 'gauge', 'name': 'hardware.system_stats.cpu.idle', 'unit': '%'}, {'snmp_inspector': {'oid': '1.3.6.1.4.1.2021.11.57.0', 'type': 'int', 'matching_type': 'type_exact'}, 'type': 'cumulative', 'name': 'hardware.system_stats.io.outgoing.blocks', 'unit': 'blocks'}, {'snmp_inspector': {'oid': '1.3.6.1.4.1.2021.11.58.0', 'type': 'int', 'matching_type': 'type_exact'}, 'type': 'cumulative', 'name': 'hardware.system_stats.io.incoming.blocks', 'unit': 'blocks'}]}
2016-11-09 14:38:09.986 31723 WARNING oslo_config.cfg [-] Option "rpc_backend" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
2016-11-09 14:38:10.040 31723 INFO ceilometer.pipeline [-] Config file: {'sources': [{'interval': 600, 'meters': ['*'], 'name': 'meter_source', 'sinks': ['meter_sink']}, {'interval': 600, 'meters': ['cpu'], 'name': 'cpu_source', 'sinks': ['cpu_sink', 'cpu_delta_sink']}, {'interval': 600, 'meters': ['disk.read.bytes', 'disk.read.requests', 'disk.write.bytes', 'disk.write.requests', 'disk.device.read.bytes', 'disk.device.read.requests', 'disk.device.write.bytes', 'disk.device.write.requests'], 'name': 'disk_source', 'sinks': ['disk_sink']}, {'interval': 600, 'meters': ['network.incoming.bytes', 'network.incoming.packets', 'network.outgoing.bytes', 'network.outgoing.packets'], 'name': 'network_source', 'sinks': ['network_sink']}], 'sinks': [{'publishers': ['notifier://'], 'transformers': None, 'name': 'meter_sink'}, {'publishers': ['notifier://'], 'transformers': [{'name': 'rate_of_change', 'parameters': {'target': {'scale': '100.0 / (10**9 * (resource_metadata.cpu_number or 1))', 'type': 'gauge', 'name': 'cpu_util', 'unit': '%'}}}], 'name': 'cpu_sink'}, {'publishers': ['notifier://'], 'transformers': [{'name': 'delta', 'parameters': {'target': {'name': 'cpu.delta'}, 'growth_only': True}}], 'name': 'cpu_delta_sink'}, {'publishers': ['notifier://'], 'transformers': [{'name': 'rate_of_change', 'parameters': {'source': {'map_from': {'name': '(disk\.device|disk)\.(read|write)\.(bytes|requests)', 'unit': '(B|request)'}}, 'target': {'map_to': {'name': '\1.\2.\3.rate', 'unit': '\1/s'}, 'type': 'gauge'}}}], 'name': 'disk_sink'}, {'publishers': ['notifier://'], 'transformers': [{'name': 'rate_of_change', 'parameters': {'source': {'map_from': {'name': 'network\.(incoming|outgoing)\.(bytes|packets)', 'unit': '(B|packet)'}}, 'target': {'map_to': {'name': 'network.\1.\2.rate', 'unit': '\1/s'}, 'type': 'gauge'}}}], 'name': 'network_sink'}]}
2016-11-09 14:38:10.041 31723 INFO ceilometer.pipeline [-] detected decoupled pipeline config format
2016-11-09 14:38:10.053 31723 INFO ceilometer.coordination [-] Coordination backend started successfully.
2016-11-09 14:38:10.064 31723 INFO ceilometer.coordination [-] Joined partitioning group central-global
2016-11-09 14:58:10.621 31723 INFO ceilometer.agent.manager [-] Skip pollster switch, no resources found this cycle
2016-11-09 14:58:15.655 31723 INFO ceilometer.agent.manager [-] Skip pollster hardware.memory.used, no resources found this cycle
2016-11-09 14:58:15.656 31723 INFO ceilometer.agent.manager [-] Skip pollster switch.port, no resources found this cycle
2016-11-09 14:58:15.657 31723 INFO ceilometer.agent.manager [-] Skip pollster switch.port.receive.bytes, no resources found this cycle
2016-11-09 14:58:15.657 31723 INFO ceilometer.agent.manager [-] Skip pollster hardware.system_stats.io.incoming.blocks, no resources found this cycle
2016-11-09 14:58:17.027 31723 WARNING ceilometer.neutron_client [-] The resource could not be found.
class insights.parsers.ceilometer_log.CeilometerCollectorLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing /var/log/ceilometer/collector.log file.

Typical content of collector.log file is:

2016-11-09 14:32:40.269 4204 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
2016-11-09 14:32:40.467 4259 INFO ceilometer.declarative [-] Definitions: {'resources': [{'metrics': ['identity.authenticate.success', 'identity.authenticate.pending', 'identity.authenticate.failure', 'identity.user.created', 'identity.user.deleted', 'identity.user.updated', 'identity.group.created', 'identity.group.deleted', 'identity.group.updated', 'identity.role.created', 'identity.role.deleted', 'identity.role.updated', 'identity.project.created', 'identity.project.deleted', 'identity.project.updated', 'identity.trust.created', 'identity.trust.deleted', 'identity.role_assignment.created', 'identity.role_assignment.deleted'], 'archive_policy': 'low', 'resource_type': 'identity'}, {'metrics': ['radosgw.objects', 'radosgw.objects.size', 'radosgw.objects.containers', 'radosgw.api.request', 'radosgw.containers.objects', 'radosgw.containers.objects.size'], 'resource_type': 'ceph_account'}, {'metrics': ['instance', 'memory', 'memory.usage', 'memory.resident', 'vcpus', 'cpu', 'cpu.delta', 'cpu_util', 'disk.root.size', 'disk.ephemeral.size', 'disk.read.requests', 'disk.read.requests.rate', 'disk.write.requests', 'disk.write.requests.rate', 'disk.read.bytes', 'disk.read.bytes.rate', 'disk.write.bytes', 'disk.write.bytes.rate', 'disk.latency', 'disk.iops', 'disk.capacity', 'disk.allocation', 'disk.usage'], 'event_associated_resources': {'instance_network_interface': '{"=": {"instance_id": "%s"}}', 'instance_disk': '{"=": {"instance_id": "%s"}}'}, 'event_delete': 'compute.instance.delete.start', 'attributes': {'display_name': 'resource_metadata.display_name', 'host': 'resource_metadata.host', 'image_ref': 'resource_metadata.image_ref', 'flavor_id': 'resource_metadata.(instance_flavor_id|(flavor.id))', 'server_group': 'resource_metadata.user_metadata.server_group'}, 'event_attributes': {'id': 'payload.instance_id'}, 'resource_type': 'instance'}, {'metrics': ['network.outgoing.packets.rate', 'network.incoming.packets.rate', 'network.outgoing.packets', 
'network.incoming.packets', 'network.outgoing.bytes.rate', 'network.incoming.bytes.rate', 'network.outgoing.bytes', 'network.incoming.bytes'], 'attributes': {'instance_id': 'resource_metadata.instance_id', 'name': 'resource_metadata.vnic_name'}, 'resource_type': 'instance_network_interface'}, {'metrics': ['disk.device.read.requests', 'disk.device.read.requests.rate', 'disk.device.write.requests', 'disk.device.write.requests.rate', 'disk.device.read.bytes', 'disk.device.read.bytes.rate', 'disk.device.write.bytes', 'disk.device.write.bytes.rate', 'disk.device.latency', 'disk.device.iops', 'disk.device.capacity', 'disk.device.allocation', 'disk.device.usage'], 'attributes': {'instance_id': 'resource_metadata.instance_id', 'name': 'resource_metadata.disk_name'}, 'resource_type': 'instance_disk'}, {'metrics': ['image', 'image.size', 'image.download', 'image.serve'], 'attributes': {'container_format': 'resource_metadata.container_format', 'disk_format': 'resource_metadata.disk_format', 'name': 'resource_metadata.name'}, 'event_delete': 'image.delete', 'event_attributes': {'id': 'payload.resource_id'}, 'resource_type': 'image'}, {'metrics': ['hardware.ipmi.node.power', 'hardware.ipmi.node.temperature', 'hardware.ipmi.node.inlet_temperature', 'hardware.ipmi.node.outlet_temperature', 'hardware.ipmi.node.fan', 'hardware.ipmi.node.current', 'hardware.ipmi.node.voltage', 'hardware.ipmi.node.airflow', 'hardware.ipmi.node.cups', 'hardware.ipmi.node.cpu_util', 'hardware.ipmi.node.mem_util', 'hardware.ipmi.node.io_util'], 'resource_type': 'ipmi'}, {'metrics': ['bandwidth', 'network', 'network.create', 'network.update', 'subnet', 'subnet.create', 'subnet.update', 'port', 'port.create', 'port.update', 'router', 'router.create', 'router.update', 'ip.floating', 'ip.floating.create', 'ip.floating.update'], 'resource_type': 'network'}, {'metrics': ['stack.create', 'stack.update', 'stack.delete', 'stack.resume', 'stack.suspend'], 'resource_type': 'stack'}, {'metrics': 
['storage.objects.incoming.bytes', 'storage.objects.outgoing.bytes', 'storage.api.request', 'storage.objects.size', 'storage.objects', 'storage.objects.containers', 'storage.containers.objects', 'storage.containers.objects.size'], 'resource_type': 'swift_account'}, {'metrics': ['volume', 'volume.size', 'volume.create', 'volume.delete', 'volume.update', 'volume.resize', 'volume.attach', 'volume.detach'], 'attributes': {'display_name': 'resource_metadata.display_name'}, 'resource_type': 'volume'}, {'metrics': ['hardware.cpu.load.1min', 'hardware.cpu.load.5min', 'hardware.cpu.load.15min', 'hardware.cpu.util', 'hardware.memory.total', 'hardware.memory.used', 'hardware.memory.swap.total', 'hardware.memory.swap.avail', 'hardware.memory.buffer', 'hardware.memory.cached', 'hardware.network.ip.outgoing.datagrams', 'hardware.network.ip.incoming.datagrams', 'hardware.system_stats.cpu.idle', 'hardware.system_stats.io.outgoing.blocks', 'hardware.system_stats.io.incoming.blocks'], 'attributes': {'host_name': 'resource_metadata.resource_url'}, 'resource_type': 'host'}, {'metrics': ['hardware.disk.size.total', 'hardware.disk.size.used'], 'attributes': {'host_name': 'resource_metadata.resource_url', 'device_name': 'resource_metadata.device'}, 'resource_type': 'host_disk'}, {'metrics': ['hardware.network.incoming.bytes', 'hardware.network.outgoing.bytes', 'hardware.network.outgoing.errors'], 'attributes': {'host_name': 'resource_metadata.resource_url', 'device_name': 'resource_metadata.name'}, 'resource_type': 'host_network_interface'}]}
2016-11-09 14:32:41.099 4259 WARNING oslo_config.cfg [-] Option "max_retries" from group "database" is deprecated. Use option "max_retries" from group "storage".
2016-11-09 14:36:35.464 4204 INFO cotyledon [-] Caught SIGTERM signal, graceful exiting of master process
2016-11-09 14:36:35.465 4259 INFO cotyledon [-] Caught signal (15) during service initialisation, delaying it
2016-11-09 14:38:07.280 31638 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
class insights.parsers.ceilometer_log.CeilometerComputeLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing /var/log/ceilometer/compute.log file.

Typical content of compute.log file is:

2018-01-12 21:00:02.939 49455 INFO ceilometer.agent.manager [-] Polling pollster network.outgoing.packets in the context of some_pollsters
2018-01-12 21:00:02.950 49455 INFO ceilometer.agent.manager [-] Polling pollster memory.usage in the context of some_pollsters
2018-01-12 21:00:02.953 49455 WARNING ceilometer.compute.pollsters.memory [-] Cannot inspect data of MemoryUsagePollster for xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, non-fatal reason: Failed to inspect memory usage of instance <name=instance-name1, id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx>, can not get info from libvirt.
2018-01-12 21:00:02.957 49455 WARNING ceilometer.compute.pollsters.memory [-] Cannot inspect data of MemoryUsagePollster for yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy, non-fatal reason: Failed to inspect memory usage of instance <name=instance-name2, id=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy>, can not get info from libvirt.
2018-01-12 21:00:02.963 49455 WARNING ceilometer.compute.pollsters.memory [-] Cannot inspect data of MemoryUsagePollster for zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz, non-fatal reason: Failed to inspect memory usage of instance <name=instance-name3, id=zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz>, can not get info from libvirt.
2018-01-12 21:00:02.970 49455 INFO ceilometer.agent.manager [-] Polling pollster disk.write.requests in the context of some_pollsters
2018-01-12 21:00:02.976 49455 INFO ceilometer.agent.manager [-] Polling pollster network.incoming.packets in the context of some_pollsters
2018-01-12 21:00:02.981 49455 INFO ceilometer.agent.manager [-] Polling pollster cpu in the context of some_pollsters
2018-01-12 21:00:03.014 49455 INFO ceilometer.agent.manager [-] Polling pollster network.incoming.bytes in the context of some_pollsters
2018-01-12 21:00:03.020 49455 INFO ceilometer.agent.manager [-] Polling pollster disk.read.requests in the context of some_pollsters
2018-01-12 21:00:03.041 49455 INFO ceilometer.agent.manager [-] Polling pollster network.outgoing.bytes in the context of some_pollsters
2018-01-12 21:00:03.062 49455 INFO ceilometer.agent.manager [-] Polling pollster disk.write.bytes in the context of some_pollsters

Ceph status commands

This module provides processing for the output of the following Ceph-related commands with the -f json-pretty parameter.

CephOsdDump - command ceph osd dump -f json-pretty

CephOsdDf - command ceph osd df -f json-pretty

CephS - command ceph -s -f json-pretty

CephDfDetail - command ceph df detail -f json-pretty

CephHealthDetail - command ceph health detail -f json-pretty

CephECProfileGet - command ceph osd erasure-code-profile get default -f json-pretty

CephCfgInfo - command ceph daemon {ceph_socket_files} config show

CephOsdTree - command ceph osd tree -f json-pretty

CephReport - command ceph report

All these parsers are based on a shared class which processes the JSON information into a dictionary.
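The shared behaviour can be approximated with a minimal sketch (the class and variable names here are illustrative, not the actual insights.core implementation):

```python
import json

class JSONCommandParser(object):
    """Minimal sketch of a parser that loads JSON command output
    into a dictionary and exposes item access on the result."""

    def __init__(self, content):
        # content is the raw output of e.g. `ceph osd dump -f json-pretty`
        self.data = json.loads(content)

    def __getitem__(self, key):
        return self.data[key]

    def __contains__(self, key):
        return key in self.data

# A fragment shaped like `ceph osd dump -f json-pretty` output
output = '{"pools": [{"pool": 0, "min_size": 2}]}'
parser = JSONCommandParser(output)
print(parser['pools'][0]['min_size'])  # -> 2
```

The concrete classes below differ only in which command's output they receive; the dictionary access pattern is the same for all of them.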

class insights.parsers.ceph_cmd_json_parsing.CephCfgInfo(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.JSONParser

Class to parse the output of ceph daemon .. config show

Examples:

>>> type(ceph_cfg_info)
<class 'insights.parsers.ceph_cmd_json_parsing.CephCfgInfo'>
>>> ceph_cfg_info.max_open_files == '131072'
True
property max_open_files

Return the value of max_open_files

Type

str

class insights.parsers.ceph_cmd_json_parsing.CephDfDetail(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.JSONParser

Class to parse the output of ceph df detail -f json-pretty.

Examples:

>>> type(ceph_df_detail)
<class 'insights.parsers.ceph_cmd_json_parsing.CephDfDetail'>
>>> ceph_df_detail['stats']['total_avail_bytes']
16910123008
class insights.parsers.ceph_cmd_json_parsing.CephECProfileGet(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.JSONParser

Class to parse the output of ceph osd erasure-code-profile get default -f json-pretty.

Examples:

>>> type(ceph_osd_ec_profile_get)
<class 'insights.parsers.ceph_cmd_json_parsing.CephECProfileGet'>
>>> ceph_osd_ec_profile_get['k'] == '2'
True
class insights.parsers.ceph_cmd_json_parsing.CephHealthDetail(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.JSONParser

Class to parse the output of ceph health detail -f json-pretty.

Examples:

>>> type(ceph_health_detail)
<class 'insights.parsers.ceph_cmd_json_parsing.CephHealthDetail'>
>>> ceph_health_detail["overall_status"] == 'HEALTH_OK'
True
class insights.parsers.ceph_cmd_json_parsing.CephOsdDf(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.JSONParser

Class to parse the output of ceph osd df -f json-pretty.

Examples:

>>> type(ceph_osd_df)
<class 'insights.parsers.ceph_cmd_json_parsing.CephOsdDf'>
>>> ceph_osd_df['nodes'][0]['pgs']
945
class insights.parsers.ceph_cmd_json_parsing.CephOsdDump(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.JSONParser

Class to parse the output of ceph osd dump -f json-pretty.

Examples:

>>> type(ceph_osd_dump)
<class 'insights.parsers.ceph_cmd_json_parsing.CephOsdDump'>
>>> ceph_osd_dump['pools'][0]['min_size']
2
class insights.parsers.ceph_cmd_json_parsing.CephOsdTree(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.JSONParser

Class to parse the output of the command ceph osd tree -f json-pretty.

Examples:

>>> type(ceph_osd_tree)
<class 'insights.parsers.ceph_cmd_json_parsing.CephOsdTree'>
>>> ceph_osd_tree['nodes'][0]['children']
[-5, -4, -3, -2]
class insights.parsers.ceph_cmd_json_parsing.CephReport(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LegacyItemAccess

Class to parse the output of the command ceph report.

Examples:

>>> type(ceph_report_content)
<class 'insights.parsers.ceph_cmd_json_parsing.CephReport'>
>>> ceph_report_content["version"] == '12.2.8-52.el7cp'
True

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.ceph_cmd_json_parsing.CephS(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.JSONParser

Class to parse the output of ceph -s -f json-pretty.

Examples:

>>> type(ceph_s)
<class 'insights.parsers.ceph_cmd_json_parsing.CephS'>
>>> ceph_s['pgmap']['pgs_by_state'][0]['state_name'] == 'active+clean'
True

CephConf - file /etc/ceph/ceph.conf

The CephConf class parses the file /etc/ceph/ceph.conf. The ceph.conf is in the standard ‘ini’ format and is read by the base parser class IniConfigFile.

Sample /etc/ceph/ceph.conf file:

[global]
osd_pool_default_pgp_num = 128
auth_service_required = cephx
mon_initial_members = controller1-az1,controller1-az2,controller1-az3
fsid = a4c3-11e8-99c5-5254003f2830-ea8796d6
cluster_network = 10.xx.xx.xx/23
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 10.xx.xx.xx,xx.xx.xx.xx,10.xx.xx.xx
auth_client_required = cephx
osd_pool_default_size = 3
osd_pool_default_pg_num = 128
ms_bind_ipv6 = false
public_network = 10.xx.xx.xx/23

[osd]
osd_journal_size = 81920

[mon.controller1-az2]
public_addr = 10.xx.xx.xx

[client.radosgw.gateway]
user = apache
rgw_frontends = civetweb port=10.xx.xx.xx:8080
log_file = /var/log/ceph/radosgw.log
host = controller1-az2
keyring = /etc/ceph/ceph.client.radosgw.gateway.keyring
rgw_keystone_implicit_tenants = true
rgw_keystone_token_cache_size = 500
rgw_keystone_url = http://10.xx.xx.xx:35357
rgw_s3_auth_use_keystone = true
rgw_keystone_admin_467fE = Xqzta6dYhPHGHGEFaGnctoken
rgw_keystone_accepted_roles = admin,_member_,Member
rgw_swift_account_in_url = true

Examples

>>> list(conf.sections()) == ['global', 'osd', 'mon.controller1-az2', 'client.radosgw.gateway']
True
>>> conf.has_option('osd', 'osd_journal_size')
True
>>> conf.getboolean('client.radosgw.gateway', 'rgw_swift_account_in_url')
True
>>> conf.get('client.radosgw.gateway', 'rgw_swift_account_in_url') == 'true'
True
>>> conf.get('client.radosgw.gateway', 'user') == 'apache'
True
>>> conf.get('client.radosgw.gateway', 'log_file') == '/var/log/ceph/radosgw.log'
True
class insights.parsers.ceph_conf.CephConf(context)[source]

Bases: insights.core.IniConfigFile

Class for ceph.conf file content.

ceph_insights - command ceph insights

class insights.parsers.ceph_insights.CephInsights(*args, **kwargs)[source]

Bases: insights.core.CommandParser

Parse the output of the ceph insights command.

version

version information of the Ceph cluster.

Type

dict

data

a dictionary of the parsed output.

Type

dict

The data attribute is a dictionary containing the parsed output of the ceph insights command. The following are available in data:

  • crashes - summary of daemon crashes for the past 24 hours

  • health - the current and historical (past 24 hours) health checks

  • config - cluster and daemon configuration settings

  • osd_dump - osd and pool information

  • df - storage usage statistics

  • osd_tree - osd topology

  • fs_map - file system map

  • crush_map - the CRUSH map

  • mon_map - monitor map

  • service_map - service map

  • manager_map - manager map

  • mon_status - monitor status

  • pg_summary - placement group summary

  • osd_metadata - per-OSD metadata

  • version - ceph software version

  • errors - any errors encountered collecting this data

The version attribute contains a normalized view of self.data["version"].

Examples

>>> ceph_insights.version["release"] == 14
True
>>> ceph_insights.version["major"] == 0
True
>>> ceph_insights.version["minor"] == 0
True
>>> isinstance(ceph_insights.data["crashes"], dict)
True
>>> isinstance(ceph_insights.data["health"], dict)
True
parse_content(content)[source]

This method must be implemented by classes based on this class.

CephLog - file /var/log/ceph/ceph.log

class insights.parsers.ceph_log.CephLog(context)[source]

Bases: insights.core.LogFileOutput

Parse the /var/log/ceph/ceph.log file.

Provide access to ceph logs using the LogFileOutput parser class.

Sample log lines:

2017-05-31 13:01:44.034376 mon.0 192.xx.xx.xx:6789/0 742585 : cluster [INF] pgmap v5133969: 320 pgs: 3 active+clean+scrubbing+deep, 317 active+clean; 898 GB data, 1828 GB used, 48447 GB / 50275 GB avail; 2027 kB/s rd, 20215 kB/s wr, 711 op/s
2017-05-31 13:01:45.041760 mon.0 192.xx.xx.xx:6789/0 742586 : cluster [INF] pgmap v5133970: 320 pgs: 3 active+clean+scrubbing+deep, 317 active+clean; 898 GB data, 1828 GB used, 48447 GB / 50275 GB avail; 1606 kB/s rd, 17354 kB/s wr, 718 op/s
2017-05-31 13:01:46.933829 osd.22 192.xx.xx.xx:6814/42154 172581 : cluster [WRN] 44 slow requests, 2 included below; oldest blocked for > 49.982746 secs
2017-05-31 13:01:46.933946 osd.22 192.xx.xx.xx:6814/42154 172582 : cluster [WRN] slow request 30.602517 seconds old, received at 2017-05-31 13:01:06.330484: osd_op(client.3395798.0:2855671 1.54392173 gnocchi_06c8214c-afae-4e64-8a4a-a466c4f257dc_1244160000.0_median_86400.0_v3 [write 26253~9] snapc 0=[] ondisk+write+known_if_redirected e487) currently waiting for subops from 23
2017-05-31 13:01:46.933955 osd.22 192.xx.xx.xx:6814/42154 172583 : cluster [WRN] slow request 30.530961 seconds old, received at 2017-05-31 13:01:06.402041: osd_op(client.324182.0:46141816 1.e637a4b3 measure [omap-rm-keys 0~107] snapc 0=[] ondisk+write+skiprwlocks+known_if_redirected e487) currently waiting for subops from 23
2017-05-31 13:01:47.050539 mon.0 192.xx.xx.xx:6789/0 742589 : cluster [INF] pgmap v5133971: 320 pgs: 3 active+clean+scrubbing+deep, 317 active+clean; 898 GB data, 1828 GB used, 48447 GB / 50275 GB avail; 1597 kB/s rd, 7259 kB/s wr, 398 op/s
2017-05-31 13:01:48.057187 mon.0 192.xx.xx.xx:6789/0 742590 : cluster [INF] pgmap v5133972: 320 pgs: 3 active+clean+scrubbing+deep, 317 active+clean; 898 GB data, 1828 GB used, 48447 GB / 50275 GB avail; 2373 kB/s rd, 5138 kB/s wr, 354 op/s
2017-05-31 13:01:49.064950 mon.0 192.xx.xx.xx:6789/0 742598 : cluster [INF] pgmap v5133973: 320 pgs: 3 active+clean+scrubbing+deep, 317 active+clean; 898 GB data, 1828 GB used, 48447 GB / 50275 GB avail; 4187 kB/s rd, 10266 kB/s wr, 714 op/s
2017-05-31 13:01:50.069437 mon.0 192.xx.xx.xx:6789/0 742599 : cluster [INF] pgmap v5133974: 320 pgs: 3 active+clean+scrubbing+deep, 317 active+clean; 898 GB data, 1828 GB used, 48447 GB / 50275 GB avail; 470 MB/s rd, 11461 kB/s wr, 786 op/s

Examples

>>> len(ceph_log.get("[WRN] slow request")) == 2
True
>>> from datetime import datetime
>>> len(list(ceph_log.get_after(datetime(2017, 5, 31, 13, 1, 46))))
7

CephOsdLog - file /var/log/ceph/ceph-osd.*.log

This is a standard log parser based on the LogFileOutput class.

Sample input:

2015-10-30 09:09:30.334033 7f12c6f8b700  0 -- 10.1.26.72:6851/1003139 >> 10.1.26.64:6800/1005943 pipe(0x14f3b000 sd=464 :6851 s=0 pgs=0 cs=0 l=0 c=0xfe3a840).accept connect_seq 30 vs existing 29 state standby
2015-10-30 10:18:58.266050 7f12e97b1700  0 -- 10.1.26.72:6851/1003139 >> 10.1.26.23:6830/30212 pipe(0x10759000 sd=629 :6851 s=2 pgs=22 cs=1 l=0 c=0x10178160).fault, initiating reconnect

Examples

>>> logs = shared[CephOsdLog]
>>> 'initiating reconnect' in logs
True
>>> logs.get('pipe')
['2015-10-30 09:09:30.334033 7f12c6f8b700  0 -- 10.1.26.72:6851/1003139 >> 10.1.26.64:6800/1005943 pipe(0x14f3b000 sd=464 :6851 s=0 pgs=0 cs=0 l=0 c=0xfe3a840).accept connect_seq 30 vs existing 29 state standby',
 '2015-10-30 10:18:58.266050 7f12e97b1700  0 -- 10.1.26.72:6851/1003139 >> 10.1.26.23:6830/30212 pipe(0x10759000 sd=629 :6851 s=2 pgs=22 cs=1 l=0 c=0x10178160).fault, initiating reconnect']
>>> from datetime import datetime
>>> logs.get_after(datetime(2015, 10, 30, 10, 0, 0))
['2015-10-30 10:18:58.266050 7f12e97b1700  0 -- 10.1.26.72:6851/1003139 >> 10.1.26.23:6830/30212 pipe(0x10759000 sd=629 :6851 s=2 pgs=22 cs=1 l=0 c=0x10178160).fault, initiating reconnect']
class insights.parsers.ceph_osd_log.CephOsdLog(context)[source]

Bases: insights.core.LogFileOutput

Provide access to Ceph OSD logs using the LogFileOutput parser class.

Note

Please refer to the super-class insights.core.LogFileOutput

CephOsdTreeText - command ceph osd tree

class insights.parsers.ceph_osd_tree_text.CephOsdTreeText(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LegacyItemAccess

Class to parse the output of command ceph osd tree.

The typical content is:

ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       0.08752 root default
-9       0.02917     host ceph1
 2   hdd 0.01459         osd.2       up  1.00000 1.00000
 5   hdd 0.01459         osd.5       up  1.00000 1.00000
-5       0.02917     host ceph2
 1   hdd 0.01459         osd.1       up  1.00000 1.00000
 4   hdd 0.01459         osd.4       up  1.00000 1.00000
-3       0.02917     host ceph3
 0   hdd 0.01459         osd.0       up  1.00000 1.00000
 3   hdd 0.01459         osd.3       up  1.00000 1.00000
-7             0     host ceph_1
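One way to recover the node information from the fixed columns above is to classify each row by its field count (a rough sketch under that assumption; the real parser builds the same 'nodes' list that the JSON form of the command provides):

```python
def parse_osd_tree(lines):
    """Sketch: classify each row of `ceph osd tree` output by field count.

    Bucket rows (root/host) have 4 fields: ID WEIGHT TYPE NAME.
    OSD rows have 7 fields: ID CLASS WEIGHT NAME STATUS REWEIGHT PRI-AFF.
    """
    nodes = []
    for line in lines:
        fields = line.split()
        if len(fields) == 4:
            nodes.append({'id': fields[0], 'weight': fields[1],
                          'type': fields[2], 'name': fields[3]})
        elif len(fields) == 7:
            nodes.append({'id': fields[0], 'class': fields[1],
                          'weight': fields[2], 'type': 'osd',
                          'name': fields[3], 'status': fields[4]})
    return nodes

nodes = parse_osd_tree([
    '-1       0.08752 root default',
    ' 2   hdd 0.01459         osd.2       up  1.00000 1.00000',
])
print(nodes[0]['name'], nodes[1]['type'])  # -> default osd
```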

Examples

>>> ceph_osd_tree_text['nodes'][0]['id']
'-1'
>>> ceph_osd_tree_text['nodes'][0]['name']
'default'
>>> ceph_osd_tree_text['nodes'][0]['type']
'root'
>>> ceph_osd_tree_text['nodes'][3]['type']
'osd'
parse_content(content)[source]

This method must be implemented by classes based on this class.

CephVersion - command /usr/bin/ceph -v

This module provides plugins access to the Ceph version information gathered from the ceph -v command, and maps the community version to the Red Hat release version.

The Red Hat Ceph Storage releases and corresponding Ceph package releases are documented in https://access.redhat.com/solutions/2045583

Typical output of the ceph -v command is:

ceph version 0.94.9-9.el7cp (b83334e01379f267fb2f9ce729d74a0a8fa1e92c)
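Extracting the community version fields from such a line can be sketched as follows (a simplified illustration; the actual parser additionally maps the community version to the Red Hat release numbering):

```python
import re

def parse_ceph_version(line):
    """Pull the community version string and git hash out of a
    `ceph -v` line.  Illustrative only."""
    m = re.match(r'ceph version (?P<version>\S+)\s+\((?P<hash>[0-9a-f]+)\)', line)
    if m is None:
        raise ValueError("unrecognised `ceph -v` line: %r" % line)
    return m.groupdict()

info = parse_ceph_version(
    "ceph version 0.94.9-9.el7cp (b83334e01379f267fb2f9ce729d74a0a8fa1e92c)")
print(info['version'])  # -> 0.94.9-9.el7cp
```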

Note

This module can only be used for Ceph.

Example

>>> ceph_ver = shared[CephVersion]
>>> ceph_ver.version
'1.3.3'
>>> ceph_ver.major
'1.3'
>>> ceph_ver.minor
'3'
class insights.parsers.ceph_version.CephVersion(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class for parsing the content of ceph_version.

parse_content(content)[source]

This method must be implemented by classes based on this class.

exception insights.parsers.ceph_version.CephVersionError(message, errors)[source]

Bases: Exception

Exception subclass for errors related to the content data and the CephVersion class.

This exception should not be caught by rules plugins unless it is necessary for the plugin to return a particular answer when a problem occurs with ceph version data. If a plugin catches this exception it must reraise it so that the engine has the opportunity to handle it/log it as necessary.

CertificatesEnddate - command /usr/bin/openssl x509 -noout -enddate -in path/to/cert/file

This command gets the enddates of certificate files.

Typical output of this command is:

/usr/bin/find: '/etc/origin/node': No such file or directory
/usr/bin/find: '/etc/origin/master': No such file or directory
notAfter=May 25 16:39:40 2019 GMT
FileName= /etc/origin/node/cert.pem
unable to load certificate
139881193203616:error:0906D066:PEM routines:PEM_read_bio:bad end line:pem_lib.c:802:
unable to load certificate
140695459370912:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:703:Expecting: TRUSTED CERTIFICATE
notAfter=May 25 16:39:40 2019 GMT
FileName= /etc/pki/ca-trust/extracted/pem/email-ca-bundle.pem
notAfter=Dec  9 10:55:38 2017 GMT
FileName= /etc/pki/consumer/cert.pem
notAfter=Jan  1 04:59:59 2022 GMT
FileName= /etc/pki/entitlement/3343502840335059594.pem
notAfter=Aug 31 02:19:59 2017 GMT
FileName= /etc/pki/consumer/cert.pem
notAfter=Jan  1 04:59:59 2022 GMT
FileName= /etc/pki/entitlement/2387590574974617178.pem
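The pairing of each notAfter= line with the FileName= line that follows it, while skipping error output, can be sketched as (names and structure are illustrative, not the parser's actual code):

```python
from datetime import datetime

def pair_enddates(lines):
    """Walk the openssl output and pair each `notAfter=` value with the
    `FileName=` line that follows it; unparsable dates map to None."""
    certs = {}
    pending = None
    for line in lines:
        if line.startswith('notAfter='):
            pending = line.split('=', 1)[1].strip()
        elif line.startswith('FileName=') and pending is not None:
            path = line.split('=', 1)[1].strip()
            try:
                # drop the trailing timezone token before parsing
                date_str = pending.rsplit(' ', 1)[0]
                when = datetime.strptime(date_str, '%b %d %H:%M:%S %Y')
            except ValueError:
                when = None
            certs[path] = (pending, when)
            pending = None
    return certs

certs = pair_enddates([
    'notAfter=May 25 16:39:40 2019 GMT',
    'FileName= /etc/origin/node/cert.pem',
])
print(certs['/etc/origin/node/cert.pem'][0])  # -> May 25 16:39:40 2019 GMT
```

Any error lines (such as the "unable to load certificate" output above) match neither prefix and are simply ignored.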

Examples

>>> cert_enddate = shared[CertificatesEnddate]
>>> paths = cert_enddate.get_certificates_path
>>> paths[0]
'/etc/origin/node/cert.pem'
>>> cert_enddate.expiration_date(paths[0]).datetime
datetime.datetime(2019, 5, 25, 16, 39, 40)
>>> cert_enddate.expiration_date(paths[0]).str
'May 25 16:39:40 2019'
class insights.parsers.certificates_enddate.CertificatesEnddate(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

Class to parse the expiration dates.

class ExpirationDate(str, datetime)

Bases: tuple

namedtuple: contains the expiration date in string and datetime format.

property datetime
property str
property certificates_path

Return the file paths in a list, or [].

Type

list

expiration_date(path)[source]

This returns a namedtuple([‘str’, ‘datetime’]) containing the expiration date in string and datetime format. If the expiration date is unparsable, ExpirationDate.datetime will be None.

Parameters

path (str) -- The certificate file path.

Returns

An ExpirationDate if the path is available, None otherwise.

parse_content(content)[source]

Parse the content of crt files.

Cgroups - File /proc/cgroups

This parser reads the content of /proc/cgroups, which shows the control group information for the system.

Sample /proc/cgroups file:

#subsys_name        hierarchy       num_cgroups     enabled
cpuset      10      48      1
cpu 2       232     1
cpuacct     2       232     1
memory      5       232     1
devices     6       232     1
freezer     3       48      1
net_cls     4       48      1
blkio       9       232     1
perf_event  8       48      1
hugetlb     11      48      1
pids        7       232     1
net_prio    4       48      1
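Parsing the whitespace-separated table above into a subsystem dictionary can be sketched as follows (an illustration of the idea, not the parser's exact code):

```python
def parse_cgroups(content):
    """Turn /proc/cgroups content into {subsys_name: {column: value}}."""
    lines = content.strip().splitlines()
    # header: subsys_name hierarchy num_cgroups enabled (strip leading '#')
    header = lines[0].lstrip('#').split()
    subsystems = {}
    for line in lines[1:]:
        fields = line.split()
        subsystems[fields[0]] = dict(zip(header[1:], fields[1:]))
    return subsystems

sample = """\
#subsys_name        hierarchy       num_cgroups     enabled
cpuset      10      48      1
memory      5       232     1
"""
subsystems = parse_cgroups(sample)
print(subsystems['memory']['num_cgroups'])  # -> 232
```

Note that the values stay as strings, matching the `'1'` and `'10'` results shown in the examples below.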

Examples

>>> i_cgroups.get_num_cgroups("memory")
232
>>> i_cgroups.is_subsys_enabled("memory")
True
>>> i_cgroups.data[0].get('hierarchy')
'10'
>>> i_cgroups.subsystems["memory"]["enabled"]
'1'
class insights.parsers.cgroups.Cgroups(context)[source]

Bases: insights.core.Parser

Class Cgroups parses the content of the /proc/cgroups.

data

A list of the subsystem cgroup information

Type

list

subsystems

A dict of all subsystems; the key is the subsystem name and the value is a dict with keys: hierarchy, num_cgroups, enabled

Type

dict

get_num_cgroups(i_subsys_name)[source]

Get the number of cgroups for the specified subsystem; raises an exception if the subsystem name is not found.

Example

>>> i_cgroups.get_num_cgroups("memory")
232
>>> i_cgroups.get_num_cgroups('hugetlb')
48
Parameters

i_subsys_name (str) -- specified subsystem name.

Returns

Int value of the specified subsystem cgroups

Return type

value (int)

Raises

KeyError -- Raised if the given subsystem name is not found

is_subsys_enabled(i_subsys_name)[source]

Get whether the cgroup of the specified subsystem is enabled; raises an exception if the subsystem name is not found.

Example

>>> i_cgroups.is_subsys_enabled("memory")
True
>>> i_cgroups.is_subsys_enabled('hugetlb')
True
Parameters

i_subsys_name (str) -- specified subsystem name.

Returns

Returns True if the cgroup of the specified subsystem is enabled, False otherwise

Return type

value (boolean)

Raises

KeyError -- Raised if the given subsystem name is not found

parse_content(content)[source]

This method must be implemented by classes based on this class.

checkin.conf - Files /etc/splice/checkin.conf

Parser for checkin.conf configuration file.

class insights.parsers.checkin_conf.CheckinConf(context)[source]

Bases: insights.core.IniConfigFile

Class for parsing content of “/etc/splice/checkin.conf”.

Sample input:

[logging]
config = /etc/splice/logging/basic.cfg

# this is used only for single-spacewalk deployments
[spacewalk]
# Spacewalk/Satellite server to use for syncing data.
host=
# Path to SSH private key used to connect to spacewalk host.
ssh_key_path=
login=swreport

# these are used for multi-spacewalk deployments
# [spacewalk_one]
# type = ssh
# # Spacewalk/Satellite server to use for syncing data.
# host=
# # Path to SSH private key used to connect to spacewalk host.
# ssh_key_path=
# login=swreport
#
# [spacewalk_two]
# type = file
# # Path to directory containing report output
# path = /path/to/output

[katello]
hostname=localhost
port=443
proto=https
api_url=/sam
admin_user=admin
admin_pass=admin
#autoentitle_systems = False
#flatten_orgs = False

Examples

>>> list(checkin_conf.sections())
[u'logging', u'spacewalk', u'katello']
>>> checkin_conf.get('spacewalk', 'host')
u''

ChkConfig - command chkconfig

class insights.parsers.chkconfig.ChkConfig(*args, **kwargs)[source]

Bases: insights.core.CommandParser

A parser for working with data gathered from the chkconfig utility.

Sample input data is shown as content in the examples below.

Examples

>>> content = '''
... auditd         0:off   1:off   2:on    3:on    4:on    5:on    6:off
... crond          0:off   1:off   2:on    3:on    4:on    5:on    6:off
... iptables       0:off   1:off   2:on    3:on    4:on    5:on    6:off
... kdump          0:off   1:off   2:off   3:on    4:on    5:on    6:off
... restorecond    0:off   1:off   2:off   3:off   4:off   5:off   6:off
... xinetd:        0:off   1:off   2:on    3:on    4:on    5:on    6:off
...         rexec:         off
...         rlogin:        off
...         rsh:           off
...         telnet:        on
... '''
>>> shared[ChkConfig].is_on('crond')
True
>>> shared[ChkConfig].is_on('httpd')
False
>>> shared[ChkConfig].is_on('rexec')
False
>>> shared[ChkConfig].is_on('telnet')
True
>>> shared[ChkConfig].parsed_lines['crond']
'crond          0:off   1:off   2:on    3:on    4:on    5:on    6:off'
>>> shared[ChkConfig].parsed_lines['telnet']
'        telnet:        on'
>>> shared[ChkConfig].levels_on('crond')
set(['3', '2', '5', '4'])
>>> shared[ChkConfig].levels_off('crond')
set(['1', '0', '6'])
>>> shared[ChkConfig].levels_on('telnet')
set([])
>>> shared[ChkConfig].levels_off('telnet')
set([])
class LevelState(level, state)

Bases: tuple

namedtuple: Represents the state of a particular service level.

property level
property state
is_on(service_name)[source]

Checks if the service is enabled in chkconfig.

Parameters

service_name (str) -- service name

Returns

True if service is enabled, False otherwise

Return type

bool

level_states = None

Dictionary of sets of level numbers, accessed by service name.

Type

dict

levels_off(service_name)[source]

set (str): Returns set of levels where service_name is off.

Raises

KeyError -- Raises exception if service_name is not in Chkconfig.

levels_on(service_name)[source]

set (str): Returns set of level numbers where service_name is on.

Raises

KeyError -- Raises exception if service_name is not in Chkconfig.

parse_content(content)[source]

Main parsing class method which stores all interesting data from the content.

Parameters

content (context.content) -- Parser context content

parsed_lines = None

Dictionary of content lines, accessed by service name.

Type

dict

service_list = None

List of service names in order of appearance.

Type

list

services = None

Dictionary of bools indicating whether each service is enabled, accessed by service name.

Type

dict

Pacemaker configuration - file /var/lib/pacemaker/cib/cib.xml

This parser reads the XML in the Pacemaker configuration file and provides a standard ElementTree interface to it. It also provides a nodes property that lists all the nodes.

Sample input:

<cib crm_feature_set="3.0.9" validate-with="pacemaker-2.3" have-quorum="1" dc-uuid="4">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/>
        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="freeze"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="1" uname="foo"/>
      <node id="2" uname="bar"/>
      <node id="3" uname="baz"/>
    </nodes>
    <resources>
      <clone id="dlm-clone">
      </clone>
    </resources>
    <constraints>
      <rsc_order first="dlm-clone" first-action="start" id="order-dlm-clone-clvmd-clone-mandatory" then="clvmd-clone" then-action="start"/>
      <rsc_colocation id="colocation-clvmd-clone-dlm-clone-INFINITY" rsc="clvmd-clone" score="INFINITY" with-rsc="dlm-clone"/>
    </constraints>
  </configuration>
</cib>

Examples

>>> cib = shared[CIB]
>>> opts = cib.dom.find(".//cluster_property_set[@id='cib-bootstrap-options']")
>>> opts.get('id')
'cib-bootstrap-options'
>>> cib.nodes
['foo', 'bar', 'baz']
class insights.parsers.cib.CIB(context)[source]

Bases: insights.core.XMLParser

Wraps a DOM of cib.xml

self.dom is an instance of ElementTree.

property nodes

Fetch the list of nodes and return their unames as a list.

Cinder configuration - file /etc/cinder/cinder.conf

The Cinder configuration file is a standard ‘.ini’ file and this parser uses the IniConfigFile class to read it.

Sample configuration:

[DEFAULT]
rpc_backend=cinder.openstack.common.rpc.impl_kombu
control_exchange=openstack

osapi_volume_listen=10.22.100.58
osapi_volume_workers=32

api_paste_config=/etc/cinder/api-paste.ini
glance_api_servers=http://10.22.120.50:9292
glance_api_version=2
glance_num_retries=0
glance_api_insecure=False
glance_api_ssl_compression=False

enable_v1_api=True
enable_v2_api=True
storage_availability_zone=nova
default_availability_zone=nova
enabled_backends=tripleo_ceph
nova_catalog_info=compute:Compute Service:publicURL
nova_catalog_admin_info=compute:Compute Service:adminURL

[lvm]
iscsi_helper=lioadm
volume_group=cinder-volumes
iscsi_ip_address=192.168.88.10
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir=/var/lib/cinder/volumes
iscsi_protocol=iscsi
volume_backend_name=lvm

Examples

>>> conf = shared[CinderConf]
>>> conf.sections()
['DEFAULT', 'lvm']
>>> 'lvm' in conf
True
>>> conf.has_option('DEFAULT', 'enabled_backends')
True
>>> conf.get("DEFAULT", "enabled_backends")
"tripleo_ceph"
>>> conf.get("DEFAULT", "glance_api_ssl_compression")
"False"
>>> conf.getboolean("DEFAULT", "glance_api_ssl_compression")
False
>>> conf.getint("DEFAULT", "glance_api_version")
2
class insights.parsers.cinder_conf.CinderConf(context)[source]

Bases: insights.core.IniConfigFile

Cinder configuration parser class, based on the IniConfigFile class.

CinderApiLog - file /var/log/cinder/cinder-api.log

CinderVolumeLog - file /var/log/cinder/volume.log

This is a standard log parser based on the LogFileOutput class.

Sample input:

2015-06-19 07:31:41.020 7947 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._publish_service_capabilities run_periodic_tasks /usr/lib/python2.7/site-packages/cinder/openstack/common/periodic_task.py:178
2015-06-19 07:31:42.220 7947 DEBUG cinder.manager [-] Notifying Schedulers of capabilities ... _publish_service_capabilities /usr/lib/python2.7/site-packages/cinder/manager.py:128
2015-06-19 07:31:47.319 7947 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._report_driver_status run_periodic_tasks /usr/lib/python2.7/site-packages/cinder/openstack/common/periodic_task.py:178
2015-06-19 07:32:53.612 7947 INFO cinder.volume.manager [-] Updating volume status

Examples

>>> logs = shared[CinderVolumeLog]
>>> 'Updating volume status' in logs
True
>>> logs.get('cinder.openstack.common.periodic_task')
['2015-06-19 07:31:41.020 7947 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._publish_service_capabilities run_periodic_tasks /usr/lib/python2.7/site-packages/cinder/openstack/common/periodic_task.py:178',
 '2015-06-19 07:31:47.319 7947 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._report_driver_status run_periodic_tasks /usr/lib/python2.7/site-packages/cinder/openstack/common/periodic_task.py:178']
>>> from datetime import datetime
>>> logs.get_after(datetime(2015, 6, 19, 7, 32, 0))
['2015-06-19 07:32:53.612 7947 INFO cinder.volume.manager [-] Updating volume status']
class insights.parsers.cinder_log.CinderApiLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing the /var/log/cinder/cinder-api.log file.

Note

Please refer to its super-class insights.core.LogFileOutput

class insights.parsers.cinder_log.CinderVolumeLog(context)[source]

Bases: insights.core.LogFileOutput

Provide access to Cinder volume logs using the LogFileOutput parser class.

Note

Please refer to the super-class insights.core.LogFileOutput

CloudInitCustomNetworking - file /etc/cloud/cloud.cfg.d/99-custom-networking.cfg

This module provides parsing for the cloud-init custom networking configuration file. CloudInitCustomNetworking is a parser for /etc/cloud/cloud.cfg.d/99-custom-networking.cfg files.

Typical output is:

network:
  version: 1
  config:
  - type: physical
    name: eth0
    subnets:
      - type: dhcp
      - type: dhcp6

Examples

>>> cloud_init_custom_network_config.data['network']['config'][0]['name']
'eth0'
>>> cloud_init_custom_network_config.data['network']['config'][0]['subnets'][0]['type'] == 'dhcp'
True
>>> cloud_init_custom_network_config.data['network']['config'][0]['subnets'][1]['type'] == 'dhcp6'
True
class insights.parsers.cloud_init_custom_network.CloudInitCustomNetworking(context)[source]

Bases: insights.core.YAMLParser

Class for parsing the content of /etc/cloud/cloud.cfg.d/99-custom-networking.cfg.

CloudInitLog - file /var/log/cloud-init.log

class insights.parsers.cloud_init_log.CloudInitLog(context)[source]

Bases: insights.core.LogFileOutput

Parse the /var/log/cloud-init.log log file.

Note

Please refer to its super-class insights.core.LogFileOutput

Sample input:

2019-08-07 14:33:27,269 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/99-datasource.cfg (quiet=False)
2019-08-07 14:33:27,269 - util.py[DEBUG]: Read 59 bytes from /etc/cloud/cloud.cfg.d/99-datasource.cfg
2019-08-07 14:33:27,269 - util.py[DEBUG]: Attempting to load yaml from string of length 59 with allowed root types (<type 'dict'>,)
2019-08-07 14:33:27,270 - util.py[WARNING]: Failed loading yaml blob. Invalid format at line 1 column 1: "while parsing a block mapping

Examples

>>> "Reading from /etc/cloud/cloud.cfg.d/99-datasource.cfg" in log
True
>>> len(log.get('DEBUG')) == 3
True

ClusterConf - file /etc/cluster/cluster.conf

Stores a filtered set of lines from the cluster config file. Because of the filtering, the content as a whole will not parse as XML. We use an insights.core.LogFileOutput parser class because, sadly, it’s easiest.

class insights.parsers.cluster_conf.ClusterConf(context)[source]

Bases: insights.core.LogFileOutput

Parse the /etc/cluster/cluster.conf file as a list of lines. get can be used to find lines containing one or more keywords. Because of filters used on this file, we cannot parse this as XML.

CmdLine - file /proc/cmdline

This parser reads the /proc/cmdline file, which contains the arguments given to the currently running kernel on boot.

class insights.parsers.cmdline.CmdLine(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

A parser class for parsing the Linux kernel command line as given in /proc/cmdline.

Parsing Logic:

Parses all elements of the command line into a dict where the key is the
element itself and the value is a list of its corresponding values.

If an element doesn't contain "=", its value is set to `True`.

If an element contains "=", its value is set to everything to the right of
the first "=".

Note

For special command line elements that include two “=”, e.g. root=LABEL=/1, “root” will be the key and “LABEL=/1” will be the value in the returned list.

Some parameters (the returned keys) might still be effective even if there is a ‘#’ before them, e.g.: #rhgb. This should be checked by the rule.
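
The parsing logic described above can be sketched roughly as follows (an illustrative simplification, not the actual parser):

```python
def parse_cmdline(cmdline):
    """Parse a kernel command line into a dict of lists, as described above."""
    result = {}
    for element in cmdline.split():
        if "=" in element:
            # Split only on the first '=', so 'root=LABEL=/1' keeps 'LABEL=/1'
            key, value = element.split("=", 1)
        else:
            key, value = element, True
        result.setdefault(key, []).append(value)
    return result

cmd = parse_cmdline(
    "BOOT_IMAGE=/vmlinuz-3.10.0-327.36.3.el7.x86_64 root=/dev/system_vg/Root "
    "ro rd.lvm.lv=system_vg/Root crashkernel=auto rd.lvm.lv=system_vg/Swap "
    "rhgb quiet LANG=en_GB.utf8"
)
print(cmd["rd.lvm.lv"])  # ['system_vg/Root', 'system_vg/Swap']
print(cmd["quiet"])      # [True]
```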

Sample input:

BOOT_IMAGE=/vmlinuz-3.10.0-327.36.3.el7.x86_64 root=/dev/system_vg/Root ro rd.lvm.lv=system_vg/Root crashkernel=auto rd.lvm.lv=system_vg/Swap rhgb quiet LANG=en_GB.utf8

Examples

>>> cmd['BOOT_IMAGE']
['/vmlinuz-3.10.0-327.36.3.el7.x86_64']
>>> cmd['rd.lvm.lv']
['system_vg/Root', 'system_vg/Swap']
>>> 'autofs' in cmd
False
>>> cmd.get('autofs') is None
True
>>> 'quiet' in cmd
True
>>> cmd.get('quiet')
[True]
>>> cmd['crashkernel']
['auto']
data

Parsed boot arguments are stored in this dictionary

Type

dict

cmdline

The raw line of /proc/cmdline

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.

Cobbler modules configuration - file /etc/cobbler/modules.conf

The Cobbler modules configuration lists a set of services, and typically sets the module that provides that service.

Sample input:

[authentication]
module = authn_spacewalk

[authorization]
module = authz_allowall

[dns]
module = manage_bind

[dhcp]
module = manage_isc

Examples

>>> conf = CobblerModulesConf(context_wrap(conf_content))
>>> conf.get('authentication', 'module')
'authn_spacewalk'
>>> conf.get('dhcp', 'module')
'manage_isc'
class insights.parsers.cobbler_modules_conf.CobblerModulesConf(context)[source]

Bases: insights.core.IniConfigFile

This uses the standard IniConfigFile parser class.

Cobbler settings - /etc/cobbler/settings file

The Cobbler settings file is a YAML file and the standard Python yaml library is used to parse it.

Sample input:

kernel_options:
    ksdevice: bootif
    lang: ' '
    text: ~

Examples

>>> cobbler = shared[CobblerSettings]
>>> 'kernel_options' in cobbler.data
True
>>> cobbler.data['kernel_options']['ksdevice']
'bootif'
class insights.parsers.cobbler_settings.CobblerSettings(context)[source]

Bases: insights.core.YAMLParser

Read the /etc/cobbler/settings YAML file.

SystemD service configuration

Systemd service unit files are located in /usr/lib/systemd/system/ or /etc/systemd/, and their content format is similar to a ‘.ini’ file.

Parsers included in this module are:

SystemdDocker - file /usr/lib/systemd/system/docker.service

SystemdLogindConf - file /etc/systemd/logind.conf

SystemdRpcbindSocketConf - unit file rpcbind.socket

SystemdOpenshiftNode - file /usr/lib/systemd/system/atomic-openshift-node.service

SystemdSystemConf - file /etc/systemd/system.conf

SystemdOriginAccounting - file /etc/systemd/system.conf.d/origin-accounting.conf

class insights.parsers.systemd.config.MultiOrderedDict(*args, **kwargs)[source]

Bases: dict

Warning

This class is deprecated.

Class for handling configuration files in which duplicate keys exist

class insights.parsers.systemd.config.SystemdConf(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LegacyItemAccess, insights.core.ConfigParser

Base class for parsing systemd INI-like configuration files

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.systemd.config.SystemdDocker(context, extra_bad_lines=[])[source]

Bases: insights.parsers.systemd.config.SystemdConf

Class for docker service systemd configuration.

Typical content of /usr/lib/systemd/system/docker.service file is:

[Service]
Type=notify
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
ExecStart=/bin/sh -c '/usr/bin/docker-current daemon \
    --authorization-plugin=rhel-push-plugin \
    --exec-opt native.cgroupdriver=systemd \
    $OPTIONS \
    $DOCKER_STORAGE_OPTIONS \
    $DOCKER_NETWORK_OPTIONS \
    $ADD_REGISTRY \
    $BLOCK_REGISTRY \
    $INSECURE_REGISTRY \
    2>&1 | /usr/bin/forward-journald -tag docker'
LimitNOFILE=1048576

Example

>>> docker_service["Service"]["ExecStart"].endswith("-tag docker'")
True
>>> len(docker_service["Service"]["EnvironmentFile"])
3
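
The EnvironmentFile example above relies on repeated keys being collected into a list; the behaviour can be sketched like this (a simplified illustration, not the SystemdConf implementation, and ignoring backslash line continuations):

```python
def parse_unit(lines):
    """Parse 'Key=Value' unit-file lines, collecting repeated keys into lists."""
    sections, current = {}, None
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            current = sections.setdefault(line[1:-1], {})
        elif "=" in line and current is not None:
            key, value = (part.strip() for part in line.split("=", 1))
            if key in current:
                # Promote a repeated key to a list of all its values
                if not isinstance(current[key], list):
                    current[key] = [current[key]]
                current[key].append(value)
            else:
                current[key] = value
    return sections

unit = parse_unit([
    "[Service]",
    "Type=notify",
    "EnvironmentFile=-/etc/sysconfig/docker",
    "EnvironmentFile=-/etc/sysconfig/docker-storage",
    "EnvironmentFile=-/etc/sysconfig/docker-network",
])
print(len(unit["Service"]["EnvironmentFile"]))  # 3
```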
class insights.parsers.systemd.config.SystemdLogindConf(context, extra_bad_lines=[])[source]

Bases: insights.parsers.systemd.config.SystemdConf

Class for systemd logind configuration.

Typical content of the /etc/systemd/logind.conf file is:

[Login]
ReserveVT=6
KillUserProcesses=Yes
RemoveIPC=no

Example

>>> logind_conf["Login"]["ReserveVT"]
'6'
>>> logind_conf["Login"]["KillUserProcesses"]  # 'Yes' turns to 'True'
'True'
>>> logind_conf.get("Login").get("RemoveIPC")  # 'no' turns to 'False'
'False'
class insights.parsers.systemd.config.SystemdOpenshiftNode(context, extra_bad_lines=[])[source]

Bases: insights.parsers.systemd.config.SystemdConf

Class for atomic-openshift-node systemd configuration.

Typical output of /usr/lib/systemd/system/atomic-openshift-node.service file is:

[Service]
Type=notify
RestartSec=5s
OOMScoreAdjust=-999
ExecStartPost=/usr/bin/sleep 10
ExecStartPost=/usr/sbin/sysctl --system

Example

>>> openshift_node_service["Service"]["RestartSec"]
'5s'
>>> len(openshift_node_service["Service"]["ExecStartPost"])
2
class insights.parsers.systemd.config.SystemdOriginAccounting(context, extra_bad_lines=[])[source]

Bases: insights.parsers.systemd.config.SystemdConf

Class for systemd master configuration in the /etc/systemd/system.conf.d/origin-accounting.conf file.

Typical content of the /etc/systemd/system.conf.d/origin-accounting.conf file is:

[Manager]
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes

Example

>>> system_origin_accounting["Manager"]["DefaultCPUAccounting"]
'True'
class insights.parsers.systemd.config.SystemdRpcbindSocketConf(context, extra_bad_lines=[])[source]

Bases: insights.parsers.systemd.config.SystemdConf

Class for systemd configuration for rpcbind.socket unit.

Typical content of the rpcbind.socket unit file is:

[Unit]
Description=RPCbind Server Activation Socket
DefaultDependencies=no
Wants=rpcbind.target
Before=rpcbind.target

[Socket]
ListenStream=/run/rpcbind.sock

# RPC netconfig can't handle ipv6/ipv4 dual sockets
BindIPv6Only=ipv6-only
ListenStream=0.0.0.0:111
ListenDatagram=0.0.0.0:111
ListenStream=[::]:111
ListenDatagram=[::]:111

[Install]
WantedBy=sockets.target

Example

>>> rpcbind_socket["Socket"]["ListenStream"]
['/run/rpcbind.sock', '0.0.0.0:111', '[::]:111']
class insights.parsers.systemd.config.SystemdSystemConf(context, extra_bad_lines=[])[source]

Bases: insights.parsers.systemd.config.SystemdConf

Class for systemd master configuration in the /etc/systemd/system.conf file.

Typical content of the /etc/systemd/system.conf file is:

[Manager]
RuntimeWatchdogSec=0
ShutdownWatchdogSec=10min

Example

>>> system_conf["Manager"]["RuntimeWatchdogSec"]
'0'

Parsers for the Corosync Cluster Engine configurations

Parsers included in this module are:

CoroSyncConfig - file /etc/sysconfig/corosync

CorosyncConf - file /etc/corosync/corosync.conf

class insights.parsers.corosync.CoroSyncConfig(*args, **kwargs)[source]

Bases: insights.core.SysconfigOptions

Warning

This parser is deprecated, please use insights.parsers.sysconfig.CorosyncSysconfig instead.

This parser reads the /etc/sysconfig/corosync file. It uses the SysconfigOptions parser class to convert the file into a dictionary of options. It also provides the options property as a helper to retrieve the COROSYNC_OPTIONS variable.

Sample data:

# Corosync init script configuration file

# COROSYNC_INIT_TIMEOUT specifies number of seconds to wait for corosync
# initialization (default is one minute).
COROSYNC_INIT_TIMEOUT=60

# COROSYNC_OPTIONS specifies options passed to corosync command
# (default is no options).
# See "man corosync" for detailed descriptions of the options.
COROSYNC_OPTIONS=""

Examples

>>> 'COROSYNC_OPTIONS' in csconfig.data
True
>>> csconfig.options
''
property options

The value of the COROSYNC_OPTIONS variable.

Type

(str)

class insights.parsers.corosync.CorosyncConf(context)[source]

Bases: insights.core.ConfigParser

Parse the output of the file /etc/corosync/corosync.conf using the ConfigParser base class. It exposes the corosync configuration through the parsr query interface.

The parameters in the directives are described in the corosync.conf manpage. See man 8 corosync.conf for more info.

Sample content of the file /etc/corosync/corosync.conf

totem {
    version: 2
    secauth: off
    cluster_name: tripleo_cluster
    transport: udpu
    token: 10000
}

nodelist {
    node {
        ring0_addr: overcloud-controller-0
        nodeid: 1
    }

    node {
        ring0_addr: overcloud-controller-1
        nodeid: 2
    }

    node {
        ring0_addr: overcloud-controller-2
        nodeid: 3
    }
}

quorum {
    provider: corosync_votequorum
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}

Example

>>> from insights.parsr.query import first, last
>>> corosync_conf['quorum']['provider'][first].value
'corosync_votequorum'
>>> corosync_conf['totem']['token'][first].value
10000
>>> corosync_conf['nodelist']['node']['nodeid'][last].value
3

CpuVulns - files /sys/devices/system/cpu/vulnerabilities/*

Parser to parse the output of files /sys/devices/system/cpu/vulnerabilities/*

class insights.parsers.cpu_vulns.CpuVulns(context)[source]

Bases: insights.core.Parser

Base class to parse /sys/devices/system/cpu/vulnerabilities/* files, the file content will be stored in a string.

Sample output for files:
/sys/devices/system/cpu/vulnerabilities/spectre_v1::

Mitigation: Load fences

/sys/devices/system/cpu/vulnerabilities/spectre_v2::

Vulnerable: Retpoline without IBPB

/sys/devices/system/cpu/vulnerabilities/meltdown::

Mitigation: PTI

/sys/devices/system/cpu/vulnerabilities/spec_store_bypass::

Mitigation: Speculative Store Bypass disabled

Examples

>>> type(sp_v1)
<class 'insights.parsers.cpu_vulns.CpuVulns'>
>>> type(sp_v1) == type(sp_v2) == type(md) == type(ssb)
True
>>> sp_v1.value
'Mitigation: Load fences'
>>> sp_v2.value
'Vulnerable: Retpoline without IBPB'
>>> md.value
'Mitigation: PTI'
>>> ssb.value
'Mitigation: Speculative Store Bypass disabled'
value

The result parsed

Type

str

Raises

SkipException -- When file content is empty

parse_content(content)[source]

This method must be implemented by classes based on this class.

CpuInfo - file /proc/cpuinfo

This parser reads the content of the /proc/cpuinfo file and parses it into a dictionary of lists, keyed on the left hand column of the cpuinfo output.

The object also provides properties for the standard information about the CPU and motherboard architecture.

Sample input:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 45
model name      : Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
stepping        : 2
microcode       : 1808
cpu MHz         : 2900.000
cache size      : 20480 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
flags           : fpu vme de pse tsc msr pae mce
address sizes   : 40 bits physical, 48 bits virtual
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 45
model name      : Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
stepping        : 2
microcode       : 1808
cpu MHz         : 2900.000
cache size      : 20480 KB
physical id     : 2
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 2
flags           : fpu vme de pse tsc msr pae mce
address sizes   : 40 bits physical, 48 bits virtual
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit

Examples

>>> cpu_info.cpu_count
2
>>> sorted(cpu_info.apicid)
['0', '2']
>>> cpu_info.socket_count
2
>>> cpu_info.vendor
'GenuineIntel'
>>> "fpu" in cpu_info.flags
True
>>> cpu_info.model_name
'Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz'
>>> cpu_info.get_processor_by_index(0)['cpus']
'0'
>>> cpu_info.get_processor_by_index(0)['vendors']
'GenuineIntel'
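
The remapping of /proc/cpuinfo stanzas into per-CPU lists can be sketched as follows (an illustrative simplification of the parser):

```python
def parse_cpuinfo(content):
    """Collect each 'key : value' line into a list, one element per processor."""
    data = {}
    for line in content.splitlines():
        if ":" not in line:
            continue  # blank separator between processor stanzas
        key, value = (part.strip() for part in line.split(":", 1))
        data.setdefault(key, []).append(value)
    return data

sample = """\
processor       : 0
vendor_id       : GenuineIntel
physical id     : 0

processor       : 1
vendor_id       : GenuineIntel
physical id     : 2
"""

info = parse_cpuinfo(sample)
print(len(info["processor"]))            # 2 CPUs
print(sorted(set(info["physical id"])))  # ['0', '2'] -> 2 sockets
```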
class insights.parsers.cpuinfo.CpuInfo(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

CpuInfo parser - able to be used as a dictionary through the LegacyItemAccess mixin class.

The following items are remapped into lists, with the element number corresponding to the CPU. For example, given the following input:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 45
model name      : Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
stepping        : 2
microcode       : 1808
cpu MHz         : 2900.000
cache size      : 20480 KB
physical id     : 2
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 2
address sizes   : 40 bits physical, 48 bits virtual
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit

The following keys would be lists of:

  • cpus - the processor line (e.g. 1)

  • sockets - the physical id line (e.g. 2)

  • vendors - the vendor_id line (e.g. GenuineIntel)

  • models - the model name line (e.g. Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz)

  • model_ids - the model line (e.g. 45)

  • families - the cpu family line (e.g. 6)

  • clockspeeds - the cpu MHz line (e.g. 2900.000)

  • cache_sizes - the cache size line (e.g. 20480 KB)

  • cpu_cores - the cpu cores line (e.g. 1)

  • apicid - the apicid line (e.g. 2)

  • stepping - the stepping line (e.g. 2)

  • address_sizes - the address sizes line (e.g. 40 bits physical, 48 bits virtual)

  • bugs - the bugs line (e.g. cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit)

property apicid

Returns the apicid of the processor.

Type

str

property cache_size

Returns the cache size of the first CPU.

Type

str

property core_total

Returns the total number of cores for the server if available, else None.

Type

str

property cpu_count

Returns the number of CPUs.

Type

str

property cpu_speed

Returns the CPU speed of the first CPU.

Type

str

property flags

Returns a list of feature flags for the first CPU.

Type

list

get_processor_by_index(index)[source]

Construct a dictionary of the information stored for the given CPU.

Parameters

index (int) -- The CPU index to retrieve.

Returns

A dictionary of the information for that CPU.

Return type

dict

property model_name

Returns the model name of the first CPU.

Type

str

property model_number

Returns the model ID of the first CPU.

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.

property socket_count

Returns the number of sockets. This is distinct from the number of CPUs.

Type

str

property vendor

Returns the vendor of the first CPU.

Type

str

CpupowerFrequencyInfo - Commands cpupower -c all frequency-info

class insights.parsers.cpupower_frequency_info.CpupowerFrequencyInfo(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, dict

Class for parsing the output of cpupower -c all frequency-info command.

Typical output of the command is:

analyzing CPU 0:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 0
  CPUs which need to have their frequency coordinated by software: 0
  maximum transition latency:  Cannot determine or is not supported.
  hardware limits: 800 MHz - 3.00 GHz
  available cpufreq governors: performance powersave
  current policy: frequency should be within 800 MHz and 3.00 GHz.
                  The governor "performance" may decide which speed to use
                  within this range.
  current CPU frequency: Unable to call hardware
  current CPU frequency: 1.22 GHz (asserted by call to kernel)
  boost state support:
    Supported: yes
    Active: yes
    2700 MHz max turbo 4 active cores
    2800 MHz max turbo 3 active cores
    2900 MHz max turbo 2 active cores
    3000 MHz max turbo 1 active cores
analyzing CPU 1:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 1
  CPUs which need to have their frequency coordinated by software: 1
  maximum transition latency:  Cannot determine or is not supported.
  hardware limits: 800 MHz - 3.00 GHz
  available cpufreq governors: performance powersave
  current policy: frequency should be within 800 MHz and 3.00 GHz.
                  The governor "performance" may decide which speed to use
                  within this range.
  current CPU frequency: Unable to call hardware
  current CPU frequency: 871 MHz (asserted by call to kernel)
  boost state support:
    Supported: yes
    Active: yes

Examples

>>> type(cpupower_frequency_info)
<class 'insights.parsers.cpupower_frequency_info.CpupowerFrequencyInfo'>
>>> cpupower_frequency_info['analyzing CPU 0']['driver']
'intel_pstate'
>>> cpupower_frequency_info['analyzing CPU 0']['boost state support']['Supported']
'yes'
>>> cpupower_frequency_info['analyzing CPU 0']['boost state support']['Active']
'yes'
>>> cpupower_frequency_info['analyzing CPU 1']['current policy']
'frequency should be within 800 MHz and 3.00 GHz. The governor "performance" may decide which speed to use within this range.'
>>> cpupower_frequency_info['analyzing CPU 0']['boost state support']['2700 MHz max turbo 4 active cores']
True
parse_content(content)[source]

This method must be implemented by classes based on this class.

CpuSetCpus - File /sys/fs/cgroup/cpuset/cpuset.cpus

This parser reads the content of /sys/fs/cgroup/cpuset/cpuset.cpus. This file shows the default cgroup cpuset.cpus of the system. The content is a single string of comma-separated CPU numbers and ranges.

class insights.parsers.cpuset_cpus.CpusetCpus(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class CpusetCpus parses the content of the /sys/fs/cgroup/cpuset/cpuset.cpus.

cpu_set

The list of allowed CPUs.

Type

list

cpu_number

The number of allowed CPUs.

Type

int

A small sample of the content of this file looks like:

0,2-4,7

Examples

>>> type(cpusetinfo)
<class 'insights.parsers.cpuset_cpus.CpusetCpus'>
>>> cpusetinfo.cpu_set
["0", "2", "3", "4", "7"]
>>> cpusetinfo.cpu_number
5
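
Expanding a cpuset string such as 0,2-4,7 into the list of allowed CPUs can be sketched as follows (an illustrative helper, not the parser itself):

```python
def expand_cpuset(spec):
    """Expand a cpuset string like '0,2-4,7' into a list of CPU id strings."""
    cpus = []
    for part in spec.strip().split(","):
        if "-" in part:
            # A range such as '2-4' is inclusive on both ends
            start, end = (int(n) for n in part.split("-", 1))
            cpus.extend(str(n) for n in range(start, end + 1))
        else:
            cpus.append(part)
    return cpus

cpu_set = expand_cpuset("0,2-4,7")
print(cpu_set)       # ['0', '2', '3', '4', '7']
print(len(cpu_set))  # 5
```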
parse_content(content)[source]

This method must be implemented by classes based on this class.

Crontab listings

CrontabL base class

class insights.parsers.crontab.CrontabL(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parses output of crontab -l command.

Each row of the crontab is converted into a dictionary with keys for each field. For example one row would look like:

{
  'minute': '*',
  'hour': '*',
  'day_of_month': '*',
  'month': '*',
  'day_of_week': '*',
  'command': '/usr/bin/keystone-manage token_flush > /dev/null 2>&1'
}

Crontab parses the line in the same way that cron(1) does. Lines that are blank or start with a comment are ignored. Environment lines of the form ‘KEY = value’ (with optional spacing around the equals sign) are stored in the ‘environment’ dictionary attribute by key. Lines containing a valid crontab entry, with five recognisable time fields and a command, are stored in the data property and accessed through the pseudo-list interface and search method. All other lines are stored in the invalid_lines property.

Crontab also recognises the time signature ‘nicknames’, which take the place of the first five parts of a standard crontab line:

  • @reboot : Run once after reboot.

  • @yearly : Run once a year, ie. “0 0 1 1 *”.

  • @annually : Run once a year, ie. “0 0 1 1 *”.

  • @monthly : Run once a month, ie. “0 0 1 * *”.

  • @weekly : Run once a week, ie. “0 0 * * 0”.

  • @daily : Run once a day, ie. “0 0 * * *”.

  • @hourly : Run once an hour, ie. “0 * * * *”.

The Crontab class recognises these nicknames. In the case of the ‘@reboot’ nickname, the row will not contain the ‘minute’, ‘hour’, ‘day_of_month’, ‘month’, or ‘day_of_week’ keys; instead it will contain the key ‘time’ with the value ‘@reboot’ (as well as the usual ‘command’ key). All other nicknames are translated directly into their five-part equivalent and parsed as a normal crontab line.

Lines that can’t be parsed, because they don’t contain at least six words or don’t meet the criteria for environment lines or time signature nicknames, are stored in the invalid_lines property.
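
The nickname translation described above can be sketched as follows (a simplified stand-in for the parser's handling):

```python
NICKNAMES = {
    "@yearly":   "0 0 1 1 *",
    "@annually": "0 0 1 1 *",
    "@monthly":  "0 0 1 * *",
    "@weekly":   "0 0 * * 0",
    "@daily":    "0 0 * * *",
    "@hourly":   "0 * * * *",
}
FIELDS = ["minute", "hour", "day_of_month", "month", "day_of_week"]

def parse_cron_line(line):
    """Parse one crontab line into a dict, translating nicknames first."""
    if line.startswith("@reboot"):
        # @reboot has no five-field equivalent: keep it as a 'time' key
        return {"time": "@reboot", "command": line.split(None, 1)[1]}
    for nick, expansion in NICKNAMES.items():
        if line.startswith(nick):
            line = expansion + line[len(nick):]
            break
    parts = line.split(None, 5)
    return dict(zip(FIELDS, parts[:5]), command=parts[5])

row = parse_cron_line("@daily /bin/heat-manage purge_deleted -g days 7")
print(row["minute"], row["hour"])  # 0 0
print(row["command"])              # /bin/heat-manage purge_deleted -g days 7
```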

Sample input looks like:

# send mail to admin address.
MAILTO=admin
* * * * * /usr/bin/keystone-manage token_flush > /dev/null 2>&1
33 0 * * * /bin/heat-manage purge_deleted -g days 7

Examples

>>> crontab = shared[KeystoneCrontab]
>>> crontab.search('keystone')
[{'minute': '*', 'hour': '*', 'day_of_month': '*', 'month': '*', 'day_of_week': '*',
  'command': '/usr/bin/keystone-manage token_flush > /dev/null 2>&1'}]
>>> [r['minute'] for r in crontab]
['*', '33']
>>> crontab.invalid_lines
[]
>>> len(crontab)  # Number of actual entries
2
>>> crontab[0]  # Individual entry access
{'minute': '*', 'hour': '*', 'day_of_month': '*', 'month': '*', 'day_of_week': '*',
  'command': '/usr/bin/keystone-manage token_flush > /dev/null 2>&1'}
>>> 'MAILTO' in crontab.environment  # Dictionary of environment settings
True
>>> crontab.environment['MAILTO']
'admin'
data

List of parsed lines. These can be accessed directly through the object as seen in the examples.

Type

list

environment

A dictionary of environment declarations in the crontab.

Type

dict

invalid_lines

Lines that could not be parsed as normal crontab entries.

Type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.

search(filter_str)[source]

list: Returns list of dicts for lines that have filter_str in the command.

HeatCrontab - command crontab -l -u heat

class insights.parsers.crontab.HeatCrontab(context, extra_bad_lines=[])[source]

Bases: insights.parsers.crontab.CrontabL

Parses output of the crontab -l -u heat command.

KeystoneCrontab - command crontab -l -u keystone

class insights.parsers.crontab.KeystoneCrontab(context, extra_bad_lines=[])[source]

Bases: insights.parsers.crontab.CrontabL

Parses output of the crontab -l -u keystone command.

RootCrontab - command crontab -l -u root

class insights.parsers.crontab.RootCrontab(context, extra_bad_lines=[])[source]

Bases: insights.parsers.crontab.CrontabL

Parses output of the crontab -l -u root command.

crypto-policies - files in /etc/crypto-policies/back-ends/

This is a collection of parsers that all deal with the generated configuration files under the /etc/crypto-policies/back-ends/ folder. Parsers included in this module are:

CryptoPoliciesConfig - file /etc/crypto-policies/config

CryptoPoliciesStateCurrent - file /etc/crypto-policies/state/current

CryptoPoliciesOpensshserver - file /etc/crypto-policies/back-ends/opensshserver.config

CryptoPoliciesBind - file /etc/crypto-policies/back-ends/bind.config

class insights.parsers.crypto_policies.CryptoPoliciesBind(context)[source]

Bases: insights.core.Parser

This parser reads the /etc/crypto-policies/back-ends/bind.config file. The sections disable-algorithms and disable-ds-digests are in the properties disable_algorithms and disable_ds_digests.

Sample Input:

disable-algorithms "." {
RSAMD5;
DSA;
};
disable-ds-digests "." {
GOST;
};

Examples

>>> 'GOST' in cp_bind.disable_ds_digests
True
>>> cp_bind.disable_algorithms
['RSAMD5', 'DSA']
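
Extracting the quoted entries from a section such as disable-algorithms can be sketched like this (an illustrative reimplementation, not the parser source):

```python
def parse_bind_config(content):
    """Collect the ';'-terminated entries inside each '{ ... };' block."""
    sections, current = {}, None
    for line in content.splitlines():
        line = line.strip()
        if line.endswith("{"):
            # e.g. 'disable-algorithms "." {' -> key 'disable-algorithms'
            current = sections.setdefault(line.split()[0], [])
        elif line.startswith("}"):
            current = None
        elif current is not None and line.endswith(";"):
            current.append(line.rstrip(";"))
    return sections

conf = parse_bind_config(
    'disable-algorithms "." {\nRSAMD5;\nDSA;\n};\n'
    'disable-ds-digests "." {\nGOST;\n};\n'
)
print(conf["disable-algorithms"])            # ['RSAMD5', 'DSA']
print("GOST" in conf["disable-ds-digests"])  # True
```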
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.crypto_policies.CryptoPoliciesConfig(context)[source]

Bases: insights.core.Parser

This parser reads the /etc/crypto-policies/config file. The contents of the file is a single-line value, available in the value property.

Sample Input:

LEGACY

Examples

>>> cp_c.value
'LEGACY'
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.crypto_policies.CryptoPoliciesOpensshserver(context)[source]

Bases: insights.core.SysconfigOptions

This parser reads the /etc/crypto-policies/back-ends/opensshserver.config file. It uses the SysconfigOptions parser class to convert the file into a dictionary of options. It also provides the options property as a helper to retrieve the CRYPTO_POLICY variable.

Sample Input:

CRYPTO_POLICY='-oCiphers=aes256-gcm@openssh.com,3des-cbc -oMACs=umac-128-etm@openssh.com'

Examples

>>> 'CRYPTO_POLICY' in cp_os
True
>>> cp_os.options
'-oCiphers=aes256-gcm@openssh.com,3des-cbc -oMACs=umac-128-etm@openssh.com'
property options

The value of the CRYPTO_POLICY variable if it exists, else None.

Type

(union[str, None])

class insights.parsers.crypto_policies.CryptoPoliciesStateCurrent(context)[source]

Bases: insights.core.Parser

This parser reads the /etc/crypto-policies/state/current file. The contents of the file is a single-line value, available in the value property.

Sample Input:

LEGACY

Examples

>>> cp_sc.value
'LEGACY'
parse_content(content)[source]

This method must be implemented by classes based on this class.

CurrentClockSource - file /sys/devices/system/clocksource/clocksource0/current_clocksource

This is a relatively simple parser that reads the /sys/devices/system/clocksource/clocksource0/current_clocksource file. As well as reporting the contents of the file in its data property, it also provides three properties that are true if the clock source is set to that value:

  • is_kvm - the clock source file contains ‘kvm-clock’

  • is_tsc - the clock source file contains ‘tsc’

  • is_vmi_timer - the clock source file contains ‘vmi-timer’

Examples

>>> cs = shared[CurrentClockSource]
>>> cs.data
'tsc'
>>> cs.is_tsc
True
class insights.parsers.current_clocksource.CurrentClockSource(context)[source]

Bases: insights.core.Parser

The CurrentClockSource parser class.

data

the content of the current_clocksource file.

Type

str

property is_kvm

does the clock source contain ‘kvm-clock’?

Type

bool

property is_tsc

does the clock source contain ‘tsc’?

Type

bool

property is_vmi_timer

does the clock source contain ‘vmi-timer’?

Type

bool

parse_content(content)[source]

This method must be implemented by classes based on this class.

Date parsers

This module provides processing for the output of the date command in various formats.

Date - command date

Class Date parses the output of the date command. Sample output of this command looks like:

Fri Jun 24 09:13:34 CST 2016

DateUTC - command date --utc

Class DateUTC parses the output of the date --utc command. Output is similar to the date command except that the Timezone column uses UTC.

All classes utilize the same base class DateParser so the following examples apply to all classes in this module.

Examples

>>> from insights.parsers.date import Date, DateUTC
>>> from insights.tests import context_wrap
>>> date_content = "Mon May 30 10:49:14 CST 2016"
>>> shared = {Date: Date(context_wrap(date_content))}
>>> date_info = shared[Date]
>>> date_info.data
'Mon May 30 10:49:14 CST 2016'
>>> date_info.datetime is not None
True
>>> date_info.timezone
'CST'
>>> date_content = "Mon May 30 10:49:14 UTC 2016"
>>> shared = {DateUTC: DateUTC(context_wrap(date_content))}
>>> date_info = shared[DateUTC]
>>> date_info.data
'Mon May 30 10:49:14 UTC 2016'
>>> date_info.datetime
datetime.datetime(2016, 5, 30, 10, 49, 14)
>>> date_info.timezone
'UTC'
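
Splitting out the timezone and building the datetime, as the examples show, can be sketched with strptime (a simplified illustration; DateParser itself reports failures via DateParseException):

```python
from datetime import datetime

def parse_date(output):
    """Parse 'Fri Jun 24 09:13:34 CST 2016'-style output into (datetime, tz)."""
    parts = output.split()
    timezone = parts[4]
    # Drop the timezone field so strptime can handle the remaining fields
    stripped = " ".join(parts[:4] + parts[5:])
    return datetime.strptime(stripped, "%a %b %d %H:%M:%S %Y"), timezone

dt, tz = parse_date("Mon May 30 10:49:14 UTC 2016")
print(dt)  # 2016-05-30 10:49:14
print(tz)  # UTC
```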
class insights.parsers.date.Date(context, extra_bad_lines=[])[source]

Bases: insights.parsers.date.DateParser

Class to parse date command output.

Sample: Fri Jun 24 09:13:34 CST 2016

exception insights.parsers.date.DateParseException[source]

Bases: Exception

class insights.parsers.date.DateParser(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Base class implementing shared code.

parse_content(content)[source]

Parses the output of the date and date --utc command.

Sample: Fri Jun 24 09:13:34 CST 2016 Sample: Fri Jun 24 09:13:34 UTC 2016

datetime

A native datetime.datetime of the parsed date string

Type

datetime.datetime

timezone

The string portion of the date string containing the timezone

Type

str

Raises

DateParseException: Raised if any exception occurs parsing the content.

class insights.parsers.date.DateUTC(context, extra_bad_lines=[])[source]

Bases: insights.parsers.date.DateParser

Class to parse date --utc command output.

Sample: Fri Jun 24 09:13:34 UTC 2016

IBM DB2 Sever details

Module for the processing of output from the db2licm -l command.

class insights.parsers.db2licm.DB2Info(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, dict

This parser processes the output of the command db2licm -l and provides the information as a dictionary.

Sample input:

Product name:                     "DB2 Enterprise Server Edition"
License type:                     "CPU Option"
Expiry date:                      "Permanent"
Product identifier:               "db2ese"
Version information:              "9.7"
Enforcement policy:               "Soft Stop"
Features:
DB2 Performance Optimization ESE: "Not licensed"
DB2 Storage Optimization:         "Not licensed"
DB2 Advanced Access Control:      "Not licensed"
IBM Homogeneous Replication ESE:  "Not licensed"

Product name:                     "DB2 Connect Server"
Expiry date:                      "Expired"
Product identifier:               "db2consv"
Version information:              "9.7"
Concurrent connect user policy:   "Disabled"
Enforcement policy:               "Soft Stop"

Example

>>> list(parser_result.keys())
['DB2 Enterprise Server Edition', 'DB2 Connect Server']
>>> parser_result['DB2 Enterprise Server Edition']["Version information"]
'9.7'

Override the base class parse_content to parse the output of the db2licm -l command. Information that is stored in the object is made available to the rule plugins.

Raises

ParseException -- raised if data is not parsable.
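The grouping strategy implied by the sample above can be sketched like this. It is a simplified illustration (not the real parser): products are separated by blank lines, each `key: value` line is collected, and each block is keyed in the result by its quoted "Product name".

```python
def parse_db2licm(lines):
    """Group `db2licm -l` output into one dict per product, keyed by
    product name. Sketch only; the real parser also validates content
    and raises ParseException on unparsable data."""
    result, current = {}, {}
    for line in lines:
        line = line.strip()
        if not line:
            current = {}               # blank line starts a new product block
            continue
        if ":" in line:
            key, _, value = line.partition(":")
            value = value.strip().strip('"')
            current[key.strip()] = value
            if key.strip() == "Product name":
                result[value] = current
    return result
```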

parse_content(content)[source]

This method must be implemented by classes based on this class.

Dcbtool - Command /sbin/dcbtool gc {interface} dcb

Parses the output of the dcbtool gc eth1 dcb command to check whether DCBX is enabled.

Successful completion of the command returns data similar to:

Command:    Get Config
Feature:    DCB State
Port:       eth0
Status:     Off
DCBX Version: FORCED CIN

The keys in this data are converted to lower case and spaces are converted to underscores.

An is_on attribute is also provided to indicate if the status is ‘On’.

Examples:

>>> dcb = shared[Dcbtool]
>>> dcb.data
{
    "command": "Get Config",
    "feature": "DCB State",
    "port": "eth0",
    "status": "Off",
    "dcbx_version": "FORCED CIN"
}
>>> dcb['port']
'eth0'
>>> dcb['status']
'Off'
>>> dcb.is_on
False

If a “Connection refused” error is encountered, an empty dictionary is returned.
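The key normalization described above (lower case, spaces to underscores) and the "Connection refused" handling can be sketched as follows. This is an illustrative simplification, not the actual parser code.

```python
def normalize_dcb(lines):
    """Convert `dcbtool gc <iface> dcb` output into a dict with
    lower-cased, underscore-joined keys, as described above.
    Returns {} if the command output reports 'Connection refused'."""
    data = {}
    for line in lines:
        if "Connection refused" in line:
            return {}
        if ":" in line:
            key, _, value = line.partition(":")
            data[key.strip().lower().replace(" ", "_")] = value.strip()
    return data

dcb = normalize_dcb(["Command:    Get Config", "DCBX Version: FORCED CIN"])
# dcb == {'command': 'Get Config', 'dcbx_version': 'FORCED CIN'}
```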

class insights.parsers.dcbtool_gc_dcb.Dcbtool(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

Parse the output of the dcbtool command.

If the command output contains ‘Connection refused’, no data is stored. The LegacyItemAccess mixin class is used to provide direct access to the data.

data

A dictionary of the content of the command output.

Type

dict

is_on

(bool): Is the status of the interface ‘On’?

parse_content(content)[source]

This method must be implemented by classes based on this class.

Disk free space commands

Module for the processing of output from the df command. The base class DiskFree provides all of the functionality for all classes. Data is available as rows of the output contained in one Record object for each line of output.

Sample input data for the df -li command looks like:

Filesystem        Inodes  IUsed     IFree IUse% Mounted on
/dev/mapper/vg_lxcrhel6sat56-lv_root
                 6275072 124955   6150117    2% /
devtmpfs         1497120    532   1496588    1% /dev
tmpfs            1499684    331   1499353    1% /dev/shm
tmpfs            1499684    728   1498956    1% /run
tmpfs            1499684     16   1499668    1% /sys/fs/cgroup
tmpfs            1499684     54   1499630    1% /tmp
/dev/sda2      106954752 298662 106656090    1% /home
/dev/sda1         128016    429    127587    1% /boot
tmpfs            1499684      6   1499678    1% /V M T o o l s
tmpfs            1499684     15   1499669    1% /VM Tools

This module provides three parsers:

DiskFree_LI - command df -li

DiskFree_ALP - command df -alP

DiskFree_AL - command df -al

This example demonstrates the DiskFree_LI class but all classes will provide the same functionality.

Examples

>>> df_info = shared[DiskFree_LI]
>>> df_info.filesystem_names
['tmpfs', '/dev/mapper/vg_lxcrhel6sat56-lv_root', 'devtmpfs', '/dev/sda2', '/dev/sda1']
>>> df_info.get_filesystem('/dev/sda2')
[Record(filesystem='/dev/sda2', total='106954752', used='298662', available='106656090', capacity='1%', mounted_on='/home')]
>>> df_info.mount_names
['/tmp', '/home', '/dev', '/boot', '/VM Tools', '/sys/fs/cgroup', '/', '/run', '/V M T o o l s', '/dev/shm']
>>> df_info.get_mount('/boot')
Record(filesystem='/dev/sda1', total='128016', used='429', available='127587', capacity='1%', mounted_on='/boot')
>>> len(df_info)
10
>>> [d.mounted_on for d in df_info if 'sda' in d.filesystem]
['/home', '/boot']
>>> df_info.data[0].filesystem
'/dev/mapper/vg_lxcrhel6sat56-lv_root'
>>> df_info.data[0]
Record(filesystem='/dev/mapper/vg_lxcrhel6sat56-lv_root', total='6275072', used='124955', available='6150117', capacity='2%', mounted_on='/')
class insights.parsers.df.DiskFree(context)[source]

Bases: insights.core.CommandParser

Class to provide methods used by all df command classes.

data

List of Record objects for each line of command output.

Type

list of Record

filesystems

Dictionary with each entry being a list of Record objects, for all lines in the command output. The dictionary is keyed by the filesystem value of the Record.

Type

dict of list

mounts

Dictionary with each entry being a Record object corresponding to the mounted_on key.

Type

dict

property filesystem_names

Returns list of unique filesystem names.

Type

list

get_dir(path)[source]

Record: returns the Record object that contains the given path.

This finds the most specific mount path that contains the given path.
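The "most specific mount path" lookup can be sketched as choosing the longest mount point that is a prefix of the given path on a directory boundary. This is an illustrative sketch of the behaviour, not the method's actual implementation:

```python
def most_specific_mount(mounts, path):
    """Return the mount point from `mounts` that most specifically
    contains `path`: the longest mount that either equals `path` or is
    a prefix of it ending at a directory separator."""
    best = None
    for mount in mounts:
        if path == mount or path.startswith(mount.rstrip("/") + "/"):
            if best is None or len(mount) > len(best):
                best = mount
    return best

most_specific_mount(["/", "/home", "/boot"], "/home/user/file")
# -> '/home'
```

Matching on a directory boundary matters: `/homework` is contained in `/`, not in `/home`.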

get_filesystem(name)[source]

list: Returns a list of Record objects for filesystem name.

get_mount(name)[source]

Record: Returns Record obj for mount point name.

property mount_names

Returns list of unique mount point names.

Type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.df.DiskFree_AL(context)[source]

Bases: insights.parsers.df.DiskFree

Parse lines from the output of the df -al command.

Typical content of the df -al command looks like:

Filesystem                             1K-blocks      Used Available     Use% Mounted on
/dev/mapper/vg_lxcrhel6sat56-lv_root    98571884   4244032  89313940       5% /
sysfs                                          0         0         0        - /sys
proc                                           0         0         0        - /proc
devtmpfs                                 5988480         0   5988480       0% /dev
securityfs                                     0         0         0        - /sys/kernel/security
tmpfs                                    5998736    491660   5507076       9% /dev/shm
devpts                                         0         0         0        - /dev/pts
tmpfs                                    5998736      1380   5997356       1% /run
tmpfs                                    5998736         0   5998736       0% /sys/fs/cgroup
cgroup                                         0         0         0        - /sys/fs/cgroup/systemd
data

A list of the df information with one Record object for each line of command output. Mapping of input columns to output fields is:

Input column   Output Field
------------   ------------
Filesystem     filesystem
1K-blocks      total
Used           used
Available      available
Use%           capacity
Mounted on     mounted_on
Type

list

class insights.parsers.df.DiskFree_ALP(context)[source]

Bases: insights.parsers.df.DiskFree

Parse lines from the output of the df -alP command.

Typical content of the df -alP command looks like:

Filesystem                           1024-blocks      Used Available Capacity Mounted on
/dev/mapper/vg_lxcrhel6sat56-lv_root    98571884   4244032  89313940       5% /
sysfs                                          0         0         0        - /sys
proc                                           0         0         0        - /proc
devtmpfs                                 5988480         0   5988480       0% /dev
securityfs                                     0         0         0        - /sys/kernel/security
tmpfs                                    5998736    491660   5507076       9% /dev/shm
devpts                                         0         0         0        - /dev/pts
tmpfs                                    5998736      1380   5997356       1% /run
tmpfs                                    5998736         0   5998736       0% /sys/fs/cgroup
cgroup                                         0         0         0        - /sys/fs/cgroup/systemd
data

A list of the df information with one Record object for each line of command output. Mapping of input columns to output fields is:

Input column   Output Field
------------   ------------
Filesystem     filesystem
1024-blocks    total
Used           used
Available      available
Capacity       capacity
Mounted on     mounted_on
Type

list

class insights.parsers.df.DiskFree_LI(context)[source]

Bases: insights.parsers.df.DiskFree

Parse lines from the output of the df -li command.

Typical content of the df -li command output looks like:

Filesystem        Inodes  IUsed     IFree IUse% Mounted on
/dev/mapper/vg_lxcrhel6sat56-lv_root
                 6275072 124955 6150117    2% /
devtmpfs         1497120    532   1496588    1% /dev
tmpfs            1499684    331   1499353    1% /dev/shm
tmpfs            1499684    728   1498956    1% /tmp
/dev/sda2      106954752 298662 106656090    1% /home
/dev/sda1         128016    429    127587    1% /boot
tmpfs            1499684      6   1499678    1% /run/user/988
tmpfs            1499684     15   1499669    1% /run/user/100
data

A list of the df information with one Record object for each line of command output. Mapping of input columns to output fields is:

Input column   Output Field
------------   ------------
Filesystem     filesystem
Inodes         total
IUsed          used
IFree          available
IUse%          capacity
Mounted on     mounted_on
Type

list

class insights.parsers.df.Record(filesystem, total, used, available, capacity, mounted_on)

Bases: tuple

namedtuple: Represents the information parsed from df command output.

property available
property capacity
property filesystem
property mounted_on
property total
property used
insights.parsers.df.parse_df_lines(df_content)[source]

Parse contents of each line in df output.

Parse each line of df output ensuring that wrapped lines are reassembled prior to parsing, and that mount names containing spaces are maintained.

Parameters

df_content (list) -- Lines of df output to be parsed.

Returns

A list of Record namedtuples, one for each line of the df output, with columns as the key values. The fields of Record provide information about the file system attributes as determined by the arguments to the df command. So, for example, if df is given the -alP option, the values are in terms of 1024-byte blocks; if -li is given, then the values are in terms of inodes:

- filesystem: Name of the filesystem
- total: total number of resources on the filesystem
- used: number of the resources used on the filesystem
- available: number of the resource available on the filesystem
- capacity: percentage of the resource used on the filesystem
- mounted_on: mount point of the filesystem

Return type

list
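The wrapped-line reassembly mentioned above can be sketched as follows: a line consisting of a single token (a long filesystem name that df wrapped) is held back and joined onto the following data line, and everything after the fifth column is rejoined so mount names containing spaces survive. This is a simplified sketch of the strategy, not the function's actual code.

```python
from collections import namedtuple

Record = namedtuple(
    "Record",
    ["filesystem", "total", "used", "available", "capacity", "mounted_on"],
)

def parse_df_lines_sketch(df_content):
    """Reassemble wrapped df lines and build Record tuples.
    Sketch only: assumes the first line is the header."""
    records, pending = [], None
    for line in df_content[1:]:            # skip the header line
        fields = line.split()
        if len(fields) == 1:               # wrapped: filesystem name alone
            pending = fields[0]
            continue
        if pending is not None:
            fields = [pending] + fields
            pending = None
        # everything after the fifth column is the mount point
        records.append(Record(*fields[:5], mounted_on=" ".join(fields[5:])))
    return records
```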

389 Directory Server logs

DirSrvAccessLog - files var/log/dirsrv/.*/access

DirSrvErrorsLog - files var/log/dirsrv/.*/errors

Note

Please refer to the super-class insights.core.LogFileOutput

class insights.parsers.dirsrv_logs.DirSrvAccessLog(context)[source]

Bases: insights.core.LogFileOutput

The access log file from all directories in /var/log/dirsrv/

This uses the standard LogFileOutput parser class for its implementation.

Note: This parser class gets the access file from all directories in /var/log/dirsrv. Therefore, you will use this parser as a list in the shared parsers dictionary. Iterate through and check the file name for the one you want, or scan all logs.

Sample input:

    389-Directory/1.2.11.15 B2014.300.2010
    ldap.example.com:636 (/etc/dirsrv/slapd-EXAMPLE-COM)

[27/Apr/2015:13:16:35 -0400] conn=1174478 fd=131 slot=131 connection from 10.20.10.106 to 10.20.62.16
[27/Apr/2015:13:16:35 -0400] conn=1174478 op=-1 fd=131 closed - B1
[27/Apr/2015:13:16:35 -0400] conn=324375 op=606903 SRCH base="cn=users,cn=accounts,dc=example,dc=com" scope=2 filter="(uid=root)" attrs=ALL
[27/Apr/2015:13:16:35 -0400] conn=324375 op=606903 RESULT err=0 tag=101 nentries=1 etime=0
[27/Apr/2015:13:16:35 -0400] conn=324375 op=606904 SRCH base="cn=groups,cn=accounts,dc=example,dc=com" scope=2 filter="(|(member=uid=root,cn=users,cn=accounts,dc=example,dc=com)(uniqueMember=uid=root,cn=users,cn=accounts,dc=example,dc=com)(memberUid=root))" attrs="cn"
[27/Apr/2015:13:16:35 -0400] conn=324375 op=606904 RESULT err=0 tag=101 nentries=8 etime=0
[27/Apr/2015:13:16:40 -0400] conn=1174483 fd=131 slot=131 connection from 10.20.130.21 to 10.20.62.16
[27/Apr/2015:13:16:40 -0400] conn=1174483 op=0 SRCH base="" scope=0 filter="(objectClass=*)" attrs="* altServer namingContexts aci"
[27/Apr/2015:13:16:40 -0400] conn=1174483 op=0 RESULT err=0 tag=101 nentries=1 etime=0

Examples

>>> for access_log in shared[DirSrvAccessLog]:
...     print("Path:", access_log.path)
...     connects = access_log.get('connection from')
...     print("Connection lines:", len(connects))
...
Path: /var/log/dirsrv/slapd-EXAMPLE-COM/access
Connection lines: 2
class insights.parsers.dirsrv_logs.DirSrvErrorsLog(context)[source]

Bases: insights.core.LogFileOutput

The errors log file from all directories in /var/log/dirsrv/

This uses the standard LogFileOutput parser class for its implementation.

Note: This parser class gets the errors file from all directories in /var/log/dirsrv. Therefore, you will use this parser as a list in the shared parsers dictionary. Iterate through and check the file name for the one you want, or scan all logs.

Sample input:

    389-Directory/1.2.11.15 B2014.300.2010
    ldap.example.com:7390 (/etc/dirsrv/slapd-PKI-IPA)

[23/Apr/2015:23:12:31 -0400] slapi_ldap_bind - Error: could not send startTLS request: error -11 (Connect error) errno 0 (Success)
[23/Apr/2015:23:17:31 -0400] slapi_ldap_bind - Error: could not send startTLS request: error -11 (Connect error) errno 0 (Success)
[23/Apr/2015:23:22:31 -0400] slapi_ldap_bind - Error: could not send startTLS request: error -11 (Connect error) errno 0 (Success)
[23/Apr/2015:23:27:31 -0400] slapi_ldap_bind - Error: could not send startTLS request: error -11 (Connect error) errno 0 (Success)
[23/Apr/2015:23:32:31 -0400] slapi_ldap_bind - Error: could not send startTLS request: error -11 (Connect error) errno 0 (Success)
[23/Apr/2015:23:37:31 -0400] slapi_ldap_bind - Error: could not send startTLS request: error -11 (Connect error) errno 0 (Success)

Examples

>>> for error_log in shared[DirSrvErrorsLog]:
...     print("Path:", error_log.path)
...     requests = error_log.get('could not send startTLS')
...     print("TLS send error lines:", len(requests))
...     tstamp = datetime.datetime(2015, 4, 23, 23, 22, 31)
...     print("Connections not before 23:22:31:", len(error_log.get_after(tstamp, lines=requests)))
...
Path: /var/log/dirsrv/slapd-PKI-IPA/errors
TLS send error lines: 6
Connections not before 23:22:31: 4

dirsrv_sysconfig - file /etc/sysconfig/dirsrv

This module provides the DirsrvSysconfig class parser, for reading the options in the /etc/sysconfig/dirsrv file.

Sample input:

# how many seconds to wait for the startpid file to show
# up before we assume there is a problem and fail to start
# if using systemd, omit the "; export VARNAME" at the end
#STARTPID_TIME=10 ; export STARTPID_TIME
# how many seconds to wait for the pid file to show
# up before we assume there is a problem and fail to start
# if using systemd, omit the "; export VARNAME" at the end
#PID_TIME=600 ; export PID_TIME
KRB5CCNAME=/tmp/krb5cc_995
KRB5_KTNAME=/etc/dirsrv/ds.keytab

Examples

>>> dirsrv = shared[DirsrvSysconfig]
>>> dirsrv.KRB5_KTNAME
'/etc/dirsrv/ds.keytab'
>>> 'PID_TIME' in dirsrv.data
False
class insights.parsers.dirsrv_sysconfig.DirsrvSysconfig(*args, **kwargs)[source]

Bases: insights.core.SysconfigOptions

Warning

This parser is deprecated, please use insights.parsers.sysconfig.DirsrvSysconfig instead.

Parse the dirsrv service’s start-up configuration.

DMesgLineList - command dmesg

DMesgLineList is a simple parser that is based on the LogFileOutput parser class.

It provides one additional helper method not included in LogFileOutput:

  • has_startswith - does the log contain any line that starts with the given string?

Sample input:

[    0.000000] Linux version 3.10.0-327.36.3.el7.x86_64 (mockbuild@x86-037.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) ) #1 SMP Thu Oct 20 04:56:07 EDT 2016
[    0.000000] Command line: BOOT_IMAGE=/vmlinuz-3.10.0-327.36.3.el7.x86_64 root=/dev/RHEL7CSB/Root ro rd.lvm.lv=RHEL7CSB/Root rd.luks.uuid=luks-96c66446-77fd-4431-9508-f6912bd84194 crashkernel=auto vconsole.keymap=us rd.lvm.lv=RHEL7CSB/Swap vconsole.font=latarcyrheb-sun16 rhgb quiet LANG=en_GB.utf8
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100
[    0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    0.000000] AGP: Checking aperture...
[    0.000000] AGP: No AGP bridge found
[    0.000000] Memory: 15918544k/17274880k available (6444k kernel code, 820588k absent, 535748k reserved, 4265k data, 1632k init)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
[    0.000000] Hierarchical RCU implementation.

Examples

>>> dmesg = shared[DmesgLineList]
>>> 'BOOT_IMAGE' in dmesg
True
>>> dmesg.get('AGP')
['[    0.000000] AGP: Checking aperture...', '[    0.000000] AGP: No AGP bridge found']
class insights.parsers.dmesg.DmesgLineList(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LogFileOutput

Class for reading output of dmesg using the LogFileOutput parser class.

Note

Please refer to its super-class insights.core.LogFileOutput

get_after(timestamp, s=None)[source]

Find all the (available) logs that are after the given time stamp.

If s is not supplied, all lines are used. Otherwise, only the lines containing s are used. s can be either a single string or a list of strings; for a list, every keyword in the list must be found in the line.

Note

The time stamp is the floating point number of seconds after the boot time, and is not related to an actual datetime. If a time stamp is not found on the line between square brackets, then it is treated as a continuation of the previous line and is only included if the previous line’s timestamp is greater than the timestamp given. Because continuation lines are only included if a previous line has matched, this means that searching in logs that do not have a time stamp produces no lines.

Parameters
  • timestamp (float) -- log lines after this time are returned.

  • s (str or list) -- one or more strings to search for. If not supplied, all available lines are searched.

Yields

Log lines with time stamps after the given time.

Raises

TypeError -- Raised if the timestamp is not a float.
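The timestamp filtering with continuation-line handling described in the note above can be sketched like this. It is a simplified illustration of the behaviour, not the method's actual implementation:

```python
import re

# dmesg lines start with '[  seconds.micros]' relative to boot
TS_RE = re.compile(r"^\[\s*([\d.]+)\]")

def dmesg_after(lines, timestamp):
    """Yield lines whose leading '[ seconds ]' stamp exceeds `timestamp`.
    Unstamped continuation lines are included only while the previous
    stamped line was included, matching the note above."""
    include = False
    for line in lines:
        match = TS_RE.match(line)
        if match:
            include = float(match.group(1)) > timestamp
        if include:
            yield line
```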

has_startswith(prefix)[source]
Parameters

prefix (str) -- The prefix of the line to look for. Ignores any timestamp before the message body.

Returns

Does any line start with the given prefix?

Return type

(bool)

logs_startwith(prefix)[source]
Parameters

prefix (str) -- The prefix of the line to look for. Ignores any timestamp.

Returns

All log lines that start with the given prefix, with timestamps removed.

Return type

(list)

DmesgLog - file /var/log/dmesg

This module provides access to the messages from the kernel ring buffer gathered from the /var/log/dmesg file.

Typical output of the /var/log/dmesg file is:

[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.10.0-862.el7.x86_64 (mockbuild@x86-034.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC) ) #1 SMP Wed Mar 21 18:14:51 EDT 2018
[    2.090905] SELinux:  Completing initialization.
[    2.090907] SELinux:  Setting up existing superblocks.
[    2.099684] systemd[1]: Successfully loaded SELinux policy in 82.788ms.
[    2.117410] ip_tables: (C) 2000-2006 Netfilter Core Team
[    2.117429] systemd[1]: Inserted module 'ip_tables'
[    2.376551] systemd-journald[441]: Received request to flush runtime journal from PID 1
[    2.716874] cryptd: max_cpu_qlen set to 100
[    2.804152] AES CTR mode by8 optimization enabled

Examples

>>> "SELinux" in dmesg_log
True
>>> "ip6_tables" in dmesg_log
False
>>> dmesg_log.get("SELinux")[0]["raw_message"]
"[    2.090905] SELinux:  Completing initialization."
>>> len(dmesg_log.get("SELinux"))
2
class insights.parsers.dmesg_log.DmesgLog(context, extra_bad_lines=[])[source]

Bases: insights.parsers.dmesg.DmesgLineList

Class for parsing the /var/log/dmesg file using the DmesgLineList class.

DMIDecode - Command dmidecode

Parses the output of the dmidecode command to catalogue the hardware associated with the system.

In general, DMIDecode recognizes sections of device information, separated by blank lines, and processes them in the following way.

  • It uses the descriptor line that precedes the indented device information (e.g. ‘BIOS Information’) as the name for that section, converting the name into lower case and replacing spaces with underscores (e.g. ‘bios_information’) to look more Pythonic.

  • Within each section, data is split up on colons.

  • Lines such as ‘Characteristics’ that end with a colon and have one or more further indented lines after them are converted into a list of the values so indented.
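The section-splitting and key normalization described in the list above can be sketched as follows. This is a deliberately simplified sketch: it ignores the `Handle` lines and the nested multi-value lists such as 'Characteristics', both of which the real parser handles.

```python
def parse_sections(lines):
    """Group indented 'key: value' lines under the preceding descriptor
    line, converting section names and keys to snake_case."""
    sections, current = {}, None
    for line in lines:
        stripped = line.strip()
        if not stripped:
            current = None                     # blank line ends the section
            continue
        if ":" in stripped and current is not None:
            key, _, value = stripped.partition(":")
            current[key.strip().lower().replace(" ", "_")] = value.strip()
        elif ":" not in stripped:
            name = stripped.lower().replace(" ", "_")
            current = sections.setdefault(name, {})
    return sections
```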

The common information retrieved from dmidecode is available in several convenience properties:

  • system_info - Information about the machine itself

  • bios - the BIOS information

  • bios_vendor - the BIOS’s ‘Vendor’ attribute

  • bios_date - the BIOS’s ‘Release Date’ attribute

  • processor_manufacturer - the processor’s ‘Manufacturer’ attribute

  • is_present - this indicates whether dmidecode information was found.

Sample input:

# dmidecode 2.2
SMBIOS 2.4 present.
104 structures occupying 3162 bytes.
Table at 0x000EE000.
Handle 0x0000
    DMI type 0, 24 bytes.
    BIOS Information
        Vendor: HP
        Version: A08
        Release Date: 09/27/2008
        Address: 0xF0000
        Runtime Size: 64 kB
        ROM Size: 4096 kB
        Characteristics:
            PCI is supported
            PNP is supported
            BIOS is upgradeable
            BIOS shadowing is allowed
            ESCD support is available
            Boot from CD is supported
            Selectable boot is supported
            EDD is supported
            5.25"/360 KB floppy services are supported (int 13h)
            5.25"/1.2 MB floppy services are supported (int 13h)
            3.5"/720 KB floppy services are supported (int 13h)
            Print screen service is supported (int 5h)
            8042 keyboard services are supported (int 9h)
            Serial services are supported (int 14h)
            Printer services are supported (int 17h)
            CGA/mono video services are supported (int 10h)
            ACPI is supported
            USB legacy is supported
            BIOS boot specification is supported
            Function key-initiated network boot is supported
Handle 0x0100
    DMI type 1, 27 bytes.
    System Information
        Manufacturer: HP
        Product Name: ProLiant BL685c G1
        Version: Not Specified
        Serial Number: 3H6CMK2537
        UUID: 58585858-5858-3348-3643-4D4B32353337
        Wake-up Type: Power Switch

Examples

>>> dmi = shared[DMIDecode]
>>> dmi.is_present
True
>>> len(dmi['bios_information'])
1
>>> dmi['bios_information'][0]['vendor']
'HP'
>>> dmi.bios_vendor
'HP'
>>> dmi.bios_date
datetime.date(2008, 9, 27)
>>> len(dmi.bios['characteristics'])
20
>>> dmi.bios['characteristics'][0]
'PCI is supported'
class insights.parsers.dmidecode.DMIDecode(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LegacyItemAccess

Class for DMI information.

property bios

Convenience method to get BIOS information

Type

(str)

property bios_date

Get the BIOS release date in datetime.date format

Type

(datetime.date)

property bios_vendor

Convenience method to get BIOS vendor

Type

(str)

property is_present

Is there any decodable data in the content?

Type

(bool)

parse_content(content)[source]

This method must be implemented by classes based on this class.

property processor_manufacturer

Convenience method to get the processor manufacturer

Type

(str)

property system_info

Convenience method to get system information

Type

(str)

insights.parsers.dmidecode.parse_dmidecode(dmidecode_content, pythonic_keys=False)[source]

Returns a dictionary of dmidecode information parsed from a dmidecode list (i.e. from context.content)

This method will attempt to handle leading spaces rather than tabs.

dmsetup commands - Command dmsetup

Parsers for parsing and extracting data from output of commands related to dmsetup. Parsers contained in this module are:

DmsetupInfo - command dmsetup info -C

class insights.parsers.dmsetup.DmsetupInfo(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

dmsetup info -C command output

Example input:

Name               Maj Min Stat Open Targ Event  UUID
VG00-tmp           253   8 L--w    1    1      0 LVM-gy9uAwD7LuTIApplr2sogbOx5iS0FTax6lLmBji2ueSbX49gxcV76M29cmukQiw4
VG00-home          253   3 L--w    1    1      0 LVM-gy9uAwD7LuTIApplr2sogbOx5iS0FTaxCqXOnbGe2zjhX923dFiIdl1oi7mO9tXp
VG00-var           253   6 L--w    1    2      0 LVM-gy9uAwD7LuTIApplr2sogbOx5iS0FTaxicvyvt67113nTb8vMlGfgdEjDx0LKT2O
VG00-swap          253   1 L--w    2    1      0 LVM-gy9uAwD7LuTIApplr2sogbOx5iS0FTax3Ll2XhOYZkylx1CjOQi7G4yHgrIOsyqG
VG00-root          253   0 L--w    1    1      0 LVM-gy9uAwD7LuTIApplr2sogbOx5iS0FTaxKpnAKYhrYMYMNMwjegkW965bUgtJFTRY
VG00-var_log_audit 253   5 L--w    1    1      0 LVM-gy9uAwD7LuTIApplr2sogbOx5iS0FTaxwQ8R0XWJRm86QX3befq1cHRy47Von6ZW

Example data structure produced:

data = [
  {
    'Stat': 'L--w',
    'Name': 'VG00-tmp',
    'Min': '8',
    'Targ': '1',
    'Maj': '253',
    'Open': '1',
    'Event': '0',
    'UUID': 'LVM-gy9uAwD7LuTIApplr2sogbOx5iS0FTax6lLmBji2ueSbX49gxcV76M29cmukQiw4'
  },...
]
data

List of devices found, in order

Type

list

names

Device names, in order found

Type

list

uuids

Device UUIDs, in order found

Type

list

by_name

Access to each device by devicename

Type

dict

by_uuid

Access to each device by uuid

Type

dict

Example

>>> len(info)
6
>>> info.names[0]
'VG00-tmp'
>>> info[1]['Maj']
'253'
>>> info[1]['Stat']
'L--w'
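The header-plus-rows table printed by dmsetup info -C can be parsed by using the header tokens as dict keys for each row, which is how the data structure above arises. A minimal sketch, assuming purely whitespace-separated columns (which holds for this output):

```python
def parse_dmsetup_table(lines):
    """Turn `dmsetup info -C` output into a list of row dicts keyed by
    the header column names. Sketch only; the real parser also builds
    the by_name and by_uuid indexes."""
    header = lines[0].split()
    return [dict(zip(header, row.split())) for row in lines[1:] if row.strip()]
```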
parse_content(content)[source]

This method must be implemented by classes based on this class.

dnf module commands

Parsers provided in this module includes:

DnfModuleList - command dnf module list

DnfModuleInfo - command dnf module info *

class insights.parsers.dnf_module.DnfModuleBrief(data=None)[source]

Bases: object

An object for the dnf modules listed by dnf module list command

name

Name of the dnf module

Type

str

stream

Stream of the dnf module

Type

str

profiles

List of profiles of the dnf module

Type

list

summary

Summary of the dnf module

Type

str

class insights.parsers.dnf_module.DnfModuleDetail(data=None)[source]

Bases: insights.parsers.dnf_module.DnfModuleBrief

An object for the dnf modules listed by dnf module info command

name

Name of the dnf module

Type

str

stream

Stream of the dnf module

Type

str

version

Version of the dnf module

Type

str

context

Context of the dnf module

Type

str

profiles

List of profiles of the dnf module

Type

list

default_profiles

Default profile of the dnf module

Type

str

repo

Repo of the dnf module

Type

str

summary

Summary of the dnf module

Type

str

description

Description of the dnf module

Type

str

artifacts

List of the artifacts of the dnf module

Type

list

class insights.parsers.dnf_module.DnfModuleInfo(*args, **kwargs)[source]

Bases: insights.core.CommandParser, dict

Class to parse the output of command dnf module info XX, YY

Typical output of the command is:

Updating Subscription Management repositories.
Last metadata expiration check: 0:31:25 ago on Thu 25 Jul 2019 12:19:22 PM CST.
Name        : 389-ds
Stream      : 1.4
Version     : 8000020190424152135
Context     : ab753183
Repo        : rhel-8-for-x86_64-appstream-rpms
Summary     : 389 Directory Server (base)
Description : 389 Directory Server is an LDAPv3 compliant server.  The base package includes the LDAP server and command line utilities for server administration.
Artifacts   : 389-ds-base-0:1.4.0.20-10.module+el8.0.0+3096+101825d5.src
            : 389-ds-base-0:1.4.0.20-10.module+el8.0.0+3096+101825d5.x86_64
            : 389-ds-base-debuginfo-0:1.4.0.20-10.module+el8.0.0+3096+101825d5.x86_64
            : 389-ds-base-debugsource-0:1.4.0.20-10.module+el8.0.0+3096+101825d5.x86_64
            : 389-ds-base-devel-0:1.4.0.20-10.module+el8.0.0+3096+101825d5.x86_64
            : 389-ds-base-legacy-tools-0:1.4.0.20-10.module+el8.0.0+3096+101825d5.x86_64
            : 389-ds-base-legacy-tools-debuginfo-0:1.4.0.20-10.module+el8.0.0+3096+101825d5.x86_64
            : 389-ds-base-libs-0:1.4.0.20-10.module+el8.0.0+3096+101825d5.x86_64
            : 389-ds-base-libs-debuginfo-0:1.4.0.20-10.module+el8.0.0+3096+101825d5.x86_64
            : 389-ds-base-snmp-0:1.4.0.20-10.module+el8.0.0+3096+101825d5.x86_64
            : 389-ds-base-snmp-debuginfo-0:1.4.0.20-10.module+el8.0.0+3096+101825d5.x86_64
            : python3-lib389-0:1.4.0.20-10.module+el8.0.0+3096+101825d5.noarch

Name        : 389-ds
Stream      : 1.4
Version     : 820190201170147
Context     : 1fc8b219
Repo        : rhel-8-for-x86_64-appstream-rpms
Summary     : 389 Directory Server (base)
Description : 389 Directory Server is an LDAPv3 compliant server.  The base package includes the LDAP server and command line utilities for server administration.
Artifacts   : 389-ds-base-0:1.4.0.20-7.module+el8+2750+1f4079fb.x86_64
            : 389-ds-base-devel-0:1.4.0.20-7.module+el8+2750+1f4079fb.x86_64
            : 389-ds-base-legacy-tools-0:1.4.0.20-7.module+el8+2750+1f4079fb.x86_64
            : 389-ds-base-libs-0:1.4.0.20-7.module+el8+2750+1f4079fb.x86_64
            : 389-ds-base-snmp-0:1.4.0.20-7.module+el8+2750+1f4079fb.x86_64
            : python3-lib389-0:1.4.0.20-7.module+el8+2750+1f4079fb.noarch

Name             : ant
Stream           : 1.10 [d][a]
Version          : 820181213135032
Context          : 5ea3b708
Profiles         : common [d]
Default profiles : common
Repo             : rhel-8-for-x86_64-appstream-rpms
Summary          : Java build tool
Description      : Apache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications. Ant supplies a number of built-in tasks allowing to compile, assemble, test and run Java applications. Ant can also be used effectively to build non Java applications, for instance C or C++ applications. More generally, Ant can be used to pilot any type of process which can be described in terms of targets and tasks.
Artifacts        : ant-0:1.10.5-1.module+el8+2438+c99a8a1e.noarch
                 : ant-lib-0:1.10.5-1.module+el8+2438+c99a8a1e.noarch

Name             : httpd
Stream           : 2.4 [d][e][a]
Version          : 820190206142837
Context          : 9edba152
Profiles         : common [d] [i], devel, minimal
Default profiles : common
Repo             : rhel-8-for-x86_64-appstream-rpms
Summary          : Apache HTTP Server
Description      : Apache httpd is a powerful, efficient, and extensible HTTP server.
Artifacts        : httpd-0:2.4.37-10.module+el8+2764+7127e69e.x86_64
                 : httpd-devel-0:2.4.37-10.module+el8+2764+7127e69e.x86_64
                 : httpd-filesystem-0:2.4.37-10.module+el8+2764+7127e69e.noarch
                 : httpd-manual-0:2.4.37-10.module+el8+2764+7127e69e.noarch
                 : httpd-tools-0:2.4.37-10.module+el8+2764+7127e69e.x86_64
                 : mod_http2-0:1.11.3-1.module+el8+2443+605475b7.x86_64
                 : mod_ldap-0:2.4.37-10.module+el8+2764+7127e69e.x86_64
                 : mod_md-0:2.4.37-10.module+el8+2764+7127e69e.x86_64
                 : mod_proxy_html-1:2.4.37-10.module+el8+2764+7127e69e.x86_64
                 : mod_session-0:2.4.37-10.module+el8+2764+7127e69e.x86_64
                 : mod_ssl-1:2.4.37-10.module+el8+2764+7127e69e.x86_64

Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled, [a]ctive]

The module information is stored in this object as a dictionary, with the module name as the key and a list of DnfModuleDetail objects as the value.
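The record layout shown above (blank-line-separated blocks, `Key : value` fields, and bare `: value` continuation lines that extend the previous field, as in Artifacts) can be sketched like this. It is a simplified illustration, not the parser's actual code; header and hint lines are only filtered by requiring a colon.

```python
def parse_module_blocks(lines):
    """Split `dnf module info` output into per-module dicts.
    A 'Key : value' line starts a field; a bare ': value' line extends
    the previous field into a list; a blank line ends the block."""
    modules, current, last_key = [], {}, None
    for line in lines:
        stripped = line.strip()
        if not stripped:
            if current:
                modules.append(current)
            current, last_key = {}, None
            continue
        if ":" not in stripped:
            continue
        key, _, value = stripped.partition(":")
        key, value = key.strip(), value.strip()
        if key:                                # 'Key : value'
            current[key] = value
            last_key = key
        elif last_key:                         # continuation ': value'
            prev = current[last_key]
            current[last_key] = (prev if isinstance(prev, list) else [prev]) + [value]
    if current:
        modules.append(current)
    return modules
```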

Examples

>>> type(dnf_module_info)
<class 'insights.parsers.dnf_module.DnfModuleInfo'>
>>> len(dnf_module_info)
3
>>> dnf_module_info.modules
['389-ds', 'ant', 'httpd']
>>> "389-ds" in dnf_module_info
True
>>> len(dnf_module_info.get("389-ds"))
2
>>> type(dnf_module_info.get("389-ds")[0])
<class 'insights.parsers.dnf_module.DnfModuleDetail'>
>>> dnf_module_info['389-ds'][0].name
'389-ds'
>>> dnf_module_info["389-ds"][0].profiles
[]
>>> dnf_module_info["389-ds"][0].default_profiles
''
>>> dnf_module_info['389-ds'][1].name
'389-ds'
>>> dnf_module_info['389-ds'][1].context
'1fc8b219'
>>> "ant" in dnf_module_info
True
>>> len(dnf_module_info.get("ant"))
1
>>> type(dnf_module_info.get("ant")[0])
<class 'insights.parsers.dnf_module.DnfModuleDetail'>
>>> dnf_module_info['ant'][0].name
'ant'
>>> dnf_module_info['ant'][0].context
'5ea3b708'
>>> dnf_module_info["ant"][0].version
'820181213135032'
>>> dnf_module_info['ant'][0].profiles
['common [d]']
>>> dnf_module_info['ant'][0].default_profiles
'common'
>>> dnf_module_info["httpd"][0].profiles
['common [d] [i]', 'devel', 'minimal']
>>> dnf_module_info["httpd"][0].default_profiles
'common'
property modules

Returns a list of module names.

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.dnf_module.DnfModuleList(*args, **kwargs)[source]

Bases: insights.core.CommandParser, dict

Class to parse the output of the command dnf module list. Typical output of the command is:

Updating Subscription Management repositories.
Name                Stream      Profiles                                  Summary
389-ds              1.4                                                   389 Directory Server (base)
ant                 1.10 [d]    common [d]                                Java build tool

Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled

Examples

>>> len(dnf_module_list)
2
>>> dnf_module_list.get("389-ds").stream
'1.4'
>>> dnf_module_list.get("389-ds").profiles
[]
>>> dnf_module_list.get("ant").stream
'1.10 [d]'
property modules

Returns a list of module names.

parse_content(content)[source]

This method must be implemented by classes based on this class.

DnfModules - files under the /etc/dnf/modules.d/ directory

Modularity configuration

class insights.parsers.dnf_modules.DnfModules(context)[source]

Bases: insights.core.IniConfigFile

Provides access to the state of enabled modules/streams/profiles, which is stored in files under the /etc/dnf/modules.d/ directory

Examples

>>> len(dnf_modules.sections())
3
>>> str(dnf_modules.get("postgresql", "stream"))
'9.6'
>>> str(dnf_modules.get("postgresql", "profiles"))
'client'

DnsmasqConf - files /etc/dnsmasq.conf and /etc/dnsmasq.d/*.conf

class insights.parsers.dnsmasq_config.DnsmasqConf(context)[source]

Bases: insights.core.ConfigParser

Class to parse the content of the dnsmasq configuration files /etc/dnsmasq.conf and /etc/dnsmasq.d/*.conf

Sample configuration output:

# Listen on this specific port instead of the standard DNS port
# (53). Setting this to zero completely disables DNS function,
# leaving only DHCP and/or TFTP.
port=5353

no-resolv
domain-needed
no-negcache
max-cache-ttl=1
enable-dbus
dns-forward-max=5000
cache-size=5000
bind-dynamic
except-interface=lo
server=/in-addr.arpa/127.0.0.1
server=/cluster.local/127.0.0.1
# End of config

Examples
>>> "no-resolv" in conf
True
>>> conf["server"]
server=/in-addr.arpa/127.0.0.1
server=/cluster.local/127.0.0.1
>>> conf["dns-forward-max"][-1].value
5000

docker_host_machineid_parser - File /etc/redhat-access-insights/machine-id

This is a fairly simple function to read the Insights machine ID.

Because of the way this parser is used in the docker rules, this returns a one-element dictionary with the machine ID referred to by the key host_system_id.

Examples

>>> docker_info = {}
>>> if docker_host_machineid_parser in shared:
...     docker_info.update(shared[docker_host_machineid_parser])
>>> docker_info
{'host_system_id': '123456789'}
insights.parsers.docker_host_machine_id.docker_host_machineid_parser(context)[source]
Returns

Return the Insights machine ID in a dict with the key ‘host_system_id’.

Return type

(dict)

DockerInspect - Command docker inspect --type={TYPE}

This module parses the output of the docker inspect command. This uses the core.marshalling.unmarshal function to parse the JSON output from the commands. The parsed data can be accessed as a dictionary via the object itself.

class insights.parsers.docker_inspect.DockerInspect(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, dict

Parse the output of command “docker inspect --type=image” and “docker inspect --type=container”. The output of these two commands is formatted as JSON, so “json.loads” is an option to parse the output in the future.

Raises

SkipException -- If content is not provided

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.docker_inspect.DockerInspectContainer(context, extra_bad_lines=[])[source]

Bases: insights.parsers.docker_inspect.DockerInspect

Parse docker container inspect output using the DockerInspect parser class.

Sample input:

[
    {
        "Id": "97d7cd1a5d8fd7730e83bb61ecbc993742438e966ac5c11910776b5d53f4ae07",
        "Created": "2016-06-23T05:12:25.433469799Z",
        "Path": "/bin/bash",
        "Args": [],
        "Name": "/hehe2",
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
    ...

Examples

>>> container['Id'] == '97d7cd1a5d8fd7730e83bb61ecbc993742438e966ac5c11910776b5d53f4ae07'
True
>>> container['Name'] == '/hehe2'
True
>>> container.get('State').get('Paused') # sub-dictionaries
False
class insights.parsers.docker_inspect.DockerInspectImage(context, extra_bad_lines=[])[source]

Bases: insights.parsers.docker_inspect.DockerInspect

Parse docker image inspect output using the DockerInspect parser class.

Sample input:

[
    {
        "Id": "882ab98aae5394aebe91fe6d8a4297fa0387c3cfd421b2d892bddf218ac373b2",
        "RepoTags": [
            "rhel7_imagemagick:latest"
        ],
        "RepoDigests": [],
...

Examples

>>> image['Id'] == '882ab98aae5394aebe91fe6d8a4297fa0387c3cfd421b2d892bddf218ac373b2'
True
>>> image['RepoTags'][0] == 'rhel7_imagemagick:latest'
True

DockerList - command /usr/bin/docker (images|ps)

Parse the output of command “docker_list_images” and “docker_list_containers”, which have very similar formats.

The header line is parsed and used as the names for the remaining columns. All fields in both header and data are assumed to be separated by at least three spaces. This allows single spaces in values and headers, so headers such as ‘IMAGE ID’ are captured as is.

If the header line and at least one data line are not found, no data is stored.

Each row is stored as a dictionary, keyed on the header fields. The data is available in two formats:

  • The old format is a list of row dictionaries.

  • The new format stores each dictionary in a dictionary keyed on the value of a given field, given by the subclass.
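As a rough illustration of the three-space rule described above, the table split can be sketched like this. This is a simplified sketch (the function name `parse_docker_table` is hypothetical), not the parser's actual implementation, which also uses the vertical alignment of columns to handle empty fields:

```python
import re

def parse_docker_table(lines):
    """Split a docker table on runs of three or more spaces.

    Single spaces inside headers (e.g. 'IMAGE ID') and inside values
    survive, because only runs of 3+ spaces act as separators.
    """
    splitter = re.compile(r'\s{3,}')
    header = splitter.split(lines[0].strip())
    rows = []
    for line in lines[1:]:
        if not line.strip():
            continue
        fields = splitter.split(line.strip())
        # Note: a completely empty column would collapse here; the real
        # parser uses column positions to turn such fields into None.
        fields += [None] * (len(header) - len(fields))
        rows.append(dict(zip(header, fields)))
    return rows

output = [
    "REPOSITORY          TAG       IMAGE ID",
    "rhel7_imagemagick   latest    882ab98aae53",
]
print(parse_docker_table(output))
# [{'REPOSITORY': 'rhel7_imagemagick', 'TAG': 'latest', 'IMAGE ID': '882ab98aae53'}]
```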

class insights.parsers.docker_list.DockerList(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

A general class for parsing tabular docker list information. Parsing rules are:

  • The first line is the header line.

  • The other lines are data lines.

  • All fields line up vertically.

  • Fields are separated from each other by at least three spaces.

  • Some fields can contain nothing, and this is shown as spaces, so we need to catch that and turn it into None.

Why not just use hard-coded fields and columns? So that we can adapt to different output lists.

Raises
  • NotImplementedError -- If key_field or attr_name is not defined

  • SkipException -- If no data to parse

parse_content(content)[source]

Parse the lines given into a list of dictionaries for each row. This is stored in the rows attribute.

If the key_field property is set, use this to key a data dictionary attribute.

class insights.parsers.docker_list.DockerListContainers(context, extra_bad_lines=[])[source]

Bases: insights.parsers.docker_list.DockerList

Handle the list of docker containers using the DockerList parser class.

Sample output of command docker ps --all --no-trunc --size:

CONTAINER ID                                                       IMAGE                                                              COMMAND                                            CREATED             STATUS                        PORTS                  NAMES               SIZE
95516ea08b565e37e2a4bca3333af40a240c368131b77276da8dec629b7fe102   bd8638c869ea40a9269d87e9af6741574562af9ee013e03ac2745fb5f59e2478   "/bin/sh -c 'yum install -y vsftpd-2.2.2-6.el6'"   51 minutes ago      Exited (137) 50 minutes ago                          tender_rosalind     4.751 MB (virtual 200.4 MB)
03e2861336a76e29155836113ff6560cb70780c32f95062642993b2b3d0fc216   rhel7_httpd                                                        "/usr/sbin/httpd -DFOREGROUND"                     45 seconds ago      Up 37 seconds                 0.0.0.0:8080->80/tcp   angry_saha          796 B (virtual 669.2 MB)
rows

List of row dictionaries.

Type

list

containers

Dictionary keyed on the value of the “NAMES” field

Type

dict

Examples

>>> containers.rows[0]['STATUS']
'Exited (137) 50 minutes ago'
>>> containers.containers['angry_saha']['STATUS']
'Up 37 seconds'
class insights.parsers.docker_list.DockerListImages(context, extra_bad_lines=[])[source]

Bases: insights.parsers.docker_list.DockerList

Handle the list of docker images using the DockerList parser class.

Sample output of command docker images --all --no-trunc --digests:

REPOSITORY                           TAG                 DIGEST              IMAGE ID                                                           CREATED             VIRTUAL SIZE
rhel7_imagemagick                    latest              <none>              882ab98aae5394aebe91fe6d8a4297fa0387c3cfd421b2d892bddf218ac373b2   4 days ago          785.4 MB
rhel6_nss-softokn                    latest              <none>              dd87dad2c7841a19263ae2dc96d32c501ee84a92f56aed75bb67f57efe4e48b5   5 days ago          449.7 MB
rows

List of row dictionaries.

Type

list

images

Dictionary keyed on the value of the “REPOSITORY” field

Type

dict

Examples

>>> images.rows[0]['REPOSITORY']
'rhel7_imagemagick'
>>> images.rows[1]['VIRTUAL SIZE']
'449.7 MB'
>>> images.images['rhel7_imagemagick']['CREATED']
'4 days ago'

DockerStorageSetup - file /etc/sysconfig/docker-storage-setup

A fairly simple parser to read the contents of the docker storage setup configuration.

Sample input:

# Edit this file to override any configuration options specified in
# /usr/lib/docker-storage-setup/docker-storage-setup.
#
# For more details refer to "man docker-storage-setup"
VG=vgtest
AUTO_EXTEND_POOL=yes
##name = mydomain
POOL_AUTOEXTEND_THRESHOLD=60
POOL_AUTOEXTEND_PERCENT=20

Examples

>>> setup = shared[DockerStorageSetup]
>>> setup['VG'] # Pseudo-dict access
'vgtest'
>>> 'name' in setup
False
>>> setup.data['POOL_AUTOEXTEND_THRESHOLD'] # old style access
'60'
class insights.parsers.docker_storage_setup.DockerStorageSetup(*args, **kwargs)[source]

Bases: insights.core.SysconfigOptions

Warning

This parser is deprecated, please use insights.parsers.sysconfig.DockerStorageSetupSysconfig instead.

A parser for accessing /etc/sysconfig/docker-storage-setup.

DockerInfo - Command /usr/bin/docker info

This parser reads the output of /usr/bin/docker info.

The resulting data structure is available in the data member of the class and takes the form of a dictionary whose keys are the “keys” in the output (the string before the :) and whose values are the values (the string following the :), both stripped of leading and trailing spaces.

Sample output:

Containers: 0
Images: 0
Server Version: 1.9.1
Storage Driver: devicemapper
Pool Name: rhel-docker--pool
Pool Blocksize: 524.3 kB
Base Device Size: 107.4 GB
Backing Filesystem: xfs
Data file:
Metadata file:
Data Space Used: 62.39 MB
Data Space Total: 3.876 GB
Data Space Available: 3.813 GB
Metadata Space Used: 40.96 kB
Metadata Space Total: 8.389 MB
Metadata Space Available: 8.348 MB
Udev Sync Supported: true
Deferred Removal Enabled: true
Deferred Deletion Enabled: true
Deferred Deleted Device Count: 0
Library Version: 1.02.107-RHEL7 (2015-12-01)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-327.el7.x86_64
CPUs: 1
Total Memory: 993 MiB
Name: host001.example.com
ID: QPOX:46K6:RZK5:GPBT:DEUD:QM6H:5LRE:R63D:42DI:4BH3:6ZOZ:5EUM

Examples

>>> docker_info = shared[DockerInfo]
>>> docker_info.data['Containers']
'0'
>>> docker_info.data['ID']
'QPOX:46K6:RZK5:GPBT:DEUD:QM6H:5LRE:R63D:42DI:4BH3:6ZOZ:5EUM'

If the command does not return the information (for example, if the Docker daemon isn’t running), the data dictionary is empty.
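The key/value split described above amounts to partitioning each line on its first colon. A minimal sketch, not the parser's own code (`parse_docker_info` is a hypothetical name):

```python
def parse_docker_info(lines):
    """Split each 'Key: value' line of `docker info` output on the
    first colon, stripping leading and trailing spaces from both sides.

    Lines with no value (e.g. 'Data file:') get an empty string.
    """
    data = {}
    for line in lines:
        if ':' not in line:
            continue
        key, _, value = line.partition(':')
        data[key.strip()] = value.strip()
    return data

info = parse_docker_info(["Containers: 0", "Data file:", "Server Version: 1.9.1"])
print(repr(info['Containers']))   # '0' -- values stay strings
print(repr(info['Data file']))    # ''
```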

class insights.parsers.dockerinfo.DockerInfo(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Represents the output of the /usr/bin/docker info command.

The resulting output of the command is essentially key/value pairs.

parse_content(content)[source]

This method must be implemented by classes based on this class.

DumpE2FS - Command dumpe2fs -h

This parser handles dumpe2fs output.

The object provides access to this data using a dictionary. Particular keys are stored as lists:

  • Filesystem features

  • Filesystem flags

  • Default mount options

Other keys are stored as strings. The name of the device is stored in the dev_name property.

Typical contents of the /sbin/dumpe2fs -h /dev/device command:

dumpe2fs 1.41.12 (17-May-2010)
Filesystem volume name:   <none>
Last mounted on:          /usr
Filesystem UUID:          1b332c5d-2410-4934-9118-466f8a14841f
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
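The list-vs-string handling described above can be sketched as follows. This is a simplified illustration; `LIST_KEYS` and `parse_dumpe2fs_line` are hypothetical names, not the parser's API:

```python
# The three keys the parser stores as lists; everything else is a string.
LIST_KEYS = {'Filesystem features', 'Filesystem flags', 'Default mount options'}

def parse_dumpe2fs_line(line):
    """Split one 'Key:  value' line; list-valued keys are word-split."""
    key, _, value = line.partition(':')
    key, value = key.strip(), value.strip()
    if key in LIST_KEYS:
        return key, value.split()
    return key, value

print(parse_dumpe2fs_line('Default mount options:    user_xattr acl'))
# ('Default mount options', ['user_xattr', 'acl'])
print(parse_dumpe2fs_line('Filesystem magic number:  0xEF53'))
# ('Filesystem magic number', '0xEF53')
```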

Examples

>>> e2fs = shared[DumpE2fs]
>>> e2fs.dev_name
'/dev/device'
>>> e2fs['Filesystem volume name']
'<none>'
>>> e2fs['Last mounted on']
'/usr'
>>> e2fs['Filesystem UUID']
'1b332c5d-2410-4934-9118-466f8a14841f'
>>> e2fs['Filesystem magic number']
'0xEF53'
>>> e2fs['Filesystem revision #']
'1 (dynamic)'
>>> e2fs['Filesystem features']
['has_journal', 'ext_attr', 'resize_inode', 'dir_index', 'filetype',
 'needs_recovery', 'extent', 'flex_bg', 'sparse_super', 'large_file',
 'huge_file', 'uninit_bg', 'dir_nlink', 'extra_isize']
>>> e2fs['Filesystem flags']
['signed_directory_hash']
>>> e2fs['Default mount options']
['user_xattr', 'acl']
class insights.parsers.dumpe2fs_h.DumpE2fs(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LegacyItemAccess

Parse each line in the output of the dumpe2fs command.

parse_content(content)[source]

This method must be implemented by classes based on this class.

engine_config - command engine-config --all

This module provides access to ovirt-engine configuration parameters by parsing output of command engine-config --all.

class insights.parsers.engine_config.EngineConfigAll(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parsing output of command engine-config --all

The parser tries its best to get the value & version for each keyword. At the moment it works well for output that has keyword, value & version in a single line. It ignores keywords where parsing fails, and skips (rather than fails on) keywords with multi-line output.

Typical output of engine-config --all command is:

MaxRerunVmOnVdsCount: 3 version: general
MaxStorageVdsTimeoutCheckSec: 30 version: general
ClusterRequiredRngSourcesDefault:  version: 3.6
ClusterRequiredRngSourcesDefault:  version: 4.0
ClusterRequiredRngSourcesDefault:  version: 4.1
HotUnplugCpuSupported: {"x86":"false","ppc":"false"} version: 3.6
HotUnplugCpuSupported: {"x86":"false","ppc":"false"} version: 4.0
HotUnplugCpuSupported: {"x86":"true","ppc":"true"} version: 4.1

Examples

>>> from insights.parsers.engine_config import EngineConfigAll
>>> from insights.tests import context_wrap
>>> output = EngineConfigAll(context_wrap(OUTPUT))
>>> 'MaxRerunVmOnVdsCount' in output
True
>>> output['MaxRerunVmOnVdsCount']
['3']
>>> output.get('MaxRerunVmOnVdsCount')
['3']
>>> output['HotUnplugCpuSupported']
['{"x86":"false","ppc":"false"}', '{"x86":"false","ppc":"false"}', '{"x86":"true","ppc":"true"}']
>>> output['ClusterRequiredRngSourcesDefault']
[]
>>> output.head('HotUnplugCpuSupported')
'{"x86":"false","ppc":"false"}'
>>> output.last('HotUnplugCpuSupported')
'{"x86":"true","ppc":"true"}'
>>> output.get_version('HotUnplugCpuSupported')
['3.6', '4.0', '4.1']
>>> 'IDoNotExit' in output
False
>>> output['IDoNotExit']
[]
>>> output.get('IDoNotExit')
[]
>>> output.get_version('IDoNotExit')
[]
>>> output.head('IDoNotExit')
>>> output.last('IDoNotExit')
fields

List of KeyValue namedtuples, one for each line in the configuration file.

Type

list

keywords

Set of keywords present in the configuration file, each keyword has been converted to lowercase.

Type

set

get(keyword)[source]

Get the value for the specified keyword. A “dictionary-like” method.

Example

>>> output.get('MaxStorageVdsTimeoutCheckSec')
['30']
Parameters

keyword (str) -- A keyword, e.g. HotUnplugCpuSupported.

Returns

Values associated with a keyword. Returns an empty list if all the values are empty or the keyword does not exist.

Return type

list

get_version(keyword)[source]

Get versions associated with a key.

Typical output of the engine-config --all command is:

MaxStorageVdsTimeoutCheckSec: 30 version: general
HotUnplugCpuSupported: {"x86":"false","ppc":"false"} version: 3.6
HotUnplugCpuSupported: {"x86":"false","ppc":"false"} version: 4.0
HotUnplugCpuSupported: {"x86":"true","ppc":"true"} version: 4.1

Examples

>>> output.get_version('MaxStorageVdsTimeoutCheckSec')
['general']
>>> output.get_version('HotUnplugCpuSupported')
['3.6', '4.0', '4.1']
Parameters

keyword (str) -- A keyword, e.g. HotUnplugCpuSupported.

Returns

Versions associated with a keyword. Returns an empty list if all the versions are empty or the keyword does not exist.

Return type

list

head(keyword)[source]

Get the first element from the list of values.

Example

>>> output['HotUnplugCpuSupported']
['{"x86":"false","ppc":"false"}', '{"x86":"false","ppc":"false"}', '{"x86":"true","ppc":"true"}']
>>> output.head('HotUnplugCpuSupported')
'{"x86":"false","ppc":"false"}'
Parameters

keyword (str) -- A keyword, e.g. HotUnplugCpuSupported.

Returns

The first element from the list of values associated with a keyword, otherwise None.

Return type

str

keyvalue

namedtuple: Represents a name/value pair as a namedtuple.

alias of KeyValue

last(keyword)[source]

Get the last element from the list of values.

Example

>>> output['HotUnplugCpuSupported']
['{"x86":"false","ppc":"false"}', '{"x86":"false","ppc":"false"}', '{"x86":"true","ppc":"true"}']
>>> output.last('HotUnplugCpuSupported')
'{"x86":"true","ppc":"true"}'
Parameters

keyword (str) -- A keyword, e.g. HotUnplugCpuSupported.

Returns

The last element from the list of values associated with a keyword, otherwise None.

Return type

str

parse_content(content)[source]

Parse each active line for keyword, values & version.

Parameters

content (list) -- Output of command engine-config --all.

EngineLog - file var/log/ovirt-engine/engine.log

This is a standard log parser based on the LogFileOutput class.

Sample input:

2016-05-18 13:21:21,115 INFO [org.ovirt.engine.core.bll.scheduling.policyunits.EvenGuestDistributionBalancePolicyUnit] (DefaultQuartzScheduler_Worker-95) [5bc194fa] There is no host with more than 8 running guests, no balancing is needed
2016-05-18 14:00:51,272 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-95) [5bc194fa] VM ADLG8201 ab289661-bbaa-4d27-a67a-ad20626f60f0 moved from PoweringUp --> Paused
2016-05-18 14:00:51,318 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-95) [5bc194fa] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM ADLG8201 has paused due to storage I/O problem.

Examples

>>> logs = shared[EngineLog]
>>> 'has paused due to storage I/O problem' in logs
True
>>> logs.get('INFO')
['2016-05-18 13:21:21,115 INFO [org.ovirt.engine.core.bll.scheduling.policyunits.EvenGuestDistributionBalancePolicyUnit] (DefaultQuartzScheduler_Worker-95) [5bc194fa] There is no host with more than 8 running guests, no balancing is needed',
 '2016-05-18 14:00:51,272 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-95) [5bc194fa] VM ADLG8201 ab289661-bbaa-4d27-a67a-ad20626f60f0 moved from PoweringUp --> Paused']
>>> from datetime import datetime
>>> list(logs.get_after(datetime(2016, 5, 18, 14, 0, 0)))
['2016-05-18 14:00:51,272 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-95) [5bc194fa] VM ADLG8201 ab289661-bbaa-4d27-a67a-ad20626f60f0 moved from PoweringUp --> Paused',
 '2016-05-18 14:00:51,318 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-95) [5bc194fa] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM ADLG8201 has paused due to storage I/O problem.']
class insights.parsers.engine_log.EngineLog(*args, **kwargs)[source]

Bases: insights.core.LogFileOutput

Warning

This parser is deprecated, please use insights.parsers.ovirt_engine_log.EngineLog instead.

Provide access to ovirt engine logs using the LogFileOutput parser class.

EtcdConf - file /etc/etcd/etcd.conf

The EtcdConf class parses the file /etc/etcd/etcd.conf. The etcd.conf is in the standard ‘ini’ format and is read by the base parser class IniConfigFile.

Typical contents of the file look like:

[member]
ETCD_NAME=f05-h19-000-1029p.rdu2.scalelab.redhat.com
ETCD_LISTEN_PEER_URLS=https://10.1.40.235:2380
ETCD_DATA_DIR=/var/lib/etcd/
#ETCD_WAL_DIR=
#ETCD_SNAPSHOT_COUNT=10000
ETCD_HEARTBEAT_INTERVAL=500
ETCD_ELECTION_TIMEOUT=2500
ETCD_LISTEN_CLIENT_URLS=https://10.1.40.235:2379
#ETCD_MAX_SNAPSHOTS=5
#ETCD_MAX_WALS=5
#ETCD_CORS=


#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://10.1.40.235:2380
ETCD_INITIAL_CLUSTER=f05-h19-000-1029p.rdu2.scalelab.redhat.com=https://10.1.40.235:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-1
#ETCD_DISCOVERY=
#ETCD_DISCOVERY_SRV=
#ETCD_DISCOVERY_FALLBACK=proxy
#ETCD_DISCOVERY_PROXY=
ETCD_ADVERTISE_CLIENT_URLS=https://10.1.40.235:2379
#ETCD_STRICT_RECONFIG_CHECK=false
#ETCD_AUTO_COMPACTION_RETENTION=0
#ETCD_ENABLE_V2=true
ETCD_QUOTA_BACKEND_BYTES=8589934592

#[proxy]
#ETCD_PROXY=off
#ETCD_PROXY_FAILURE_WAIT=5000
#ETCD_PROXY_REFRESH_INTERVAL=30000
#ETCD_PROXY_DIAL_TIMEOUT=1000
#ETCD_PROXY_WRITE_TIMEOUT=5000
#ETCD_PROXY_READ_TIMEOUT=0

#[security]
ETCD_TRUSTED_CA_FILE=/etc/etcd/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_CERT_FILE=/etc/etcd/server.crt
ETCD_KEY_FILE=/etc/etcd/server.key
#ETCD_AUTO_TLS=false
ETCD_PEER_TRUSTED_CA_FILE=/etc/etcd/ca.crt
ETCD_PEER_CLIENT_CERT_AUTH=true
ETCD_PEER_CERT_FILE=/etc/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/etcd/peer.key
#ETCD_PEER_AUTO_TLS=false

#[logging]
ETCD_DEBUG=False

#[profiling]
#ETCD_ENABLE_PPROF=false
#ETCD_METRICS=basic
#
[auth]
ETCD_AUTH_TOKEN=simple

Examples

>>> conf.get('auth', 'ETCD_AUTH_TOKEN') == 'simple'
True
>>> conf.has_option('member', 'ETCD_NAME')
True
class insights.parsers.etcd_conf.EtcdConf(context)[source]

Bases: insights.core.IniConfigFile

Class for etcd.conf file content.

Ethtool parsers

Classes to parse ethtool command information.

The interface information for a machine is stored as lists. Each interface is accessed by iterating through the shared parser list.

The interface classes all provide the following properties:

  • iface and ifname: the interface name (derived from the output file).

  • data: the data for that interface

Parsers provided by this module include:

CoalescingInfo - command /sbin/ethtool -c {interface}

Driver - command /sbin/ethtool -i {interface}

Ethtool - command /sbin/ethtool {interface}

Features - command /sbin/ethtool -k {interface}

Pause - command /sbin/ethtool -a {interface}

Ring - command /sbin/ethtool -g {interface}

Statistics - command /sbin/ethtool -S {interface}

TimeStamp - command /sbin/ethtool -T {interface}

class insights.parsers.ethtool.CoalescingInfo(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse information for the ethtool -c command.

The parsing is fairly similar to other ethtool parsers - the interface name is available as the ifname and iface properties, and the data about the coalescing information is stored in the data property as a dictionary. The one difference is the ‘Adaptive RX’ data, which is stored as two keys - ‘adaptive-rx’ and ‘adaptive-tx’, for RX and TX respectively. Both these return a boolean for whether the respective state equals ‘on’.

Otherwise, all values are made available as keys in the data dictionary, and as properties with the hyphen transmuted to an underscore - e.g. obj.data['tx-usecs'] is available as obj.tx_usecs.
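A minimal sketch of the per-line handling described above, under the assumption that data lines are `key: value` pairs (`parse_coalesce_line` is a hypothetical helper, not the parser's own code):

```python
def parse_coalesce_line(line, data):
    """Handle one data line of `ethtool -c` output.

    The 'Adaptive RX: off  TX: off' line becomes two boolean keys;
    every other 'key: value' data line is stored as an integer.
    """
    if line.startswith('Adaptive RX:'):
        parts = line.split()   # ['Adaptive', 'RX:', 'off', 'TX:', 'off']
        data['adaptive-rx'] = parts[2] == 'on'
        data['adaptive-tx'] = parts[4] == 'on'
    elif ':' in line:
        key, _, value = line.partition(':')
        data[key.strip()] = int(value)

data = {}
parse_coalesce_line('Adaptive RX: off  TX: off', data)
parse_coalesce_line('rx-usecs: 20', data)
print(data)
# {'adaptive-rx': False, 'adaptive-tx': False, 'rx-usecs': 20}
```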

data

Dictionary of keys with values in a list.

Type

dict

iface

Interface name.

Type

str

Sample input for /sbin/ethtool -c eth0:

Coalesce parameters for eth0:
Adaptive RX: off  TX: off
stats-block-usecs: 0
sample-interval: 0
pkt-rate-low: 0
pkt-rate-high: 0

rx-usecs: 20
rx-frames: 5
rx-usecs-irq: 0
rx-frames-irq: 5

tx-usecs: 72
tx-frames: 53
tx-usecs-irq: 0
tx-frames-irq: 5

rx-usecs-low: 0
rx-frame-low: 0
tx-usecs-low: 0
tx-frame-low: 0

rx-usecs-high: 0
rx-frame-high: 0
tx-usecs-high: 0
tx-frame-high: 0

Examples

>>> len(coalesce) # All interfaces in a list
1
>>> type(coalesce[0])
<class 'insights.parsers.ethtool.CoalescingInfo'>
>>> eth0 = coalesce[0] # Would normally iterate through interfaces
>>> eth0.iface
'eth0'
>>> eth0.ifname
'eth0'
>>> eth0.data['adaptive-rx'] # Old-style accessor
False
>>> eth0.adaptive_rx # New-style accessor
False
>>> eth0.rx_usecs # Note integer value
20
property ifname

the interface name

Type

(str)

parse_content(content)[source]

Parse the output of ethtool -c into a dictionary.

If ethtool -c outputs an error or could not get the coalescing information for the device, the “iface” property will be set but the data dictionary will be blank.

class insights.parsers.ethtool.Driver(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse information for the ethtool -i command.

All the ethtool -i outputs are stored as a list, in no particular order.

Each driver is stored as a dictionary in the data property. If the key starts with ‘supports’, then the value is a boolean test of whether the string is ‘yes’. If the value is not given on the string (e.g. ‘bus-info:’), the value is set to None.

All data is also set as attributes of the object with the attribute name being the key name with hyphens replaced with underscores.
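The rules above can be sketched with a hypothetical helper (not the parser's own code):

```python
def parse_driver_line(line):
    """Parse one `ethtool -i` line: 'supports-*' keys become booleans
    ('yes' -> True), and a missing value (e.g. 'bus-info:') becomes None.
    """
    key, _, value = line.partition(':')
    key, value = key.strip(), value.strip()
    if key.startswith('supports'):
        return key, value == 'yes'
    return key, value if value else None

print(parse_driver_line('driver: bonding'))          # ('driver', 'bonding')
print(parse_driver_line('bus-info:'))                # ('bus-info', None)
print(parse_driver_line('supports-statistics: no'))  # ('supports-statistics', False)
```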

data

Dictionary of keys with values in a list.

Type

dict

iface

Interface name.

Type

str

driver

The driver providing the interface.

Type

str

version

The version of the interface driver.

Type

str

firmware_version

The firmware version of the interface.

Type

str

supports_statistics

Does the interface support statistics gathering?

Type

bool

supports_test

Does the interface support internal self-tests?

Type

bool

supports_eeprom_access

Does the interface support access to the EEPROM?

Type

bool

supports_register_dump

Does the interface support dumping the internal registers?

Type

bool

supports_priv_flags

Does the interface support use of privileged flags?

Type

bool

Sample input for /sbin/ethtool -i bond0:

driver: bonding
version: 3.6.0
firmware-version: 2
bus-info:
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no

Examples
>>> len(interfaces) # All interfaces in a list
1
>>> type(interfaces[0])
<class 'insights.parsers.ethtool.Driver'>
>>> bond0 = interfaces[0] # Would normally iterate through interfaces
>>> bond0.iface
'bond0'
>>> bond0.ifname
'bond0'
>>> bond0.data['driver'] # Old-style access
'bonding'
>>> bond0.driver # New-style access
'bonding'
>>> hasattr(bond0, 'bus_info')
True
>>> bond0.bus_info is None
True
>>> bond0.supports_statistics
False
property ifname

the interface name

Type

(str)

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.ethtool.Ethtool(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parses output of ethtool command.

Raises

ParseException -- Raised when any problem parsing the command output.

data

Dictionary of keys with values in a list.

Type

dict

iface

Interface name.

Type

str

supported_link_modes

A list of the ‘Supported link modes’ values, split into individual words.

Type

list

advertised_link_modes

A list of the ‘Advertised link modes’ values, split into individual words.

Type

list

supported_ports

A list of the ‘Supported ports’ values, split into individual words.

Type

list

Sample input:

Settings for eth0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                               100baseT/Half 100baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                             100baseT/Half 100baseT/Full
        Link partner advertised pause frame use: Symmetric
        Link partner advertised auto-negotiation: No
        Speed: 100Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 32
        Transceiver: internal
        Auto-negotiation: on
Cannot get wake-on-lan settings: Operation not permitted
        Current message level: 0x00000007 (7)
                               drv probe link
Cannot get link status: Operation not permitted

For historic reasons, values drawn from the data are stored as lists, with each item being the value on one line.

Examples

>>> len(ethers) # All interfaces in a list
1
>>> type(ethers[0])
<class 'insights.parsers.ethtool.Ethtool'>
>>> ethinfo = ethers[0] # Would normally iterate through interfaces
>>> ethinfo.ifname
'eth0'
>>> ethinfo.speed
['100Mb/s']
>>> ethinfo.link_detected
False
>>> 'Cannot get link status' in ethinfo.data
True
>>> ethinfo.data['Cannot get link status']  # Dictionary for all data
['Operation not permitted']
>>> ethinfo.data['Supported pause frame use']
['No']
>>> ethinfo.data['PHYAD']  # Values as lists of strings for historic reasons
['32']
>>> ethinfo.supported_link_modes  # This is collected across multiple lines and split
['10baseT/Half', '10baseT/Full', '100baseT/Half', '100baseT/Full']
>>> ethinfo.advertised_link_modes
['10baseT/Half', '10baseT/Full', '100baseT/Half', '100baseT/Full']
>>> ethinfo.supported_ports  # This is converted to a list of strings
['TP', 'MII']
property ifname

Return the name of the network interface in the content.

Type

str

property link_detected

Returns the field in Link detected.

Type

boolean

parse_content(content)[source]

This method must be implemented by classes based on this class.

property speed

Return field in Speed.

Type

list (str)

class insights.parsers.ethtool.Features(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

Parse information for the ethtool -k command.

Features are stored as a flat set of key: value pairs, with no hierarchy that the indentation of the input might imply. This means that the output below provides data for both ‘tx-checksumming’ and ‘tx-checksum-ipv4’.

Each key stores a two-key dictionary:

  • ‘on’ (boolean) - whether the value (before any ‘[fixed]’) is ‘on’.

  • ‘fixed’ (boolean) - whether the value contains ‘fixed’.
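Decoding a single feature line into the two booleans described above can be sketched like this (a minimal reimplementation, not the parser's actual code):

```python
def decode_feature(line):
    # Split "tx-checksumming: on [fixed]" into its key and the two
    # booleans described above; a sketch, not the real parser.
    key, _, value = line.strip().partition(": ")
    return key, {"on": value.startswith("on"), "fixed": "fixed" in value}
```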

data

Dictionary of keys with values in a list.

Type

dict

iface

Interface name.

Type

str

Sample input for /sbin/ethtool -k bond0:

Features for bond0:
rx-checksumming: off [fixed]
tx-checksumming: on
    tx-checksum-ipv4: off [fixed]
    tx-checksum-unneeded: on [fixed]
    tx-checksum-ip-generic: off [fixed]
    tx-checksum-ipv6: off [fixed]
    tx-checksum-fcoe-crc: off [fixed]
    tx-checksum-sctp: off [fixed]
scatter-gather: on
    tx-scatter-gather: on [fixed]
    tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
    tx-tcp-segmentation: on [fixed]
    tx-tcp-ecn-segmentation: on [fixed]
    tx-tcp6-segmentation: on [fixed]
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: off [requested on]
generic-receive-offload: on
large-receive-offload: on
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: off
highdma: on [fixed]
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: on [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: on [fixed]
tx-udp_tnl-segmentation: on [fixed]
fcoe-mtu: off [fixed]
loopback: off [fixed]

Examples

>>> len(features) # All interfaces in a list
1
>>> type(features[0])
<class 'insights.parsers.ethtool.Features'>
>>> bond0 = features[0] # Would normally iterate through interfaces
>>> bond0.iface
'bond0'
>>> bond0.ifname
'bond0'
>>> bond0.data['rx-vlan-offload']['on'] # Traditional access
True
>>> bond0.data['rx-vlan-offload']['fixed']
False
>>> bond0.data['tx-checksum-sctp']['on']
False
>>> bond0.data['tx-checksum-sctp']['fixed']
True
>>> bond0.is_on('ntuple-filters')
False
>>> bond0.is_on('large-receive-offload')
True
>>> bond0.is_fixed('receive-hashing')
False
>>> bond0.is_fixed('fcoe-mtu')
True
property ifname

the interface name

Type

(str)

is_fixed(feature)[source]

(bool): Does this feature exist and is it fixed?

is_on(feature)[source]

(bool): Does this feature exist and is it on?

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.ethtool.Pause(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse information for the ethtool -a command.

Each parameter in the input is stored as a key in a dictionary, with the value being whether the found string equals ‘on’.

data

Dictionary of keys with values in a list.

Type

dict

iface

Interface name.

Type

str

autonegotiate

Is autonegotiate present and set to ‘on’?

Type

bool

rx

Is receive pausing present and set to ‘on’?

Type

bool

tx

Is transmit pausing present and set to ‘on’?

Type

bool

rx_negotiated

Is receive pause autonegotiate present and set to ‘on’?

Type

bool

tx_negotiated

Is transmit pause autonegotiate present and set to ‘on’?

Type

bool

Sample input from /sbin/ethtool -a eth0:

Pause parameters for eth0:
Autonegotiate:  on
RX:             on
TX:             on
RX negotiated:  off
TX negotiated:  off

Examples

>>> len(pause) # All interfaces in a list
1
>>> type(pause[0])
<class 'insights.parsers.ethtool.Pause'>
>>> eth0 = pause[0] # Would normally iterate through interfaces
>>> eth0.iface
'eth0'
>>> eth0.ifname
'eth0'
>>> eth0.data['RX'] # Old-style accessor
True
>>> eth0.autonegotiate # New-style accessor
True
>>> eth0.rx_negotiated
False
parse_content(content)[source]

Return ethtool -a result as a dict.

If ethtool -a outputs an error or could not get the pause state for the device, the “iface” property will be set but the data dictionary will be blank and all properties will return False.

class insights.parsers.ethtool.Ring(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse information for the ethtool -g command.

Besides the standard iface and ifname parameters, and the data property where everything is also available, the ring settings are accessed via two attributes: max and current. Within each, the interface settings are available as four attributes: rx, rx_mini, rx_jumbo and tx.
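The max/current grouping could be sketched as follows. The Parameters namedtuple mirrors the one documented below; the grouping logic itself is a simplified assumption, not the parser's actual code:

```python
from collections import namedtuple

# Mirrors the Parameters namedtuple documented below.
Parameters = namedtuple("Parameters", ["rx", "rx_mini", "rx_jumbo", "tx"])

def group_ring_output(lines):
    # Collect the four integer values that follow each section header
    # ("Pre-set maximums:" / "Current hardware settings:").
    sections, current = {}, None
    for line in lines:
        if "maximums" in line:
            current = sections.setdefault("max", [])
        elif "Current" in line:
            current = sections.setdefault("current", [])
        elif ":" in line and current is not None:
            current.append(int(line.split(":", 1)[1]))
    return {key: Parameters(*values) for key, values in sections.items()}
```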

data

Dictionary of keys with values in a list.

Type

dict

iface

Interface name.

Type

str

max

Dictionary of maximum ring buffer settings.

Type

dict

current

Dictionary of current ring buffer settings.

Type

dict

Sample input for /sbin/ethtool -g eth0:

Ring parameters for eth0:
Pre-set maximums:
RX:             2047
RX Mini:        0
RX Jumbo:       0
TX:             511
Current hardware settings:
RX:             200
RX Mini:        0
RX Jumbo:       0
TX:             511

Examples

>>> len(ring) # All interfaces in a list
1
>>> type(ring[0])
<class 'insights.parsers.ethtool.Ring'>
>>> eth0 = ring[0] # Would normally iterate through interfaces
>>> eth0.iface
'eth0'
>>> eth0.ifname
'eth0'
>>> eth0.data['max'].rx # Old-style access
2047
>>> eth0.max.rx # New-style access
2047
class Parameters(rx, rx_mini, rx_jumbo, tx)

Bases: tuple

property rx
property rx_jumbo
property rx_mini
property tx
property ifname

Return the name of the network interface in content.

parse_content(content)[source]

Parse ethtool -g info into a dictionary.

class insights.parsers.ethtool.Statistics(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse information for the ethtool -S command.

All values are made available as keys in the data dictionary, and as properties - e.g. obj.data['rx_jabbers'] is available as obj.rx_jabbers.
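The key/value layout lends itself to a short conversion; a sketch, assuming every reported statistic is an integer (not the parser's actual code):

```python
def parse_nic_statistics(lines):
    # Turn "     rx_octets: 808488730" lines into {name: int} pairs,
    # skipping the "NIC statistics:" banner, which has no value.
    return {
        key.strip(): int(value)
        for key, _, value in (line.partition(":") for line in lines)
        if value.strip().isdigit()
    }
```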

data

Dictionary of keys with values in a list.

Type

dict

iface

Interface name.

Type

str

Sample partial input for /sbin/ethtool -S eth0:

NIC statistics:
     rx_octets: 808488730
     rx_fragments: 0
     rx_ucast_packets: 1510830
     rx_mcast_packets: 678653
     rx_bcast_packets: 9921
     rx_fcs_errors: 0
     rx_align_errors: 0
     rx_xon_pause_rcvd: 0
     rx_xoff_pause_rcvd: 0
     rx_mac_ctrl_rcvd: 0
     rx_xoff_entered: 0
     rx_frame_too_long_errors: 0
     rx_jabbers: 0

Examples

>>> len(stats) # All interfaces in a list
1
>>> type(stats[0])
<class 'insights.parsers.ethtool.Statistics'>
>>> eth0 = stats[0] # Would normally iterate through interfaces
>>> eth0.iface
'eth0'
>>> eth0.ifname
'eth0'
>>> eth0.data['rx_octets']  # Data as integers
808488730
>>> eth0.data['rx_fcs_errors']
0
property ifname

Return the name of the network interface in content.

parse_content(content)[source]

Parse the output of ethtool -S.

search(pattern, flags=0)[source]

Finds all the parameters matching a given regular expression.

Parameters
  • pattern (raw) -- A regular expression

  • flags (int) -- Regular expression flags summed from re constants.

Returns

A dictionary of the key/value pairs where the key matches the given regular expression. An empty dictionary is returned if no keys matched.

Return type

(dict)
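The behaviour described for search can be approximated with re.search over the keys of the data dictionary (a sketch, not the parser's actual code):

```python
import re

def search_stats(data, pattern, flags=0):
    # Return the key/value pairs whose key matches the pattern,
    # or an empty dict when nothing matches.
    return {k: v for k, v in data.items() if re.search(pattern, k, flags)}
```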

class insights.parsers.ethtool.TimeStamp(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse information for the ethtool -T command.

Each parameter in the input is stored as a key in a dictionary.

data

Dictionary of keys with values.

Type

dict

ifname

Interface name.

Type

str

Raises

ParseException -- Raised when there is any problem parsing the command output.

Sample partial input for /sbin/ethtool -T eno1:

Time stamping parameters for eno1:

Capabilities:
    hardware-transmit     (SOF_TIMESTAMPING_TX_HARDWARE)
    software-transmit     (SOF_TIMESTAMPING_TX_SOFTWARE)
    hardware-receive      (SOF_TIMESTAMPING_RX_HARDWARE)
    software-receive      (SOF_TIMESTAMPING_RX_SOFTWARE)
    software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
    hardware-raw-clock    (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
    off                   (HWTSTAMP_TX_OFF)
    on                    (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
    none                  (HWTSTAMP_FILTER_NONE)
    all                   (HWTSTAMP_FILTER_ALL)
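The nesting shown in the examples below could be produced with a sketch like this (a simplified reimplementation; section and key names come from the sample above):

```python
def parse_timestamp_output(lines):
    # Indented "name (CONSTANT)" lines are filed under the preceding
    # section header; unindented "Key: value" lines are stored directly.
    data, section = {}, None
    for line in lines:
        if not line.strip():
            continue
        if line.startswith((" ", "\t")) and section:
            name, _, rest = line.strip().partition("(")
            data[section][name.strip()] = rest.rstrip(")")
        elif line.rstrip().endswith(":"):
            section = line.strip().rstrip(":")
            data[section] = {}
        elif ":" in line:
            key, _, value = line.partition(":")
            data[key.strip()] = value.strip()
    return data
```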

Examples

>>> len(timestamp)
1
>>> type(timestamp[0])
<class 'insights.parsers.ethtool.TimeStamp'>
>>> eno1 = timestamp[0] # Would normally iterate through interfaces
>>> eno1.ifname
'eno1'
>>> eno1.data['Capabilities']['hardware-transmit']
'SOF_TIMESTAMPING_TX_HARDWARE'
>>> eno1.data['Capabilities']['hardware-raw-clock']
'SOF_TIMESTAMPING_RAW_HARDWARE'
>>> eno1.data['PTP Hardware Clock']
'0'
>>> eno1.data['Hardware Transmit Timestamp Modes']['off']
'HWTSTAMP_TX_OFF'
>>> eno1.data['Hardware Receive Filter Modes']['all']
'HWTSTAMP_FILTER_ALL'
property ifname

the interface name

Type

(str)

parse_content(content)[source]

This method must be implemented by classes based on this class.

insights.parsers.ethtool.extract_iface_name_from_content(content)[source]

Extract the interface name from the third item in the content, delimited by spaces, up to its second-last character. For example, this transmutes Features for bond0: to bond0.
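In other words, the behaviour amounts to something like this (a sketch of the description above, not the function's actual code):

```python
def iface_name_from_content(first_line):
    # Third space-delimited item, minus its trailing character
    # (the colon); e.g. "Features for bond0:" -> "bond0".
    return first_line.split()[2][:-1]
```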

insights.parsers.ethtool.extract_iface_name_from_path(path, name)[source]

Extract the ‘real’ interface name from the path name. This puts the ‘@’ back into the name in place of the underscore wherever the name contains a ‘.’, ‘macvtap’ or ‘macvlan’.

Examples:

real name          path name
bond0.104@bond0    bond0.104_bond0
__tmp1111          __tmp1111
macvtap@bond0      macvlan_bond0
prod_bond          prod_bond
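A rough reimplementation of that substitution, inferred from the examples above (the exact rules are an assumption, not the function's actual code):

```python
def restore_at_sign(name):
    # Put the '@' back in place of the last underscore, but only for
    # names that contain a '.' or look like macvtap/macvlan devices;
    # other names (e.g. prod_bond) are left untouched.
    if ("." in name or "macvtap" in name or "macvlan" in name) and "_" in name:
        return "@".join(name.rsplit("_", 1))
    return name
```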

Facter - command /usr/bin/facter

Module for parsing the output of the facter command. Data is available as a dict for each line of the output.

Sample input data for the facter command looks like:

architecture => x86_64
bios_vendor => Phoenix Technologies LTD
bios_version => 6.00
domain => example.com
facterversion => 1.7.6
filesystems => btrfs,ext2,ext3,ext4,msdos,vfat,xfs
fqdn => plin-w1rhns01.example.com
hostname => plin-w1rhns01
ipaddress => 172.23.27.50
ipaddress_ens192 => 172.23.27.50
ipaddress_lo => 127.0.0.1
is_virtual => true
kernel => Linux
kernelmajversion => 3.10
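The key => value lines above can be parsed with a simple split (a sketch, not the parser's actual code):

```python
def parse_facter_output(lines):
    # Each fact sits on its own "name => value" line.
    facts = {}
    for line in lines:
        key, sep, value = line.partition(" => ")
        if sep:
            facts[key.strip()] = value.strip()
    return facts
```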

Examples

>>> facts_info = shared[facter]
>>> facts_info.kernelmajversion
'3.10'
>>> facts_info.domain
'example.com'
>>> facts_info.architecture
'x86_64'
class insights.parsers.facter.Facter(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

Class for parsing facter command output.

Attributes are the facts in each line of the command output. All facts may be accessed as obj.fact_name. The get method is also provided to access any facts.

parse_content(content)[source]

Main parsing class method which stores all interesting data from the content.

Parameters

content (context.content) -- Parser context content

Returns

dictionary with parsed data

Return type

dict

FCMatch - command /bin/fc-match -sv 'sans:regular:roman' family fontformat

This command gets the fonts information in the current system.

Typical output of this command is:

Pattern has 2 elts (size 16)
    family: "DejaVu Sans"(s)
    fontformat: "TrueType"(w)

Pattern has 2 elts (size 16)
    family: "DejaVu Sans"(s)
    fontformat: "TrueType"(w)

Pattern has 2 elts (size 16)
    family: "DejaVu Sans"(s)
    fontformat: "TrueType"(w)

Pattern has 2 elts (size 16)
    family: "Nimbus Sans L"(s)
    fontformat: "Type 1"(s)

Pattern has 2 elts (size 16)
    family: "Standard Symbols L"(s)
    fontformat: "Type 1"(s)

Note

Because a bug on RHEL6 can cause a segfault when the fc-match command is executed, we only parse the command output on RHEL7 until the bug is fixed.

Examples

>>> fc_match = shared[FCMatch]
>>> fc_match_info = fc_match[0]
>>> fc_match_info
{'fontformat': '"TrueType"(w)', 'family': '"DejaVu Sans"(s)'}
class insights.parsers.fc_match.FCMatch(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class to parse the command /bin/fc-match -sv ‘sans:regular:roman’ family fontformat. This object provides the __getitem__ and __iter__ methods, allowing it to be used as a list to iterate over the data, e.g.:

>>> fc_match = shared[FCMatch]
>>> for item in fc_match:
...     print(item["family"])
>>> fc_match_info = fc_match[0]
parse_content(content)[source]

This method must be implemented by classes based on this class.

FcoeadmI - command fcoeadm -i

Module for parsing the output of command fcoeadm -i. The bulk of the content is split on the colon and keys are kept as is. Lines beginning with ‘Description’, ‘Revision’, ‘Manufacturer’, ‘Serial Number’, ‘Driver’, ‘Number of Ports’ are kept in a dictionary keyed under each of these names. Lines beginning with ‘Symbolic Name’, ‘OS Device Name’, ‘Node Name’, ‘Port Name’, ‘FabricName’, ‘Speed’, ‘Supported Speed’, ‘MaxFrameSize’, ‘FC-ID (Port ID)’, ‘State’ are kept in a sub-dictionary keyed under each of these names. All the sub-dictionaries are kept in a list keyed on ‘Interfaces’.

class insights.parsers.fcoeadm_i.FcoeadmI(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, dict

Class for parsing fcoeadm -i command output.

Typical output of command fcoeadm -i looks like:

Description:      NetXtreme II BCM57810 10 Gigabit Ethernet
Revision:         10
Manufacturer:     Broadcom Corporation
Serial Number:    2C44FD8F4418
Driver:           bnx2x 1.712.30-0
Number of Ports:  1

    Symbolic Name:     bnx2fc (QLogic BCM57810) v2.9.6 over eth8.0-fcoe
    OS Device Name:    host6
    Node Name:         0x50060B0000C26237
    Port Name:         0x50060B0000C26236
    FabricName:        0x0000000000000000
    Speed:             Unknown
    Supported Speed:   1 Gbit, 10 Gbit
    MaxFrameSize:      2048
    FC-ID (Port ID):   0xFFFFFFFF
    State:             Online

    Symbolic Name:     bnx2fc (QLogic BCM57810) v2.9.6 over eth6.0-fcoe
    OS Device Name:    host7
    Node Name:         0x50060B0000C26235
    Port Name:         0x50060B0000C26234
    FabricName:        0x0000000000000000
    Speed:             Unknown
    Supported Speed:   1 Gbit, 10 Gbit
    MaxFrameSize:      2048
    FC-ID (Port ID):   0xFFFFFFFF
    State:             Offline
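The split described at the top of this section could be sketched like this (a simplified reimplementation; the real parser also derives iface_list, nic_list and the other attributes):

```python
def parse_fcoeadm_i(lines):
    # Unindented "Key: value" lines describe the adapter; indented
    # lines describe an interface, with each "Symbolic Name" starting
    # a new entry in the "Interfaces" list.
    data, interfaces, current = {}, [], None
    for line in lines:
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if line.startswith(" "):
            if key == "Symbolic Name":
                current = {}
                interfaces.append(current)
            if current is not None:
                current[key] = value
        else:
            data[key] = value
    data["Interfaces"] = interfaces
    return data
```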

Examples

>>> type(fi)
<class 'insights.parsers.fcoeadm_i.FcoeadmI'>
>>> fi.fcoe["Description"]
'NetXtreme II BCM57810 10 Gigabit Ethernet'
>>> fi["Description"]
'NetXtreme II BCM57810 10 Gigabit Ethernet'
>>> fi.fcoe["Driver"]
'bnx2x 1.712.30-0'
>>> fi["Driver"]
'bnx2x 1.712.30-0'
>>> fi.fcoe['Serial Number']
'2C44FD8F4418'
>>> fi['Serial Number']
'2C44FD8F4418'
>>> fi.iface_list
['eth8.0-fcoe', 'eth6.0-fcoe']
>>> fi.nic_list
['eth8', 'eth6']
>>> fi.stat_list
['Online', 'Offline']
>>> fi.get_stat_from_nic('eth6')
'Offline'
>>> fi.get_host_from_nic('eth6')
'host7'
driver_name

Driver name

Type

str

driver_version

Driver version

Type

str

iface_list

FCoE interface names

Type

list

nic_list

Ethernet ports running FCoE interfaces

Type

list

stat_list

FCoE interface(s) status

Type

list

property fcoe

The result parsed of ‘/usr/sbin/fcoeadm -i’

Type

(dict)

get_host_from_nic(nic)[source]

Get ‘OS Device Name’ for the specified ethernet port.

Parameters

nic (str): Ethernet port provided by the FCoE adapter

Returns

The fcoe host, displayed as ‘OS Device Name’.

Return type

str

Raises

ValueError -- When nic is not valid fcoe port

get_stat_from_nic(nic)[source]

Get ‘State’ of fcoe interface created on specified ethernet port.

Parameters

nic (str): Ethernet port provided by the FCoE adapter

Returns

The status of the fcoe interface created on the specified ethernet port.

Return type

str

Raises

ValueError -- When nic is not valid fcoe port

parse_content(content)[source]

This method must be implemented by classes based on this class.

FindmntPropagation - command findmnt -lo+PROPAGATION

This module provides status of propagation flag of filesystems using the output of command findmnt -lo+PROPAGATION.

class insights.parsers.findmnt.FindmntPropagation(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse output of findmnt -lo+PROPAGATION.

Typical output of findmnt -lo+PROPAGATION command is:

TARGET                                                          SOURCE                                FSTYPE          OPTIONS                                                                       PROPAGATION
/sys                                                            sysfs                                 sysfs           rw,nosuid,nodev,noexec,relatime,seclabel                                      shared
/proc                                                           proc                                  proc            rw,nosuid,nodev,noexec,relatime                                               shared
/dev                                                            devtmpfs                              devtmpfs        rw,nosuid,seclabel,size=8035516k,nr_inodes=2008879,mode=755                   shared
/sys/kernel/security                                            securityfs                            securityfs      rw,nosuid,nodev,noexec,relatime                                               shared
/dev/shm                                                        tmpfs                                 tmpfs           rw,nosuid,nodev,seclabel                                                      shared
/run/netns                                                      tmpfs[/netns]                         tmpfs           rw,nosuid,nodev,seclabel,mode=755                                             shared
/netns                                                          tmpfs[/netns]                         tmpfs           rw,nosuid,nodev,seclabel,mode=755                                             shared
/run/netns/qdhcp-08f32dab-927e-4a61-933d-57d425827b57           proc                                  proc            rw,nosuid,nodev,noexec,relatime                                               shared
/run/netns/qdhcp-fd138c0a-5ec7-44f8-88df-0501c4c7a968           proc                                  proc            rw,nosuid,nodev,noexec,relatime                                               shared
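Since none of the columns in this output contain embedded spaces, a whitespace split against the header row is enough to sketch both the parsing and the target_startswith lookup (a simplification, not the parser's actual code):

```python
def parse_findmnt(lines):
    # Pair each row's whitespace-separated fields with the
    # lower-cased header names.
    header = [h.lower() for h in lines[0].split()]
    return [dict(zip(header, line.split())) for line in lines[1:] if line.strip()]

def target_startswith(rows, prefix):
    # Return every row whose TARGET begins with the given prefix.
    return [row for row in rows if row["target"].startswith(prefix)]
```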

Examples

>>> output.search_target('shm') == [{'target': '/dev/shm', 'source': 'tmpfs', 'fstype': 'tmpfs', 'options': 'rw,nosuid,nodev,seclabel', 'propagation': 'shared'}]
True
>>> len(output.target_startswith('/run/netns')) == 3
True
>>> output.target_startswith('/run/netns')[0].get('propagation', None) == 'shared'
True
cols

List of key value pair derived from the command.

Type

list

keywords

keywords(or TARGETs) present in the command

Type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.

search_target(target)[source]

Similar to __contains__() but returns the list of targets.

Example

>>> output.search_target('shm') == [{'target': '/dev/shm', 'source': 'tmpfs', 'fstype': 'tmpfs', 'options': 'rw,nosuid,nodev,seclabel', 'propagation': 'shared'}]
True
target_startswith(target)[source]

Return all the targets that start with ‘target’. Useful for finding mount points.

Example

>>> len(output.target_startswith('/run/netns')) == 3
True

FirewallDConf - file /etc/firewalld/firewalld.conf

The FirewallDConf class parses the file /etc/firewalld/firewalld.conf and returns a dict containing the firewall configuration.

Examples

>>> type(firewalld)
<class 'insights.parsers.firewall_config.FirewallDConf'>
>>> 'DefaultZone' in firewalld
True
>>> firewalld['DefaultZone']
'public'
class insights.parsers.firewall_config.FirewallDConf(context)[source]

Bases: insights.core.Parser, dict

Class for parsing /etc/firewalld/firewalld.conf file.

parse_content(content)[source]

This method must be implemented by classes based on this class.

Foreman and Candlepin logs

Module for parsing the log files in foreman-debug archive

Note

Please refer to its super-class insights.core.LogFileOutput for usage information.

Parsers provided by this module:

CandlepinErrorLog - file sos_commands/foreman/foreman-debug/var/log/candlepin/error.log

CandlepinLog - file /var/log/candlepin/candlepin.log

ProductionLog - file /var/log/foreman/production.log

ProxyLog - file /var/log/foreman-proxy/proxy.log

SatelliteLog - file /var/log/foreman-installer/satellite.log

class insights.parsers.foreman_log.CandlepinErrorLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing candlepin/error.log file.

Sample log contents:

2016-09-07 13:56:49,001 [=, org=] WARN  org.apache.qpid.transport.network.security.ssl.SSLUtil - Exception received while trying to verify hostname
2016-09-07 14:07:33,735 [=, org=] WARN  org.apache.qpid.transport.network.security.ssl.SSLUtil - Exception received while trying to verify hostname
2016-09-07 14:09:55,173 [=, org=] WARN  org.apache.qpid.transport.network.security.ssl.SSLUtil - Exception received while trying to verify hostname
2016-09-07 15:20:33,796 [=, org=] WARN  org.apache.qpid.transport.network.security.ssl.SSLUtil - Exception received while trying to verify hostname
2016-09-07 15:27:34,367 [=, org=] WARN  org.apache.qpid.transport.network.security.ssl.SSLUtil - Exception received while trying to verify hostname
2016-09-07 16:49:24,650 [=, org=] WARN  org.apache.qpid.transport.network.security.ssl.SSLUtil - Exception received while trying to verify hostname
2016-09-07 18:07:53,688 [req=d9dc3cfd-abf7-485e-b1eb-e1e28e4b0f28, org=org_ray] ERROR org.candlepin.sync.Importer - Conflicts occurred during import that were
2016-09-07 18:07:53,690 [req=d9dc3cfd-abf7-485e-b1eb-e1e28e4b0f28, org=org_ray] ERROR org.candlepin.sync.Importer - [DISTRIBUTOR_CONFLICT]
2016-09-07 18:07:53,711 [req=d9dc3cfd-abf7-485e-b1eb-e1e28e4b0f28, org=org_ray] ERROR org.candlepin.resource.OwnerResource - Recording import failure
org.candlepin.sync.ImportConflictException: Owner has already imported from another subscription management application.

Examples

>>> candlepin_log = shared[CandlepinErrorLog]
>>> candlepin_log.get('req=d9dc3cfd-abf7-485e-b1eb-e1e28e4b0f28')[0]['raw_message']
'2016-09-07 18:07:53,688 [req=d9dc3cfd-abf7-485e-b1eb-e1e28e4b0f28, org=org_ray] ERROR org.candlepin.sync.Importer - Conflicts occurred during import that were'
>>> candlepin_log.get_after(datetime(2016, 9, 7, 16, 0, 0))[0]['raw_message']
'2016-09-07 16:49:24,650 [=, org=] WARN  org.apache.qpid.transport.network.security.ssl.SSLUtil - Exception received while trying to verify hostname'
class insights.parsers.foreman_log.CandlepinLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing candlepin/candlepin.log file.

class insights.parsers.foreman_log.ForemanSSLAccessLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing var/log/httpd/foreman-ssl_access_ssl.log file.

Sample log contents:

10.181.73.211 - rhcapkdc.example2.com [27/Mar/2017:13:34:52 -0400] "GET /rhsm/consumers/385e688f-43ad-41b2-9fc7-593942ddec78 HTTP/1.1" 200 10736 "-" "-"
10.181.73.211 - rhcapkdc.example2.com [27/Mar/2017:13:34:52 -0400] "GET /rhsm/status HTTP/1.1" 200 263 "-" "-"
10.185.73.33 - 8a31cd915917666001591d6fb44602a7 [27/Mar/2017:13:34:52 -0400] "GET /pulp/repos/Acme_Inc/Library/RHEL7_Sat_Capsule_Servers/content/dist/rhel/server/7/7Server/x86_64/os/repodata/repomd.xml HTTP/1.1" 200 2018 "-" "urlgrabber/3.10 yum/3.4.3"
10.181.73.211 - rhcapkdc.example2.com [27/Mar/2017:13:34:52 -0400] "GET /rhsm/consumers/4f8a39d0-38b6-4663-8b7e-03368be4d3ab/owner HTTP/1.1" 200 5159 "-"
10.181.73.211 - rhcapkdc.example2.com [27/Mar/2017:13:34:52 -0400] "GET /rhsm/consumers/385e688f-43ad-41b2-9fc7-593942ddec78/compliance HTTP/1.1" 200 5527
10.181.73.211 - rhcapkdc.example2.com [27/Mar/2017:13:34:52 -0400] "GET /rhsm/consumers/4f8a39d0-38b6-4663-8b7e-03368be4d3ab HTTP/1.1" 200 10695 "-" "-"

Examples

>>> foreman_ssl_acess_log = shared[ForemanSSLAccessLog]
>>> foreman_ssl_acess_log.get('req=d9dc3cfd-abf7-485e-b1eb-e1e28e4b0f28')
class insights.parsers.foreman_log.ProductionLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing foreman/production.log file.

class insights.parsers.foreman_log.ProxyLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing foreman-proxy/proxy.log file.

class insights.parsers.foreman_log.SatelliteLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing foreman-installer/satellite.log file.

ForemanProxyConf - file /etc/foreman-proxy/settings.yml

This module provides parsing for FOREMAN-PROXY configuration files. ForemanProxyConf is a parser for /etc/foreman-proxy/settings.yml files.

Typical output of the foreman_proxy_conf is:

# Comment line
:settings_directory: /etc/foreman-proxy/settings.d
:ssl_ca_file: /etc/foreman-proxy/ssl_ca.pem
:ssl_certificate: /etc/foreman-proxy/ssl_cert.pem
:ssl_private_key: /etc/foreman-proxy/ssl_key.pem
:trusted_hosts:
  - xxx.m2m.xxx
  - xxx.m2m.xxx
:foreman_url: https://xxx.m2m.xxx

Note

The examples in this module may be executed with the following command:

python -m insights.parsers.foreman_proxy_conf

Examples

>>> settings_yml_input_data = '''
... # Comment line
... :settings_directory: /etc/foreman-proxy/settings.d
... :ssl_ca_file: /etc/foreman-proxy/ssl_ca.pem
... :ssl_certificate: /etc/foreman-proxy/ssl_cert.pem
... :ssl_private_key: /etc/foreman-proxy/ssl_key.pem
... :trusted_hosts:
...   - xxx.m2m.xxx
...   - xxx.m2m.xxx
... :foreman_url: https://xxx.m2m.xxx
... '''.strip()
>>> from insights.tests import context_wrap
>>> setting_dic = ForemanProxyConf(context_wrap(settings_yml_input_data, path='/etc/foreman-proxy/settings.yml'))
>>> setting_dic.data[':settings_directory']
'/etc/foreman-proxy/settings.d'
>>> setting_dic.data[':ssl_ca_file']
'/etc/foreman-proxy/ssl_ca.pem'
>>> setting_dic.data[':ssl_private_key']
'/etc/foreman-proxy/ssl_key.pem'
>>> setting_dic.data[':foreman_url']
'https://xxx.m2m.xxx'
>>> setting_dic.data[':trusted_hosts']
['xxx.m2m.xxx', 'xxx.m2m.xxx']
>>> "xxx.m2m.xxx" in setting_dic.data[':trusted_hosts']
True
class insights.parsers.foreman_proxy_conf.ForemanProxyConf(context)[source]

Bases: insights.core.YAMLParser

Class for parsing the content of foreman_proxy_conf.

Sat6DBMigrateStatus - command foreman-rake db:migrate:status

This parser collects the output of the foreman-rake db:migrate:status command, which checks the status of all the migrations known to Foreman. Each migration has a status, a date code, and a name. These are stored in a list of migrations, with ‘up’ migrations being listed in an up property and migrations with any other status being stored in a down property.

Sample input:

database: foreman

 Status   Migration ID    Migration Name
--------------------------------------------------
   up     20090714132448  Create hosts
   up     20090714132449  Add audits table
   up     20090715143858  Create architectures
   up     20090717025820  Create media
   up     20090718060746  Create domains
   up     20090718064254  Create subnets
   up     20090720134126  Create operatingsystems
   up     20090722140138  Create models
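Each status line splits cleanly into the three fields of the Migration namedtuple documented below (a sketch, not the parser's actual code):

```python
from collections import namedtuple

# Mirrors the Migration namedtuple documented below.
Migration = namedtuple("Migration", ["status", "id", "name"])

def parse_migration_line(line):
    # Two splits: the status, then the numeric ID, then everything
    # remaining as the (possibly multi-word) migration name.
    status, mig_id, name = line.split(None, 2)
    return Migration(status, mig_id, name.strip())
```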

Examples

>>> status = shared[Sat6DBMigrateStatus]
>>> status.database
'foreman'
>>> '20090714132448' in status.migrations
True
>>> '20090714140138' in status.migrations
False
>>> len(status.up)
8
>>> status.down
[]
class insights.parsers.foreman_rake_db_migrate_status.Migration(status, id, name)

Bases: tuple

namedtuple: Stores one migration record

property id
property name
property status
class insights.parsers.foreman_rake_db_migrate_status.Sat6DBMigrateStatus(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse the foreman-rake db:migrate:status command.

database

The name of the database (usually ‘foreman’)

Type

str

migrations

All the migrations, indexed by migration ID.

Type

dict

up

Only the ‘up’ migrations, in order of appearance

Type

list

down

All migrations not listed as ‘up’, in order of appearance

Type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.

ForemanTasksConfig - file /etc/sysconfig/foreman-tasks

class insights.parsers.foreman_tasks_config.ForemanTasksConfig(*args, **kwargs)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Warning

This parser is deprecated, please use insights.parsers.sysconfig.ForemanTasksSysconfig instead.

Parse the foreman-tasks configuration file.

Produces a simple dictionary of keys and values from the configuration file contents, stored in the data attribute. The object also functions as a dictionary itself thanks to the insights.core.LegacyItemAccess mixin class.

Sample configuration file:

FOREMAN_USER=foreman
BUNDLER_EXT_HOME=/usr/share/foreman
RAILS_ENV=production
FOREMAN_LOGGING=warn
FOREMAN_LOGGING_SQL=warn
FOREMAN_TASK_PARAMS="-p foreman"
FOREMAN_LOG_DIR=/var/log/foreman

RUBY_GC_MALLOC_LIMIT=4000100
RUBY_GC_MALLOC_LIMIT_MAX=16000100
RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR=1.1
RUBY_GC_OLDMALLOC_LIMIT=16000100
RUBY_GC_OLDMALLOC_LIMIT_MAX=16000100

#Set the number of executors you want to run
#EXECUTORS_COUNT=1

#Set memory limit for executor process, before it's restarted automatically
#EXECUTOR_MEMORY_LIMIT=2gb

#Set delay before first memory polling to let executor initialize (in sec)
#EXECUTOR_MEMORY_MONITOR_DELAY=7200 #default: 2 hours

#Set memory polling interval, process memory will be checked every N seconds.
#EXECUTOR_MEMORY_MONITOR_INTERVAL=60

Examples

>>> foreman_tasks_config['RAILS_ENV']
'production'
>>> 'AUTO' in foreman_tasks_config
False
parse_content(content)[source]

This method must be implemented by classes based on this class.

freeipa_healthcheck_log - File /var/log/ipa/healthcheck/healthcheck.log

This module provides plugins access to file /var/log/ipa/healthcheck/healthcheck.log

This file contains a list of items, each item representing a check made on the local IPA server. The file can be either single-line or multi-line indented.

Typical content of file /var/log/ipa/healthcheck/healthcheck.log is:

[
  {
    "source": "ipahealthcheck.ipa.roles",
    "check": "IPACRLManagerCheck",
    "result": "SUCCESS",
    "uuid": "1f4177a4-0ddb-4e4d-8258-a5cd5f4638fc",
    "when": "20191203122317Z",
    "duration": "0.002254",
    "kw": {
      "key": "crl_manager",
      "crlgen_enabled": true
    }
  },
  {
    "source": "ipahealthcheck.ipa.roles",
    "check": "IPARenewalMasterCheck",
    "result": "SUCCESS",
    "uuid": "1feb7f99-2e98-4e37-bb52-686896972022",
    "when": "20191203122317Z",
    "duration": "0.018330",
    "kw": {
      "key": "renewal_master",
      "master": true
    }
  },
  {
    "source": "ipahealthcheck.system.filesystemspace",
    "check": "FileSystemSpaceCheck",
    "result": "ERROR",
    "uuid": "90ed8765-6ad7-425c-abbd-b07a652649cb",
    "when": "20191203122221Z",
    "duration": "0.000474",
    "kw": {
      "msg": "/var/log/audit/: free space under threshold: 14 MiB < 512 MiB",
      "store": "/var/log/audit/",
      "free_space": 14,
      "threshold": 512
    }
  }
]

The list of errors can be accessed via the issues property.

Examples

>>> len(healthcheck.issues)
1
>>> healthcheck.issues[0]['check'] == 'FileSystemSpaceCheck'
True
class insights.parsers.freeipa_healthcheck_log.FreeIPAHealthCheckLog(context)[source]

Bases: insights.core.JSONParser

Parses the content of file /var/log/ipa/healthcheck/healthcheck.log.

get_results(source, check)[source]

Given a source and check find and return the result

property issues

non-success results in healthcheck log.

Type

list

FSTab - file /etc/fstab

Parse the /etc/fstab file into a list of lines. Each line is a dictionary of fields, named according to their definitions in man fstab:

  • fs_spec - the device to mount

  • fs_file - the mount point

  • fs_vfstype - the type of file system

  • fs_mntops - the mount options as a dictionary

  • fs_freq - the dump frequency

  • fs_passno - check the filesystem on reboot in this pass number

  • raw_fs_mntops - the mount options as a string

  • raw - the RAW line which is useful to front-end

fs_freq and fs_passno are recorded as integers if found, and zero if not present.

fs_mntops is wrapped as an insights.parsers.mount.MountOpts object. For instance, the option rw in rw,dmode=0500 may be accessed as mnt_row_info.rw with the value True, and dmode may be accessed as mnt_row_info.dmode with the value 0500.

This data, as above, is available in the data property:

  • Wrapped as an FSTabEntry, each column can also be accessed as an attribute with the same name.

The FSTabEntry for each mount point is also available via the FSTab.mounted_on property; the data is the same as that stored in the FSTab.data list.

class insights.parsers.fstab.FSTab(context)[source]

Bases: insights.core.Parser

Parse the content of /etc/fstab.

Typical content of the fstab looks like:

#
# /etc/fstab
# Created by anaconda on Fri May  6 19:51:54 2016
#
/dev/mapper/rhel_hadoop--test--1-root /                       xfs     defaults        0 0
UUID=2c839365-37c7-4bd5-ac47-040fba761735 /boot               xfs     defaults        0 0
/dev/mapper/rhel_hadoop--test--1-home /home                   xfs     defaults        0 0
/dev/mapper/rhel_hadoop--test--1-swap swap                    swap    defaults        0 0

/dev/sdb1 /hdfs/data1 xfs rw,relatime,seclabel,attr2,inode64,noquota 0 0
/dev/sdc1 /hdfs/data2 xfs rw,relatime,seclabel,attr2,inode64,noquota 0 0
/dev/sdd1 /hdfs/data3 xfs rw,relatime,seclabel,attr2,inode64,noquota 0 0

localhost:/ /mnt/hdfs nfs rw,vers=3,proto=tcp,nolock,timeo=600 0 0

/dev/mapper/vg0-lv2     /test1     ext4 defaults,data=writeback     1 1
nfs_hostname.redhat.com:/nfs_share/data     /srv/rdu/cases/000  nfs     ro,defaults,hard,intr,bg,noatime,nodev,nosuid,nfsvers=3,tcp,rsize=32768,wsize=32768     0

Examples

>>> type(fstab)
<class 'insights.parsers.fstab.FSTab'>
>>> len(fstab)
9
>>> fstab.data[0]['fs_spec'] # Note that data is a list not a dict here
'/dev/mapper/rhel_hadoop--test--1-root'
>>> fstab.data[0].fs_spec
'/dev/mapper/rhel_hadoop--test--1-root'
>>> fstab.data[0].raw
'/dev/mapper/rhel_hadoop--test--1-root /                       xfs    defaults        0 0'
>>> fstab.data[0].fs_mntops.defaults
True
>>> 'relatime' in fstab.data[0].fs_mntops
False
>>> fstab.data[0].fs_mntops.get('relatime')
None
>>> fstab.mounted_on['/hdfs/data3'].fs_spec
'/dev/sdd1'
data

a list of parsed fstab entries as FSTabEntry objects.

Type

list

mounted_on

a dictionary of FSTabEntry objects keyed on mount point.

Type

dict

fsspec_of_path(path)[source]

Return the device name if the longest-matched mount point of path is found, else None. If path contains any blanks, either pass it in directly or escape the spaces with a backslash, e.g. ‘/VM TOOLS/cache’ or ‘/VM\ TOOLS/cache’
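The longest-match lookup can be sketched independently of the parser. This is a hypothetical helper (not the real implementation) that assumes a mounted_on-style mapping of mount point to device:

```python
def fsspec_of_path(mounted_on, path):
    """Return the device whose mount point is the longest prefix of path."""
    best = None
    for mount_point, device in mounted_on.items():
        # Normalise to a trailing slash so '/boot' does not match '/boot2'.
        prefix = mount_point.rstrip('/') + '/'
        if (path.rstrip('/') + '/').startswith(prefix):
            if best is None or len(mount_point) > len(best[0]):
                best = (mount_point, device)
    return best[1] if best else None

mounts = {'/': '/dev/sda1', '/boot': '/dev/sda2', '/hdfs/data1': '/dev/sdb1'}
```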

parse_content(content)[source]

Parse each line in the file /etc/fstab.

search(**kwargs)[source]

Search for the given key/value pairs in the data. Please refer to the insights.parsers.keyword_search() function documentation for a more complete description of how to use this.

Fields that can be searched (as per man fstab):

  • fs_spec: the block special or remote filesystem path or label.

  • fs_file: The mount point for the filesystem.

  • fs_vfstype: The file system type.

  • fs_mntops: The mount options. Since this is also a dictionary, this can be searched using __contains - see the examples below.

  • fs_freq: The dump frequency - rarely used.

  • fs_passno: The pass for file system checks - rarely used.

Examples

Search for the root file system:

fstab.search(fs_file='/')

Search for file systems mounted from a LABEL declaration

fstab.search(fs_spec__startswith='LABEL=')

Search for file systems that use the ‘uid’ mount option:

fstab.search(fs_mntops__contains='uid')

Search for XFS file systems using the ‘relatime’ option:

fstab.search(fs_vfstype='xfs', fs_mntops__contains='relatime')

class insights.parsers.fstab.FSTabEntry(data=None)[source]

Bases: insights.parsers.mount.AttributeAsDict

An object representing an entry in /etc/fstab. Each entry contains below fixed attributes:

fs_spec

the device to mount

Type

str

fs_file

the mount point

Type

str

fs_vfstype

the type of file system

Type

str

fs_mntops

the mount options as a insights.parser.mount.MountOpts

Type

dict

fs_freq

the dump frequency

Type

int

fs_passno

check the filesystem on reboot in this pass number

Type

int

raw_fs_mntops

the mount options as a string

Type

str

raw

the RAW line which is useful to front-end

Type

str

GaleraCnf - file /etc/my.cnf.d/galera.cnf

This module provides parsing for the galera configuration of MySQL. The input is the contents of the file /etc/my.cnf.d/galera.cnf. Typical contents of the galera.cnf file looks like this:

[client]
port = 3306
socket = /var/lib/mysql/mysql.sock

[isamchk]
key_buffer_size = 16M

[mysqld]
basedir = /usr
binlog_format = ROW
datadir = /var/lib/mysql
default-storage-engine = innodb
expire_logs_days = 10
innodb_autoinc_lock_mode = 2
innodb_locks_unsafe_for_binlog = 1
key_buffer_size = 16M
log-error = /var/log/mariadb/mariadb.log
max_allowed_packet = 16M
max_binlog_size = 100M
max_connections = 8192
wsrep_max_ws_rows = 131072
wsrep_max_ws_size = 1073741824

[mysqld_safe]
log-error = /var/log/mariadb/mariadb.log
nice = 0
socket = /var/lib/mysql/mysql.sock

[mysqldump]
max_allowed_packet = 16M
quick
quote-names

See the IniConfigFile base class for examples.

class insights.parsers.galera_cnf.GaleraCnf(context)[source]

Bases: insights.core.IniConfigFile

Parses the content of /etc/my.cnf.d/galera.cnf.

parse_content(content, allow_no_value=True)[source]

Calls parent method to parse contents but overrides parameters.

The galera config file may have keys with no value. This class implements parse_content in order to pass the flag allow_no_value to the parent parser in order to allow parsing of the no-value keys.
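The effect of allow_no_value can be sketched with the standard library's configparser (the real parser builds on insights.core.IniConfigFile, but the underlying behaviour is the same):

```python
import configparser

galera_cnf = """
[mysqldump]
max_allowed_packet = 16M
quick
quote-names
"""

# Without allow_no_value=True, the bare 'quick' and 'quote-names'
# keys would raise a parsing error.
parser = configparser.ConfigParser(allow_no_value=True)
parser.read_string(galera_cnf)
```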

CertList - command getcert list

class insights.parsers.getcert_list.CertList(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse the output of getcert list.

Stores data as a pseudo-dictionary, keyed on request ID. But it’s much easier to find requests based on their properties, using the search method. This finds requests based on their keys, e.g. search(stuck='no'). Spaces and dashes are converted to underscores in the keys being sought, so one can search for key_pair_storage or pre_save_command. Multiple keys can be searched in the same call, e.g. search(CA="IPA", stuck='yes'). If no keys are given, no requests are returned.

Sample output:

Number of certificates and requests being tracked: 2.
Request ID '20130725003533':
        status: MONITORING
        stuck: no
        key pair storage: type=NSSDB,location='/etc/dirsrv/slapd-LDAP-EXAMPLE-COM',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/dirsrv/slapd-LDAP-EXAMPLE-COM/pwdfile.txt'
        certificate: type=NSSDB,location='/etc/dirsrv/slapd-LDAP-EXAMPLE-COM',nickname='Server-Cert',token='NSS Certificate DB'
        CA: IPA
        issuer: CN=Certificate Authority,O=LDAP.EXAMPLE.COM
        subject: CN=master.LDAP.EXAMPLE.COM,O=LDAP.EXAMPLE.COM
        expires: 2017-06-28 12:52:12 UTC
        eku: id-kp-serverAuth,id-kp-clientAuth
        pre-save command:
        post-save command: /usr/lib64/ipa/certmonger/restart_dirsrv LDAP-EXAMPLE-COM
        track: yes
        auto-renew: yes
Request ID '20130725003602':
        status: MONITORING
        stuck: no
        key pair storage: type=NSSDB,location='/etc/dirsrv/slapd-PKI-IPA',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/dirsrv/slapd-PKI-IPA/pwdfile.txt'
        certificate: type=NSSDB,location='/etc/dirsrv/slapd-PKI-IPA',nickname='Server-Cert',token='NSS Certificate DB'
        CA: IPA
        issuer: CN=Certificate Authority,O=EXAMPLE.COM
        subject: CN=ldap.EXAMPLE.COM,O=EXAMPLE.COM
        expires: 2017-06-28 12:52:13 UTC
        eku: id-kp-serverAuth,id-kp-clientAuth
        pre-save command:
        post-save command: /usr/lib64/ipa/certmonger/restart_dirsrv PKI-IPA
        track: yes
        auto-renew: yes
num_tracked

The number of ‘tracked’ certificates and requests, as given in the first line of the output.

Type

int

requests

The list of request IDs as they appear in the output, as strings.

Type

list

Examples

>>> certs = shared[CertList]
>>> certs.num_tracked  # number of certificates tracked from first line
2
>>> len(certs)  # number of requests stored - may be smaller than num_tracked
2
>>> certs.requests
['20130725003533', '20130725003602']
>>> '20130725003533' in certs
True
>>> certs['20130725003533']['issuer']
'CN=Certificate Authority,O=LDAP.EXAMPLE.COM'
>>> for request in certs.search(CA='IPA'):
...     print(request['certificate'])
...
type=NSSDB,location='/etc/dirsrv/slapd-LDAP-EXAMPLE-COM',nickname='Server-Cert',token='NSS Certificate DB'
type=NSSDB,location='/etc/dirsrv/slapd-PKI-IPA',nickname='Server-Cert',token='NSS Certificate DB'
parse_content(content)[source]

We’re only interested in lines that contain a ‘:’. Special lines start with ‘Request ID’ and ‘Number of certificates…’; we handle those separately. All other lines are stripped of surrounding white space and stored as a key-value pair against the last request ID.
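The parsing rule described above can be sketched standalone (a hypothetical simplification of the real parse_content):

```python
def parse_getcert_list(lines):
    """Parse 'getcert list' output into {request_id: {key: value}}."""
    requests = {}
    current = None
    for line in lines:
        line = line.strip()
        if line.startswith('Number of certificates'):
            continue  # handled separately in the real parser
        if line.startswith('Request ID'):
            current = line.split("'")[1]  # the ID is quoted
            requests[current] = {}
        elif ':' in line and current is not None:
            key, _, value = line.partition(':')
            requests[current][key.strip()] = value.strip()
    return requests

output = [
    "Number of certificates and requests being tracked: 1.",
    "Request ID '20130725003533':",
    "        status: MONITORING",
    "        stuck: no",
]
certs = parse_getcert_list(output)
```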

search(**kwargs)[source]

Search for one or more key-value pairs in the given data. See the documentation of insights.parsers.keyword_search() for more details on how to use it.

GetconfPageSize - command /usr/sbin/getconf PAGE_SIZE

This very simple parser returns the output of the getconf PAGE_SIZE command.

Examples

>>> pagesize_parsed.page_size
4096
class insights.parsers.getconf_pagesize.GetconfPageSize(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class for parsing ‘getconf PAGE_SIZE’ command output

Output: page_size

page_size

the page size in bytes; the value depends upon the architecture

Type

int

parse_content(content)[source]

This method must be implemented by classes based on this class.

getenforce - command /usr/sbin/getenforce

This very simple parser returns the output of the getenforce command.

Examples

>>> enforce = shared[getenforcevalue]
>>> enforce['status']
'Enforcing'
insights.parsers.getenforce.getenforcevalue(context)[source]

The output of “getenforce” command is in one of “Enforcing”, “Permissive”, or “Disabled”, so we can return the content directly.

getsebool - command /usr/sbin/getsebool -a

This parser returns the output of the getsebool command.

Sample getsebool -a output:

webadm_manage_user_files --> off
webadm_read_user_files --> off
wine_mmap_zero_ignore --> off
xdm_bind_vnc_tcp_port --> off
ssh_keysign --> off

Examples

>>> "webadm_manage_user_files" in getsebool
True
>>> "tmpreaper_use_nfs" in getsebool
False
>>> getsebool['ssh_keysign']
'off'
class insights.parsers.getsebool.Getsebool(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

The output of “getsebool” command is like following:

tmpreaper_use_nfs --> off
tmpreaper_use_samba --> off

So we can return the value like {“tmpreaper_use_nfs”:”off”, “tmpreaper_use_samba”:”off”}
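That transformation is a simple split per line. A minimal standalone sketch (not the parser's actual implementation):

```python
def parse_getsebool(lines):
    """Turn 'name --> value' lines into a {name: value} dict."""
    booleans = {}
    for line in lines:
        if '-->' in line:
            name, _, value = line.partition('-->')
            booleans[name.strip()] = value.strip()
    return booleans

sebools = parse_getsebool([
    'webadm_manage_user_files --> off',
    'ssh_keysign --> off',
])
```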

Raises

SkipException -- When SELinux is not enabled.

parse_content(content)[source]

This method must be implemented by classes based on this class.

GlanceApiLog - file /var/log/glance/api.log

Module for parsing the log files for Glance

class insights.parsers.glance_log.GlanceApiLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing /var/log/glance/api.log file.

Typical content of api.log file is:

2016-11-09 14:50:44.281 26656 INFO glance.common.wsgi [-] Started child 14826
2016-11-09 14:50:44.445 14826 INFO eventlet.wsgi.server [-] (14826) wsgi starting up on http://172.18.0.13:9292
2016-11-09 14:50:44.454 14826 INFO eventlet.wsgi.server [-] (14826) wsgi exited, is_accepting=True
2016-11-09 14:50:44.470 14826 INFO glance.common.wsgi [-] Child 14826 exiting normally
2016-11-09 14:50:49.032 14863 WARNING oslo_config.cfg [-] Option "rpc_backend" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
2016-11-09 14:50:49.529 14863 INFO glance.common.wsgi [-] Starting 4 workers
2016-11-09 14:50:49.539 14863 INFO glance.common.wsgi [-] Started child 15049
2016-11-09 14:50:49.550 14863 INFO glance.common.wsgi [-] Started child 15054
2016-11-09 14:50:49.552 15054 INFO eventlet.wsgi.server [-] (15054) wsgi starting up on http://172.18.0.13:9292
2016-11-09 14:50:49.561 15049 INFO eventlet.wsgi.server [-] (15049) wsgi starting up on http://172.18.0.13:9292
2016-11-09 14:50:49.719 14863 INFO glance.common.wsgi [-] Started child 15097
2016-11-09 14:50:49.726 14863 INFO glance.common.wsgi [-] Started child 15101
2016-11-09 14:50:49.727 15101 INFO eventlet.wsgi.server [-] (15101) wsgi starting up on http://172.18.0.13:9292
2016-11-09 14:50:49.730 15097 INFO eventlet.wsgi.server [-] (15097) wsgi starting up on http://172.18.0.13:9292

Note

Please refer to its super-class insights.core.LogFileOutput

GlusterPeerStatus - command gluster peer status

class insights.parsers.gluster_peer_status.GlusterPeerStatus(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse the output of gluster peer status

Typical output of gluster peer status command is:

Number of Peers: 1

Hostname: versegluster1.verse.loc
Uuid: 86c0266b-c78c-4d0c-afe7-953dec143530
State: Peer in Cluster (Connected)

Examples

>>> output.status['peers']
1
>>> len(output.status.get('hosts', []))
1
>>> output.status.get('hosts', [])[0].get('Hostname')
'versegluster1.verse.loc'
status

A dict with keys peers and hosts. For example:

{'peers': 3,
 'hosts': [
           {'Hostname': 'foo.com', 'State': 'Peer in Cluster (Connected)', 'Uuid': '86c0266b-c78c-4d0c-afe7-953dec143530'},
           {'Hostname': 'example.com', 'State': 'Peer in Cluster (Connected)', 'Uuid': '3b4673e3-5e95-4c02-b9bb-2823483e067b'},
           {'Hostname': 'bar.com', 'State': 'Peer in Cluster (Disconnected)', 'Uuid': '4673e3-5e95-4c02-b9bb-2823483e067bb3'}]
}
Type

dict
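Building that structure from the peer blocks can be sketched as follows (a standalone simplification, not the parser's actual implementation):

```python
def parse_peer_status(lines):
    """Build the {'peers': N, 'hosts': [...]} structure from
    'gluster peer status' output."""
    status = {'peers': 0, 'hosts': []}
    host = {}
    for line in lines:
        line = line.strip()
        if line.startswith('Number of Peers:'):
            status['peers'] = int(line.split(':')[1])
        elif line.startswith('Hostname:'):
            # Each Hostname line starts a new host block.
            host = {'Hostname': line.split(':', 1)[1].strip()}
            status['hosts'].append(host)
        elif ':' in line and host:
            key, _, value = line.partition(':')
            host[key.strip()] = value.strip()
    return status

status = parse_peer_status([
    'Number of Peers: 1',
    '',
    'Hostname: versegluster1.verse.loc',
    'Uuid: 86c0266b-c78c-4d0c-afe7-953dec143530',
    'State: Peer in Cluster (Connected)',
])
```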

parse_content(content)[source]

This method must be implemented by classes based on this class.

Gluster vol info - commands to retrieve information about gluster volumes

The parsers here provide information about the volumes managed by glusterd.

class insights.parsers.gluster_vol.GlusterVolInfo(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

This parser processes the output of the command gluster vol info and provides the information as a dictionary.

The LegacyItemAccess class provides some helper functions for dealing with a class having a data attribute.

Sample input:

Volume Name: test_vol
Type: Replicate
Volume ID: 2c32ed8d-5a07-4a76-a73a-123859556974
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.17.18.42:/home/brick
Brick2: 172.17.18.43:/home/brick
Brick3: 172.17.18.44:/home/brick

Examples

>>> parser_result_v_info['test_vol']['Type']
'Replicate'
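The key:value parsing, keyed on volume name, can be sketched as follows (a hypothetical simplification of the real parser):

```python
def parse_vol_info(lines):
    """Group 'Key: Value' lines into {volume_name: {key: value}}."""
    volumes = {}
    current = None
    for line in lines:
        if ':' not in line:
            continue
        # Split on the first ':' only, so brick paths keep their colons.
        key, _, value = line.partition(':')
        key, value = key.strip(), value.strip()
        if key == 'Volume Name':
            current = value
            volumes[current] = {}
        elif current is not None:
            volumes[current][key] = value
    return volumes

info = parse_vol_info([
    'Volume Name: test_vol',
    'Type: Replicate',
    'Status: Started',
    'Brick1: 172.17.18.42:/home/brick',
])
```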

Overrides the base class parse_content to parse the output of the gluster vol info command. Information stored in the object is made available to the rule plugins.

data

Dictionary containing each of the key:value pairs from the command output.

Type

dict

Raises

ParseException -- raised if data is not parsable.

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.gluster_vol.GlusterVolStatus(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

This parser processes the output of the command gluster vol status and provides the information as a dictionary.

The LegacyItemAccess class provides some helper functions for dealing with a class having a data attribute.

Sample input:

Status of volume: test_vol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.17.18.42:/home/brick              49152     0          Y       26685
Brick 172.17.18.43:/home/brick              49152     0          Y       27094
Brick 172.17.18.44:/home/brick              49152     0          Y       27060
Self-heal Daemon on localhost               N/A       N/A        Y       7805
Self-heal Daemon on 172.17.18.44            N/A       N/A        Y       33400
Self-heal Daemon on 172.17.18.43            N/A       N/A        Y       33680

Task Status of Volume test_vol
------------------------------------------------------------------------------
There are no active volume tasks

Examples

>>> parser_result_v_status['test_vol'][0]["Online"]
'Y'

Overrides the base class parse_content to parse the output of the gluster vol status command. Information stored in the object is made available to the rule plugins.

data

Dictionary containing each of the key:value pairs from the command output.

Type

dict

Raises

ParseException -- raised if data is not parsable.

parse_content(content)[source]

This method must be implemented by classes based on this class.

GRUB configuration files

This parser reads the configuration of the GRand Unified Bootloader, versions 1 or 2.

This is currently a fairly simple parsing process. Data read from the file is put into roughly three categories:

  • configs: lines read from the file that aren’t boot options (i.e. excluding lines that go in the title and menuentry sections). These are split into pairs on the first ‘=’ sign.

  • title: (GRUB v1 only) lines prefixed by the word ‘title’. All following lines up to the next title line are folded together.

  • menuentry: (GRUB v2 only) lines prefixed by the word ‘menuentry’. All following lines up to the line starting with ‘}’ are treated as part of one menu entry.

Each of these categories is (currently) stored as a simple list of tuples.

  • For the configs dict, the key-value pairs are based on the line, split on the first ‘=’ character. If nothing is found after the ‘=’ character, then the value is ‘’.

  • For the title and menuentry lists, a dict of each boot entry is stored.

    • The items will be key-value pairs, e.g. load_video will be stored as {'load_video': ''} and set root='hd0,msdos1' will be stored as {'set': "root='hd0,msdos1'"}.
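The three-way categorisation can be sketched as follows. This standalone simplification stores one value per key rather than the lists of values the real parser keeps:

```python
def categorize_grub(lines):
    """Sort GRUB config lines into configs / title (v1) / menuentry (v2)."""
    result = {'configs': {}, 'title': [], 'menuentry': []}
    section = None
    for line in lines:
        stripped = line.strip()
        if not stripped or stripped.startswith('#'):
            continue
        if stripped.startswith('title '):
            section = {'title': stripped[len('title '):]}
            result['title'].append(section)
        elif stripped.startswith('menuentry '):
            name = stripped[len('menuentry '):].rstrip('{').strip()
            section = {'menuentry': name}
            result['menuentry'].append(section)
        elif section is not None:
            # Lines inside a boot entry: split on the first space.
            key, _, value = stripped.partition(' ')
            section[key] = value
        else:
            # Top-level config line: split on the first '='.
            key, _, value = stripped.partition('=')
            result['configs'][key] = value
    return result

grub1 = categorize_grub([
    'default=0',
    'title Red Hat Enterprise Linux Server',
    '    kernel /vmlinuz-2.6.32 crashkernel=128M rhgb quiet',
])
```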

Note

For GRUB version 2, all lines between if and fi are ignored because we cannot analyze the result of the bash conditions.

Parsers provided by this module are:

Grub1Config - file /boot/grub.conf

Grub1EFIConfig - file /boot/efi/EFI/redhat/grub.conf

Grub2Config - file /boot/grub/grub2.cfg

Grub2EFIConfig - file /boot/efi/EFI/redhat/grub.cfg

BootLoaderEntries - file /boot/loader/entries/*.conf

class insights.parsers.grub_conf.BootEntry(data={})[source]

Bases: dict

An object representing a boot entry in the Grub Configuration.

name

Name of the boot entry

Type

str

cmdline

Cmdline of the boot entry

Type

str

class insights.parsers.grub_conf.BootLoaderEntries(context)[source]

Bases: insights.core.Parser, dict

Parses the /boot/loader/entries/*.conf files.

title

the name of the boot entry

Type

str

cmdline

the cmdline of the saved boot entry

Type

str

Raises

SkipException -- when input content is empty or no useful data.

parse_content(content)[source]

Parses the /boot/loader/entries/*.conf files.

class insights.parsers.grub_conf.Grub1Config(*args, **kwargs)[source]

Bases: insights.parsers.grub_conf.GrubConfig

Parser for configuration for GRUB version 1.

Examples

>>> grub1_content = '''
... default=0
... timeout=0
... splashimage=(hd0,0)/grub/splash.xpm.gz
... hiddenmenu
... title Red Hat Enterprise Linux Server (2.6.32-431.17.1.el6.x86_64)
...     kernel /vmlinuz-2.6.32-431.17.1.el6.x86_64 crashkernel=128M rhgb quiet
... title Red Hat Enterprise Linux Server (2.6.32-431.11.2.el6.x86_64)
...     kernel /vmlinuz-2.6.32-431.11.2.el6.x86_64 crashkernel=128M rhgb quiet
... '''.strip()
>>> grub1_config.configs.get('default')
['0']
>>> grub1_config.configs.get('hiddenmenu')
['']
>>> grub1_config['title'][0]['kernel']
['/vmlinuz-2.6.32-431.17.1.el6.x86_64 crashkernel=128M rhgb quiet']
>>> grub1_config.entries[1]['title']
'Red Hat Enterprise Linux Server (2.6.32-431.11.2.el6.x86_64)'
>>> grub1_config.boot_entries[1].name
'Red Hat Enterprise Linux Server (2.6.32-431.11.2.el6.x86_64)'
>>> grub1_config.boot_entries[1].cmdline
'/vmlinuz-2.6.32-431.11.2.el6.x86_64 crashkernel=128M rhgb quiet'
>>> grub1_config.is_kdump_iommu_enabled
False
>>> grub1_config.kernel_initrds['grub_kernels']
['vmlinuz-2.6.32-431.17.1.el6.x86_64', 'vmlinuz-2.6.32-431.11.2.el6.x86_64']
get_current_title()[source]

Get the current default title from the default option in the main configuration. (GRUB v1 only)

Returns

A list of dict contains all settings of the default boot entry:
  • {title: name1, kernel: [val], …},

Return type

list

class insights.parsers.grub_conf.Grub1EFIConfig(*args, **kwargs)[source]

Bases: insights.parsers.grub_conf.Grub1Config

Parses grub v1 configuration for EFI-based systems. The content of grub-efi.conf is the same as grub.conf.

class insights.parsers.grub_conf.Grub2Config(*args, **kwargs)[source]

Bases: insights.parsers.grub_conf.GrubConfig

Parser for configuration for GRUB version 2.

Examples

>>> grub2_content = '''
... ### BEGIN /etc/grub.d/00_header ###
... set pager=1
... /
... if [ -s $prefix/grubenv ]; then
...   load_env
... fi
... #[...]
... if [ x"${feature_menuentry_id}" = xy ]; then
...   menuentry_id_option="--id"
... else
...   menuentry_id_option=""
... fi
... #[...]
... ### BEGIN /etc/grub.d/10_linux ###
... menuentry 'Red Hat Enterprise Linux Workstation (3.10.0-327.36.3.el7.x86_64) 7.2 (Maipo)' $menuentry_id_option 'gnulinux-3.10.0-123.13.2.el7.x86_64-advanced-fbff9f50-62c3-484e-bca5-d53f672cda7c' {
...     load_video
...     set gfxpayload=keep
...     insmod gzio
...     insmod part_msdos
...     insmod ext2
...     set root='hd0,msdos1'
...     if [ x$feature_platform_search_hint = xy ]; then
...       search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  1184ab74-77b5-4cfa-81d3-fb87b0457577
...     else
...       search --no-floppy --fs-uuid --set=root 1184ab74-77b5-4cfa-81d3-fb87b0457577
...     fi
...     linux16 /vmlinuz-3.10.0-327.36.3.el7.x86_64 root=/dev/RHEL7CSB/Root ro rd.lvm.lv=RHEL7CSB/Root rd.luks.uuid=luks-96c66446-77fd-4431-9508-f6912bd84194 crashkernel=128M@16M rd.lvm.lv=RHEL7CSB/Swap vconsole.font=latarcyrheb-sun16 rhgb quiet LANG=en_GB.utf8
...     initrd16 /initramfs-3.10.0-327.36.3.el7.x86_64.img
... }
... '''.strip()
>>> grub2_config['configs']
{'set pager': ['1'], '/': ['']}
>>> grub2_config.entries[0]['menuentry']
"'Red Hat Enterprise Linux Workstation (3.10.0-327.36.3.el7.x86_64) 7.2 (Maipo)' $menuentry_id_option 'gnulinux-3.10.0-123.13.2.el7.x86_64-advanced-fbff9f50-62c3-484e-bca5-d53f672cda7c'"
>>> grub2_config['menuentry'][0]['insmod']
['gzio', 'part_msdos', 'ext2']
>>> grub2_config.boot_entries[0].name
"'Red Hat Enterprise Linux Workstation (3.10.0-327.36.3.el7.x86_64) 7.2 (Maipo)' $menuentry_id_option 'gnulinux-3.10.0-123.13.2.el7.x86_64-advanced-fbff9f50-62c3-484e-bca5-d53f672cda7c'"
>>> grub2_config.boot_entries[0].cmdline
'/vmlinuz-3.10.0-327.36.3.el7.x86_64 root=/dev/RHEL7CSB/Root ro rd.lvm.lv=RHEL7CSB/Root rd.luks.uuid=luks-96c66446-77fd-4431-9508-f6912bd84194 crashkernel=128M@16M rd.lvm.lv=RHEL7CSB/Swap vconsole.font=latarcyrheb-sun16 rhgb quiet LANG=en_GB.utf8'
>>> grub2_config.kernel_initrds['grub_kernels'][0]
'vmlinuz-3.10.0-327.36.3.el7.x86_64'
>>> grub2_config.is_kdump_iommu_enabled
False
class insights.parsers.grub_conf.Grub2EFIConfig(*args, **kwargs)[source]

Bases: insights.parsers.grub_conf.Grub2Config

Parses grub2 configuration for EFI-based systems

class insights.parsers.grub_conf.GrubConfig(context)[source]

Bases: insights.core.Parser, dict

Parser for configuration for both GRUB versions 1 and 2.

property boot_entries

Get all boot entries in GRUB configuration.

Returns

A list of insights.parsers.grub_conf.BootEntry

objects for each boot entry, in the format below: - ‘name’: “Red Hat Enterprise Linux Server” - ‘cmdline’: “kernel /vmlinuz-2.6.32-431.11.2.el6.x86_64 crashkernel=128M rhgb quiet”

Return type

(list)

property is_kdump_iommu_enabled

Does any kernel have ‘intel_iommu=on’ set?

Returns

True when ‘intel_iommu=on’ is set, otherwise returns False

Return type

(bool)

property kernel_initrds

Get the kernel and initrd files referenced in GRUB configuration files

Returns

Returns a dict of the kernel and initrd files referenced

in GRUB configuration files

Return type

(dict)

parse_content(content)[source]

Parse grub configuration file to create a dict with this structure:

{
    "configs": {
        name1: [val1, val2, ...]
        name2: [val],
        ...
    },
    "title": [
        {title: name1, kernel: [val], ...},
        {title: name2, module: [val1, val2], ...},
    ],
    "menuentry": [
        {menuentry: name1, insmod: [val1, val2], ...},
        {menuentry: name2, linux16: [val], ...},
    ],
}

grubby - command /usr/sbin/grubby

This is a collection of parsers that all deal with the command grubby. Parsers included in this module are:

GrubbyDefaultIndex - command grubby --default-index

GrubbyDefaultKernel - command grubby --default-kernel

class insights.parsers.grubby.GrubbyDefaultIndex(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

This parser parses the output of command grubby --default-index.

The typical output of this command is:

0

Examples

>>> grubby_default_index.default_index
0
default_index

the numeric index of the current default boot entry, counting from 0

Type

int

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.grubby.GrubbyDefaultKernel(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

This parser parses the output of command grubby --default-kernel.

The typical output of this command is:

/boot/vmlinuz-2.6.32-573.el6.x86_64

Examples

>>> grubby_default_kernel.default_kernel
'/boot/vmlinuz-2.6.32-573.el6.x86_64'
default_kernel

The default kernel name for next boot

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.

HammerPing - command /usr/bin/hammer ping

The hammer ping parser reads the output of hammer ping and turns it into a dictionary. The key is the service name, and the value is a dict of all the service info.

Sample output of hammer ping:

candlepin:
    Status:          FAIL
    Server Response: Message: 404 Resource Not Found
elasticsearch:
    Status:          ok
    Server Response: Duration: 35ms
foreman_tasks:
    Status:          ok
    Server Response: Duration: 1ms

Examples

>>> hammer = shared[HammerPing]
>>> 'unknown_service' in hammer.service_list
False
>>> hammer['candlepin']['Status']
'FAIL'
>>> hammer['candlepin']['Server Response']
'Message: 404 Resource Not Found'
>>> hammer.are_all_ok
False
>>> hammer.services_of_status('OK')
['elasticsearch', 'foreman_tasks']
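Parsing the indented service blocks can be sketched as follows (a standalone simplification, not the parser's actual implementation):

```python
def parse_hammer_ping(lines):
    """Build {service: {field: value}} from 'hammer ping' output."""
    services = {}
    current = None
    for line in lines:
        if not line.strip():
            continue
        if not line.startswith(' ') and line.rstrip().endswith(':'):
            # Unindented 'name:' lines start a new service block.
            current = line.strip().rstrip(':')
            services[current] = {}
        elif current is not None:
            # Split on the first ':' so the response text keeps its colons.
            key, _, value = line.strip().partition(':')
            services[current][key.strip()] = value.strip()
    return services

ping = parse_hammer_ping([
    'candlepin:',
    '    Status:          FAIL',
    '    Server Response: Message: 404 Resource Not Found',
    'elasticsearch:',
    '    Status:          ok',
])
```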
class insights.parsers.hammer_ping.HammerPing(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, dict

Read the hammer ping status and convert it to dictionaries of status and response information.

errors

Any error messages encountered during parsing

Type

list

raw_content

The original output of hammer ping

Type

list

property are_all_ok

Return a boolean value indicating whether all the services are running normally.

parse_content(content)[source]

This method must be implemented by classes based on this class.

property service_list

Return a list of services in the order they appear.

services_of_status(status='ok')[source]

List of the services in the given status.

Parameters

status (str) -- the status code to search for, defaulting to ‘ok’. The value is converted to lower case.

Returns: List of service names having that status.

HammerTaskList - command hammer --csv task list

This parser reads the task list of a Satellite server using hammer, in CSV format. It relies on the root user running the command being able to do authenticated commands, which currently relies on the Satellite administrator setting up an authentication file. This is often done as a convenience; if the command is unable to authenticate then no tasks will be shown and an error flag will be recorded in the parser.

Sample output from the hammer --csv task list command:

ID,Name,Owner,Started at,Ended at,State,Result,Task action,Task errors
92b732ea-7423-4644-8890-80e054f1799a,,foreman_api_admin,2016/11/11 07:18:32,2016/11/11 07:18:34,stopped,success,Refresh repository,""
e9cb6455-a433-467e-8404-7d01bd726689,,foreman_api_admin,2016/11/11 07:18:28,2016/11/11 07:18:31,stopped,success,Refresh repository,""
e30f3e7e-c023-4380-9594-337fdc4967e4,,foreman_api_admin,2016/11/11 07:18:24,2016/11/11 07:18:28,stopped,success,Refresh repository,""
3197f6a1-891f-4f42-9e4d-92c83c3ed035,,foreman_api_admin,2016/11/11 07:18:20,2016/11/11 07:18:24,stopped,success,Refresh repository,""
22169621-7175-411c-86be-46b4254a4e77,,foreman_api_admin,2016/11/11 07:18:16,2016/11/11 07:18:19,stopped,success,Refresh repository,""
f111e8f7-c956-470b-abb6-2e436ecd5866,,foreman_api_admin,2016/11/11 07:18:14,2016/11/11 07:18:16,stopped,success,Refresh repository,""
dfc702ea-ce46-427c-8a07-43e2a68e1320,,foreman_api_admin,2016/11/11 07:18:12,2016/11/11 07:18:14,stopped,success,Refresh repository,""
e8cac892-e666-4f2c-ab97-2be298da337e,,foreman_api_admin,2016/11/11 07:18:09,2016/11/11 07:18:12,stopped,success,Refresh repository,""
e6c1e1b2-a29d-4fd0-891e-e736dc9b7150,,,2016/11/11 07:14:06,2016/11/12 05:10:17,stopped,success,Listen on candlepin events,""
44a42c49-3038-4cae-8067-4d1cc305db05,,,2016/11/11 07:11:44,2016/11/11 07:12:47,stopped,success,Listen on candlepin events,""
72669288-54ac-41ba-a3b2-314a2c81f438,,,2016/11/11 06:57:15,2016/11/11 07:07:03,stopped,success,Listen on candlepin events,""
1314c91e-19d6-4d71-9bca-31db0df0aad2,,foreman_admin,2016/11/11 06:55:59,2016/11/11 06:55:59,stopped,error,Update for host sat62disc.example.org,"There was an issue with the backend service candlepin: 404 Resource Not Found, There was an issue with the backend service candlepin: 404 Resource Not Found"
303ef924-9845-4267-a705-194a4ebfbcfb,,foreman_admin,2016/11/11 06:55:58,2016/11/11 06:55:58,stopped,error,Package Profile Update,500 Internal Server Error
cffa5990-23ba-49f5-828b-ae0c77e8257a,,foreman_admin,2016/11/11 06:55:53,2016/11/11 06:55:56,stopped,error,Update for host sat62disc.example.org,"There was an issue with the backend service candlepin: 404 Resource Not Found, There was an issue with the backend service candlepin: 404 Resource Not Found"
07780e8f-dd81-49c4-a792-c4d4d162eb10,,foreman_admin,2016/11/11 06:55:50,2016/11/11 06:55:51,stopped,error,Update for host sat62disc.example.org,"There was an issue with the backend service candlepin: 404 Resource Not Found, There was an issue with the backend service candlepin: 404 Resource Not Found"
749a17a1-a8cb-46f0-98f6-017576481df8,,foreman_admin,2016/11/11 06:51:28,2016/11/11 06:51:29,stopped,error,Update for host sat62disc.example.org,"There was an issue with the backend service candlepin: 404 Resource Not Found, There was an issue with the backend service candlepin: 404 Resource Not Found"
d8f41819-b492-46e5-b0e3-ead3b4b6810c,,foreman_admin,2016/11/11 06:51:22,2016/11/11 06:51:28,stopped,error,Package Profile Update,500 Internal Server Error

Examples

>>> type(tasks)
<class 'insights.parsers.hammer_task_list.HammerTaskList'>
>>> tasks.can_authenticate
True
>>> len(tasks)  # Can act as a list
17
>>> tasks[0]['ID']  # Fetch rows directly
'92b732ea-7423-4644-8890-80e054f1799a'
>>> tasks[0]['Task errors']  # Literal contents of field - quotes not stripped
''
>>> error_tasks = tasks.search(Result='error')  # List of dictionaries
>>> len(error_tasks)
6
>>> error_tasks[0]['ID']
'1314c91e-19d6-4d71-9bca-31db0df0aad2'
>>> error_tasks[-1]['Task errors']
'500 Internal Server Error'
class insights.parsers.hammer_task_list.HammerTaskList(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, list

Parse the CSV output from the hammer --output csv task list command.

Raises

SkipException -- When nothing is parsed.

can_authenticate

Whether we have valid data; if False it’s probably due to not being able to authenticate.

Type

bool

parse_content(content)[source]

This method must be implemented by classes based on this class.

property running_tasks

Return a list of running tasks

search(**kwargs)[source]

Search the process list for matching rows based on key-value pairs.

This uses the insights.parsers.keyword_search function for searching; see its documentation for usage details. If no search parameters are given, no rows are returned.

Examples

>>> no_owner_tasks = tasks.search(Owner='')
>>> len(no_owner_tasks)
3
>>> no_owner_tasks[0]['Task action']
'Listen on candlepin events'
>>> len(tasks.search(State='stopped', Result='error'))
6
property tasks

Return a list of tasks, in the order they appear in the file, as dictionaries of fields and values.

HaproxyCfg - file /etc/haproxy/haproxy.cfg

Contents of the haproxy.cfg file look like:

global
    daemon
    group       haproxy
    log         /dev/log local0
    user        haproxy
    maxconn     20480
    pidfile     /var/run/haproxy.pid

defaults
    retries     3
    maxconn     4096
    log         global
    timeout     http-request 10s
    timeout     queue 1m
    timeout     connect 10s

If there are duplicate key items, they are merged into one. For example:

option  tcpka
                        }--->    option: ["tcpka","tcplog"]
option  tcplog
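A minimal sketch of that merging behaviour, assuming a plain dict per section (the helper name add_option is hypothetical, not part of the parser):

```python
def add_option(section, key, value):
    """Store key/value in a section dict; duplicate keys merge into a list."""
    if key not in section:
        section[key] = value                  # first occurrence: keep as-is
    elif isinstance(section[key], list):
        section[key].append(value)            # third and later: extend the list
    else:
        section[key] = [section[key], value]  # second occurrence: make a list
    return section

section = {}
add_option(section, "maxconn", "20480")
add_option(section, "option", "tcpka")
add_option(section, "option", "tcplog")
# section["option"] is now ["tcpka", "tcplog"]
```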

Examples

>>> cfg = shared[HaproxyCfg]
>>> cfg.data['global']
{"daemon": "", "group": "haproxy", "log": " /dev/log local0",
 "user": "haproxy", "maxconn": "20480", "pidfile": "/var/run/haproxy.pid"}
>>> cfg.data['global']['group']
"haproxy"
>>> 'global' in cfg.data
True
>>> 'user' in cfg.data.get('global')
True
class insights.parsers.haproxy_cfg.HaproxyCfg(context)[source]

Bases: insights.core.Parser

Class to parse file haproxy.cfg.

parse_content(content)[source]

This method must be implemented by classes based on this class.

HeatConf - file /etc/heat/heat.conf

This module provides plugins access to the heat.conf information.

Typical content of the heat.conf is:

[DEFAULT]
heat_metadata_server_url = http://172.16.0.11:8000
heat_waitcondition_server_url = http://172.16.0.11:8000/v1/waitcondition
heat_watch_server_url =http://172.16.0.11:8003
stack_user_domain_name = heat_stack
stack_domain_admin = heat_stack_domain_admin
stack_domain_admin_password = *********
auth_encryption_key = V48p9fRZzWSRgjE96e2I1oGwn216xgqf
log_dir = /var/log/heat
instance_user=
notification_driver=messaging
[auth_password]
[clients]
[clients_ceilometer]
[clients_cinder]
[clients_glance]
[clients_heat]
[clients_keystone]
auth_uri =http://192.0.2.18:35357
[clients_neutron]

Usage of this parser is similar to others that use the IniConfigFile base class.

Examples

>>> conf = shared(HeatConf)
>>> 'DEFAULT' in conf
True
>>> conf.get_item('clients_keystone', 'auth_uri')
'http://192.0.2.18:35357'
class insights.parsers.heat_conf.HeatConf(context)[source]

Bases: insights.core.IniConfigFile

Parses content of “/etc/heat/heat.conf”.

Heat logs

Module for parsing the log files for Heat. Parsers included are:

HeatApiLog - file /var/log/heat/heat-api.log, /var/log/containers/heat/heat_api.log

HeatEngineLog - file /var/log/heat/heat-engine.log

class insights.parsers.heat_log.HeatApiLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing /var/log/heat/heat-api.log, /var/log/containers/heat/heat_api.log file.

Typical content of heat-api.log file is:

2016-11-09 14:39:29.223 3844 WARNING oslo_config.cfg [-] Option "rpc_backend" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
2016-11-09 14:39:30.612 3844 INFO heat.api [-] Starting Heat REST API on 172.16.2.12:8004
2016-11-09 14:39:30.612 3844 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
2016-11-09 14:39:30.615 3844 INFO heat.common.wsgi [-] Starting 0 workers
2016-11-09 14:39:30.625 3844 INFO heat.common.wsgi [-] Started child 4136
2016-11-09 14:39:30.641 3844 INFO heat.common.wsgi [-] Started child 4137
2016-11-09 14:39:30.723 4137 INFO eventlet.wsgi.server [-] (4137) wsgi starting up on http://172.16.2.12:8004
2016-11-09 14:39:30.728 3844 INFO heat.common.wsgi [-] Started child 4139
2016-11-09 14:39:30.750 4140 INFO eventlet.wsgi.server [-] (4140) wsgi starting up on http://172.16.2.12:8004
2016-11-09 14:39:30.732 3844 INFO heat.common.wsgi [-] Started child 4140
2016-11-09 14:39:30.764 4139 INFO eventlet.wsgi.server [-] (4139) wsgi starting up on http://172.16.2.12:8004
2016-11-09 14:39:30.782 4136 INFO eventlet.wsgi.server [-] (4136) wsgi starting up on http://172.16.2.12:8004

Note

Please refer to its super-class insights.core.LogFileOutput

class insights.parsers.heat_log.HeatEngineLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing /var/log/heat/heat-engine.log file.

Typical content of heat-engine.log file is:

2016-11-09 14:32:43.062 4392 WARNING oslo_config.cfg [-] Option "rpc_backend" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
2016-11-09 14:32:43.371 4392 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
2016-11-09 14:32:43.374 4392 WARNING heat.common.pluginutils [-] Encountered exception while loading heat.engine.clients.os.monasca: "No module named monascaclient". Not using monasca.
2016-11-09 14:32:43.402 4392 WARNING heat.common.pluginutils [-] Encountered exception while loading heat.engine.clients.os.senlin: "No module named senlinclient". Not using senlin.
2016-11-09 14:32:43.761 4392 WARNING heat.common.pluginutils [-] Encountered exception while loading heat.engine.clients.os.senlin: "No module named senlinclient". Not using senlin.cluster.
2016-11-09 14:32:43.763 4392 WARNING heat.common.pluginutils [-] Encountered exception while loading heat.engine.clients.os.senlin: "No module named senlinclient". Not using senlin.profile.
2016-11-09 14:32:43.763 4392 WARNING heat.common.pluginutils [-] Encountered exception while loading heat.engine.clients.os.monasca: "No module named monascaclient". Not using monasca.notification.
2016-11-09 14:32:43.764 4392 WARNING heat.common.pluginutils [-] Encountered exception while loading heat.engine.clients.os.senlin: "No module named senlinclient". Not using senlin.profile_type.
2016-11-09 14:32:43.765 4392 WARNING heat.common.pluginutils [-] Encountered exception while loading heat.engine.clients.os.senlin: "No module named senlinclient". Not using senlin.policy_type.
2016-11-09 14:32:44.153 4392 WARNING heat.engine.environment [-] OS::Aodh::CombinationAlarm is DEPRECATED. The combination alarm is deprecated and disabled by default in Aodh.
2016-11-09 14:32:44.154 4392 WARNING heat.engine.environment [-] OS::Heat::HARestarter is DEPRECATED. The HARestarter resource type is deprecated and will be removed in a future release of Heat, once it has support for auto-healing any type of resource. Note that HARestarter does *not* actually restart servers - it deletes and then recreates them. It also does the same to all dependent resources, and may therefore exhibit unexpected and undesirable behaviour. Instead, use the mark-unhealthy API to mark a resource as needing replacement, and then a stack update to perform the replacement while respecting  the dependencies and not deleting them unnecessarily.
2016-11-09 14:32:44.154 4392 WARNING heat.engine.environment [-] OS::Heat::SoftwareDeployments is HIDDEN. Please use OS::Heat::SoftwareDeploymentGroup instead.
2016-11-09 14:32:44.155 4392 WARNING heat.engine.environment [-] OS::Heat::StructuredDeployments is HIDDEN. Please use OS::Heat::StructuredDeploymentGroup instead.
2016-11-09 14:32:44.156 4392 WARNING heat.engine.environment [-] OS::Neutron::ExtraRoute is UNSUPPORTED. Use this resource at your own risk.

Note

Please refer to its super-class insights.core.LogFileOutput

VDSMId - file /etc/vdsm/vdsm.id

Module for parsing the content of file vdsm.id, which is a simple file.

Typical content of “vdsm.id” is:

# VDSM UUID info
#
F7D9D983-6233-45C2-A387-9B0C33CB1306
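The comment skipping can be sketched as follows (parse_vdsm_id is a hypothetical helper, not the parser's actual code): drop comment and blank lines and keep the first remaining line.

```python
def parse_vdsm_id(content):
    """Return the first non-blank, non-comment line, i.e. the host UUID."""
    for line in content:
        line = line.strip()
        if line and not line.startswith("#"):
            return line
    return None

content = ["# VDSM UUID info", "#", "F7D9D983-6233-45C2-A387-9B0C33CB1306"]
uuid = parse_vdsm_id(content)
```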

Examples

>>> vd = shared[VDSMId]
>>> vd.uuid
"F7D9D983-6233-45C2-A387-9B0C33CB1306"
class insights.parsers.host_vdsm_id.VDSMId(context)[source]

Bases: insights.core.Parser

Class for parsing vdsm.id file.

parse_content(content)[source]

Returns the UUID of this Host - E.g.: F7D9D983-6233-45C2-A387-9B0C33CB1306

Hostname - command hostname

Parsers contained in this module are:

Hostname - command hostname -f

HostnameDefault - command hostname

HostnameShort - command hostname -s

class insights.parsers.hostname.Hostname(context, extra_bad_lines=[])[source]

Bases: insights.parsers.hostname.HostnameBase

This parser simply reads the output of hostname -f, which is the configured fully qualified domain name of the client system. It then splits it into hostname and domain and stores these as attributes, along with the unmodified name in the fqdn attribute.

Examples

>>> hostname.raw
'rhel7.example.com'
>>> hostname.fqdn
'rhel7.example.com'
>>> hostname.hostname
'rhel7'
>>> hostname.domain
'example.com'
raw

The raw output of the hostname -f command.

fqdn

The fully qualified domain name of the host. The same as hostname when the domain part is not set.

hostname

The hostname.

domain

The domain derived from the fqdn.

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.hostname.HostnameBase(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

The base parser class for command hostname.

Raises

ParseException -- When the output contains multiple non-empty lines.

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.hostname.HostnameDefault(context, extra_bad_lines=[])[source]

Bases: insights.parsers.hostname.HostnameBase

This parser simply reads the output of hostname.

Examples

>>> hostname_def.raw
'rhel7'
>>> hostname_def.hostname
'rhel7'
raw

The raw output of the hostname command.

hostname

The hostname.

class insights.parsers.hostname.HostnameShort(context, extra_bad_lines=[])[source]

Bases: insights.parsers.hostname.HostnameBase

This parser simply reads the output of hostname -s, which is the configured short hostname of the client system.

Examples

>>> hostname_s.raw
'rhel7'
>>> hostname_s.hostname
'rhel7'
raw

The raw output of the hostname -s command.

hostname

The hostname.

Hosts - file /etc/hosts

This parser parses the /etc/hosts file, strips the comments, ignores the blank lines, and collects the host names by IP address. IPv4 and IPv6 addresses are supported.

Sample hosts file:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# The same IP address can appear more than once, with different names
127.0.0.1 fte.example.com

10.0.0.1 nonlocal.example.com nonlocal2.fte.example.com
10.0.0.2 other.host.example.com # Comments at end of line are ignored
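The per-line handling described above (comments stripped, blank lines ignored, names collected by IP) could look roughly like this hypothetical sketch:

```python
def parse_hosts(lines):
    """Collect host names keyed by IP, stripping comments and blanks."""
    data = {}
    for line in lines:
        line = line.split("#", 1)[0].strip()  # drop full-line and trailing comments
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        data.setdefault(ip, []).extend(names)  # same IP may appear more than once
    return data

sample = [
    "127.0.0.1 localhost localhost.localdomain",
    "# a comment",
    "",
    "127.0.0.1 fte.example.com",
    "10.0.0.2 other.host.example.com # trailing comment",
]
hosts_by_ip = parse_hosts(sample)
```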

Examples

>>> len(hosts.all_names)
10
>>> 'localhost6' in hosts.all_names
True
>>> hosts.data['127.0.0.1']
['localhost', 'localhost.localdomain', 'localhost4', 'localhost4.localdomain4', 'fte.example.com']
>>> sorted(hosts.get_nonlocal().keys())
['10.0.0.1', '10.0.0.2']
>>> hosts.lines[-1]['ip']
'10.0.0.2'
>>> hosts.lines[2]['names']
['fte.example.com']
class insights.parsers.hosts.Hosts(context)[source]

Bases: insights.core.Parser

Read the /etc/hosts file and parse it into a dictionary of host name lists, keyed on IP address.

property all_ips

The set of ip addresses known.

Type

(set)

property all_names

The set of host names known, regardless of their IP address.

Type

(set)

property data

The parsed result as a dict with IP address as the key.

Type

(dict)

get_nonlocal()[source]

A dictionary of host name lists, keyed on IP address, that are not the ‘localhost’ addresses ‘127.0.0.1’ or ‘::1’.

ip_of(hostname)[source]

Return the (first) IP address given for this host name. None is returned if no IP address is found.

property lines

List of the parsed lines in the original order, in the following format:

{
    'ip': '127.0.0.1',
    'names': ['localhost', 'localhost.localdomain', 'localhost4', 'localhost4.localdomain4'],
    'raw_line': '127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4'
}
Type

(list)

parse_content(content)[source]

This method must be implemented by classes based on this class.

HponConf - command /sbin/hponcfg -g

Get the iLO firmware revision from the hponcfg command. This is a 3rd party utility from HP and isn’t shipped with RHEL. However, it’s useful for detecting possible hardware incompatibilities.

There are only five pieces of information extracted:

  • firmware_revision - the Firmware Revision value

  • device_type - the Device type value

  • driver_name - the Driver name value

  • server_name - the Server Name value

  • server_number - the Server Number value

Values are ‘’ if not listed in the output.

Input looks like this:

HP Lights-Out Online Configuration utility
Version 4.3.1 Date 05/02/2014 (c) Hewlett-Packard Company, 2014
Firmware Revision = 1.22 Device type = iLO 4 Driver name = hpilo
Host Information:
                        Server Name: esxi01.hp.local
                        Server Number:
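A hedged sketch of extracting those five fields with regular expressions; the patterns below are assumptions, and the actual parser may match differently. Absent fields default to ''.

```python
import re

def parse_hponcfg(lines):
    """Pull the five documented values out of hponcfg -g output."""
    patterns = {
        "firmware_revision": r"Firmware Revision\s*=\s*(\S+)",
        "device_type": r"Device type\s*=\s*(.*?)\s*Driver name",
        "driver_name": r"Driver name\s*=\s*(\S+)",
        "server_name": r"Server Name:\s*(\S*)",
        "server_number": r"Server Number:\s*(\S*)",
    }
    data = dict.fromkeys(patterns, "")   # '' when a field is not listed
    text = "\n".join(lines)
    for key, pattern in patterns.items():
        match = re.search(pattern, text)
        if match:
            data[key] = match.group(1)
    return data

lines = [
    "Firmware Revision = 1.22 Device type = iLO 4 Driver name = hpilo",
    "Host Information:",
    "                        Server Name: esxi01.hp.local",
    "                        Server Number:",
]
cfg_data = parse_hponcfg(lines)
```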

Examples

>>> cfg = shared[HponConf]
>>> cfg.data['firmware_revision']
'1.22'
>>> cfg.data['server_name']
'esxi01.hp.local'
>>> cfg.data['server_number']
''
>>> 'Version' in cfg.data # other values in the hponcfg output not found
False
class insights.parsers.hponcfg.HponConf(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Read the output of the HP ILO configuration utility.

firmware_revision

The firmware revision string.

Type

str

device_type

The device type (e.g. ‘iLO 4’).

Type

str

driver_name

The driver name (e.g. ‘hpilo’).

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.

HttpdM - command httpd -M

Module for parsing the output of command httpd -M.

class insights.parsers.httpd_M.HttpdM(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

Class for parsing httpd -M command output.

The data is kept in the data property and can be accessed through the object itself thanks to the LegacyItemAccess parser class.

Typical output of command httpd -M looks like:

Loaded Modules:
 core_module (static)
 http_module (static)
 access_compat_module (shared)
 actions_module (shared)
 alias_module (shared)
Syntax OK
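The module lines can be reduced to a name-to-mode dictionary like the one described above with a sketch such as this (parse_modules is hypothetical, not the class's real code):

```python
def parse_modules(lines):
    """Map module name to load mode ('static' or 'shared')."""
    data = {}
    for line in lines:
        line = line.strip()
        if line.endswith("(static)") or line.endswith("(shared)"):
            name, mode = line.split()
            data[name] = mode.strip("()")
    return data

output = """\
Loaded Modules:
 core_module (static)
 http_module (static)
 access_compat_module (shared)
Syntax OK
""".splitlines()
modules = parse_modules(output)
static_modules = [m for m, mode in modules.items() if mode == "static"]
```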

Examples

>>> type(hm)
<class 'insights.parsers.httpd_M.HttpdM'>
>>> len(hm.loaded_modules)
5
>>> len(hm.static_modules)
2
>>> 'core_module' in hm.static_modules
True
>>> 'http_module' in hm.static_modules
True
>>> 'http_module' in hm
True
>>> 'http_module' in hm.shared_modules
False
>>> hm.httpd_command
'/usr/sbin/httpd'
Raises

ParseException -- When input content is empty or there is no parsed data.

data

All loaded modules are stored in this dictionary with the name as the key and the loaded mode (‘shared’ or ‘static’) as the value.

Type

dict

loaded_modules

List of the loaded modules.

Type

list

static_modules

List of the loaded static modules.

Type

list

shared_modules

List of the loaded shared modules.

Type

list

property httpd_command

The full path of a running httpd. An empty string when nothing is found. Used to identify which httpd binary the instance runs with.

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.

HttpdV - command httpd -V

Module for parsing the output of command httpd -V. The bulk of the content is split on the colon and keys are kept as is. Lines beginning with ‘-D’ are kept in a dictionary keyed under ‘Server compiled with’; each compilation option is a key in this sub-dictionary. The value of the compilation options is the value after the equals sign, if one is present, or the value in brackets after the compilation option, or ‘True’ if only the compilation option is present.

class insights.parsers.httpd_V.HttpdV(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

Class for parsing httpd -V command output.

The data is kept in the data property and can be accessed through the object itself thanks to the LegacyItemAccess parser class.

Typical output of command httpd -V looks like:

Server version: Apache/2.2.6 (Red Hat Enterprise Linux)
Server's Module Magic Number: 20120211:24
Compiled using: APR 1.4.8, APR-UTIL 1.5.2
Architecture:   64-bit
Server MPM:     Prefork
Server compiled with....
-D APR_HAS_SENDFILE
-D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
-D AP_TYPES_CONFIG_FILE="conf/mime.types"
-D SERVER_CONFIG_FILE="conf/httpd.conf"
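The '-D' handling described in the module docstring (value after the equals sign, or in brackets, or True when the option stands alone) can be sketched as follows; parse_compile_option is a hypothetical helper, not the parser's actual code:

```python
def parse_compile_option(line):
    """Parse one '-D' line from httpd -V into an (option, value) pair."""
    opt = line.strip()
    if opt.startswith("-D"):
        opt = opt[2:].strip()
    if "=" in opt:                          # -D KEY="value"
        key, value = opt.split("=", 1)
        return key.strip(), value.strip().strip('"')
    if "(" in opt:                          # -D KEY (value)
        key, value = opt.split("(", 1)
        return key.strip(), value.rstrip(")").strip()
    return opt, True                        # -D KEY alone
```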

Examples

>>> type(hv)
<class 'insights.parsers.httpd_V.HttpdV'>
>>> hv.mpm
'prefork'
>>> hv["Server's Module Magic Number"]
'20120211:24'
>>> hv['Server compiled with']['APR_HAS_SENDFILE']
True
>>> hv['Server compiled with']['APR_HAVE_IPV6']
'IPv4-mapped addresses enabled'
>>> hv['Server compiled with']['SERVER_CONFIG_FILE']
'conf/httpd.conf'
data

The bulk of the content is split on the colon and keys are kept as is. Lines beginning with ‘-D’ are kept in a dictionary keyed under ‘Server compiled with’; each compilation option is a key in this sub-dictionary. The value of the compilation options is the value after the equals sign, if one is present, or the value in brackets after the compilation option, or ‘True’ if only the compilation option is present.

Type

dict

Raises

SkipException -- When input content is empty or there is no parsed data.

property httpd_command

The full path of a running httpd. An empty string when nothing is found. Used to identify which httpd binary the instance runs with.

Type

str

property mpm

The MPM mode of the running httpd. An empty string when nothing is found.

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.

property version

The version of the running httpd. An empty string when nothing is found.

Type

str

HttpdConf - files /etc/httpd/conf/httpd.conf and /etc/httpd/conf.d/*

Parse the keyword-and-value-but-also-vaguely-XML of an Apache configuration file.

Generally, each line is split on the first space into key and value, leading and trailing space being ignored.
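The first-space split can be sketched as below (split_directive is a hypothetical helper; the real parser also tracks <Section> nesting and the originating file):

```python
def split_directive(line):
    """Split a config line on the first space; strip surrounding quotes."""
    key, _, value = line.strip().partition(" ")
    return key, value.strip().strip('"')
```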

Sample (edited) httpd.conf file:

ServerRoot "/etc/httpd"
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule auth_digest_module modules/mod_auth_digest.so

<Directory />
    Options FollowSymLinks
    AllowOverride None
</Directory>

<IfModule mod_mime_magic.c>
#   MIMEMagicFile /usr/share/magic.mime
    MIMEMagicFile conf/magic
</IfModule>

ErrorLog "|/usr/sbin/httplog -z /var/log/httpd/error_log.%Y-%m-%d"

SSLProtocol -ALL +SSLv3
#SSLProtocol all -SSLv2

NSSProtocol SSLV3 TLSV1.0
#NSSProtocol ALL

# prefork MPM
<IfModule prefork.c>
StartServers       8
MinSpareServers    5
MaxSpareServers   20
ServerLimit      256
MaxClients       256
MaxRequestsPerChild  200
</IfModule>

# worker MPM
<IfModule worker.c>
StartServers         4
MaxClients         300
MinSpareThreads     25
MaxSpareThreads     75
ThreadsPerChild     25
MaxRequestsPerChild  0
</IfModule>

Examples

>>> httpd_conf['ServerRoot'][-1].value
'/etc/httpd'
>>> httpd_conf['LoadModule'][0].value
'auth_basic_module modules/mod_auth_basic.so'
>>> httpd_conf['LoadModule'][-1].value
'auth_digest_module modules/mod_auth_digest.so'
>>> httpd_conf['Directory', '/']['Options'][-1].value
'FollowSymLinks'
>>> type(httpd_conf[('IfModule','prefork.c')]) == type({})
True
>>> httpd_conf[('IfModule','mod_mime_magic.c')]
{'MIMEMagicFile': [ParsedData(value='conf/magic', line='MIMEMagicFile conf/magic', section='IfModule', section_name='mod_mime_magic.c', file_name='path', file_path='/path')]}
>>> httpd_conf[('IfModule','prefork.c')]['StartServers'][0].value
'8'
>>> 'ThreadsPerChild' in httpd_conf[('IfModule','prefork.c')]
False
>>> httpd_conf[('IfModule','worker.c')]['MaxRequestsPerChild'][-1].value
'0'
class insights.parsers.httpd_conf.HttpdConf(*args, **kwargs)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

Note

This parser is deprecated, please use insights.combiners.httpd_conf.HttpdConfTree instead.

Get the key value pairs separated on the first space, ignoring leading and trailing spaces.

If the file is httpd.conf, it also stores first half, before IncludeOptional conf.d/*.conf line, and the rest, to the first_half and second_half attributes respectively.

data

Dictionary of parsed data with the key being the option and the value a list of named tuples with the following properties:

  • value - the value of the keyword.

  • line - the complete line as found in the config file.

The reason why it is a list is to store data for directives which can use selective overriding, such as UserDir.

Type

dict

first_half

Parsed data from main config file before inclusion of other files in the same format as data.

Type

dict

second_half

Parsed data from main config file after inclusion of other files in the same format as data.

Type

dict

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.httpd_conf.ParsedData(value, line, section, section_name, file_name, file_path)

Bases: tuple

namedtuple: Type for storing the parsed httpd configuration’s directive information.

property file_name
property file_path
property line
property section
property section_name
property value
insights.parsers.httpd_conf.dict_deep_merge(tgt, src)[source]

Utility function to merge the source dictionary src to the target dictionary recursively

Note

The type of the values in the dictionary can only be dict or list

Parameters
  • tgt (dict) -- The target dictionary

  • src (dict) -- The source dictionary
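Under the note's assumption that values are only dicts or lists, a recursive merge of this shape could look like the following sketch (not necessarily the shipped implementation):

```python
def deep_merge(tgt, src):
    """Merge src into tgt in place; dicts recurse, lists concatenate."""
    for key, value in src.items():
        if key in tgt and isinstance(tgt[key], dict) and isinstance(value, dict):
            deep_merge(tgt[key], value)       # recurse into nested dicts
        elif key in tgt and isinstance(tgt[key], list):
            tgt[key] = tgt[key] + value       # concatenate lists
        else:
            tgt[key] = value                  # new key: take source value

tgt = {"a": {"x": [1]}, "b": [1]}
src = {"a": {"x": [2], "y": [3]}, "b": [2]}
deep_merge(tgt, src)
```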

Apache httpd logs

Modules for parsing the log files of httpd service. Parsers include:

HttpdSSLErrorLog - file ssl_error_log

HttpdErrorLog - file error_log

Httpd24HTTPDErrorLog - file httpd24_httpd_error_log

JBCSHTTPD24HttpdErrorLog - file jbcs_httpd24_httpd_error_log

HttpdSSLAccessLog - file ssl_access_log

HttpdAccessLog - file access_log

Note

Please refer to the super-class insights.core.LogFileOutput for more usage information.

class insights.parsers.httpd_log.Httpd24HttpdErrorLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing httpd error_log file.

class insights.parsers.httpd_log.HttpdAccessLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing httpd access_log file.

class insights.parsers.httpd_log.HttpdErrorLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing httpd error_log file.

class insights.parsers.httpd_log.HttpdSSLAccessLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing httpd ssl_access_log file.

class insights.parsers.httpd_log.HttpdSSLErrorLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing httpd ssl_error_log file.

class insights.parsers.httpd_log.JBCSHttpd24HttpdErrorLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing httpd error_log file.

HttpdOnNFSFilesCount - datasource httpd_on_nfs

Shared parsers for parsing output of the datasource httpd_on_nfs.

class insights.parsers.httpd_open_nfs.HttpdOnNFSFilesCount(context)[source]

Bases: insights.core.JSONParser

This class provides processing for the output of the datasource of httpd_on_nfs

The content collected by insights-client:

{"http_ids": [1787,2399], "nfs_mounts": ["/data", "/www"], "open_nfs_files": 1000}

Examples

>>> httpon_nfs.http_ids == [1787,2399]
True
>>> httpon_nfs.nfs_mounts == ["/data", "/www"]
True
>>> httpon_nfs.open_nfs_files == 1000
True
data

dict with keys “http_ids”, “nfs_mounts” and “open_nfs_files”

Type

dict

http_ids

contains all httpd process ids

Type

list

nfs_mounts

contains all nfs v4 mount points

Type

list

open_nfs_files

counting number of all httpd open files on nfs v4 mount points

Type

number

parse_content(content)[source]

This method must be implemented by classes based on this class.

IfCFG - files /etc/sysconfig/network-scripts/ifcfg-*

IfCFG is a parser for the network interface definition files in /etc/sysconfig/network-scripts. These are pulled into the network scripts using source, so they are mainly bash environment declarations of the form KEY=value. These are stored in the data property as a dictionary. Quotes surrounding the value are stripped.

Three options are handled differently:

  • BONDING_OPTS is usually a quoted list of key=value arguments separated by spaces.

  • TEAM_CONFIG and TEAM_PORT_CONFIG are treated as JSON stored as a single string. Double quotes within the string are escaped using double back slashes, and these are removed so that the quoting is preserved.

Because this parser reads multiple files, the interfaces are stored as a list within the parser and need to be iterated through in order to find specific interfaces.

Sample configuration from a teamed interface in file /etc/sysconfig/network-scripts/ifcfg-team1:

DEVICE=team1
DEVICETYPE=Team
ONBOOT=yes
NETMASK=255.255.252.0
IPADDR=192.168.0.1
TEAM_CONFIG='{"runner": {"name": "lacp", "active": "true", "tx_hash": ["eth", "ipv4"]}, "tx_balancer": {"name": "basic"}, "link_watch": {"name": "ethtool"}}'
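A hedged sketch of the handling described above: plain KEY=value assignment with surrounding quotes stripped, and TEAM_CONFIG decoded as JSON once the escaping is removed (parse_ifcfg_line is a hypothetical helper, not the parser's actual code):

```python
import json

def parse_ifcfg_line(key_values, line):
    """Parse one KEY=value declaration into the key_values dict."""
    key, _, value = line.partition("=")
    key, value = key.strip(), value.strip().strip("\"'")
    if key in ("TEAM_CONFIG", "TEAM_PORT_CONFIG"):
        # remove backslash escaping, then decode the JSON payload
        value = json.loads(value.replace("\\", ""))
    key_values[key] = value

data = {}
parse_ifcfg_line(data, "DEVICE=team1")
parse_ifcfg_line(data, 'TEAM_CONFIG=\'{"runner": {"name": "lacp"}}\'')
```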

Examples

>>> for nic in shared[IfCFG]: # Parser contains list of all interfaces
...     print('NIC:', nic.ifname)
...     print('IP address:', nic['IPADDR'])
...     if 'TEAM_CONFIG' in nic:
...         print('Team runner name:', nic['TEAM_CONFIG']['runner']['name'])
...
NIC: team1
IP address: 192.168.0.1
Team runner name: lacp
class insights.parsers.ifcfg.IfCFG(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

Parse an ifcfg- file and return a dict containing the configuration file information. The "iface" key is the interface name parsed from the file name. TEAM_CONFIG and TEAM_PORT_CONFIG are returned as dicts of the user configuration, and BONDING_OPTS is also returned as a dict.

Properties:

ifname (str): The interface name as defined in the name of the file (i.e. the part after ifcfg-).

property bonding_mode

(int) the numeric value of bonding mode, or None if no bonding mode is found.

property has_empty_line

(bool) True if the file has empty line else False.

parse_content(content)[source]

This method must be implemented by classes based on this class.

InitProcessCgroup - File /proc/1/cgroup

This parser reads the content of /proc/1/cgroup, which shows the cgroup details of the init process. The content is in a key-value-like format. This information can also be used to check whether the archive comes from a container or a host.

class insights.parsers.init_process_cgroup.InitProcessCgroup(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LegacyItemAccess

Class InitProcessCgroup parses the content of the /proc/1/cgroup.

is_container

Used to check whether an archive is from a host or a container. True if the archive is from a container.

Type

bool

A small sample of the content of this file looks like:

11:hugetlb:/
10:memory:/
9:devices:/
8:pids:/
7:perf_event:/
6:net_prio,net_cls:/
5:blkio:/
4:freezer:/
3:cpuacct,cpu:/
2:cpuset:/
1:name=systemd:/
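Each line above is hierarchy-id:subsystems:path. A sketch of turning those lines into the {subsystem: [id, path]} mapping shown in the Examples (hypothetical code, not the parser itself):

```python
def parse_cgroup(lines):
    """Map each cgroup subsystem to its [hierarchy_id, path] pair."""
    data = {}
    for line in lines:
        hierarchy, subsystems, path = line.strip().split(":", 2)
        for subsystem in subsystems.split(","):  # e.g. "net_prio,net_cls"
            data[subsystem] = [hierarchy, path]
    return data

cgroups = parse_cgroup(["10:memory:/", "6:net_prio,net_cls:/"])
```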

Examples

>>> type(cgroupinfo)
<class 'insights.parsers.init_process_cgroup.InitProcessCgroup'>
>>> cgroupinfo["memory"]
['10', '/']
>>> cgroupinfo.is_container
False
parse_content(content)[source]

This method must be implemented by classes based on this class.

InitScript - files /etc/rc.d/init.d/*

InitScript is a parser for the initscripts in /etc/rc.d/init.d.

Because this parser reads multiple files, the initscripts are stored as a list within the parser and need to be iterated through in order to find specific initscripts.

Examples

>>> for initscript in shared[InitScript]: # Parser contains list of all initscripts
...     print("Name:", initscript.file_name)
...
Name: netconsole
Name: rhnsd
exception insights.parsers.initscript.EmptyFileException[source]

Bases: insights.parsers.ParseException

class insights.parsers.initscript.InitScript(context)[source]

Bases: insights.core.Parser

Parse initscript files. Each item is a dictionary with following fields:

file_name

initscript name

Type

str

file_path

initscript path without leading ‘/’

Type

str

file_content

initscript content, line by line

Type

list

Because some files may not be real initscripts, to determine whether a file in /etc/rc.d/init.d/ is an initscript, the parser checks for # chkconfig: <values> or # Provides: <names> strings in the script. If either matches, it assumes the file is an initscript.

Otherwise, it tries to find out if it is by searching for

  • shebang (e.g. #!/bin/bash) on first line

  • start/stop/status tokens in non-commented out lines

If 3 or more of these items are found (half the items searched for plus one; this count is called confidence in the code, e.g. shebang + start + stop), then the file is assumed to be an initscript.

Otherwise the parser raises a ParseException.
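The confidence heuristic might be sketched like this (looks_like_initscript is hypothetical; the real parser's scoring details may differ):

```python
import re

def looks_like_initscript(lines):
    """Score a script: shebang plus start/stop/status tokens; 3+ passes."""
    confidence = 1 if lines and lines[0].startswith("#!") else 0
    code = [l for l in lines if not l.strip().startswith("#")]  # skip comments
    for token in ("start", "stop", "status"):
        if any(re.search(r"\b%s\b" % token, l) for l in code):
            confidence += 1
    return confidence >= 3

script = ["#!/bin/bash", "case $1 in", "start) ;;", "stop) ;;", "esac"]
```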

parse_content(content)[source]

Raises

NotInitscriptException -- When the file is not recognized as an initscript.

exception insights.parsers.initscript.NotInitscriptException[source]

Bases: insights.parsers.ParseException

Installed product IDs

InstalledProductIDs - command find /etc/pki/product-default/ /etc/pki/product/ -name '*pem' -exec rct cat-cert --no-content '{}' \;

This module provides a parser for information about certificates for Red Hat product subscriptions.

class insights.parsers.installed_product_ids.InstalledProductIDs(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parses the output of the command:

find /etc/pki/product-default/ /etc/pki/product/ -name '*pem' -exec rct cat-cert --no-content '{}' \;

Sample output from the unfiltered command looks like:

+-------------------------------------------+
Product Certificate
+-------------------------------------------+

Certificate:
    Path: /etc/pki/product-default/69.pem
    Version: 1.0
    Serial: 12750047592154749739
    Start Date: 2017-06-28 18:05:10+00:00
    End Date: 2037-06-23 18:05:10+00:00

Subject:
    CN: Red Hat Product ID [4f9995e0-8dc4-4b4f-acfe-4ef1264b94f3]

Issuer:
    C: US
    CN: Red Hat Entitlement Product Authority
    O: Red Hat, Inc.
    OU: Red Hat Network
    ST: North Carolina
    emailAddress: ca-support@redhat.com

Product:
    ID: 69
    Name: Red Hat Enterprise Linux Server
    Version: 7.4
    Arch: x86_64
    Tags: rhel-7,rhel-7-server
    Brand Type:
    Brand Name:


+-------------------------------------------+
    Product Certificate
+-------------------------------------------+

Certificate:
    Path: /etc/pki/product/69.pem
    Version: 1.0
    Serial: 12750047592154751271
    Start Date: 2018-04-13 11:23:50+00:00
    End Date: 2038-04-08 11:23:50+00:00

Subject:
    CN: Red Hat Product ID [f3c92a95-26be-4bdf-800f-02c044503896]

Issuer:
    C: US
    CN: Red Hat Entitlement Product Authority
    O: Red Hat, Inc.
    OU: Red Hat Network
    ST: North Carolina
    emailAddress: ca-support@redhat.com

Product:
    ID: 69
    Name: Red Hat Enterprise Linux Server
    Version: 7.6
    Arch: x86_64
    Tags: rhel-7,rhel-7-server
    Brand Type:
    Brand Name:

Filters have been added to the parser so that only the ID element will be collected.

ids

Set of unique product ID strings collected by the command.

Type

set

Examples

>>> type(products)
<class 'insights.parsers.installed_product_ids.InstalledProductIDs'>
>>> list(products.ids)
['69']
parse_content(content)[source]

Parse command output
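Because the filters leave only the ID: lines from each certificate, collecting the product IDs reduces to scanning for that prefix and de-duplicating the values. A minimal standalone sketch of the idea (illustrative only, not the parser's actual code):

```python
def product_ids(rct_output):
    # With filtering applied, only "ID:" lines survive from each
    # certificate; collect their unique values into a set.
    ids = set()
    for line in rct_output.splitlines():
        line = line.strip()
        if line.startswith('ID:'):
            ids.add(line.split(':', 1)[1].strip())
    return ids
```

Running this over the two sample certificates above yields {'69'}, matching the ids attribute in the Examples.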

InstalledRpms - Command rpm -qa

The InstalledRpms class parses the output of the rpm -qa command. Each line is parsed and stored in an InstalledRpm object. The rpm -qa command may output data in several formats, and each format is handled by the parsing routines of this class. The basic format is the package string alone, as shown in the Examples.

Sample input data:

a52dec-0.7.4-18.el7.nux.x86_64  Tue 14 Jul 2015 09:25:38 AEST   1398536494
aalib-libs-1.4.0-0.22.rc5.el7.x86_64    Tue 14 Jul 2015 09:25:40 AEST   1390535634
abrt-2.1.11-35.el7.x86_64       Wed 09 Nov 2016 14:52:01 AEDT   1446193355
...
kernel-3.10.0-230.el7synaptics.1186112.1186106.2.x86_64 Wed 20 May 2015 11:24:00 AEST   1425955944
kernel-3.10.0-267.el7.x86_64    Sat 24 Oct 2015 09:56:17 AEDT   1434466402
kernel-3.10.0-327.36.3.el7.x86_64       Wed 09 Nov 2016 14:53:25 AEDT   1476954923
kernel-headers-3.10.0-327.36.3.el7.x86_64       Wed 09 Nov 2016 14:20:59 AEDT   1476954923
kernel-tools-3.10.0-327.36.3.el7.x86_64 Wed 09 Nov 2016 15:09:42 AEDT   1476954923
kernel-tools-libs-3.10.0-327.36.3.el7.x86_64    Wed 09 Nov 2016 14:52:13 AEDT   1476954923
kexec-tools-2.0.7-38.el7_2.1.x86_64     Wed 09 Nov 2016 14:48:21 AEDT   1452845178
...
zlib-1.2.7-15.el7.x86_64        Wed 09 Nov 2016 14:21:19 AEDT   1431443476
zsh-5.0.2-14.el7_2.2.x86_64     Wed 09 Nov 2016 15:13:19 AEDT   1464185248

Examples

>>> type(rpms)
<class 'insights.parsers.installed_rpms.InstalledRpms'>
>>> 'openjpeg-libs' in rpms
True
>>> rpms.corrupt
False
>>> rpms.get_max('openjpeg-libs')
0:openjpeg-libs-1.3-9.el6_3
>>> type(rpms.get_max('openjpeg-libs'))
<class 'insights.parsers.installed_rpms.InstalledRpm'>
>>> rpms.get_min('openjpeg-libs')
0:openjpeg-libs-1.3-9.el6_3
>>> rpm = rpms.get_max('openssh-server')
>>> rpm
0:openssh-server-5.3p1-104.el6
>>> type(rpm)
<class 'insights.parsers.installed_rpms.InstalledRpm'>
>>> rpm.package
'openssh-server-5.3p1-104.el6'
>>> rpm.nvr
'openssh-server-5.3p1-104.el6'
>>> rpm.source
>>> rpm.name
'openssh-server'
>>> rpm.version
'5.3p1'
>>> rpm.release
'104.el6'
>>> rpm.arch
'x86_64'
>>> rpm.epoch
'0'
>>> from insights.parsers.installed_rpms import InstalledRpm
>>> rpm2 = InstalledRpm.from_package('openssh-server-6.0-100.el6.x86_64')
>>> rpm == rpm2
False
>>> rpm > rpm2
False
>>> rpm < rpm2
True
insights.parsers.installed_rpms.Installed

alias of insights.parsers.installed_rpms.InstalledRpms

class insights.parsers.installed_rpms.InstalledRpm(data)[source]

Bases: object

Class for holding information about one installed RPM.

This class is usually created from a dictionary with the following structure:

{
   'name': 'package name',
   'version': 'package version',
   'release': 'package release',
   'arch': 'package architecture'
 }

It may also contain supplementary information from SOS report or epoch information from JSON.

Factory methods are provided such as from_package to create an object from a short package string:

kernel-devel-3.10.0-327.36.1.el7.x86_64

from_json to create an object from JSON:

{"name": "kernel-devel",
 "version": "3.10.0",
 "release": "327.36.1.el7",
 "arch": "x86_64"}

and from_line to create an object from a long package string:

('kernel-devel-3.10.0-327.36.1.el7.x86_64'
 '                                '
 'Wed May 18 14:16:21 2016' '       '
 '1410968065' '     '
 'Red Hat, Inc.' '  '
 'hs20-bc2-4.build.redhat.com' '    '
 '8902150305004...b3576ff37da7e12e2285358267495ac48a437d4eefb3213' '        '
 'RSA/8, Mon Aug 16 11:14:17 2010, Key ID 199e2f91fd431d51')
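The from_package split can be sketched in isolation: the architecture comes off the end at the last dot, then the last two dashes bound the release and version. A minimal sketch (not the library's implementation, which additionally validates the arch against KNOWN_ARCHITECTURES and handles epoch prefixes):

```python
def split_nvra(package_string):
    # Split "name-version-release.arch" right-to-left: arch is the text
    # after the final dot; release and version follow from the last two
    # '-' separators. Assumes the string ends in a recognized arch.
    nvr, arch = package_string.rsplit('.', 1)
    name, version, release = nvr.rsplit('-', 2)
    return {'name': name, 'version': version,
            'release': release, 'arch': arch}
```

For 'kernel-devel-3.10.0-327.36.1.el7.x86_64' this recovers name 'kernel-devel', version '3.10.0', release '327.36.1.el7', and arch 'x86_64'.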
PRODUCT_SIGNING_KEYS = ['F76F66C3D4082792', '199e2f91fd431d51', '5326810137017186', '45689c882fa658e0', '219180cddb42a60e', '7514f77d8366b0d9', 'fd372689897da07a', '938a80caf21541eb', '08b871e6a5787476', 'E191DDB2C509E861']

List of package-signing keys. Should be kept up to date according to https://access.redhat.com/security/team/key/

Type

list
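The redhat_signed attribute can be derived from the pgpsig field (see the from_line format above, which ends in "Key ID ..."), checked against a key list like PRODUCT_SIGNING_KEYS. A minimal sketch of how such a check might look (the key list here is an illustrative subset, and this is not the library's exact logic):

```python
# Illustrative subset of signing key IDs, not the full list above.
KNOWN_KEYS = ['199e2f91fd431d51', 'F76F66C3D4082792']

def redhat_signed(pgpsig):
    # The signature field ends with "Key ID <id>"; return True when the
    # id matches a known key, None when there is nothing to check.
    if not pgpsig or 'Key ID' not in pgpsig:
        return None
    key_id = pgpsig.rsplit('Key ID', 1)[1].strip()
    return any(key_id.lower() == k.lower() for k in KNOWN_KEYS)
```

Applied to the sample signature 'RSA/8, Mon Aug 16 11:14:17 2010, Key ID 199e2f91fd431d51' this returns True.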

SOSREPORT_KEYS = ['installtime', 'buildtime', 'vendor', 'buildserver', 'pgpsig', 'pgpsig_short']

List of keys for SOS Report RPM information.

Type

list

arch = None

RPM package architecture.

Type

str

classmethod from_json(json_line)[source]

The object of this class is usually created from a dictionary. Alternatively it can be created from a JSON line.

Parameters

json_line (str) --

JSON string in the following format (shown as Python string):

'{"name": "kernel-devel", "version": "3.10.0", "release": "327.36.1.el7", "arch": "x86_64"}'

classmethod from_line(line)[source]

The object of this class is usually created from a dictionary. Alternatively it can be created from a package line.

Parameters

line (str) --

package line in the following format (shown as Python string):

('kernel-devel-3.10.0-327.36.1.el7.x86_64'
 '                                '
 'Wed May 18 14:16:21 2016' '       '
 '1410968065' '     '
 'Red Hat, Inc.' '  '
 'hs20-bc2-4.build.redhat.com' '    '
 '8902150305004...b3576ff37da7e12e2285358267495ac48a437d4eefb3213' '        '
 'RSA/8, Mon Aug 16 11:14:17 2010, Key ID 199e2f91fd431d51')

classmethod from_package(package_string)[source]

The object of this class is usually created from a dictionary. Alternatively it can be created from a package string.

Parameters

package_string (str) --

package string in the following format (shown as Python string):

'kernel-devel-3.10.0-327.36.1.el7.x86_64'

name = None

RPM package name.

Type

str

property nevra

Package string in the format:

name-epoch:version-release.arch
Type

str

property nvr

Package name-version-release string.

Type

str

property nvra

Package name-version-release.arch string.

Type

str

property package

Package name-version-release string.

Type

str

property package_with_epoch

Package string in the format:

name-epoch:version-release
Type

str

redhat_signed = None

True when the RPM package is signed by Red Hat, False when it is not signed by Red Hat, None when there is insufficient information to determine it.

Type

bool

release = None

RPM package release.

Type

str

property source

Returns source RPM of this RPM object.

Type

InstalledRpm

version = None

RPM package version.

Type

str

class insights.parsers.installed_rpms.InstalledRpms(*args, **kwargs)[source]

Bases: insights.core.CommandParser, insights.parsers.installed_rpms.RpmList

A parser for working with data containing a list of installed RPM files on the system and related information.

property corrupt

True if RPM database is corrupted, else False.

Type

bool

errors = None

List of input lines that indicate an error acquiring the data on the client.

Type

list

packages = None

Dictionary of RPMs keyed by package name.

Type

dict (InstalledRpm)

parse_content(content)[source]

This method must be implemented by classes based on this class.

unparsed = None

List of input lines that raised an exception during parsing.

Type

list

insights.parsers.installed_rpms.KNOWN_ARCHITECTURES = ['x86_64', 'i386', 'i486', 'i586', 'i686', 'src', 'ia64', 'ppc', 'ppc64', 's390', 's390x', 'amd64', '(none)', 'noarch', 'alpha', 'alphaev4', 'alphaev45', 'alphaev5', 'alphaev56', 'alphaev6', 'alphaev67', 'alphaev68', 'alphaev7', 'alphapca56', 'arm64', 'armv5tejl', 'armv5tel', 'armv6l', 'armv7hl', 'armv7hnl', 'armv7l', 'athlon', 'armhfp', 'geode', 'ia32e', 'nosrc', 'ppc64iseries', 'ppc64le', 'ppc64p7', 'ppc64pseries', 'sh3', 'sh4', 'sh4a', 'sparc', 'sparc64', 'sparc64v', 'sparcv8', 'sparcv9', 'sparcv9v', 'aarch64']

List of recognized architectures.

This list is taken from the PDC (Product Definition Center) available here https://pdc.fedoraproject.org/rest_api/v1/arches/.

Type

list

insights.parsers.installed_rpms.Rpm

alias of insights.parsers.installed_rpms.InstalledRpm

class insights.parsers.installed_rpms.RpmList[source]

Bases: object

Mixin class providing __contains__, get_max, get_min, newest, and oldest implementations for components that handle rpms.

get_max(package_name)[source]

Returns the highest version of the installed package with the given name.

Parameters

package_name (str) -- Installed RPM package name such as ‘bash’

Returns

Installed RPM with highest version

Return type

InstalledRpm

get_min(package_name)[source]

Returns the lowest version of the installed package with the given name.

Parameters

package_name (str) -- Installed RPM package name such as ‘bash’.

Returns

Installed RPM with lowest version

Return type

InstalledRpm

property is_hypervisor

Warning

This method is deprecated, please use insights.parsers.virt_what.VirtWhat which uses the command virt-what to check the hypervisor type.

bool: True if “.el[6|7]ev” exists in the release of the vdsm package, else False.

newest(package_name)

Returns the highest version of the installed package with the given name.

Parameters

package_name (str) -- Installed RPM package name such as ‘bash’

Returns

Installed RPM with highest version

Return type

InstalledRpm

oldest(package_name)

Returns the lowest version of the installed package with the given name.

Parameters

package_name (str) -- Installed RPM package name such as ‘bash’.

Returns

Installed RPM with lowest version

Return type

InstalledRpm

insights.parsers.installed_rpms.from_package(package_string)

The object of this class is usually created from a dictionary. Alternatively it can be created from a package string.

Parameters

package_string (str) --

package string in the following format (shown as Python string):

'kernel-devel-3.10.0-327.36.1.el7.x86_64'

insights.parsers.installed_rpms.pad_version(left, right)[source]

Returns two sequences of the same length so that they can be compared. The shorter of the two arguments is lengthened by inserting extra zeros before non-integer components. The algorithm attempts to align character components.
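The need for padding comes from comparing version sequences of unequal length, such as '5.3p1' against '6.0'. A simplified standalone sketch (this appends zeros rather than reproducing pad_version's full character-alignment behavior):

```python
import re

def components(version):
    # "5.3p1" -> [5, 3, 'p', 1]: alternating integer and letter runs.
    return [int(p) if p.isdigit() else p
            for p in re.findall(r'\d+|[A-Za-z]+', version)]

def pad(left, right):
    # Simplified padding sketch: extend the shorter sequence with zeros
    # so the two can be compared element-wise. The real algorithm also
    # aligns character components, as described above.
    width = max(len(left), len(right))
    return (left + [0] * (width - len(left)),
            right + [0] * (width - len(right)))
```

With this, components('5.3p1') padded against components('6.0') gives ([5, 3, 'p', 1], [6, 0, 0, 0]), and the comparison is decided at the first element.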

Interrupts - file /proc/interrupts

Provides parsing for contents of /proc/interrupts. The contents of a typical interrupts file looks like:

           CPU0       CPU1       CPU2       CPU3
  0:         37          0          0          0  IR-IO-APIC   2-edge      timer
  1:          3          2          1          0  IR-IO-APIC   1-edge      i8042
  8:          0          1          0          0  IR-IO-APIC   8-edge      rtc0
  9:      11107       2316       4040       1356  IR-IO-APIC   9-fasteoi   acpi
NMI:        210         92        179         96   Non-maskable interrupts
LOC:    7561411    2488524    6527767    2448192   Local timer interrupts
ERR:          0
MIS:          0

The information is parsed by the Interrupts class. The information is stored as a list of dictionaries in order corresponding to the output. The counts in the CPU# columns are represented in a list. The following is a sample of the parsed information stored in an Interrupts class object:

[
    { 'irq': '0',
      'num_cpus': 4,
      'counts': [37, 0, 0, 0],


    { 'irq': 'MIS',
      'num_cpus': 4,
      'counts': [0, ]}
]
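The row-parsing approach can be sketched in isolation. This illustrative function (not the parser's actual code) derives num_cpus from the heading line, stops collecting counts at the first non-numeric column, and keeps any remainder as the type/device field:

```python
def parse_interrupts(lines):
    # First line lists the CPU headings; every later line is
    # "irq: count ... [type/device]".
    cpu_count = len(lines[0].split())
    rows = []
    for line in lines[1:]:
        parts = line.split()
        irq = parts[0].rstrip(':')
        counts = []
        for p in parts[1:]:
            if p.isdigit():
                counts.append(int(p))
            else:
                break
        row = {'irq': irq, 'num_cpus': cpu_count, 'counts': counts}
        rest = parts[1 + len(counts):]
        if rest:
            row['type_device'] = ' '.join(rest)
        rows.append(row)
    return rows
```

Note that rows like ERR and MIS carry a single count and no type/device field, matching the parsed structure shown above.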

Examples

>>> int_info = shared[Interrupts]
>>> int_info.data[0]
{'irq': '0', 'num_cpus': 4, 'counts': [37, 0, 0, 0],
 'type_device': 'IR-IO-APIC   2-edge      timer'}
>>> int_info.num_cpus
4
>>> int_info.get('i8042')
[{'irq': '1', 'num_cpus': 4, 'counts': [3, 2, 1, 0],
  'type_device': 'IR-IO-APIC   1-edge      i8042'}]
>>> [i['irq'] for i in int_info if i['counts'][0] > 1000]
['9', 'LOC']
class insights.parsers.interrupts.Interrupts(context)[source]

Bases: insights.core.Parser

Parse contents of /proc/interrupts.

data

List of dictionaries with each entry representing a row of the command output after the first line of headings.

Type

list of dict

Raises

ParseException -- Returned if first line is invalid, or no data is found to parse.

get(filter)[source]

list: Returns list of records containing filter in the type/device field.

property num_cpus

Returns total number of CPUs.

Type

int

parse_content(content)[source]

This method must be implemented by classes based on this class.

Parsers for ip command outputs

This module provides the following parsers:

IpAddr - command ip addr

RouteDevices - command ip route show table all

IpNeighParser - command ip neigh show nud all

IpNetnsExecNamespaceLsofI - command /sbin/ip netns exec [network-namespace] lsof -i

This module provides the class IpNetnsExecNamespaceLsofI for parsing the output of the command /sbin/ip netns exec [network-namespace] lsof -i. Filters have been added so that sensitive information can be filtered out; this results in modification of the original structure of the data.

class insights.parsers.ip_netns_exec_namespace_lsof.IpNetnsExecNamespaceLsofI(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

This class provides processing for the output of command /sbin/ip netns exec [network-namespace] lsof -i.

Sample command output:

COMMAND   PID   USER    FD  TYPE  DEVICE     SIZE/OFF  NODE NAME
neutron-n 975   root    5u  IPv4  6482691    0t0        TCP *:http (LISTEN)

Examples

>>> len(ns_lsof.search(command="neutron-n"))
1
>>> ns_lsof.data[0]["command"] == "neutron-n"
True
fields

List of KeyValue namedtuples, one for each line of the command output.

Type

list

data

List of key-value pairs derived from the command output.

Type

list

Raises

SkipException -- When the file is empty or data is useless.

keyvalue

alias of KeyValue

parse_content(content)[source]

This method must be implemented by classes based on this class.

search(**kw)[source]

Search item based on key value pair.

Example

>>> len(ns_lsof.search(command="neutron-n")) == 1
True
>>> len(ns_lsof.search(user="nobody")) == 0
True

IpaupgradeLog - file /var/log/ipaupgrade.log

This file records information about the IPA server upgrade process run by the command ipa-server-upgrade.

class insights.parsers.ipaupgrade_log.IpaupgradeLog(context)[source]

Bases: insights.core.LogFileOutput

This parser is used to parse the content of file /var/log/ipaupgrade.log.

Note

Please refer to its super-class insights.core.LogFileOutput

Typical content of ipaupgrade.log file is:

2017-08-07T07:36:50Z DEBUG Starting external process
2017-08-07T07:36:50Z DEBUG args=/bin/systemctl is-active pki-tomcatd@pki-tomcat.service
2017-08-07T07:36:50Z DEBUG Process finished, return code=0
2017-08-07T07:36:50Z DEBUG stdout=active
2017-08-07T07:41:50Z ERROR IPA server upgrade failed: Inspect /var/log/ipaupgrade.log and run command ipa-server-upgrade manually.

Example

>>> log = shared[IpaupgradeLog]
>>> len(list(log.get('DEBUG')))
4
>>> from datetime import datetime
>>> len(log.get_after(datetime(2017, 8, 7, 7, 37, 30)))
1

IPCS commands

Shared parsers for parsing output of the ipcs commands.

IpcsM - command ipcs -m

IpcsMP - command ipcs -m -p

IpcsS - command ipcs -s

IpcsSI - command ipcs -s -i {semaphore ID}

class insights.parsers.ipcs.IPCS(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Base class for parsing the output of ipcs -X command in which the X could be m, s or q.

get(sid, default=None)[source]

Returns the value of the sid key in self.data, or default if the key is not present.

Parameters
  • sid (str) -- Key to get from self.data.

  • default (str) -- Default value to return if key is not present.

Returns

the stored dict item, or the default if not found.

Return type

{dict}

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.ipcs.IpcsM(context, extra_bad_lines=[])[source]

Bases: insights.parsers.ipcs.IPCS

Class for parsing the output of ipcs -m command.

Typical output of the command is:

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x0052e2c1 0          postgres   600        37879808   26
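The table layout above parses naturally into a dict keyed by shmid, which is how the examples below index it. A standalone sketch of the idea (assumes the two heading lines shown; blank trailing columns such as status are simply absent from a row's dict):

```python
def parse_ipcs_table(lines):
    # Line 0 is the "------ ... --------" banner, line 1 the column
    # headings; zip each data row against the headings and key the
    # result by the shmid column.
    header = lines[1].split()
    data = {}
    for line in lines[2:]:
        parts = line.split()
        if not parts:
            continue
        row = dict(zip(header, parts))
        data[row['shmid']] = row
    return data
```

With the sample above, the segment is available as data['0'] and data['0']['bytes'] is '37879808'.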

Examples

>>> '0' in shm
True
>>> shm.get('0', {}).get('bytes')
'37879808'
>>> '2602' in shm
False
>>> shm.get('2602', {}).get('bytes')

class insights.parsers.ipcs.IpcsMP(context, extra_bad_lines=[])[source]

Bases: insights.parsers.ipcs.IPCS

Class for parsing the output of ipcs -m -p command.

Typical output of the command is:

------ Shared Memory Creator/Last-op --------
shmid      owner      cpid       lpid
0          postgres   1833       14111

Examples

>>> '0' in shmp
True
>>> shmp.get('0').get('cpid')
'1833'
class insights.parsers.ipcs.IpcsS(context, extra_bad_lines=[])[source]

Bases: insights.parsers.ipcs.IPCS

Class for parsing the output of ipcs -s command.

Typical output of the command is:

------ Semaphore Arrays --------
key        semid      owner      perms      nsems
0x00000000 557056     apache     600        1
0x00000000 589825     apache     600        1
0x00000000 131074     apache     600        1
0x0052e2c1 163843     postgres   600        17
0x0052e2c2 196612     postgres   600        17
0x0052e2c3 229381     postgres   600        17
0x0052e2c4 262150     postgres   600        17
0x0052e2c5 294919     postgres   600        17
0x0052e2c6 327688     postgres   600        17
0x0052e2c7 360457     postgres   600        17
0x00000000 622602     apache     600        1
0x00000000 655371     apache     600        1
0x00000000 688140     apache     600        1

Examples

>>> '622602' in sem
True
>>> '262150' in sem
True
>>> sem.get('262150').get('owner')
'postgres'
class insights.parsers.ipcs.IpcsSI(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class for parsing the output of the ipcs -s -i ## command, where ## is replaced with a specific semid.

Typical output of the command is:

# ipcs -s -i 65536

Semaphore Array semid=65536
uid=500  gid=501     cuid=500    cgid=501
mode=0600, access_perms=0600
nsems = 8
otime = Sun May 12 14:44:53 2013
ctime = Wed May  8 22:12:15 2013
semnum     value      ncount     zcount     pid
0          1          0          0          0
1          1          0          0          0
2          1          0          0          6151
3          1          0          0          2265
4          1          0          0          0
5          1          0          0          0
6          0          7          0          6152
7          1          0          0          4390
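The pid_list attribute shown below can be derived from the semnum table alone: collect the last column of each row after the heading and sort the distinct values. A standalone sketch (not the parser's actual code):

```python
def pid_list(lines):
    # Collect the distinct PIDs from the semnum table rows of
    # `ipcs -s -i <semid>` output, sorted numerically but returned
    # as strings, matching the documented attribute.
    pids = set()
    in_table = False
    for line in lines:
        if line.strip().startswith('semnum'):
            in_table = True
            continue
        if in_table and line.split():
            pids.add(line.split()[-1])
    return sorted(pids, key=int)
```

Applied to the sample table this yields ['0', '2265', '4390', '6151', '6152'], as in the Examples.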

Examples

>>> semi.semid
'65536'
>>> semi.pid_list
['0', '2265', '4390', '6151', '6152']
parse_content(content)[source]

This method must be implemented by classes based on this class.

property pid_list

Return the ID list of the processes which use this semaphore.

Returns

the processes’ ID list

Return type

[list]

property semid

Return the semaphore ID.

Returns

the semaphore ID.

Return type

str

IPTables configuration

Module for processing output of the iptables-save and ip6tables-save commands. Parsers included are:

IPTables - command iptables-save

IP6Tables - command ip6tables-save

IPTabPermanent - file /etc/sysconfig/iptables

IP6TabPermanent - file /etc/sysconfig/ip6tables

Sample input data looks like:

# Generated by iptables-save v1.4.7 on Tue Aug 16 10:18:43 2016
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [769:196899]
:REJECT-LOG - [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -s 192.168.0.0/24 -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A REJECT-LOG -p tcp -j REJECT --reject-with tcp-reset
COMMIT
# Completed on Tue Aug 16 10:18:43 2016
# Generated by iptables-save v1.4.7 on Tue Aug 16 10:18:43 2016
*mangle
:PREROUTING ACCEPT [451:22060]
:INPUT ACCEPT [451:22060]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [594:47151]
:POSTROUTING ACCEPT [594:47151]
COMMIT
# Completed on Tue Aug 16 10:18:43 2016
# Generated by iptables-save v1.4.7 on Tue Aug 16 10:18:43 2016
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [3:450]
:OUTPUT ACCEPT [3:450]
COMMIT
# Completed on Tue Aug 16 10:18:43 2016
  • Each table of iptables starts with a # Generated by ... line.

  • Each table starts with *<table-name>, for example *filter.

  • Each chain specifications starts with a : sign.

  • A chain specification looks like :<chain-name> <chain-policy> [<packet-counter>:<byte-counter>]

  • The chain-name may be for example INPUT.

  • Each iptables rule starts with a - sign.
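Following the bullet points above, a chain-specification line can be unpacked with a single regular expression. A standalone sketch (illustrative only) that produces the dict shape used by get_table in the Examples:

```python
import re

# :<chain-name> <chain-policy> [<packet-counter>:<byte-counter>]
CHAIN_RE = re.compile(r'^:(\S+)\s+(\S+)\s+\[(\d+):(\d+)\]$')

def parse_chain(line, table):
    # Turn e.g. ":OUTPUT ACCEPT [769:196899]" into a chain dict;
    # returns None for rule lines and comments.
    m = CHAIN_RE.match(line)
    if not m:
        return None
    name, policy, packets, bytes_ = m.groups()
    return {'name': name, 'policy': policy, 'table': table,
            'packet_counter': int(packets), 'byte_counter': int(bytes_)}
```

Note that a user-defined chain such as :REJECT-LOG - [0:0] has '-' as its policy field, which this pattern also accepts.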

Examples

>>> ipt.rules[0] == {'target': 'ACCEPT', 'chain': 'INPUT', 'rule': '-m state --state RELATED,ESTABLISHED -j ACCEPT', 'table': 'filter', 'target_options': None, 'target_action': 'jump', 'constraints': '-m state --state RELATED,ESTABLISHED'}
True
>>> ipt.get_chain('INPUT')[1] == {'target': 'ACCEPT', 'chain': 'INPUT', 'rule': '-s 192.168.0.0/24 -j ACCEPT', 'table': 'filter', 'target_options': None, 'target_action': 'jump', 'constraints': '-s 192.168.0.0/24'}
True
>>> ipt.table_chains('mangle') == {'FORWARD': [], 'INPUT': [], 'POSTROUTING': [], 'PREROUTING': [], 'OUTPUT': []}
True
>>> ipt.get_table('nat')[-1] == {'policy': 'ACCEPT', 'table': 'nat', 'byte_counter': 450, 'name': 'OUTPUT', 'packet_counter': 3}
True
class insights.parsers.iptables.IP6TabPermanent(context)[source]

Bases: insights.parsers.iptables.IPTablesConfiguration

Process ip6tables configuration saved in file /etc/sysconfig/ip6tables.

The configuration in this file is loaded by the ip6tables service when the system boots. New configuration is saved by using the service ip6tables save command. This configuration file is not available on a system running the firewalld service.

See the insights.parsers.iptables.IPTablesConfiguration base class for additional information.

class insights.parsers.iptables.IP6Tables(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.parsers.iptables.IPTablesConfiguration

Process output of the ip6tables-save command.

See the insights.parsers.iptables.IPTablesConfiguration base class for additional information.

class insights.parsers.iptables.IPTabPermanent(context)[source]

Bases: insights.parsers.iptables.IPTablesConfiguration

Process iptables configuration saved in file /etc/sysconfig/iptables.

The configuration in this file is loaded by the iptables service when the system boots. New configuration is saved by using the service iptables save command. This configuration file is not available on a system running the firewalld service.

See the insights.parsers.iptables.IPTablesConfiguration base class for additional information.

class insights.parsers.iptables.IPTables(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.parsers.iptables.IPTablesConfiguration

Process output of the iptables-save command.

See the insights.parsers.iptables.IPTablesConfiguration base class for additional information.

class insights.parsers.iptables.IPTablesConfiguration(context)[source]

Bases: insights.core.Parser

A general class for parsing iptables configuration in the iptables-save-like format.

get_chain(name, table='filter')[source]

Get the list of rules for a particular chain. Chain order is kept intact.

Parameters
  • name (str) -- chain name, e.g. INPUT

  • table (str) -- table name, defaults to filter

Returns

rules

Return type

list

get_rule(s)[source]

Get the list of rules that contain the given string.

Parameters

s (str) -- string to look for in iptables rules

Returns

rules containing given string

Return type

list

get_table(name='filter')[source]

Get the list of chains for a particular table.

Parameters

name (str) -- table name, defaults to filter

Returns

chains

Return type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.

table_chains(table='filter')[source]

Get a dict where the keys are all the chains for the given table and each value is the set of rules defined for the given chain.

Parameters

table (str) -- table name, defaults to filter

Returns

chains with set of defined rules

Return type

dict

IronicConf - file /etc/ironic/ironic.conf

This class provides parsing for the file /etc/ironic/ironic.conf. See the IniConfigFile class for more usage information.

class insights.parsers.ironic_conf.IronicConf(context)[source]

Bases: insights.core.IniConfigFile

Ironic configuration parser class, based on the IniConfigFile class.

Sample input data is in the format:

[DEFAULT]

auth_strategy=keystone
default_resource_class=baremetal
enabled_hardware_types=idrac,ilo,ipmi,redfish
enabled_bios_interfaces=no-bios
enabled_boot_interfaces=ilo-pxe,pxe
enabled_console_interfaces=ipmitool-socat,ilo,no-console
force_raw_images = True

[agent]

deploy_logs_collect=always
deploy_logs_storage_backend=local
deploy_logs_local_path=/var/log/ironic/deploy/

[cinder]

auth_url=http://1.1.1.1:5000
project_domain_name=Default
project_name=service
user_domain_name=Default

Examples

>>> ironic_conf.has_option("agent", "deploy_logs_collect")
True
>>> ironic_conf.get("DEFAULT", "auth_strategy") == "keystone"
True

IronicInspectorLog - file /var/log/ironic-inspector/ironic-inspector.log

This is a standard log parser based on the LogFileOutput class.

Sample input:

2018-12-05 17:20:41.404 25139 ERROR requests.packages.urllib3.connection [-] Certificate did not match expected hostname: 10.xx.xx.xx. Certificate: {'subjectAltName': (('DNS', '10.xx.xx.xx'),), 'notBefore': u'Dec  4 12:02:36 2018 GMT', 'serialNumber': u'616460101648CCDF5727C', 'notAfter': 'Jun 21 21:40:11 2019 GMT', 'version': 3L, 'subject': ((('commonName', u'10.xx.xx.xx'),),), 'issuer': ((('commonName', u'Local Signing Authority'),), (('commonName', u'616460a1-da41448c-cdf566ff'),))}

Examples

>>> assert len(log.lines) == 1
>>> assert log.lines[0] == "2018-12-05 17:20:41.404 25139 ERROR requests.packages.urllib3.connection [-] Certificate did not match expected hostname: 10.xx.xx.xx. Certificate: {'subjectAltName': (('DNS', '10.xx.xx.xx'),), 'notBefore': u'Dec  4 12:02:36 2018 GMT', 'serialNumber': u'616460101648CCDF5727C', 'notAfter': 'Jun 21 21:40:11 2019 GMT', 'version': 3L, 'subject': ((('commonName', u'10.xx.xx.xx'),),), 'issuer': ((('commonName', u'Local Signing Authority'),), (('commonName', u'616460a1-da41448c-cdf566ff'),))}"
class insights.parsers.ironic_inspector_log.IronicInspectorLog(context)[source]

Bases: insights.core.LogFileOutput

Provide access to Ironic Inspector logs using the base class LogFileOutput.

Note

Please refer to the super-class insights.core.LogFileOutput

IscsiAdmModeSession - command iscsiadm -m session

This module provides the class IscsiAdmModeSession which processes iscsiadm -m session command output. Typical output looks like:

tcp: [1] 10.72.32.45:3260,1 iqn.2017-06.com.example:server1 (non-flash)
tcp: [2] 10.72.32.45:3260,1 iqn.2017-06.com.example:server2 (non-flash)

The class has one attribute data which is a list representing each line of the input data as a dict with keys corresponding to the keys in the output.
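Each session line follows the fixed shape transport: [sid] ip:port,tpgt iqn (type), so it can be unpacked with one regular expression. A standalone sketch (not the parser's actual code) producing dicts with the key names used by the data attribute:

```python
import re

# tcp: [1] 10.72.32.45:3260,1 iqn.2017-06.com.example:server1 (non-flash)
SESSION_RE = re.compile(
    r'^(?P<transport>\S+):\s+\[(?P<sid>\d+)\]\s+(?P<ip>\S+)\s+(?P<iqn>\S+)')

def parse_session(line):
    # Map one `iscsiadm -m session` line onto the documented key names;
    # the trailing "(non-flash)" marker is ignored.
    m = SESSION_RE.match(line)
    if not m:
        return None
    return {'IFACE_TRANSPORT': m.group('transport'),
            'SID': m.group('sid'),
            'TARGET_IP': m.group('ip'),
            'TARGET_IQN': m.group('iqn')}
```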

Examples

>>> iscsiadm_mode_session_content = '''
... tcp: [1] 10.72.32.45:3260,1 iqn.2017-06.com.example:server1 (non-flash)
... tcp: [2] 10.72.32.45:3260,1 iqn.2017-06.com.example:server2 (non-flash)
... '''.strip()
>>> from insights.parsers.iscsiadm_mode_session import IscsiAdmModeSession
>>> from insights.tests import context_wrap
>>> shared = {IscsiAdmModeSession: IscsiAdmModeSession(context_wrap(iscsiadm_mode_session_content))}
>>> result = shared[IscsiAdmModeSession]
>>> result[0]
{'IFACE_TRANSPORT': 'tcp', 'SID': '1', 'TARGET_IP': '10.72.32.45:3260,1',
 'TARGET_IQN': 'iqn.2017-06.com.example:server1'}
>>> result[1]['TARGET_IQN']
'iqn.2017-06.com.example:server2'
class insights.parsers.iscsiadm_mode_session.IscsiAdmModeSession(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LegacyItemAccess

Class to process the iscsiadm -m session command output.

Attributes:

data (list): A list containing a dictionary for each line of the output in the form:

[
    {
        'IFACE_TRANSPORT': "tcp",
        'SID': '1',
        'TARGET_IP': '10.72.32.45:3260,1',
        'TARGET_IQN': 'iqn.2017-06.com.example:server1'
    },
    {
        'IFACE_TRANSPORT': "tcp",
        'SID': '2',
        'TARGET_IP': '10.72.32.45:3260,1',
        'TARGET_IQN': 'iqn.2017-06.com.example:server2'
    }
]
parse_content(content)[source]

This method must be implemented by classes based on this class.

JbossDomainServerLog - file $JBOSS_SERVER_LOG_DIR/server.log*

Parser for the JBoss Domain Server Log File.

class insights.parsers.jboss_domain_log.JbossDomainLog(context)[source]

Bases: insights.core.LogFileOutput

Read JBoss domain log file.

get_after(timestamp, s=None)[source]

Find all the (available) logs that are after the given time stamp.

If s is not supplied, then all lines are used. Otherwise, only the lines containing s are used. s can be either a single string or a list of strings. For a list, all keywords in the list must be found in the line.

Note

The time stamp is of time type instead of the usual datetime type. If a time stamp is not found on the line, the line is treated as a continuation of the previous line and is only included if the previous line's time stamp is greater than the time stamp given. Because continuation lines are only included if a previous line has matched, searching in logs that do not have a time stamp produces no lines.

Parameters
  • timestamp (time) -- log lines after this time are returned.

  • s (str or list) -- one or more strings to search for. If not supplied, all available lines are searched.

Yields

Log lines with time stamps after the given time.

Raises

TypeError -- The timestamp should be in time type, otherwise a TypeError will be raised.
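The continuation rule can be sketched in isolation. This is an illustrative reimplementation, not the library's code; it assumes the leading HH:MM:SS,ms stamp shown in the JbossDomainServerLog sample, and uses >= at second granularity to mimic the real parser, which keeps milliseconds so an equal-second stamp can still match:

```python
import re
from datetime import time

def get_after(lines, timestamp):
    # Lines without a leading HH:MM:SS stamp inherit the match state of
    # the previous stamped line (the continuation rule described above).
    stamp = re.compile(r'^(\d{2}):(\d{2}):(\d{2})')
    matched = False
    for line in lines:
        m = stamp.match(line)
        if m:
            matched = time(*map(int, m.groups())) >= timestamp
        if matched:
            yield line
```

A continuation line such as an indented stack trace is yielded only when the stamped line before it already matched.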

class insights.parsers.jboss_domain_log.JbossDomainServerLog(context)[source]

Bases: insights.parsers.jboss_domain_log.JbossDomainLog

Read JBoss domain server log file.

Sample input:

16:22:57,476 INFO  [org.xnio] (MSC service thread 1-12) XNIO Version 3.0.14.GA-redhat-1
16:22:57,480 INFO  [org.xnio.nio] (MSC service thread 1-12) XNIO NIO Implementation Version 3.0.14.GA-redhat-1
16:22:57,495 INFO  [org.jboss.remoting] (MSC service thread 1-12) JBoss Remoting version 3.3.5.Final-redhat-1
16:23:03,881 INFO  [org.jboss.as.controller.management-deprecated] (ServerService Thread Pool -- 23) JBAS014627: Attribute 'enabled' in the resource at address '/subsystem=datasources/data-source=ExampleDS' is deprecated, and may be removed in future version. See the attribute description in the output of the read-resource-description operation to learn more about the deprecation.
16:23:03,958 INFO  [org.jboss.as.security] (ServerService Thread Pool -- 37) JBAS013371: Activating Security Subsystem

Examples

>>> type(log)
<class 'insights.parsers.jboss_domain_log.JbossDomainServerLog'>
>>> log.file_path
'/home/test/jboss/machine2/domain/servers/server-one/log/server.log'
>>> log.file_name
'server.log'
>>> error_msgs = log.get('3.0.14.GA-redhat-1')
>>> error_msgs[0]['raw_message']
'16:22:57,476 INFO  [org.xnio] (MSC service thread 1-12) XNIO Version 3.0.14.GA-redhat-1'
>>> 'Activating Security Subsystem' in log
True
>>> from datetime import time
>>> list(log.get_after(time(16, 23, 3)))[1]['raw_message']
'16:23:03,958 INFO  [org.jboss.as.security] (ServerService Thread Pool -- 37) JBAS013371: Activating Security Subsystem'
class insights.parsers.jboss_domain_log.JbossStandaloneServerLog(context)[source]

Bases: insights.parsers.jboss_domain_log.JbossDomainLog

Read JBoss standalone server log file.

Sample input:

2018-07-17 10:58:44,606 INFO  [org.jboss.modules] (main) JBoss Modules version 1.6.0.Final-redhat-1
2018-07-17 10:58:44,911 INFO  [org.jboss.msc] (main) JBoss MSC version 1.2.7.SP1-redhat-1
2018-07-17 10:58:45,032 INFO  [org.jboss.as] (MSC service thread 1-7) WFLYSRV0049: JBoss EAP 7.1.0.GA (WildFly Core 3.0.10.Final-redhat-1) starting
2018-07-17 10:58:45,033 DEBUG [org.jboss.as.config] (MSC service thread 1-7) Configured system properties:
    [Standalone] =
    awt.toolkit = sun.awt.X11.XToolkit
    file.encoding = UTF-8
    file.encoding.pkg = sun.io
    file.separator = /
    java.awt.graphicsenv = sun.awt.X11GraphicsEnvironment
    java.awt.headless = true
    java.awt.printerjob = sun.print.PSPrinterJob
    java.class.path = /opt/jboss-eap-7.1/jboss-modules.jar
    java.class.version = 52.0
    java.endorsed.dirs = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc23.x86_64/jre/lib/endorsed
    java.ext.dirs = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc23.x86_64/jre/lib/ext:/usr/java/packages/lib/ext
    java.home = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc23.x86_64/jre
    java.io.tmpdir = /tmp
    java.library.path = /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
    java.net.preferIPv4Stack = true
    java.runtime.name = OpenJDK Runtime Environment
    java.runtime.version = 1.8.0_111-b16
    java.specification.name = Java Platform API Specification
    java.specification.vendor = Oracle Corporation
    java.specification.version = 1.8
    java.util.logging.manager = org.jboss.logmanager.LogManager
    java.vendor = Oracle Corporation
    java.vendor.url = http://java.oracle.com/
    java.vendor.url.bug = http://bugreport.sun.com/bugreport/
    java.version = 1.8.0_111
    java.vm.info = mixed mode
    java.vm.name = OpenJDK 64-Bit Server VM
    java.vm.specification.name = Java Virtual Machine Specification
    java.vm.specification.vendor = Oracle Corporation
    java.vm.specification.version = 1.8
    java.vm.vendor = Oracle Corporation
    java.vm.version = 25.111-b16
    javax.management.builder.initial = org.jboss.as.jmx.PluggableMBeanServerBuilder
    javax.xml.datatype.DatatypeFactory = __redirected.__DatatypeFactory
    javax.xml.parsers.DocumentBuilderFactory = __redirected.__DocumentBuilderFactory
    javax.xml.parsers.SAXParserFactory = __redirected.__SAXParserFactory
    javax.xml.stream.XMLEventFactory = __redirected.__XMLEventFactory
    javax.xml.stream.XMLInputFactory = __redirected.__XMLInputFactory
    javax.xml.stream.XMLOutputFactory = __redirected.__XMLOutputFactory
    javax.xml.transform.TransformerFactory = __redirected.__TransformerFactory
    javax.xml.validation.SchemaFactory:http://www.w3.org/2001/XMLSchema = __redirected.__SchemaFactory
    javax.xml.xpath.XPathFactory:http://java.sun.com/jaxp/xpath/dom = __redirected.__XPathFactory
    jboss.home.dir = /opt/jboss-eap-7.1
    jboss.host.name = mylinux
    jboss.modules.dir = /opt/jboss-eap-7.1/modules
    jboss.modules.system.pkgs = org.jboss.byteman
    jboss.node.name = mylinux
    jboss.qualified.host.name = mylinux
    jboss.server.base.dir = /opt/jboss-eap-7.1/standalone
    jboss.server.config.dir = /opt/jboss-eap-7.1/standalone/configuration
    jboss.server.data.dir = /opt/jboss-eap-7.1/standalone/data
    jboss.server.deploy.dir = /opt/jboss-eap-7.1/standalone/data/content
    jboss.server.log.dir = /opt/jboss-eap-7.1/standalone/log
    jboss.server.name = mylinux
    jboss.server.persist.config = true
    jboss.server.temp.dir = /opt/jboss-eap-7.1/standalone/tmp
    line.separator =

    logging.configuration = file:/opt/jboss-eap-7.1/standalone/configuration/logging.properties
    module.path = /opt/jboss-eap-7.1/modules
    org.jboss.boot.log.file = /opt/jboss-eap-7.1/standalone/log/server.log
    org.jboss.resolver.warning = true
    org.xml.sax.driver = __redirected.__XMLReaderFactory
    os.arch = amd64
    os.name = Linux
    os.version = 4.8.13-100.fc23.x86_64
    path.separator = :
    sun.arch.data.model = 64
    sun.boot.class.path = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc23.x86_64/jre/lib/resources.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc23.x86_64/jre/lib/rt.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc23.x86_64/jre/lib/sunrsasign.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc23.x86_64/jre/lib/jsse.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc23.x86_64/jre/lib/jce.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc23.x86_64/jre/lib/charsets.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc23.x86_64/jre/lib/jfr.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc23.x86_64/jre/classes
    sun.boot.library.path = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc23.x86_64/jre/lib/amd64
    sun.cpu.endian = little
    sun.cpu.isalist =
    sun.desktop = gnome
    sun.io.unicode.encoding = UnicodeLittle
    sun.java.command = /opt/jboss-eap-7.1/jboss-modules.jar -mp /opt/jboss-eap-7.1/modules org.jboss.as.standalone -Djboss.home.dir=/opt/jboss-eap-7.1 -Djboss.server.base.dir=/opt/jboss-eap-7.1/standalone --server-config=standalone-ha.xml
    sun.java.launcher = SUN_STANDARD

Examples

>>> type(standalone_log)
<class 'insights.parsers.jboss_domain_log.JbossStandaloneServerLog'>
>>> standalone_log.file_path
'/JBOSS_HOME/standalone/log/server.log'
>>> standalone_log.file_name
'server.log'
>>> len(standalone_log.get("sun.java.command ="))
1

JBoss standalone mode main configuration - file $JBOSS_BASE_DIR/standalone.xml

This parser reads the XML in the JBoss standalone mode main configuration file.

Note

Please refer to its super-class insights.core.XMLParser for more details.

Sample input:

<?xml version='1.0' encoding='UTF-8'?>

<server xmlns="urn:jboss:domain:1.7">
    <management>
        <security-realms>
            <security-realm name="ManagementRealm">
                <authentication>
                    <local default-user="$local" skip-group-loading="true"/>
                    <properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/>
                </authentication>
                <authorization map-groups-to-roles="false">
                    <properties path="mgmt-groups.properties" relative-to="jboss.server.config.dir"/>
                </authorization>
            </security-realm>
            <security-realm name="ApplicationRealm">
                <authentication>
                    <local default-user="$local" allowed-users="*" skip-group-loading="true"/>
                    <properties path="application-users.properties" relative-to="jboss.server.config.dir"/>
                </authentication>
                <authorization>
                    <properties path="application-roles.properties" relative-to="jboss.server.config.dir"/>
                </authorization>
            </security-realm>
        </security-realms>
    </management>
</server>

Examples

>>> jboss_main_config.file_path
'/root/jboss/jboss-eap-6.4/standalone/configuration/standalone.xml'
>>> properties = sorted(jboss_main_config.get_elements(".//management/security-realms/security-realm/authentication/properties"), key=lambda e: e.tag)
>>> properties[0].get("relative-to")
'jboss.server.config.dir'
class insights.parsers.jboss_standalone_main_conf.JbossStandaloneConf(context)[source]

Bases: insights.core.XMLParser

Read the XML in the JBoss standalone mode main configuration file

JBoss version - File $JBOSS_HOME/version.txt

This module provides plugins access to file $JBOSS_HOME/version.txt

Typical content of file $JBOSS_HOME/version.txt is:

Red Hat JBoss Enterprise Application Platform - Version 6.4.3.GA

This module parses the file content and stores data in the dict self.parsed. The version info can also be obtained via obj.major, obj.minor, etc.
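As an illustration of that parsing, the version line can be split apart as below. This is a minimal sketch, not the parser's actual implementation, and parse_jboss_version is a hypothetical helper:

```python
def parse_jboss_version(raw):
    # Split "... - Version 6.4.3.GA" into version, major, minor,
    # release and code_name, mirroring the attributes described above.
    version_str = raw.split("Version", 1)[1].strip()
    major, minor, release, code_name = version_str.split(".", 3)
    return {
        "version": ".".join([major, minor, release]),
        "major": int(major),
        "minor": int(minor),
        "release": int(release),
        "code_name": code_name,
    }
```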

Examples

>>> jboss_version.file_path
'/home/test/jboss/jboss-eap-6.4/version.txt'
>>> jboss_version.raw
'Red Hat JBoss Enterprise Application Platform - Version 6.4.3.GA'
>>> jboss_version.major
6
>>> jboss_version.minor
4
>>> jboss_version.release
3
>>> jboss_version.version
'6.4.3'
>>> jboss_version.code_name
'GA'
class insights.parsers.jboss_version.JbossVersion(context)[source]

Bases: insights.core.Parser

Parses the content of file $JBOSS_HOME/version.txt.

property code_name

code name of this running JBoss process.

Type

string

property major

the major version of this running JBoss process.

Type

int

property minor

the minor version of this running JBoss process.

Type

int

parse_content(content)[source]

This method must be implemented by classes based on this class.

property release

release of this running JBoss process.

Type

int

property version

the version of this running JBoss process.

Type

string

JournalSinceBoot file /sos_commands/logs/journalctl_--no-pager_--boot

class insights.parsers.journal_since_boot.JournalSinceBoot(context)[source]

Bases: insights.core.Syslog

Read the /sos_commands/logs/journalctl_--no-pager_--boot file. Uses the Syslog class parser functionality - see the base class for more details.

Sample log lines:

-- Logs begin at Wed 2017-02-08 15:18:00 CET, end at Tue 2017-09-19 09:25:27 CEST. --
May 18 15:13:34 lxc-rhel68-sat56 jabberd/sm[11057]: session started: jid=rhn-dispatcher-sat@lxc-rhel6-sat56.redhat.com/superclient
May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: --> Wrapper Started as Daemon
May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: Launching a JVM...
May 18 15:24:28 lxc-rhel68-sat56 yum[11597]: Installed: lynx-2.8.6-27.el6.x86_64
May 18 15:36:19 lxc-rhel68-sat56 yum[11954]: Updated: sos-3.2-40.el6.noarch

Note

Because journal timestamps by default have no year, the year of the logs will be inferred from the year in your timestamp. This will also work around December/January crossovers.

Examples

>>> JournalSinceBoot.filters.append('wrapper')
>>> JournalSinceBoot.token_scan('daemon_start', 'Wrapper Started as Daemon')
>>> msgs = shared[JournalSinceBoot]
>>> len(msgs.lines)
>>> wrapper_msgs = msgs.get('wrapper') # Can only rely on lines filtered being present
>>> wrapper_msgs[0]
{'timestamp': 'May 18 15:13:36', 'hostname': 'lxc-rhel68-sat56',
 'procname': 'wrapper[11375]', 'message': '--> Wrapper Started as Daemon',
 'raw_message': 'May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: --> Wrapper Started as Daemon'
}
>>> msgs.daemon_start # Token set if matching lines present in logs
True

Journald configuration files

The journald.conf file is a key=value file with hash comments. Everything is in the [Journal] section, so sections are ignored.

Only active setting lines are processed; commented-out settings are skipped.

Active settings are provided using the get_active_settings_value method or by using the dictionary contains functionality.

Options that are commented out are not returned - a rule using this parser has to be aware of which default value is assumed by systemd if the particular option is not specified.

Note: Precedence logic is implemented in JournaldConfAll combiner, the parser is called for every file separately.
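The active-settings behaviour described above can be sketched as follows. This is an illustrative simplification, not the insights implementation, and parse_journald_conf is a hypothetical helper:

```python
def parse_journald_conf(lines):
    # Keep only active key=value lines; skip blanks, '#' comments,
    # and section headers such as [Journal].
    active = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or line.startswith("["):
            continue
        key, sep, value = line.partition("=")
        if sep:
            active[key.strip()] = value.strip()
    return active
```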

Parsers provided by this module are:

EtcJournaldConf - file /etc/systemd/journald.conf

EtcJournaldConfD - file /etc/systemd/journald.conf.d/*.conf

UsrJournaldConfD - file /usr/lib/systemd/journald.conf.d/*.conf

Example

>>> conf = shared[EtcJournaldConf]
>>> conf.get_active_setting_value('Storage')
'auto'
>>> 'Storage' in conf.active_settings
True
class insights.parsers.journald_conf.EtcJournaldConf(*args, **kwargs)[source]

Bases: insights.parsers.journald_conf.JournaldConf

Parser for accessing the /etc/systemd/journald.conf file.

class insights.parsers.journald_conf.EtcJournaldConfD(*args, **kwargs)[source]

Bases: insights.parsers.journald_conf.JournaldConf

Parser for accessing the /etc/systemd/journald.conf.d/*.conf files.

class insights.parsers.journald_conf.JournaldConf(*args, **kwargs)[source]

Bases: insights.core.Parser

A parser for accessing journald conf files.

get_active_setting_value(setting_name)[source]

Access active setting value by setting name.

Parameters

setting_name (string) -- Setting name

parse_content(content)[source]

Main parsing class method which stores all interesting data from the content.

Parameters

content (context.content) -- Parser context content

class insights.parsers.journald_conf.UsrJournaldConfD(*args, **kwargs)[source]

Bases: insights.parsers.journald_conf.JournaldConf

Parser for accessing the /usr/lib/systemd/journald.conf.d/*.conf files.

KatelloServiceStatus - command katello-service status

The KatelloServiceStatus parser only reads the last line of command katello-service status to get the list of the failed services.

Note

Since we only care about the failed services, this parser only processes the last line, which contains the list of the failed services.

The last line of the output of katello-service status:

Some services failed to status: tomcat6,httpd
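Extracting the failed services from that last line can be sketched as below; parse_failed_services is a hypothetical helper, not part of the parser's API:

```python
def parse_failed_services(last_line):
    # "Some services failed to status: tomcat6,httpd" -> ['tomcat6', 'httpd']
    marker = "failed to status:"
    if marker not in last_line:
        return []
    services = last_line.split(marker, 1)[1]
    return [s.strip() for s in services.split(",") if s.strip()]
```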

Examples

>>> kss = shared[KatelloServiceStatus]
>>> kss.is_ok
False
>>> kss.failed_services
['tomcat6', 'httpd']
class insights.parsers.katello_service_status.KatelloServiceStatus(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Read the katello-service status and get the list of failed_services.

failed_services

The list of failed services.

Type

list

is_ok

True if there are no failed services.

Type

bool

parse_content(content)[source]

This method must be implemented by classes based on this class.

Kernel dump configuration files

This module contains the following parsers:

KDumpConf - file /etc/kdump.conf

KexecCrashLoaded - file /sys/kernel/kexec_crash_loaded

KexecCrashSize - file /sys/kernel/kexec_crash_size

class insights.parsers.kdump.KDumpConf(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

A dictionary like object for the values of the /etc/kdump.conf file.

lines

raw lines from the file, in order

Type

list

data

a dictionary of the options set in the file

Type

dict

comments

fully commented lines

Type

list

inline_comments

lines containing inline comments

Type

list

target

target line parsed as a (x, y) tuple if set, else None

Type

tuple

The data property has two special behaviours:

  • If an option - e.g. blacklist - is repeated, its values are collected together in a list. Options that only appear once have their values stored as is.

  • The options option is special - it appears in the form option module value. The options key in the data dictionary is therefore stored as a dictionary, keyed on the module name.
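The two special behaviours can be sketched as below. This is purely illustrative, not the insights implementation, and parse_kdump is a hypothetical helper:

```python
def parse_kdump(lines):
    # Repeated options collect into a list; "options <module> <value>"
    # lines build a dict keyed on the module name.
    data = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(" ")
        value = value.strip()
        if key == "options":
            module, _, mod_opts = value.partition(" ")
            data.setdefault("options", {})[module] = mod_opts.strip()
        elif key in data:
            # Option seen before: promote to a list and append.
            if not isinstance(data[key], list):
                data[key] = [data[key]]
            data[key].append(value)
        else:
            data[key] = value
    return data
```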

The target property has the following possibilities:

  • If target-line starts with any keyword in [‘raw’, ‘ssh’, ‘net’, ‘nfs’, ‘nfs4’], return tuple (keyword, value).

  • If target-line is set with ‘<fs_type> <partition>’, return tuple (<fs_type>, <partition>).

  • If target-line is not set, the target is the default, which depends on what is mounted on the current system; None is returned instead of a tuple.

Main helper functions:

  • options - the options value in the data (see above).

Sample /etc/kdump.conf file:

path /var/crash
core_collector makedumpfile -c --message-level 1 -d 24
default shell

Examples

>>> kd.using_local_disk
True
>>> kd.is_ssh()
False
>>> 'path' in kd
True
get_hostname(net_commands={'net', 'nfs', 'ssh'})[source]

Find the first host name in the given list of commands. Uses _network_lines above to find the list of commands. The first line that matches urlparse’s definition of a host name is returned, or None if no host name is found.

get_ip(net_commands={'net', 'nfs', 'ssh'})[source]

Find the first IP address in the given list of commands. Uses _network_lines above to find the list of commands. The first line that lists an IP address is returned, otherwise None is returned.

property hostname

Uses get_hostname() above to give the first host name found in the list of crash dump destinations.

property ip

Uses get_ip() above to give the first IP address found in the list of crash dump destinations.

is_nfs()[source]

Is the destination of the kernel dump a NFS or NFSv4 connection?

is_ssh()[source]

Is the destination of the kernel dump an ssh connection?

options(module)[source]

Returns the options for this module in the settings.

Parameters

module (str) -- The module name

Returns

(str) The module’s options, or ‘’ if either options or

module is not found.

parse_content(content)[source]

This method must be implemented by classes based on this class.

property using_local_disk

Is kdump configured to only use local disk?

Several target types:

  • If ‘raw’ is given, then the dump is local.

  • If ‘ssh’, ‘net’, ‘nfs’, or ‘nfs4’ is given, then the dump is NOT local.

  • If ‘<fs type> <partition>’ is given, then the dump is local.

  • Otherwise, the dump is local.

Since only one target can be set, the logic checks whether a remote target is used, and returns True if it is not.
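The decision described in the bullets above amounts to the following check. This is a sketch under the assumption that target is the (keyword, value) tuple documented earlier, or None; using_local_disk here is a hypothetical stand-alone helper, not the property itself:

```python
REMOTE_TARGETS = {"ssh", "net", "nfs", "nfs4"}

def using_local_disk(target):
    # No target line, 'raw', or '<fs type> <partition>' all mean a
    # local dump; only the remote keywords mean a non-local dump.
    return target is None or target[0] not in REMOTE_TARGETS
```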

class insights.parsers.kdump.KexecCrashLoaded(context)[source]

Bases: insights.core.Parser

A simple parser to determine if a crash kernel (i.e. a second kernel capable of capturing the machine state should the main kernel crash) is present.

This simply records whether the /sys/kernel/kexec_crash_loaded file has the value 1.

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.kdump.KexecCrashSize(context)[source]

Bases: insights.core.Parser

Parses the /sys/kernel/kexec_crash_size file which tells the reserved memory size for the crash kernel.

size

reserved memory size for the crash kernel, or 0 if not found.

Type

int

parse_content(content)[source]

This method must be implemented by classes based on this class.

KernelConf - file /boot/config-*

This parser parses the content of the kernel config file for each installed kernel and returns the data in dictionary format.

Sample Content from /boot/config-3.10.0-862.el7.x86_64:

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.10.0-693.el7.x86_64 Kernel Configuration
#
CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=8
CONFIG_PREEMPT_RT_FULL=y
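Turning content like the above into a dictionary can be sketched as below; this is an illustrative simplification, and parse_kernel_config is a hypothetical helper:

```python
def parse_kernel_config(lines):
    # CONFIG_FOO=y lines become dict entries; '#' comments and blank
    # lines are skipped; quoted values are unquoted.
    conf = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:
            conf[key] = value.strip('"')
    return conf
```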

Examples

>>> type(kconfig)
<class 'insights.parsers.kernel_config.KernelConf'>
>>> kconfig.get("CONFIG_PREEMPT_RT_FULL") == "y"
True
>>> len(kconfig) == 8
True
>>> kconfig.kconf_file
'config-3.10.0-327.28.3.rt56.235.el7.x86_64'
class insights.parsers.kernel_config.KernelConf(context)[source]

Bases: insights.core.Parser, dict

Parse the /boot/config-* file, returning a dict containing the kernel configuration.

property kconf_file

Returns the name of the kernel config file.

Type

(str)

parse_content(content)[source]

This method must be implemented by classes based on this class.

KeystoneConf - file /etc/keystone/keystone.conf

The KeystoneConf class parses the information in the file /etc/keystone/keystone.conf. See the IniConfigFile class for more information on attributes and methods.

Sample input data looks like:

[DEFAULT]

#
# From keystone
#
admin_token = ADMIN
compute_port = 8774

[identity]

# From keystone
default_domain_id = default
#domain_specific_drivers_enabled = false
domain_configurations_from_database = false

[identity_mapping]

driver = keystone.identity.mapping_backends.sql.Mapping
generator = keystone.identity.id_generators.sha256.Generator
#backward_compatible_ids = true

Examples

>>> kconf = shared[KeystoneConf]
>>> kconf.defaults()
{'admin_token': 'ADMIN', 'compute_port': '8774'}
>>> 'identity' in kconf
True
>>> kconf.has_option('identity', 'default_domain_id')
True
>>> kconf.has_option('identity', 'domain_specific_drivers_enabled')
False
>>> kconf.get('identity', 'default_domain_id')
'default'
>>> kconf.items('identity_mapping')
{'driver': 'keystone.identity.mapping_backends.sql.Mapping',
 'generator': 'keystone.identity.id_generators.sha256.Generator'}
class insights.parsers.keystone.KeystoneConf(context)[source]

Bases: insights.core.IniConfigFile

Parse contents of file /etc/keystone/keystone.conf.

KeystoneLog - file /var/log/keystone/keystone.log

Module for parsing the log file for Keystone

Typical content of keystone.log file is:

2016-11-09 14:31:46.834 818 INFO migrate.versioning.api [-] done
2016-11-09 14:31:46.834 818 INFO migrate.versioning.api [-] 3 -> 4...
2016-11-09 14:31:46.872 818 INFO migrate.versioning.api [-] done
2016-11-09 14:31:48.435 1082 WARNING keystone.assignment.core [-] Deprecated: Use of the identity driver config to automatically configure the same assignment driver has been deprecated, in the "O" release, the assignment driver will need to be expicitly configured if different than the default (SQL).
2016-11-09 14:31:48.648 1082 WARNING oslo_config.cfg [-] Option "rpc_backend" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
2016-11-09 14:31:48.680 1082 WARNING oslo_config.cfg [-] Option "rabbit_hosts" from group "oslo_messaging_rabbit" is deprecated for removal.  Its value may be silently ignored in the future.
2016-11-09 14:31:48.681 1082 WARNING oslo_config.cfg [-] Option "rabbit_userid" from group "oslo_messaging_rabbit" is deprecated for removal.  Its value may be silently ignored in the future.
2016-11-09 14:31:48.681 1082 WARNING oslo_config.cfg [-] Option "rabbit_password" from group "oslo_messaging_rabbit" is deprecated for removal.  Its value may be silently ignored in the future.
2016-11-09 14:31:48.774 1082 INFO keystone.cmd.cli [-] Created domain default
2016-11-09 14:31:48.802 1082 INFO keystone.cmd.cli [req-ace08b7c-d0d2-4b18-b792-1ec3402575b1 - - - - -] Created project admin
2016-11-09 14:31:48.886 1082 INFO keystone.cmd.cli [req-ace08b7c-d0d2-4b18-b792-1ec3402575b1 - - - - -] Created user admin
2016-11-09 14:31:48.895 1082 INFO keystone.cmd.cli [req-ace08b7c-d0d2-4b18-b792-1ec3402575b1 - - - - -] Created role admin
2016-11-09 14:31:48.916 1082 INFO keystone.cmd.cli [req-ace08b7c-d0d2-4b18-b792-1ec3402575b1 - - - - -] Granted admin on admin to user admin.
2016-11-09 14:31:48.986 1082 INFO keystone.assignment.core [req-ace08b7c-d0d2-4b18-b792-1ec3402575b1 - - - - -] Creating the default role 9fe2ff9ee4384b1894a90878d3e92bab because it does not exist.
2016-11-09 14:32:09.175 1988 WARNING keystone.assignment.core [-] Deprecated: Use of the identity driver config to automatically configure the same assignment driver has been deprecated, in the "O" release, the assignment driver will need to be expicitly configured if different than the default (SQL).
class insights.parsers.keystone_log.KeystoneLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing /var/log/keystone/keystone.log file.

Note

Please refer to its super-class insights.core.LogFileOutput

KpatchList - command /usr/sbin/kpatch list

The /usr/sbin/kpatch list command provides information about the installed patch modules.

Sample content from command /usr/sbin/kpatch list is:

Loaded patch modules: kpatch_3_10_0_1062_1_1_1_4 [enabled]

Installed patch modules: kpatch_3_10_0_1062_1_1_1_4 (3.10.0-1062.1.1.el7.x86_64)
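Splitting that output into the loaded and installed mappings shown in the examples can be sketched as below; parse_kpatch_list is a hypothetical helper, not the parser's API:

```python
def parse_kpatch_list(lines):
    # Module lines after "Loaded patch modules:" map name -> state
    # (e.g. 'enabled'); lines after "Installed patch modules:" map
    # name -> kernel version.
    loaded, installed = {}, {}
    current = None
    for line in lines:
        line = line.strip()
        if line.startswith("Loaded patch modules:"):
            current, line = loaded, line.split(":", 1)[1].strip()
        elif line.startswith("Installed patch modules:"):
            current, line = installed, line.split(":", 1)[1].strip()
        if current is None or not line:
            continue
        parts = line.split(None, 1)
        extra = parts[1].strip("[]()") if len(parts) > 1 else ""
        current[parts[0]] = extra
    return loaded, installed
```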

Examples

>>> 'kpatch_3_10_0_1062_1_1_1_4' in kpatchs.loaded
True
>>> kpatchs.loaded.get('kpatch_3_10_0_1062_1_1_1_4')
'enabled'
>>> 'kpatch_3_10_0_1062_1_1_1_4' in kpatchs.installed
True
>>> kpatchs.installed.get('kpatch_3_10_0_1062_1_1_1_4')
'3.10.0-1062.1.1.el7.x86_64'
class insights.parsers.kpatch_list.KpatchList(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class for command: /usr/sbin/kpatch list

property installed

This will return the installed kpatch modules

Type

(dict)

property loaded

This will return the loaded kpatch modules

Type

(dict)

parse_content(content)[source]

This method must be implemented by classes based on this class.

KpatchPatches - report locally stored kpatch patches

This parser creates a list of the module names of locally stored kpatch modules returned by command ls /var/lib/kpatch/`uname -r`/. If no modules are installed, a ContentException will be raised.

class insights.parsers.kpatch_patches.KpatchPatches(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

A parser for getting module names of locally stored kpatch-patch files.

Sample output of ls /var/lib/kpatch/`uname -r`/ looks like:

kpatch-3_10_0-1062-1-5.ko kpatch-3_10_0-1062-1-6.ko
patches

List of the names of the kpatch patches. Dashes are converted to underscores, the file suffix is removed, and duplicate names are removed as well.

Type

list

Examples

>>> kp.patches
['kpatch_3_10_0_1062_1_5', 'kpatch_3_10_0_1062_1_6']
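The name transformation above (dashes to underscores, suffix stripped, duplicates dropped) can be sketched as follows; kpatch_module_names is a hypothetical helper:

```python
def kpatch_module_names(filenames):
    # 'kpatch-3_10_0-1062-1-5.ko' -> 'kpatch_3_10_0_1062_1_5'
    names = []
    for fn in filenames:
        name = fn.rsplit(".", 1)[0].replace("-", "_")
        if name not in names:  # drop duplicates, keep order
            names.append(name)
    return names
```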
parse_content(content)[source]

This method must be implemented by classes based on this class.

Krb5Configuration - files /etc/krb5.conf and /etc/krb5.conf.d/*

The krb5 configuration files are /etc/krb5.conf and /etc/krb5.conf.d/*. Their content format is similar to an INI config, but they include values that span multiple lines. Multi-line values start with a ‘{‘ and end with a ‘}’, and the parser joins them together by setting the is_squ variable to True while inside a multi-line value.
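The brace handling for multi-line values can be sketched as below. This is a simplified illustration of the idea (a stack stands in for the is_squ flag), not the insights implementation, and parse_krb5_section_body is a hypothetical helper:

```python
def parse_krb5_section_body(lines):
    # "KEY = {" opens a nested dict; a bare "}" closes it; ordinary
    # "key = value" lines land in whichever dict is currently open.
    result = {}
    stack = [result]
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line == "}":
            stack.pop()
            continue
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if value == "{":
            nested = {}
            stack[-1][key] = nested
            stack.append(nested)
        else:
            stack[-1][key] = value
    return result
```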

Example

>>> krb5_content = '''
[realms]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 default_ccache_name = KEYRING:persistent:%{uid}
 EXAMPLE.COM = {
  kdc = kerberos.example.com
  admin_server = kerberos.example.com
 }
 pam = {
  debug = false
  krb4_convert = false
  ticket_lifetime = 36000
 }
 [libdefaults]
  dns_lookup_realm = false
  ticket_lifetime = 24h
  EXAMPLE.COM = {
   kdc = kerberos2.example.com
   admin_server = kerberos2.example.com
 }
# renew_lifetime = 7d
# forwardable = true
# rdns = false
'''.strip()
>>> from insights.tests import context_wrap
>>> shared = {Krb5Configuration: Krb5Configuration(context_wrap(krb5_content))}
>>> krb5_info = shared[Krb5Configuration]
>>> krb5_info["libdefaults"]["dns_lookup_realm"]
"false"
>>> krb5_info["realms"]["EXAMPLE.COM"]["kdc"]
"kerberos.example.com"
>>> krb5_info.sections()
["libdefaults","realms"]
>>> krb5_info.has_section("realms")
True
>>> krb5_info.has_option("realms", "nosuchoption")
False
>>> krb5_info.options("libdefaults")
["dns_lookup_realm","ticket_lifetime","EXAMPLE.COM"]
class insights.parsers.krb5.Krb5Configuration(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Class for krb5.conf and krb5.conf.d configuration files.

The Kerberos .ini format is like an ordinary .ini file except that values can include a multiple line key-value pair ‘relation’ that starts with a ‘{‘ and ends with a ‘}’ on a trailing line. The parser tracks whether it is inside curly braces by setting is_squ when it enters a relation, and clearing it when it leaves.

includedir

The directory list that krb5.conf includes via includedir directive

Type

list

include

The configuration file list that krb5.conf includes via include directive

Type

list

module

The module list that krb5.conf specified via module directive

Type

list

has_option(section, option)[source]

Check for the existence of a given option in a given section. Return True if the given option is present, and False if not present.

has_section(section)[source]

Indicate whether the named section is present in the configuration. Return True if the given section is present, and False if not present.

options(section)[source]

Return a list of option names for the given section name.

parse_content(content)[source]

This method must be implemented by classes based on this class.

sections()[source]

Return a list of section names.

Kerberos KDC Logs - file /var/log/krb5kdc.log

class insights.parsers.krb5kdc_log.KerberosKDCLog(context)[source]

Bases: insights.core.LogFileOutput

Read the /var/log/krb5kdc.log file.

Note

Please refer to its super-class insights.core.LogFileOutput for more usage information.

Find logs by keyword and parse them into a dictionary with the keys:

  • timestamp

  • system

  • service

  • pid

  • level

  • message

  • raw_message - the full line as originally given.

If the log line is not in the standard format, only the raw_message field will be stored in the dictionary.

Sample log file:

Apr 01 03:36:11 ldap.example.com krb5kdc[24569](info): TGS_REQ (4 etypes {18 17 16 23}) 10.250.3.150: ISSUE: authtime 1427873771, etypes {rep=18 tkt=18 ses=18}, sasher@EXAMPLE.COM for HTTP/sepdt138.example.com@EXAMPLE.COM
Apr 01 03:36:11 ldap.example.com krb5kdc[24569](info): AS_REQ (4 etypes {18 17 16 23}) 10.250.17.96: NEEDED_PREAUTH: niz@EXAMPLE.COM for krbtgt/EXAMPLE.COM@EXAMPLE.COM, Additional pre-authentication required
Apr 01 03:36:11 ldap.example.com krb5kdc[24549](info): AS_REQ (4 etypes {18 17 16 23}) 10.250.17.96: NEEDED_PREAUTH: niz@EXAMPLE.COM for krbtgt/EXAMPLE.COM@EXAMPLE.COM, Additional pre-authentication required
Apr 01 03:36:11 ldap.example.com krb5kdc[24546](info): AS_REQ (4 etypes {18 17 16 23}) 10.250.17.96: NEEDED_PREAUTH: niz@EXAMPLE.COM for krbtgt/EXAMPLE.COM@EXAMPLE.COM, Additional pre-authentication required
Apr 01 03:36:33 ldap.example.com krb5kdc[24556](info): preauth (encrypted_timestamp) verify failure: Decrypt integrity check failed
Apr 01 03:36:36 ldap.example.com krb5kdc[24568](info): preauth (encrypted_timestamp) verify failure: No matching key in entry
Apr 01 03:38:34 ldap.example.com krb5kdc[24551](info): preauth (encrypted_timestamp) verify failure: No matching key in entry
Apr 01 03:39:43 ldap.example.com krb5kdc[24549](info): preauth (encrypted_timestamp) verify failure: No matching key in entry
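The keyword fields listed above can be recovered from a standard-format line with a sketch like the following. The regular expression is an assumption based on the sample lines, not the parser's actual pattern, and parse_kdc_line is a hypothetical helper:

```python
import re

KDC_LINE = re.compile(
    r"(?P<timestamp>\w{3} \d{2} \d{2}:\d{2}:\d{2}) (?P<system>\S+) "
    r"(?P<service>\w+)\[(?P<pid>\d+)\]\((?P<level>\w+)\): (?P<message>.*)"
)

def parse_kdc_line(line):
    # Non-matching lines keep only the raw_message field, as noted above.
    d = {"raw_message": line}
    m = KDC_LINE.match(line)
    if m:
        d.update(m.groupdict())
    return d
```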

Examples

>>> log = shared[KerberosKDCLog]
>>> # log.get is a generator, so get list to test length
>>> len(list(log.get('Decrypt integrity check failed')))
1
>>> from datetime import datetime
>>> len(list(log.get_after(datetime(2017, 4, 1, 3, 36, 30))))  # Apr 01 03:36:30
4

Note

Because the Kerberos KDC log timestamps by default have no year, the year of the logs will be inferred from the year in your timestamp. This will also work around December/January crossovers.

KSMState - file /sys/kernel/mm/ksm/run

Parser to get the kernel samepage merging state by reading the file /sys/kernel/mm/ksm/run.

class insights.parsers.ksmstate.KSMState(context)[source]

Bases: insights.core.Parser

Parser to get the kernel samepage merging state by reading the file /sys/kernel/mm/ksm/run.

Typical output of /sys/kernel/mm/ksm/run looks like:

0

From https://www.kernel.org/doc/Documentation/vm/ksm.txt:

set 0 to stop ksmd from running but keep merged pages,
set 1 to run ksmd e.g. "echo 1 > /sys/kernel/mm/ksm/run",
set 2 to stop ksmd and unmerge all pages currently merged, but leave
      mergeable areas registered for next run
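The mapping from that single value to the attributes used in the examples can be sketched as below; ksm_state is a hypothetical helper, and the meaning strings merely paraphrase the kernel documentation quoted above:

```python
def ksm_state(content):
    # The file holds a single character: 0, 1 or 2.
    value = content.strip()
    meanings = {
        "0": "ksmd stopped, merged pages kept",
        "1": "ksmd running",
        "2": "ksmd stopped, all pages unmerged",
    }
    return {
        "value": value,
        "is_running": value == "1",
        "meaning": meanings.get(value, "unknown"),
    }
```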

Examples

>>> ksm.value == '0'
True
>>> ksm.is_running
False
parse_content(content)[source]

This method must be implemented by classes based on this class.

insights.parsers.ksmstate.is_running(context)[source]

Warning

This function is deprecated, please use KSMState instead.

Check if Kernel Samepage Merging is enabled. ‘True’ if KSM is on (i.e. /sys/kernel/mm/ksm/run is ‘1’) or ‘False’ if not.

KubepodsCpuQuota - CPU quota for each Kubernetes pod

This parser reads the content of /sys/fs/cgroup/cpu/kubepods.slice/kubepods-burstable.slice/*.slice/cpu.cfs_quota_us.

class insights.parsers.kubepods_cpu_quota.KubepodsCpuQuota(context)[source]

Bases: insights.core.Parser

Class KubepodsCpuQuota parses the content of the /sys/fs/cgroup/cpu/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod*.slice/cpu.cfs_quota_us.

cpu_quota

It is used to show the value of cpu quota for a particular pod in a Kubernetes cluster or an OpenShift cluster.

Type

int

A typical sample of the content of this file looks like:

-1

Examples

>>> type(kubepods_cpu_quota)
<class 'insights.parsers.kubepods_cpu_quota.KubepodsCpuQuota'>
>>> kubepods_cpu_quota.cpu_quota
-1
parse_content(content)[source]

This method must be implemented by classes based on this class.

Parsers for detection of Linux/Ebury 1.6 malware indicators

Libkeyutils - command find -L /lib /lib64 -name 'libkeyutils.so*'

Parses output of command find -L /lib /lib64 -name 'libkeyutils.so*' to find all potentially affected libraries.

LibkeyutilsObjdumps - command find -L /lib /lib64 -name libkeyutils.so.1 -exec objdump -x "{}" \;

Parses output of command find -L /lib /lib64 -name libkeyutils.so.1 -exec objdump -x "{}" \; to verify linked libraries.

https://www.welivesecurity.com/2017/10/30/windigo-ebury-update-2/

class insights.parsers.libkeyutils.Libkeyutils(*args, **kwargs)[source]

Bases: insights.core.CommandParser

This parser finds all ‘libkeyutils.so*’ libraries in the /lib and /lib64 directories and their sub-directories.

Output of Command:

/lib/libkeyutils.so.1
/lib/tls/libkeyutils.so.1.6
/lib64/libkeyutils.so

Example:

>>> shared[Libkeyutils].libraries
['/lib/libkeyutils.so.1', '/lib/tls/libkeyutils.so.1.6', '/lib64/libkeyutils.so']
libraries = None

All ‘libkeyutils.so*’ libraries located in the /lib and /lib64 directories and their sub-directories.

Type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.libkeyutils.LibkeyutilsObjdumps(*args, **kwargs)[source]

Bases: insights.core.CommandParser

This parser goes through the objdump output of all ‘libkeyutils.so.1’ libraries in the /lib and /lib64 directories and their sub-directories to find their linked libraries.

Output of Command:

/lib/libkeyutils.so.1:     file format elf32-i386
/lib/libkeyutils.so.1
architecture: i386, flags 0x00000150:
HAS_SYMS, DYNAMIC, D_PAGED
start address 0x00000f80
...

Dynamic Section:
  NEEDED               libdl.so.2
  NEEDED               libc.so.6
  NEEDED               libsbr.so
  SONAME               libkeyutils.so.1
  INIT                 0x00000e54
...


/lib64/libkeyutils.so.1:     file format elf64-x86-64
/lib64/libkeyutils.so.1
architecture: i386:x86-64, flags 0x00000150:
HAS_SYMS, DYNAMIC, D_PAGED
start address 0x00000000000014b0
...

Dynamic Section:
  NEEDED               libdl.so.2
  NEEDED               libsbr.so.6
  NEEDED               libfake.so
  SONAME               libkeyutils.so.1
  INIT                 0x0000000000001390
...

Example:

>>> shared[LibkeyutilsObjdumps].linked_libraries
{'/lib/libkeyutils.so.1': ['libdl.so.2', 'libc.so.6', 'libsbr.so'],
 '/lib64/libkeyutils.so.1': ['libdl.so.2', 'libsbr.so.6', 'libfake.so']}
linked_libraries = None

found libraries and their linked libraries.

Type

dict

parse_content(content)[source]

This method must be implemented by classes based on this class.
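The detection idea behind these parsers can be sketched without insights: walk the objdump -x output, track which library is currently being dumped, and collect its NEEDED (linked-library) entries; unexpected entries such as libsbr.so are the Ebury indicators. A simplified standalone version:

```python
import re

def linked_libraries(objdump_output):
    """Map each dumped library to the NEEDED entries of its Dynamic Section."""
    result = {}
    current = None
    for line in objdump_output.splitlines():
        # Header lines look like "/lib/libkeyutils.so.1:     file format elf32-i386"
        header = re.match(r"^(\S+):\s+file format", line)
        if header:
            current = header.group(1)
            result[current] = []
        elif current and line.strip().startswith("NEEDED"):
            result[current].append(line.split()[-1])
    return result
```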

Libvirtd Logs

This module contains the following parsers:

LibVirtdLog - file /var/log/libvirt/libvirtd.log

LibVirtdQemuLog - file /var/log/libvirt/qemu/*.log

class insights.parsers.libvirtd_log.LibVirtdLog(context)[source]

Bases: insights.core.LogFileOutput

Parse the /var/log/libvirt/libvirtd.log log file.

Note

Please refer to its super-class insights.core.LogFileOutput

Sample input:

2013-10-23 17:32:19.909+0000: 14069: debug : do_open:1174 : trying driver 0 (Test) ...
2013-10-23 17:32:19.909+0000: 14069: debug : do_open:1180 : driver 0 Test returned DECLINED
2013-10-23 17:32:19.909+0000: 14069: debug : do_open:1174 : trying driver 1 (ESX) ...
2013-10-23 17:32:19.909+0000: 14069: debug : do_open:1180 : driver 1 ESX returned DECLINED
2013-10-23 17:32:19.909+0000: 14069: debug : do_open:1174 : trying driver 2 (remote) ...
2013-10-23 17:32:19.957+0000: 14069: error : virNetTLSContextCheckCertDN:418 : Certificate [session] owner does not match the hostname AA.BB.CC.DD <============= IP Address
2013-10-23 17:32:19.957+0000: 14069: warning : virNetTLSContextCheckCertificate:1102 : Certificate check failed Certificate [session] owner does not match the hostname AA.BB.CC.DD
2013-10-23 17:32:19.957+0000: 14069: error : virNetTLSContextCheckCertificate:1105 : authentication failed: Failed to verify peer's certificate

Examples

>>> "Certificate check failed Certificate" in libvirtd_log
True
>>> len(libvirtd_log.lines) # All lines, before filtering
8
>>> len(libvirtd_log.get('NetTLSContext')) # After filtering
3
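The behaviour used in the example — membership tests over the raw lines and a get() that filters them — can be sketched with a minimal stand-in class (a simplification: the real LogFileOutput.get returns richer structures than plain strings):

```python
class SimpleLog:
    """Minimal stand-in for the LogFileOutput interface used above."""

    def __init__(self, content):
        self.lines = content.splitlines()

    def __contains__(self, substring):
        return any(substring in line for line in self.lines)

    def get(self, substring):
        # Return every line containing the given substring.
        return [line for line in self.lines if substring in line]
```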
class insights.parsers.libvirtd_log.LibVirtdQemuLog(context)[source]

Bases: insights.core.LogFileOutput

Parse the /var/log/libvirt/qemu/*.log log file.

Note

Please refer to its super-class insights.core.LogFileOutput

Sample input from file /var/log/libvirt/qemu/bb912729-fa51-443b-bac6-bf4c795f081d.log:

2019-06-04 05:33:22.280743Z qemu-kvm: -vnc 10.xxx.xxx.xxx:0: Failed to start VNC server: Failed to bind socket: Cannot assign requested address
2019-06-04 05:33:22.285+0000: shutting down

Examples

>>> from datetime import datetime
>>> "shutting down" in libvirtd_qemu_log
True
>>> len(list(libvirtd_qemu_log.get_after(datetime(2019, 4, 26, 6, 55, 20))))
2
>>> libvirtd_qemu_log.file_name.strip('.log')  # Instance UUID
'bb912729-fa51-443b-bac6-bf4c795f081d'
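The get_after call in the example filters lines by their leading timestamp. A standalone sketch of that logic, assuming the ‘YYYY-MM-DD HH:MM:SS’ prefix shown in the sample input (fractional seconds and the timezone suffix are ignored):

```python
from datetime import datetime

def get_after(lines, when):
    """Return log lines whose leading timestamp is at or after `when`."""
    matched = []
    for line in lines:
        try:
            stamp = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        except ValueError:
            continue  # lines without a parseable timestamp are skipped
        if stamp >= when:
            matched.append(line)
    return matched
```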

Limits configuration - file /etc/security/limits.conf and others

The LimitsConf class parser provides a ‘rules’ list of parsed limits, a find_all method to find all the rules that match a given set of criteria, and other properties that make it easier to use the contents of the parser.

class insights.parsers.limits_conf.LimitsConf(context)[source]

Bases: insights.core.Parser

Parse the /etc/security/limits.conf and files in /etc/security/limits.d.

This parser reads the files and records the domain, type, item and value for each line. This is available as a big list of dictionaries in the ‘items’ property. Each item also contains the ‘file’ key, which denotes the file that this rule was read from.

Lines with too few or too many parts, or with misspellings in the value such as ‘unlimitied’, are stored in a bad_lines list property.

Use the find_all method to find all the limits that apply in a particular situation. Parameters are supplied as keyword arguments - for example, find_all(domain='root') will find all rules that apply to root (including wildcard rules). These rules are sorted by domain, type and item, and the most specific rule is used.

bad_lines

All unparseable, non-comment lines, in the order found.

Type

list

domains

all the domains found, in alphabetical order.

Type

list

rules

A list of dictionaries with the domain, type, item, value and file of each rule.

Type

list

Sample input:

# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     4096
root       soft    nproc     unlimited

Examples

>>> limits = shared[LimitsConf][0] # At the moment this is per filename
>>> len(limits.rules)
2
>>> limits.domains
['*', 'root']
>>> limits.rules[0] # note value is integer
{'domain': '*', 'type': 'soft', 'item': 'nproc', 'value': 4096, 'file': '/etc/security/limits.d/20-nproc.conf'}
>>> limits.find_all(domain='root')
[{'domain': '*', 'type': 'soft', 'item': 'nproc', 'value': 4096, 'file': '/etc/security/limits.d/20-nproc.conf'},
 {'domain': 'root', 'type': 'soft', 'item': 'nproc', 'value': 4096, 'file': '/etc/security/limits.d/20-nproc.conf'}]
>>> limits.find_all(item='data')
[]
find_all(**kwargs)[source]

Find all the rules that match the given parameters.

The three parameters that can be searched for are ‘domain’, ‘type’ and ‘item’. These are used as argument names in the keyword argument list. If no parameters are given, no matches are returned.

parse_content(content)[source]

This method must be implemented by classes based on this class.
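The rule matching described above can be sketched independently of insights: parse each non-comment line into a domain/type/item/value dict, then filter, treating a '*' domain as a wildcard (simplified; sorting by specificity and bad-line tracking are omitted, and the helper names are illustrative):

```python
def parse_limits(content, filename="/etc/security/limits.conf"):
    """Parse limits.conf-style lines into rule dictionaries."""
    rules = []
    for line in content.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split()
        if len(parts) != 4:
            continue  # a full parser would record these as bad_lines
        domain, ltype, item, value = parts
        rules.append({
            "domain": domain, "type": ltype, "item": item,
            "value": int(value) if value.lstrip("-").isdigit() else value,
            "file": filename,
        })
    return rules

def find_all(rules, **kwargs):
    """Return rules matching every given field; '*' domains match any domain."""
    if not kwargs:
        return []  # no parameters given: no matches, as documented above
    return [r for r in rules
            if all(r[k] == v or (k == "domain" and r[k] == "*")
                   for k, v in kwargs.items())]
```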

LogrotateConf - files /etc/logrotate.conf and others

Class to parse logrotate configuration files: /etc/logrotate.conf and /etc/logrotate.d/*

See: http://www.linuxmanpages.org/8/logrotate

class insights.parsers.logrotate_conf.LogrotateConf(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Class for parsing /etc/logrotate.conf and /etc/logrotate.d/* configuration files.

Sample logrotate configuration file:

# sample file
compress

/var/log/messages {
    rotate 5
    weekly
    postrotate
                /sbin/killall -HUP syslogd
    endscript
}

"/var/log/httpd/access.log" /var/log/httpd/error.log {
    rotate 5
    mail www@my.org
    size=100k
    sharedscripts
    postrotate
                /sbin/killall -HUP httpd
    endscript
}

/var/log/news/news.crit
/var/log/news/olds.crit  {
    monthly
    rotate 2
    olddir /var/log/news/old
    missingok
    postrotate
                kill -HUP `cat /var/run/inn.pid`
    endscript
    nocompress
}

Examples

>>> type(log_rt)
<class 'insights.parsers.logrotate_conf.LogrotateConf'>
>>> log_rt.options
['compress']
>>> log_rt.log_files
['/var/log/messages', '/var/log/httpd/access.log', '/var/log/httpd/error.log', '/var/log/news/news.crit', '/var/log/news/olds.crit']
>>> log_rt['compress']
True
>>> 'weekly' in log_rt['/var/log/messages']
True
>>> log_rt['/var/log/messages']['postrotate']
['/sbin/killall -HUP syslogd']
data

All parsed options and log files are stored in this dictionary

Type

dict

options

List of global options in the configuration file

Type

list

log_files

List of log files in the configuration file

Type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.
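The overall shape of a logrotate file — global options followed by brace-delimited blocks that may name several log files — can be sketched as follows (a simplification: postrotate/endscript scripts are treated as ordinary directives, and the helper name is illustrative):

```python
def parse_logrotate(content):
    """Split a logrotate config into global options and per-file blocks."""
    options, blocks = [], {}
    pending, in_block = [], False
    for raw in content.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        if in_block:
            if line == "}":
                pending, in_block = [], False
            else:
                for log_file in pending:
                    blocks[log_file].append(line)
        elif line.endswith("{"):
            # a block opener may carry one or more (possibly quoted) paths
            pending += line[:-1].replace('"', "").split()
            for log_file in pending:
                blocks.setdefault(log_file, [])
            in_block = True
        elif line.startswith(("/", '"')):
            # log file paths may span several lines before the '{'
            pending += line.replace('"', "").split()
        else:
            options.append(line)
    return options, blocks
```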

LpstatPrinters - command lpstat -p

Parses the output of lpstat -p, to get locally configured printers.

Currently available printer states are:

  • IDLE (PRINTER_STATUS_IDLE)

  • PROCESSING (PRINTER_STATUS_PROCESSING) -- printing

  • DISABLED (PRINTER_STATUS_DISABLED)

  • UNKNOWN (PRINTER_STATUS_UNKNOWN)

Examples

>>> from insights.parsers.lpstat import LpstatPrinters, PRINTER_STATUS_DISABLED
>>> from insights.tests import context_wrap
>>> LPSTAT_P_OUTPUT = '''
... printer idle_printer is idle.  enabled since Fri 20 Jan 2017 09:55:50 PM CET
... printer disabled_printer disabled since Wed 15 Feb 2017 12:01:11 PM EST -
...     reason unknown
... '''
>>> lpstat = LpstatPrinters(context_wrap(LPSTAT_P_OUTPUT))
>>> lpstat.printers
[{'status': 'IDLE', 'name': 'idle_printer'}, {'status': 'DISABLED', 'name': 'disabled_printer'}]
>>> lpstat.printer_names_by_status(PRINTER_STATUS_DISABLED)
['disabled_printer']
class insights.parsers.lpstat.LpstatPrinters(*args, **kwargs)[source]

Bases: insights.core.CommandParser

Class to parse lpstat -p command output.

Raises

ValueError -- Raised if any error occurs parsing the content.

parse_content(content)[source]

This method must be implemented by classes based on this class.

printer_names_by_status(status)[source]

Gives names of configured printers for a given status

Parameters

status (string) --

printers = None

List of locally configured printers; each entry is a dictionary with keys ‘name’ and ‘status’

Type

list
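The mapping from lpstat -p output lines to the states above can be sketched like this (a simplification: real lpstat wording varies across CUPS versions, and the status phrases matched here are assumptions based on the sample output):

```python
def parse_lpstat_p(content):
    """Derive a status for each printer line in `lpstat -p` output."""
    printers = []
    for line in content.splitlines():
        parts = line.split()
        if len(parts) < 3 or parts[0] != "printer":
            continue
        rest = " ".join(parts[2:])
        if rest.startswith("is idle"):
            status = "IDLE"
        elif "printing" in rest:
            status = "PROCESSING"
        elif rest.startswith("disabled"):
            status = "DISABLED"
        else:
            status = "UNKNOWN"
        printers.append({"name": parts[1], "status": status})
    return printers
```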

LsBoot - command ls -lanR /boot

The ls -lanR /boot command provides information for the listing of the /boot directory.

See the FileListing class for a more complete description of the available features of the class.

Sample directory listing:

/boot:
total 187380
dr-xr-xr-x.  3 0 0     4096 Mar  4 16:19 .
dr-xr-xr-x. 19 0 0     4096 Jul 14 09:10 ..
-rw-r--r--.  1 0 0   123891 Aug 25  2015 config-3.10.0-229.14.1.el7.x86_64

/boot/grub2:
total 36
drwxr-xr-x. 6 0 0  104 Mar  4 16:16 .
dr-xr-xr-x. 3 0 0 4096 Mar  4 16:19 ..
lrwxrwxrwx. 1 0 0     11 Aug  4  2014 menu.lst -> ./grub.conf
-rw-r--r--. 1 0 0   64 Sep 18  2015 device.map

Examples

>>> bootdir = shared[LsBoot]
>>> '/boot' in bootdir
True
>>> '/boot/grub' in bootdir
False
>>> bootdir.files_of('/boot')
['config-3.10.0-229.14.1.el7.x86_64']
>>> bootdir.dirs_of('/boot')
['.', '..', 'grub2']
>>> bootdir.dir_contains('/boot/grub2', 'menu.lst')
True
class insights.parsers.ls_boot.LsBoot(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parse the /boot directory listing using a standard FileListing parser.

LsDev - Command ls -lanR /dev

The ls -lanR /dev command provides information for the listing of the /dev directory.

Sample input is shown in the Examples. See FileListing class for additional information.

Examples

>>> LS_DEV = '''
... /dev:
... total 3
... brw-rw----.  1 0  6 253,   0 Aug  4 16:56 dm-0
... brw-rw----.  1 0  6 253,   1 Aug  4 16:56 dm-1
... brw-rw----.  1 0  6 253,  10 Aug  4 16:56 dm-10
... crw-rw-rw-.  1 0  5   5,   2 Aug  5  2016 ptmx
... drwxr-xr-x.  2 0  0        0 Aug  4 16:56 pts
... lrwxrwxrwx.  1 0  0       25 Oct 25 14:48 initctl -> /run/systemd/initctl/fifo
...
... /dev/rhel:
... total 0
... drwxr-xr-x.  2 0 0  100 Jul 25 10:00 .
... drwxr-xr-x. 23 0 0 3720 Jul 25 12:43 ..
... lrwxrwxrwx.  1 0 0    7 Jul 25 10:00 home -> ../dm-2
... lrwxrwxrwx.  1 0 0    7 Jul 25 10:00 root -> ../dm-0
... lrwxrwxrwx.  1 0 0    7 Jul 25 10:00 swap -> ../dm-1
...'''
>>> ls_dev = LsDev(context_wrap(LS_DEV))
>>> ls_dev
<insights.parsers.ls_dev.LsDev object at 0x7f287406a1d0>
>>> "/dev/rhel" in ls_dev
True
>>> ls_dev.files_of("/dev/rhel")
['home', 'root', 'swap']
>>> ls_dev.dirs_of("/dev/rhel")
['.', '..']
>>> ls_dev.specials_of("/dev/rhel")
[]
>>> ls_dev.listing_of("/dev/rhel").keys()
['home', 'root', 'swap', '..', '.']
>>> ls_dev.dir_entry("/dev/rhel", "home")
{'group': '0', 'name': 'home', 'links': 1, 'perms': 'rwxrwxrwx.',
'raw_entry': 'lrwxrwxrwx.  1 0 0    7 Jul 25 10:00 home -> ../dm-2', 'owner': '0',
'link': '../dm-2', 'date': 'Jul 25 10:00', 'type': 'l', 'size': 7}
>>> ls_dev.listing_of('/dev/rhel')['.']['type'] == 'd'
True
>>> ls_dev.listing_of('/dev/rhel')['home']['link']
'../dm-2'
class insights.parsers.ls_dev.LsDev(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses output of ls -lanR /dev command.

LsDisk - Command ls -lanR /dev/disk

The ls -lanR /dev/disk command provides information for the listing of the directories under /dev/disk/ .

Sample input is shown in the Examples. See FileListing class for additional information.

Examples

>>> LS_DISK = '''
... /dev/disk/by-id:
... total 0
... drwxr-xr-x. 2 0 0 360 Sep 20 09:36 .
... drwxr-xr-x. 5 0 0 100 Sep 20 09:36 ..
... lrwxrwxrwx. 1 0 0   9 Sep 20 09:36 ata-VBOX_CD-ROM_VB2-01700376 -> ../../sr0
... lrwxrwxrwx. 1 0 0   9 Sep 20 09:36 ata-VBOX_HARDDISK_VB4c56cb04-26932e6a -> ../../sdb
... lrwxrwxrwx. 1 0 0  10 Sep 20 09:36 ata-VBOX_HARDDISK_VB4c56cb04-26932e6a-part1 -> ../../sdb1
... lrwxrwxrwx. 1 0 0  10 Sep 20 09:36 scsi-SATA_VBOX_HARDDISK_VB4c56cb04-26932e6a-part1 -> ../../sdb1
...
... /dev/disk/by-path:
... total 0
... drwxr-xr-x. 2 0 0 160 Sep 20 09:36 .
... drwxr-xr-x. 5 0 0 100 Sep 20 09:36 ..
... lrwxrwxrwx. 1 0 0   9 Sep 20 09:36 pci-0000:00:0d.0-scsi-1:0:0:0 -> ../../sdb
... lrwxrwxrwx. 1 0 0  10 Sep 20 09:36 pci-0000:00:0d.0-scsi-1:0:0:0-part1 -> ../../sdb1
...
... /dev/disk/by-uuid:
... total 0
... drwxr-xr-x. 2 0 0 100 Sep 20 09:36 .
... drwxr-xr-x. 5 0 0 100 Sep 20 09:36 ..
... lrwxrwxrwx. 1 0 0  10 Sep 20 09:36 3ab50b34-d0b9-4518-9f21-05307d895f81 -> ../../dm-1
... lrwxrwxrwx. 1 0 0  10 Sep 20 09:36 51c5cf12-a577-441e-89da-bc93a73a1ba3 -> ../../sda1
... lrwxrwxrwx. 1 0 0  10 Sep 20 09:36 7b0068d4-1399-4ce7-a54a-3e2fc1232299 -> ../../dm-0
... '''
>>> from insights.tests import context_wrap
>>> ls_disk = LsDisk(context_wrap(LS_DISK))
>>> ls_disk
<__main__.LsDisk object at 0x7f674914c690>
>>> "/dev/disk/by-path" in ls_disk
True
>>> ls_disk.files_of("/dev/disk/by-path")
['pci-0000:00:0d.0-scsi-1:0:0:0', 'pci-0000:00:0d.0-scsi-1:0:0:0-part1']
>>> ls_disk.dirs_of("/dev/disk/by-path")
['.', '..']
>>> ls_disk.specials_of("/dev/disk/by-path")
[]
>>> ls_disk.listing_of("/dev/disk/by-path").keys()
['pci-0000:00:0d.0-scsi-1:0:0:0-part1', 'pci-0000:00:0d.0-scsi-1:0:0:0', '..', '.']
>>> ls_disk.dir_entry("/dev/disk/by-path", "pci-0000:00:0d.0-scsi-1:0:0:0")
{'group': '0', 'name': 'pci-0000:00:0d.0-scsi-1:0:0:0', 'links': 1, 'perms': 'rwxrwxrwx.',
'raw_entry': 'lrwxrwxrwx. 1 0 0   9 Sep 20 09:36 pci-0000:00:0d.0-scsi-1:0:0:0 -> ../../sdb', 'owner': '0',
'link': '../../sdb', 'date': 'Sep 20 09:36', 'type': 'l', 'size': 9}
>>> ls_disk.listing_of('/dev/disk/by-path')['.']['type'] == 'd'
True
>>> ls_disk.listing_of('/dev/disk/by-path')['pci-0000:00:0d.0-scsi-1:0:0:0']['link']
'../../sdb'
class insights.parsers.ls_disk.LsDisk(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses output of ls -lanR /dev/disk command.

DockerVolumesDir - command ls -lanR /var/lib/docker/volumes/

A standard directory listing parser using the FileListing parser class.

Given a listing of:

/var/lib/docker/volumes/:
total 4
drwx------. 3 0 0   77 Mar 15 10:50 .
drwx-----x. 9 0 0 4096 Nov 18 22:04 ..
drwxr-xr-x. 3 0 0   18 Mar 15 10:50 7f9d945c3b3352308a44878a5da9e6046d63e34fafbac36486f4b94f5d372b61

/var/lib/docker/volumes/7f9d945c3b3352308a44878a5da9e6046d63e34fafbac36486f4b94f5d372b61:
total 0
drwxr-xr-x. 3 0 0 18 Mar 15 10:50 .
drwx------. 3 0 0 77 Mar 15 10:50 ..
drwxr-xr-x. 2 0 0  6 Mar 15 10:50 _data

/var/lib/docker/volumes/7f9d945c3b3352308a44878a5da9e6046d63e34fafbac36486f4b94f5d372b61/_data:
total 0
drwxr-xr-x. 2 0 0  6 Mar 15 10:50 .
drwxr-xr-x. 3 0 0 18 Mar 15 10:50 ..

Examples

>>> docker_dirs = shared[DockerVolumesDir]
>>> '/var/lib/docker/volumes' in docker_dirs
True
>>> docker_dirs.dirs_of('/var/lib/docker/volumes')
['.', '..', '7f9d945c3b3352308a44878a5da9e6046d63e34fafbac36486f4b94f5d372b61']
>>> '/var/lib/docker/volumes/7f9d945c3b3352308a44878a5da9e6046d63e34fafbac36486f4b94f5d372b61' in docker_dirs
True
>>> docker_dirs.dirs_of('/var/lib/docker/volumes/7f9d945c3b3352308a44878a5da9e6046d63e34fafbac36486f4b94f5d372b61')
['.', '..', '_data']
class insights.parsers.ls_docker_volumes.DockerVolumesDir(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Read the directory for the docker volumes.

LsEdacMC - command ls -lan /sys/devices/system/edac/mc

The ls -lan /sys/devices/system/edac/mc command provides information for the listing of the /sys/devices/system/edac/mc directory. See the FileListing class for a more complete description of the available features of the class.

Sample ls -lan /sys/devices/system/edac/mc output:

/sys/devices/system/edac/mc:
total 90
drwxr-xr-x. 3 0 0 0 Jan 10 10:33 .
drwxr-xr-x. 3 0 0 0 Jan 10 10:33 ..
drwxr-xr-x. 2 0 0 0 Jan 10 10:33 power
drwxr-xr-x. 2 0 0 0 Jan 10 10:33 mc0
drwxr-xr-x. 2 0 0 0 Jan 10 10:33 mc1
drwxr-xr-x. 2 0 0 0 Jan 10 10:33 mc2

Examples

>>> '/sys/devices/system/edac/mc' in ls_edac_mc
True
>>> ls_edac_mc.dirs_of('/sys/devices/system/edac/mc') == ['.', '..', 'power', 'mc0', 'mc1', 'mc2']
True
class insights.parsers.ls_edac_mc.LsEdacMC(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parse the /sys/devices/system/edac/mc directory listing using a standard FileListing parser.

LsEtc - command ls -lanR /etc

The ls -lanR /etc command provides information for the listing of the /etc directory. See FileListing class for additional information.

Sample ls -lanR /etc output:

/etc/sysconfig:
total 96
drwxr-xr-x.  7 0 0 4096 Jul  6 23:41 .
drwxr-xr-x. 77 0 0 8192 Jul 13 03:55 ..
drwxr-xr-x.  2 0 0   41 Jul  6 23:32 cbq
drwxr-xr-x.  2 0 0    6 Sep 16  2015 console
-rw-------.  1 0 0 1390 Mar  4  2014 ebtables-config
-rw-r--r--.  1 0 0   72 Sep 15  2015 firewalld
lrwxrwxrwx.  1 0 0   17 Jul  6 23:32 grub -> /etc/default/grub

/etc/rc.d/rc3.d:
total 4
drwxr-xr-x.  2 0 0   58 Jul  6 23:32 .
drwxr-xr-x. 10 0 0 4096 Sep 16  2015 ..
lrwxrwxrwx.  1 0 0   20 Jul  6 23:32 K50netconsole -> ../init.d/netconsole
lrwxrwxrwx.  1 0 0   17 Jul  6 23:32 S10network -> ../init.d/network
lrwxrwxrwx.  1 0 0   15 Jul  6 23:32 S97rhnsd -> ../init.d/rhnsd

Examples

>>> "sysconfig" in ls_etc
False
>>> "/etc/sysconfig" in ls_etc
True
>>> len(ls_etc.files_of('/etc/sysconfig'))
3
>>> ls_etc.files_of("/etc/sysconfig")
['ebtables-config', 'firewalld', 'grub']
>>> ls_etc.dirs_of("/etc/sysconfig")
['.', '..', 'cbq', 'console']
>>> ls_etc.specials_of("/etc/sysconfig")
[]
>>> ls_etc.total_of("/etc/sysconfig")
96
>>> ls_etc.dir_entry('/etc/sysconfig', 'grub') == {'group': '0', 'name': 'grub', 'links': 1, 'perms': 'rwxrwxrwx.', 'raw_entry': 'lrwxrwxrwx.  1 0 0   17 Jul  6 23:32 grub -> /etc/default/grub', 'owner': '0', 'link': '/etc/default/grub', 'date': 'Jul  6 23:32', 'type': 'l', 'dir': '/etc/sysconfig', 'size': 17}
True
>>> ls_etc.files_of('/etc/rc.d/rc3.d')
['K50netconsole', 'S10network', 'S97rhnsd']
>>> sorted(ls_etc.listing_of("/etc/sysconfig").keys()) == sorted(['console', 'grub', '..', 'firewalld', '.', 'cbq', 'ebtables-config'])
True
>>> sorted(ls_etc.listing_of("/etc/sysconfig")['console'].keys()) == sorted(['group', 'name', 'links', 'perms', 'raw_entry', 'owner', 'date', 'type', 'dir', 'size'])
True
>>> ls_etc.listing_of("/etc/sysconfig")['console']['type']
'd'
>>> ls_etc.listing_of("/etc/sysconfig")['console']['perms']
'rwxr-xr-x.'
>>> ls_etc.dir_contains("/etc/sysconfig", "console")
True
>>> ls_etc.dir_entry("/etc/sysconfig", "console") == {'group': '0', 'name': 'console', 'links': 2, 'perms': 'rwxr-xr-x.', 'raw_entry': 'drwxr-xr-x.  2 0 0    6 Sep 16  2015 console', 'owner': '0', 'date': 'Sep 16  2015', 'type': 'd', 'dir': '/etc/sysconfig', 'size': 6}
True
>>> ls_etc.dir_entry("/etc/sysconfig", "grub")['type']
'l'
>>> ls_etc.dir_entry("/etc/sysconfig", "grub")['link']
'/etc/default/grub'
class insights.parsers.ls_etc.LsEtc(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses output of ls -lanR /etc command.

Lists all the firmware packages

Parsers included in this module are:

LsLibFW - command /bin/ls -lanR /lib/firmware

class insights.parsers.ls_lib_firmware.LsLibFW(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

This parser parses the output of the /bin/ls -lanR /lib/firmware command.

Typical output of the /bin/ls -lanR /lib/firmware command is:

/lib/firmware:
total 37592
drwxr-xr-x. 83 0 0    8192 Aug 14 02:43 .
dr-xr-xr-x. 26 0 0    4096 Aug 14 02:22 ..
drwxr-xr-x.  2 0 0      40 Aug 14 02:42 3com
lrwxrwxrwx.  1 0 0      16 Aug 14 02:42 a300_pfp.fw -> qcom/a300_pfp.fw
lrwxrwxrwx.  1 0 0      16 Aug 14 02:42 a300_pm4.fw -> qcom/a300_pm4.fw
drwxr-xr-x.  2 0 0      34 Aug 14 02:42 acenic
drwxr-xr-x.  2 0 0      50 Aug 14 02:42 adaptec
drwxr-xr-x.  2 0 0      73 Aug 14 02:42 advansys

/lib/firmware/3com:
total 84
drwxr-xr-x.  2 0 0    40 Aug 14 02:42 .
drwxr-xr-x. 83 0 0  8192 Aug 14 02:43 ..
-rw-r--r--.  1 0 0 24880 Jun  6 10:14 3C359.bin
-rw-r--r--.  1 0 0 44548 Jun  6 10:14 typhoon.bin

/lib/firmware/acenic:
total 160
drwxr-xr-x.  2 0 0    34 Aug 14 02:42 .
drwxr-xr-x. 83 0 0  8192 Aug 14 02:43 ..
-rw-r--r--.  1 0 0 73116 Jun  6 10:14 tg1.bin
-rw-r--r--.  1 0 0 77452 Jun  6 10:14 tg2.bin

Example

>>> type(lslibfw)
<class 'insights.parsers.ls_lib_firmware.LsLibFW'>
>>> lslibfw.files_of("/lib/firmware/bnx2x")
['bnx2x-e1-6.0.34.0.fw', 'bnx2x-e1-6.2.5.0.fw', 'bnx2x-e1-6.2.9.0.fw', 'bnx2x-e1-7.0.20.0.fw', 'bnx2x-e1-7.0.23.0.fw']
>>> lslibfw.dir_contains("/lib/firmware/bnx2x", "bnx2x-e1-6.0.34.0.fw")
True
>>> lslibfw.dirs_of("/lib/firmware")
['.', '..', '3com', 'acenic', 'adaptec', 'advansys']
>>> lslibfw.total_of("/lib/firmware")
37592

LsOcpCniOpenshiftSdn - command ls -l /var/lib/cni/networks/openshift-sdn

The ls -l /var/lib/cni/networks/openshift-sdn command returns the count of cni files and provides information for the listing of the /var/lib/cni/networks/openshift-sdn directory. Sample input is shown in the Examples. See FileListing class for additional information.

Sample directory list:

total 52
-rw-r--r--. 1 root root 64 Aug  5 23:26 10.130.0.102
-rw-r--r--. 1 root root 64 Aug  5 23:26 10.130.0.103
-rw-r--r--. 1 root root 64 Aug  6 22:52 10.130.0.116
-rw-r--r--. 1 root root 64 Aug  6 22:52 10.130.0.117
-rw-r--r--. 1 root root 64 Aug  5 06:59 10.130.0.15
-rw-r--r--. 1 root root 64 Aug  5 07:02 10.130.0.20
-rw-r--r--. 1 root root 12 Aug  6 22:52 last_reserved_ip.0

Examples

>>> ls_ocp_cni_openshift_sdn.files_of("/var/lib/cni/networks/openshift-sdn")
['10.130.0.102', '10.130.0.103', '10.130.0.116', '10.130.0.117', '10.130.0.15', '10.130.0.20', 'last_reserved_ip.0']
class insights.parsers.ls_ocp_cni_openshift_sdn.LsOcpCniOpenshiftSdn(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses output of ls -l /var/lib/cni/networks/openshift-sdn command.

LsOriginLocalVolumePods - command ls -l /var/lib/origin/openshift.local.volumes/pods

class insights.parsers.ls_origin_local_volumes_pods.LsOriginLocalVolumePods(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Class to parse the output of command ls -l /var/lib/origin/openshift.local.volumes/pods. See FileListing class for additional information.

The typical content is:

total 0
drwxr-x---. 5 root root 71 Oct 18 23:20 5946c1f644096161a1242b3de0ee5875
drwxr-x---. 5 root root 71 Oct 18 23:24 6ea3d5cd-d34e-11e8-a142-001a4a160152
drwxr-x---. 5 root root 71 Oct 18 23:31 77d6d959-d34f-11e8-a142-001a4a160152
drwxr-x---. 5 root root 71 Oct 18 23:24 7ad952a0-d34e-11e8-a142-001a4a160152
drwxr-x---. 5 root root 71 Oct 18 23:24 7b63e8aa-d34e-11e8-a142-001a4a160152

Examples

>>> ls_origin_local_volumes_pods.pods
['5946c1f644096161a1242b3de0ee5875', '6ea3d5cd-d34e-11e8-a142-001a4a160152', '77d6d959-d34f-11e8-a142-001a4a160152', '7ad952a0-d34e-11e8-a142-001a4a160152', '7b63e8aa-d34e-11e8-a142-001a4a160152']
pods

The list of pod UIDs under the directory /var/lib/origin/openshift.local.volumes/pods

Type

List

LsOsroot - command ls -lan /

The ls -lan / command provides information for only the / directory.

Sample input is shown in the Examples. See FileListing class for additional information.

Sample directory list:

total 5256
dr-xr-xr-x.  17 0 0     271 Apr  5 18:08 .
dr-xr-xr-x.  17 0 0     271 Apr  5 18:08 ..
-rw-r--r--.   1 0 0       0 Feb 25  2017 1
lrwxrwxrwx.   1 0 0       7 Feb 25  2017 bin -> usr/bin
dr-xr-xr-x.   3 0 0    4096 Feb 25  2017 boot
-rw-r--r--.   1 0 0 5168141 Oct 16  2017 channel-list
drwxr-xr-x.  21 0 0    3440 Apr 12 14:46 dev
drwxr-xr-x. 148 0 0    8192 Apr 18 09:17 etc
drwxr-xr-x.   5 0 0      37 Jul 31  2017 home
lrwxrwxrwx.   1 0 0       7 Feb 25  2017 lib -> usr/lib
lrwxrwxrwx.   1 0 0       9 Feb 25  2017 lib64 -> usr/lib64
drwxr-xr-x.   2 0 0       6 Mar 10  2016 media
drwxr-xr-x.   2 0 0       6 Mar 10  2016 mnt
drwxr-xr-x.   5 0 0      48 Mar 27 13:37 opt
dr-xr-xr-x. 265 0 0       0 Apr  6 02:07 proc
-rw-r--r--.   1 0 0  175603 Apr  5 18:08 .readahead
dr-xr-x---.  26 0 0    4096 Apr 18 09:17 root
drwxr-xr-x.  43 0 0    1340 Apr 18 09:17 run
lrwxrwxrwx.   1 0 0       8 Feb 25  2017 sbin -> usr/sbin
drwxr-xr-x.   2 0 0       6 Mar 10  2016 srv
dr-xr-xr-x.  13 0 0       0 Apr  5 18:07 sys
drwxrwxrwt.  40 0 0    8192 Apr 18 11:17 tmp
drwxr-xr-x.  13 0 0     155 Feb 25  2017 usr
drwxr-xr-x.  21 0 0    4096 Apr  6 02:07 var

Examples

>>> ls_osroot.listing_of("/")['tmp'] == {'group': '0', 'name': 'tmp', 'links': 40, 'perms': 'rwxrwxrwt.', 'raw_entry': 'drwxrwxrwt.  40 0 0    8192 Apr 18 11:17 tmp', 'owner': '0', 'date': 'Apr 18 11:17', 'type': 'd', 'dir': '/', 'size': 8192}
True
>>> ls_osroot.dir_entry("/", 'tmp')['perms']
'rwxrwxrwt.'
class insights.parsers.ls_osroot.LsOsroot(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses output of ls -lan / command.

LsRunSystemdGenerator - command ls -lan /run/systemd/generator

The ls -lan /run/systemd/generator command provides information for only the /run/systemd/generator directory.

Sample input is shown in the Examples. See FileListing class for additional information.

Sample directory list:

total 28
drwxr-xr-x.  6 0 0 260 Aug  5 07:35 .
drwxr-xr-x. 18 0 0 440 Aug  5 07:35 ..
-rw-r--r--.  1 0 0 254 Aug  5 07:35 boot.mount
-rw-r--r--.  1 0 0 259 Aug  5 07:35 boot-fake.mount
-rw-r--r--.  1 0 0 176 Aug  5 07:35 dev-mapper-rhel-swap.swap
drwxr-xr-x.  2 0 0 100 Aug  5 07:35 local-fs.target.requires
-rw-r--r--.  1 0 0 217 Aug  5 07:35 -.mount
drwxr-xr-x.  2 0 0  60 Aug  5 07:35 nfs-server.service.d
drwxr-xr-x.  2 0 0 100 Aug  5 07:35 remote-fs.target.requires
-rw-r--r--.  1 0 0 261 Aug  5 07:35 root-mnt_nfs3.mount
-rw-r--r--.  1 0 0 261 Aug  5 07:35 root-mnt-nfs1.mount
-rw-r--r--.  1 0 0 261 Aug  5 07:35 root-mnt-nfs2.mount
drwxr-xr-x.  2 0 0  60 Aug  5 07:35 swap.target.requires

Examples

>>> ls.files_of("/run/systemd/generator") == ['boot.mount', 'boot-fake.mount', 'dev-mapper-rhel-swap.swap', '-.mount', 'root-mnt_nfs3.mount', 'root-mnt-nfs1.mount', 'root-mnt-nfs2.mount']
True
>>> ls.dir_entry("/run/systemd/generator", '-.mount')['perms']
'rw-r--r--.'
class insights.parsers.ls_run_systemd_generator.LsRunSystemdGenerator(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses output of ls -lan /run/systemd/generator command.

LsSysFirmware - command ls /sys/firmware

The ls -lanR /sys/firmware command provides information for the listing of the /sys/firmware directory.

Sample input is shown in the Examples. See FileListing class for additional information.

Examples

>>> LS_SYS_FIRMWARE = '''
... /sys/firmware:
... total 0
... drwxr-xr-x.  5 0 0 0 Dec 22 17:56 .
... dr-xr-xr-x. 13 0 0 0 Dec 22 17:56 ..
... drwxr-xr-x.  5 0 0 0 Dec 22 17:56 acpi
... drwxr-xr-x.  3 0 0 0 Dec 22 17:57 dmi
... drwxr-xr-x. 10 0 0 0 Dec 22 17:57 memmap
...
... /sys/firmware/acpi:
... total 0
... drwxr-xr-x. 5 0 0    0 Dec 22 17:56 .
... drwxr-xr-x. 5 0 0    0 Dec 22 17:56 ..
... drwxr-xr-x. 6 0 0    0 Feb 10 15:54 hotplug
... drwxr-xr-x. 2 0 0    0 Feb 10 15:54 interrupts
... -r--r--r--. 1 0 0 4096 Feb 10 15:54 pm_profile
... drwxr-xr-x. 3 0 0    0 Dec 22 17:56 tables
... '''
>>> ls_sys_firmware = LsSysFirmware(context_wrap(LS_SYS_FIRMWARE))
>>> "acpi" in ls_sys_firmware
False
>>> "/sys/firmware/acpi" in ls_sys_firmware
True
>>> ls_sys_firmware.dirs_of("/sys/firmware")
['.', '..', 'acpi', 'dmi', 'memmap']
>>> ls_sys_firmware.files_of("/sys/firmware/acpi")
['pm_profile']
class insights.parsers.ls_sys_firmware.LsSysFirmware(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses output of ls -lanR /sys/firmware command.

LsUsrLib64 - command ls -lan /usr/lib64

The ls -lan /usr/lib64 command provides information for the listing of the /usr/lib64 directory.

See insights.core.FileListing class for additional information.

Sample directory list:

total 447460
dr-xr-xr-x. 150 0 0    77824 Jul 30 16:39 .
drwxr-xr-x.  13 0 0     4096 Apr 30  2017 ..
drwxr-xr-x.   3 0 0       20 Nov  3  2016 krb5
-rwxr-xr-x.   1 0 0   155464 Oct 28  2016 ld-2.17.so
drwxr-xr-x.   3 0 0       20 Jun 10  2016 ldb
lrwxrwxrwx.   1 0 0       10 Apr 30  2017 ld-linux-x86-64.so.2 -> ld-2.17.so
lrwxrwxrwx.   1 0 0       21 Apr 30  2017 libabrt_dbus.so.0 -> libabrt_dbus.so.0.0.1

Examples

>>> "krb5" in ls_usr_lib64
False
>>> "/usr/lib64" in ls_usr_lib64
True
>>> "krb5" in ls_usr_lib64.dirs_of('/usr/lib64')
True
>>> ls_usr_lib64.dir_entry('/usr/lib64', 'ld-linux-x86-64.so.2')['type']
'l'
class insights.parsers.ls_usr_lib64.LsUsrLib64(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses output of ls -lan /usr/lib64 command.

LsUsrSbin - command ls -ln /usr/sbin

The ls -ln /usr/sbin command provides information for the listing of the /usr/sbin directory.

Sample input is shown in the Examples. See FileListing class for additional information.

For ls_usr_sbin, it may collect a lot of files or directories that may not be necessary, so a default filter add_filter(Specs.ls_usr_sbin, “total”) has been added in this parser.

If an additional file or directory needs to be collected by this parser, please add the related filter to the corresponding code.


Sample added filter:

>>> add_filter(Specs.ls_usr_sbin, "accessdb")
>>> add_filter(Specs.ls_usr_sbin, "postdrop")

Sample directory list collected:

total 41472
-rwxr-xr-x. 1 0  0   11720 Mar 18  2014 accessdb
-rwxr-sr-x. 1 0 90  218552 Jan 27  2014 postdrop

Examples

>>> "accessdb" in ls_usr_sbin
False
>>> "/usr/sbin" in ls_usr_sbin
True
>>> ls_usr_sbin.dir_entry('/usr/sbin', 'accessdb')['type']
'-'
>>> ls_usr_sbin.dir_entry('/usr/sbin', 'postdrop')['type']
'-'
class insights.parsers.ls_usr_sbin.LsUsrSbin(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses output of ls -ln /usr/sbin command.

LsVarLibMongodb - command ls -la /var/lib/mongodb

The ls -la /var/lib/mongodb command provides information for the listing of the /var/lib/mongodb directory.

Sample input is shown in the Examples. See FileListing class for additional information.

Sample directory list:

total 6322200
drwxr-xr-x.  3 mongodb mongodb        256 Jun  7 10:07 .
drwxr-xr-x. 71 root    root          4096 Jun 22 10:35 ..
drwxr-xr-x.  2 mongodb mongodb         65 Jul 10 09:33 journal
-rw-------.  1 mongodb mongodb   67108864 Jul 10 09:32 local.0
-rw-------.  1 mongodb mongodb   16777216 Jul 10 09:32 local.ns

Examples

>>> "journal" in ls_var_lib_mongodb
False
>>> "/var/lib/mongodb" in ls_var_lib_mongodb
True
>>> ls_var_lib_mongodb.dir_entry('/var/lib/mongodb', 'journal')['type']
'd'
class insights.parsers.ls_var_lib_mongodb.LsVarLibMongodb(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses output of ls -la /var/lib/mongodb command.

List files and dirs under /var/lib/nova/instances

The parser classes in this module use the base parser classes CommandParser and FileListing to list files and directories.

Parsers included in this module are:

LsRVarLibNovaInstances - command ls -laR /var/lib/nova/instances

LsVarLibNovaInstances - command ls -laRZ /var/lib/nova/instances

class insights.parsers.ls_var_lib_nova_instances.LsRVarLibNovaInstances(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

The LsVarLibNovaInstances class does not show file sizes because the -Z flag omits them. This class parses the output of ls -laR /var/lib/nova/instances to provide a file listing that includes file sizes.

Note: This issue is not seen in GNU coreutils-8.29. When the coreutils package is updated to 8.29 on RHEL 7, this parser class can be deprecated.

Typical output of the ls -laR /var/lib/nova/instances command is:

/var/lib/nova/instances:
total 4
drwxr-xr-x. 5 nova nova  97 Feb 20  2017 .
drwxr-xr-x. 9 nova nova 111 Feb 17  2017 ..
drwxr-xr-x. 2 nova nova  54 Feb 17  2017 _base
-rw-r--r--. 1 nova nova  44 May 26  2017 compute_nodes
drwxr-xr-x. 2 nova nova  54 Feb 17  2017 e560e649-41fd-46a2-a3d2-5f4750ba2bb4
drwxr-xr-x. 2 nova nova  93 Feb 17  2017 locks

/var/lib/nova/instances/_base:
total 18176
drwxr-xr-x. 2 nova nova       54 Feb 17  2017 .
drwxr-xr-x. 5 nova nova       97 Feb 20  2017 ..
-rw-r--r--. 1 qemu qemu 41126400 May 26  2017 faf1184c098da91e90290a920b8fab1ee6e1d4c4

/var/lib/nova/instances/e560e649-41fd-46a2-a3d2-5f4750ba2bb4:
total 2104
drwxr-xr-x. 2 nova nova      54 Feb 17  2017 .
drwxr-xr-x. 5 nova nova      97 Feb 20  2017 ..
-rw-r--r--. 1 qemu qemu   48957 Feb 20  2017 console.log
-rw-r--r--. 1 qemu qemu 2097152 Feb 20  2017 disk
-rw-r--r--. 1 nova nova      79 Feb 17  2017 disk.info

/var/lib/nova/instances/locks:
total 0
drwxr-xr-x. 2 nova nova 93 Feb 17  2017 .
drwxr-xr-x. 5 nova nova 97 Feb 20  2017 ..
-rw-r--r--. 1 nova nova  0 Feb 17  2017 nova-faf1184c098da91e90290a920b8fab1ee6e1d4c4
-rw-r--r--. 1 nova nova  0 Feb 17  2017 nova-storage-registry-lock
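The per-directory sections in output like the sample above can be grouped with a small sketch (illustrative only; the FileListing base class performs the real parsing):

```python
def split_listings(lines):
    """Group `ls -laR` output into {directory: [entry lines]} using the
    "/path:" section headers. Sketch only, not the real FileListing logic."""
    listings = {}
    current = None
    for line in lines:
        if line.startswith('/') and line.endswith(':'):
            current = line[:-1]          # new directory section
            listings[current] = []
        elif line and not line.startswith('total') and current is not None:
            listings[current].append(line)
    return listings
```

Each entry line in a section can then be parsed individually for permissions, owner, size, and name.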

Example

>>> ls_r_var_lib_nova_instances.dir_entry('/var/lib/nova/instances/e560e649-41fd-46a2-a3d2-5f4750ba2bb4', 'console.log')['size']
48957
class insights.parsers.ls_var_lib_nova_instances.LsVarLibNovaInstances(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses the output of the ls -laRZ /var/lib/nova/instances command, which provides the SELinux directory listing of the /var/lib/nova/instances directory.

Typical output of the ls -laRZ /var/lib/nova/instances command is:

/var/lib/nova/instances/:
drwxr-xr-x. nova nova system_u:object_r:nova_var_lib_t:s0 .
drwxr-xr-x. nova nova system_u:object_r:nova_var_lib_t:s0 ..
drwxr-xr-x. nova nova system_u:object_r:nova_var_lib_t:s0 11415c6c-a2a5-45f0-a198-724246b96631
drwxr-xr-x. nova nova system_u:object_r:nova_var_lib_t:s0 _base
-rw-r--r--. nova nova system_u:object_r:nova_var_lib_t:s0 compute_nodes
drwxr-xr-x. nova nova system_u:object_r:nova_var_lib_t:s0 locks

/var/lib/nova/instances/11415c6c-a2a5-45f0-a198-724246b96631:
drwxr-xr-x. nova nova system_u:object_r:nova_var_lib_t:s0 .
drwxr-xr-x. nova nova system_u:object_r:nova_var_lib_t:s0 ..
-rw-------. root root system_u:object_r:nova_var_lib_t:s0 console.log
-rw-r--r--. qemu qemu system_u:object_r:svirt_image_t:s0:c92,c808 disk
-rw-r--r--. nova nova system_u:object_r:nova_var_lib_t:s0 disk.info

/var/lib/nova/instances/_base:
drwxr-xr-x. nova nova system_u:object_r:nova_var_lib_t:s0 .
drwxr-xr-x. nova nova system_u:object_r:nova_var_lib_t:s0 ..
-rw-r--r--. qemu qemu system_u:object_r:virt_content_t:s0 572dfdb7e1d9304342cbe1fd5e3da4ff2e55c7a6

/var/lib/nova/instances/locks:
drwxr-xr-x. nova nova system_u:object_r:nova_var_lib_t:s0 .
drwxr-xr-x. nova nova system_u:object_r:nova_var_lib_t:s0 ..
-rw-r--r--. nova nova system_u:object_r:nova_var_lib_t:s0 nova-572dfdb7e1d9304342cbe1fd5e3da4ff2e55c7a6
-rw-r--r--. nova nova system_u:object_r:nova_var_lib_t:s0 nova-storage-registry-lock

Examples

>>> '/var/lib/nova/instances/' in ls_var_lib_nova_instances
True
>>> ls_var_lib_nova_instances.files_of('/var/lib/nova/instances/11415c6c-a2a5-45f0-a198-724246b96631')
['console.log', 'disk', 'disk.info']
>>> ls_var_lib_nova_instances.listings['/var/lib/nova/instances/11415c6c-a2a5-45f0-a198-724246b96631']['entries']['console.log']['se_type'] != 'nova_var_lib_t'
False
>>> len(ls_var_lib_nova_instances.listings['/var/lib/nova/instances/locks'])
6
>>> ls_var_lib_nova_instances.dir_entry('/var/lib/nova/instances/locks', 'nova-storage-registry-lock')['raw_entry']
'-rw-r--r--. nova nova system_u:object_r:nova_var_lib_t:s0 nova-storage-registry-lock'

LsVarLog - command ls -laR /var/log

This parser reads the /var/log directory listings and uses the FileListing parser class to provide a common access to them.

Examples

>>> varlog = shared[LsVarLog]
>>> '/var/log' in varlog
True
>>> varlog.dir_contains('/var/log', 'messages')
True
>>> messages = varlog.dir_entry('/var/log', 'messages')
>>> messages['type']
'-'
>>> messages['perms']
'rw-------'
class insights.parsers.ls_var_log.LsVarLog(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

A parser for accessing “ls -laR /var/log”.

get_filepermissions(dir_name_where_to_search, dir_or_file_name_to_get)[source]

Returns a FilePermissions object, if found, for the specified dir or file name in the specified directory. The directory must be specified by the full path without trailing slash. The dir or file name to get must be specified by the name only (without path).

This is provided for several parsers which rely on this functionality, and may be deprecated and removed in the future.

Parameters
  • dir_name_where_to_search (string) -- Full path without trailing slash where to search.

  • dir_or_file_name_to_get (string) -- Name of the dir or file to get FilePermissions for.

Returns

The FilePermissions object if found, or None if not found.

Return type

FilePermissions

LsDVarOptMSSql - command ls -ld /var/opt/mssql

class insights.parsers.ls_var_opt_mssql.LsDVarOptMSSql(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses output of ls -ld /var/opt/mssql command.

The ls -ld /var/opt/mssql command provides information for the listing of the /var/opt/mssql directory. See FileListing class for additional information.

Sample ls -ld /var/opt/mssql output:

drwxrwx---. 5 root root 58 Apr 16 07:20 /var/opt/mssql

Examples

>>> content.listing_of('/var/opt/mssql').get('/var/opt/mssql').get('owner')
'root'
>>> content.listing_of('/var/opt/mssql').get('/var/opt/mssql').get('group')
'root'

LsVarOptMssqlLog - command ls -la /var/opt/mssql/log

This parser reads the /var/opt/mssql/log directory listings and uses the FileListing parser class to provide a common access to them.

class insights.parsers.ls_var_opt_mssql_log.LsVarOptMssqlLog(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

A parser for accessing “ls -la /var/opt/mssql/log”.

Examples

>>> '/var/opt/mssql/log' in ls_mssql_log
True
>>> ls_mssql_log.dir_contains('/var/opt/mssql/log', 'messages')
False

LsVarRun - command ls -lnL /var/run

The ls -lnL /var/run command provides information for the listing of the /var/run directory.

Sample input is shown in the Examples. See FileListing class for additional information.

Sample directory list:

total 20
drwx--x---.  2   0 984   40 May 15 09:29 openvpn
drwxr-xr-x.  2   0   0   40 May 15 09:30 plymouth
drwxr-xr-x.  2   0   0   40 May 15 09:29 ppp
drwxr-xr-x.  2  75  75   40 May 15 09:29 radvd
-rw-r--r--.  1   0   0    5 May 15 09:30 rhnsd.pid
drwxr-xr-x.  2   0   0   60 May 30 09:31 rhsm
drwx------.  2  32  32   40 May 15 09:29 rpcbind
-r--r--r--.  1   0   0    0 May 17 16:26 rpcbind.lock

Examples

>>> "rhnsd.pid" in ls_var_run
False
>>> "/var/run" in ls_var_run
True
>>> ls_var_run.dir_entry('/var/run', 'openvpn')['type']
'd'
class insights.parsers.ls_var_run.LsVarRun(context)[source]

Bases: insights.core.FileListing

Parses output of ls -lnL /var/run command.

LsVarSpoolClientmq - command ls -ln /var/spool/clientmqueue

The ls -ln /var/spool/clientmqueue command provides information for the listing of the /var/spool/clientmqueue directory.

Sample input is shown in the Examples. See FileListing class for additional information.

Sample directory list:

total 40
-rw-rw---- 1 51 51   4 Jul 11 02:32 dfw6B6Wilr002718
-rw-rw---- 1 51 51   4 Jul 11 02:32 dfw6B6WixJ002715
-rw-rw---- 1 51 51   4 Jul 11 02:32 dfw6B6WjP6002721
-rw-rw---- 1 51 51 817 Jul 11 03:35 dfw6B7Z8BB002906
-rw-rw---- 1 51 51 817 Jul 11 04:02 dfw6B822T0011150

Examples

>>> "dfw6B6Wilr002718" in ls_var_spool_clientmq
False
>>> "/var/spool/clientmqueue" in ls_var_spool_clientmq
True
>>> ls_var_spool_clientmq.dir_entry('/var/spool/clientmqueue', 'dfw6B6Wilr002718')['type']
'-'
class insights.parsers.ls_var_spool_clientmq.LsVarSpoolClientmq(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses output of ls -ln /var/spool/clientmqueue command.

LsVarSpoolPostfixMaildrop - command ls -ln /var/spool/postfix/maildrop

The ls -ln /var/spool/postfix/maildrop command provides information for the listing of the /var/spool/postfix/maildrop directory.

Sample input is shown in the Examples. See FileListing class for additional information.

Sample directory list:

total 20
-rwxr--r--. 1 0 90 258 Jul 11 15:54 55D6821C286
-rwxr--r--. 1 0 90 282 Jul 11 15:54 5852121C284
-rwxr--r--. 1 0 90 258 Jul 11 15:54 9FFEC21C287
-rwxr--r--. 1 0 90 258 Jul 11 15:54 E9A4521C285
-rwxr--r--. 1 0 90 258 Jul 11 15:54 EA60F21C288

Examples

>>> "55D6821C286" in ls_var_spool_postfix_maildrop
False
>>> "/var/spool/postfix/maildrop" in ls_var_spool_postfix_maildrop
True
>>> ls_var_spool_postfix_maildrop.dir_entry('/var/spool/postfix/maildrop', '55D6821C286')['type']
'-'
class insights.parsers.ls_var_spool_postfix_maildrop.LsVarSpoolPostfixMaildrop(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses output of ls -ln /var/spool/postfix/maildrop command.

LsVarTmp - command ls -ln /var/tmp

The ls -ln /var/tmp command provides information for the listing of the /var/tmp directory.

Sample input is shown in the Examples. See FileListing class for additional information.

Sample directory list:

/var/tmp:
total 20
drwxr-xr-x.  2 0 0 4096 Mar 26 02:25 a1
drwxr-xr-x.  2 0 0 4096 Mar 26 02:25 a2
drwxr-xr-x.  2 0 0 4096 Apr 28  2018 foreman-ssh-cmd-fc3f65c9-2b35-480d-87e3-1d971433d6ad

Examples

>>> "a1" in ls_var_tmp
False
>>> "/var/tmp" in ls_var_tmp
True
>>> ls_var_tmp.dir_entry('/var/tmp', 'a1')['type']
'd'
class insights.parsers.ls_var_tmp.LsVarTmp(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.FileListing

Parses output of ls -ln /var/tmp command.

Block device listing

Module for processing output of the lsblk command. Different information is provided by the lsblk command depending upon the options. Parsers included here are:

LSBlock - Command lsblk

The LSBlock class parses output of the lsblk command with no options.

LSBlockPairs - Command lsblk -P -o [columns...]

The LSBlockPairs class parses output of the lsblk -P -o [columns...] command.

These classes are based on BlockDevices, which implements all of the functionality except the parsing of command-specific information. Information is stored in the attribute self.rows, which is a list of BlockDevice objects.

Each BlockDevice object provides the functionality for one row of data from the command output. Data in a BlockDevice object is accessible by multiple methods. For example the NAME field can be accessed in the following four ways:

lsblk_info.rows[0].data['NAME']
lsblk_info.rows[0].NAME
lsblk_info.rows[0].name
lsblk_info.rows[0].get('NAME')
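All four access styles can be backed by one dict, as in this minimal sketch (a hypothetical Row class, not the actual BlockDevice implementation):

```python
class Row:
    """Dict-backed row supporting data['NAME'], row.NAME, row.name and row.get('NAME')."""

    def __init__(self, data):
        self.data = data
        for key, value in data.items():
            # Expose each column as both upper- and lower-case attributes.
            setattr(self, key, value)
            setattr(self, key.lower(), value)

    def get(self, key, default=None):
        # Works even for keys that are not valid Python identifiers.
        return self.data.get(key, default)
```

With row = Row({'NAME': 'vda'}), the expressions row.data['NAME'], row.NAME, row.name and row.get('NAME') all return 'vda'.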

Sample output of the lsblk command looks like:

NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda           252:0    0    9G  0 disk
|-vda1        252:1    0  500M  0 part /boot
`-vda2        252:2    0  8.5G  0 part
  |-rhel-root 253:0    0  7.6G  0 lvm  /
  |-rhel-swap 253:1    0  924M  0 lvm  [SWAP]
sda             8:0    0  500G  0 disk
`-sda1          8:1    0  500G  0 part /data

Note the hierarchy demonstrated in the name column. For instance, vda1 and vda2 are children of vda. Likewise, rhel-root and rhel-swap are children of vda2. This relationship is demonstrated in the PARENT_NAMES key, which is only present if the row is a child row. For example, the PARENT_NAMES value for rhel-root will be ['vda', 'vda2'], meaning that vda2 is the immediate parent and vda is the parent of vda2.
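The parent/child relationship can be recovered from the indentation of the NAME column; a rough sketch follows (it assumes two tree characters per nesting level, and is not necessarily the real parser's algorithm):

```python
import re

def parent_names(lines):
    """Map each device name to its list of ancestors, derived from the
    leading tree characters in lsblk's NAME column. Illustrative sketch."""
    stack = []     # names of the current ancestors, indexed by depth
    result = {}
    for line in lines:
        match = re.match(r"([ |`-]*)(\S+)", line)
        depth = len(match.group(1)) // 2   # tree prefix grows two chars per level
        name = match.group(2)
        stack = stack[:depth]
        result[name] = list(stack)
        stack.append(name)
    return result
```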

Also note that column names that are not valid Python property names have been changed. For example, MAJ:MIN has been changed to MAJ_MIN.
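The renaming rule can be sketched as a simple character substitution (an assumed simplification; the parser also applies the explicit renames listed later):

```python
def to_key(column_name):
    """Turn a column header into a valid Python identifier. Sketch only."""
    for ch in (':', '-', '.'):
        column_name = column_name.replace(ch, '_')
    return column_name
```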

Examples

>>> lsblk_info = shared[LSBlock]
>>> lsblk_info
<insights.parsers.lsblk.LSBlock object at 0x7f1f6a422d50>
>>> lsblk_info.rows
[disk:vda,
 part:vda1(/boot),
 part:vda2,
 lvm:rhel-root(/),
 lvm:rhel-swap([SWAP]),
 disk:sda,
 part:sda1(/data)]
>>> lsblk_info.rows[0]
disk:vda
>>> lsblk_info.rows[0].data
{'READ_ONLY': False, 'NAME': 'vda', 'REMOVABLE': False, 'MAJ_MIN': '252:0',
 'TYPE': 'disk', 'SIZE': '9G'}
>>> lsblk_info.rows[0].data['NAME']
'vda'
>>> lsblk_info.rows[0].NAME
'vda'
>>> lsblk_info.rows[0].name
'vda'
>>> lsblk_info.rows[0].data['MAJ_MIN']
'252:0'
>>> lsblk_info.rows[0].MAJ_MIN
'252:0'
>>> lsblk_info.rows[0].maj_min
'252:0'
>>> lsblk_info.rows[0].removable
False
>>> lsblk_info.rows[0].read_only
False
>>> lsblk_info.rows[2].data
{'READ_ONLY': False, 'PARENT_NAMES': ['vda'], 'NAME': 'vda2',
 'REMOVABLE': False, 'MAJ_MIN': '252:2', 'TYPE': 'part', 'SIZE': '8.5G'}
>>> lsblk_info.rows[2].parent_names
['vda']
>>> lsblk_info.rows[3].parent_names
['vda', 'vda2']
>>> lsblk_info.device_data['vda'] # Access devices by name
'disk:vda'
>>> lsblk_info.search(NAME='vda2')
[{'READ_ONLY': False, 'PARENT_NAMES': ['vda'], 'NAME': 'vda2',
 'REMOVABLE': False, 'MAJ_MIN': '252:2', 'TYPE': 'part', 'SIZE': '8.5G'}]
class insights.parsers.lsblk.BlockDevice(data)[source]

Bases: object

Class to contain one line of lsblk command information.

Contains all of the fields for a single line of lsblk output. Computed values are the column names except where the column name is an invalid variable name in Python such as MAJ:MIN. The get method is provided to access any value, including those that are not valid names in Python. All other valid names may be accessed as obj.column_name.

get(k, default=None)[source]

Get any value by keyword (column) name.

class insights.parsers.lsblk.BlockDevices(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class to contain all information from lsblk command.

Output of the lsblk command is contained in this base class. Data may be accessed via the iterator and each item represents a row of output from the command in dict format.

rows

List of BlockDevice objects for each row of the input. Input column name matches key name except any ‘-‘ is replaced with ‘_’ and the following names are changed:

Column Name     Key Name
MAJ:MIN         MAJ_MIN
RM              REMOVABLE
RO              READ_ONLY
Type

list of BlockDevice

device_data

A dictionary of BlockDevice objects keyed on the ‘NAME’ column (e.g. sda or rhel-swap)

Type

dict of BlockDevice

search(**kwargs)[source]

Returns a list of the block devices (in order) matching the given criteria. Keys are searched for directly - see the insights.parsers.keyword_search() utility function for more details. If no search parameters are given, no rows are returned. Keys need to be in all upper case, as they appear in the source data.

Examples

>>> blockdevs.search(NAME='sda1')
[{'NAME': '/dev/sda1', 'TYPE': 'disk', 'SIZE': '80G', ...}]
>>> blockdevs.search(TYPE='lvm')
[{'NAME': 'volgrp01-root', 'TYPE': 'lvm', 'SIZE': '15G', ...}...]
Parameters

**kwargs (dict) -- Dictionary of key-value pairs to search for.

Returns

The list of block devices matching the given criteria.

Return type

(list)

class insights.parsers.lsblk.LSBlock(context, extra_bad_lines=[])[source]

Bases: insights.parsers.lsblk.BlockDevices

Parse output of the lsblk command.

The specific lsblk commands are /bin/lsblk and /usr/bin/lsblk. Typical content of the lsblk command output looks like:

NAME                            MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                               8:0    0   80G  0 disk
|-sda1                            8:1    0  256M  0 part  /boot
`-sda2                            8:2    0 79.8G  0 part
  |-volgrp01-root (dm-0)        253:0    0   15G  0 lvm   /
  `-volgrp01-swap (dm-1)        253:1    0    8G  0 lvm   [SWAP]

Note

See the discussion of the key PARENT_NAMES above.

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.lsblk.LSBlockPairs(context, extra_bad_lines=[])[source]

Bases: insights.parsers.lsblk.BlockDevices

Parse output of the lsblk -P -o command.

lsblk command with -P -o options provides explicit selection of output columns in keyword=value pairs.

The specific lsblk commands are /bin/lsblk -P -o column_names and /usr/bin/lsblk -P -o column_names. Typical content of the lsblk command output looks like:

ALIGNMENT="0" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0"             FSTYPE="" GROUP="cdrom" KNAME="sr0" LABEL="" LOG-SEC="512" MAJ:MIN="11:0"             MIN-IO="512" MODE="brw-rw----" MODEL="DVD+-RW DVD8801 " MOUNTPOINT=""             NAME="sr0" OPT-IO="0" OWNER="root" PHY-SEC="512" RA="128" RM="1" RO="0"             ROTA="1" RQ-SIZE="128" SCHED="cfq" SIZE="1024M" STATE="running" TYPE="rom" UUID=""
ALIGNMENT="0" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0"             FSTYPE="" GROUP="disk" KNAME="sda" LABEL="" LOG-SEC="512" MAJ:MIN="8:0"             MIN-IO="512" MODE="brw-rw----" MODEL="WDC WD1600JS-75N" MOUNTPOINT=""             NAME="sda" OPT-IO="0" OWNER="root" PHY-SEC="512" RA="128" RM="0" RO="0"             ROTA="1" RQ-SIZE="128" SCHED="cfq" SIZE="149G" STATE="running" TYPE="disk" UUID=""
ALIGNMENT="0" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0"             FSTYPE="ext4" GROUP="disk" KNAME="sda1" LABEL="" LOG-SEC="512" MAJ:MIN="8:1"             MIN-IO="512" MODE="brw-rw----" MODEL="" MOUNTPOINT="/boot" NAME="sda1"             OPT-IO="0" OWNER="root" PHY-SEC="512" RA="128" RM="0" RO="0" ROTA="1"             RQ-SIZE="128" SCHED="cfq" SIZE="500M" STATE="" TYPE="part"             UUID="c7c4c016-8b00-4ded-bffb-5cc4719b7d45"
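Each line of KEY="value" pairs can be split with a small regex sketch (illustrative only; the real parser also applies the key renames listed below):

```python
import re

PAIR = re.compile(r'([A-Z:.\-]+)="([^"]*)"')

def parse_pairs(line):
    """Split one `lsblk -P` line into a dict of raw column name -> value."""
    return dict(PAIR.findall(line))
```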
rows

List of BlockDevice objects for each row of the input. Input column name matches key name except that any ‘-‘, ‘:’, or ‘.’ is replaced with ‘_’ and the following names are changed:

Column Name     Key Name
RM              removable
RO              read_only
Type

list of BlockDevice

failed_device_paths

Set of device names that failed to get device path

Type

set

Note

PARENT_NAMES is not available as a key because it is not listed in the LsBlockPairs output and cannot always be correctly inferred from the other data present.

parse_content(content)[source]

This method must be implemented by classes based on this class.

LsCPU - command lscpu

This module provides the information about the CPU architecture using the output of the command lscpu.

class insights.parsers.lscpu.LsCPU(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse the output of /usr/bin/lscpu. It uses the CommandParser as the base class. The parse_content method also converts plural keys for better accessibility.

Ex: “CPU(s)” is converted to “CPUs”
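The plural-key conversion can be sketched as below (an assumed simplification of the parser's rule):

```python
def normalize_key(key):
    """Convert lscpu's plural markers, e.g. "CPU(s)" -> "CPUs". Sketch only."""
    return key.replace("(s)", "s")
```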

Typical output of lscpu command is:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    2
Core(s) per socket:    1
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 60
Model name:            Intel Core Processor (Haswell, no TSX)
Stepping:              1
CPU MHz:               2793.530
BogoMIPS:              5587.06
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              4096K
NUMA node0 CPU(s):     0,1

Examples

>>> output.info['Architecture']
'x86_64'
>>> len(output.info)
22
>>> output.info['CPUs']
'2'
>>> output.info['Threads per core']
'2'
>>> output.info['Cores per socket']
'1'
>>> output.info['Sockets']
'1'
parse_content(content)[source]

This method must be implemented by classes based on this class.

Lsinitrd - command lsinitrd

This parser parses the filtered output of command lsinitrd and provides the info of listed files.

class insights.parsers.lsinitrd.Lsinitrd(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

A parser for command “lsinitrd”.

data

The key is the filename, the value is a dict describe the file’s info.

Type

dict

unparsed_lines

List of strings for unparsed lines.

Type

list

As the lsinitrd spec is set to filterable, the structure of the output may be broken. Hence, this parser parses only the file-listing-like lines in the output of lsinitrd, and also stores all the unparsed lines. If the other parts of the output structure are required in the future, an enhancement may be performed then.

Examples

>>> len(ls.data)
5
>>> assert ls.search(name__contains='kernel') == [
...    {'group': 'root', 'name': 'kernel/x86', 'links': 3, 'perms': 'rwxr-xr-x',
...     'raw_entry': 'drwxr-xr-x   3 root     root            0 Apr 20 15:58 kernel/x86',
...     'owner': 'root', 'date': 'Apr 20 15:58', 'type': 'd', 'dir': '', 'size': 0}
... ]
>>> "udev-rules" in ls.unparsed_lines
True
parse_content(content)[source]

This method must be implemented by classes based on this class.

search(**kwargs)[source]

Search the listed files for matching rows based on key-value pairs.

This uses the insights.parsers.keyword_search() function for searching; see its documentation for usage details. If no search parameters are given, no rows are returned.

Returns

A list of dictionaries of files that match the given search criteria.

Return type

list

Examples

>>> lsdev = ls.search(name__contains='dev')
>>> len(lsdev)
3
>>> dev_console = {
...     'type': 'c', 'perms': 'rw-r--r--', 'links': 1, 'owner': 'root', 'group': 'root',
...     'major': 5, 'minor': 1, 'date': 'Apr 20 15:57', 'name': 'dev/console', 'dir': '',
...     'raw_entry': 'crw-r--r--   1 root     root       5,   1 Apr 20 15:57 dev/console'
... }
>>> dev_console in lsdev
True
>>> 'dev/kmsg' in [l['name'] for l in lsdev]
True
>>> 'dev/null' in [l['name'] for l in lsdev]
True

LsMod - command /sbin/lsmod

This parser reads the output of /sbin/lsmod into a dictionary, keyed on the module name. Each item is a dictionary with three keys:

  • size - the size of the module’s memory footprint in bytes

  • depnum - the number of modules dependent on this module

  • deplist - the list of dependent modules as presented (i.e. as a string)

This dictionary is available in the data attribute.

The parser also provides pseudo-dictionary access so it can be checked for the existence of a module or module data retrieved as if it was a dictionary.
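The dictionary structure described above can be sketched as follows (illustrative only, not the actual parser):

```python
def parse_lsmod(lines):
    """Map module name -> {'size', 'depnum', 'deplist'} from lsmod output."""
    data = {}
    for line in lines[1:]:               # skip the "Module  Size  Used by" header
        parts = line.split(None, 3)      # the dependency list may be absent
        name, size, depnum = parts[0], int(parts[1]), int(parts[2])
        deplist = parts[3] if len(parts) > 3 else ''
        data[name] = {'size': size, 'depnum': depnum, 'deplist': deplist}
    return data
```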

Sample input:

Module                  Size  Used by
xt_CHECKSUM            12549  1
ipt_MASQUERADE         12678  3
nf_nat_masquerade_ipv4    13412  1 ipt_MASQUERADE
tun                    27141  3
ip6t_rpfilter          12546  1

Examples

>>> modules = shared[LsMod]
>>> 'ip6t_rpfilter' in modules
True
>>> 'bridge' in modules
False
>>> modules['tun']['deplist']
''
>>> modules['nf_nat_masquerade_ipv4']['deplist']
'ipt_MASQUERADE'
class insights.parsers.lsmod.LsMod(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse the output of /sbin/lsmod.

parse_content(content)[source]

This method must be implemented by classes based on this class.

Lsof - command /usr/sbin/lsof

This parser reads the output of the /usr/sbin/lsof command and makes each line available as a dictionary keyed on the fields in the lsof output (with names in upper case).

Because of the large quantity of output from this command, this class is based on the Scannable parser class. There are several ways to use this:

  • If you simply want to know whether a search matched, use the any method.

  • If you want all lines that match, use the collect method.

The way these scanner functions work is:

  1. You provide a function, which returns True if a match is found, and an attribute name.

  2. The parser runs every scanner function across every line of data read and successfully parsed.

  3. The attribute then contains the result of the match (True or False in the case of the any method, or the list of matching rows in the case of the collect method).

As an easier way of finding all the lines that match by key=value pairs, use the collect_keys method, giving the name of the scanner attribute to set and one or more ‘key=value’ pairs in the method call. This then returns the list of rows for which the data in all those keywords matched the respective given values. (Note: the SIZE/OFF column is searched for using the key SIZE_OFF - see example below)
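The key=value matching behaviour can be sketched as a plain filter over parsed rows (illustrative only; the real collect_keys also registers a scanner attribute on the class):

```python
def matching_rows(rows, **kwargs):
    """Keep rows where every given column equals the given value (boolean AND)."""
    return [row for row in rows
            if all(row.get(key) == value for key, value in kwargs.items())]
```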

Sample output:

COMMAND     PID  TID           USER   FD      TYPE             DEVICE  SIZE/OFF       NODE NAME
systemd       1                root  cwd       DIR              253,1      4096        128 /
systemd       1                root  rtd       DIR              253,1      4096        128 /
systemd       1                root  txt       REG              253,1   1230920    1440410 /usr/lib/systemd/systemd
systemd       1                root  mem       REG              253,1     37152  135529970 /usr/lib64/libnss_sss.so.2
abrt-watc  8619                root    0r      CHR                1,3       0t0       4674 /dev/null
wpa_suppl   641                root    0r      CHR                1,3       0t0       4674 /dev/null
polkitd     642             polkitd    0u      CHR                1,3       0t0       4674 /dev/null
polkitd     642             polkitd    1u      CHR                1,3       0t0       4674 /dev/null

Examples

>>> Lsof.any('systemd_commands', lambda x: 'systemd' in x['COMMAND'])
>>> Lsof.collect('polkitd_user', lambda x: x['USER'] == 'polkitd')
>>> Lsof.collect_keys('root_stdin', USER='root', FD='0r', SIZE_OFF='0t0')
>>> l = shared[Lsof]
>>> l.systemd_commands
True
>>> len(l.polkitd_user)
2
>>> l.polkitd_user[0]['DEVICE']
'1,3'
>>> len(l.root_stdin)
2
>>> l.root_stdin[0]['COMMAND']
'abrt-watc'
class insights.parsers.lsof.Lsof(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.Scannable

A parser for the output of /usr/sbin/lsof - determines the column widths from the first row and then puts the data in each row into a dictionary keyed on the column name and found by the locations of each column. Leading and trailing spaces are stripped from data.
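A much-simplified sketch of header-driven column slicing (it assumes left-aligned columns; the real lsof parser handles alignment more carefully):

```python
def parse_columns(lines):
    """Slice each row at the offsets where the header's column names start."""
    header = lines[0]
    names = header.split()
    starts = [header.index(name) for name in names]
    ends = starts[1:] + [None]
    return [
        {name: line[start:end].strip()
         for name, start, end in zip(names, starts, ends)}
        for line in lines[1:]
    ]
```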

classmethod collect_keys(result_key, **kwargs)[source]

Store a list of lines having keyword=value matches in the given attribute name.

Keyword argument names that exist as column names in the data are searched for in the lines of the file and stored in the result_key attribute. All columns must match (i.e. this is a boolean AND search).

Call this class method before using the class data.

Examples

collect_keys(‘root_block_devs’, USER=’root’, TYPE=’BLK’)

parse(content)[source]

Parse the content for the entire input file.

LsPci - Command lspci -k

To parse the PCI device information gathered from the /sbin/lspci -k command.

class insights.parsers.lspci.LsPci(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LogFileOutput

Class to parse the PCI device information gathered from the /sbin/lspci -k command.

Typical output of the lspci -k command is:

00:00.0 Host bridge: Intel Corporation 5500 I/O Hub to ESI Port (rev 13)
        Subsystem: Cisco Systems Inc Device 0101
00:01.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 1 (rev 13)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
00:02.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 2 (rev 13)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)
        Subsystem: Cisco Systems Inc Device 004a
        Kernel driver in use: ixgbe
        Kernel modules: ixgbe
06:00.0 Ethernet controller: Intel Corporation 82598EB 10-Gigabit AF Dual Port Network Connection (rev 01)
        Subsystem: Cisco Systems Inc Device 004a
        Kernel driver in use: ixgbe
        Kernel modules: ixgbe

Examples

>>> type(lspci)
<class 'insights.parsers.lspci.LsPci'>
>>> lspci.get("Intel Corporation")[0]['raw_message']
'00:00.0 Host bridge: Intel Corporation 5500 I/O Hub to ESI Port (rev 13)'
>>> len(lspci.get("Network controller"))
1
>>> "Centrino Advanced-N 6205" in lspci
True
>>> "0d:00.0" in lspci
False
>>> sorted(lspci.pci_dev_list)
['00:00.0', '00:01.0', '00:02.0', '03:00.0', '06:00.0']
>>> lspci.pci_dev_details('00:00.0')['Subsystem']
'Cisco Systems Inc Device 0101'
>>> lspci.pci_dev_details('00:00.0')['Dev_Details']
'Host bridge: Intel Corporation 5500 I/O Hub to ESI Port (rev 13)'
data

Dict where the keys are the device number and values are details of the device.

Type

dict

lines

List of details of each listed device, the same as the values of self.data

Type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.

pci_dev_details(dev_name)[source]

Returns the PCI device and its details.

Parameters

dev_name (string) -- PCI Bus Device Function number, e.g. ‘00:01.0’

Returns

Returns device details along with ‘Subsystem’, ‘Kernel Driver in Use’, and ‘Kernel Modules’. Returns None if the device doesn’t exist.

Return type

(dict)

property pci_dev_list

The list of PCI devices.

Lssap - command /usr/sap/hostctrl/exe/lssap

This module provides processing for the output of the lssap command on SAP systems. The spec handled by this module includes:

"lssap"                     : CommandSpec("/usr/sap/hostctrl/exe/lssap")

Class Lssap parses the output of the lssap command. Sample output of this command looks like:

- lssap version 1.0 -
==========================================
  SID   Nr   Instance    SAPLOCALHOST                        Version                 DIR_EXECUTABLE
  HA2|  16|       D16|         lu0417|749, patch 10, changelist 1698137|          /usr/sap/HA2/D16/exe
  HA2|  22|       D22|         lu0417|749, patch 10, changelist 1698137|          /usr/sap/HA2/D22/exe
  HA2|  50|       D50|         lu0417|749, patch 10, changelist 1698137|          /usr/sap/HA2/D50/exe
  HA2|  51|       D51|         lu0417|749, patch 10, changelist 1698137|          /usr/sap/HA2/D51/exe

Examples

>>> lssap.instances
['D16', 'D22', 'D50', 'D51']
>>> lssap.version('D51')
'749, patch 10, changelist 1698137'
>>> lssap.is_hana()
False
>>> lssap.data[3]['Instance']
'D51'
class insights.parsers.lssap.Lssap(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class to parse lssap command output.

Raises
data

List of dicts, where the keys in each dict are the column headers and each item in the list represents a SID.

Type

list

sid

List of the SIDs from the SID column.

Type

list

instances

List of instances running on the system.

Type

list

instance_types

List of instance types running on the system.

Type

list

is_ascs()[source]

bool: Is any SAP System Central Services instance detected?

is_hana()[source]

bool: Is any SAP HANA instance detected?

is_netweaver()[source]

bool: Is any SAP NetWeaver instance detected?

parse_content(content)[source]

This method must be implemented by classes based on this class.

version(instance)[source]

str: the Version column value corresponding to the given instance, or None if the instance is not found.

LsSCSI - command /usr/bin/lsscsi

This module provides processing for the output of the /usr/bin/lsscsi command.

class insights.parsers.lsscsi.LsSCSI(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

This parser reads the output of /usr/bin/lsscsi into a list of dictionaries. Each item is a dictionary with six keys:

  • HCTL - the scsi_host,channel,target_number,LUN tuple

  • Peripheral-Type - the SCSI peripheral type

  • Vendor - the vendor name

  • Model - the model name

  • Revision - the revision string

  • Primary-Device-Node - the primary device node name

data

List of the input lines, where each line is a dictionary having the keys identified above.

Type

list of dict

Parsing is based on http://sg.danny.cz/scsi/lsscsi.html.

Sample input:

[1:0:0:0]    storage IET      Controller       0001  -
[1:0:0:1]    cd/dvd  QEMU     QEMU DVD-ROM     2.5+  /dev/sr0
[1:0:0:2]    disk    IET      VIRTUAL-DISK     0001  /dev/sdb
[3:0:5:0]    tape    HP       C5713A           H910  /dev/st0

Examples

>>> lsscsi[0] == {'Model': 'Controller', 'Vendor': 'IET', 'HCTL': '[1:0:0:0]', 'Peripheral-Type': 'storage', 'Primary-Device-Node': '-', 'Revision': '0001'}
True
>>> lsscsi.device_nodes
['-', '/dev/sr0', '/dev/sdb', '/dev/st0']
>>> len(lsscsi.data)
4
>>> lsscsi[1]['Peripheral-Type']
'cd/dvd'
property device_nodes

All lines’ Primary-Device-Node values.

Type

list

property device_vendors

All lines’ Vendor values

Type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.
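
For illustration, one line of lsscsi output might be tokenised as in the sketch below. This is a simplified stand-in, not the parser's actual implementation: it assumes the vendor name contains no spaces, which real output does not guarantee.

```python
# Illustrative sketch only: split one line of `lsscsi` output into the six
# keys the LsSCSI parser documents.  Taking the middle tokens as the model
# assumes the vendor name itself contains no spaces (an assumption the real
# parser does not rely on).
def parse_lsscsi_line(line):
    parts = line.split()
    return {
        'HCTL': parts[0],
        'Peripheral-Type': parts[1],
        'Vendor': parts[2],
        'Model': ' '.join(parts[3:-2]),
        'Revision': parts[-2],
        'Primary-Device-Node': parts[-1],
    }

row = parse_lsscsi_line('[1:0:0:1]    cd/dvd  QEMU     QEMU DVD-ROM     2.5+  /dev/sr0')
# row['Model'] == 'QEMU DVD-ROM'
```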

LVDisplay - command /sbin/lvdisplay

The normal lvdisplay content looks like this:

Adding lvsapp01ap01:0 as an user of lvsapp01ap01_mlog
--- Volume group ---
VG Name               vgp01app
...
VG Size               399.98 GiB
VG UUID               JVgCxE-UY84-C0Gk-8Cmn-UGXu-UHo0-9Qa4Re
--- Logical volume ---
global/lvdisplay_shows_full_device_path not found in config: defaulting to 0
LV Path                /dev/vgp01app/lvsapp01ap01-old
LV Name                lvsapp01ap01-old
...
VG Name                vgp01app
--- Logical volume ---
global/lvdisplay_shows_full_device_path not found in config: defaulting to 0
LV Path                /dev/vgp01app/lvsapp01ap02
LV Name                lvsapp01ap02
...
VG Name                vgp01app

The data is compiled into two keys in the data attribute:

  • Logical volume: a list of logical volume dictionaries.

  • Volume group: a list of volume group dictionaries.

The keys in each dictionary correspond to the headings found in the output - for example, the keys in each Volume group list entry will include VG Name, VG Size, etc.

In addition, the debug key in both the data attribute dictionary and the Logical volume and Volume group dictionaries stores any debug or warning messages found while parsing the output for that section.

Logical volumes are also available as a dictionary in the lvs property and volume groups in the vgs property, both arranged by name. Both contain the same information as the associated list entry in the volumes dictionary.

Examples

>>> lvs = shared[LvDisplay]
>>> 'volumes' in lvs # direct access via LegacyItemAccess
True
>>> 'debug' in lvs.data['volumes'] # access via data property
True
>>> for lv in lvs.data['volumes']['Logical volume']:
...     print lv['LV Name']
...
lvsapp01ap01-old
lvsapp01ap02
>>> lvs.lvs['lvsapp01ap02']['VG Name'] # access to LVs by name
'vgp01app'
>>> lvs.vgs['vgp01app']['VG Size'] # access to VGs by name
'399.98 GiB'
class insights.parsers.lvdisplay.LvDisplay(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LegacyItemAccess

Read the output of /sbin/lvdisplay.

data

The full data parsed from the output of lvdisplay.

Type

dict

lvs

A dictionary of logical volumes by name.

Type

dict

vgs

A dictionary of volume groups by name.

Type

dict

parse_content(content)[source]

This method must be implemented by classes based on this class.

Logical Volume Management configuration and status

Parsers for lvm data based on output of various commands and file contents.

This module contains the classes that parse the output of the commands lvs, pvs, and vgs, and the contents of the file /etc/lvm/lvm.conf.

Pvs - command /sbin/pvs --nameprefixes --noheadings --separator='|' -a -o pv_all

PvsHeadings - command pvs -a -v -o +pv_mda_free,pv_mda_size,pv_mda_count,pv_mda_used_count,pe_count --config="global{locking_type=0}"

Vgs - command /sbin/vgs --nameprefixes --noheadings --separator='|' -a -o vg_all

VgsHeadings - command vgs -v -o +vg_mda_count,vg_mda_free,vg_mda_size,vg_mda_used_count,vg_tags --config="global{locking_type=0}"

Lvs - command /sbin/lvs --nameprefixes --noheadings --separator='|' -a -o lv_all

LvsHeadings - command /sbin/lvs -a -o +lv_tags,devices --config="global{locking_type=0}"

LvmConf - file /etc/lvm/lvm.conf

class insights.parsers.lvm.Lvm(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Base class for parsing LVM data in key=value format.

property locking_disabled

Returns True if any lines in input data indicate locking is disabled.

Type

bool

parse_content(content)[source]

This method must be implemented by classes based on this class.

property warnings

Returns a list of lines from input data containing warning/error/info strings.

Type

list

class insights.parsers.lvm.LvmConf(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

Parses contents of the /etc/lvm/lvm.conf file.

Sample Input:

locking_type = 1
#locking_type = 2
# volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
volume_list = [ "vg2", "vg3/lvol3", "@tag2", "@*" ]
# filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]

filter = [ "r/sda[0-9]*$/",  "a/sd.*/" ]
filter = [ "a/sda[0-9]*$/",  "r/sd.*/" ]
shell {
    history_size = 100
}

Examples

>>> lvm_conf_data = shared[LvmConf]
>>> lvm_conf_data.data
{"locking_type": 1, "volume_list": ["vg1", "vg2/lvol1", "@tag1", "@*"],
 "filter": ["a/sda[0-9]*$/", "r/sd.*/"], "history_size": 100}
>>> lvm_conf_data.get("locking_type")
1
parse_content(content)[source]

Returns a dict, e.g.: locking_type: 1, filter: [‘a/sda[0-9]*$/’, ‘r/sd.*/’], volume_list: [‘vg2’, ‘vg3/lvol3’, ‘@tag2’, ‘@*’]

class insights.parsers.lvm.LvmHeadings(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Base class for parsing LVM data in table format.

class insights.parsers.lvm.Lvs(context, extra_bad_lines=[])[source]

Bases: insights.parsers.lvm.Lvm

Parse the output of the /sbin/lvs --nameprefixes --noheadings --separator=’|’ -a -o lv_all command.

Parse each line in the output of lvs based on the lvs datasource in insights/specs/:

Output sample of lvs:

LVM2_LV_UUID='KX68JI-8ISN-YedH-ZYDf-yZbK-zkqE-3aVo6m'|LVM2_LV_NAME='docker-poolmeta'|LVM2_LV_FULL_NAME='rhel/docker-poolmeta'|...
LVM2_LV_UUID='123456-8ISN-YedH-ZYDf-yZbK-zkqE-123456'|LVM2_LV_NAME='rhel_root'|LVM2_LV_FULL_NAME='rhel/rhel_root'|LVM2_LV_PATH='/dev/rhel/docker-poolmeta'|...

Return a list, as shown below:

[
    {
        'LVM2_LV_UUID'      : 'KX68JI-8ISN-YedH-ZYDf-yZbK-zkqE-3aVo6m',
        'LVM2_LV_NAME'      : 'docker-poolmeta',
        'LVM2_LV_FULL_NAME'   : 'rhel/docker-poolmeta',
        ...
    },
    {
        'LVM2_LV_UUID'      : '123456-8ISN-YedH-ZYDf-yZbK-zkqE-123456',
        'LVM2_LV_NAME'      : 'rhel_root',
        'LVM2_LV_FULL_NAME'   : 'rhel/rhel_root',
        ...
    }
]
parse_content(content)[source]

This method must be implemented by classes based on this class.

vg(name)[source]

Return all logical volumes in the given volume group
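
A vg()-style lookup can be pictured as a simple filter over the parsed rows; a minimal sketch, assuming rows shaped like the documented key=value output above:

```python
# Sketch of a vg()-style lookup: filter parsed rows by volume group name.
# The LVM2_VG_NAME field name is taken from the documented output format;
# the row contents below are hypothetical.
def lvs_in_vg(rows, name):
    return [row for row in rows if row.get('LVM2_VG_NAME') == name]

rows = [
    {'LVM2_LV_NAME': 'docker-poolmeta', 'LVM2_VG_NAME': 'rhel'},
    {'LVM2_LV_NAME': 'home', 'LVM2_VG_NAME': 'fedora'},
]
# lvs_in_vg(rows, 'rhel') keeps only the 'docker-poolmeta' row
```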

class insights.parsers.lvm.LvsAll(context, extra_bad_lines=[])[source]

Bases: insights.parsers.lvm.Lvs

Parse the output of the /sbin/lvs --nameprefixes --noheadings --separator=’|’ -a -o lv_name,lv_size,lv_attr,mirror_log,vg_name,devices,region_size,data_percent,metadata_percent --config=’global{locking_type=0} devices{filter=[“a|.*|”]}’ command.

Uses the Lvs class defined in this module.

class insights.parsers.lvm.LvsHeadings(context, extra_bad_lines=[])[source]

Bases: insights.parsers.lvm.LvmHeadings

Process output of the command /sbin/lvs -a -o +lv_tags,devices --config=”global{locking_type=0}”.

Sample Input data:

WARNING: Locking disabled. Be careful! This could corrupt your metadata.
LV          VG      Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert LV Tags Devices
lv_app      vg_root -wi-ao---- 71.63g                                                             /dev/sda2(7136)
lv_home     vg_root -wi-ao----  2.00g                                                             /dev/sda2(2272)
lv_opt      vg_root -wi-ao----  5.00g                                                             /dev/sda2(2784)
lv_root     vg_root -wi-ao----  5.00g                                                             /dev/sda2(0)
lv_tmp      vg_root -wi-ao----  1.00g                                                             /dev/sda2(4064)
lv_usr      vg_root -wi-ao----  5.00g                                                             /dev/sda2(4320)
lv_usrlocal vg_root -wi-ao----  1.00g                                                             /dev/sda2(5600)
lv_var      vg_root -wi-ao----  5.00g                                                             /dev/sda2(5856)
swap        vg_root -wi-ao----  3.88g                                                             /dev/sda2(1280)
data

List of dicts, each dict containing one row of the table with column headings as keys.

Type

list

Examples

>>> lvs_info = shared[LvsHeadings]
>>> lvs_info.data[0]
{'LV': 'lv_app', 'VG': 'vg_root', 'Attr': '-wi-ao----', 'LSize': '71.63',
 'Pool': '', 'Origin': '', 'Data%': '', 'Meta%': '', 'Move': '', 'Log': '',
 'Cpy%Sync': '', 'Convert': '', 'LV_Tags': '', 'Devices': '/dev/sda2(7136)'}
>>> lvs_info.data[2]['LSize']
'2.00g'
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.lvm.Pvs(context, extra_bad_lines=[])[source]

Bases: insights.parsers.lvm.Lvm

Parse the output of the /sbin/pvs --nameprefixes --noheadings --separator=’|’ -a -o pv_all command.

Parse each line in the output of pvs based on the pvs datasource in insights/specs/. Output sample of pvs:

LVM2_PV_FMT=''|LVM2_PV_UUID=''|LVM2_DEV_SIZE='500.00m'|...
LVM2_PV_FMT='lvm2'|LVM2_PV_UUID='JvSULk-ileq-JbuS-GGgg-jkif-thuW-zvFBEl'|LVM2_DEV_SIZE='476.45g'|...

Returns a list like:

[
    {
        'LVM2_PV_FMT'    : '',
        'LVM2_PV_UUID'    : '',
        'LVM2_DEV_SIZE'   : '500.00m',
        ...
    },
    {
        'LVM2_PV_FMT'    : 'lvm2',
        'LVM2_PV_UUID'    : 'JvSULk-ileq-JbuS-GGgg-jkif-thuW-zvFBEl',
        'LVM2_DEV_SIZE'   : '476.45g',
        ...
    }
]

Since it is possible to have two PVs with the same name (for example, an unknown device), a unique key for each PV is created by joining the PV_NAME and PV_UUID fields with a ‘+’ character. This key is added to the dictionary as the PV_KEY field.
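
The key=value format and the PV_KEY construction described here could be sketched as follows. This is a simplified illustration that ignores quoting edge cases, not the parser's actual code:

```python
# Simplified sketch: parse one `pvs --nameprefixes` line into a dict, then
# build the unique PV_KEY by joining PV name and UUID with '+'.  Real
# output may contain quoted separator characters that this split ignores.
def parse_nameprefixes_line(line):
    row = {}
    for field in line.split('|'):
        if not field:
            continue
        key, _, value = field.partition('=')
        row[key] = value.strip("'")
    return row

row = parse_nameprefixes_line(
    "LVM2_PV_NAME='/dev/sda2'|LVM2_PV_UUID='JvSULk-ileq'|LVM2_DEV_SIZE='476.45g'"
)
row['PV_KEY'] = row['LVM2_PV_NAME'] + '+' + row['LVM2_PV_UUID']
# row['PV_KEY'] == '/dev/sda2+JvSULk-ileq'
```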

parse_content(content)[source]

This method must be implemented by classes based on this class.

vg(name)[source]

Return all physical volumes assigned to the given volume group

class insights.parsers.lvm.PvsAll(context, extra_bad_lines=[])[source]

Bases: insights.parsers.lvm.Pvs

Parse the output of the /sbin/pvs --nameprefixes --noheadings --separator=’|’ -a -o pv_all,vg_name --config=’global{locking_type=0} devices{filter=[“a|.*|”]}’ command.

Uses the Pvs class defined in this module.

class insights.parsers.lvm.PvsHeadings(context, extra_bad_lines=[])[source]

Bases: insights.parsers.lvm.LvmHeadings

Parses the output of the pvs -a -v -o +pv_mda_free,pv_mda_size,pv_mda_count,pv_mda_used_count,pe_count --config=”global{locking_type=0}” command.

Since it is possible to have two PVs with the same name (for example, an unknown device), a unique key for each PV is created by joining the PV_NAME and PV_UUID fields with a ‘+’ character. This key is added to the resulting dictionary as the PV_KEY field.

Sample input:

WARNING: Locking disabled. Be careful! This could corrupt your metadata.
  Scanning all devices to update lvmetad.
  No PV label found on /dev/loop0.
  No PV label found on /dev/loop1.
  No PV label found on /dev/sda1.
  No PV label found on /dev/fedora/root.
  No PV label found on /dev/sda2.
  No PV label found on /dev/fedora/swap.
  No PV label found on /dev/fedora/home.
  No PV label found on /dev/mapper/docker-253:1-2361272-pool.
  Wiping internal VG cache
  Wiping cache of LVM-capable devices
PV                                                    VG     Fmt  Attr PSize   PFree DevSize PV UUID                                PMdaFree  PMdaSize  #PMda #PMdaUse PE
/dev/fedora/home                                                  ---       0     0  418.75g                                               0         0      0        0      0
/dev/fedora/root                                                  ---       0     0   50.00g                                               0         0      0        0      0
/dev/fedora/swap                                                  ---       0     0    7.69g                                               0         0      0        0      0
/dev/loop0                                                        ---       0     0  100.00g                                               0         0      0        0      0
/dev/loop1                                                        ---       0     0    2.00g                                               0         0      0        0      0
/dev/mapper/docker-253:1-2361272-pool                             ---       0     0  100.00g                                               0         0      0        0      0
/dev/mapper/luks-7430952e-7101-4716-9b46-786ce4684f8d fedora lvm2 a--  476.45g 4.00m 476.45g FPLCRf-d918-LVL7-6e3d-n3ED-aiZv-EesuzY        0   1020.00k     1        1 121970
/dev/sda1                                                         ---       0     0  500.00m                                               0         0      0        0      0
/dev/sda2                                                         ---       0     0  476.45g                                               0         0      0        0      0
  Reloading config files
  Wiping internal VG cache
data

List of dicts, each dict containing one row of the table with column headings as keys.

Type

list

Examples

>>> pvs_data = shared[PvsHeadings]
>>> pvs_data[0]
{'PV': '/dev/fedora/home', 'VG': '', 'Fmt': '', 'Attr': '---', 'PSize': '0',
 'PFree': '0', 'DevSize': '418.75g', 'PV_UUID': '', 'PMdaFree': '0',
 'PMdaSize': '0', '#PMda': '0', '#PMdaUse': '0', 'PE': '0', 'PV_KEY': '/dev/fedora/home+no_uuid'}
>>> pvs_data[0]['PV']
'/dev/fedora/home'
parse_content(content)[source]

This method must be implemented by classes based on this class.

vg(name)[source]

Return all physical volumes assigned to the given volume group

class insights.parsers.lvm.Vgs(context, extra_bad_lines=[])[source]

Bases: insights.parsers.lvm.Lvm

Parse the output of the /sbin/vgs --nameprefixes --noheadings --separator=’|’ -a -o vg_all command.

Parse each line in the output of vgs based on the vgs datasource in insights/specs/. Output sample of vgs:

LVM2_VG_FMT='lvm2'|LVM2_VG_UUID='YCpusB-LEly-THGL-YXhC-t3q6-mUQV-wyFZrx'|LVM2_VG_NAME='rhel'|LVM2_VG_ATTR='wz--n-'|...
LVM2_VG_FMT='lvm2'|LVM2_VG_UUID='123456-LEly-THGL-YXhC-t3q6-mUQV-123456'|LVM2_VG_NAME='fedora'|LVM2_VG_ATTR='wz--n-'|...

Returns a list like:

[
    {
        'LVM2_PV_FMT'    : 'lvm2',
        'LVM2_VG_UUID'    : 'YCpusB-LEly-THGL-YXhC-t3q6-mUQV-wyFZrx',
        'LVM2_VG_NAME'   : 'rhel',
        ...
    },
    {
        'LVM2_PV_FMT'    : 'lvm2',
        'LVM2_VG_UUID'    : '123456-LEly-THGL-YXhC-t3q6-mUQV-123456',
        'LVM2_VG_NAME'   : 'fedora',
        ...
    }
]
class insights.parsers.lvm.VgsAll(context, extra_bad_lines=[])[source]

Bases: insights.parsers.lvm.Vgs

Parse the output of the /sbin/vgs --nameprefixes --noheadings --separator=’|’ -a -o vg_all --config=’global{locking_type=0} devices{filter=[“a|.*|”]}’ command.

Uses the Vgs class defined in this module.

class insights.parsers.lvm.VgsHeadings(context, extra_bad_lines=[])[source]

Bases: insights.parsers.lvm.LvmHeadings

Parses output of the vgs -v -o +vg_mda_count,vg_mda_free,vg_mda_size,vg_mda_used_count,vg_tags --config=”global{locking_type=0}” command.

Sample input:

WARNING: Locking disabled. Be careful! This could corrupt your metadata.
  Using volume group(s) on command line.
VG            Attr   Ext   #PV #LV #SN VSize   VFree    VG UUID                                VProfile #VMda VMdaFree  VMdaSize  #VMdaUse VG Tags
DATA_OTM_VG   wz--n- 4.00m   6   1   0   2.05t 1020.00m xK6HXk-xl2O-cqW5-2izb-LI9M-4fV0-dAzfcc              6   507.00k  1020.00k        6
ITM_VG        wz--n- 4.00m   1   1   0  16.00g    4.00m nws5dd-INe6-1db6-9U1N-F0G3-S1z2-5XTdO4              1   508.00k  1020.00k        1
ORABIN_OTM_VG wz--n- 4.00m   2   3   0 190.00g       0  hfJwg8-hset-YgUY-X6NJ-gkWE-EunZ-KuCXGP              2   507.50k  1020.00k        2
REDO_OTM_VG   wz--n- 4.00m   1   3   0  50.00g       0  Q2YtGy-CWKU-sEYj-mqHk-rbdP-Hzup-wi8jsf              1   507.50k  1020.00k        1
SWAP_OTM_VG   wz--n- 4.00m   1   1   0  24.00g    8.00g hAerzZ-U8QU-ICkc-xxCj-N2Ny-rWzq-pmTpWJ              1   508.00k  1020.00k        1
rootvg        wz--n- 4.00m   1   6   0  19.51g    1.95g p4tLLb-ikeo-Ankk-2xJ6-iHYf-D4E6-KFCFvr              1   506.50k  1020.00k        1
  Reloading config files
  Wiping internal VG cache
data

List of dicts, each dict containing one row of the table with column headings as keys.

Type

list

Examples

>>> vgs_info = shared[VgsHeadings]
>>> vgs_info.data[0]['VG']
'DATA_OTM_VG'
>>> vgs_info.data[2]['VSize']
'190.00g'
parse_content(content)[source]

This method must be implemented by classes based on this class.

insights.parsers.lvm.find_warnings(content)[source]

Look for lines containing warning/error/info strings instead of data.
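
A find_warnings-style filter can be sketched as below; the marker substrings used here are illustrative assumptions, not the function's actual list:

```python
# Rough sketch of warning-line detection: keep only lines containing any of
# a set of marker substrings.  MARKERS is a made-up example list; the real
# find_warnings() uses its own warning/error/info patterns.
MARKERS = ('WARNING', 'ERROR', 'Failed')

def sketch_find_warnings(content):
    return [line for line in content
            if any(marker in line for marker in MARKERS)]

lines = [
    'WARNING: Locking disabled. Be careful! This could corrupt your metadata.',
    "LVM2_VG_NAME='rhel'|LVM2_VG_ATTR='wz--n-'",
]
# sketch_find_warnings(lines) keeps only the WARNING line
```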

insights.parsers.lvm.map_keys(pvs, keys)[source]

Add human-readable key names to each dictionary while leaving the existing key names in place.
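
A map_keys-style translation might look like the sketch below; the mapping shown is a made-up example, not the parser's actual key table:

```python
# Sketch of map_keys(): add human-readable key names alongside the existing
# LVM2_* keys, leaving the originals untouched.  The mapping and row data
# used here are hypothetical.
def sketch_map_keys(rows, keys):
    for row in rows:
        for src, dst in keys.items():
            row[dst] = row.get(src)
    return rows

rows = [{'LVM2_PV_NAME': '/dev/sda2'}]
mapped = sketch_map_keys(rows, {'LVM2_PV_NAME': 'PV'})
# mapped[0] now carries both 'LVM2_PV_NAME' and 'PV'
```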

Manila configuration - file /etc/manila/manila.conf

The Manila configuration file is a standard ‘.ini’ file and this parser uses the IniConfigFile class to read it.

Sample configuration:

[DEFAULT]
osapi_max_limit = 1000
osapi_share_base_URL = <None>

use_forwarded_for = false
api_paste_config = api-paste.ini
state_path = /var/lib/manila

scheduler_topic = manila-scheduler

share_topic = manila-share

share_driver = manila.share.drivers.generic.GenericShareDriver

enable_v1_api = false
enable_v2_api = false

[cors]
allowed_origin = <None>
allow_credentials = true

expose_headers = Content-Type,Cache-Control,Content-Language,Expires,Last-Modified,Pragma
allow_methods = GET,POST,PUT,DELETE,OPTIONS
allow_headers = Content-Type,Cache-Control,Content-Language,Expires,Last-Modified,Pragma

Examples

>>> conf = shared[ManilaConf]
>>> conf.sections()
['DEFAULT', 'cors']
>>> 'cors' in conf
True
>>> conf.has_option('DEFAULT', 'share_topic')
True
>>> conf.get("DEFAULT", "share_topic")
"manila-share"
>>> conf.get("DEFAULT", "enable_v2_api")
"false"
>>> conf.getboolean("DEFAULT", "enable_v2_api")
False
>>> conf.getint("DEFAULT", "osapi_max_limit")
1000
class insights.parsers.manila_conf.ManilaConf(context)[source]

Bases: insights.core.IniConfigFile

Manila configuration parser class, based on the IniConfigFile class.

MariaDBLog - File /var/log/mariadb/mariadb.log

Module for parsing the log file for MariaDB

Typical content of mariadb.log file is:

161109  9:25:42 [Note] WSREP: Read nil XID from storage engines, skipping position init
161109  9:25:42 [Note] WSREP: wsrep_load(): loading provider library 'none'
161109  9:25:42 [Warning] Failed to setup SSL
161109  9:25:42 [Warning] SSL error: SSL_CTX_set_default_verify_paths failed
161109  9:25:42 [Note] WSREP: Service disconnected.
161109  9:25:43 [Note] WSREP: Some threads may fail to exit.
161109  9:25:43 [Note] WSREP: Read nil XID from storage engines, skipping position init
161109  9:25:43 [Note] WSREP: wsrep_load(): loading provider library 'none'
161109  9:25:43 [Warning] Failed to setup SSL
161109  9:25:43 [Warning] SSL error: SSL_CTX_set_default_verify_paths failed
161109  9:25:43 [Note] WSREP: Service disconnected.
161109  9:25:44 [Note] WSREP: Some threads may fail to exit.
161109 14:28:22 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
161109 14:28:22 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.OkURTZ' --pid-file='/var/lib/mysql/overcloud-controller-0.localdomain-recover.pid'
161109 14:28:22 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295

Examples

>>> mdb = shared[MariaDBLog]
>>> mdb.get('mysqld_safe')[0]['raw_message']
'161109 14:28:22 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql'
>>> 'SSL_CTX_set_default_verify_paths' in mdb
True
class insights.parsers.mariadb_log.MariaDBLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing /var/log/mariadb/mariadb.log file.

Note

Please refer to its super-class insights.core.LogFileOutput

MaxUID - command /bin/awk -F':' '{ if($3 > max) max = $3 } END { print max }' /etc/passwd

This module provides the MaxUID value gathered from the /etc/passwd file.

class insights.parsers.max_uid.MaxUID(context)[source]

Bases: insights.core.Parser

Class for parsing the MaxUID value from the /etc/passwd file returned by the command:

/bin/awk -F':' '{ if($3 > max) max = $3 } END { print max }' /etc/passwd

Typical output of the /etc/passwd file is:

root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
nobody:x:65534:65534:Kernel Overflow User:/:/sbin/nologin

Typical output of this parser is:

65534
Raises

Examples

>>> max_uid.value
65534
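
The awk one-liner above simply tracks the maximum third field of /etc/passwd; an equivalent Python sketch:

```python
# Python equivalent of the awk pipeline: the UID is the third ':'-separated
# field of each /etc/passwd line, and we keep the maximum.
def max_uid(passwd_lines):
    return max(int(line.split(':')[2]) for line in passwd_lines if line)

lines = [
    'root:x:0:0:root:/root:/bin/bash',
    'bin:x:1:1:bin:/bin:/sbin/nologin',
    'nobody:x:65534:65534:Kernel Overflow User:/:/sbin/nologin',
]
# max_uid(lines) == 65534
```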
parse_content(content)[source]

This method must be implemented by classes based on this class.

NormalMD5 - md5 checksums of specified binary or library files

Module for processing output of the md5sum command.

The name and md5 checksums of the specified file are stored as attributes.

class insights.parsers.md5check.NormalMD5(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class to parse the md5sum command information.

The output of this command contains two fields, the first is the md5 checksum and the second is the file name.

Sample output of the md5sum command:

d1e6613cfb62d3f111db7bdda39ac821  /usr/lib64/libsoftokn3.so

Examples

>>> type(md5info)
<class 'insights.parsers.md5check.NormalMD5'>
>>> md5info.filename
'/etc/localtime'
>>> md5info.md5sum
'7d4855248419b8a3ce6616bbc0e58301'
filename

Filename for which the MD5 checksum was computed.

Type

str

md5sum

MD5 checksum value.

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.
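
As a rough illustration, one line of md5sum output can be split into its two fields like this (a sketch, not the parser's actual code):

```python
# Sketch of parsing one line of `md5sum` output: checksum and filename are
# separated by whitespace (md5sum prints two spaces, or ' *' in binary
# mode, which the lstrip below tolerates).
def parse_md5sum_line(line):
    md5sum, _, filename = line.partition(' ')
    return md5sum, filename.lstrip('* ')

md5sum, filename = parse_md5sum_line(
    'd1e6613cfb62d3f111db7bdda39ac821  /usr/lib64/libsoftokn3.so'
)
# md5sum == 'd1e6613cfb62d3f111db7bdda39ac821'
# filename == '/usr/lib64/libsoftokn3.so'
```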

Mdstat - file /proc/mdstat

Represents the information in the /proc/mdstat file. Several examples of possible data contained in the file can be found on the MDstat kernel.org wiki page.

In particular, the discussion here will focus on initial extraction of information from lines such as:

Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid1 sdb2[1] sda2[0]
      136448 blocks [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
      129596288 blocks [2/2] [UU]

md3 : active raid5 sdl1[9] sdk1[8] sdj1[7] sdi1[6] sdh1[5] sdg1[4] sdf1[3] sde1[2] sdd1[1] sdc1[0]
      1318680576 blocks level 5, 1024k chunk, algorithm 2 [10/10] [UUUUUUUUUU]

The data contained in mdstat is represented with three top level members - personalities, components and mds.

Examples

>>> mdstat = shared[Mdstat]
>>> mdstat.personalities
['raid1', 'raid6', 'raid5', 'raid4']
>>> len(mdstat.components) # The individual component devices
14
>>> mdstat.components[0]['device_name']
'md1'
>>> sdb2 = mdstat.components[0]
>>> sdb2['component_name']
'sdb2'
>>> sdb2['active']
True
>>> sdb2['raid']
'raid1'
>>> sdb2['role']
1
>>> sdb2['up']
True
>>> sorted(mdstat.mds.keys()) # dictionary of MD devices by device name
['md1', 'md2', 'md3']
>>> mdstat.mds['md1']['active']
True
>>> len(mdstat.mds['md1']['devices']) # list of devices in this MD
2
>>> mdstat.mds['md1']['devices'][0]['component_name'] # device information
'sdb2'
class insights.parsers.mdstat.Mdstat(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Represents the information in the /proc/mdstat file.

personalities

A list of RAID levels the kernel currently supports

Type

list

components

A list of dicts of md component device information. Each of these dicts contains the following keys

  • device_name : string - name of the array device

  • active : boolean - True if the array is active, False if it is inactive.

  • component_name : string - name of the component device

  • raid : string - with the raid level, e.g., “raid1” for “md1”

  • role : int - raid role number

  • device_flag : str - device component status flag. Known values include ‘F’ (failed device), ‘S’, and ‘W’

  • up : boolean - True if the component device is up

  • auto_read_only : boolean - True if the array device is “auto-read-only”

  • blocks : the number of blocks in the device

  • level : the current RAID level, if found in the status line

  • chunk : the device chunk size, if found in the status line

  • algorithm : the current conflict resolution algorithm, if found in the status line

Type

list of dicts

mds

A dictionary keyed on the MD device name, with the following keys

  • name: Name of the MD device

  • active: Whether the MD device is active

  • raid: The RAID type string

  • devices: a list of the devices in this

  • blocks, level, chunk and algorithm - the same information given above per component device (if found)

Type

dict of dicts

parse_content(content)[source]

This method must be implemented by classes based on this class.

insights.parsers.mdstat.apply_upstring(upstring, component_list)[source]

Update the dictionaries resulting from parse_array_start with the “up” key based on the upstring returned from parse_upstring.

The function assumes that the upstring and component_list parameters passed in are from the same device array stanza of a /proc/mdstat file.

The function modifies component_list in place, adding or updating the value of the “up” key to True if there is a corresponding U in the upstring string, or to False if there is a corresponding _.

If the number of rows in component_list does not match the number of characters in upstring, an AssertionError is raised.

Parameters
  • upstring (str) -- String sequence of U and _ characters as determined by the parse_upstring method

  • component_list (list) -- List of dictionaries output from the parse_array_start method.
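
The in-place update described above can be sketched as follows. This is a simplified stand-in for the real apply_upstring, including the documented length check:

```python
# Sketch of apply_upstring(): walk the up/down indicator string and set the
# "up" key on the matching component dict, in place.  The assert mirrors
# the documented AssertionError on a length mismatch.
def sketch_apply_upstring(upstring, component_list):
    assert len(upstring) == len(component_list)
    for flag, component in zip(upstring, component_list):
        component['up'] = (flag == 'U')

components = [{'component_name': 'sdb2'}, {'component_name': 'sda2'}]
sketch_apply_upstring('U_', components)
# components[0]['up'] is True, components[1]['up'] is False
```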

insights.parsers.mdstat.parse_array_start(md_line)[source]

Parse the initial line of a device array stanza in /proc/mdstat.

Lines are expected to be like:

md2 : active raid1 sdb3[1] sda3[0]

If they do not have this format, an error will be raised since it would be considered an unexpected parsing error.

Parameters

md_line (str) -- A single line from the start of a device array stanza from a /proc/mdstat file.

Returns

A list of dictionaries, one dictionary for each component device making up the array.

insights.parsers.mdstat.parse_array_status(line, components)[source]

Parse the array status line, e.g.:

1318680576 blocks level 5, 1024k chunk, algorithm 2 [10/10] [UUUUUUUUUU]

This retrieves the following pieces of information:

  • blocks - (int) number of blocks in the whole MD device (always present)

  • level - (int) if found, the present RAID level

  • chunksize - (str) if found, the size of the data chunk in kilobytes

  • algorithm - (int) if found, the current algorithm in use.

Because of the way data is stored per-component and not per-array, this then puts the above keys into each of the component dictionaries in the list we’ve been given.

Sample data:

1250241792 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUUU]
1465151808 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
136448 blocks [2/2] [UU]
6306 blocks super external:imsm

insights.parsers.mdstat.parse_personalities(personalities_line)[source]

Parse the “personalities” line of /proc/mdstat.

Lines are expected to be like:

Personalities : [linear] [raid0] [raid1] [raid5] [raid4] [raid6]

If they do not have this format, an error will be raised since it would be considered an unexpected parsing error.

Parameters

personalities_line (str) -- A single “Personalities” line from a /proc/mdstat file.

Returns

A list of raid “personalities” listed on the line.

insights.parsers.mdstat.parse_upstring(line)[source]

Parse the subsequent lines of a device array stanza in /proc/mdstat for the “up” indicator string.

Lines are expected to be like:

129596288 blocks [2/2] [UU]

or

1318680576 blocks level 5, 1024k chunk, algorithm 2 [10/10] [UUU_UUUUUU]

In particular, this method searches for a string like [UU], which indicates whether each component device is up, U, or down, _.

Parameters

line (str) -- A single line from a device array stanza.

Returns

The string containing a series of U and _ characters if found in the line, or None if the indicator string is not found.
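
The search described above can be sketched with a regular expression (an illustrative stand-in, not the function's actual implementation):

```python
import re

# Sketch of the [UU] search: find a bracketed run of 'U' and '_' characters
# and return it without the brackets, or None when no such run appears.
# Digit/slash groups like [2/2] do not match the [U_]+ character class.
def sketch_parse_upstring(line):
    match = re.search(r'\[([U_]+)\]', line)
    return match.group(1) if match else None

# sketch_parse_upstring('129596288 blocks [2/2] [UU]') == 'UU'
```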

meminfo - file /proc/meminfo

This suite of parsers deals with various parts of the contents of /proc/meminfo. They store the data for many different groupings of memory usage information as key-value pairs and attributes. Key strings are converted to lower case, and all values are stored as integers. Data stored in kilobytes (i.e. everything but the hugepage values) are converted to bytes by multiplying by 1024.

All keys are stored in the parser class as properties. The information relevant to particular uses of memory are also available in the following properties:

swap: for swap related information:

  • total - the SwapTotal information

  • free - the SwapFree information

  • cached - the SwapCached information

  • used - total - (free + cached)

anon: for anonymous page information:

  • active - the Active(anon) information

  • inactive - the Inactive(anon) information

  • pages - the AnonPages information

file: for file mapping information:

  • active - the Active(file) information

  • inactive - the Inactive(file) information

slab: for SLAB allocator information:

  • total - the Slab information

  • reclaimable - the SReclaimable information

  • unreclaimable - the SUnreclaim information

huge_pages: for HugePage allocator information:

  • total - the Hugepages_Total information

  • free - the Hugepages_Free information

  • reserved - the Hugepages_Rsvd information

  • surplus - the Hugepages_Surp information

  • size - the HugepageSize information

  • anon - the AnonHugepages information

huge_pages also contains two properties to help parsers determine whether huge pages are in use:

  • using - are huge pages in use? Is Hugepages_Total > 0?

  • using_transparent - are transparent huge pages in use? Is AnonHugePages > 0?

commit: for memory overcommit information:

  • total - the Committed_As information

  • limit - the CommitLimit information

vmalloc: for virtual memory allocation information:

  • total - the VMAllocTotal information

  • used - the VMAllocUsed information

  • chunk - the VMAllocChunk information

cma: the CMA information:

  • total - the CMAllocTotal information

  • free - the CMAllocFree information

direct_map: the direct memory map information

  • kb - the DirectMap4K information

  • mb - the DirectMap2M information

  • gb - the DirectMap1G information
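
The key normalisation and kilobyte conversion described above can be sketched as follows (a simplified, hypothetical helper; the real parser also normalises keys such as Active(anon)):

```python
def parse_meminfo(content):
    """Parse /proc/meminfo-style lines into {lowercase_key: bytes}.

    Simplified sketch: keys are lower-cased and values reported in
    'kB' are multiplied by 1024, as described above.
    """
    data = {}
    for line in content:
        key, _, value = line.partition(":")
        parts = value.split()
        if not parts:
            continue
        number = int(parts[0])
        if len(parts) > 1 and parts[1] == "kB":
            number *= 1024
        data[key.strip().lower()] = number
    return data

print(parse_meminfo(["MemTotal:        8009912 kB", "HugePages_Total:       0"]))
# {'memtotal': 8202149888, 'hugepages_total': 0}
```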

Sample data:

MemTotal:        8009912 kB
MemFree:          538760 kB
MemAvailable:    6820236 kB
Buffers:          157048 kB
Cached:          4893932 kB
SwapCached:          120 kB
Active:          2841500 kB
Inactive:        2565560 kB
Active(anon):     311596 kB
Inactive(anon):   505800 kB
Active(file):    2529904 kB
Inactive(file):  2059760 kB
...

Examples

>>> mem = shared[MemInfo]
>>> mem.data['memtotal'] # Old style accessor
8202149888
>>> mem.total # New property-based accessor
8202149888
>>> mem.used # Calculated
7650459648
>>> mem.swap.total
3221221376
>>> mem.swap.free
3211624448
>>> mem.swap.used # Calculated
9474048

class insights.parsers.meminfo.MemInfo(context)[source]

Bases: insights.core.Parser

Meminfo field names are wildly inconsistent (imho). This class attempts to bring a bit of order to the chaos. All values are in bytes.

KB describing /proc/meminfo

https://access.redhat.com/solutions/406773

parse_content(content)[source]

This method must be implemented by classes based on this class.

Messages file /var/log/messages

class insights.parsers.messages.Messages(context)[source]

Bases: insights.core.Syslog

Read the /var/log/messages file.

Note

Please refer to its super-class insights.core.Syslog for more details.

Sample log lines:

May 18 15:13:34 lxc-rhel68-sat56 jabberd/sm[11057]: session started: jid=rhn-dispatcher-sat@lxc-rhel6-sat56.redhat.com/superclient
May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: --> Wrapper Started as Daemon
May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: Launching a JVM...
May 18 15:24:28 lxc-rhel68-sat56 yum[11597]: Installed: lynx-2.8.6-27.el6.x86_64
May 18 15:36:19 lxc-rhel68-sat56 yum[11954]: Updated: sos-3.2-40.el6.noarch

Note

Because /var/log/messages timestamps by default have no year, the year of each log line is inferred from the year in the timestamp you supply when searching. This also works across December/January crossovers.
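
The December/January workaround mentioned in the note can be sketched as follows (a hypothetical simplification of the year-inference logic, not the library's code):

```python
from datetime import datetime

def infer_year(log_month, reference):
    """Pick a year for a year-less syslog timestamp: if the log month
    is December but the reference date is January, the line must come
    from the previous year (illustrative sketch)."""
    if log_month == 12 and reference.month == 1:
        return reference.year - 1
    return reference.year

print(infer_year(12, datetime(2020, 1, 2)))  # 2019
print(infer_year(5, datetime(2020, 6, 1)))   # 2020
```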

Examples

>>> Messages.filters.append('wrapper')
>>> Messages.token_scan('daemon_start', 'Wrapper Started as Daemon')
>>> msgs = shared[Messages]
>>> len(msgs.lines)
>>> wrapper_msgs = msgs.get('wrapper') # Can only rely on filtered lines being present
>>> wrapper_msgs[0]
{'timestamp': 'May 18 15:13:36', 'hostname': 'lxc-rhel68-sat56',
 'procname': 'wrapper[11375]', 'message': '--> Wrapper Started as Daemon',
 'raw_message': 'May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: --> Wrapper Started as Daemon'
}
>>> msgs.daemon_start # Token set if matching lines present in logs
True

MistralExecutorLog - file /var/log/mistral/executor.log

class insights.parsers.mistral_log.MistralExecutorLog(context)[source]

Bases: insights.core.LogFileOutput

Parse the /var/log/mistral/executor.log file.

Provide access to mistral executor log using the LogFileOutput parser class.

Typical content of executor.log file is:

2019-03-01 13:54:41.091 26749 DEBUG oslo_concurrency.lockutils [-] Acquired semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
2019-03-01 13:54:41.091 26749 DEBUG oslo_concurrency.lockutils [-] Releasing semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
2019-03-01 13:54:41.092 26749 DEBUG oslo_service.service [-] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python2.7/site-packages/oslo_service/service.py:303
2019-03-01 13:54:41.133 26749 DEBUG oslo_concurrency.lockutils [-] Lock "service_coordinator" released by "mistral.service.coordination.register_membership" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2019-03-01 13:56:05.329 26749 INFO mistral.executors.executor_server [req-a1d40531-b3b9-4fdf-802f-0371bf364551 89665ff87df144128ada2b8c59ec62f1 e4d0868a0715411fa1481f8ccfb61414 - default default] Received RPC request 'run_action'[action_ex_id=7c2c79ba-d7c4-4c4d-9e51-684bab00e6a9, action_cls_str=mistral.actions.std_actions.NoOpAction, action_cls_attrs={}, params={}, timeout=None]
2019-03-01 14:36:55.134 26749 ERROR mistral.executors.default_executor ActionException: HeatAction.stacks.get failed: ERROR: The Stack (overcloud) could not be found.

Examples

>>> type(executor_log)
<class 'insights.parsers.mistral_log.MistralExecutorLog'>
>>> executor_log.get('mistral.executors.default_executor')[0].get('raw_message')
'2019-03-01 14:36:55.134 26749 ERROR mistral.executors.default_executor ActionException: HeatAction.stacks.get failed: ERROR: The Stack (overcloud) could not be found.'
>>> from datetime import datetime
>>> len(list(executor_log.get_after(datetime(2019, 3, 1, 13, 56, 0))))
2

Mlx4Port - file /sys/bus/pci/devices/*/mlx4_port[0-9]

This module provides processing for the contents of each file matching the glob spec /sys/bus/pci/devices/*/mlx4_port[0-9].

Sample contents of this file looks like:

ib

or:

eth

insights.parsers.mlx4_port.name

The mlx4 port name.

Type

str

insights.parsers.mlx4_port.contents

List of string values representing each line in the file.

Type

list

Examples

>>> type(mlx4_port)
<class 'insights.parsers.mlx4_port.Mlx4Port'>
>>> mlx4_port.name
'mlx4_port1'
>>> mlx4_port.contents
['ib']

class insights.parsers.mlx4_port.Mlx4Port(context)[source]

Bases: insights.core.Parser

Parse the contents of the mlx4_port file

parse_content(content)[source]

This method must be implemented by classes based on this class.

ModInfo - Commands modinfo <module_name>

Parsers to parse the output of modinfo <module_name> commands.

ModInfoI40e - Command modinfo i40e

ModInfoVmxnet3 - Command modinfo vmxnet3

ModInfoIgb - Command modinfo igb

ModInfoIxgbe - Command modinfo ixgbe

ModInfoVeth - Command modinfo veth

ModInfoEach - Command modinfo *

for any module listed by lsmod

ModInfoAll - Command modinfo * (all modules)

for all modules listed by lsmod

class insights.parsers.modinfo.ModInfo[source]

Bases: dict

Base class for the information about a kernel module; the module info is stored in dictionary format. In addition, the following utility properties are provided.

classmethod from_content(content)[source]

A classmethod to generate a ModInfo object from the given content list. Two additional keys, module_name and module_deps, are also created from the content.

Raises

SkipException -- When there is no content to parse into the dict.
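
The dictionary layout described above, with repeated keys such as alias, firmware and parm collected into lists, could be built roughly like this (hypothetical sketch, not the library code):

```python
def modinfo_to_dict(content):
    """Fold `modinfo` output lines into a dict, collecting repeated
    keys (alias, firmware, parm, ...) into lists."""
    data = {}
    for line in content:
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if not key:
            continue
        if key in data:
            # Promote a repeated key's value to a list and append.
            if not isinstance(data[key], list):
                data[key] = [data[key]]
            data[key].append(value)
        else:
            data[key] = value
    return data
```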

property module_alias

This will return the list of aliases of this kernel module when set, else [].

Type

(list)

property module_deps

This will return the list of kernel modules that this kernel module depends on when set, else [].

Type

(list)

property module_details

This will return the kernel module details when set.

Type

(dict)

property module_firmware

This will return the list of firmwares used by this module when set, else [].

Type

(list)

property module_name

This will return kernel module name when set, else empty str.

Type

(str)

property module_parm

This will return the list of parms for this kernel module when set, else [].

Type

(list)

property module_path

This will return kernel module path when set, else None.

Type

(str)

property module_signer

This will return the signer of kernel module when set, else empty string.

Type

(str)

property module_version

This will return the kernel module version when set, else empty string.

Type

(str)

class insights.parsers.modinfo.ModInfoAll(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, dict

Class to parse the information about all kernel modules, the module info will be stored in dictionary format.

Sample output:

filename:       /lib/modules/3.10.0-957.10.1.el7.x86_64/kernel/drivers/net/vmxnet3/vmxnet3.ko.xz
version:        1.4.14.0-k
license:        GPL v2
description:    VMware vmxnet3 virtual NIC driver
author:         VMware, Inc.
retpoline:      Y
rhelversion:    7.6
srcversion:     7E672688ACACBDD2E363B63
alias:          pci:v000015ADd000007B0sv*sd*bc*sc*i*
depends:
intree:         Y
vermagic:       3.10.0-957.10.1.el7.x86_64 SMP mod_unload modversions
signer:         Red Hat Enterprise Linux kernel signing key
sig_key:        A5:70:18:DF:B6:C9:D6:1F:CF:CE:0A:3D:02:8B:B3:69:BD:76:CA:ED
sig_hashalgo:   sha256

filename:       /lib/modules/3.10.0-993.el7.x86_64/kernel/drivers/net/ethernet/intel/i40e/i40e.ko.xz
firmware:       i40e/i40e-e2-7.13.1.0.fw
firmware:       i40e/i40e-e1h-7.13.1.0.fw
version:        2.3.2-k
license:        GPL
description:    Intel(R) Ethernet Connection XL710 Network Driver
author:         Intel Corporation, <e1000-devel@lists.sourceforge.net>
retpoline:      Y
rhelversion:    7.7
srcversion:     DC5C250666ADD8603966656
alias:          pci:v00008086d0000158Bsv*sd*bc*sc*i*
alias:          pci:v00008086d0000158Asv*sd*bc*sc*i*
depends:        ptp
intree:         Y
vermagic:       3.10.0-993.el7.x86_64 SMP mod_unload modversions
signer:         Red Hat Enterprise Linux kernel signing key
sig_key:        81:7C:CB:07:72:4E:7F:B8:15:24:10:F9:27:2D:AA:CF:80:3E:CE:59
sig_hashalgo:   sha256
parm:           debug:Debug level (0=none,...,16=all), Debug mask (0x8XXXXXXX) (uint)
parm:           int_mode: Force interrupt mode other than MSI-X (1 INT#x; 2 MSI) (int)

Raises

SkipException -- When there is nothing to parse.

Examples

>>> type(modinfo_all)
<class 'insights.parsers.modinfo.ModInfoAll'>
>>> 'i40e' in modinfo_all
True
>>> modinfo_all['i40e'].module_version
'2.3.2-k'
>>> modinfo_all['i40e'].module_path
'/lib/modules/3.10.0-993.el7.x86_64/kernel/drivers/net/ethernet/intel/i40e/i40e.ko.xz'
>>> sorted(modinfo_all['i40e'].module_firmware)
['i40e/i40e-e1h-7.13.1.0.fw', 'i40e/i40e-e2-7.13.1.0.fw']
>>> sorted(modinfo_all['i40e'].module_alias)
['pci:v00008086d0000158Asv*sd*bc*sc*i*', 'pci:v00008086d0000158Bsv*sd*bc*sc*i*']
>>> sorted(modinfo_all['i40e'].module_parm)
['debug:Debug level (0=none,...,16=all), Debug mask (0x8XXXXXXX) (uint)', 'int_mode: Force interrupt mode other than MSI-X (1 INT#x; 2 MSI) (int)']
>>> 'vmxnet3' in modinfo_all
True

retpoline_y

A set of names of the modules with the attribute “retpoline: Y”.

Type

set

retpoline_n

A set of names of the modules with the attribute “retpoline: N”.

Type

set

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.modinfo.ModInfoEach(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.parsers.modinfo.ModInfo

Parses the output of modinfo %s command, where %s is any of the loaded modules.

Sample output:

filename:       /lib/modules/3.10.0-993.el7.x86_64/kernel/drivers/net/ethernet/intel/i40e/i40e.ko.xz
firmware:       i40e/i40e-e2-7.13.1.0.fw
firmware:       i40e/i40e-e1h-7.13.1.0.fw
version:        2.3.2-k
license:        GPL
description:    Intel(R) Ethernet Connection XL710 Network Driver
author:         Intel Corporation, <e1000-devel@lists.sourceforge.net>
retpoline:      Y
rhelversion:    7.7
srcversion:     DC5C250666ADD8603966656
alias:          pci:v00008086d0000158Bsv*sd*bc*sc*i*
alias:          pci:v00008086d0000158Asv*sd*bc*sc*i*
depends:        ptp
intree:         Y
vermagic:       3.10.0-993.el7.x86_64 SMP mod_unload modversions
signer:         Red Hat Enterprise Linux kernel signing key
sig_key:        81:7C:CB:07:72:4E:7F:B8:15:24:10:F9:27:2D:AA:CF:80:3E:CE:59
sig_hashalgo:   sha256
parm:           debug:Debug level (0=none,...,16=all), Debug mask (0x8XXXXXXX) (uint)
parm:           int_mode: Force interrupt mode other than MSI-X (1 INT#x; 2 MSI) (int)

Raises

SkipException -- When there is nothing to parse.

Examples

>>> type(modinfo_obj)
<class 'insights.parsers.modinfo.ModInfoEach'>
>>> modinfo_obj.module_name
'i40e'
>>> modinfo_obj.module_version
'2.3.2-k'
>>> modinfo_obj.module_path
'/lib/modules/3.10.0-993.el7.x86_64/kernel/drivers/net/ethernet/intel/i40e/i40e.ko.xz'
>>> sorted(modinfo_obj.module_firmware)
['i40e/i40e-e1h-7.13.1.0.fw', 'i40e/i40e-e2-7.13.1.0.fw']
>>> sorted(modinfo_obj.module_alias)
['pci:v00008086d0000158Asv*sd*bc*sc*i*', 'pci:v00008086d0000158Bsv*sd*bc*sc*i*']
>>> sorted(modinfo_obj.module_parm)
['debug:Debug level (0=none,...,16=all), Debug mask (0x8XXXXXXX) (uint)', 'int_mode: Force interrupt mode other than MSI-X (1 INT#x; 2 MSI) (int)']

property data

This will return the kernel module details when set.

Type

(dict)

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.modinfo.ModInfoI40e(context, extra_bad_lines=[])[source]

Bases: insights.parsers.modinfo.ModInfoEach

Parses output of modinfo i40e command. Sample modinfo i40e output:

filename:       /lib/modules/3.10.0-993.el7.x86_64/kernel/drivers/net/ethernet/intel/i40e/i40e.ko.xz
firmware:       i40e/i40e-e2-7.13.1.0.fw
firmware:       i40e/i40e-e1h-7.13.1.0.fw
version:        2.3.2-k
license:        GPL
description:    Intel(R) Ethernet Connection XL710 Network Driver
author:         Intel Corporation, <e1000-devel@lists.sourceforge.net>
retpoline:      Y
rhelversion:    7.7
srcversion:     DC5C250666ADD8603966656
alias:          pci:v00008086d0000158Bsv*sd*bc*sc*i*
alias:          pci:v00008086d0000158Asv*sd*bc*sc*i*
depends:        ptp
intree:         Y
vermagic:       3.10.0-993.el7.x86_64 SMP mod_unload modversions
signer:         Red Hat Enterprise Linux kernel signing key
sig_key:        81:7C:CB:07:72:4E:7F:B8:15:24:10:F9:27:2D:AA:CF:80:3E:CE:59
sig_hashalgo:   sha256
parm:           debug:Debug level (0=none,...,16=all), Debug mask (0x8XXXXXXX) (uint)
parm:           int_mode: Force interrupt mode other than MSI-X (1 INT#x; 2 MSI) (int)

Examples

>>> type(modinfo_i40e)
<class 'insights.parsers.modinfo.ModInfoI40e'>
>>> modinfo_i40e.module_name
'i40e'
>>> modinfo_i40e.module_version
'2.3.2-k'
>>> modinfo_i40e.module_path
'/lib/modules/3.10.0-993.el7.x86_64/kernel/drivers/net/ethernet/intel/i40e/i40e.ko.xz'
>>> 'firmware' in modinfo_i40e
True
>>> sorted(modinfo_i40e.module_firmware) == sorted(['i40e/i40e-e2-7.13.1.0.fw', 'i40e/i40e-e1h-7.13.1.0.fw'])
True
>>> sorted(modinfo_i40e.module_alias) == sorted(['pci:v00008086d0000158Asv*sd*bc*sc*i*', 'pci:v00008086d0000158Bsv*sd*bc*sc*i*'])
True
>>> sorted(modinfo_i40e.module_parm) == sorted(['debug:Debug level (0=none,...,16=all), Debug mask (0x8XXXXXXX) (uint)', 'int_mode: Force interrupt mode other than MSI-X (1 INT#x; 2 MSI) (int)'])
True

class insights.parsers.modinfo.ModInfoIgb(context, extra_bad_lines=[])[source]

Bases: insights.parsers.modinfo.ModInfoEach

Parses output of modinfo igb command. Sample modinfo igb output:

filename:       /lib/modules/3.10.0-327.10.1.el7.jump7.x86_64/kernel/drivers/net/ethernet/intel/igb/igb.ko
version:        5.2.15-k
license:        GPL
description:    Intel(R) Gigabit Ethernet Network Driver
author:         Intel Corporation, <e1000-devel@lists.sourceforge.net>
rhelversion:    7.2
srcversion:     9CF4D446FA2E882F6BA0A17
alias:          pci:v00008086d000010D6sv*sd*bc*sc*i*
depends:        i2c-core,ptp,dca,i2c-algo-bit
intree:         Y
vermagic:       3.10.0-327.10.1.el7.jump7.x86_64 SMP mod_unload modversions
signer:         Red Hat Enterprise Linux kernel signing key
sig_key:        C9:10:C7:BB:C3:C7:10:A1:68:A6:F3:6D:45:22:90:B7:5A:D4:B0:7A
sig_hashalgo:   sha256
parm:           max_vfs:Maximum number of virtual functions to allocate per physical function (uint)
parm:           debug:Debug level (0=none,...,16=all) (int)

Examples

>>> type(modinfo_igb)
<class 'insights.parsers.modinfo.ModInfoIgb'>
>>> modinfo_igb.module_name
'igb'
>>> modinfo_igb.module_version
'5.2.15-k'
>>> modinfo_igb.module_signer
'Red Hat Enterprise Linux kernel signing key'
>>> modinfo_igb.module_alias
'pci:v00008086d000010D6sv*sd*bc*sc*i*'

class insights.parsers.modinfo.ModInfoIxgbe(context, extra_bad_lines=[])[source]

Bases: insights.parsers.modinfo.ModInfoEach

Parses output of modinfo ixgbe command. Sample modinfo ixgbe output:

filename:       /lib/modules/3.10.0-514.6.1.el7.jump3.x86_64/kernel/drivers/net/ethernet/intel/ixgbe/ixgbe.ko
version:        4.4.0-k-rh7.3
license:        GPL
description:    Intel(R) 10 Gigabit PCI Express Network Driver
author:         Intel Corporation, <linux.nics@intel.com>
rhelversion:    7.3
srcversion:     24F0195E8A357701DE1B32E
alias:          pci:v00008086d000015CEsv*sd*bc*sc*i*
depends:        i2c-core,ptp,dca,i2c-algo-bit
intree:         Y
vermagic:       3.10.0-514.6.1.el7.jump3.x86_64 SMP mod_unload modversions
signer:         Red Hat Enterprise Linux kernel signing key
sig_key:        69:10:6E:D5:83:0D:2C:66:97:41:91:7B:0F:57:D4:1D:95:A2:8A:EB
sig_hashalgo:   sha256
parm:           max_vfs:Maximum number of virtual functions to allocate per physical function (uint)
parm:           debug:Debug level (0=none,...,16=all) (int)

Examples

>>> type(modinfo_ixgbe)
<class 'insights.parsers.modinfo.ModInfoIxgbe'>
>>> modinfo_ixgbe.module_name
'ixgbe'
>>> modinfo_ixgbe.module_version
'4.4.0-k-rh7.3'
>>> modinfo_ixgbe.module_signer
'Red Hat Enterprise Linux kernel signing key'
>>> modinfo_ixgbe.module_alias
'pci:v00008086d000015CEsv*sd*bc*sc*i*'

class insights.parsers.modinfo.ModInfoVeth(context, extra_bad_lines=[])[source]

Bases: insights.parsers.modinfo.ModInfoEach

Parses output of modinfo veth command. Sample modinfo veth output:

filename:       /lib/modules/3.10.0-327.el7.x86_64/kernel/drivers/net/veth.ko
alias:          rtnl-link-veth
license:        GPL v2
description:    Virtual Ethernet Tunnel
rhelversion:    7.2
srcversion:     25C6BF3D2F35CAF3A252F12
depends:
intree:         Y
vermagic:       3.10.0-327.el7.x86_64 SMP mod_unload modversions
signer:         Red Hat Enterprise Linux kernel signing key
sig_key:        BC:73:C3:CE:E8:9E:5E:AE:99:4A:E5:0A:0D:B1:F0:FE:E3:FC:09:13
sig_hashalgo:   sha256

Examples

>>> type(modinfo_veth)
<class 'insights.parsers.modinfo.ModInfoVeth'>
>>> modinfo_veth.module_name
'veth'
>>> modinfo_veth.module_signer
'Red Hat Enterprise Linux kernel signing key'

class insights.parsers.modinfo.ModInfoVmxnet3(context, extra_bad_lines=[])[source]

Bases: insights.parsers.modinfo.ModInfoEach

Parses output of modinfo vmxnet3 command. Sample modinfo vmxnet3 output:

filename:       /lib/modules/3.10.0-957.10.1.el7.x86_64/kernel/drivers/net/vmxnet3/vmxnet3.ko.xz
version:        1.4.14.0-k
license:        GPL v2
description:    VMware vmxnet3 virtual NIC driver
author:         VMware, Inc.
retpoline:      Y
rhelversion:    7.6
srcversion:     7E672688ACACBDD2E363B63
alias:          pci:v000015ADd000007B0sv*sd*bc*sc*i*
depends:
intree:         Y
vermagic:       3.10.0-957.10.1.el7.x86_64 SMP mod_unload modversions
signer:         Red Hat Enterprise Linux kernel signing key
sig_key:        A5:70:18:DF:B6:C9:D6:1F:CF:CE:0A:3D:02:8B:B3:69:BD:76:CA:ED
sig_hashalgo:   sha256

Examples

>>> type(modinfo_drv)
<class 'insights.parsers.modinfo.ModInfoVmxnet3'>
>>> modinfo_drv.module_name
'vmxnet3'
>>> modinfo_drv.module_version
'1.4.14.0-k'
>>> modinfo_drv.module_signer
'Red Hat Enterprise Linux kernel signing key'
>>> modinfo_drv.module_alias
'pci:v000015ADd000007B0sv*sd*bc*sc*i*'

Modprobe configuration - files /etc/modprobe.conf and /etc/modprobe.d/*.conf

This parser collects command information from the Modprobe configuration files and stores information about each module mentioned. Comment lines and lines without one of the commands ‘alias’, ‘blacklist’, ‘install’, ‘options’, ‘remove’ or ‘softdep’ are ignored.

Blacklisted modules simply have True as their value in the dictionary. Alias lines list the module last, and these are recorded as a list of aliases. For all other commands the module name (after the command) is used as the key, and the rest of the line is split up and stored as a list. Any lines that don’t parse, either because they’re not long enough or because they don’t start with a valid keyword, are stored in the bad_lines property list.
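
A rough sketch of the storage scheme just described (an assumed simplification of the parser's behaviour, not its actual code):

```python
def parse_modprobe(lines):
    """Store modprobe configuration as described above:
    - blacklist lines map the module name to True;
    - alias lines are keyed on the module (the last word), collecting
      a list of aliases;
    - other commands map the module name to the remaining tokens;
    - unparseable lines are kept in bad_lines."""
    data, bad_lines = {}, []
    commands = {"alias", "blacklist", "install", "options", "remove", "softdep"}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        words = line.split()
        if words[0] not in commands or len(words) < 2:
            bad_lines.append(line)
            continue
        cmd = words[0]
        if cmd == "blacklist":
            data.setdefault(cmd, {})[words[1]] = True
        elif cmd == "alias":
            if len(words) < 3:
                bad_lines.append(line)
                continue
            # alias lines list the module last
            data.setdefault(cmd, {}).setdefault(words[-1], []).append(words[1])
        else:
            data.setdefault(cmd, {})[words[1]] = words[2:]
    return data, bad_lines
```

Note how both alias bond0 bonding and alias bond1 bonding land under the bonding key.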

Sample file /etc/modprobe.conf:

alias scsi_hostadapter2 qla2xxx
alias scsi_hostadapter3 usb-storage
alias net-pf-10 off
alias ipv6 off
alias bond0 bonding
alias bond1 bonding
options bonding max_bonds=2
options bnx2 disable_msi=1

Examples

>>> mconf_list = shared[ModProbe] # A list: multiple files may be found
>>> for mconf in mconf_list:
...     print("File:", mconf.file_name)
...     print("Modules with aliases:", sorted(mconf.data['alias'].keys()))
...     print "Modules with aliases:", sorted(mconf.data['alias'].keys())
File: /etc/modprobe.conf
Modules with aliases: ['bonding', 'off', 'qla2xxx', 'usb-storage']

class insights.parsers.modprobe.ModProbe(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

Parse Modprobe configuration files - /etc/modprobe.conf and files in the /etc/modprobe.d/ directory.

parse_content(content)[source]

This method must be implemented by classes based on this class.

MongodbConf - files /etc/mongod.conf, /etc/mongodb.conf and /etc/opt/rh/rh-mongodb26/mongod.conf

This module contains the following files:

/etc/mongod.conf, /etc/mongodb.conf, /etc/opt/rh/rh-mongodb26/mongod.conf.

They are provided by package mongodb-server or rh-mongodb26-mongodb-server. These MongoDB configuration files may use the YAML format or the standard key-value pair format.

Sample input(YAML format):

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true

Sample input(key-value pair format):

# mongodb.conf - generated from Puppet
#where to log
logpath=/var/log/mongodb/mongodb.log
logappend=true
# Set this option to configure the mongod or mongos process to bind to and
# listen for connections from applications on this address.
# You may concatenate a list of comma separated values to bind mongod to multiple IP addresses.
bind_ip = 127.0.0.1
# fork and run in background
fork=true
dbpath=/var/lib/mongodb
# location of pidfile
pidfilepath=/var/run/mongodb/mongodb.pid
# Enables journaling
journal = true
# Turn on/off security.  Off is currently the default
noauth=true

Examples

>>> mongod_conf1 = shared[MongodbConf]
>>> mongod_conf2 = shared[MongodbConf]
>>> mongod_conf1.is_yaml
True
>>> mongod_conf2.is_yaml
False
>>> mongod_conf1.fork
True
>>> mongod_conf2.fork
'true'
>>> mongod_conf1.dbpath
'/var/lib/mongo'
>>> mongod_conf2.dbpath
'/var/lib/mongodb'
>>> mongod_conf1.get("systemlog", {}).get("logAppend")
True
>>> mongod_conf2.get("logappend")
'true'

class insights.parsers.mongod_conf.MongodbConf(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Parse the /etc/mongod.conf config file in key-value pair or YAML format. Make several frequently used config options as properties.

Raises

ParseException -- Raised when any problem parsing the file content.

is_yaml

True if this is a yaml format file.

Type

boolean

property bindip

Return option value of net.bindIp if a yaml conf and bind_ip if a key-value pair conf.

property dbpath

Return option value of storage.dbPath if a yaml conf and dbPath if a key-value pair conf.

property fork

Return option value of processManagement.fork if a yaml conf and fork if a key-value pair conf.

property logpath

Return option value of systemLog.path if a yaml conf and logpath if a key-value pair conf.

parse_content(content)[source]

This method must be implemented by classes based on this class.

property pidfilepath

Return option value of processManagement.pidFilePath if a yaml conf and pidFilePath if a key-value pair conf.

property port

Return option value of net.port if a yaml conf and port if a key-value pair conf.

property syslog

Return option value of systemLog.destination if a yaml conf, this can be ‘file’ or ‘syslog’. Return value of syslog if a key-value pair conf, ‘true’ means log to syslog. Return None means value is not specified in configuration file.
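
A crude way to tell the two formats apart, similar in spirit to (but not the same as) the parser's own detection, might look like:

```python
def looks_like_yaml(content):
    """Rough format check (assumed heuristic): key-value mongod.conf
    files use 'key = value' lines, while the YAML form uses
    'section:' headers and indented 'key: value' pairs."""
    for line in content:
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue
        # A '=' before any ':' strongly suggests the key-value format.
        if "=" in stripped and ":" not in stripped.split("=")[0]:
            return False
        if stripped.endswith(":") or ": " in stripped:
            return True
    return False
```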

Mount Entries

Parsers provided in this module includes:

Mount - command /bin/mount

ProcMounts - file /proc/mounts

The Mount class implements parsing for the mount command output which looks like:

/dev/mapper/rootvg-rootlv on / type ext4 (rw,relatime,barrier=1,data=ordered)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
/dev/mapper/HostVG-Config on /etc/shadow type ext4 (rw,noatime,seclabel,stripe=256,data=ordered)
dev/sr0 on /run/media/root/VMware Tools type iso9660 (ro,nosuid,nodev,relatime,uid=0,gid=0,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks2) [VMware Tools]

The information is stored as a list of MountEntry objects. Each MountEntry object contains attributes for the following information that are listed in the same order as in the command output:

  • filesystem - Name of filesystem or the mounted device

  • mount_point - Name of mount point for filesystem

  • mount_type - Name of filesystem type

  • mount_options - Mount options as MountOpts object

  • mount_label - Optional label of this mount entry, empty string by default

  • mount_clause - Full raw string from command output

The MountOpts class contains the mount options as attributes accessible via the attribute name as it appears in the command output. For instance, the options (rw,dmode=0500) may be accessed as mnt_row_info.rw with the value True and mnt_row_info.dmode with the value “0500”. The in operator may be used to determine if an option is present.
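
The attribute-plus-in behaviour can be imitated with a small class (a hypothetical sketch, not the MountOpts implementation):

```python
class AttrOpts(object):
    """Options object with attribute access and `in` support,
    mirroring the MountOpts behaviour described above."""

    def __init__(self, options):
        # e.g. "rw,dmode=0500" -> {"rw": True, "dmode": "0500"}
        self._data = {}
        for opt in options.split(","):
            name, sep, value = opt.partition("=")
            self._data[name] = value if sep else True

    def __getattr__(self, name):
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

    def __contains__(self, name):
        return name in self._data

opts = AttrOpts("rw,dmode=0500")
print(opts.rw, opts.dmode, "ro" in opts)  # True 0500 False
```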

MountEntry lines are also available in a mounts property, keyed on the mount point.

class insights.parsers.mount.Mount(context, extra_bad_lines=[])[source]

Bases: insights.parsers.mount.MountedFileSystems

Class of information for all output from mount command.

Note

Please refer to its super-class MountedFileSystems for more details.

The typical output of mount command looks like:

/dev/mapper/rootvg-rootlv on / type ext4 (rw,relatime,barrier=1,data=ordered)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
/dev/mapper/HostVG-Config on /etc/shadow type ext4 (rw,noatime,seclabel,stripe=256,data=ordered)
dev/sr0 on /run/media/root/VMware Tools type iso9660 (ro,nosuid,nodev,relatime,uid=0,gid=0,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks2) [VMware Tools]

Examples

>>> type(mnt_info)
<class 'insights.parsers.mount.Mount'>
>>> len(mnt_info)
4
>>> mnt_info[3].filesystem
'dev/sr0'
>>> mnt_info[3].mount_label
'[VMware Tools]'
>>> mnt_info[3].mount_type
'iso9660'
>>> 'ro' in mnt_info[3].mount_options
True
>>> mnt_info['/run/media/root/VMware Tools'].filesystem
'dev/sr0'
>>> mnt_info['/run/media/root/VMware Tools'].mount_label
'[VMware Tools]'
>>> mnt_info['/run/media/root/VMware Tools'].mount_options.ro
True

class insights.parsers.mount.MountEntry(data=None)[source]

Bases: insights.parsers.mount.AttributeAsDict

An object representing a mount entry of the mount command or the /proc/mounts file. Each entry contains the following fixed attributes:

filesystem

Name of filesystem of mounted device

Type

str

mount_point

Name of mount point for filesystem

Type

str

mount_type

Name of filesystem type

Type

str

mount_options

Mount options as MountOpts

Type

MountOpts

mount_label

Optional label of this mount entry, an empty string by default

Type

str

mount_clause

Full raw string from command output

Type

str

class insights.parsers.mount.MountOpts(data=None)[source]

Bases: insights.parsers.mount.AttributeAsDict

An object representing the mount options found in mount or fstab entry as attributes accessible via the attribute name as it appears in the command output. For instance, the options (rw,dmode=0500) may be accessed as mnt_row_info.rw with the value True and mnt_row_info.dmode with the value “0500”.

The in operator may be used to determine if an option is present.

class insights.parsers.mount.MountedFileSystems(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Base Class for Mount and ProcMounts.

rows

List of MountEntry objects for each row of the content.

Type

list

mounts

Dict with the mount_point as the key and the MountEntry objects as the value.

Type

dict

Raises

get_dir(path)[source]

This finds the most specific mount path that contains the given path, by successively removing the directory or file name on the end of the path and seeing if that is a mount point. This will always terminate since / is always a mount point. Strings that are not absolute paths will return None.

Parameters

path (str) -- The path to check.

Returns

The mount point that contains the given path.

Return type

MountEntry
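
The successive-trimming search described for get_dir can be sketched as a standalone function over a dict keyed on mount point (hypothetical helper, not the library code):

```python
import os

def get_dir(mounts, path):
    """Find the most specific mount point containing `path` by
    trimming path components until one matches a key in `mounts`.
    Non-absolute paths return None; the loop terminates because
    '/' is always reached eventually."""
    if not os.path.isabs(path):
        return None
    while True:
        if path in mounts:
            return mounts[path]
        if path == "/":
            return None
        path = os.path.dirname(path)

mounts = {"/": "rootfs-entry", "/boot": "boot-entry"}
print(get_dir(mounts, "/boot/grub2/grub.cfg"))  # boot-entry
```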

parse_content(content)[source]

This method must be implemented by classes based on this class.

search(**kwargs)[source]

Returns a list of the mounts (in order) matching the given criteria. Keys are searched for directly - see the insights.parsers.keyword_search() utility function for more details. If no search parameters are given, no rows are returned.

Examples

>>> mounts.search(filesystem='proc')[0].mount_clause
'proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)'
>>> mounts.search(mount_options__contains='seclabel')[0].mount_clause
'/dev/mapper/HostVG-Config on /etc/shadow type ext4 (rw,noatime,seclabel,stripe=256,data=ordered)'

Parameters

**kwargs (dict) -- Dictionary of key-value pairs to search for.

Returns

The list of mount points matching the given criteria.

Return type

(list)

class insights.parsers.mount.ProcMounts(context, extra_bad_lines=[])[source]

Bases: insights.parsers.mount.MountedFileSystems

Class to parse the content of /proc/mounts file.

This class is required in addition to the parser for the /bin/mount command because /proc/mounts lists mount points of processes that are not present in the output of the /bin/mount command.

Note

Please refer to its super-class MountedFileSystems for more details.

The typical content of /proc/mounts file looks like:

/dev/mapper/rootvg-rootlv / ext4 rw,relatime,barrier=1,data=ordered 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
/dev/mapper/HostVG-Config /etc/shadow ext4 rw,noatime,seclabel,stripe=256,data=ordered 0 0
dev/sr0 /run/media/root/VMware Tools iso9660 ro,nosuid,nodev,relatime,uid=0,gid=0,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks2 0 0

Examples

>>> type(proc_mnt_info)
<class 'insights.parsers.mount.ProcMounts'>
>>> len(proc_mnt_info)
4
>>> proc_mnt_info[3].filesystem == 'dev/sr0'
True
>>> proc_mnt_info[3].mounted_device == 'dev/sr0'
True
>>> proc_mnt_info[3].mounted_device == proc_mnt_info[3].filesystem
True
>>> proc_mnt_info[3].mount_type == 'iso9660'
True
>>> proc_mnt_info[3].filesystem_type == 'iso9660'
True
>>> proc_mnt_info['/run/media/root/VMware Tools'].mount_label == ['0', '0']
True
>>> proc_mnt_info['/run/media/root/VMware Tools'].mount_options.ro
True
>>> proc_mnt_info['/run/media/root/VMware Tools'].mounted_device == 'dev/sr0'
True
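A subtlety visible in the sample above is that a mount point may itself contain a space ('VMware Tools'), so a naive line.split() mis-parses that row. A minimal sketch of splitting one /proc/mounts row (hypothetical helper, not the parser's actual code):

```python
def parse_proc_mounts_line(line):
    """Split one /proc/mounts row into its six fields. The mount point may
    contain spaces, so peel fields off both ends instead of splitting on
    every space."""
    device, rest = line.split(' ', 1)
    rest, dump, passno = rest.rsplit(' ', 2)       # two trailing label fields
    mount_point, fs_type, options = rest.rsplit(' ', 2)
    return {
        'mounted_device': device,
        'mount_point': mount_point,
        'filesystem_type': fs_type,
        'mount_options': options.split(','),
        'mount_label': [dump, passno],
    }
```

Applied to the 'dev/sr0' sample row, this keeps '/run/media/root/VMware Tools' intact as the mount point.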

Microsoft SQL Server Database Engine configuration - file /var/opt/mssql/mssql.conf

The Microsoft SQL Server configuration file is a standard ‘.ini’ file, and the IniConfigFile class is used to read it.

Sample configuration:

[sqlagent]
enabled = false

[EULA]
accepteula = Y

[memory]
memorylimitmb = 3328

Examples

>>> conf.has_option('memory', 'memorylimitmb')
True
>>> conf.get('memory', 'memorylimitmb') == '3328'
True
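The same lookups can be reproduced with the standard library's configparser module, whose interface IniConfigFile-based parsers broadly mirror (a sketch for illustration, not the Insights implementation):

```python
import configparser

# The sample configuration shown above.
sample = """
[sqlagent]
enabled = false

[EULA]
accepteula = Y

[memory]
memorylimitmb = 3328
"""

conf = configparser.ConfigParser()
conf.read_string(sample)
assert conf.has_option('memory', 'memorylimitmb')
assert conf.get('memory', 'memorylimitmb') == '3328'
```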
class insights.parsers.mssql_conf.MsSQLConf(context)[source]

Bases: insights.core.IniConfigFile

Microsoft SQL Server Database Engine configuration parser class, based on the IniConfigFile class.

MulticastQuerier - command find /sys/devices/virtual/net/ -name multicast_querier -print -exec cat {} \;

This module provides processing for the output of the find -name multicast_querier ... command.

Sample output of this command looks like:

/sys/devices/virtual/net/br0/bridge/multicast_querier
0
/sys/devices/virtual/net/br1/bridge/multicast_querier
1
/sys/devices/virtual/net/br2/bridge/multicast_querier
0

The bri_val property returns a dictionary of each bridge interface and its multicast_querier value as the parsing result:

{'br0': 0, 'br1': 1, 'br2': 0}

Examples

>>> multicast_querier_content = '''
... /sys/devices/virtual/net/br0/bridge/multicast_querier
... 0
... /sys/devices/virtual/net/br1/bridge/multicast_querier
... 1
... /sys/devices/virtual/net/br2/bridge/multicast_querier
... 0
... '''.strip()
>>> from insights.tests import context_wrap
>>> from insights.parsers.multicast_querier import MulticastQuerier
>>> shared = {MulticastQuerier: MulticastQuerier(context_wrap(multicast_querier_content))}
>>> mq_results = MulticastQuerier(context_wrap(multicast_querier_content))
>>> mq_results.bri_val
{'br0': 0, 'br1': 1, 'br2': 0}
class insights.parsers.multicast_querier.MulticastQuerier(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse the output of the command:

find /sys/devices/virtual/net/ -name multicast_querier -print -exec cat {} \;

Get a dictionary of “bridge interface” and the value of the parameter “multicast_querier”

parse_content(content)[source]

This method must be implemented by classes based on this class.
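The find output above alternates a path line with the value line printed by cat, so the parse can be sketched as pairing consecutive lines (a hypothetical helper for illustration, not the class's actual parse_content):

```python
def parse_multicast_querier(lines):
    """Pair each printed multicast_querier path with the value cat'ed on
    the following line; key the result by the bridge name in the path."""
    result = {}
    lines = iter(lines)
    for path in lines:
        value = int(next(lines))
        # /sys/devices/virtual/net/<bridge>/bridge/multicast_querier
        bridge = path.split('/')[5]
        result[bridge] = value
    return result
```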

multipath.conf file content

The base class is the MultipathConfParser class, which reads the multipath daemon’s /etc/multipath.conf configuration file. This is in a pseudo-JSON format.

MultipathConf - file /etc/multipath.conf

MultipathConfInitramfs - command lsinitrd -f /etc/multipath.conf

class insights.parsers.multipath_conf.MultipathConf(context)[source]

Bases: insights.parsers.multipath_conf.MultipathConfParser

Parser for the file /etc/multipath.conf.

Examples

>>> conf = shared[MultipathConf]
>>> conf.data['blacklist']['devnode']  # Access via data property
'^hd[a-z]'
>>> conf['defaults']['user_friendly_names']  # Pseudo-dict access
'yes'
>>> len(conf['multipaths'])
2
>>> conf['multipaths'][0]['alias']
'yellow'
class insights.parsers.multipath_conf.MultipathConfInitramfs(context)[source]

Bases: insights.parsers.multipath_conf.MultipathConfParser

Parser for the output of lsinitrd -f /etc/multipath.conf applied to /boot/initramfs-<kernel-version>.img.

Examples

>>> conf = shared[MultipathConfInitramfs]
>>> conf.data['blacklist']['devnode']  # Access via data property
'^hd[a-z]'
>>> conf['defaults']['user_friendly_names']  # Pseudo-dict access
'yes'
>>> len(conf['multipaths'])
2
>>> conf['multipaths'][0]['alias']
'yellow'
class insights.parsers.multipath_conf.MultipathConfParser(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Shared parser for the file /etc/multipath.conf and output of lsinitrd -f /etc/multipath.conf applied to /boot/initramfs-<kernel-version>.img.

Returns a dict whose keys are the names of the sections in the multipath configuration file. If a section has subsections, its value is a list of dictionaries of parameter keys and values; otherwise the value is a single dictionary.

Configuration File Example:

defaults {
       path_selector           "round-robin 0"
       user_friendly_names      yes
}

multipaths {
       multipath {
               alias                   yellow
               path_grouping_policy    multibus
      }
       multipath {
               wwid                    1DEC_____321816758474
               alias                   red
      }
}

devices {
       device {
               path_selector           "round-robin 0"
               no_path_retry            queue
      }
       device {
               vendor                  1DEC_____321816758474
               path_grouping_policy    red
      }
}

blacklist {
      wwid 26353900f02796769
      devnode "^hd[a-z]"
}

Parse Result:

data = {
  "blacklist": {
    "devnode": "^hd[a-z]",
    "wwid": "26353900f02796769"
  },
  "devices": [
    {
      "path_selector": "round-robin 0",
      "no_path_retry": "queue"
    },
    {
      "path_grouping_policy": "red",
      "vendor": "1DEC_____321816758474"
    }
  ],
  "defaults": {
    "path_selector": "round-robin 0",
    "user_friendly_names": "yes"
  },
  "multipaths": [
    {
      "alias": "yellow",
      "path_grouping_policy": "multibus"
    },
    {
      "alias": "red",
      "wwid": "1DEC_____321816758474"
    }
  ]
}
parse_content(content)[source]

This method must be implemented by classes based on this class.
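The brace-delimited format above can be parsed with a small recursive-descent sketch. This is an illustrative stand-in for MultipathConfParser, not its actual code; it follows the rule stated above: a section made of repeated, identically named sub-blocks (multipath, device) collapses to a list of those sub-blocks, while flat sections stay dictionaries. Comments are not handled here.

```python
import re

def parse_multipath_conf(text):
    """Sketch: parse multipath.conf's pseudo-JSON format into a dict."""
    # Tokenize into quoted strings, braces, and bare words.
    tokens = re.findall(r'"[^"]*"|\{|\}|\S+', text)
    pos = 0

    def parse_block():
        nonlocal pos
        entries = []
        while pos < len(tokens) and tokens[pos] != '}':
            key = tokens[pos]
            pos += 1
            if pos < len(tokens) and tokens[pos] == '{':
                pos += 1                       # consume '{'
                entries.append((key, parse_block()))
                pos += 1                       # consume '}'
            else:
                entries.append((key, tokens[pos].strip('"')))
                pos += 1
        keys = [k for k, _ in entries]
        # Repeated sub-blocks under one name become a list of their dicts.
        if len(keys) > 1 and len(set(keys)) == 1 and all(
                isinstance(v, (dict, list)) for _, v in entries):
            return [v for _, v in entries]
        return dict(entries)

    return parse_block()
```

On the configuration file example above, this yields the same shape as the documented parse result: 'defaults' and 'blacklist' as dicts, 'multipaths' and 'devices' as lists.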

class insights.parsers.multipath_conf.MultipathConfTree(context)[source]

Bases: insights.core.ConfigParser

Exposes multipath configuration through the parsr query interface.

See the insights.core.ConfigComponent class for example usage.

class insights.parsers.multipath_conf.MultipathConfTreeInitramfs(context)[source]

Bases: insights.core.ConfigParser

Exposes the multipath configuration from initramfs image through the parsr query interface.

See the insights.core.ConfigComponent class for example usage.

insights.parsers.multipath_conf.get_tree(root=None)[source]

This is a helper function to get a multipath configuration component for your local machine or an archive. It’s for use in interactive sessions.

insights.parsers.multipath_conf.get_tree_from_initramfs(root=None)[source]

This is a helper function to get a multipath configuration (from the initramfs image) component for your local machine or an archive. It’s for use in interactive sessions.

MultipathDevices - command multipath -v4 -ll

This module parses the output of the multipath -v4 -ll command and stores the data for each multipath device found.

Examples

>>> type(mpaths)
<class 'insights.parsers.multipath_v4_ll.MultipathDevices'>
>>> len(mpaths)  # Can treat the object as a list to iterate through
3
>>> mpaths[0]['alias']
'mpathg'
>>> mpaths[0]['size']
'54T'
>>> mpaths[0]['dm_name']
'dm-2'
>>> mpaths[0]['wwid']
'36f01faf000da360b0000033c528fea6d'
>>> groups = mpaths[0]['path_group']  # List of path groups for this device
>>> groups[0]['status']
'active'
>>> len(groups[0]['path'])
4
>>> path0 = groups[0]['path'][0]  # Each path group has an array of paths
>>> path0[1]  # Paths are stored as a list of items
'sdc'
>>> path0[-1]
'running'
>>> mpaths.dms  # List of device names found
['dm-2', 'dm-4', 'dm-5']
>>> mpaths.by_dm['dm-2']['alias']  # Access by device name
'mpathg'
>>> mpaths.aliases  # Aliases found (again, in order)
['mpathg', 'mpathe']
>>> mpaths.by_alias['mpathg']['dm_name']  # Access by alias
'dm-2'
>>> mpaths.by_wwid['36f01faf000da360b0000033c528fea6d']['dm_name']
'dm-2'
class insights.parsers.multipath_v4_ll.MultipathDevices(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

multipath -v4 -ll command output

Example input:

===== paths list =====
uuid hcil    dev dev_t pri dm_st chk_st vend/prod/rev       dev_st
     0:0:0:0 sda 8:0   -1  undef ready  VMware,Virtual disk running
     3:0:0:1 sdb 8:16  -1  undef ready  IET,VIRTUAL-DISK    running
     4:0:0:1 sdc 8:32  -1  undef ready  IET,VIRTUAL-DISK    running
Oct 28 14:02:44 | *word = 0, len = 1
Oct 28 14:02:44 | *word = E, len = 1
Oct 28 14:02:44 | *word = 1, len = 1
Oct 28 14:02:44 | *word = 0, len = 1
Oct 28 14:02:44 | *word = A, len = 1
Oct 28 14:02:44 | *word = 0, len = 1
mpathg (36f01faf000da360b0000033c528fea6d) dm-2 DELL,MD36xxi
size=54T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| |- 12:0:0:1 sdc 8:32   active ready running
| |- 11:0:0:1 sdi 8:128  active ready running
| |- 15:0:0:1 sdo 8:224  active ready running
| `- 17:0:0:1 sdv 65:80  active ready running
`-+- policy='round-robin 0' prio=0 status=enabled
  |- 13:0:0:1 sdf 8:80   active ready running
  |- 14:0:0:1 sdl 8:176  active ready running
  |- 16:0:0:1 sdr 65:16  active ready running
  `- 18:0:0:1 sdx 65:112 active ready running
mpathe (36f01faf000da3761000004323aa6fbce) dm-4 DELL,MD36xxi
size=54T features='3 queue_if_no_path pg_init_retries 55' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| |- 13:0:0:2 sdg 8:96   active faulty running
| |- 14:0:0:2 sdm 8:192  active faulty running
| |- 16:0:0:2 sds 65:32  active faulty running
| `- 18:0:0:2 sdy 65:128 active faulty running
`-+- policy='round-robin 0' prio=0 status=enabled
  |- 12:0:0:2 sdd 8:48   active faulty running
  |- 11:0:0:2 sdj 8:144  active faulty running
  |- 15:0:0:2 sdp 8:240  active faulty running
  `- 17:0:0:2 sdw 65:96  active faulty running
36001405b1629f80d52a4c898f8856e43 dm-5 LIO-ORG ,block0_sdb
size=2.0G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 3:0:0:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 4:0:0:0 sdb 8:16 active ready running

Example data structure produced:

devices = [
  {
    "alias": "mpathg",
    "wwid": "36f01faf000da360b0000033c528fea6d",
    "dm_name": "dm-2",
    "venprod": "DELL,MD36xxi",
    "size": "54T",
    "features": "3 queue_if_no_path pg_init_retries 50",
    "hwhandler": "1 rdac",
    "wp": "rw",
    "path_group": [
         {
            "policy": "round-robin 0",
            "prio": "0"
            "status": "active"
            "path": [
                ['12:0:0:1', 'sdc', '8:32', 'active', 'ready', 'running'],
                ['11:0:0:1', 'sdi', '8:128', 'active', 'ready', 'running'],
                ['15:0:0:1', 'sdo', '8:224', 'active', 'ready', 'running'],
                ['17:0:0:1', 'sdv', '65:80', 'active', 'ready', 'running']
            ]
         }, {
            "policy": "round-robin 0",
            "prio": "0"
            "status": "enabled"
            "path": [
                ['13:0:0:1', 'sdf', '8:80', 'active', 'ready', 'running'],
                ['14:0:0:1', 'sdl', '8:176', 'active', 'ready', 'running'],
                ['16:0:0:1', 'sdr', '65:16', 'active', 'ready', 'running'],
                ['18:0:0:1', 'sdx', '65:112','active', 'ready', 'running']
            ]
        }
    ]
  },...
]

raw_info_lines = [
    "===== paths list =====",
    "uuid hcil    dev dev_t pri dm_st chk_st vend/prod/rev       dev_st",
    "     0:0:0:0 sda 8:0   -1  undef ready  VMware,Virtual disk running",
    "     3:0:0:1 sdb 8:16  -1  undef ready  IET,VIRTUAL-DISK    running",
    "     4:0:0:1 sdc 8:32  -1  undef ready  IET,VIRTUAL-DISK    running",
    "Oct 28 14:02:44 | *word = 0, len = 1",
    ...
]
devices

List of devices found, in order

Type

list

dms

Device mapper names of each device, in order found

Type

list

aliases

Alias of each device, in order found

Type

list

wwids

World Wide ID

Type

list

by_dm

Access to each device by device mapper name

Type

dict

by_alias

Access to each device by alias

Type

dict

by_wwid

Access to each device by World Wide ID

Type

dict

raw_info_lines

List of raw info lines found, in order

Type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.

insights.parsers.multipath_v4_ll.get_multipath_v4_ll(context)[source]

Warning

Deprecated parser, please use MultipathDevices instead.

MysqlLog - file /var/log/mysqld.log

Module for parsing the MySQL log files, including /var/log/mysql/mysqld.log, /var/log/mysqld.log and /var/opt/rh/rh-mysql*/log/mysql/mysqld.log.

Note

By default, logrotate rotates mysqld.log daily. Some log lines have no timestamp.

Typical content of the mysqld.log file is:

2018-03-13T06:37:39.651387Z 0 [Warning] InnoDB: New log files created, LSN=45790
2018-03-13T06:37:39.719166Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2018-03-13T06:37:39.784406Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 0
698a7d6-2689-11e8-8944-0800274ac5ef.
2018-03-13T06:37:39.789636Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2018-03-13T06:37:40.498084Z 0 [Warning] CA certificate ca.pem is self signed.
2018-03-13T06:37:41.080591Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
md5_dgst.c(80): OpenSSL internal error, assertion failed: Digest MD5 forbidden in FIPS mode!
06:37:41 UTC - mysqld got signal 6 ;
2018-03-13T07:43:31.450772Z 0 [Note] Event Scheduler: Loaded 0 events
2018-03-13T07:43:31.450988Z 0 [Note] /opt/rh/rh-mysql57/root/usr/libexec/mysqld: ready for connections.
Version: '5.7.16'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

Examples

>>> my = MysqlLog(context_wrap(MYSQLLOG))
>>> my.get('mysqld')[0]['raw_message']
'06:37:41 UTC - mysqld got signal 6 ;'
>>> 'ready for connections' in my
True
class insights.parsers.mysql_log.MysqlLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing /var/log/mysqld.log file.

Note

Please refer to its super-class insights.core.LogFileOutput

mysqladmin - Commands

Parsing and extracting data from the output of the /bin/mysqladmin commands. Parsers contained in this module are:

MysqladminStatus - command /bin/mysqladmin status

MysqladminVars - command /bin/mysqladmin variables

class insights.parsers.mysqladmin.MysqladminStatus(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Module for parsing the output of the mysqladmin status command.

Typical output looks like:

Uptime: 1103965 Threads: 1820 Questions: 44778091 Slow queries: 0 Opens: 1919 Flush tables: 1 Open tables: 592 Queries per second avg: 40.561

Examples

>>> result.status['Uptime'] == '1103965'
True
>>> result.status['Threads']
'1820'
>>> result.status['Queries per second avg'] == '1919'
False
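The one-line status output above contains 'Key: value' pairs whose keys may themselves contain spaces ('Slow queries', 'Queries per second avg'), which is why a plain split() is not enough. A minimal regex sketch (hypothetical helper, not the parser's code):

```python
import re

def parse_mysqladmin_status(line):
    """Split the single-line status output into 'Key: value' pairs;
    keys may contain spaces, so match lazily up to each colon."""
    return dict(re.findall(r'([A-Za-z][A-Za-z ]*?):\s+(\S+)', line))
```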
grouper(iterable, n, fillvalue=None)[source]

Collect data into fixed-length chunks or blocks

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.mysqladmin.MysqladminVars(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

The output of the /bin/mysqladmin variables command is in MySQL table format, with two columns: ‘Variable_name’ and ‘Value’. This parser parses the table and sets each variable as a class attribute. Unparsable lines are stored in the bad_lines property list.

Example

>>> output.get('version')
'5.5.56-MariaDB'
>>> 'datadir' in output
True
>>> output.get('what', '233')
'233'
>>> output.getint('aria_block_size')
8192
getint(keyword, default=None)[source]

Get value for specified keyword, use default if keyword not found.

Example

>>> output.getint('wait_timeout')
28800
>>> output.getint('wait_what', 100)
100
Parameters
  • keyword (str) -- Key to get from self.data.

  • default (int) -- Default value to return if key is not present.

Returns

Int value of the stored item, or the default if not found.

Return type

value (int)

parse_content(content)[source]

Parse the table output of the /bin/mysqladmin variables command and set each variable as a class attribute.

NetworkNamespace - command /bin/ls /var/run/netns

This spec provides the list of network namespaces created on the host machine.

Typical output of this command is as below:

temp_netns  temp_netns_2  temp_netns_3

The /bin/ls /var/run/netns command is preferred over /bin/ip netns list because it works on all RHEL versions, regardless of whether the ip package is installed.

Examples

>>> type(netns_obj)
<class 'insights.parsers.net_namespace.NetworkNamespace'>
>>> netns_obj.netns_list
['temp_netns', 'temp_netns_2', 'temp_netns_3']
>>> len(netns_obj.netns_list)
3
class insights.parsers.net_namespace.NetworkNamespace(context)[source]

Bases: insights.core.Parser

property netns_list

This property returns the list of network namespaces created on the host machine.

Returns

list of network namespaces, if any exist.

parse_content(content)[source]

This method must be implemented by classes based on this class.

NetConsole - file /etc/sysconfig/netconsole

This parser reads the /etc/sysconfig/netconsole file. It uses the SysconfigOptions parser class to convert the file into a dictionary of options.

Sample data:

# This is the configuration file for the netconsole service.  By starting
# this service you allow a remote syslog daemon to record console output
# from this system.

# The local port number that the netconsole module will use
LOCALPORT=6666

Examples

>>> config = shared[NetConsole]
>>> 'LOCALPORT' in config.data
True
>>> 'DEV' in config # Direct access to options
False
class insights.parsers.netconsole.NetConsole(*args, **kwargs)[source]

Bases: insights.core.SysconfigOptions, insights.core.LegacyItemAccess

Warning

This parser is deprecated, please use insights.parsers.sysconfig.NetconsoleSysconfig instead.

Contents of the /etc/sysconfig/netconsole file. Uses the SysconfigOptions shared parser class.

netstat and ss - Commands

Shared parsers for parsing and extracting data from variations of the netstat and ss commands. Parsers contained in this module are:

NetstatS - command netstat -s

NetstatAGN - command netstat -agn

Netstat - command netstat -neopa

Netstat_I - command netstat -i

SsTULPN - command ss -tulpn

SsTUPNA - command ss -tupna

ProcNsat - file /proc/net/netstat

insights.parsers.netstat.ACTIVE_INTERNET_CONNECTIONS = 'Active Internet connections (servers and established)'

The key in Netstat data to internet connection information

Type

str

insights.parsers.netstat.ACTIVE_UNIX_DOMAIN_SOCKETS = 'Active UNIX domain sockets (servers and established)'

The key in Netstat data to UNIX domain socket information

Type

str

class insights.parsers.netstat.Netstat(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parsing the /bin/netstat -neopa command output.

Example output:

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name     Timer
tcp        0      0 0.0.0.0:5672            0.0.0.0:*               LISTEN      996        19422      1279/qpidd           off (0.00/0/0)
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      184        20380      2007/mongod          off (0.00/0/0)
tcp        0      0 127.0.0.1:53644         0.0.0.0:*               LISTEN      995        1154674    12387/Passenger Rac  off (0.00/0/0)
tcp        0      0 0.0.0.0:5646            0.0.0.0:*               LISTEN      991        20182      1272/qdrouterd       off (0.00/0/0)
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   PID/Program name     Path
unix  2      [ ]         DGRAM                    11776    1/systemd            /run/systemd/shutdownd
unix  2      [ ACC ]     STREAM     LISTENING     535      1/systemd            /run/lvm/lvmetad.socket
unix  2      [ ACC ]     STREAM     LISTENING     16411    738/NetworkManager   /var/run/NetworkManager/private

The following attributes are all keyed on the header as it appears complete in the input - e.g. active connections are stored by the key ‘Active Internet connections (servers and established)’. For convenience, these two keys are stored in this module under the constant names:

  • ACTIVE_INTERNET_CONNECTIONS

  • ACTIVE_UNIX_DOMAIN_SOCKETS

Access to the data in this class is using the following attributes:

data

Keyed as above, each item is a dictionary of lists, corresponding to a column and row lookup from the table data. For example, the first line’s State is [‘State’][0]

Type

dict

datalist

Keyed as above, each item is a list of dictionaries corresponding to a row and column lookup from the table. For example, the first line’s State is [0][‘State’]

Type

dict

lines

Keyed as above, each item is a list of the original line of data from the input, in the same order that the data appears in the datalist attribute’s list.

Type

dict

The keys in the data dictionary and each element of the datalist lists are the same as the headers in the table (e.g. Proto, Recv-Q, etc for ‘Active Internet connections (servers and established)’ and Proto, RefCnt, Flags, etc. for ‘Active UNIX domain sockets (servers and established)’). The datalist row dictionaries also have the following keys:

  • Local IP - (for internet connections) the address portion of the ‘Local Address’ field.

  • Port - (for internet connections) the port portion of the ‘Local Address’ field.

  • PID - the process ID from the ‘PID/Program name’ field.

  • Program name - the program name from the ‘PID/Program name’ field.
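The derivation of those extra keys from the raw fields can be sketched as follows (hypothetical helper for illustration, not the parser's code):

```python
def split_netstat_fields(local_address, pid_program):
    """Derive the extra datalist keys from the raw netstat fields.
    rpartition splits on the LAST ':' so addresses that contain colons
    (e.g. IPv6) keep their address part intact."""
    addr, _, port = local_address.rpartition(':')
    pid, _, name = pid_program.partition('/')
    return {'Local IP': addr, 'Port': port, 'PID': pid, 'Program name': name}
```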

Examples

>>> type(ns)
<class 'insights.parsers.netstat.Netstat'>
>>> sorted(ns.data.keys())  # Both tables stored in dictionary by name
['Active Internet connections (servers and established)', 'Active UNIX domain sockets (servers and established)']
>>> intcons = 'Active Internet connections (servers and established)'
>>> sorted(ns.data[intcons].keys())  # Data stored by column:
['Foreign Address', 'Inode', 'Local Address', 'PID/Program name', 'Proto', 'Recv-Q', 'Send-Q', 'State', 'Timer', 'User']
>>> ns.data[intcons]['Local Address'][1]  # ... and then by row
'127.0.0.1:27017'
>>> ns.datalist[intcons][1]['Local Address']  # Data in a list by row then column
'127.0.0.1:27017'
>>> ns.lines[intcons][1]  # The raw line
'tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      184        20380      2007/mongod          off (0.00/0/0)'
>>> ns.get_original_line(intcons, 1)  # Alternative way of getting line
'tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      184        20380      2007/mongod          off (0.00/0/0)'
>>> 'qpidd' in ns.running_processes  # All running processes on internet ports
True
>>> 'systemd' in ns.running_processes  # Does not look at UNIX sockets
False
>>> pids = ns.listening_pid  # All PIDs listening on internet ports, with info
>>> sorted(pids.keys())  # Note: keys are strings
['12387', '1272', '1279', '2007']
>>> pids['12387']['addr']
'127.0.0.1'
>>> pids['12387']['port']
'53644'
>>> pids['12387']['name']
'Passenger Rac'
>>> datagrams = ns.search(Type='DGRAM')  # List of data row dictionaries
>>> len(datagrams)
1
>>> datagrams[0]['RefCnt']
'unix  2'
>>> datagrams[0]['Flags']
'[ ]'
>>> datagrams[0]['Type']
'DGRAM'
>>> datagrams[0]['State']
''
>>> datagrams[0]['I-Node']
'11776'
>>> datagrams[0]['PID/Program name']
'1/systemd'
>>> datagrams[0]['Path']
'/run/systemd/shutdownd'
get_original_line(section_id, index)[source]

Get the original netstat line, stripped of leading and trailing whitespace

property listening_pid

Find PIDs of all LISTEN processes

Returns

If any are found, they are returned in a dictionary following the format:

{'pid': {'addr': ip_address, 'port': port, 'name': process_name}}

Return type

dict

parse_content(content)[source]

This method must be implemented by classes based on this class.

property running_processes

List all the running processes given in the Active Internet Connections part of the netstat output.

Returns

set of process names (with spaces, as given in netstat output)

Return type

set

search(**kwargs)[source]

Search for rows in the data matching keywords in the search.

This method searches both the active internet connections and active UNIX domain sockets. If you only want to search one, specify the name via the search_list keyword, e.g.:

from insights.parsers import Netstat, ACTIVE_UNIX_DOMAIN_SOCKETS
conns.search(search_list=[ACTIVE_UNIX_DOMAIN_SOCKETS], State='LISTEN')

The search_list can be either a list, or a string, containing one of the named constants defined in this module. If search_list is not given, both the active internet connections and active UNIX domain sockets are searched, in that order.

The results of the search are compiled into one list. This allows you to search for all listening processes, whether for internet connections or UNIX sockets, by e.g.:

conns.search(State__contains='LISTEN')

This method uses the insights.parsers.keyword_search() function - see its documentation for a complete description of its keyword recognition capabilities.

class insights.parsers.netstat.NetstatAGN(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse the netstat -agn command to get interface multicast information.

Sample command output:

IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      224.0.0.1
eth0            1      224.0.0.1
lo              3      ff02::1
eth0            4      ff02::1
eth0            1      ff01::1

Examples

>>> type(multicast)
<class 'insights.parsers.netstat.NetstatAGN'>
>>> multicast.data[0]['interface']  # Access by row
'lo'
>>> multicast.data[0]['refcnt']  # Values are strings
'1'
>>> multicast.data[0]['group']  # Column names are lower case
'224.0.0.1'
>>> mc_ifs = multicast.group_by_iface()  # Lists by interface name
>>> len(mc_ifs['lo'])
2
>>> mc_ifs['eth0'][1]['refcnt']  # Listed in order of appearance
'4'
group_by_iface()[source]

Group Netstat AGN data by interface name, returning a dictionary like:

{
    "lo": [
        {"refcnt": "1", "group": "224.0.0.1"},
        {"refcnt": "1", "group": "ff02::1"}
    ]
}
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.netstat.NetstatS(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

Parses data from the netstat -s command.

The output of the netstat -s command looks like:

Ip:
    3405107 total packets received
    0 forwarded
    0 incoming packets discarded
    2900146 incoming packets delivered
    2886201 requests sent out
    456 outgoing packets dropped
    4 fragments received ok
    8 fragments created
Icmp:
    114 ICMP messages received
    0 input ICMP message failed.
    ICMP input histogram:
        destination unreachable: 107
        echo requests: 4
        echo replies: 3
    261 ICMP messages sent
    0 ICMP messages failed
    ICMP output histogram:
        destination unreachable: 254
        echo request: 3
        echo replies: 4
IcmpMsg:
        InType0: 3
        InType3: 107
        InType8: 4
        OutType0: 4
        OutType3: 254
        OutType8: 3
Tcp:
    1648 active connections openings
    1525 passive connection openings
    105 failed connection attempts
    69 connection resets received
    139 connections established
    2886370 segments received
    2890303 segments send out
    428 segments retransmited
    0 bad segments received.
    212 resets sent
Udp:
    4901 packets received
    107 packets to unknown port received.
    0 packet receive errors
    1793 packets sent
    0 receive buffer errors
    0 send buffer errors

Examples

>>> type(stats)
<class 'insights.parsers.netstat.NetstatS'>
>>> sorted(stats.data.keys())  # Stored by heading, lower case
['icmp', 'icmpmsg', 'ip', 'ipext', 'tcp', 'tcpext', 'udp', 'udplite']
>>> 'ip' in stats.data
True
>>> 'forwarded' in stats.data['ip']   # Then by keyword and value
True
>>> stats.data['ip']['forwarded']  # Values are strings
'0'
>>> stats['ip']['forwarded']  # Direct access via LegacyItemAccess
'0'
>>> stats['ip']['requests_sent_out']  # Spaces converted to underscores
'2886201'
>>> stats['tcp']['bad_segments_received']  # Dots are removed
'0'
>>> stats['icmp']['icmp_output_histogram']['destination_unreachable'] # Sub-table
'254'
parse_content(content)[source]

This method must be implemented by classes based on this class.
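The key normalization shown in the examples (lower case, spaces converted to underscores, trailing dots removed) can be sketched for a single statistics line (illustrative helper, not the parser's code):

```python
def normalize_stat_line(line):
    """'  2886201 requests sent out' -> ('requests_sent_out', '2886201')"""
    value, description = line.strip().split(None, 1)
    key = description.rstrip('.').lower().replace(' ', '_')
    return key, value
```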

class insights.parsers.netstat.Netstat_I(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse the netstat -i command output to get interface traffic info such as “TX-OK” and “RX-OK”.

The output of netstat -i looks like:

Kernel Interface table
Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0      1500   0   845265      0      0      0     1753      0      0      0 BMmRU
bond1      1500   0   842447      0      0      0     4233      0      0      0 BMmRU
eth0       1500   0   422518      0      0      0     1703      0      0      0 BMsRU
eth1       1500   0   422747      0      0      0       50      0      0      0 BMsRU
eth2       1500   0   421192      0      0      0     3674      0      0      0 BMsRU
eth3       1500   0   421255      0      0      0      559      0      0      0 BMsRU
lo        65536   0        0      0      0      0        0      0      0      0 LRU

Examples

>>> type(traf)
<class 'insights.parsers.netstat.Netstat_I'>
>>> traf.data[0]['Iface']  # A list of the interfaces and stats.
'bond0'
>>> 'bond0' in traf.group_by_iface  # A dictionary keyed on interface.
True
>>> 'enp0s25' in traf.group_by_iface
False
>>> 'MTU' in traf.group_by_iface['bond0']
True
>>> traf.group_by_iface['bond0']['MTU']  # as string
'1500'
>>> traf.group_by_iface['bond0']['RX-OK']
'845265'
parse_content(content)[source]

This method must be implemented by classes based on this class.
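Since the netstat -i output is a whitespace-separated table under a title line, parsing it reduces to zipping the header row with each data row (a minimal sketch, not the class's actual code):

```python
def parse_interface_table(lines):
    """lines[0] is the 'Kernel Interface table' title; lines[1] is the
    column header; every remaining line is one interface's counters."""
    header = lines[1].split()
    return [dict(zip(header, row.split())) for row in lines[2:]]
```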

class insights.parsers.netstat.ProcNsat(context)[source]

Bases: insights.core.Parser

Parse the content of the /proc/net/netstat file

Sample input data looks like:

TcpExt: SyncookiesSent SyncookiesRecv SyncookiesFailed EmbryonicRsts PruneCalled RcvPruned OfoPruned OutOfWindowIcmps LockDroppedIcmps ArpFilter TW TWRecycled TWKilled PAWSPassive PAWSActive PAWSEstab DelayedACKs DelayedACKLocked DelayedACKLost ListenOverflows ListenDrops TCPPrequeued TCPDirectCopyFromBacklog TCPDirectCopyFromPrequeue TCPPrequeueDropped TCPHPHits TCPHPHitsToUser TCPPureAcks TCPHPAcks TCPRenoRecovery TCPSackRecovery TCPSACKReneging TCPFACKReorder TCPSACKReorder TCPRenoReorder TCPTSReorder TCPFullUndo TCPPartialUndo TCPDSACKUndo TCPLossUndo TCPLostRetransmit TCPRenoFailures TCPSackFailures TCPLossFailures TCPFastRetrans TCPForwardRetrans TCPSlowStartRetrans TCPTimeouts TCPLossProbes TCPLossProbeRecovery TCPRenoRecoveryFail TCPSackRecoveryFail TCPSchedulerFailed TCPRcvCollapsed TCPDSACKOldSent TCPDSACKOfoSent TCPDSACKRecv TCPDSACKOfoRecv TCPAbortOnData TCPAbortOnClose TCPAbortOnMemory TCPAbortOnTimeout TCPAbortOnLinger TCPAbortFailed TCPMemoryPressures TCPSACKDiscard TCPDSACKIgnoredOld TCPDSACKIgnoredNoUndo TCPSpuriousRTOs TCPMD5NotFound TCPMD5Unexpected TCPSackShifted TCPSackMerged TCPSackShiftFallback TCPBacklogDrop PFMemallocDrop TCPMinTTLDrop TCPDeferAcceptDrop IPReversePathFilter TCPTimeWaitOverflow TCPReqQFullDoCookies TCPReqQFullDrop TCPRetransFail TCPRcvCoalesce TCPOFOQueue TCPOFODrop TCPOFOMerge TCPChallengeACK TCPSYNChallenge TCPFastOpenActive TCPFastOpenActiveFail TCPFastOpenPassive TCPFastOpenPassiveFail TCPFastOpenListenOverflow TCPFastOpenCookieReqd TCPSpuriousRtxHostQueues BusyPollRxPackets TCPAutoCorking TCPFromZeroWindowAdv TCPToZeroWindowAdv TCPWantZeroWindowAdv TCPSynRetrans TCPOrigDataSent TCPHystartTrainDetect TCPHystartTrainCwnd TCPHystartDelayDetect TCPHystartDelayCwnd TCPACKSkippedSynRecv TCPACKSkippedPAWS TCPACKSkippedSeq TCPACKSkippedFinWait2 TCPACKSkippedTimeWait TCPACKSkippedChallenge TCPWqueueTooBig
TcpExt: 10 20 30 40 0 0 0 0 0 0 8387793 2486 0 0 0 3 27599330 35876 309756 0 0 84351589 9652226708 54271044841 0 10507706759 112982361 177521295 3326559442 0 26212 0 36 33090 0 14345 959 8841 425 833 399 0 160 2 633809 11063 7056 233144 1060065 640242 0 228 54 0 310709 0 820887 112 900268 31664 0 232144 0 0 0 261 1048 808390 9 0 0 120433 244126 450077 0 0 0 5625 0 0 0 0 0 6772744900 19251701 0 0 465 463 0 0 0 0 0 0 1172 0 623074473 51282 51282 142025 465090 8484708872 836920 18212118 88 4344 0 0 5 4 3 2 1
IpExt: InNoRoutes InTruncatedPkts InMcastPkts OutMcastPkts InBcastPkts OutBcastPkts InOctets OutOctets InMcastOctets OutMcastOctets InBcastOctets OutBcastOctets InCsumErrors InNoECTPkts InECT1Pkts InECT0Pkts InCEPkts ReasmOverlaps
IpExt: 100 200 300 400 500 0 10468977960762 8092447661930 432 0 3062938 0 0 12512350267 400 300 200 100

Examples

>>> type(pnstat)
<class 'insights.parsers.netstat.ProcNsat'>
>>> len(pnstat.data) == 132
True
>>> pnstat.get_stats('ReasmOverlaps')
100
>>> pnstat.get_stats('EmbryonicRsts')
40
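The two-line header/value layout shown above can be flattened by zipping each header row with the value row that follows it. A minimal standalone sketch of that pairing (an illustration only, not the parser's actual implementation):

```python
# Hypothetical sketch of flattening /proc/net/netstat header/value
# line pairs into one dict; not the insights implementation itself.
def flatten_netstat(lines):
    data = {}
    it = iter(lines)
    for header in it:
        values = next(it)
        keys = header.split()[1:]                  # drop "TcpExt:"/"IpExt:"
        vals = (int(v) for v in values.split()[1:])
        data.update(zip(keys, vals))
    return data

sample = [
    "TcpExt: SyncookiesSent SyncookiesRecv EmbryonicRsts",
    "TcpExt: 10 20 40",
    "IpExt: InNoRoutes ReasmOverlaps",
    "IpExt: 200 100",
]
stats = flatten_netstat(sample)
```

A missing key then simply yields None via `stats.get(key)`, matching the behaviour described for get_stats.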
get_stats(key_stats)[source]
Return the integer value of the given statistic if key_stats is present in the TcpExt or IpExt data; otherwise return None.

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.netstat.SsTULPN(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse the output of the /usr/sbin/ss -tulpn command.

This class parses the input as a table with the header:

“Netid State Recv-Q Send-Q Local-Address-Port Peer-Address-Port Process”

Sample input data looks like:

Netid  State      Recv-Q Send-Q Local Address:Port               Peer Address:Port
udp    UNCONN     0      0                  *:55898                 *:*
udp    UNCONN     0      0          127.0.0.1:904                   *:*                   users:(("rpc.statd",pid=29559,fd=7))
udp    UNCONN     0      0                  *:111                   *:*                   users:(("rpcbind",pid=953,fd=9))
udp    UNCONN     0      0                 :::37968                :::12345               users:(("rpc.statd",pid=29559,fd=10))
tcp    LISTEN     0      128                *:111                   *:*                   users:(("rpcbind",pid=1139,fd=5),("systemd",pid=1,fd=41))

Examples

>>> type(ss)
<class 'insights.parsers.netstat.SsTULPN'>
>>> sorted(ss.data[1].keys())  # Rows stored by column headings
['Local-Address-Port', 'Netid', 'Peer-Address-Port', 'Process', 'Recv-Q', 'Send-Q', 'State']
>>> ss.data[0]['Local-Address-Port']
'*:55898'
>>> ss.data[0]['State']
'UNCONN'
>>> rpcbind = ss.get_service("rpcbind")  # All connections opened by rpcbind
>>> len(rpcbind)
2
>>> rpcbind[0]['State']
'UNCONN'
>>> rpcbind[1]['State']
'LISTEN'
>>> rpcbind[0]['Process']
'users:(("rpcbind",pid=953,fd=9))'
>>> rpcbind[1]['Process']
'users:(("rpcbind",pid=1139,fd=5),("systemd",pid=1,fd=41))'
>>> using_55898 = ss.get_port("55898")  # Both local and peer port searched
>>> len(using_55898)
1
>>> 'Process' in using_55898[0]  # Not in dictionary if field not found
False
>>> rpcbind == ss.get_localport('111')  # Only local port or address searched
True
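Because the trailing Process column is present only on some rows, the table can be parsed with a bounded split; a simplified stand-in for the real table handling:

```python
# Hypothetical sketch: split each row into at most seven fields so the
# optional trailing Process column is simply absent from short rows.
HEADER = ["Netid", "State", "Recv-Q", "Send-Q",
          "Local-Address-Port", "Peer-Address-Port", "Process"]

def parse_ss_table(lines):
    rows = []
    for line in lines[1:]:                         # skip the header line
        parts = line.split(None, len(HEADER) - 1)
        rows.append(dict(zip(HEADER, parts)))      # short rows omit 'Process'
    return rows

lines = [
    "Netid  State   Recv-Q Send-Q Local Address:Port Peer Address:Port",
    'udp    UNCONN  0      0      *:55898            *:*',
    'udp    UNCONN  0      0      127.0.0.1:904      *:*   users:(("rpc.statd",pid=29559,fd=7))',
]
rows = parse_ss_table(lines)
```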
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.netstat.SsTUPNA(context, extra_bad_lines=[])[source]

Bases: insights.parsers.netstat.SsTULPN

Parse the output of the /usr/sbin/ss -tupna command.

This class parses the input as a table with the header:

“Netid State Recv-Q Send-Q Local-Address-Port Peer-Address-Port Process”

Sample input data looks like:

Netid State      Recv-Q Send-Q    Local Address:Port    Peer Address:Port
tcp   UNCONN     0      0                     *:68                 *:*      users:(("dhclient",1171,6))
tcp   LISTEN     0      100           127.0.0.1:25                 *:*      users:(("master",1326,13))
tcp   ESTAB      0      0         192.168.0.106:22     192.168.0.101:59232  users:(("sshd",11427,3))
tcp   ESTAB      0      0         192.168.0.106:739    192.168.0.105:2049
tcp   LISTEN     0      128                  :::111               :::*      users:(("rpcbind",483,11))

Examples

>>> type(ssa)
<class 'insights.parsers.netstat.SsTUPNA'>
>>> sorted(ssa.data[2].items())
[('Local-Address-Port', '192.168.0.106:22'), ('Netid', 'tcp'), ('Peer-Address-Port', '192.168.0.101:59232'), ('Process', 'users:(("sshd",11427,3))'), ('Recv-Q', '0'), ('Send-Q', '0'), ('State', 'ESTAB')]
>>> sorted(ssa.get_service("sshd")[0].items())  # All connections opened by sshd
[('Local-Address-Port', '192.168.0.106:22'), ('Netid', 'tcp'), ('Peer-Address-Port', '192.168.0.101:59232'), ('Process', 'users:(("sshd",11427,3))'), ('Recv-Q', '0'), ('Send-Q', '0'), ('State', 'ESTAB')]
>>> sorted(ssa.get_port("2049")[0].items())  # Both local and peer port searched
[('Local-Address-Port', '192.168.0.106:739'), ('Netid', 'tcp'), ('Peer-Address-Port', '192.168.0.105:2049'), ('Recv-Q', '0'), ('Send-Q', '0'), ('State', 'ESTAB')]
>>> sorted(ssa.get_localport("739")[0].items())  # local port searched
[('Local-Address-Port', '192.168.0.106:739'), ('Netid', 'tcp'), ('Peer-Address-Port', '192.168.0.105:2049'), ('Recv-Q', '0'), ('Send-Q', '0'), ('State', 'ESTAB')]
>>> sorted(ssa.get_peerport("59232")[0].items())  # peer port searched
[('Local-Address-Port', '192.168.0.106:22'), ('Netid', 'tcp'), ('Peer-Address-Port', '192.168.0.101:59232'), ('Process', 'users:(("sshd",11427,3))'), ('Recv-Q', '0'), ('Send-Q', '0'), ('State', 'ESTAB')]
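The get_port, get_localport and get_peerport examples above differ only in which address column the port suffix is matched against; a rough standalone sketch of that matching (not the parser's actual code):

```python
# Hypothetical sketch of the port matching behind get_port(),
# get_localport() and get_peerport().
rows = [
    {"Local-Address-Port": "192.168.0.106:739",
     "Peer-Address-Port": "192.168.0.105:2049"},
    {"Local-Address-Port": "192.168.0.106:22",
     "Peer-Address-Port": "192.168.0.101:59232"},
]

def _match(rows, port, fields):
    # A row matches when any of the given columns ends with ":<port>".
    suffix = ":" + port
    return [r for r in rows
            if any(r.get(f, "").endswith(suffix) for f in fields)]

def get_localport(rows, port):
    return _match(rows, port, ["Local-Address-Port"])

def get_peerport(rows, port):
    return _match(rows, port, ["Peer-Address-Port"])

def get_port(rows, port):
    return _match(rows, port, ["Local-Address-Port", "Peer-Address-Port"])
```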
parse_content(content)[source]

This method must be implemented by classes based on this class.

NeutronConf - file /etc/neutron/neutron.conf

This class provides parsing for the file /etc/neutron/neutron.conf.

Sample input data is in the format:

[DEFAULT]
# debug = False
debug = False
# verbose = True
verbose = False
core_plugin =neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

[quotas]
default_quota = -1
quota_network = 10
[agent]
report_interval = 60

[keystone_authtoken]
auth_host = ost-controller-lb-del.om-l.dsn.inet
auth_port = 35357

See the IniConfigFile class for examples.

class insights.parsers.neutron_conf.NeutronConf(context)[source]

Bases: insights.core.IniConfigFile

Class to parse file neutron.conf.

NeutronDhcpAgentIni - file /etc/neutron/dhcp_agent.ini

The NeutronDhcpAgentIni class parses the dhcp-agent configuration file. See the IniConfigFile class for more usage information.

class insights.parsers.neutron_dhcp_agent_conf.NeutronDhcpAgentIni(context)[source]

Bases: insights.core.IniConfigFile

Parse the /etc/neutron/dhcp_agent.ini configuration file.

Sample configuration:

[DEFAULT]

ovs_integration_bridge = br-int
ovs_use_veth = false
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
ovs_vsctl_timeout = 10
resync_interval = 30
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
force_metadata = False
enable_metadata_network = False
root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf
state_path=/var/lib/neutron

[AGENT]

report_interval = 30
log_agent_heartbeats = false
availability_zone = nova

Examples

>>> data.has_option("AGENT", "log_agent_heartbeats")
True
>>> data.get("DEFAULT", "force_metadata") == "True"
True
>>> data.getint("DEFAULT", "resync_interval")
30
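The lookups above follow the standard INI access pattern; as a rough stand-in, the stdlib configparser reproduces them on a trimmed sample (the real parser is insights.core.IniConfigFile, which adds its own handling):

```python
# Stand-in using the stdlib configparser to mimic the lookups shown above;
# the actual parser is insights.core.IniConfigFile, not configparser.
import configparser

SAMPLE = """\
[DEFAULT]
force_metadata = False
resync_interval = 30

[AGENT]
report_interval = 30
log_agent_heartbeats = false
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)
```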

NeutronL3AgentIni - file /etc/neutron/l3_agent.ini

The NeutronL3AgentIni class parses the l3_agent configuration file. See the IniConfigFile class for more usage information.

class insights.parsers.neutron_l3_agent_conf.NeutronL3AgentIni(context)[source]

Bases: insights.core.IniConfigFile

Parse the /etc/neutron/l3_agent.ini configuration file.

Sample configuration:

[DEFAULT]

#
# From neutron.base.agent
#

# Name of Open vSwitch bridge to use (string value)
ovs_integration_bridge = br-int

# Uses veth for an OVS interface or not. Support kernels with limited namespace
# support (e.g. RHEL 6.5) so long as ovs_use_veth is set to True. (boolean
# value)
ovs_use_veth = false

# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

# Timeout in seconds for ovs-vsctl commands. If the timeout expires, ovs
# commands will fail with ALARMCLOCK error. (integer value)
ovs_vsctl_timeout = 10

#
# From neutron.l3.agent
#

# The working mode for the agent. Allowed modes are: 'legacy' - this preserves
# the existing behavior where the L3 agent is deployed on a centralized
# networking node to provide L3 services like DNAT, and SNAT. Use this mode if
# you do not want to adopt DVR. 'dvr' - this mode enables DVR functionality and
# must be used for an L3 agent that runs on a compute host. 'dvr_snat' - this
# enables centralized SNAT support in conjunction with DVR.  This mode must be
# used for an L3 agent running on a centralized node (or in single-host
# deployments, e.g. devstack) (string value)
# Allowed values: dvr, dvr_snat, legacy
agent_mode = dvr

# TCP Port used by Neutron metadata namespace proxy. (port value)
# Minimum value: 0
# Maximum value: 65535
metadata_port = 9697

# Send this many gratuitous ARPs for HA setup, if less than or equal to 0, the
# feature is disabled (integer value)
#send_arp_for_ha = 3

# Allow running metadata proxy. (boolean value)
enable_metadata_proxy = true

# DEPRECATED: Name of bridge used for external network traffic. When this
# parameter is set, the L3 agent will plug an interface directly into an
# external bridge which will not allow any wiring by the L2 agent. Using this
# will result in incorrect port statuses. This option is deprecated and will be
# removed in Ocata. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
external_network_bridge =

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
debug = False

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
log_date_format = %Y-%m-%d %H:%M:%S

[AGENT]

#
# From neutron.base.agent
#

# Seconds between nodes reporting state to server; should be less than
# agent_down_time, best if it is half or less than agent_down_time. (floating
# point value)
report_interval = 30

# Log agent heartbeats (boolean value)
log_agent_heartbeats = false

# Availability zone of this node (string value)
availability_zone = nova

Examples

>>> l3_agent_ini.has_option("AGENT", "log_agent_heartbeats")
True
>>> l3_agent_ini.get("DEFAULT", "agent_mode") == "dvr"
True
>>> l3_agent_ini.getint("DEFAULT", "metadata_port")
9697

NeutronL3AgentLog - file /var/log/neutron/l3-agent.log

class insights.parsers.neutron_l3_agent_log.NeutronL3AgentLog(context)[source]

Bases: insights.core.LogFileOutput

Parse the /var/log/neutron/l3-agent.log file.

Note

Please refer to its super-class insights.core.LogFileOutput for more details.

Sample log lines:

2017-09-17 10:05:06.241 141544 INFO neutron.agent.l3.ha [-] Router 01d51830-0e3e-4100-a891-efd7dbc000b1 transitioned to backup
2017-09-17 10:05:07.828 141544 WARNING neutron.agent.linux.iptables_manager [-] Duplicate iptables rule detected. This may indicate a bug in the the iptables rule generation code. Line: -A neutron-l3-agent-INPUT -p tcp -m tcp --dport 9697 -j DROP
2017-09-17 10:05:07.829 141544 WARNING neutron.agent.linux.iptables_manager [-] Duplicate iptables rule detected. This may indicate a bug in the the iptables rule generation code. Line: -A neutron-l3-agent-INPUT -m mark --mark 0x1/0xffff -j ACCEP

Examples

>>> len(agent_log.get("Duplicate iptables rule detected")) == 2
True
>>> from datetime import datetime
>>> len(list(agent_log.get_after(datetime(2017, 2, 17, 10, 5, 7))))
3
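The get() and get_after() calls above can be approximated on plain log lines; a simplified sketch of that filtering (the real methods live in insights.core.LogFileOutput and return dicts, not raw strings):

```python
# Hypothetical sketch of the get()/get_after() style filtering offered
# by insights.core.LogFileOutput subclasses such as NeutronL3AgentLog.
from datetime import datetime

LINES = [
    "2017-09-17 10:05:06.241 141544 INFO neutron.agent.l3.ha [-] Router transitioned to backup",
    "2017-09-17 10:05:07.828 141544 WARNING neutron.agent.linux.iptables_manager [-] Duplicate iptables rule detected.",
    "2017-09-17 10:05:07.829 141544 WARNING neutron.agent.linux.iptables_manager [-] Duplicate iptables rule detected.",
]

def get(lines, needle):
    # Substring search across raw log lines.
    return [line for line in lines if needle in line]

def get_after(lines, when):
    # Parse the leading "date time" tokens and keep lines at or after `when`.
    def stamp(line):
        return datetime.strptime(" ".join(line.split()[:2]),
                                 "%Y-%m-%d %H:%M:%S.%f")
    return [line for line in lines if stamp(line) >= when]
```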

NeutronMetadataAgentIni - file /etc/neutron/metadata_agent.ini

The NeutronMetadataAgentIni class parses the metadata-agent configuration file. See the IniConfigFile class for more usage information.

class insights.parsers.neutron_metadata_agent_conf.NeutronMetadataAgentIni(context)[source]

Bases: insights.core.IniConfigFile

Parse the /etc/neutron/metadata_agent.ini configuration file.

Sample configuration:

[DEFAULT]
# Show debugging output in log (sets DEBUG log level output)
# debug = True

# The Neutron user information for accessing the Neutron API.
auth_url = http://localhost:5000/v2.0
auth_region = RegionOne
# Turn off verification of the certificate for ssl
# auth_insecure = False
# Certificate Authority public key (CA cert) file for ssl
# auth_ca_cert =
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%

# Network service endpoint type to pull from the keystone catalog
# endpoint_type = adminURL

# IP address used by Nova metadata server
# nova_metadata_ip = 127.0.0.1

# TCP Port used by Nova metadata server
# nova_metadata_port = 8775

# Which protocol to use for requests to Nova metadata server, http or https
# nova_metadata_protocol = http

# Whether insecure SSL connection should be accepted for Nova metadata server
# requests
# nova_metadata_insecure = False

# Client certificate for nova api, needed when nova api requires client
# certificates
# nova_client_cert =

# Private key for nova client certificate
# nova_client_priv_key =

# When proxying metadata requests, Neutron signs the Instance-ID header with a
# shared secret to prevent spoofing.  You may select any string for a secret,
# but it must match here and in the configuration used by the Nova Metadata
# Server. NOTE: Nova uses the same config key, but in [neutron] section.
# metadata_proxy_shared_secret =

# Location of Metadata Proxy UNIX domain socket
# metadata_proxy_socket = $state_path/metadata_proxy

# Metadata Proxy UNIX domain socket mode, 3 values allowed:
# 'deduce': deduce mode from metadata_proxy_user/group values,
# 'user': set metadata proxy socket mode to 0o644, to use when
# metadata_proxy_user is agent effective user or root,
# 'group': set metadata proxy socket mode to 0o664, to use when
# metadata_proxy_group is agent effective group,
# 'all': set metadata proxy socket mode to 0o666, to use otherwise.
# metadata_proxy_socket_mode = deduce

# Number of separate worker processes for metadata server. Defaults to
# half the number of CPU cores
# metadata_workers =

# Number of backlog requests to configure the metadata server socket with
# metadata_backlog = 4096

# URL to connect to the cache backend.
# default_ttl=0 parameter will cause cache entries to never expire.
# Otherwise default_ttl specifies time in seconds a cache entry is valid for.
# No cache is used in case no value is passed.
# cache_url = memory://?default_ttl=5

[AGENT]
# Log agent heartbeats from this Metadata agent
# log_agent_heartbeats = False

Examples

>>> metadata_agent_ini.has_option('AGENT', 'log_agent_heartbeats')
True
>>> metadata_agent_ini.get("DEFAULT", "auth_url") == 'http://localhost:5000/v2.0'
True
>>> metadata_agent_ini.getint("DEFAULT", "metadata_backlog")
4096

NeutronMetadataAgentLog - file /var/log/neutron/metadata-agent.log

class insights.parsers.neutron_metadata_agent_log.NeutronMetadataAgentLog(context)[source]

Bases: insights.core.LogFileOutput

Parse the /var/log/neutron/metadata-agent.log file.

Note

Please refer to its super-class insights.core.LogFileOutput for more details.

Sample log lines:

2018-06-08 17:29:55.894 11770 WARNING neutron.agent.metadata.agent [-] Server does not support metadata RPC, fallback to using neutron client
2018-06-08 17:29:55.907 11770 ERROR neutron.agent.metadata.agent [-] Unexpected error
2018-06-08 17:29:56.126 11770 TRACE neutron.agent.metadata.agent Traceback (most recent call last):
2018-06-08 17:29:56.126 11770 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/metadata/agent.py", line 109, in __call__
2018-06-08 17:29:56.126 11770 TRACE neutron.agent.metadata.agent     self._authenticate_keystone()
2018-06-08 17:29:56.126 11770 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 218, in _authenticate_keystone
2018-06-08 17:29:56.126 11770 TRACE neutron.agent.metadata.agent     raise exceptions.Unauthorized(message=resp_body)
2018-06-08 17:29:56.126 11770 TRACE neutron.agent.metadata.agent Unauthorized: {"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}}

Examples

>>> len(metadata_agent_log.get("Server does not support metadata RPC, fallback to using neutron client")) == 1
True
>>> from datetime import datetime
>>> len(list(metadata_agent_log.get_after(datetime(2018, 6, 8, 17, 29, 56))))
6

NeutronOVSAgentLog - file /var/log/neutron/openvswitch-agent.log

Parser plugin for the Neutron OVS agent log file.

Typical content of openvswitch-agent.log file is:

2016-11-09 14:39:25.343 3153 INFO neutron.common.config [-] Logging enabled!
2016-11-09 14:39:25.343 3153 INFO neutron.common.config [-] /usr/bin/neutron-openvswitch-agent version 9.1.0
2016-11-09 14:39:25.347 3153 WARNING oslo_config.cfg [-] Option "rabbit_hosts" from group "oslo_messaging_rabbit" is deprecated for removal.  Its value may be silently ignored in the future.
2016-11-09 14:39:25.348 3153 WARNING oslo_config.cfg [-] Option "rabbit_password" from group "oslo_messaging_rabbit" is deprecated for removal.  Its value may be silently ignored in the future.
2016-11-09 14:39:25.348 3153 WARNING oslo_config.cfg [-] Option "rabbit_userid" from group "oslo_messaging_rabbit" is deprecated for removal.  Its value may be silently ignored in the future.
2016-11-09 14:39:25.352 3153 INFO ryu.base.app_manager [-] loading app neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp
2016-11-09 14:39:27.171 3153 INFO ryu.base.app_manager [-] loading app ryu.app.ofctl.service
2016-11-09 14:39:27.190 3153 INFO ryu.base.app_manager [-] loading app ryu.controller.ofp_handler
2016-11-09 14:39:27.209 3153 INFO ryu.base.app_manager [-] instantiating app neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp of OVSNeutronAgentRyuApp
2016-11-09 14:39:27.210 3153 INFO ryu.base.app_manager [-] instantiating app ryu.controller.ofp_handler of OFPHandler
2016-11-09 14:39:27.210 3153 INFO ryu.base.app_manager [-] instantiating app ryu.app.ofctl.service of OfctlService
2016-11-09 14:39:28.255 3153 INFO oslo_rootwrap.client [-] Spawned new rootwrap daemon process with pid=3925
2016-11-09 14:39:29.376 3153 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [-] Bridge br-int has datapath-ID 0000c2670b638c45
class insights.parsers.neutron_ovs_agent_log.NeutronOVSAgentLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing /var/log/neutron/openvswitch-agent.log file.

Note

Please refer to its super-class insights.core.LogFileOutput

NeutronPlugin - file /etc/neutron/plugin.ini

The NeutronPlugin class parses the Neutron plugin configuration file. See the IniConfigFile class for more usage information.

class insights.parsers.neutron_plugin.NeutronPlugin(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.IniConfigFile

Parse the /etc/neutron/plugin.ini configuration file.

Sample configuration file:

[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = local,flat,vlan,gre,vxlan
mechanism_drivers =openvswitch,linuxbridge
extension_drivers =

[ml2_type_flat]
flat_networks =*
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *

[ml2_type_vlan]
network_vlan_ranges =physnet1:1000:2999
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2

[ml2_type_gre]
tunnel_id_ranges =20:100

[ml2_type_vxlan]
vni_ranges =10:100
vxlan_group =224.0.0.1

[ml2_type_geneve]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of Geneve VNI IDs that are available for tenant network allocation.
#
# vni_ranges =

[securitygroup]
enable_security_group = True

Examples

>>> nconf = shared[NeutronPlugin]
>>> 'ml2' in nconf
True
>>> nconf.has_option('ml2', 'type_drivers')
True
>>> nconf.get("ml2_type_flat", "flat_networks")
'*'
>>> nconf.items('ml2_type_vxlan')
{'vni_ranges': '10:100',
 'vxlan_group': '224.0.0.1'}

NeutronServerLog - file /var/log/neutron/server.log

class insights.parsers.neutron_server_log.NeutronServerLog(context)[source]

Bases: insights.core.LogFileOutput

Read the /var/log/neutron/server.log file.

Note

Please refer to its super-class insights.core.LogFileOutput for more usage information

Sample log file:

2016-09-13 05:56:45.155 30586 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "Could not find token: b45405915eb44e608885f894028d37b9", "code": 404, "title": "Not Found"}}
2016-09-13 05:56:45.156 30586 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2016-09-13 06:06:45.884 30588 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2016-09-13 06:06:45.886 30588 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "Could not find token: fd482ef0ba1144bf944a0a6c2badcdf8", "code": 404, "title": "Not Found"}}
2016-09-13 06:06:45.887 30588 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2016-09-13 06:06:46.131 30586 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2016-09-13 06:06:46.131 30586 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "Could not find token: bc029dbe33f84fbcb67ef7d592458e60", "code": 404, "title": "Not Found"}}
2016-09-13 06:06:46.132 30586 WARNING keystonemiddleware.auth_token [-] Authorization failed for token

Examples

>>> neutron_log = shared[NeutronServerLog]
>>> neutron_log.get('Authorization')[0]['raw_message']
'2016-09-13 05:56:45.156 30586 WARNING keystonemiddleware.auth_token [-] Authorization failed for token'
>>> len(list(neutron_log.get_after(datetime.datetime(2016, 9, 13, 6, 0, 0))))
6
>>> neutron_log.get_after(datetime.datetime(2016, 9, 13, 6, 0, 0))[0]['raw_message']
'2016-09-13 06:06:45.884 30588 WARNING keystonemiddleware.auth_token [-] Authorization failed for token'

NFS exports configuration

NFSExports and NFSExportsD provide a parsed output of the content of an exports file as defined in man exports(5). The content is parsed into a dictionary, where the key is the export path and the value is another dictionary, where the key is the hostname and the value is the option list, parsed into an actual list.

The default ("-") hostname is not specially handled, nor are wildcards.

If a host is defined more than once for the same export path, only the first definition for that host is parsed. Subsequent redefinitions are not parsed and are recorded in the ignored_exports member (also available as ignored_lines).

All raw lines are kept in raw_lines, which is a dict where the key is the export path and the value is the stripped raw line.

Parsers included in this module are:

NFSExports - file nfs_exports

NFSExportsD - files in the nfs_exports.d directory

Sample content of the /etc/exports file:

/home/utcs/shared/ro                    @group(ro,sync)   ins1.example.com(rw,sync,no_root_squash) ins2.example.com(rw,sync,no_root_squash)
/home/insights/shared/rw                @group(rw,sync)   ins1.example.com(rw,sync,no_root_squash) ins2.example.com(ro,sync,no_root_squash)
/home/insights/shared/special/all/mail  @group(rw,sync,no_root_squash)
/home/insights/ins/special/all/config   @group(ro,sync,no_root_squash)  ins1.example.com(rw,sync,no_root_squash)
#/home/insights                          ins1.example.com(rw,sync,no_root_squash)
/home/example                           @group(rw,sync,root_squash) ins1.example.com(rw,sync,no_root_squash) ins2.example.com(rw,sync,no_root_squash)
# A duplicate host for this exported path
/home/example                           ins2.example.com(rw,sync,no_root_squash)

Examples

>>> type(exports)
<class 'insights.parsers.nfs_exports.NFSExports'>
>>> type(exports.data) == type({})
True
>>> exports.raw_lines['/home/insights/shared/rw']  # List of lines that define this path
['/home/insights/shared/rw                @group(rw,sync)   ins1.example.com(rw,sync,no_root_squash) ins2.example.com(ro,sync,no_root_squash)']
>>> exports.raw_lines['/home/example']  # Lines are stored even if they contain duplicate hosts
['/home/example                           @group(rw,sync,root_squash) ins1.example.com(rw,sync,no_root_squash) ins2.example.com(rw,sync,no_root_squash)', '/home/example                           ins2.example.com(rw,sync,no_root_squash)']
>>> exports.ignored_exports
{'/home/example': {'ins2.example.com': ['rw', 'sync', 'no_root_squash']}}
>>> sorted(list(exports.all_options()))
['no_root_squash', 'ro', 'root_squash', 'rw', 'sync']
>>> sorted(list(exports.export_paths()))
['/home/example', '/home/insights/ins/special/all/config', '/home/insights/shared/rw', '/home/insights/shared/special/all/mail', '/home/utcs/shared/ro']
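The structure described above (path to host to option list, with duplicate host definitions ignored) can be sketched in a few lines; an illustration only, not the module's actual code:

```python
# Hypothetical sketch of building the path -> {host: [options]} mapping,
# recording duplicate host definitions for a path instead of parsing them.
def parse_exports(lines):
    data, ignored = {}, {}
    for raw in lines:
        line = raw.split("#", 1)[0].strip()     # drop comments and blanks
        if not line:
            continue
        path, *clients = line.split()
        hosts = data.setdefault(path, {})
        for client in clients:
            host, _, rest = client.partition("(")
            options = rest.rstrip(")").split(",") if rest else []
            if host in hosts:                   # duplicate host definition
                ignored.setdefault(path, {})[host] = options
            else:
                hosts[host] = options
    return data, ignored

sample = [
    "/home/example  @group(rw,sync) ins2.example.com(rw,sync)",
    "/home/example  ins2.example.com(ro,sync)",
]
data, ignored = parse_exports(sample)
```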
class insights.parsers.nfs_exports.NFSExports(context)[source]

Bases: insights.parsers.nfs_exports.NFSExportsBase

Subclass to attach nfs_exports spec to

class insights.parsers.nfs_exports.NFSExportsBase(context)[source]

Bases: insights.core.Parser

Class to parse /etc/exports and /etc/exports.d/*.exports.

Exports are stored keyed on the path of the export, and then the host definition. The flags are stored as a list. NFS allows the same path to be listed on multiple lines and in multiple files, but an exported path can only have one definition for a given host.

data

Key is export path, value is a dict, where the key is the client host and the value is a list of options.

Type

dict

ignored_exports

A dictionary of exported paths that have host definitions that conflicted with a previous definition.

Type

dict

ignored_lines

A synonym for the above ignored_exports dictionary, for historical reasons.

Type

dict

raw_lines

The list of the raw lines that define each exported path, including any lines that may have ignored exports.

Type

dict of lists

all_options()[source]

Returns the set of all options used in all export entries

export_paths()[source]

Returns the set of all export paths as strings

parse_content(content)[source]

This method must be implemented by classes based on this class.

static reconstitute(path, d)[source]

Warning

This function is deprecated. Please use the raw_lines dictionary property of the parser instance instead, as this contains the actual lines from the exports file.

‘Reconstitute’ a line from its parsed value. The original lines are not used for this. The hosts in d are listed in alphabetical order, and the options are listed in the order originally given.

Parameters
  • path (str) -- The exported path

  • d (dict) -- The hosts definition of the exported path

Returns

A line simulating the definition of that exported path to those hosts.

Return type

str

class insights.parsers.nfs_exports.NFSExportsD(context)[source]

Bases: insights.parsers.nfs_exports.NFSExportsBase

Subclass to attach nfs_exports.d spec to

NginxConf - file /etc/nginx/nginx.conf and other Nginx configuration files

Parse the keyword-and-value pairs of an Nginx configuration file.

Generally, each line is split on the first space into key and value, leading and trailing space being ignored.

Example nginx.conf file:

user       root
worker_processes  5;
error_log  logs/error.log;
pid        logs/nginx.pid;
worker_rlimit_nofile 8192;

events {
  worker_connections  4096;
}

mail {
  server_name mail.example.com;
  auth_http  localhost:9000/cgi-bin/auth;
  server {
    listen   143;
    protocol imap;
  }
}

http {
  include  /etc/nginx/conf.d/*.conf
  index    index.html index.htm index.php;

  default_type application/octet-stream;
  log_format   main '$remote_addr - $remote_user [$time_local]  $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
  access_log   logs/access.log  main;
  sendfile     on;
  tcp_nopush   on;
  server_names_hash_bucket_size 128;

  server { # php/fastcgi
    listen       80;
    server_name  domain1.com www.domain1.com;
    access_log   logs/domain1.access.log  main;
    root         html;

    location ~ \.php$ {
      fastcgi_pass   127.0.0.1:1025;
    }
  }

  server { # simple reverse-proxy
    listen       80;
    server_name  domain2.com www.domain2.com;
    access_log   logs/domain2.access.log  main;

    location ~ ^/(images|javascript|js|css|flash|media|static)/  {
      root    /var/www/virtual/big.server.com/htdocs;
      expires 30d;
    }

    location / {
      proxy_pass   http://127.0.0.1:8080;
    }
  }

  map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
  }

  upstream websocket {
    server 10.66.208.205:8010;
  }

  upstream big_server_com {
    server 127.0.0.3:8000 weight=5;
    server 127.0.0.3:8001 weight=5;
    server 192.168.0.1:8000;
    server 192.168.0.1:8001;
  }

  server { # simple load balancing
    listen          80;
    server_name     big.server.com;
    access_log      logs/big.server.access.log main;

    location / {
      proxy_pass      http://big_server_com;
    }
  }
}

Examples

>>> nginxconf = NginxConf(context_wrap(NGINXCONF))
>>> nginxconf['user']
'root'
>>> nginxconf['events']['worker_connections'] # Values are all kept as strings.
'4096'
>>> nginxconf['mail']['server'][0]['listen']
'143'
>>> nginxconf['http']['access_log']
'logs/access.log  main'
>>> nginxconf['http']['server'][0]['location'][0]['fastcgi_pass']
'127.0.0.1:1025'
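The nested-dictionary result shown above can be sketched with a small brace-aware tokenizer. This illustration ignores quoting, include expansion and the repeated server/location handling that the real parser performs:

```python
# Hypothetical sketch of parsing "key value;" statements and "name { ... }"
# blocks into nested dicts. Quoted strings and repeated server/location
# blocks (which the real parser stores as lists) are not handled here.
import re

def parse_nginx(text):
    tokens = re.findall(r"[{};]|[^\s{};]+", text)
    pos = 0

    def block():
        nonlocal pos
        result, stmt = {}, []
        while pos < len(tokens):
            tok = tokens[pos]
            pos += 1
            if tok == ";":                     # end of a key/value statement
                if stmt:
                    result[stmt[0]] = " ".join(stmt[1:])
                stmt = []
            elif tok == "{":                   # start of a nested block
                result[" ".join(stmt)] = block()
                stmt = []
            elif tok == "}":                   # end of the current block
                return result
            else:
                stmt.append(tok)
        return result

    return block()

conf = parse_nginx(
    "user root; events { worker_connections 4096; } "
    "http { server { listen 80; } }"
)
```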
class insights.parsers.nginx_conf.NginxConf(*args, **kwargs)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Warning

This parser is deprecated, please import insights.combiners.nginx_conf.NginxConfTree instead.

Class for nginx.conf and conf.d configuration files.

Generally nginx.conf is written in a key-value format. It contains several unique top-level sections such as http, mail and events. The server and location subsections within the http section may appear more than once, so the values of these subsections may be lists.

parse_content(content)[source]

This method must be implemented by classes based on this class.

Nmcli parsers

This module parses the output of the nmcli command line tool used to manage NetworkManager.

Parsers provided by this module include:

NmcliDevShow - command /usr/bin/nmcli dev show

NmcliConnShow - command /usr/bin/nmcli conn show

class insights.parsers.nmcli.NmcliConnShow(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

This class parses the output of the nmcli conn show command, which lists all NetworkManager connections.

Sample output of the /usr/bin/nmcli conn show command:

NAME      UUID                                  TYPE      DEVICE
enp0s3    320d4923-c410-4b22-b7e9-afc5f794eecc  ethernet  enp0s3
virbr0    7c7dec66-4a8c-4b49-834a-889194b3b83c  bridge    virbr0
test-net  f858b1cc-d149-4de0-93bc-b1826256847a  ethernet  --

Examples

>>> type(static_conn)
<class 'insights.parsers.nmcli.NmcliConnShow'>
>>> static_conn.disconnected_connection
['test-net']
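The disconnected_connection example above reduces to checking the DEVICE column; a minimal sketch, assuming "--" marks an unattached connection as in the sample output:

```python
# Hypothetical sketch: a connection whose DEVICE column is "--" in
# `nmcli conn show` output is not attached to any device.
OUTPUT = """\
NAME      UUID                                  TYPE      DEVICE
enp0s3    320d4923-c410-4b22-b7e9-afc5f794eecc  ethernet  enp0s3
virbr0    7c7dec66-4a8c-4b49-834a-889194b3b83c  bridge    virbr0
test-net  f858b1cc-d149-4de0-93bc-b1826256847a  ethernet  --
"""

rows = [line.split() for line in OUTPUT.splitlines()[1:]]
disconnected = [name for name, _uuid, _ctype, device in rows
                if device == "--"]
```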
data

List of connections, each represented as a dictionary.

Type

list

property disconnected_connection

Returns the names of all connections that are currently disconnected, i.e. not attached to any device.

Type

(list)

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.nmcli.NmcliDevShow(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, dict

Warning

This parser may process output for only a single device; please use insights.combiners.nmcli.AllNmcliDevShow instead to cover all devices.

This class will parse the output of command nmcli dev show, and the information will be stored in dictionary format.

NetworkManager displays all the devices and their current states along with network configuration and connection status.

This parser works like a python dictionary, all parsed data can be accessed via the dict interfaces.

connected_devices

list of devices whose state is connected.

Type

list

Sample input for /usr/bin/nmcli dev show:

GENERAL.DEVICE:                         em3
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         B8:AA:BB:DE:F8:B9
GENERAL.MTU:                            1500
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     em3
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         10.16.184.98/22
IP4.GATEWAY:                            10.16.187.254
IP4.DNS[1]:                             10.16.36.29
IP4.DNS[2]:                             10.11.5.19
IP4.DNS[3]:                             10.5.30.160
IP4.DOMAIN[1]:                          abc.lab.eng.example.com
IP6.ADDRESS[1]:                         2620:52:0:10bb:ba2a:72ff:fede:f8b9/64
IP6.ADDRESS[2]:                         fe80::ba2a:72ff:fede:f8b9/64
IP6.GATEWAY:                            fe80:52:0:10bb::fc
IP6.ROUTE[1]:                           dst = 2620:52:0:10bb::/64, nh = ::, mt = 100

GENERAL.DEVICE:                         em1
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         B8:AA:BB:DE:F8:BB
GENERAL.MTU:                            1500
GENERAL.STATE:                          30 (disconnected)
GENERAL.CONNECTION:                     --
GENERAL.CON-PATH:                       --
WIRED-PROPERTIES.CARRIER:               off

GENERAL.DEVICE:                         em2
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         B8:AA:BB:DE:F8:BC
GENERAL.MTU:                            1500
GENERAL.STATE:                          30 (disconnected)
GENERAL.CONNECTION:                     --
GENERAL.CON-PATH:                       --
WIRED-PROPERTIES.CARRIER:               off
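
The per-device KEY: VALUE blocks above can be grouped by device with a short loop. A simplified standalone sketch (not the parser's actual code); each new GENERAL.DEVICE line starts a new device entry:

```python
def parse_dev_show(text):
    """Group `nmcli dev show` KEY: VALUE lines by device."""
    devices, current = {}, None
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, value = (part.strip() for part in line.split(":", 1))
        if key == "GENERAL.DEVICE":       # a new device block starts here
            current = devices.setdefault(value, {})
        elif current is not None:
            current[key] = value
    return devices

def connected_devices(devices):
    # GENERAL.STATE looks like '100 (connected)' or '30 (disconnected)'
    return sorted(name for name, props in devices.items()
                  if "(connected)" in props.get("GENERAL.STATE", ""))
```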

Examples

>>> type(nmcli_obj)
<class 'insights.parsers.nmcli.NmcliDevShow'>
>>> nmcli_obj['em3']['STATE']
'connected'
>>> nmcli_obj.get('em2')['HWADDR']
'B8:AA:BB:DE:F8:BC'
>>> sorted(nmcli_obj.connected_devices)
['em3']
property connected_devices

The list of devices whose state is connected and which are managed by NetworkManager

Type

(list)

property data

Dict with the device name as the key and device details as the value.

Type

(dict)

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.nmcli.NmcliDevShowSos(context, extra_bad_lines=[])[source]

Bases: insights.parsers.nmcli.NmcliDevShow

Warning

This parser may contain data for only a single device; please use insights.combiners.nmcli.AllNmcliDevShow instead to cover all devices.

In some versions of sosreport, the output of the nmcli dev show command is separated into individual files for different devices, while in other versions it is still a single file. The base class NmcliDevShow can handle both, except that for separated files the parsing results are stored in a list.

nova_log - files /var/log/nova/*.log

This module contains classes to parse logs under /var/log/nova/

class insights.parsers.nova_log.NovaApiLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing the /var/log/nova/nova-api.log file.

Note

Please refer to its super-class insights.core.LogFileOutput

class insights.parsers.nova_log.NovaComputeLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing the /var/log/nova/nova-compute.log file.

Note

Please refer to its super-class insights.core.LogFileOutput

Get uid of user nova and nova_migration

The parser class in this module uses base parser class CommandParser to get uids of the user nova and nova_migration.

Parsers included in this module are:

NovaUID - command id -u nova

NovaMigrationUID - command id -u nova_migration

class insights.parsers.nova_user_ids.NovaMigrationUID(context, extra_bad_lines=[])[source]

Bases: insights.parsers.nova_user_ids.NovaUID

Parse output of id -u nova_migration and get the uid (int).

Typical output of the id -u nova_migration command is:

153

However the id number may vary.

Examples

>>> nova_migration_uid.data
153
data

int if the ‘nova_migration’ user exists.

Raises
  • SkipException -- If the ‘nova_migration’ user is not found or the output is empty.

  • ParseException -- For any other output which is not a single number, such as multi-line output. Outputs of this kind are not yet expected from the id command.

class insights.parsers.nova_user_ids.NovaUID(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse output of id -u nova and get the uid (int).

Typical output of the id -u nova command is:

162

However the id number may vary.

Examples

>>> nova_uid.data
162
data

int if the ‘nova’ user exists.

Raises
  • SkipException -- If the ‘nova’ user is not found or the output is empty.

  • ParseException -- For any other output which is not a single number, such as multi-line output. Outputs of this kind are not yet expected from the id command.
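
The documented behaviour can be sketched in a few lines. A standalone illustration (not the parser's actual code); the exception classes and the "no such user" check are assumptions based on typical `id` error output:

```python
class SkipException(Exception):
    """Raised when the user is missing or the output is empty."""

class ParseException(Exception):
    """Raised for output that is not a single numeric line."""

def parse_uid(content):
    """Sketch: return the uid from `id -u <user>` output as an int."""
    lines = [line.strip() for line in content if line.strip()]
    if not lines or "no such user" in lines[0]:
        raise SkipException("user not found or empty output")
    if len(lines) > 1 or not lines[0].isdigit():
        raise ParseException("unexpected output: %r" % lines)
    return int(lines[0])
```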

parse_content(content)[source]

This method must be implemented by classes based on this class.

NscdConf - file /etc/nscd.conf

This module parses the contents of the file /etc/nscd.conf.

Each line of the nscd.conf file specifies either an attribute and a value, or an attribute, service, and a value. Fields are separated either by SPACE or TAB characters. A ‘#’ (number sign) indicates the beginning of a comment; following characters, up to the end of the line, are not interpreted by nscd.

The function service_attributes provides information on the lines that contain a service column such as passwd and group in the examples below. The function attribute can be used to obtain values for attributes not associated with a service such as server-user and debug-level in the examples.

Sample content of the nscd.conf file looks like:

#       logfile                 /var/log/nscd.log
#       threads                 4
#       max-threads             32
        server-user             nscd
#       stat-user               somebody
        debug-level             0
#       reload-count            5
        paranoia                no
#       restart-interval        3600

        enable-cache            passwd          no
        positive-time-to-live   passwd          600
        negative-time-to-live   passwd          20
        suggested-size          passwd          211
        check-files             passwd          yes
        persistent              passwd          yes
        shared                  passwd          yes
        max-db-size             passwd          33554432
        auto-propagate          passwd          yes

        enable-cache            group           no
        positive-time-to-live   group           3600
        negative-time-to-live   group           60
        suggested-size          group           211
        check-files             group           yes
        persistent              group           yes
        shared                  group           yes
        max-db-size             group           33554432
        auto-propagate          group           yes
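
The two-column and three-column lines above can be distinguished simply by field count after stripping comments. A minimal standalone sketch of that logic (not the parser's actual implementation):

```python
from collections import namedtuple

NscdConfLine = namedtuple("NscdConfLine", ["attribute", "service", "value"])

def parse_nscd_conf(text):
    """Sketch: two-field lines are attribute/value, three-field lines
    are attribute/service/value; '#' starts a comment."""
    parsed = []
    for raw in text.splitlines():
        fields = raw.split("#", 1)[0].split()
        if len(fields) == 2:
            parsed.append(NscdConfLine(fields[0], None, fields[1]))
        elif len(fields) == 3:
            parsed.append(NscdConfLine(fields[0], fields[1], fields[2]))
    return parsed
```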

Examples

>>> conf = shared[NscdConf]
>>> len([line for line in conf])
21
>>> [line for line in conf][0]
NscdConfLine(attribute='server-user', service=None, value='nscd')
>>> conf.attribute("server-user")
'nscd'
>>> conf.filter("server-user")
[NscdConfLine(attribute='server-user', service=None, value='nscd')]
>>> conf.filter("server-user")[0].attribute
'server-user'
>>> conf.filter("server-user")[0].value
'nscd'
>>> conf.service_attributes("passwd")
[NscdConfLine(attribute='enable-cache', service='passwd', value='no'),
 NscdConfLine(attribute='positive-time-to-live', service='passwd', value='600'),
 NscdConfLine(attribute='negative-time-to-live', service='passwd', value='20'),
 NscdConfLine(attribute='suggested-size', service='passwd', value='211'),
 NscdConfLine(attribute='check-files', service='passwd', value='yes'),
 NscdConfLine(attribute='persistent', service='passwd', value='yes'),
 NscdConfLine(attribute='shared', service='passwd', value='yes'),
 NscdConfLine(attribute='max-db-size', service='passwd', value='33554432'),
 NscdConfLine(attribute='auto-propagate', service='passwd', value='yes')]
>>> conf.filter("enable-cache")
[NscdConfLine(attribute='enable-cache', service='passwd', value='no'),
 NscdConfLine(attribute='enable-cache', service='group', value='no')]
>>> conf.filter("enable-cache", service="passwd")
[NscdConfLine(attribute='enable-cache', service='passwd', value='no')]
>>> conf.filter("enable-cache", service="passwd")[0].attribute
'enable-cache'
>>> conf.filter("enable-cache", service="passwd")[0].service
'passwd'
>>> conf.filter("enable-cache", service="passwd")[0].value
'no'
class insights.parsers.nscd_conf.NscdConf(context)[source]

Bases: insights.core.Parser

Class for parsing contents of the /etc/nscd.conf file.

attribute(attribute)[source]

str: Returns the value of attribute with name attribute. Lines that include a service are not returned by this function.

filter(attribute, service=None)[source]

list: Returns list of conf lines containing attribute and optional service if present.

parse_content(content)[source]

This method must be implemented by classes based on this class.

service_attributes(service_name)[source]

list: Returns a list of conf lines matching service_name.

class insights.parsers.nscd_conf.NscdConfLine(attribute, service, value)

Bases: tuple

namedtuple: Represents one line of information from the conf file.

property attribute
property service
property value

NSSwitchConf - file /etc/nsswitch.conf

class insights.parsers.nsswitch_conf.NSSwitchConf(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Read the contents of the /etc/nsswitch.conf file.

Each non-commented line is split into the service and its sources. The sources (e.g. ‘files sss’) are stored as is, as a string.

nsswitch.conf is case insensitive. This means that both the service and its sources are converted to lower case and searches should be done using lower case text.

data

The service dictionary

Type

dict

errors

Non-blank lines which don’t contain a ‘:’

Type

list

sources

An unordered set of the sources seen in this file

Type

set

Sample content:

# Example:
#passwd:    db files nisplus nis
#shadow:    db files nisplus nis
#group:     db files nisplus nis

passwd:     files sss
shadow:     files sss
group:      files sss
#initgroups: files

#hosts:     db files nisplus nis dns
hosts:      files dns myhostname
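
The described lower-casing and error collection can be sketched in plain Python (a standalone illustration, not the parser's actual code):

```python
def parse_nsswitch(text):
    """Sketch: map each service to its sources string, lower-cased;
    collect non-blank lines without a ':' as errors."""
    data, errors = {}, []
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()
        if not line:
            continue
        if ":" not in line:
            errors.append(line)
            continue
        service, sources = line.split(":", 1)
        data[service.strip().lower()] = sources.strip().lower()
    return data, errors
```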

Examples

>>> nss = shared[NSSwitchConf]
>>> 'passwd' in nss
True
>>> 'initgroups' in nss
False
>>> nss['passwd']
'files sss'
>>> 'files' in nss['hosts']
True
>>> nss.errors
[]
>>> nss.sources
set(['files', 'dns', 'sss', 'myhostname'])
parse_content(content)[source]

This method must be implemented by classes based on this class.

NTP sources - remote clock info from ntpq and chronyc

The parsers here provide information about the time sources used by ntpd and chronyd. These are gathered from the output of the ntpq -pn and chronyc sources commands respectively.

There is also a parser for parsing the output of ntpq -c 'rv 0 leap' command to give leap second status.

Parsers in this module are:

ChronycSources - command /usr/bin/chronyc sources

NtpqLeap - command /usr/sbin/ntpq -c 'rv 0 leap'

NtpqPn - command /usr/sbin/ntpq -pn

class insights.parsers.ntp_sources.ChronycSources(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Chronyc Sources parser

Parses the list of NTP time sources in use by chronyd. So far, only the source IP address, the mode, and the state flags are retrieved.

Sample input:

210 Number of sources = 6
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- 10.20.30.40                   2   9   377    95  -1345us[-1345us] +/-   87ms
^- 10.56.72.8                    2  10   377   949  -3449us[-3483us] +/-  120ms
^* 10.64.108.95                  2  10   377   371    -91us[ -128us] +/-   30ms
^- 10.8.205.17                   2   8   377    27  +7161us[+7161us] +/-   52ms
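
Data rows can be recognized by their leading two-character mode/state pair. A minimal standalone sketch of extracting source, mode, and state (not the parser's actual implementation):

```python
def parse_chronyc_sources(text):
    """Sketch: data rows begin with a two-character mode/state pair,
    e.g. '^-' (mode '^' = server, '=' = peer, '#' = local clock)."""
    sources = []
    for line in text.splitlines():
        fields = line.split()
        if fields and len(fields[0]) == 2 and fields[0][0] in "^=#":
            sources.append({"source": fields[1],
                            "mode": fields[0][0],
                            "state": fields[0][1]})
    return sources
```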

Examples

>>> sources = shared[ChronycSources].data
>>> len(sources)
4
>>> sources[0]['source']
'10.20.30.40'
>>> sources[0]['mode']
'^'
>>> sources[0]['state']
'-'
parse_content(content)[source]

Get source, mode and state for chrony

class insights.parsers.ntp_sources.NtpqLeap(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Converts the output of ntpq -c 'rv 0 leap' into a dictionary in the data property, and sets the leap property to the value of the ‘leap’ key if found.

Sample input:

leap=00
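
The `rv` output is a comma-separated list of key=value pairs, so turning it into a dictionary is straightforward. A standalone sketch (not the parser's actual code):

```python
def parse_rv(lines):
    """Sketch: collect comma-separated key=value pairs into a dict."""
    data = {}
    for line in lines:
        for pair in line.split(","):
            if "=" in pair:
                key, value = pair.split("=", 1)
                data[key.strip()] = value.strip()
    return data
```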

Examples

>>> shared[NtpqLeap].leap
'00'
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.ntp_sources.NtpqPn(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Get source and flag for each NTP time source from the output of /usr/sbin/ntpq -pn.

Currently, this only captures the source IP address and the ‘flag’ character in the first column. It will therefore need to be extended should you wish to determine the stratum, polling rate, or other properties of the source.

Sample input:

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+10.20.30.40     192.231.203.132  3 u  638 1024  377    0.242    2.461   1.886
*2001:388:608c:8 .GPS.            1 u  371 1024  377   29.323    1.939   1.312
-2001:44b8:1::1  216.218.254.202  2 u  396 1024  377   37.869   -3.340   6.458
+150.203.1.10    202.6.131.118    2 u  509 1024  377   20.135    0.800   3.260
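
After the two header lines, the tally code is the first character of the remote column when one is present. A minimal standalone sketch (not the parser's actual implementation):

```python
def parse_ntpq_pn(text):
    """Sketch: extract the tally code ('*', '+', '-', etc.) and the
    source address from each data row of `ntpq -pn` output."""
    sources = []
    for line in text.splitlines()[2:]:    # skip the two header lines
        if not line.strip():
            continue
        remote = line.split()[0]
        if remote[0] in "*+-#x.o":
            sources.append({"flag": remote[0], "source": remote[1:]})
        else:
            sources.append({"flag": " ", "source": remote})
    return sources
```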

Examples

>>> sources = shared[NtpqPn].data
>>> len(sources)
4
>>> sources[0]
{'flag': '+', 'source': '10.20.30.40'}
parse_content(content)[source]

This method must be implemented by classes based on this class.

NUMACpus - file /sys/devices/system/node/node[0-9]*/cpulist

This parser reads the content of the cpulist file for individual NUMA nodes and returns the data in dict format.

Sample Content from /sys/devices/system/node/node0/cpulist:

0-6,14-20
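
Counting CPUs from a cpulist range string is a small exercise in range arithmetic; each `a-b` entry contributes `b - a + 1` CPUs. A standalone sketch (not the parser's actual code):

```python
def total_cpus(cpulist):
    """Count the CPUs in a cpulist range string such as '0-6,14-20'."""
    total = 0
    for part in cpulist.strip().split(","):
        if "-" in part:
            low, high = (int(n) for n in part.split("-"))
            total += high - low + 1       # inclusive range
        else:
            total += 1                    # a single CPU number
    return total
```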

Examples

>>> type(numa_cpus_obj)
<class 'insights.parsers.numa_cpus.NUMACpus'>
>>> numa_cpus_obj.numa_node_name
'node0'
>>> numa_cpus_obj.numa_node_details() == {'numa_node_range': ['0-6', '14-20'], 'total_cpus': 14, 'numa_node_name': 'node0'}
True
>>> numa_cpus_obj.numa_node_cpus
['0-6', '14-20']
>>> numa_cpus_obj.total_numa_node_cpus
14
class insights.parsers.numa_cpus.NUMACpus(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

Parse /sys/devices/system/node/node[0-9]*/cpulist file, return a dict which contains total number of CPUs per numa node.

property numa_node_cpus

It will return list of CPUs present under NUMA node when set, else None.

Type

(list)

numa_node_details()[source]

(dict): it will return the number of CPUs per NUMA, NUMA node name, CPU range, when set, else None.

property numa_node_name

It will return the CPU node name when set, else None.

Type

(str)

parse_content(content)[source]

This method must be implemented by classes based on this class.

property total_numa_node_cpus

It will return total number of CPUs per NUMA node

Type

(int)

NumericUserGroupName - command /usr/bin/grep -c '^[[:digit:]]' /etc/passwd /etc/group

This parser reads the output of /usr/bin/grep -c '^[[:digit:]]' /etc/passwd /etc/group and detects whether there are any user or group names that begin with a digit.

The grep command returns the number of matches for each file. It matches for user and group names that begin with a number.

The answers are available through the attributes: the boolean has_numeric_user_or_group and the numeric nr_numeric_user and nr_numeric_group.

Names starting with a digit can be handled incorrectly by some software. Purely numeric user names can be handled incorrectly by some software and require special care when used with utilities that accept both user/group IDs and names (such as chown and chmod). If such names exist on the system, it is advisable to review whether the software in use (namely third-party software and shell scripts) handles the names correctly.

https://access.redhat.com/solutions/3103631

Sample input:

/etc/passwd:3
/etc/group:0
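
Each line of the `grep -c` output is a path, a colon, and a match count. A minimal standalone sketch of turning that into the documented attributes (not the parser's actual code):

```python
def parse_grep_counts(lines):
    """Sketch: each line is '<path>:<match count>'; rpartition keeps
    any colons inside the path intact."""
    counts = {}
    for line in lines:
        path, _, count = line.rpartition(":")
        counts[path] = int(count)
    return counts
```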

Examples

>>> shared[NumericUserGroupName].has_numeric_user_or_group
True
>>> shared[NumericUserGroupName].nr_numeric_user
3
>>> shared[NumericUserGroupName].nr_numeric_group
0
class insights.parsers.numeric_user_group_name.NumericUserGroupName(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Reports whether there is a user or group with a name that starts with a digit.

parse_content(content)[source]

This method must be implemented by classes based on this class.

NVMeCoreIOTimeout - The timeout for I/O operations submitted to NVMe devices

This parser reads the content of /sys/module/nvme_core/parameters/io_timeout.

class insights.parsers.nvme_core_io_timeout.NVMeCoreIOTimeout(context)[source]

Bases: insights.core.Parser

Class for parsing the content of the /sys/module/nvme_core/parameters/io_timeout.

A typical sample of the content of this file looks like:

4294967295
val

It is used to show the current value of the timeout for I/O operations submitted to NVMe devices.

Type

int

Examples

>>> nciotmo.val
4294967295
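
Reading such a sysfs parameter amounts to validating that the file holds a single unsigned integer. A standalone sketch (not the parser's actual code; the ValueError is an assumption for illustration):

```python
def parse_io_timeout(content):
    """Sketch: the sysfs file holds one unsigned integer."""
    lines = [line.strip() for line in content if line.strip()]
    if len(lines) != 1 or not lines[0].isdigit():
        raise ValueError("unexpected content: %r" % lines)
    return int(lines[0])
```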
parse_content(content)[source]

This method must be implemented by classes based on this class.

odbc configuration files

Parsers for extracting data from the /etc/odbc.ini and /etc/odbcinst.ini files. Parsers contained in this module are:

ODBCIni - file /etc/odbc.ini

ODBCinstIni - file /etc/odbcinst.ini

class insights.parsers.odbc.ODBCIni(context)[source]

Bases: insights.core.IniConfigFile

The /etc/odbc.ini file is in a standard ‘.ini’ format, and this parser uses the IniConfigFile base class to read this.

Sample file content:

[myodbc5w]
Driver       = /usr/lib64/libmyodbc5w.so
Description  = DSN to MySQL server
SERVER       = localhost
NO_SSPS     = 1

[myodbc]
Driver=MySQL
SERVER=localhost
#NO_SSPS=1

Example

>>> config.sections()
['myodbc5w', 'myodbc']
>>> config.has_option('myodbc5w', 'Driver')
True
>>> config.get('myodbc5w', 'Driver')
'/usr/lib64/libmyodbc5w.so'
>>> config.getint('myodbc5w', 'NO_SSPS')
1
class insights.parsers.odbc.ODBCinstIni(context)[source]

Bases: insights.core.IniConfigFile

The /etc/odbcinst.ini file is in a standard ‘.ini’ format, and this parser uses the IniConfigFile base class to read this.

Sample file content:

# Driver from the postgresql-odbc package
# Setup from the unixODBC package
[PostgreSQL]
Description     = ODBC for PostgreSQL
Driver          = /usr/lib/psqlodbcw.so
Setup           = /usr/lib/libodbcpsqlS.so
Driver64        = /usr/lib64/psqlodbcw.so
Setup64         = /usr/lib64/libodbcpsqlS.so
FileUsage       = 1
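
Because the file is plain INI syntax, the standard library's configparser reads the same fields the parser exposes. A standalone sketch using the sample above:

```python
import configparser

# /etc/odbcinst.ini is plain INI, so configparser can read it directly.
cp = configparser.ConfigParser()
cp.read_string("""
[PostgreSQL]
Description = ODBC for PostgreSQL
Driver = /usr/lib/psqlodbcw.so
FileUsage = 1
""")
driver = cp.get("PostgreSQL", "Driver")
file_usage = cp.getint("PostgreSQL", "FileUsage")
```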

Example

>>> config.sections()
['PostgreSQL']
>>> config.has_option('PostgreSQL', 'Driver')
True
>>> config.get('PostgreSQL', 'Driver')
'/usr/lib/psqlodbcw.so'
>>> config.getint('PostgreSQL', 'FileUsage')
1

openshift configuration files

/etc/origin/node/node-config.yaml and /etc/origin/master/master-config.yaml are the configuration files of the OpenShift Node and Master hosts respectively. Both are in YAML format.

OseNodeConfig - file /etc/origin/node/node-config.yaml

Reads the OpenShift node configuration

OseMasterConfig - file /etc/origin/master/master-config.yaml

Reads the Openshift master configuration

Examples

>>> result = shared[OseMasterConfig]
>>> result.data['assetConfig']['masterPublicURL']
'https://master.ose.com:8443'
>>> result.data['corsAllowedOrigins'][1]
'localhost'
class insights.parsers.openshift_configuration.OseMasterConfig(context)[source]

Bases: insights.core.YAMLParser

Class to parse /etc/origin/master/master-config.yaml

class insights.parsers.openshift_configuration.OseNodeConfig(context)[source]

Bases: insights.core.YAMLParser

Class to parse /etc/origin/node/node-config.yaml

OpenShift Get commands

The oc get command is used in OpenShift to list resource information - e.g. pod info, dc info, service info, etc. This shared parser is used to parse the output of oc get XX --all-namespaces -o yaml commands. The parameter --all-namespaces means that information is collected from all projects, and -o yaml means the output is in YAML format.

Parsers included in this module are:

OcGetBc - command oc get bc -o yaml --all-namespaces

OcGetBuild - command oc get build -o yaml --all-namespaces

OcGetDc - command oc get dc -o yaml --all-namespaces

OcGetEgressNetworkPolicy - command oc get egressnetworkpolicy -o yaml --all-namespaces

OcGetEndPoints - command oc get endpoints -o yaml --all-namespaces

OcGetEvent - command oc get event -o yaml --all-namespaces

OcGetNode - command oc get nodes -o yaml

OcGetPod - command oc get pod -o yaml --all-namespaces

OcGetProject - command oc get project -o yaml --all-namespaces

OcGetPv - command oc get pv -o yaml --all-namespaces

OcGetPvc - command oc get pvc -o yaml --all-namespaces

OcGetRc - command oc get rc -o yaml --all-namespaces

OcGetRole - command oc get role -o yaml --all-namespaces

OcGetRolebinding - command oc get rolebinding -o yaml --all-namespaces

OcGetRoute - command oc get route -o yaml --all-namespaces

OcGetService - command oc get service -o yaml --all-namespaces

OcGetConfigmap - command oc get configmap -o yaml --all-namespaces

Examples

>>> type(setting_dic)
<class 'insights.parsers.openshift_get.OcGetService'>
>>> setting_dic.data['items'][0]['kind']
'Service'
>>> setting_dic.data['items'][0]['spec']['clusterIP']
'172.30.0.1'
>>> setting_dic.data['items'][0]['metadata']['name']
'kubernetes'
>>> setting_dic.data['items'][1]['metadata']['name']
'router-1'
>>> "zjj" in setting_dic.data['items'][1]['metadata']['namespace']
True
class insights.parsers.openshift_get.OcGetBc(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get bc -o yaml --all-namespaces

property build_configs

Returns a dictionary of openshift build configs information.

Type

dict

class insights.parsers.openshift_get.OcGetBuild(context)[source]

Bases: insights.core.YAMLParser

Class to parse oc get build -o yaml --all-namespaces

property started_builds

Returns a dictionary of openshift started build information.

Type

dict

class insights.parsers.openshift_get.OcGetConfigmap(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get configmap -o yaml --all-namespaces

property configmaps

Returns a dictionary of openshift configmaps information.

Type

dict

class insights.parsers.openshift_get.OcGetDc(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get dc -o yaml --all-namespaces

property deployment_configs

Returns a dictionary of openshift deploymentconfigs information.

Type

dict

class insights.parsers.openshift_get.OcGetEgressNetworkPolicy(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get egressnetworkpolicy -o yaml --all-namespaces

property egress_network_policies

Returns a dictionary of openshift egress network policy information.

Type

dict

class insights.parsers.openshift_get.OcGetEndPoints(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get endpoints -o yaml --all-namespaces

property endpoints

Returns a dictionary of openshift endpoints information.

Type

dict

class insights.parsers.openshift_get.OcGetEvent(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get event -o yaml --all-namespaces

property events

Returns a dictionary of openshift events information.

Type

dict

class insights.parsers.openshift_get.OcGetNode(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get nodes -o yaml

property nodes

Returns a dictionary of openshift nodes information.

Type

dict

class insights.parsers.openshift_get.OcGetPod(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get pod -o yaml --all-namespaces

property pods

Returns a dictionary of openshift pods information.

Type

dict

class insights.parsers.openshift_get.OcGetProject(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get project -o yaml --all-namespaces

property projects

Returns a dictionary of openshift project information.

Type

dict

class insights.parsers.openshift_get.OcGetPv(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get pv -o yaml --all-namespaces

property persistent_volumes

Returns a dictionary of openshift persistent volume information.

Type

dict

class insights.parsers.openshift_get.OcGetPvc(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get pvc -o yaml --all-namespaces

property persistent_volume_claims

Returns a dictionary of openshift persistent volume claim information.

Type

dict

class insights.parsers.openshift_get.OcGetRc(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get rc -o yaml --all-namespaces

property replication_controllers

Returns a dictionary of openshift replication controllers information.

Type

dict

class insights.parsers.openshift_get.OcGetRole(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get role -o yaml --all-namespaces

property roles

Returns a dictionary of openshift role information.

Type

dict

class insights.parsers.openshift_get.OcGetRolebinding(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get rolebinding -o yaml --all-namespaces

property rolebindings

Returns a dictionary of openshift rolebind information.

Type

dict

class insights.parsers.openshift_get.OcGetRoute(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get route -o yaml --all-namespaces

property routes

Returns a dictionary of openshift route information.

Type

dict

class insights.parsers.openshift_get.OcGetService(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.YAMLParser

Class to parse oc get service -o yaml --all-namespaces

property services

Returns a dictionary of openshift services information.

Type

dict

OpenShift Get commands with configuration file

This command set is similar to the oc get commands and is used to display OpenShift resources. It uses the master configuration file rather than the default configuration when communicating with the API, which ensures these commands are executed only on the master node of an OpenShift cluster. This set also excludes the commands that produce large outputs.

Parsers included in this module are:

OcGetClusterRoleWithConfig - command oc get clusterrole --config /etc/origin/master/admin.kubeconfig

OcGetClusterRoleBindingWithConfig - command oc get clusterrolebinding --config /etc/origin/master/admin.kubeconfig

class insights.parsers.openshift_get_with_config.OcGetClusterRoleBindingWithConfig(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class to parse oc get clusterrolebinding --config /etc/origin/master/admin.kubeconfig

data

List of dicts, each dict containing one row of the table

Type

list

rolebinding

It is a dictionary in which the key is rolebinding name and the value is the role.

Type

dict

A typical sample of the content of this file looks like:

NAME                                                                  ROLE                                                                   USERS                            GROUPS                                         SERVICE ACCOUNTS                                                                   SUBJECTS
admin                                                                 /admin                                                                                                                                                 openshift-infra/template-instance-controller
admin-0                                                               /admin                                                                                                                                                 kube-service-catalog/default
admin-1                                                               /admin                                                                                                                                                 openshift-ansible-service-broker/asb
asb-access                                                            /asb-access                                                                                                                                            openshift-ansible-service-broker/asb-client
asb-auth                                                              /asb-auth                                                                                                                                              openshift-ansible-service-broker/asb
auth-delegator-openshift-template-service-broker                      /system:auth-delegator                                                                                                                                 openshift-template-service-broker/apiserver
basic-users                                                           /basic-user                                                                                             system:authenticated
cluster-admin                                                         /cluster-admin                                                                                          system:masters
cluster-admin-0                                                       /cluster-admin                                                                                                                                         insights-scan/insights-scan
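
The documented rolebinding dictionary maps the NAME column to the ROLE column. A minimal standalone sketch (not the parser's actual implementation); it relies on NAME and ROLE containing no embedded spaces:

```python
def parse_rolebindings(text):
    """Sketch: map the NAME column to the ROLE column, ignoring the
    remaining columns."""
    rolebinding = {}
    for line in text.strip().splitlines()[1:]:   # skip the header row
        fields = line.split()
        if len(fields) >= 2:
            rolebinding[fields[0]] = fields[1]
    return rolebinding
```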

Examples

>>> type(oc_get_clusterrolebinding_with_config)
<class 'insights.parsers.openshift_get_with_config.OcGetClusterRoleBindingWithConfig'>
>>> oc_get_clusterrolebinding_with_config.rolebinding["admin"]
'/admin'
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.openshift_get_with_config.OcGetClusterRoleWithConfig(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class to parse oc get clusterrole --config /etc/origin/master/admin.kubeconfig

A typical sample of the content of this file looks like:

NAME
admin
asb-access
asb-auth
basic-user
cluster-admin
cluster-debugger
cluster-reader
cluster-status
edit
hawkular-metrics-admin
management-infra-admin
namespace-viewer
registry-admin

Examples

>>> type(oc_get_cluster_role_with_config)
<class 'insights.parsers.openshift_get_with_config.OcGetClusterRoleWithConfig'>
>>> oc_get_cluster_role_with_config[0]
'admin'
parse_content(content)[source]

This method must be implemented by classes based on this class.

OpenShiftHosts - file /root/.config/openshift/hosts

The OpenShiftHosts file, /root/.config/openshift/hosts, records node information. While installing an OpenShift cluster, the installation process reads this file and installs the relevant RPMs on every node according to the configuration.

class insights.parsers.openshift_hosts.OpenShiftHosts(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Class OpenShiftHosts parses the content of the /root/.config/openshift/hosts file. A small sample of the content of this file looks like:

[OSEv3:children]
nodes
nfs
masters
etcd
[OSEv3:vars]
openshift_master_cluster_public_hostname=None
ansible_ssh_user=root
openshift_master_cluster_hostname=None
openshift_hostname_check=false
deployment_type=openshift-enterprise
[nodes]
master.ose35.com  openshift_public_ip=192.66.208.202 openshift_ip=192.66.208.202 openshift_public_hostname=master.ose35.com openshift_hostname=master.ose35.com connect_to=master.ose35.com openshift_schedulable=False ansible_connection=local
node1.ose35.com  openshift_public_ip=192.66.208.169 openshift_ip=192.66.208.169 openshift_public_hostname=node1.ose35.com openshift_hostname=node1.ose35.com connect_to=node1.ose35.com openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True
node2.ose35.com  openshift_public_ip=192.66.208.170 openshift_ip=192.66.208.170 openshift_public_hostname=node2.ose35.com openshift_hostname=node2.ose35.com connect_to=node2.ose35.com openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True
[nfs]
master.ose35.com  openshift_public_ip=192.66.208.202 openshift_ip=192.66.208.202 openshift_public_hostname=master.ose35.com openshift_hostname=master.ose35.com connect_to=master.ose35.com ansible_connection=local
[masters]
master.ose35.com  openshift_public_ip=192.66.208.202 openshift_ip=192.66.208.202 openshift_public_hostname=master.ose35.com openshift_hostname=master.ose35.com connect_to=master.ose35.com ansible_connection=local
[etcd]
master.ose35.com  openshift_public_ip=192.66.208.202 openshift_ip=192.66.208.202 openshift_public_hostname=master.ose35.com openshift_hostname=master.ose35.com connect_to=master.ose35.com ansible_connection=local
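As an illustration of how one host line in this inventory could be split apart, the sketch below uses shlex so that quoted values containing spaces (such as openshift_node_labels) survive as one token. parse_host_line is a hypothetical helper for illustration, not part of the insights parser:

```python
import shlex

# Hypothetical helper (not the insights parser's code): split one host
# line into the hostname and a dict of its key=value variables.
def parse_host_line(line):
    tokens = shlex.split(line)
    host, pairs = tokens[0], tokens[1:]
    # partition("=")[::2] keeps the key and the value, dropping the "="
    return host, dict(p.partition("=")[::2] for p in pairs)

host, hostvars = parse_host_line(
    "node1.ose35.com openshift_ip=192.66.208.169 "
    "openshift_node_labels=\"{'region': 'infra'}\""
)
```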

Examples

>>> type(host_info)
<class 'insights.parsers.openshift_hosts.OpenShiftHosts'>
>>> host_info["OSEv3:children"]
["nodes","nfs","masters","etcd"]
>>> host_info["nodes"]["master.ose35.com"]["openshift_public_ip"]
"192.66.208.202"
>>> host_info["nodes"]["master.ose35.com"]["openshift_node_labels"]
"{'region': 'infra'}"
>>> host_info.has_node("node1.ose35.com")
True
>>> host_info.has_var("openshift_use_crio")
False
has_node(node)[source]

Indicate whether the named node is present in the configuration. Return True if the given node name is present, and False if not present.

has_node_type(node_type)[source]

Indicate whether the named node type is present in the configuration. Return True if the given node type is present, and False if not present.

has_var(var)[source]

Indicate whether the named var is present in the configuration. Return True if the given var is present, and False if not present.

parse_content(content)[source]

This method must be implemented by classes based on this class.

OpenVSwitchLogs - files ovsdb-server.log and ovs_vswitchd.log

A standard log file reader for logs written by OpenVSwitch.

The logs have a standard format:

2016-03-08T02:10:01.155Z|01417|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2016-03-08T02:20:05.425Z|01418|connmgr|INFO|br0<->unix: 1 flow_mods in the last 0 s (1 adds)
2016-03-08T02:20:10.160Z|01419|connmgr|INFO|br0<->unix: 1 flow_mods in the last 0 s (1 deletes)
2016-03-08T11:30:52.206Z|00013|fatal_signal|WARN|terminating with signal 15 (Terminated)

The get method breaks up log lines on the bar character (‘|’) into the following fields:

  • timestamp - the UTC time stamp

  • sequence - the sequence number of this message

  • module - the module in OpenVSwitch that emitted this error

  • level - the level of error (INFO, WARN, ERROR)

  • message - the rest of the message.

Each line of the resultant list is a dictionary with those fields.
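The field splitting described above can be sketched as follows; split_ovs_log_line is a hypothetical helper for illustration, not the parser's actual implementation:

```python
# Hypothetical sketch: split an OpenVSwitch log line on '|' into the
# documented fields. maxsplit=4 keeps any '|' inside the message intact.
def split_ovs_log_line(line):
    fields = ["timestamp", "sequence", "module", "level", "message"]
    entry = dict(zip(fields, line.split("|", 4)))
    entry["raw_message"] = line
    return entry

entry = split_ovs_log_line(
    "2016-03-08T11:30:52.206Z|00013|fatal_signal|WARN|"
    "terminating with signal 15 (Terminated)"
)
```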

Examples

>>> vswlog = shared[OVSDB_Server_Log]
>>> 'fatal_signal' in vswlog
True
>>> vswlog.get('fatal_signal')
[{'timestamp': '2016-03-08T11:30:52.206Z', 'sequence': '00013',
  'module': 'fatal_signal', 'level': 'WARN',
  'message': 'terminating with signal 15 (Terminated)',
  'raw_message': '2016-03-08T11:30:52.206Z|00013|fatal_signal|WARN|terminating with signal 15 (Terminated)'}]
class insights.parsers.openvswitch_logs.OVSDB_Server_Log(context)[source]

Bases: insights.parsers.openvswitch_logs.OpenVSwitchLog

Parser for the ovsdb_server.log file, based on the OpenVSwitchLog class.

class insights.parsers.openvswitch_logs.OVS_VSwitchd_Log(context)[source]

Bases: insights.parsers.openvswitch_logs.OpenVSwitchLog

Parser for the ovs-vswitchd.log file, based on the OpenVSwitchLog class.

class insights.parsers.openvswitch_logs.OpenVSwitchLog(context)[source]

Bases: insights.core.LogFileOutput

Template class for reading OpenVSwitch logs.

Note

Please refer to its super-class insights.core.LogFileOutput for more usage information.

OpenvSwitchOtherConfig - command ovs-vsctl -t 5 get Open_vSwitch . other_config

Class OpenvSwitchOtherConfig process the output of the following OpenvSwitch command:

ovs-vsctl -t 5 get Open_vSwitch . other_config

Sample input:

{dpdk-init="true", dpdk-lcore-mask="30000003000", dpdk-socket-mem="4096,4096", pmd-cpu-mask="30000003000"}

Examples

>>> type(ovs_other_conf)
<class 'insights.parsers.openvswitch_other_config.OpenvSwitchOtherConfig'>
>>> ovs_other_conf.get("dpdk-init")
'true'
>>> ovs_other_conf["dpdk-lcore-mask"]
'30000003000'
>>> "dpdk-socket-mem" in ovs_other_conf
True
class insights.parsers.openvswitch_other_config.OpenvSwitchOtherConfig(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

Parses output of the ovs-vsctl -t 5 get Open_vSwitch . other_config command

parse_content(content)[source]

This method must be implemented by classes based on this class.
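For illustration, the {key="value", ...} shape shown in the sample input can be decoded with a short regular expression. This is a sketch under that assumption, not the parser's actual code:

```python
import re

# Sketch: decode the {key="value", key="value"} sample shape into a
# plain dict. Illustrative only.
def parse_other_config(line):
    return dict(re.findall(r'([\w-]+)="([^"]*)"', line))

conf = parse_other_config(
    '{dpdk-init="true", dpdk-lcore-mask="30000003000", '
    'dpdk-socket-mem="4096,4096", pmd-cpu-mask="30000003000"}'
)
```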

OsRelease - file /etc/os-release

This module provides plugins access to file /etc/os-release.

Typical content of file /etc/os-release is:

NAME="Red Hat Enterprise Linux Server"
VERSION="7.2 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Employee SKU"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.2:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.2
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.2"

Note

The /etc/os-release file does not exist in RHEL 6 and earlier versions.

This module parses the file content and stores it as a dict in the data attribute.
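A minimal sketch of that parsing, assuming the KEY="value" line format shown above (parse_os_release is a hypothetical helper, not the parser's real implementation):

```python
# Sketch: parse os-release style KEY="value" lines into a dict,
# skipping blank lines and stripping surrounding quotes.
def parse_os_release(content):
    data = {}
    for line in content.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        data[key] = value.strip().strip('"')
    return data

release = parse_os_release(
    'NAME="Red Hat Enterprise Linux Server"\n'
    'VERSION="7.2 (Maipo)"\n'
    'VERSION_ID="7.2"\n'
)
```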

Examples

>>> os_rls_content = '''
... Red Hat Enterprise Linux Server release 7.2 (Maipo)
... '''.strip()
>>> from insights.tests import context_wrap
>>> shared = {OsRelease: OsRelease(context_wrap(os_rls_content))}
>>> rls = shared[OsRelease]
>>> data = rls.data
>>> assert data.get("VARIANT_ID") is None
>>> assert data.get("VERSION") == "7.2 (Maipo)"
class insights.parsers.os_release.OsRelease(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

Parses the content of file /etc/os-release.

parse_content(content)[source]

This method must be implemented by classes based on this class.

OSADispatcherLog - file /var/log/rhn/osa-dispatcher.log

class insights.parsers.osa_dispatcher_log.OSADispatcherLog(context)[source]

Bases: insights.core.LogFileOutput

Reads the OSA dispatcher log. Based on the LogFileOutput class.

Note

Please refer to its super-class insights.core.LogFileOutput

Works a bit like the XMLRPC log, but the IP address always seems to be 0.0.0.0 and the module is always 'osad' - it is more an indication of what produced the log.

Sample log data:

2015/12/23 04:40:58 -04:00 28307 0.0.0.0: osad/jabber_lib.__init__
2015/12/23 04:40:58 -04:00 28307 0.0.0.0: osad/jabber_lib.setup_connection('Connected to jabber server', u'example.com')
2015/12/23 04:40:58 -04:00 28307 0.0.0.0: osad/osa_dispatcher.fix_connection('Upstream notification server started on port', 1290)
2015/12/23 04:40:58 -04:00 28307 0.0.0.0: osad/jabber_lib.process_forever
2015/12/27 22:48:50 -04:00 28307 0.0.0.0: osad/jabber_lib.main('ERROR', 'Error caught:')
2015/12/27 22:48:50 -04:00 28307 0.0.0.0: osad/jabber_lib.main('ERROR', 'Traceback (most recent call last)')
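The fields shown in the parsed output can be pulled out of such a line with a regular expression along these lines (an illustrative sketch, not the parser's actual pattern):

```python
import re

# Illustrative pattern (not the parser's actual regex): timestamp,
# pid, client IP, module, function, and optional call arguments.
LINE_RE = re.compile(
    r"(?P<timestamp>\S+ \S+ \S+) (?P<pid>\d+) (?P<client_ip>\S+): "
    r"(?P<module>\w+)/(?P<function>[\w.]+)(?:\((?P<info>.*)\))?"
)

entry = LINE_RE.match(
    "2015/12/23 04:40:58 -04:00 28307 0.0.0.0: osad/jabber_lib.__init__"
).groupdict()
```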

Example

>>> osa = shared[OSADispatcherLog]
>>> osa.get('__init__')
[{'raw_message': '2015/12/23 04:40:58 -04:00 28307 0.0.0.0: osad/jabber_lib.__init__',
 'timestamp': '2015/12/23 04:40:58 -04:00',
 'datetime': datetime.datetime(2015, 12, 23, 4, 40, 58),
 'pid': '28307', 'client_ip': '0.0.0.0', 'module': 'osad',
 'function': 'jabber_lib.__init__', 'info': None}
]
>>> osa.last()
{'raw_message': "2015/12/27 22:48:50 -04:00 28307 0.0.0.0: osad/jabber_lib.main('ERROR', 'Traceback (most recent call last)')",
 'timestamp': '2015/12/27 22:48:50 -04:00',
 'datetime': datetime.datetime(2015, 12, 27, 22, 48, 50), 'pid': '28307',
 'client_ip': '0.0.0.0', 'module': 'osad', 'function': 'jabber_lib.main',
 'info': "'ERROR', 'Traceback (most recent call last)'"}
>>> from datetime import datetime
>>> len(list(osa.get_after(datetime(2015, 12, 27, 22, 48, 0))))
2
last()[source]

Finds the last complete log line in the file. It looks for a line with a client IP address and parses the line to a dictionary.

Returns

(dict) the last complete log line parsed to a dictionary.

Ovirt Engine logs

This module contains the following parsers:

ServerLog - file /var/log/ovirt-engine/server.log

UILog - file /var/log/ovirt-engine/ui.log

EngineLog - file /var/log/ovirt-engine/engine.log

BootLog - file /var/log/ovirt-engine/boot.log

ConsoleLog - file /var/log/ovirt-engine/console.log

class insights.parsers.ovirt_engine_log.BootLog(context)[source]

Bases: insights.core.Syslog

Provide access to /var/log/ovirt-engine/boot.log using the Syslog parser class.

Sample input:

03:46:19,238 INFO  [org.jboss.as.server] WFLYSRV0039: Creating http management service using socket-binding (management)
03:46:19,242 INFO  [org.xnio] XNIO version 3.5.5.Final-redhat-1
03:46:19,250 INFO  [org.xnio.nio] XNIO NIO Implementation Version 3.5.5.Final-redhat-1

Examples

>>> xnio_lines = boot_log.get('xnio.nio')
>>> len(xnio_lines)
1
>>> xnio_lines[0].get('procname')
'org.xnio.nio'
>>> xnio_lines[0].get('level')
'INFO'
>>> xnio_lines[0].get('message')
'XNIO NIO Implementation Version 3.5.5.Final-redhat-1'
class insights.parsers.ovirt_engine_log.ConsoleLog(context)[source]

Bases: insights.core.LogFileOutput

Provide access to /var/log/ovirt-engine/console.log using the LogFileOutput parser class.

class insights.parsers.ovirt_engine_log.EngineLog(context)[source]

Bases: insights.core.LogFileOutput

Provide access to /var/log/ovirt-engine/engine.log using the LogFileOutput parser class.

Sample input:

2018-08-06 04:06:33,229+05 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 8 threads waiting for tasks and 0 tasks in queue.
2018-08-06 04:06:33,229+05 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineScheduled' is using 0 threads out of 100, 100 threads waiting for tasks.
2018-08-06 04:06:33,229+05 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads waiting for tasks.
2018-08-06 04:06:33,229+05 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5, 5 threads waiting for tasks.
2018-08-06 04:16:33,231+05 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'commandCoordinator' is using 0 threads out of 10, 2 threads waiting for tasks.
2018-08-06 04:16:33,231+05 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'default' is using 0 threads out of 1, 5 threads waiting for tasks.
2018-08-06 04:16:33,231+05 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 8 threads waiting for tasks and 0 tasks in queue.
2018-08-06 04:16:33,231+05 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineScheduled' is using 0 threads out of 100, 100 threads waiting for tasks.
2018-08-06 04:16:33,231+05 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads waiting for tasks.
2018-08-06 04:16:33,231+05 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5, 5 threads waiting for tasks.
2018-08-06 04:26:33,233+05 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'commandCoordinator' is using 0 threads out of 10, 2 threads waiting for tasks.
2018-08-06 04:26:33,233+05 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'default' is using 0 threads out of 1, 5 threads waiting for tasks.
2018-08-06 04:26:33,233+05 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 8 threads waiting for tasks and 0 tasks in queue.
2018-08-22 00:16:14,357+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnplugLeaseVDSCommand] (default task-133) [e3bc976c-bc3e-4b41-807f-3a518169ad18] START, HotUnplugLeaseVDSCommand(HostName = example.com, LeaseVDSParameters:{hostId='bfa308ab-5add-4ad7-8f1c-389cb8dcf703', vmId='789489a3-be62-40e4-b13e-beb34ba5ff93'}), log id: 7a634963

Examples

>>> from datetime import datetime
>>> "Thread pool 'engine'" in engine_log
True
>>> len(list(engine_log.get_after(datetime(2018, 8, 6, 4, 16, 33, 0))))
10
>>> matched_line = "2018-08-06 04:16:33,231+05 INFO  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5, 5 threads waiting for tasks."
>>> engine_log.get('hostUpdatesChecker')[-1].get('raw_message') == matched_line
True
>>> engine_log.get('vdsbroker')[-1].get('procname') == 'org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnplugLeaseVDSCommand'
True
class insights.parsers.ovirt_engine_log.ServerLog(context)[source]

Bases: insights.parsers.ovirt_engine_log.EngineLog

Provide access to /var/log/ovirt-engine/server.log using the EngineLog parser class.

Sample input:

2018-01-17 01:46:15,834+05 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-6) WFLYSRV0207: Starting subdeployment (runtime-name: "services.war")
2018-01-17 01:46:16,834+05 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-1) WFLYSRV0207: Starting subdeployment (runtime-name: "webadmin.war")
2018-01-17 01:46:17,739+05 WARN  [org.jboss.as.dependency.unsupported] (MSC service thread 1-7) WFLYSRV0019: Deployment "deployment.engine.ear" is using an unsupported module ("org.dom4j") which may be changed or removed in future versions without notice.

Examples

>>> 'is using an unsupported module' in server_log
True
>>> from datetime import datetime
>>> len(list(server_log.get_after(datetime(2018, 1, 17, 1, 46, 16, 0))))
2
>>> matched_line = '2018-01-17 01:46:17,739+05 WARN  [org.jboss.as.dependency.unsupported] (MSC service thread 1-7) WFLYSRV0019: Deployment "deployment.engine.ear" is using an unsupported module ("org.dom4j") which may be changed or removed in future versions without notice.'
>>> server_log.get('WARN')[-1].get('raw_message') == matched_line
True
>>> sec_lines = server_log.get('org.wildfly.security')
>>> len(sec_lines)
1
>>> sec_lines[0]['level']
'INFO'
class insights.parsers.ovirt_engine_log.UILog(context)[source]

Bases: insights.parsers.ovirt_engine_log.EngineLog

Provide access to /var/log/ovirt-engine/ui.log using the EngineLog parser class.

Sample input:

2018-01-24 05:31:26,243+05 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-134) [] Permutation name: C068E8B2E40A504D3054A1BDCF2A72BB
2018-01-24 05:32:26,243+05 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-134) [] Uncaught exception: com.google.gwt.core.client.JavaScriptException: (TypeError)

Examples

>>> 'Permutation name' in ui_log
True
>>> from datetime import datetime
>>> len(list(ui_log.get_after(datetime(2018, 1, 24, 5, 31, 26, 0))))
2
>>> exception_lines = ui_log.get('Uncaught exception')
>>> len(exception_lines)
1
>>> exception_lines[0].get('procname')
'org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService'
>>> exception_lines[0].get('level')
'ERROR'

OVSappctlFdbShowBridge - command /usr/bin/ovs-appctl fdb/show [bridge-name]

This module provides class OVSappctlFdbShowBridgeCount to parse the output of command /usr/bin/ovs-appctl fdb/show [bridge-name].

Sample command output:

port VLAN  MAC Age
6       1 MAC1 118
3       0 MAC2 24
class insights.parsers.ovs_appctl_fdb_show_bridge.OVSappctlFdbShowBridge(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LegacyItemAccess

This class provides processing for the output of the command /usr/bin/ovs-appctl fdb/show [bridge-name].

Sample content collected by insights-client:

port  VLAN  MAC                Age
   6    29  aa:bb:cc:dd:ee:ff  270
   3    27  gg:hh:ii:jj:kk:ll  266
   1   100  mm:nn:oo:pp:qq:rr  263

Sample parsed output:

{
    'br-int': [
                  {'port': '6', 'VLAN': '29', 'MAC': 'aa:bb:cc:dd:ee:ff', 'Age': '270'},
                  {'port': '3', 'VLAN': '27', 'MAC': 'gg:hh:ii:jj:kk:ll', 'Age': '266'},
                  {'port': '1', 'VLAN': '100', 'MAC': 'mm:nn:oo:pp:qq:rr', 'Age': '263'}
              ]
}
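The table-to-dict step shown above can be sketched as follows, assuming the whitespace-separated header and rows from the sample (parse_fdb_show is a hypothetical helper, not the parser's real code):

```python
# Sketch: zip the header row against each data row to build the list
# of per-entry dicts shown in the sample parsed output.
def parse_fdb_show(lines):
    header = lines[0].split()
    return [dict(zip(header, row.split())) for row in lines[1:] if row.strip()]

rows = parse_fdb_show([
    "port  VLAN  MAC                Age",
    "   6    29  aa:bb:cc:dd:ee:ff  270",
    "   3    27  gg:hh:ii:jj:kk:ll  266",
])
```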
data

A dictionary with each bridge name as key and a list of dictionaries of MAC info as the value.

Type

dict

Raises

SkipException -- When the file is empty or data is not present for a bridge.

Examples

>>> len(data["br_tun"])
2
>>> data.get("br_tun")[1]["MAC"] == "gg:hh:ii:jj:kk:ll"
True
>>> int(data["br_tun"][0]["port"])
7
parse_content(content)[source]

This method must be implemented by classes based on this class.

OVSofctlDumpFlows - command /usr/bin/ovs-ofctl dump-flows <bridge-name>

This module provides class OVSofctlDumpFlows to parse the output of command /usr/bin/ovs-ofctl dump-flows <bridge-name>.

class insights.parsers.ovs_ofctl_dump_flows.OVSofctlDumpFlows(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

This class provides processing for the output of the command /usr/bin/ovs-ofctl dump-flows <bridge-name>.

Sample command output:

cookie=0x0, duration=8.528s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth2",vlan_tci=0x0000,dl_src=62:ee:31:2b:35:7c,dl_dst=a2:72:e7:06:75:2e,arp_spa=10.0.0.2,arp_tpa=10.0.0.3,arp_op=2 actions=output:"s1-eth3"
cookie=0x0, duration=4.617s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth1",vlan_tci=0x0000,dl_src=d6:fc:9c:e7:a2:f9,dl_dst=a2:72:e7:06:75:2e,arp_spa=10.0.0.1,arp_tpa=10.0.0.3,arp_op=2 actions=output:"s1-eth3"

Sample parsed output:

[
        { 'cookie': '0x0', 'duration': '8.528s', 'table': '0', 'n_packets': '0', 'n_bytes': '0', 'idle_timeout': '60', 'priority': '65535', 'arp,in_port': 's1-eth2', 'vlan_tci': '0x0000', 'dl_src': '62:ee:31:2b:35:7c', 'dl_dst': 'a2:72:e7:06:75:2e', 'arp_spa': '10.0.0.2', 'arp_tpa': '10.0.0.3', 'arp_op': '2', 'actions=output': 's1-eth3'},
        { 'cookie': '0x0', 'duration': '4.617s', 'table': '0', 'n_packets': '0', 'n_bytes': '0', 'idle_timeout': '60', 'priority': '65535', 'arp,in_port': 's1-eth1', 'vlan_tci': '0x0000', 'dl_src': 'd6:fc:9c:e7:a2:f9', 'dl_dst': 'a2:72:e7:06:75:2e', 'arp_spa': '10.0.0.1', 'arp_tpa': '10.0.0.3', 'arp_op': '2', 'actions=output': 's1-eth3'}
]
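As a much-simplified sketch of the key=value splitting involved: real dump-flows output mixes bare match keywords (such as arp) in with the pairs, so this illustration only handles plain key=value fields and is not a full parser:

```python
# Simplified sketch: collect the key=value fields of a flow line into a
# dict, ignoring bare keywords. Illustrative only.
def parse_flow_line(line):
    flow = {}
    for field in line.replace(" ", ",").split(","):
        if "=" in field:
            key, _, value = field.partition("=")
            flow[key] = value.strip('"')
    return flow

flow = parse_flow_line("cookie=0x0, duration=8.528s, table=0, n_packets=0")
```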

Examples

>>> ovs_obj.bridge_name
'br0'
>>> len(ovs_obj.flow_dumps)
2
property bridge_name

Returns the bridge interface name on success, or None on failure.

Type

(str)

property flow_dumps

Returns the list of flows added under the bridge, or an empty list [] on failure.

Type

(list)

parse_content(content)[source]

This method must be implemented by classes based on this class.

OVSvsctlListBridge - command /usr/bin/ovs-vsctl list bridge

This module provides class OVSvsctlListBridge for parsing the output of command ovs-vsctl list bridge.

class insights.parsers.ovs_vsctl_list_bridge.OVSvsctlListBridge(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class to parse output of command ovs-vsctl list bridge. Generally, the data is in key:value format, with values of type string, number, list, or dictionary. The class provides the data attribute: a list of dictionaries, one per bridge, built by parsing the output line by line.

Sample command output:

name                : br-int
other_config        : {disable-in-band="true", mac-table-size="2048"}
name                : br-tun
other_config        : {}

Examples

>>> bridge_lists[0]["name"]
'br-int'
>>> bridge_lists[0]["other_config"]["mac-table-size"]
'2048'
>>> bridge_lists[0]["other_config"]["disable-in-band"]
'true'
>>> bridge_lists[1].get("name")
'br-tun'
>>> len(bridge_lists[1]["other_config"]) == 0
True
data

A list containing dictionary elements where each element contains the details of a bridge.

Type

list

Raises

SkipException -- When file is empty.

parse_content(content)[source]

Details of all the bridges are extracted and stored in a list as dictionary elements. Each dictionary element contains the information of a specific bridge.
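That grouping step can be sketched as follows, assuming each bridge block begins with a name key as in the sample. parse_list_bridge is a hypothetical helper; the real parser also decodes list and map values, which are kept as raw strings here:

```python
# Sketch: group "key : value" lines into one dict per bridge, starting
# a new record at each "name" key. Values are kept as raw strings.
def parse_list_bridge(lines):
    bridges = []
    for line in lines:
        key, _, value = (part.strip() for part in line.partition(":"))
        if key == "name":
            bridges.append({})
        if bridges:
            bridges[-1][key] = value
    return bridges

bridges = parse_list_bridge([
    'name                : br-int',
    'other_config        : {disable-in-band="true", mac-table-size="2048"}',
    'name                : br-tun',
    'other_config        : {}',
])
```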

PacemakerLog - file /var/log/pacemaker.log

class insights.parsers.pacemaker_log.PacemakerLog(context)[source]

Bases: insights.core.LogFileOutput

Read the pacemaker log file. Uses the LogFileOutput class parser functionality.

Note

Please refer to its super-class insights.core.LogFileOutput

Sample pacemaker.log:

Aug 21 12:58:40 [11656] example.redhat.com        cib:     info: crm_client_destroy:    Destroying 0 events
Aug 21 12:59:53 [11655] example.redhat.com pacemakerd:     info: pcmk_quorum_notification:      Membership 12: quorum retained (3)
Aug 21 12:59:53 [11661] example.redhat.com       crmd:     info: pcmk_quorum_notification:      Membership 12: quorum retained (3)
Aug 21 12:59:53 [11655] example.redhat.com pacemakerd:     info: pcmk_quorum_notification:      Membership 12: quorum retained (3)

Note

Because pacemaker timestamps by default have no year, the year of the logs is inferred from the file's time stamp. This also works around December/January crossovers.

Examples

>>> pm = shared[PacemakerLog]
>>> pm.get('crmd')[0]['raw_message']
'Aug 21 12:59:53 [11661] example.redhat.com       crmd:     info: pcmk_quorum_notification:      Membership 12: quorum retained (3)'
>>> from datetime import datetime
>>> len(list(pm.get_after(datetime(2017, 8, 21, 12, 59, 50))))
3

PackageProvidesHttpd - command /bin/echo {httpd_command_package}

This module parses the content that contains running instances of ‘httpd’ and its corresponding RPM package which provide them. The running command and its package name are stored as properties command and package of the object.

This datasource is used because we need to record the link between a running httpd command and the package that provides it. The ps aux output only shows which httpd command started an httpd application, not which package provides it. With this link, when there is an httpd bug we can detect whether a running httpd application is affected.

Examples

>>> package.command
'/opt/rh/httpd24/root/usr/sbin/httpd'
>>> package.package
'httpd24-httpd-2.4.34-7.el7.x86_64'
class insights.parsers.package_provides_httpd.PackageProvidesHttpd(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse content like '/opt/rh/httpd24/root/usr/sbin/httpd httpd24-httpd-2.4.34-7.el7.x86_64'

command

The httpd command that starts application.

Type

str

package

httpd package that provides above httpd command.

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.

PackageProvidesJava - command /bin/echo {java_command_package}

This command reads the output of the pre-command:

for jp in `/bin/ps auxwww | grep java | grep -v grep| awk '{print $11}' | sort -u`; do echo $jp `readlink -e $jp | xargs rpm -qf`; done

This command looks for all versions of ‘java’ running and tries to find the RPM packages which provide them. The running command and its package name are stored as properties command and package of the object.

This pre_command is used because we need to record the link between a running java command and the package that provides it. The ps aux output only shows which java command started a java application, not which package provides it. With this link, when there is a JDK bug we can detect whether a running java application is affected.

Typical contents of the pre_command:

/usr/lib/jvm/jre/bin/java java-1.8.0-openjdk-headless-1.8.0.141-3.b16.el6_9.x86_64

Parsed result:

self.command = "/usr/lib/jvm/jre/bin/java"
self.package = "java-1.8.0-openjdk-headless-1.8.0.141-3.b16.el6_9.x86_64"
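The split into those two attributes can be sketched with a single partition on the first space, assuming the one-line "<command> <package>" shape shown above (split_command_package is a hypothetical helper, not the parser's real code):

```python
# Sketch: split "<command> <package>" into its two parts on the first
# space. Illustrative only.
def split_command_package(line):
    command, _, package = line.strip().partition(" ")
    return command, package

command, package = split_command_package(
    "/usr/lib/jvm/jre/bin/java "
    "java-1.8.0-openjdk-headless-1.8.0.141-3.b16.el6_9.x86_64"
)
```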

Examples

>>> command_package = shared[PackageProvidesJava]
>>> command_package.command
'/usr/lib/jvm/jre/bin/java'
>>> command_package.package
'java-1.8.0-openjdk-headless-1.8.0.141-3.b16.el6_9.x86_64'
raises insights.parsers.ParseException

if there is no java application running

raises insights.parsers.SkipException

if running java command is not provided by package installed through yum or rpm

class insights.parsers.package_provides_java.PackageProvidesJava(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse the output of pre_command:

for jp in `/bin/ps auxwww | grep java | grep -v grep| awk '{print $11}' | sort -u`; do echo "$jp `readlink -e $jp | xargs rpm -qf`"; done
command

The java command that starts application.

Type

str

package

Java package that provides above java command.

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.

Pluggable Authentication Module configuration

This module provides parsing for PAM configuration files. PamConf is a parser for /etc/pam.conf files. Sample input is provided in the examples.

PamConf - file /etc/pam.conf

Sample file data:

#%PAM-1.0
vsftpd      auth        required    pam_securetty.so
vsftpd      auth        requisite   pam_unix.so nullok
vsftpd      auth        sufficient  pam_nologin.so
vsftpd      account     optional    pam_unix.so
other       password    include     pam_cracklib.so retry=3 logging=verbose
other       password    required    pam_unix.so shadow nullok use_authtok
other       session     required    pam_unix.so

Examples

>>> type(pam_conf)
<class 'insights.parsers.pam.PamConf'>
>>> len(pam_conf)
7
>>> pam_conf[0].service
'vsftpd'
>>> pam_conf[0].interface
'auth'
>>> pam_conf[0].control_flags
[ControlFlag(flag='required', value=None)]
>>> pam_conf[0].module_name
'pam_securetty.so'
>>> pam_conf[0].module_args is None
True
>>> pam_conf.file_path
'/etc/pam.conf'

PamDConf - used for specific PAM configuration files

PamDConf is a base class for the creation of parsers for /etc/pam.d service specific configuration files.

Sample file from /etc/pam.d/sshd:

#%PAM-1.0
auth       required     pam_sepermit.so
auth       substack     password-auth
auth       include      postlogin
# Used with polkit to reauthorize users in remote sessions
-auth      optional     pam_reauthorize.so prepare
account    required     pam_nologin.so
account    include      password-auth
password   include      password-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open env_params
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      password-auth
session    include      postlogin
# Used with polkit to reauthorize users in remote sessions
-session   optional     pam_reauthorize.so prepare

Examples

>>> type(pamd_conf)
<class 'insights.parsers.pam.PamDConf'>
>>> len(pamd_conf)
15
>>> pamd_conf[0]._errors == [] # No errors in parsing
True
>>> pamd_conf[0].service
'sshd'
>>> pamd_conf[0].interface
'auth'
>>> pamd_conf[0].control_flags
[ControlFlag(flag='required', value=None)]
>>> pamd_conf[0].module_name
'pam_sepermit.so'
>>> pamd_conf[0].module_args is None
True
>>> pamd_conf.file_path
'/etc/pam.d/sshd'
>>> pamd_conf[3].module_name
'pam_reauthorize.so'
>>> pamd_conf[3].ignored_if_module_not_found
True

Normal use of the PamDConf class is to subclass it for a parser. In insights/specs/default.py:

pam_sshd = simple_file("etc/pam.d/sshd")

In the parser module (e.g. insights/parsers/pam_sshd.py):

from insights import parser
from insights.parsers.pam import PamDConf
from insights.specs import Specs

@parser(Specs.pam_sshd)
class PamSSHD(PamDConf):
    pass

References

http://www.linux-pam.org/Linux-PAM-html/Linux-PAM_SAG.html

class insights.parsers.pam.PamConf(context)[source]

Bases: insights.parsers.pam.PamDConf

Base class for parsing pam config file /etc/pam.conf.

Based on the PamDConf parser class, but the service must be given as the first element of the line, rather than assumed from the file name.

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.pam.PamConfEntry(line, pamd_conf=False, service=None)[source]

Bases: object

Contains information from one PAM configuration line.

Parses a single line of either a /etc/pam.conf file or a service specific /etc/pam.d conf file. The difference is that for /etc/pam.conf, the service name is the first column of the input line. If a service specific conf file then the service name is not present in the line and must be provided as the service parameter as well as setting the pamd_conf to True.

Parameters
  • line (str) -- One line of the pam conf info.

  • pamd_conf (boolean) -- If this is set to False then the line will be parsed as a line from the /etc/pam.conf file; if True then the line will be parsed as a line from a service specific /etc/pam.d/ conf file. Default is False.

  • service (str) -- If pamd_conf is True then the name of the service file must be provided since it is not present in line.

service

The service name (taken from the line or from the file name if not parsing pam.conf)

Type

str

interface

The type clause - should be one of 'account', 'auth', 'password' or 'session'. If the line was invalid this is set to None.

Type

str

ignored_if_module_not_found

If the type clause is preceded by '-', then this is set to True and it indicates that PAM would skip this line rather than reporting an error if the given module is not found.

Type

bool

control_flags

A list of ControlFlag named tuples. If the control flag was one of 'required', 'requisite', 'sufficient', 'optional', 'include', or 'substack', then this is the only flag in the list and its value is set to True. If the control flag started with [, then the list inside the square brackets is interpreted as a list of key=value tuples.

Type

list

_control_raw

The raw control flag string before parsing, for reference.

Type

str

module_name

The PAM module name (including the ‘.so’)

Type

str

module_args

The PAM module arguments, if any. This is not parsed.

Type

str

_full_line

The original line in the PAM configuration.

Type

str

_errors

A list of parsing errors detected in this line.

Type

list

Examples

>>> pam_conf_line = 'vsftpd      auth        requisite   pam_unix.so nullok'
>>> entry = PamConfEntry(pam_conf_line)
>>> entry.service
'vsftpd'
>>> entry.control_flags[0].flag
'requisite'
>>> entry.module_args
'nullok'
>>> pamd_conf_line = '''
... auth        [success=2 default=ok]  pam_debug.so auth=perm_denied cred=success
... '''.strip()
>>> entry = PamConfEntry(pamd_conf_line, pamd_conf=True, service='vsftpd')
>>> entry.service
'vsftpd'
>>> entry.control_flags
[ControlFlag(flag='success', value='2'), ControlFlag(flag='default', value='ok')]
>>> entry.module_args
'auth=perm_denied cred=success'
Raises

ValueError -- If pamd_conf is True and service name is not provided, or if the line doesn’t contain any module information.

ControlFlag

A named tuple with the ‘flag’ and ‘value’ properties, used to store information about the control flags in a PAM configuration line.

class ControlFlag(flag, value)

Bases: tuple

property flag
property value
class insights.parsers.pam.PamDConf(context)[source]

Bases: insights.core.Parser

Base class for parsing files in /etc/pam.d

Derive from this class for parsers of files in the /etc/pam.d directory. Parses each line of the conf file into a list of PamConfEntry. Configuration file format is:

module_interface    control_flag    module_name module_arguments
Sample input:

>>> pam_sshd = '''
... auth        required    pam_securetty.so
... auth        requisite   pam_unix.so nullok
... auth        sufficient  pam_nologin.so
... auth        [success=2 default=ok]  pam_debug.so auth=perm_denied cred=success
... account     optional    pam_unix.so
... password    include     pam_cracklib.so retry=3 logging=verbose
... password    required    pam_unix.so shadow nullok use_authtok
... '''
>>> from insights.tests import context_wrap
>>> class YourPamDConf(PamDConf):  # A trivial example
...     pass
>>> conf = YourPamDConf(context_wrap(pam_sshd, path='/etc/pam.d/sshd'))

The service property of each PamConfEntry is set to the complete path name of the PAM config file.

data

List containing a PamConfEntry object for each line of the conf file in the same order as lines appear in the file.

Type

list

Examples

>>> conf[0].module_name  # Can be used like a list of objects
'pam_securetty.so'
>>> account_rows = list(conf.search(interface='account'))
>>> len(account_rows)
1
>>> account_rows[0].interface
'account'
>>> account_rows[0].module_name
'pam_unix.so'
>>> account_rows[0].control_flags
[ControlFlag(flag='optional', value=None)]
parse_content(content)[source]

This method must be implemented by classes based on this class.

search(**kwargs)[source]

Search the pam.d configuration file by keyword. This is provided by the insights.parsers.keyword_search() function - see its documentation for more information.

Searching on the list of PAM configuration entries works exactly as if they were dictionaries instead of objects with properties. In addition, the ‘control_flags’ property becomes a dictionary of keywords and values, so that ‘control_flags__contains’ allows searching for a particular control flag.

Returns

A list of PamConfEntry objects that match the given search criteria.

Return type

(list)
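The matching behaviour described above can be sketched as a small stand-alone function (an illustration of the keyword-search semantics under the stated assumptions, not the actual insights.parsers.keyword_search implementation):

```python
def keyword_search(rows, **kwargs):
    # Sketch: keep rows (dicts) where every keyword matches; a key ending in
    # '__contains' tests membership instead of equality.
    results = []
    for row in rows:
        for key, value in kwargs.items():
            if key.endswith('__contains'):
                field = key[:-len('__contains')]
                if field not in row or value not in row[field]:
                    break
            elif row.get(key) != value:
                break
        else:
            results.append(row)
    return results
```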

PartedL - command parted -l

This module provides processing for the parted command. The output is parsed by the PartedL class. Attributes are provided for each field for the disk, and a list of Partition class objects, one for each partition in the output.

Typical content of the parted -l command output looks like:

Model: ATA TOSHIBA MG04ACA4 (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: pmbr_boot

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  526MB   524MB   xfs
 3      526MB   4001GB  4000GB                     lvm

The columns may vary depending upon the type of device.

Note

The examples in this module may be executed with the following command:

python -m insights.parsers.parted

Examples

>>> parted_data = '''
... Model: ATA TOSHIBA MG04ACA4 (scsi)
... Disk /dev/sda: 4001GB
... Sector size (logical/physical): 512B/512B
... Partition Table: gpt
... Disk Flags: pmbr_boot
...
... Number  Start   End     Size    File system  Name  Flags
...  1      1049kB  2097kB  1049kB                     bios_grub
...  2      2097kB  526MB   524MB   xfs
...  3      526MB   4001GB  4000GB                     lvm
... '''.strip()
>>> from insights.tests import context_wrap
>>> shared = {PartedL: PartedL(context_wrap(parted_data))}
>>> parted_info = shared[PartedL]
>>> parted_info.data
{'partition_table': 'gpt', 'sector_size': '512B/512B', 'disk_flags': 'pmbr_boot', 'partitions': [{'end': '2097kB', 'name': 'bios_grub', 'number': '1', 'start': '1049kB', 'flags': 'bios_grub', 'file_system': 'bios_grub', 'size': '1049kB'}, {'start': '2097kB', 'size': '524MB', 'end': '526MB', 'number': '2', 'file_system': 'xfs'}, {'end': '4001GB', 'name': 'lvm', 'number': '3', 'start': '526MB', 'flags': 'lvm', 'file_system': 'lvm', 'size': '4000GB'}], 'model': 'ATA TOSHIBA MG04ACA4 (scsi)', 'disk': '/dev/sda', 'size': '4001GB'}
>>> parted_info.data['model']
'ATA TOSHIBA MG04ACA4 (scsi)'
>>> parted_info.disk
'/dev/sda'
>>> parted_info.logical_sector_size
'512B'
>>> parted_info.physical_sector_size
'512B'
>>> parted_info.boot_partition
>>> parted_info.data['disk_flags']
'pmbr_boot'
>>> len(parted_info.partitions)
3
>>> parted_info.partitions[0].data
{'end': '2097kB', 'name': 'bios_grub', 'number': '1', 'start': '1049kB', 'flags': 'bios_grub', 'file_system': 'bios_grub', 'size': '1049kB'}
>>> parted_info.partitions[0].number
'1'
>>> parted_info.partitions[0].start
'1049kB'
>>> parted_info.partitions[0].end
'2097kB'
>>> parted_info.partitions[0].size
'1049kB'
>>> parted_info.partitions[0].file_system
'bios_grub'
>>> parted_info.partitions[0].type
>>> parted_info.partitions[0].flags
'bios_grub'
class insights.parsers.parted.PartedL(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class to represent attributes of the parted command output.

The columns may vary depending upon the type of device.

data

Dictionary of information returned by parted command.

Type

dict

partitions

The partitions found in the output, as Partition objects.

Type

list

boot_partition

the first partition marked as bootable, or None if one was not found.

Type

Partition

Raises
  • ParseException -- Raised if parted output indicates “error” or “warning” in first line, or if “disk” field is not present, or if there is an error parsing the data.

  • ValueError -- Raised if there is an error parsing the partition table.

property disk

Disk information.

Type

str

get(item)[source]

Returns a value for the specified item key.

property logical_sector_size

Logical part of sector size.

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.

property physical_sector_size

Physical part of sector size.

Type

str

class insights.parsers.parted.Partition(data)[source]

Bases: object

Class to contain information for one partition.

Represents the values from one row of the partition information from the parted command. Column names have been converted to lowercase and are provided as attributes. Column names may vary so the get method may be used to check for the presence of a column.

data

Dictionary of partition information keyed by column names in lowercase.

Type

dict

property end

Ending location for the partition.

Type

str

property file_system

File system type.

Type

str

property flags

Partition flags.

Type

str

get(item)[source]

Get information for column item or None if not present.

property number

Partition number.

Type

str

property size

Size of the partition.

Type

str

property start

Starting location for the partition.

Type

str

property type

File system type.

Type

str

Partitions - file /proc/partitions

This parser reads the /proc/partitions file, which contains partition block allocation information.

class insights.parsers.partitions.Partitions(context)[source]

Bases: insights.core.Parser, dict

A class for parsing the /proc/partitions file.

Sample input:

major minor  #blocks  name

   3     0   19531250 hda
   3     1     104391 hda1
   3     2   19422585 hda2
 253     0   22708224 dm-0
 253     1     524288 dm-1
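A table like the one above can be parsed in a few lines (a minimal sketch assuming the header row shown; the real parser also handles edge cases such as empty input):

```python
def parse_proc_partitions(lines):
    # Sketch: the first non-blank line is the header ('#blocks' is normalised
    # to 'blocks'); the remaining rows are keyed on the 'name' column.
    rows = [line.split() for line in lines if line.strip()]
    header = [col.lstrip('#') for col in rows[0]]
    name_idx = header.index('name')
    return {row[name_idx]: dict(zip(header, row)) for row in rows[1:]}
```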

Examples

>>> type(partitions_info)
<class 'insights.parsers.partitions.Partitions'>
>>> 'hda' in partitions_info
True
>>> partitions_info['dm-0'].get('major')
'253'
>>> sorted(partitions_info['hda'].items(), key=lambda x: x[0])
[('blocks', '19531250'), ('major', '3'), ('minor', '0'), ('name', 'hda')]
Raises

SkipException -- When input is empty.

parse_content(content)[source]

This method must be implemented by classes based on this class.

property partitions

Dictionary with each partition name as index and its information from the block allocation table.

Type

(dict)

passenger-status command

This module provides processing for the passenger-status command using the following parsers:

class insights.parsers.passenger_status.PassengerStatus(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, dict

Parse the passenger-status command output.

Produces a simple dictionary of keys and values from the command output contents.

Sample command output:

Version : 4.0.18
Date    : 2018-10-23 15:42:04 +0800
Instance: 1265
----------- General information -----------
Max pool size : 12
Processes     : 2
Requests in top-level queue : 0

----------- Application groups -----------
/usr/share/foreman#default:
App root: /usr/share/foreman
Requests in queue: 192
* PID: 30131   Sessions: 1       Processed: 991     Uptime: 2h 9m 8s
CPU: 3%      Memory  : 562M    Last used: 1h 53m 51s
* PID: 32450   Sessions: 1       Processed: 966     Uptime: 2h 8m 15s
CPU: 4%      Memory  : 463M    Last used: 1h 48m 17
* PID: 4693    Sessions: 1       Processed: 939     Uptime: 2h 6m 32s
CPU: 3%      Memory  : 470M    Last used: 1h 50m 48

/etc/puppet/rack#default:
App root: /etc/puppet/rack
Requests in queue: 0
* PID: 21934   Sessions: 1       Processed: 380     Uptime: 1h 33m 34s
CPU: 1%      Memory  : 528M    Last used: 1h 29m 4
* PID: 26194   Sessions: 1       Processed: 544     Uptime: 1h 31m 34s
CPU: 2%      Memory  : 490M    Last used: 1h 23m 5
* PID: 32384   Sessions: 1       Processed: 36      Uptime: 1h 0m 29s
CPU: 0%      Memory  : 561M    Last used: 1h 0m 3s

Examples

>>> passenger_status["Version"]
'4.0.18'
>>> 'rack_default' in passenger_status
True
>>> len(passenger_status['foreman_default']['p_list'])
3
Raises

SkipException -- When input content is empty or there is no useful data.

property data

A simple dictionary of keys and values from the command output contents

Type

(dict)

parse_content(content)[source]

This method must be implemented by classes based on this class.

PciRportTargetDiskPath

Module for parsing the output of command find /sys/devices/ -maxdepth 10 -mindepth 9 -name stat -type f.

class insights.parsers.pci_rport_target_disk_paths.PciRportTargetDiskPaths(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class for parsing find /sys/devices/ -maxdepth 10 -mindepth 9 -name stat -type f command output.

Typical output of command find /sys/devices/ -maxdepth 10 -mindepth 9 -name stat -type f with the filter of ‘block’ looks like:

/sys/devices/pci0000:00/0000:00:01.0/0000:04:00.6/host1/rport-1:0-1/target1:0:0/1:0:0:0/block/sdb/stat
/sys/devices/pci0000:00/0000:00:01.0/0000:04:00.7/host2/rport-2:0-2/target2:0:0/2:0:0:0/block/sdc/stat
/sys/devices/pci0000:00/0000:00:02.2/0000:02:00.0/host0/target0:1:0/0:1:0:0/block/sda/stat

The original parsed data looks like:

[
       {
           'target': 'target1:0:0',
           'devnode': 'sdb',
           'host_channel_id_lun': '1:0:0:0',
           'pci_id': '0000:04:00.6',
           'host': 'host1',
           'rport': 'rport-1:0-1'
       },
       {
           'target': 'target2:0:0',
           'devnode': 'sdc',
           'host_channel_id_lun': '2:0:0:0',
           'pci_id': '0000:04:00.7',
           'host': 'host2',
           'rport': 'rport-2:0-2'
       },
       {
           'target': 'target0:1:0',
           'devnode': 'sda',
           'host_channel_id_lun': '0:1:0:0',
           'pci_id': '0000:02:00.0',
           'host': 'host0',
       }
]
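The extraction shown above can be sketched by walking the path components and picking each named element out by its prefix (an illustration only, not the parser's actual code; 'rport' is absent for directly attached disks, as in the third sample path):

```python
def parse_stat_path(path):
    # Sketch: identify each component of one 'stat' path by its prefix,
    # assuming the layout shown in the typical output above.
    parts = path.strip('/').split('/')
    info = {}
    for i, part in enumerate(parts):
        if part.startswith('host') and ':' not in part:
            info['host'] = part
            info['pci_id'] = parts[i - 1]            # component just before hostN
        elif part.startswith('rport-'):
            info['rport'] = part
        elif part.startswith('target'):
            info['target'] = part
            info['host_channel_id_lun'] = parts[i + 1]
        elif part == 'block':
            info['devnode'] = parts[i + 1]
    return info
```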

Examples

>>> type(pd)
<class 'insights.parsers.pci_rport_target_disk_paths.PciRportTargetDiskPaths'>
>>> pd.pci_id
['0000:02:00.0', '0000:04:00.6', '0000:04:00.7']
>>> pd.host
['host0', 'host1', 'host2']
>>> pd.target
['target0:1:0', 'target1:0:0', 'target2:0:0']
>>> pd.host_channel_id_lun
['0:1:0:0', '1:0:0:0', '2:0:0:0']
>>> pd.devnode
['sda', 'sdb', 'sdc']
path_list

The parsed result.

Type

list

property devnode

All device node(s) from the parsed content.

Returns

device nodes

Return type

list

property host

All host(s) from the parsed content.

Returns

hosts

Return type

list

property host_channel_id_lun

All host_channel_id_lun(s) from the parsed content.

Returns

host_channel_id_lun

Return type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.

property pci_id

All pci_id(s) from the parsed content.

Returns

pci id

Return type

list

property rport

All rport(s) from the parsed content.

Returns

rports

Return type

list

property target

All target(s) from the parsed content.

Returns

targets

Return type

list

PCSConfig - command pcs config

This module provides class PCSConfig for parsing output of pcs config command.

Typical /usr/sbin/pcs config output looks something like:

Cluster Name: cluster-1
Corosync Nodes:
 node-1 node-2
Pacemaker Nodes:
 node-1 node-2

Resources:
 Clone: clone-1
 Meta Attrs: interleave=true ordered=true
 Resource: res-1 (class=ocf provider=pacemaker type=controld)
  Operations: start interval=0s timeout=90 (dlm-start-interval-0s)
              stop interval=0s timeout=100 (dlm-stop-interval-0s)
              monitor interval=30s on-fail=fence (dlm-monitor-interval-30s)
 Group: grp-1
 Resource: res-1 (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=10.0.0.1 cidr_netmask=32
  Operations: monitor interval=120s (ip_monitor-interval-120s)
              start interval=0s timeout=20s (ip_-start-interval-0s)
              stop interval=0s timeout=20s (ip_-stop-interval-0s)

Stonith Devices:
Fencing Levels:

Location Constraints:
Resource: fence-1
    Disabled on: res-mgt (score:-INFINITY) (id:location-fence-1--INFINITY)
Resource: res-1
    Enabled on: res-mcast (score:INFINITY) (role: Started) (id:cli-prefer-res)
Ordering Constraints:
Colocation Constraints:

Resources Defaults:
 resource-stickiness: 100
 migration-threshold: 3
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: cluster-1
 dc-version: 1.1.13-10.el7_2.4-44eb2dd
 have-watchdog: false
 no-quorum-policy: ignore
 stonith-enable: true
 stonith-enabled: false

The class provides the data attribute, a dictionary of lines parsed line by line based on keys, which are the keywords of the output. Information in the keys Corosync Nodes and Pacemaker Nodes is parsed into one line. The get(str) method provides lines from data based on the given key.

Examples

>>> pcs_config.get("Cluster Name")
'cluster-1'
>>> pcs_config.get("Corosync Nodes")
['node-1', 'node-2']
>>> pcs_config.get("Cluster Properties")
['cluster-infrastructure: corosync', 'cluster-name: cluster-1', 'dc-version: 1.1.13-10.el7_2.4-44eb2dd', 'have-watchdog: false', 'no-quorum-policy: ignore', 'stonith-enable: true', 'stonith-enabled: false']
>>> pcs_config.get("Colocation Constraints")
['clone-1 with clone-x (score:INFINITY) (id:clone-INFINITY)', 'clone-2 with clone-x (score:INFINITY) (id:clone-INFINITY)']
>>> 'have-watchdog' in pcs_config.cluster_properties
True
>>> pcs_config.cluster_properties.get('have-watchdog')
'false'
class insights.parsers.pcs_config.PCSConfig(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class to process the output of pcs config command.

cluster_properties

A dictionary containing all cluster properties key, value

Type

dict

data

A dictionary containing all lines, with keys based on the keywords of the output, in the form:

{
    "Cluster Name": "cluster-1",
    "Corosync Nodes": [
        "node-1",
        "node-2"
    ],
    "Pacemaker Nodes": [
        "node-1",
        "node-2"
    ],
    "Resources": [
        "Clone: clone-1",
        "Meta Attrs: interleave=true ordered=true",
        "Resource: res-1 (class=ocf provider=pacemaker type=controld)",
        "Operations: start interval=0s timeout=90 (dlm-start-interval-0s)",
        "stop interval=0s timeout=100 (dlm-stop-interval-0s)",
        "monitor interval=30s on-fail=fence (dlm-monitor-interval-30s)"
    ],
    "Cluster Properties": [
        "cluster-infrastructure: corosync",
        "cluster-name: cluster-1"
    ]
}
Type

dict

parse_content(content)[source]

This method must be implemented by classes based on this class.

PcsQuorumStatus - Commands pcs quorum status

class insights.parsers.pcs_quorum_status.PcsQuorumStatus(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class for parsing the output of pcs quorum status command.

Typical output of the command is:

Quorum information
------------------
Date:             Wed Jun 29 13:17:02 2016
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          1
Ring ID:          1/8272
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate Qdevice

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
         1          1    A,V,NMW node1 (local)
         2          1    A,V,NMW node2
         0          1            Qdevice
quorum_info

Dicts where keys are the feature name of quorum information and values are the corresponding feature value.

Type

dict

votequorum_info

Dicts where keys are the feature name of votequorum information and values are the corresponding feature value.

Type

dict

membership_info

List of dicts where keys are the feature name of each node and values are the corresponding feature value.

Type

list

Examples

>>> type(pcs_quorum_status)
<class 'insights.parsers.pcs_quorum_status.PcsQuorumStatus'>
>>> pcs_quorum_status.quorum_info['Node ID']
'1'
>>> pcs_quorum_status.votequorum_info['Expected votes']
'3'
>>> pcs_quorum_status.membership_info[0]['Name']
'node1 (local)'
parse_content(content)[source]

This method must be implemented by classes based on this class.

PCSStatus - command pcs status

This module provides the class PCSStatus which processes /usr/sbin/pcs status command output. Typical output of the pcs status command looks like:

Cluster name: mycluster
Last updated: Thu Dec  1 02:33:50 2016          Last change: Wed Aug  3 03:47:11 2016 by root via cibadmin on nodea.example.com
Stack: corosync
Current DC: nodea.example.com (version 1.1.13-10.el7-44eb2dd) - partition WITHOUT quorum
3 nodes and 3 resources configured

Online: [ nodea.example.com ]
OFFLINE: [ nodeb.example.com nodec.example.com ]

Full list of resources:
 myfence        (stonith:fence_xvm):    Stopped
 Resource Group: myweb
     webVIP     (ocf::heartbeat:IPaddr2):       Stopped
     webserver  (ocf::heartbeat:apache):        Stopped
PCSD Status:
  nodea.example.com: Online
  nodeb.example.com: Offline
  nodec.example.com: Offline
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

The class PCSStatus has one attribute data, a dict containing the whole output, and one attribute nodes, a list containing all node names from the “PCSD Status” section.

Examples

>>> pcsstatus_content = '''
... Cluster name: openstack
... Last updated: Fri Oct 14 15:45:32 2016
... Last change: Thu Oct 13 20:02:27 2016
... Stack: corosync
... Current DC: myhost15 (1) - partition with quorum
... Version: 1.1.12-a14efad
... 3 Nodes configured
... 143 Resources configured
... online: [ myhost15 myhost16 myhost17 ]
... Full list of resources:
... stonith-ipmilan-10.24.221.172   (stonith:fence_ipmilan):        Started myhost15
... stonith-ipmilan-10.24.221.171   (stonith:fence_ipmilan):        Started myhost16
... stonith-ipmilan-10.24.221.173   (stonith:fence_ipmilan):        Started myhost15
... PCSD Status:
...     myhost15: Online
...     myhost17: Online
...     myhost16: Online
... Daemon Status:
...    corosync: active/enabled
...    pacemaker: active/enabled
...    pcsd: active/enabled
... '''.strip()
>>> from insights.tests import context_wrap
>>> shared = {PCSStatus: PCSStatus(context_wrap(pcsstatus_content))}
>>> pcsstatus_info = shared[PCSStatus]
>>> pcsstatus_info.get("Cluster name")
'openstack'
>>> pcsstatus_info.get("Stack")
'corosync'
>>> pcsstatus_info.get("Nodes configured")
'3'
>>> pcsstatus_info.get("Resources configured")
'143'
>>> pcsstatus_info.nodes
['myhost15', 'myhost17', 'myhost16']
>>> len(pcsstatus_info.get("Full list of resources"))
3
class insights.parsers.pcs_status.PCSStatus(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class to process the output of pcs status command.

nodes

A list containing all node names according to the “PCSD Status” section.

Type

list

data

A dict containing key, value for each line of the output in the form:

{
   "Resources configured": "143",
   "PCSD Status": [
       "myhost15: Online",
       "myhost17: Online",
       "myhost16: Online"
   ],
   "Current DC": "myhost15 (1) - partition with quorum",
   "Full list of resources": [
       "stonith-ipmilan-10.24.221.172      (stonith:fence_ipmilan):        Started myhost15",
       "stonith-ipmilan-10.24.221.171      (stonith:fence_ipmilan):        Started myhost16",
       "stonith-ipmilan-10.24.221.173      (stonith:fence_ipmilan):        Started myhost15",
   ],
   "Daemon Status": [
       "corosync: active/enabled",
       "pacemaker: active/enabled",
       "pcsd: active/enabled"
   ],
   "Nodes configured": "3",
   "Online": "[ myhost15 myhost16 myhost17 ]",
   "Cluster name": "openstack",
   "Stack": "corosync"
}
Type

dict

get(i_key)[source]

str/list: Returns the data associated with i_key or None if i_key is not present

parse_content(content)[source]

This method must be implemented by classes based on this class.

PodmanInspect - Command podman inspect --type={TYPE}

This module parses the output of the podman inspect command. This uses the core.marshalling.unmarshal function to parse the JSON output from the commands. The data is stored as a dictionary.

class insights.parsers.podman_inspect.PodmanInspect(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, dict

Parse the output of the commands “podman inspect --type=image” and “podman inspect --type=container”. The output of these two commands is formatted as JSON, so “json.loads” is an option for parsing the output in the future.

Raises

SkipException -- If content is not provided
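Since the output is JSON, the core of the parsing can be sketched as follows (an illustration only; the real parser uses insights.core.marshalling.unmarshal and handles missing or malformed input):

```python
import json

def parse_inspect_output(content):
    # Sketch: join the collected lines and load the JSON document; podman
    # inspect emits a one-element array, so unwrap it to a dict.
    data = json.loads('\n'.join(content))
    return data[0] if isinstance(data, list) else data
```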

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.podman_inspect.PodmanInspectContainer(context, extra_bad_lines=[])[source]

Bases: insights.parsers.podman_inspect.PodmanInspect

Parse podman container inspect output using the PodmanInspect parser class.

Sample input:

[
    {
        "ID": "66db151828e9beede0cdd9c17fc9bd5ebb5d125dd036f7230bc6b6433e5c0dda",
        "Created": "2019-08-21T10:38:34.753548542Z",
        "Path": "dumb-init",
        "State": {
            "OciVersion": "1.0.1-dev",
            "Status": "running",
            "Running": true,
            "Paused": false,
        },
    ...

Examples

>>> container['ID'] == '66db151828e9beede0cdd9c17fc9bd5ebb5d125dd036f7230bc6b6433e5c0dda'
True
>>> container['Path'] == 'dumb-init'
True
>>> container.get('State').get('Paused') is False
True
class insights.parsers.podman_inspect.PodmanInspectImage(context, extra_bad_lines=[])[source]

Bases: insights.parsers.podman_inspect.PodmanInspect

Parse podman image inspect output using the PodmanInspect parser class.

Sample input:

[
    {
        "Id": "013125b8a088f45be8f85f88b5504f05c02463b10a6eea2b66809a262bb911ca",
        "Digest": "sha256:f9662cdd45e3db182372a4fa6bfff10e1c601cc785bac09ccae3b18f0bc429df",
        "RepoTags": [
            "192.168.24.1:8787/rhosp15/openstack-rabbitmq:20190819.1",
            "192.168.24.1:8787/rhosp15/openstack-rabbitmq:pcmklatest"
        ],
    ...

Examples

>>> image['Id'] == '013125b8a088f45be8f85f88b5504f05c02463b10a6eea2b66809a262bb911ca'
True
>>> image['RepoTags'][0] == '192.168.24.1:8787/rhosp15/openstack-rabbitmq:20190819.1'
True

PodmanList - command /usr/bin/podman (images|ps)

Parse the output of command “podman_list_images” and “podman_list_containers”, which have very similar formats.

The header line is parsed and used as the names for the remaining columns. All fields in both header and data are assumed to be separated by at least three spaces. This allows single spaces in values and headers, so headers such as ‘IMAGE ID’ are captured as is.

If the header line and at least one data line are not found, no data is stored.

Each row is stored as a dictionary, keyed on the header fields. The data is available in two formats:

  • The old format is a list of row dictionaries.

  • The new format stores each dictionary in a dictionary keyed on the value of a given field, given by the subclass.

class insights.parsers.podman_list.PodmanList(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

A general class for parsing tabular podman list information. Parsing rules are:

  • The first line is the header line.

  • The other lines are data lines.

  • All fields line up vertically.

  • Fields are separated from each other by at least three spaces.

  • Some fields can contain nothing, and this is shown as spaces, so we need to catch that and turn it into None.

Why not just use hard-coded fields and columns? So that we can adapt to different output lists.
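The rules above can be sketched with a split on runs of three or more spaces (a simplified illustration that ignores the empty-field/vertical-alignment case the real parser handles):

```python
import re

def parse_fixed_table(lines):
    # Sketch: header and data fields are separated by 3+ spaces, so single
    # spaces inside values (e.g. 'IMAGE ID') survive the split.
    header = re.split(r' {3,}', lines[0].strip())
    return [dict(zip(header, re.split(r' {3,}', line.strip())))
            for line in lines[1:] if line.strip()]
```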

Raises
  • NotImplementedError -- If key_field or attr_name is not defined

  • SkipException -- If no data to parse

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.podman_list.PodmanListContainers(context, extra_bad_lines=[])[source]

Bases: insights.parsers.podman_list.PodmanList

Handle the list of podman containers using the PodmanList parser class.

Sample output of command podman ps --all --no-trunc --size:

CONTAINER ID                                                       IMAGE                                                              COMMAND                                            CREATED             STATUS                        PORTS                  NAMES               SIZE
03e2861336a76e29155836113ff6560cb70780c32f95062642993b2b3d0fc216   rhel7_httpd                                                        "/usr/sbin/httpd -DFOREGROUND"                     45 seconds ago      Up 37 seconds                 0.0.0.0:8080->80/tcp   angry_saha          796 B (virtual 669.2 MB)
95516ea08b565e37e2a4bca3333af40a240c368131b77276da8dec629b7fe102   bd8638c869ea40a9269d87e9af6741574562af9ee013e03ac2745fb5f59e2478   "/bin/sh -c 'yum install -y vsftpd-2.2.2-6.el6'"   51 minutes ago      Exited (137) 50 minutes ago                          tender_rosalind     4.751 MB (virtual 200.4 MB)
rows

List of row dictionaries.

Type

list

containers

Dictionary keyed on the value of the “NAMES” field

Type

dict

Examples

>>> containers.rows[0]['NAMES']
'angry_saha'
>>> containers.rows[0]['STATUS']
'Up 37 seconds'
>>> containers.containers['tender_rosalind']['STATUS']
'Exited (137) 50 minutes ago'
class insights.parsers.podman_list.PodmanListImages(context, extra_bad_lines=[])[source]

Bases: insights.parsers.podman_list.PodmanList

Handle the list of podman images using the PodmanList parser class.

Sample output of command podman images --all --no-trunc --digests:

REPOSITORY                           TAG                 DIGEST              IMAGE ID                                                           CREATED             SIZE
rhel6_vsftpd                         latest              <none>              412b684338a1178f0e5ad68a5fd00df01a10a18495959398b2cf92c2033d3d02   37 minutes ago      459.5 MB
rhel7_imagemagick                    latest              <none>              882ab98aae5394aebe91fe6d8a4297fa0387c3cfd421b2d892bddf218ac373b2   4 days ago          785.4 MB
rhel6_nss-softokn                    latest              <none>              dd87dad2c7841a19263ae2dc96d32c501ee84a92f56aed75bb67f57efe4e48b5   5 days ago          449.7 MB
rows

List of row dictionaries.

Type

list

images

Dictionary keyed on the value of the “REPOSITORY” field

Type

dict

Examples

>>> images.rows[0]['REPOSITORY']
'rhel6_vsftpd'
>>> images.rows[1]['SIZE']
'785.4 MB'
>>> images.images['rhel6_vsftpd']['CREATED']
'37 minutes ago'

PostgreSQLConf - file /var/lib/pgsql/data/postgresql.conf

The PostgreSQL configuration file is in a fairly standard ‘key = value’ format, with the equals sign being optional. A hash mark (#) marks the rest of the line as a comment.

The configuration then appears as a dictionary in the data property.

This parser does not attempt to know the default value of any property; it only shows what’s defined in the configuration file as given.

This parser also provides several utility functions to make sense of values specific to PostgreSQL. These are:

  • as_duration(property)

    Convert the value (given in milliseconds, seconds, minutes, hours or days) to seconds (as a floating point value).

  • as_boolean(property)

    If the value is ‘on’, ‘true’, ‘yes’, or ‘1’, return True. If the value is ‘off’, ‘false’, ‘no’ or ‘0’, return False. Unique prefixes of these are acceptable and case is ignored.

  • as_memory_bytes(property)

    Convert a number given in KB, MB or GB into bytes, where 1 kilobyte is 1024 bytes.

All three type conversion functions will raise a ValueError if the value doesn’t match the spec or cannot be converted to the correct type.
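The three conversions can be sketched as stand-alone helpers (illustrative only, not the class's actual methods; error handling and prefix disambiguation are simplified):

```python
def as_memory_bytes(value):
    # kB/MB/GB with a 1024 multiplier, as described above
    suffixes = {'kB': 1024, 'MB': 1024 ** 2, 'GB': 1024 ** 3}
    for suffix, mult in suffixes.items():
        if value.endswith(suffix):
            return int(value[:-len(suffix)]) * mult
    return int(value)

def as_duration(value):
    # ms/s/min/h/d converted to seconds, returned as a float
    units = {'ms': 0.001, 's': 1.0, 'min': 60.0, 'h': 3600.0, 'd': 86400.0}
    for suffix in sorted(units, key=len, reverse=True):  # try 'min' before 'ms' before 's'
        if value.endswith(suffix):
            return float(value[:-len(suffix)]) * units[suffix]
    return float(value)

def as_boolean(value):
    # case-insensitive unambiguous prefixes of on/true/yes/1 and off/false/no/0
    v = value.lower()
    truthy = any(w.startswith(v) for w in ('on', 'true', 'yes', '1'))
    falsy = any(w.startswith(v) for w in ('off', 'false', 'no', '0'))
    if truthy == falsy:  # matches both (e.g. 'o') or neither: ambiguous/invalid
        raise ValueError('unrecognised boolean: %s' % value)
    return truthy
```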

Example

>>> pgsql = shared[PostgreSQLConf]
>>> 'port' in pgsql
True
>>> pgsql['port']
'5432'
>>>
class insights.parsers.postgresql_conf.PostgreSQLConf(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

Parses postgresql.conf and converts it into a dictionary of properties.

as_boolean(item, default=None)[source]

See https://www.postgresql.org/docs/9.3/static/config-setting.html :-

“Boolean values can be written as on, off, true, false, yes, no, 1, 0 (all case-insensitive) or any unambiguous prefix of these.”

as_duration(item, default=None)[source]

Postgres’s time durations for checkpoint_timeout can have ‘ms’, ‘s’, ‘min’, ‘h’, or ‘d’ suffixes. We convert all of them here to seconds.

See https://www.postgresql.org/docs/9.3/static/config-setting.html :-

“Valid time units are ms (milliseconds), s (seconds), min (minutes), h (hours), and d (days)”

We return a floating point number because of the possibility of conversion from milliseconds, and because maybe someone will say 8.4h.

as_memory_bytes(item, default=None)[source]

See https://www.postgresql.org/docs/9.3/static/config-setting.html :-

“Valid memory units are kB (kilobytes), MB (megabytes), and GB (gigabytes). Note that the multiplier for memory units is 1024, not 1000.”

parse_content(content)[source]

Parsing rules from :

https://www.postgresql.org/docs/9.3/static/config-setting.html

One parameter is specified per line. The equal sign between name and value is optional. Whitespace is insignificant and blank lines are ignored. Hash marks (#) designate the remainder of the line as a comment. Parameter values that are not simple identifiers or numbers must be single-quoted. To embed a single quote in a parameter value, write either two quotes (preferred) or backslash-quote.

PostgreSQLLog - file /var/lib/pgsql/data/pg_log/postgresql-*.log

class insights.parsers.postgresql_log.PostgreSQLLog(context)[source]

Bases: insights.core.LogFileOutput

Read the PostgreSQL log files. Uses the LogFileOutput class parser functionality.

Note

Please refer to its super-class insights.core.LogFileOutput

The PostgreSQL log files contain no dates or times by default:

LOG:  shutting down
LOG:  database system is shut down
LOG:  database system was shut down at 2015-03-31 05:05:12 UTC
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

Because this parser reads multiple log files, the shared parser information contains a list of parsed files. At present you will need to iterate through the log file objects in that list to find what you want.

Examples

>>> for log in shared[PostgreSQLLog]:
...     print("File:", log.file_path)
...     print("Startups:", len(log.get('ready to accept connections')))
...
File: /var/log/pgsql/data/pg_log/postgresql-Fri
Startups: 1
File: /var/log/pgsql/data/pg_log/postgresql-Mon
Startups: 0
File: /var/log/pgsql/data/pg_log/postgresql-Sat
Startups: 0
File: /var/log/pgsql/data/pg_log/postgresql-Sun
Startups: 2
File: /var/log/pgsql/data/pg_log/postgresql-Thu
Startups: 0
File: /var/log/pgsql/data/pg_log/postgresql-Tue
Startups: 0
File: /var/log/pgsql/data/pg_log/postgresql-Wed
Startups: 0

ProcEnviron - File /proc/<PID>/environ

Parser for parsing the environ file under /proc/<PID> directory.

class insights.parsers.proc_environ.OpenshiftFluentdEnviron(context)[source]

Bases: insights.parsers.proc_environ.ProcEnviron

Class for parsing the environ file of the fluentd process.

class insights.parsers.proc_environ.OpenshiftRouterEnviron(context)[source]

Bases: insights.parsers.proc_environ.ProcEnviron

Class for parsing the environ file of the openshift-router process.

class insights.parsers.proc_environ.ProcEnviron(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Base class for parsing the environ file under the special /proc/<PID> directory into a dictionary with environment variable names as keys and their values as values.

Typical content looks like:

REGISTRIES=--add-registry registry.access.redhat.com OPTIONS= --selinux-enabled       --signature-verification=FalseDOCKER_HTTP_HOST_COMPAT=1ADD_REGISTRY=--add-registry registry.access.redhat.comPATH=/usr/libexec/docker:/usr/bin:/usr/sbinPWD=/run/docker/libcontainerd/containerd/135240dbd15a834acb21d68867930917afcc84c5f006ba65004acd88dccab756/initLANG=en_US.UTF-8GOTRACEBACK=crashDOCKER_NETWORK_OPTIONS= --mtu=1450DOCKER_CERT_PATH=/etc/dockerSHLVL=0DOCKER_STORAGE_OPTIONS=--storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/docker--vg-docker--pool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true 
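In the raw environ file the entries are NUL-separated, which is why they appear run together above. Splitting such content into a dictionary can be sketched as follows (a hypothetical helper, not the parser's actual code):

```python
def parse_environ(raw):
    """Split NUL-separated /proc/<PID>/environ content into a dict.

    Each entry has the form NAME=value; entries without '=' (such as the
    trailing empty string after the final NUL) are ignored.
    """
    env = {}
    for entry in raw.split('\x00'):
        if '=' in entry:
            key, _, value = entry.partition('=')
            env[key] = value
    return env
```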

Examples

>>> proc_environ['REGISTRIES']
'--add-registry registry.access.redhat.com'
>>> 'OPTIONS' in proc_environ
True
Raises

insights.parsers.ParseException -- if the environ file is empty or doesn't exist.
parse_content(content)[source]

This method must be implemented by classes based on this class.

ProcLimits - File /proc/<PID>/limits

Parser for parsing the limits file under the special /proc/<PID> directory.

class insights.parsers.proc_limits.HttpdLimits(context)[source]

Bases: insights.parsers.proc_limits.ProcLimits

Class for parsing the limits file of the httpd process.

class insights.parsers.proc_limits.Limits(data={})[source]

Bases: insights.core.LegacyItemAccess

An object representing a line in the /proc/<PID>/limits file. Each entry contains the following fixed attributes:

hard_limit

Hard limit

Type

str

soft_limit

Soft limit

Type

str

units

Unit of the limit value

Type

str

items()[source]

Provided for backward compatibility so that the object can be iterated like a dictionary.

class insights.parsers.proc_limits.MysqldLimits(context)[source]

Bases: insights.parsers.proc_limits.ProcLimits

Class for parsing the limits file of the mysqld process.

class insights.parsers.proc_limits.OvsVswitchdLimits(context)[source]

Bases: insights.parsers.proc_limits.ProcLimits

Class for parsing the limits file of the ovs-vswitchd process.

class insights.parsers.proc_limits.ProcLimits(context)[source]

Bases: insights.core.Parser

Base class for parsing the limits file under special /proc/<PID> directory into a list of dictionaries by using the insights.parsers.parse_fixed_table() function.

Each line is a dictionary of fields, named according to their definitions in Limit.

This class provides the ‘__len__’ and ‘__iter__’ methods to allow it to be used as a list to iterate over the parsed dictionaries.

Each resource provided by this file is set as an attribute. The attribute name is the resource name taken from the Limit column, converted to lowercase with its words joined by underscores (‘_’). If unsure whether an attribute exists, check via the ‘__contains__’ method before fetching it. The attribute value is a Limits object wrapping the corresponding hard_limit, soft_limit and units.
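The attribute-name conversion described above can be sketched in one line (an illustrative helper, not the parser's actual code):

```python
def limit_attr_name(limit_label):
    """Convert a Limit column label to its attribute name,
    e.g. 'Max open files' -> 'max_open_files'."""
    return limit_label.strip().lower().replace(' ', '_')
```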

Typical content looks like:

Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            10485760             unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             9                    99                   processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       15211                15211                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

Examples

>>> len(proc_limits)
16
>>> proc_limits.max_processes.hard_limit
'99'
>>> proc_limits.max_processes.soft_limit
'9'
>>> 'max_cpu_time' in proc_limits
True
>>> proc_limits.max_cpu_time.soft_limit
'unlimited'
>>> proc_limits.max_cpu_time.units
'seconds'
Raises

insights.parsers.ParseException -- if the limits file is empty or doesn’t exist.

parse_content(content)[source]

This method must be implemented by classes based on this class.

ProcStat - File /proc/stat

This parser reads the content of /proc/stat.

class insights.parsers.proc_stat.ProcStat(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LegacyItemAccess

Class ProcStat parses the content of the /proc/stat.

cpu_percentage

The CPU usage percentage since boot.

Type

string

intr_total

The total of all interrupts serviced including unnumbered architecture specific interrupts.

Type

int

ctxt

The number of context switches that the system underwent.

Type

int

btime

Boot time, in seconds since the Epoch, 1970-01-01 00:00:00 +0000 (UTC).

Type

string

processes

Number of forks since boot.

Type

int

procs_running

Number of processes in runnable state. (Linux 2.5.45 onward.)

Type

int

procs_blocked

Number of processes blocked waiting for I/O to complete. (Linux 2.5.45 onward.)

Type

int

softirq_total

The total of all softirqs; each subsequent column is the total for a particular softirq. (Linux 2.6.31 onward.)

Type

int

A small sample of the content of this file looks like:

cpu  32270961 89036 23647730 1073132344 1140756 0 1522035 18738206 0 0
cpu0 3547155 11248 2563031 135342787 113432 0 199615 2199379 0 0
cpu1 4660934 10954 3248126 132271933 120282 0 279870 2660186 0 0
cpu2 4421035 10729 3306081 132914999 126705 0 194141 2505565 0 0
cpu3 4224551 10633 3139695 133634676 121035 0 181213 2380738 0 0
cpu4 3985452 11151 2946570 134064686 205568 0 165839 2478471 0 0
cpu5 3914912 11396 2896447 134635676 117341 0 164794 2260011 0 0
cpu6 3802544 11418 2817453 134878674 222855 0 182738 2150276 0 0
cpu7 3714375 11503 2730323 135388911 113534 0 153821 2103576 0 0
intr 21359029 22 106 0 0 0 0 3 0 1 0 16 155 357 0 0 671261 0 0 0 0 0 0 0 0 0 0 0 32223 0 4699385 2 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ctxt 17852681
btime 1542179825
processes 19212
procs_running 1
procs_blocked 0
softirq 11867930 1 3501158 6 4705528 368244 0 79 2021509 0 1271405

Examples

>>> type(proc_stat)
<class 'insights.parsers.proc_stat.ProcStat'>
>>> proc_stat.cpu_percentage
'6.73%'
>>> proc_stat.btime
'1542179825'
>>> proc_stat.ctxt
17852681
>>> proc_stat.softirq_total
11867930
>>> proc_stat.intr_total
21359029
>>> proc_stat.processes
19212
>>> proc_stat.procs_running
1
>>> proc_stat.procs_blocked
0
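A common way to derive an aggregate busy percentage from the first cpu line is to treat everything except the idle and iowait columns as busy time. This is an illustrative sketch of that calculation, not necessarily the parser's exact formula:

```python
def cpu_percentage(cpu_fields):
    """Compute an aggregate CPU busy percentage from the /proc/stat 'cpu' line.

    cpu_fields are the numeric columns after the 'cpu' label:
    user, nice, system, idle, iowait, irq, softirq, steal, ...
    Everything except idle + iowait is counted as busy time.
    """
    total = sum(cpu_fields)
    idle = cpu_fields[3] + (cpu_fields[4] if len(cpu_fields) > 4 else 0)
    return '%.2f%%' % (100.0 * (total - idle) / total)
```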
parse_content(content)[source]

This method must be implemented by classes based on this class.

Ps - command ps auxww and others

This module provides processing for the various outputs of the ps command.

class insights.parsers.ps.Ps(*args, **kwargs)[source]

Bases: insights.core.CommandParser

Template Class to parse ps command output.

Raises

ParseException -- Raised if the heading line (containing both user_name and command_name) is not found in the input.

data

List of dicts, where the keys in each dict are the column headers and each item in the list represents a process.

Type

list

running

Set of full command strings for each command including optional path and arguments, in order of listing in the ps output.

Type

set

cmd_names

Set of just the command names, minus any path or arguments.

Type

set

services

List of tuples in format (cmd names, user/uid/pid, raw_line) for each command.

Type

list

command_name = 'COMMAND_TEMPLATE'

command_name is the name of the subclass-specific command column in the ps output header; subclasses must override it accordingly.

fuzzy_match(proc)[source]

Are there any commands that contain the given text?

Returns

True if the word proc appears in the command column.

Return type

boolean

Note

‘proc’ can match anywhere in the command path, name or arguments.

max_splits = 0

max_splits is the number of column splits for the ps output; subclasses must override it accordingly.

number_occurences(proc)[source]

Returns the number of occurrences of commands that contain the given text

Returns

The number of occurrences of commands containing the given text

Return type

int

Note

‘proc’ can match anywhere in the command path, name or arguments.

parse_content(content)[source]

This method must be implemented by classes based on this class.

running_pids()[source]

Gives the list of process IDs in the order listed.

Returns

the PIDs from the PID column.

Return type

list

search(**kwargs)[source]

Search the process list for matching rows based on key-value pairs.

This uses the insights.parsers.keyword_search() function for searching; see its documentation for usage details. If no search parameters are given, no rows are returned.

Returns

A list of dictionaries of processes that match the given search criteria.

Return type

list

Examples

>>> ps.search(COMMAND__contains='bash') == [
...    {'%MEM': '0.0', 'TTY': 'pts/3', 'VSZ': '108472', 'ARGS': '', 'PID': '20160', '%CPU': '0.0',
...     'START': '10:09', 'COMMAND': '/bin/bash', 'USER': 'user1', 'STAT': 'Ss', 'TIME': '0:00',
...     'COMMAND_NAME': 'bash', 'RSS': '1896'},
...    {'%MEM': '0.0', 'TTY': '?', 'VSZ': '9120', 'ARGS': '', 'PID': '20457', '%CPU': '0.0',
...     'START': '10:09', 'COMMAND': '/bin/bash', 'USER': 'root', 'STAT': 'Ss', 'TIME': '0:00',
...     'COMMAND_NAME': 'bash', 'RSS': '832'}
... ]
True
>>> ps.search(USER='root', COMMAND__contains='bash') == [
...    {'%MEM': '0.0', 'TTY': '?', 'VSZ': '9120', 'ARGS': '', 'PID': '20457', '%CPU': '0.0',
...     'START': '10:09', 'COMMAND': '/bin/bash', 'USER': 'root', 'STAT': 'Ss', 'TIME': '0:00',
...     'COMMAND_NAME': 'bash', 'RSS': '832'}
... ]
True
>>> ps.search(TTY='pts/3') == [
...    {'%MEM': '0.0', 'TTY': 'pts/3', 'VSZ': '108472', 'ARGS': '', 'PID': '20160', '%CPU': '0.0',
...     'START': '10:09', 'COMMAND': '/bin/bash', 'USER': 'user1', 'STAT': 'Ss', 'TIME': '0:00',
...     'COMMAND_NAME': 'bash', 'RSS': '1896'}
... ]
True
>>> ps.search(STAT__contains='Z') == [
...    {'%MEM': '0.0', 'TTY': '?', 'VSZ': '0', 'ARGS': '', 'PID': '1821', '%CPU': '0.0',
...     'START': 'May31', 'COMMAND': '[kondemand/0]', 'USER': 'root', 'STAT': 'Z', 'TIME': '0:29',
...     'COMMAND_NAME': '[kondemand/0]', 'RSS': '0'}
... ]
True
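The KEY=value and KEY__contains=value semantics used by search() can be sketched with a simplified matcher. The real insights.parsers.keyword_search supports additional matcher suffixes; this is only an illustration of the two forms shown above:

```python
def keyword_search(rows, **kwargs):
    """Simplified sketch of keyword-search matching over a list of row dicts.

    KEY=value requires exact equality of the column value;
    KEY__contains=value requires the column value to contain the substring.
    With no criteria given, no rows are returned (matching the documented
    behaviour of search()).
    """
    if not kwargs:
        return []

    def matches(row, key, want):
        if key.endswith('__contains'):
            col = key[:-len('__contains')]
            return col in row and want in row[col]
        return key in row and row[key] == want

    return [r for r in rows
            if all(matches(r, k, v) for k, v in kwargs.items())]
```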
user_name = 'USER_TEMPLATE'

user_name is the name of the subclass-specific user name column in the ps output header; subclasses must override it accordingly.

users(proc)[source]

Searches for all users running a given command. If the user column is not present then returns an empty dict.

Returns

each username as a key to a list of PIDs (as strings) that are running the given process. {} if neither USER nor UID is found or proc is not found.

Return type

dict

Note

‘proc’ must match the entire command and arguments.

class insights.parsers.ps.PsAlxwww(*args, **kwargs)[source]

Bases: insights.parsers.ps.Ps

Class to parse the command ps alxwww. See method and attribute details in the Ps parser.

Sample input data:

F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
4     0     1     0  20   0 128292  6928 ep_pol Ss   ?          0:02 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
1     0     2     0  20   0      0     0 kthrea S    ?          0:00 [kthreadd]
1     0     3     2  20   0      0     0 smpboo S    ?          0:00 [ksoftirqd/0]
5     0     4     2  20   0      0     0 worker S    ?          0:00 [kworker/0:0]
1     0     5     2   0 -20      0     0 worker S<   ?          0:00 [kworker/0:0H]
1     0     6     2  20   0      0     0 worker S    ?          0:00 [kworker/u4:0]
1     0     7     2 -100  -      0     0 smpboo S    ?          0:00 [migration/0]
1     0     8     2  20   0      0     0 rcu_gp S    ?          0:00 [rcu_bh]

Examples

>>> type(ps_alxwww)
<class 'insights.parsers.ps.PsAlxwww'>
>>> 'systemd' in ps_alxwww.cmd_names
True
>>> '/usr/lib/systemd/systemd --switched-root --system --deserialize 22' in ps_alxwww.running
True
>>> ps_alxwww.search(COMMAND_NAME__contains='systemd') == [{
...     'F': '4', 'UID': '0', 'PID': '1', 'PPID': '0', 'PRI': '20', 'NI': '0', 'VSZ': '128292', 'RSS': '6928',
...     'WCHAN': 'ep_pol', 'STAT': 'Ss', 'TTY': '?', 'TIME': '0:02',
...     'COMMAND': '/usr/lib/systemd/systemd --switched-root --system --deserialize 22',
...     'COMMAND_NAME': 'systemd', 'ARGS': '--switched-root --system --deserialize 22'
... }]
True
class insights.parsers.ps.PsAux(*args, **kwargs)[source]

Bases: insights.parsers.ps.PsAuxww

class insights.parsers.ps.PsAuxcww(*args, **kwargs)[source]

Bases: insights.parsers.ps.PsAuxww

class insights.parsers.ps.PsAuxww(*args, **kwargs)[source]

Bases: insights.parsers.ps.Ps

Class PsAuxww parses the output of the ps auxww command. A small sample of the output of this command looks like:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0  19356  1544 ?        Ss   May31   0:01 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
root      1661  0.0  0.0 126252  1392 ?        Ss   May31   0:04 /usr/sbin/crond -n
root      1691  0.0  0.0  42688   172 ?        Ss   May31   0:00 /usr/sbin/rpc.mountd
root      1821  0.0  0.0      0     0 ?        Z    May31   0:29 [kondemand/0]
root      1864  0.0  0.0  18244   668 ?        Ss   May31   0:05 /usr/sbin/irqbalance --foreground
user1    20160  0.0  0.0 108472  1896 pts/3    Ss   10:09   0:00 /bin/bash
root     20357  0.0  0.0   9120   832 ?        Ss   10:09   0:00 /usr/sbin/dhclient enp0s25
root     20457  0.0  0.0   9120   832 ?        Ss   10:09   0:00 /bin/bash

PsAuxww attempts to read the output of ps auxwww, ps aux, and ps auxcww commands from archives.

Examples

>>> type(ps_auxww)
<class 'insights.parsers.ps.PsAuxww'>
>>> ps_auxww.running == set([
...     '/bin/bash', '/usr/sbin/rpc.mountd', '/usr/lib/systemd/systemd --switched-root --system --deserialize 22',
...     '/usr/sbin/irqbalance --foreground', '/usr/sbin/dhclient enp0s25', '[kondemand/0]', '/usr/sbin/crond -n'
... ])
True
>>> ps_auxww.cpu_usage('[kondemand/0]')
'0.0'
>>> ps_auxww.users('/bin/bash') == {'root': ['20457'], 'user1': ['20160']}
True
>>> ps_auxww.fuzzy_match('dhclient')
True
>>> sum(int(p['VSZ']) for p in ps_auxww)
333252
cpu_usage(proc)[source]

Searches for the first command matching proc and returns its CPU usage as a string.

Returns

the %CPU column corresponding to proc in command or None if proc is not found.

Return type

str

Note

‘proc’ must match the entire command and arguments.

class insights.parsers.ps.PsEf(*args, **kwargs)[source]

Bases: insights.parsers.ps.Ps

Class PsEf parses the output of the ps -ef command. A small sample of the output of this command looks like:

UID         PID   PPID  C STIME TTY          TIME CMD
root          1      0  0 03:53 ?        00:00:06 /usr/lib/systemd/systemd --system --deserialize 15
root          2      0  0 03:53 ?        00:00:00 [kthreadd]
root       1803      1  5 03:54 ?        00:55:22 /usr/bin/openshift start master --config=/etc/origin/master/master-config.yaml --loglevel
root       1969      1  3 03:54 ?        00:33:51 /usr/bin/openshift start node --config=/etc/origin/node/node-config.yaml --loglevel=2
root       1995      1  0 03:54 ?        00:02:06 /usr/libexec/docker/rhel-push-plugin
root       2078   1969  0 03:54 ?        00:00:00 journalctl -k -f
root       7201      1  0 03:59 ?        00:00:00 /usr/bin/python /usr/libexec/rhsmd
root     111434      1  0 22:32 ?        00:00:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx    111435 111434  0 22:32 ?        00:00:00 nginx: worker process

PsEf attempts to read the output of ps -ef commands from archives.

Examples

>>> type(ps_ef)
<class 'insights.parsers.ps.PsEf'>
>>> ps_ef.parent_pid("111435")
['111434', 'nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf']
>>> ps_ef.users('nginx: worker process')
{'nginx': ['111435']}
>>> ps_ef.fuzzy_match('kthreadd')
True
parent_pid(pid)[source]

Search for the parent pid of command matching pid and returns the parent pid.

Returns

A two-element list: the parent PID corresponding to pid, and the parent command name. None if pid is not found.

Return type

list

class insights.parsers.ps.PsEo(*args, **kwargs)[source]

Bases: insights.parsers.ps.Ps

Class to parse the command ps -eo pid,ppid,comm

Sample input data:

  PID  PPID COMMAND
    1     0 systemd
    2     0 kthreadd
    3     2 ksoftirqd/0
 2416     1 auditd
 2419  2416 audispd
 2421  2419 sedispatch
 2892     1 NetworkManager
 3172  2892 dhclient
 3871     1 master
 3886  3871 qmgr
13724  3871 pickup
15663     2 kworker/0:1
16998     2 kworker/0:3
17259     2 kworker/0:0
18294  3357 sshd
pid_info

Dictionary with PID as key containing ps row as a dict

Type

dict

Examples

>>> type(ps_eo)
<class 'insights.parsers.ps.PsEo'>
>>> ps_eo.pid_info['1'] == {'PID': '1', 'PPID': '0', 'COMMAND': 'systemd', 'COMMAND_NAME': 'systemd', 'ARGS': ''}
True
>>> ps_eo.children('2') == [
...     {'PID': '3', 'PPID': '2', 'COMMAND': 'ksoftirqd/0', 'COMMAND_NAME': 'ksoftirqd/0', 'ARGS': ''},
...     {'PID': '15663', 'PPID': '2', 'COMMAND': 'kworker/0:1', 'COMMAND_NAME': 'kworker/0:1', 'ARGS': ''},
...     {'PID': '16998', 'PPID': '2', 'COMMAND': 'kworker/0:3', 'COMMAND_NAME': 'kworker/0:3', 'ARGS': ''},
...     {'PID': '17259', 'PPID': '2', 'COMMAND': 'kworker/0:0', 'COMMAND_NAME': 'kworker/0:0', 'ARGS': ''}
... ]
True
children(ppid)[source]

list: Returns a list of dicts for all rows with ppid as the parent PID

parse_content(content)[source]

This method must be implemented by classes based on this class.

insights.parsers.ps.are_present(tags, line)[source]

bool: Returns True if all tags are present in line.

PulpWorkerDefaults - file /etc/default/pulp_workers

The PulpWorkerDefaults parser reads the shell options set in the /etc/default/pulp_workers file. These are made available using the insights.core.SysconfigOptions class methods.

Sample file contents:

# Configuration file for Pulp's Celery workers

# Define the number of worker nodes you wish to have here. This defaults to the number of processors
# that are detected on the system if left commented here.
PULP_CONCURRENCY=1

# Configure Python's encoding for writing all logs, stdout and stderr
PYTHONIOENCODING="UTF-8"

Examples

>>> pulp_defs = shared[PulpWorkerDefaults]
>>> type(pulp_defs)
<class 'insights.parsers.pulp_worker_defaults.PulpWorkerDefaults'>
>>> 'PULP_CONCURRENCY' in pulp_defs
True
>>> pulp_defs['PULP_CONCURRENCY']  # Note string return value
'1'
>>> 'PULP_MAX_TASKS_PER_CHILD' in pulp_defs
False
>>> pulp_defs['PYTHONIOENCODING']  # Values are dequoted as per bash
'UTF-8'
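The sysconfig-style parsing behind these examples (comments ignored, values dequoted as in bash) can be sketched as follows; this is a hypothetical simplification, not the insights.core.SysconfigOptions implementation:

```python
def parse_sysconfig(lines):
    """Parse KEY=value shell-option lines into a dict of strings.

    Comment lines and lines without '=' are skipped; surrounding single
    or double quotes are stripped from values, as bash would do.
    """
    data = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#') or '=' not in line:
            continue
        key, _, value = line.partition('=')
        data[key.strip()] = value.strip().strip('"\'')
    return data
```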
class insights.parsers.pulp_worker_defaults.PulpWorkerDefaults(context)[source]

Bases: insights.core.SysconfigOptions

Parse the /etc/default/pulp_workers file.

PuppetserverConfig - file /etc/sysconfig/puppetserver

class insights.parsers.puppetserver_config.PuppetserverConfig(*args, **kwargs)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Warning

This parser is deprecated, please use insights.parsers.sysconfig.PuppetserverSysconfig instead.

Parse the puppetserver configuration file.

Produces a simple dictionary of keys and values from the configuration file contents, stored in the data attribute. The object also functions as a dictionary itself thanks to the insights.core.LegacyItemAccess mixin class.

Sample configuration file:

###########################################
# Init settings for puppetserver
###########################################

# Location of your Java binary (version 7 or higher)
JAVA_BIN="/usr/bin/java"

# Modify this if you'd like to change the memory allocation, enable JMX, etc
JAVA_ARGS="-Xms2g -Xmx2g -XX:MaxPermSize=256m"

# These normally shouldn't need to be edited if using OS packages
USER="puppet"
GROUP="puppet"
INSTALL_DIR="/opt/puppetlabs/server/apps/puppetserver"
CONFIG="/etc/puppetlabs/puppetserver/conf.d"

BOOTSTRAP_CONFIG="/etc/puppetlabs/puppetserver/services.d/,/opt/puppetlabs/server/apps/puppetserver/config/services.d/"

SERVICE_STOP_RETRIES=60

START_TIMEOUT=300

RELOAD_TIMEOUT=120

Examples

>>> puppetserver_config['START_TIMEOUT']
'300'
>>> 'AUTO' in puppetserver_config
False
parse_content(content)[source]

This method must be implemented by classes based on this class.

QemuConf - file /etc/libvirt/qemu.conf

The /etc/libvirt/qemu.conf file is in a key-value format, but a single value may span several lines.

Given a file containing the following test data:

vnc_listen = "0.0.0.0"
vnc_auto_unix_socket = 1
vnc_tls = 1
vnc_tls_x509_cert_dir = "/etc/pki/libvirt-vnc"
security_driver = "selinux"
cgroup_device_acl = [
 "/dev/null", "/dev/full", "/dev/zero",
 "/dev/random", "/dev/urandom",
 "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
 "/dev/rtc","/dev/hpet", "/dev/vfio/vfio"
 ]

Example

>>> config = shared[QemuConf]
>>> config.get('vnc_listen')
'0.0.0.0'
>>> config.get('vnc_tls')
'1'
>>> "/dev/random" in config.get('cgroup_device_acl')
True
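Handling the multi-line list values shown in the sample data can be sketched like this (a hypothetical simplification, not the QemuConf implementation):

```python
def parse_qemu_conf(lines):
    """Parse key = value lines where a '[' ... ']' list may span lines.

    Comments after '#' are stripped; a value opening with '[' but not
    closing on the same line is accumulated until the closing ']'.
    """
    data = {}
    key, buf, in_list = None, [], False
    for line in lines:
        line = line.split('#', 1)[0].strip()
        if not line:
            continue
        if in_list:
            buf.append(line)
            if line.endswith(']'):
                data[key] = ' '.join(buf)
                in_list = False
            continue
        if '=' in line:
            key, _, value = line.partition('=')
            key, value = key.strip(), value.strip()
            if value.startswith('[') and not value.endswith(']'):
                buf, in_list = [value], True
            else:
                data[key] = value.strip('"')
    return data
```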
class insights.parsers.qemu_conf.QemuConf(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

A dict of the content of the qemu.conf configuration file.

data

Dictionary of parsed data that splits by ‘=’

Type

dict

parse_content(content)[source]

Parse file content of qemu.conf

QemuXML - file /etc/libvirt/qemu/*.xml and /var/run/libvirt/qemu/*.xml

Parsers provided by this module are:

QemuXML - file /etc/libvirt/qemu/*.xml

VarQemuXML - file /var/run/libvirt/qemu/*.xml

OpenStackInstanceXML - file /etc/libvirt/qemu/*.xml

class insights.parsers.qemu_xml.BaseQemuXML(context)[source]

Bases: insights.core.XMLParser

Base class for parsing Qemu XML files. It uses the XMLParser mixin class.

vm_name

Name of VM

Type

str

parse_content(content)[source]

All child classes inherit this function to parse the XML file automatically. By default it calls parse_dom() to parse all necessary data into data; the xmlns (the default namespace) is available to this function.

parse_dom()[source]

Parse xml information in data and return.

Returns

Parsed xml data. An empty dictionary when content is blank.

Return type

dict

class insights.parsers.qemu_xml.OpenStackInstanceXML(context)[source]

Bases: insights.parsers.qemu_xml.BaseQemuXML

Parse OpenStack instances metadata based on the class BaseQemuXML.

This parser depends on insights.components.openstack.IsOpenStackCompute and will be fired only if the dependency is met.

Sample metadata section in the XML file:

<metadata>
<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
  <nova:package version="14.0.3-8.el7ost"/>
  <nova:name>django_vm_001</nova:name>
  <nova:creationTime>2017-10-09 08:51:28</nova:creationTime>
  <nova:flavor name="vpc1-cf1-foo-bar">
    <nova:memory>8096</nova:memory>
    <nova:disk>10</nova:disk>
    <nova:swap>0</nova:swap>
    <nova:ephemeral>0</nova:ephemeral>
    <nova:vcpus>4</nova:vcpus>
  </nova:flavor>
  <nova:owner>
    <nova:user uuid="96e9d2b749ea48fcb5a911e6f0e144f2">django_user_01</nova:user>
    <nova:project uuid="5a50e9d0d19746158958be0c759793fb">vpcdi1</nova:project>
  </nova:owner>
  <nova:root type="image" uuid="1a05a423-dfae-428a-ae54-1614d8024e76"/>
</nova:instance>
</metadata>

Examples

>>> rhosp_xml.domain_name
'instance-000008d6'
>>> rhosp_xml.nova.get('version')
'14.0.3-8.el7ost'
>>> rhosp_xml.nova.get('instance_name')
'django_vm_001'
>>> rhosp_xml.nova.get('user')
'django_user_01'
>>> rhosp_xml.nova.get('root_disk_type')
'image'
>>> rhosp_xml.nova.get('flavor_vcpus')
'4'
domain_name

XML domain name.

Type

str

nova

OpenStack Compute Metadata.

Type

dict

parse_content(content)[source]

All child classes inherit this function to parse the XML file automatically. By default it calls parse_dom() to parse all necessary data into data; the xmlns (the default namespace) is available to this function.

class insights.parsers.qemu_xml.QemuXML(context)[source]

Bases: insights.parsers.qemu_xml.BaseQemuXML

This class parses xml files under /etc/libvirt/qemu/ using BaseQemuXML base parser.

Sample file:

<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit 05-s00c06h0
or other application using the libvirt API.
-->

<domain type='kvm'>
  <name>05-s00c06h0</name>
  <uuid>02cf0bba-2bd6-11e7-8337-e4115b9a50d0</uuid>
  <memory unit='KiB'>12582912</memory>
  <currentMemory unit='KiB'>12582912</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='4'/>
    <emulatorpin cpuset='1-4'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0-1'/>
    <memnode cellid='0' mode='strict' nodeset='0'/>
    <memnode cellid='1' mode='strict' nodeset='1'/>
  </numatune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
    <boot dev='network'/>
    <bootmenu enable='yes' timeout='1000'/>
    <bios useserial='yes' rebootTimeout='0'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu>
    <numa>
      <cell id='0' cpus='0-1' memory='6291456' unit='KiB'/>
      <cell id='1' cpus='2-3' memory='6291456' unit='KiB'/>
    </numa>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='threads'/>
      <source file='/var/lib/libvirt/images/05-s00c06h0_1.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='hostdev' managed='yes'>
      <mac address='b2:59:73:15:00:00'/>
      <source>
        <address type='pci' domain='0x0000' bus='0x04' slot='0x10' function='0x0'/>
      </source>
      <rom bar='on' file='/opt/vcp/share/ipxe/808610ed.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <interface type='hostdev' managed='yes'>
      <mac address='b2:59:73:15:00:01'/>
      <source>
        <address type='pci' domain='0x0000' bus='0x04' slot='0x10' function='0x1'/>
      </source>
      <rom bar='on' file='/opt/vcp/share/ipxe/808610ed.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='pipe'>
      <source path='/var/lib/libvirt/qemu/channels/FROM-05-s00c06h0'/>
      <target type='virtio' name='virtio2host'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='pipe'>
      <source path='/var/lib/libvirt/qemu/channels/HGC-05-s00c06h0'/>
      <target type='virtio' name='virtio_host_guest_check'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes'>
      <listen type='address'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <watchdog model='i6300esb' action='reset'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </watchdog>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>

Examples

>>> xml_numa.file_name == 'vm.xml'
True
>>> xml_numa.vm_name == '05-s00c06h0'
True
>>> memnode = xml_numa.get_elements('./numatune/memnode', None)
>>> len(memnode[0].items()) == 3
True
>>> len(memnode[1].items()) == 3
True
>>> memnode[0].get('cellid') == '0'
True
>>> memnode[1].get('mode') == 'strict'
True
class insights.parsers.qemu_xml.VarQemuXML(context)[source]

Bases: insights.parsers.qemu_xml.BaseQemuXML

This class parses xml files under /var/run/libvirt/qemu/ using the BaseQemuXML base parser.

Sample file:

<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit test-idm-client-ccveu-net
or other application using the libvirt API.
-->

<domstatus state='running' reason='unpaused' pid='17150'>
  <monitor path='/var/lib/libvirt/qemu/domain-59-test-idm-client-ccve/monitor.sock' json='1' type='unix'/>
  <vcpus>
    <vcpu id='0' pid='17156'/>
  </vcpus>
  <qemuCaps>
    <flag name='kvm'/>
    <flag name='mem-path'/>
  </qemuCaps>
  <devices>
    <device alias='balloon0'/>
  </devices>
  <libDir path='/var/lib/libvirt/qemu/domain-59-test-idm-client-ccve'/>
  <domain type='kvm' id='59'>
    <name>test-idm-client-ccveu-net</name>
    <uuid>78177d07-ac0e-4057-b1de-9ccd66cbc3d7</uuid>
    <metadata xmlns:ovirt="http://ovirt.org/vm/tune/1.0">
      <ovirt:qos/>
    </metadata>
    <maxMemory slots='16' unit='KiB'>4294967296</maxMemory>
    <memory unit='KiB'>2097152</memory>
    <os>
      <type arch='x86_64' machine='pc-i440fx-rhel7.2.0'>hvm</type>
      <bootmenu enable='yes' timeout='10000'/>
      <smbios mode='sysinfo'/>
    </os>
    <devices>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <disk type='file' device='cdrom'>
        <driver name='qemu' type='raw'/>
        <source startupPolicy='optional'/>
      </disk>
    </devices>
  </domain>
</domstatus>
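A libvirt domstatus document like the sample above can be explored with the standard library alone. The following is a minimal sketch (not the insights parser itself, and `summarize_domstatus` is a hypothetical helper) of pulling a few facts out of such a document with xml.etree.ElementTree:

```python
import xml.etree.ElementTree as ET

# Trimmed-down domstatus document based on the sample above.
DOMSTATUS = """\
<domstatus state='running' reason='unpaused' pid='17150'>
  <vcpus>
    <vcpu id='0' pid='17156'/>
  </vcpus>
  <domain type='kvm' id='59'>
    <name>test-idm-client-ccveu-net</name>
    <memory unit='KiB'>2097152</memory>
  </domain>
</domstatus>
"""

def summarize_domstatus(xml_text):
    """Return a dict with the domain state, name and vCPU pids."""
    root = ET.fromstring(xml_text)
    return {
        'state': root.get('state'),
        'name': root.findtext('./domain/name'),
        'vcpu_pids': [v.get('pid') for v in root.findall('./vcpus/vcpu')],
    }

print(summarize_domstatus(DOMSTATUS))
```

The real parser exposes the same kind of lookups through its `get_elements` method, as the Examples above show.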

QPID statistics - command qpid-stat

This module contains parsers that check the QPID daemon statistics. qpidd is used by Satellite for communication between clients, capsules and servers.

Parsers provided by this module are:

QpidStatQ - command /usr/bin/qpid-stat -q --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt -b amqps://localhost:5671

QpidStatU - command /usr/bin/qpid-stat -u --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt -b amqps://localhost:5671

QpidStatG - command /usr/bin/qpid-stat -g --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt -b amqps://localhost:5671

class insights.parsers.qpid_stat.QpidStat(context)[source]

Bases: insights.core.CommandParser

Base class for parsing QpidStat command.

parse_content(content)[source]

This method must be implemented by classes based on this class.

search(**kwargs)[source]

Search for rows in the data matching keywords in the search.

This method uses the insights.parsers.keyword_search() function - see its documentation for a complete description of its keyword recognition capabilities.

Parameters

**kwargs -- Key-value pairs of search parameters.

Returns

A list of subscriptions that matched the search criteria.

Return type

(list)
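The keyword recognition used by search() can be illustrated with a simplified stand-in. This sketch assumes only two of the recognized forms: a `Field__contains` suffix means "the Field column contains this substring", and a bare keyword means exact equality. It is not the real insights.parsers.keyword_search() implementation.

```python
def simple_keyword_search(rows, **kwargs):
    """Filter a list of row dicts by keyword criteria."""
    results = []
    for row in rows:
        for key, value in kwargs.items():
            if key.endswith('__contains'):
                field = key[:-len('__contains')]
                if field not in row or value not in row[field]:
                    break  # this row fails the substring criterion
            elif row.get(key) != value:
                break  # this row fails the equality criterion
        else:  # no criterion failed
            results.append(row)
    return results

rows = [
    {'Statistic': 'total-enqueues', 'Messages': '1,726,798'},
    {'Statistic': 'total-dequeues', 'Messages': '1,726,798'},
    {'Statistic': 'acquires', 'Messages': '1,726,798'},
]
print(simple_keyword_search(rows, Statistic__contains='enqueues'))
```

Usage mirrors the doctests further below, e.g. `qpid_stat_g.search(Statistic__contains='enqueues')`.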

class insights.parsers.qpid_stat.QpidStatG(context)[source]

Bases: insights.parsers.qpid_stat.QpidStat

This parser reads the output of the command qpid-stat -g --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt -b amqps://localhost:5671

Sample output:

Broker Summary:
  uptime           cluster       connections  sessions  exchanges  queues
  =========================================================================
  96d 23h 50m 41s  <standalone>  23           37        14         33

Aggregate Broker Statistics:
  Statistic                   Messages    Bytes
  ========================================================
  queue-depth                 0           0
  total-enqueues              1,726,798   42,589,932,236
  total-dequeues              1,726,798   42,589,932,236
  persistent-enqueues         28,725      23,889,836
  persistent-dequeues         28,725      23,889,836
  transactional-enqueues      0           0
  transactional-dequeues      0           0
  flow-to-disk-depth          0           0
  flow-to-disk-enqueues       0           0
  flow-to-disk-dequeues       0           0
  acquires                    1,726,798
  releases                    0
  discards-no-route           41,163,896
data

A list of dictionaries with the key-value data from the table.

Type

list of dict

by_queue

A dictionary of the same data dictionaries stored by cluster name or queue name.

Type

dict of dict

Examples

>>> type(qpid_stat_g)
<class 'insights.parsers.qpid_stat.QpidStatG'>
>>> type(qpid_stat_g.data) == type([])
True
>>> type(qpid_stat_g.data[0]) == type({}) # Each row is a dictionary formed from the table
True
>>> qpid_stat_g.data[0]['uptime']
'97d 0h 16m 24s'
>>> qpid_stat_g.data[0]['cluster']
'<standalone>'
>>> qpid_stat_g.data[1]['Statistic']
'queue-depth'
>>> qpid_stat_g.data[1]['Messages']
'0'
>>> qpid_stat_g.data[11]['Bytes']
''
>>> type(qpid_stat_g.by_queue) == type({}) # Dictionary lookup by queue ID
True
>>> qpid_stat_g.by_queue['queue-depth'] == qpid_stat_g.data[1]
True
>>> enqueues = qpid_stat_g.search(Statistic__contains='enqueues')  # Keyword search
>>> type(enqueues) == type([])
True
>>> len(enqueues)
4
>>> enqueues[0] == qpid_stat_g.data[2]  # List contains matching items
True
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.qpid_stat.QpidStatQ(context)[source]

Bases: insights.parsers.qpid_stat.QpidStat

This parser reads the output of the command qpid-stat -q --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt -b amqps://localhost:5671

Sample output:

Queues
  queue                                                                      dur  autoDel  excl  msg   msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind
  ==========================================================================================================================================================
  00d6cc19-15fc-4b7c-af3c-6a38e7bb386d:1.0                                        Y        Y        0     2      2       0    486      486         1     2
  0f7f1a3d-daff-42a6-a994-29050a2eabde:1.0                                        Y        Y        0     8      8       0   4.88k    4.88k        1     2
data

A list of dictionaries with the key-value data from the table.

Type

list of dict

by_queue

A dictionary of the same data dictionaries stored by queue ID.

Type

dict of dict

Examples

>>> type(qpid_stat_q)
<class 'insights.parsers.qpid_stat.QpidStatQ'>
>>> type(qpid_stat_q.data) == type([])  # Queue data stored as it appears
True
>>> type(qpid_stat_q.data[0]) == type({}) # Each row is a dictionary formed from the table
True
>>> qpid_stat_q.data[0]['queue']
'00d6cc19-15fc-4b7c-af3c-6a38e7bb386d:1.0'
>>> qpid_stat_q.data[0]['dur']  # Blank columns are empty strings
''
>>> qpid_stat_q.data[0]['autoDel']  # Flags are left as strings
'Y'
>>> qpid_stat_q.data[0]['msgOut']  # Numbers are left as strings
'2'
>>> qpid_stat_q.data[1]['bytesOut']  # No byte measure conversion
'4.88k'
>>> type(qpid_stat_q.by_queue) == type({}) # Dictionary lookup by queue ID
True
>>> qpid_stat_q.by_queue['00d6cc19-15fc-4b7c-af3c-6a38e7bb386d:1.0'] == qpid_stat_q.data[0]
True
>>> total_messages_in = 0
>>> for queue in qpid_stat_q:  # Can be used as an iterator
...     total_messages_in += int(queue['msgIn'])
...
>>> total_messages_in
10
>>> qpid_stat_q.search(queue__contains=':2.0')  # Keyword search
[]
class insights.parsers.qpid_stat.QpidStatU(context)[source]

Bases: insights.parsers.qpid_stat.QpidStat

This parser reads the output of the command qpid-stat -u --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt -b amqps://localhost:5671

Sample output:

Subscriptions
  subscr               queue                                                                      conn                                    procName          procId  browse  acked  excl  creditMode  delivered  sessUnacked
  ===========================================================================================================================================================================================================================
  0                    00d6cc19-15fc-4b7c-af3c-6a38e7bb386d:1.0                                   qpid.10.20.1.10:5671-10.20.1.10:33787   celery            21409                        CREDIT      2          0
  0                    pulp.agent.c6a430bc-5ec7-42f8-99ce-f320ed0b9113                            qpid.10.20.1.10:5671-10.30.0.148:57423  goferd            32227           Y            CREDIT      0          0
  1                    server.example.com:event                                                   qpid.10.20.1.10:5671-10.20.1.10:33848   Qpid Java Client  21066           Y      Y     WINDOW      2,623      0
  0                    celeryev.4c77bd03-1cde-49eb-bdc0-b7c38f9ff93d                              qpid.10.20.1.10:5671-10.20.1.10:33777   celery            21356           Y            CREDIT      363,228    0
  1                    celery                                                                     qpid.10.20.1.10:5671-10.20.1.10:33786   celery            21409           Y            CREDIT      5          0
data

A list of dictionaries with the key-value data from the table.

Type

list of dict

by_queue

A dictionary of the same data dictionaries stored by queue ID.

Type

dict of dict

Examples

>>> type(qpid_stat_u)
<class 'insights.parsers.qpid_stat.QpidStatU'>
>>> type(qpid_stat_u.data) == type([]) # Subscription data stored as it appears
True
>>> type(qpid_stat_u.data[0]) == type({}) # Each row is a dictionary formed from the table
True
>>> qpid_stat_u.data[0]['queue']
'00d6cc19-15fc-4b7c-af3c-6a38e7bb386d:1.0'
>>> qpid_stat_u.data[0]['browse']  # Blank columns are empty strings
''
>>> qpid_stat_u.data[1]['acked']  # Flags are left as strings
'Y'
>>> qpid_stat_u.data[1]['subscr']  # Numbers are left as strings
'0'
>>> qpid_stat_u.data[2]['delivered']  # Beware the commas
'2,623'
>>> type(qpid_stat_u.by_queue) == type({}) # Dictionary lookup by queue ID
True
>>> qpid_stat_u.by_queue['celery'] == qpid_stat_u.data[4]
True
>>> total_celery_queues = 0
>>> for subscr in qpid_stat_u:  # Can be used as an iterator
...     if subscr['procName'] == 'celery':
...         total_celery_queues += 1
...
>>> total_celery_queues
3
>>> event_queues = qpid_stat_u.search(queue__contains=':event')  # Keyword search
>>> type(event_queues) == type([])
True
>>> len(event_queues)
1
>>> event_queues[0] == qpid_stat_u.data[2]  # List contains matching items
True
parse_content(content)[source]

This method must be implemented by classes based on this class.

QpiddConfig - file /etc/qpid/qpidd.conf

class insights.parsers.qpidd_conf.QpiddConf(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Parse the qpidd configuration file.

Produces a simple dictionary of keys and values from the configuration file contents, stored in the data attribute. The object also functions as a dictionary itself thanks to the insights.core.LegacyItemAccess mixin class.

Sample configuration file:

# Configuration file for qpidd. Entries are of the form:
# name=value
#
# (Note: no spaces on either side of '='). Using default settings:
# "qpidd --help" or "man qpidd" for more details.
#cluster-mechanism=ANONYMOUS
log-enable=error+
log-to-syslog=yes
auth=no
require-encryption=yes
ssl-require-client-authentication=yes
ssl-port=5672
ssl-cert-db=/etc/pki/katello/nssdb
ssl-cert-password-file=/etc/pki/katello/nssdb/nss_db_password-file
ssl-cert-name=broker

interface=lo

Examples

>>> qpidd_conf['auth']
'no'
>>> 'require-encryption' in qpidd_conf
True
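The name=value handling described above can be sketched in a few lines. This is an assumption-laden stand-in for illustration, not the parser's actual parse_content():

```python
def parse_name_value(lines):
    """Build a flat dict from name=value lines, skipping comments and blanks."""
    data = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # comments and blank lines carry no settings
        key, _, value = line.partition('=')
        data[key.strip()] = value.strip()
    return data

sample = """\
# name=value
#cluster-mechanism=ANONYMOUS
auth=no
ssl-port=5672

interface=lo
""".splitlines()
print(parse_name_value(sample))
```

Note that a commented-out setting such as `#cluster-mechanism=ANONYMOUS` is skipped, matching the behaviour the sample file implies.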
parse_content(content)[source]

This method must be implemented by classes based on this class.

Parsers for RabbitMQ

Parsers included in this module are:

RabbitMQReport - command /usr/sbin/rabbitmqctl report

RabbitMQReportOfContainers - files docker_exec_-t_rabbitmq-bundle-docker-*_rabbitmqctl_report

RabbitMQUsers - command /usr/sbin/rabbitmqctl list_users

RabbitMQQueues - command /usr/sbin/rabbitmqctl list_queues name messages consumers auto_delete

RabbitMQEnv - file /etc/rabbitmq/rabbitmq-env.conf

class insights.parsers.rabbitmq.RabbitMQEnv(context)[source]

Bases: insights.core.SysconfigOptions

Parse the content of file /etc/rabbitmq/rabbitmq-env.conf using the SysconfigOptions base class.

Sample content of the file /etc/rabbitmq/rabbitmq-env.conf:

RABBITMQ_SERVER_ERL_ARGS="+K true +P 1048576 -kernel inet_default_connect_options [{nodelay,true},{raw,6,18,<<5000:64/native>>}] -kernel inet_default_listen_options [{raw,6,18,<<5000:64/native>>}]"

Example

>>> rabbitmq_env.rabbitmq_server_erl_args
'+K true +P 1048576 -kernel inet_default_connect_options [{nodelay,true},{raw,6,18,<<5000:64/native>>}] -kernel inet_default_listen_options [{raw,6,18,<<5000:64/native>>}]'
>>> rabbitmq_env.data['RABBITMQ_SERVER_ERL_ARGS']
'+K true +P 1048576 -kernel inet_default_connect_options [{nodelay,true},{raw,6,18,<<5000:64/native>>}] -kernel inet_default_listen_options [{raw,6,18,<<5000:64/native>>}]'
>>> rabbitmq_env.rmq_erl_tcp_timeout
'5000'
rabbitmq_server_erl_args

The value of RABBITMQ_SERVER_ERL_ARGS if set in the file, otherwise None.

Type

str

rmq_erl_tcp_timeout

The TCP timeout parsed from the Erlang arguments, if the value in inet_default_connect_options equals the value in inet_default_listen_options; otherwise None.

Type

str
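One plausible way to derive rmq_erl_tcp_timeout (this is assumed logic for illustration, not the parser's own code) is to pull the timeout field out of both inet_default_*_options raw-socket settings and report it only when the two agree:

```python
import re

ERL_ARGS = ('+K true +P 1048576 '
            '-kernel inet_default_connect_options '
            '[{nodelay,true},{raw,6,18,<<5000:64/native>>}] '
            '-kernel inet_default_listen_options '
            '[{raw,6,18,<<5000:64/native>>}]')

def erl_tcp_timeout(args):
    """Return the shared timeout value, or None if the two options differ."""
    connect = re.search(r'inet_default_connect_options \S*<<(\d+):', args)
    listen = re.search(r'inet_default_listen_options \S*<<(\d+):', args)
    if connect and listen and connect.group(1) == listen.group(1):
        return connect.group(1)
    return None

print(erl_tcp_timeout(ERL_ARGS))
```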

class insights.parsers.rabbitmq.RabbitMQQueues(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse the output of the rabbitmqctl list_queues command.

The actual command is rabbitmqctl list_queues name messages consumers auto_delete.

The four columns that are output are:

  1. name - The name of the queue with non-ASCII characters escaped as in C.

  2. messages - Sum of ready and unacknowledged messages (queue depth).

  3. consumers - Number of consumers.

  4. auto_delete - Whether the queue will be deleted automatically when no longer used.

The output of the command looks like:

cinder-scheduler        0       3       false
cinder-scheduler.ha-controller  0       3       false
cinder-scheduler_fanout_ea9c69fb630f41b2ae6120eba3cd43e0        8141    1   true
cinder-scheduler_fanout_9aed9fbc3d4249289f2cb5ea04c062ab        8145    0   true
cinder-scheduler_fanout_b7a2e488f3ed4e1587b959f9ac255b93        8141    0   true
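The column handling described above can be sketched as follows; the QueueInfo namedtuple here is a local stand-in for the parser's, and the error handling is only a hedged approximation of the documented ParseException/ValueError behaviour:

```python
from collections import namedtuple

QueueInfo = namedtuple('QueueInfo', ['name', 'messages', 'consumers', 'auto_delete'])
TRUE_FALSE = {'true': True, 'false': False}

def parse_queues(lines):
    """Split each row into the four fields, converting counts and flags."""
    queues = []
    for line in lines:
        name, messages, consumers, auto_delete = line.split()
        if auto_delete not in TRUE_FALSE:
            raise ValueError('bad auto_delete value: %s' % auto_delete)
        queues.append(QueueInfo(name, int(messages), int(consumers),
                                TRUE_FALSE[auto_delete]))
    return queues

rows = parse_queues([
    'cinder-scheduler        0       3       false',
    'cinder-scheduler_fanout_ea9c69fb630f41b2ae6120eba3cd43e0  8141  1  true',
])
print(rows[0])
```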

Examples

>>> queues.data[0]
QueueInfo(name='cinder-scheduler', messages=0, consumers=3, auto_delete=False)
>>> queues.data[0].name
'cinder-scheduler'
>>> queues.data[1].name
'cinder-scheduler.ha-controller'
Raises
  • ParseException -- Raised if the data indicates an error in acquisition or if the auto_delete value is not true or false.

  • ValueError -- Raised if any of the numbers are not valid numbers

class QueueInfo(name, messages, consumers, auto_delete)

Bases: tuple

namedtuple: Structure to hold a line of RabbitMQ queue information.

property auto_delete
property consumers
property messages
property name
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.rabbitmq.RabbitMQReport(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

parse_content(content)[source]

Support StatusOfNode and Permissions Sections only.

Attributes:

results (dict): None if an error was encountered while parsing. For example:

self.result =
{'nstat': {
    "'rabbit@overcloud-controller-0'": {
        'file_descriptors': {
            'total_used': '967',
            'sockets_used': '965',
            'total_limit': '3996',
            'sockets_limit': '3594'},
        'uptime': '3075485',
        'pid': '6005',
        'disk_free': '259739344896',
        'disk_free_limit': '50000000'},
    "'rabbit@overcloud-controller-1'": {
        'file_descriptors': {
            'total_used': '853',
            'sockets_used': '851',
            'total_limit': '3996',
            'sockets_limit': '3594'},
        'uptime': '3075482',
        'pid': '9304',
        'disk_free': '260561866752',
        'disk_free_limit': '50000000'}}
 'perm': {
    '/': {
        'redhat1': ['redhat.*', '.*', '.*'],
        'guest': ['.*', '.*', '.*'],
        'redhat':['redhat.*', '.*', '.*']},
    'test_vhost': ''}}
class insights.parsers.rabbitmq.RabbitMQReportOfContainers(context, extra_bad_lines=[])[source]

Bases: insights.parsers.rabbitmq.RabbitMQReport

Parse the rabbitmqctl report command of each container running on the host.

class insights.parsers.rabbitmq.RabbitMQUsers(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

parse_content(content)[source]

This method must be implemented by classes based on this class.

insights.parsers.rabbitmq.TRUE_FALSE = {'false': False, 'true': True}

Dictionary for converting true/false strings to bool.

Type

dict

RabbitMQ Logs

Module for parsing the log files for RabbitMQ:

RabbitMQLogs - file /var/log/rabbitmq/rabbit@$HOSTNAME.log

RabbitMQStartupErrLog - file /var/log/rabbitmq/startup_err

RabbitMQStartupLog - file /var/log/rabbitmq/startup_log

class insights.parsers.rabbitmq_log.RabbitMQLogs(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing /var/log/rabbitmq/rabbit@$HOSTNAME.log file

Typical content of rabbit@$HOSTNAME.log file is:

=INFO REPORT==== 9-Nov-2016::14:29:11 ===
Starting RabbitMQ 3.6.3 on Erlang 18.3.4.4
Copyright (C) 2007-2016 Pivotal Software, Inc.
Licensed under the MPL.  See http://www.rabbitmq.com/

=INFO REPORT==== 9-Nov-2016::14:29:11 ===
node           : rabbit@overcloud-controller-0
home dir       : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash    : F7g8XhNTzvEK3KywLHh9yA==
log            : /var/log/rabbitmq/rabbit@overcloud-controller-0.log
sasl log       : /var/log/rabbitmq/rabbit@overcloud-controller-0-sasl.log
database dir   : /var/lib/rabbitmq/mnesia/rabbit@overcloud-controller-0
...

Note

Please refer to its super-class insights.core.LogFileOutput for full usage.

Note

Because this parser is defined using a PatternSpec, which returns multiple files, the data in the shared parser state is a list of these parser objects. This means that for the moment you will have to iterate across these objects directly.

Examples

>>> for log in shared[RabbitMQLogs]:
...     print('log file:', log.file_path)
...     print('INFO lines:', len(log.get('INFO REPORT')))
...     print('ERROR lines:', len(log.get('ERROR REPORT')))
...
log file: /var/log/rabbitmq/rabbit@queue.example.com.log
INFO lines: 2
ERROR lines: 0
class insights.parsers.rabbitmq_log.RabbitMQStartupErrLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing /var/log/rabbitmq/startup_err file.

Typical content of startup_err file is:

Error: {node_start_failed,normal}

Crash dump was written to: erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})

Note

Please refer to its super-class insights.core.LogFileOutput

class insights.parsers.rabbitmq_log.RabbitMQStartupLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing /var/log/rabbitmq/startup_log file.

Typical content of startup_log file is:

Starting all nodes...
Starting node rabbit@ubuntu...

+---+   +---+
|   |   |   |
|   |   |   |
|   |   |   |
|   +---+   +-------+
|                   |
| RabbitMQ  +---+   |
|           |   |   |
|   v1.8.0  +---+   |
|                   |
+-------------------+
AMQP 8-0
Copyright (C) 2007-2010 LShift Ltd., Cohesive Financial Technologies LLC., and Rabbit Technologies Ltd.
Licensed under the MPL.  See http://www.rabbitmq.com/

node           : rabbit@ubuntu
app descriptor : /usr/lib/rabbitmq/lib/rabbitmq_server-1.8.0/sbin/../ebin/rabbit.app
home dir       : /var/lib/rabbitmq
cookie hash    : mfoMkOc9CYok/SmH7RH9Jg==
log            : /var/log/rabbitmq/rabbit@ubuntu.log
sasl log       : /var/log/rabbitmq/rabbit@ubuntu-sasl.log
database dir   : /var/lib/rabbitmq/mnesia/rabbit@ubuntu
erlang version : 5.7.4

starting file handle cache server                                     ...done
starting worker pool                                                  ...done
starting database                                                     ...done
starting empty DB check                                               ...done
starting exchange recovery                                            ...done
starting queue supervisor and queue recovery                          ...BOOT ERROR: FAILED

Note

Please refer to its super-class LogFileOutput

RcLocal - file /etc/rc.d/rc.local

class insights.parsers.rc_local.RcLocal(context)[source]

Bases: insights.core.Parser

Parse the /etc/rc.d/rc.local file.

Sample input:

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
data

List of all lines from rc.local that are not comments or blank

Type

list

Examples

>>> shared[RcLocal].data[0]
'touch /var/lock/subsys/local'
>>> shared[RcLocal].get('kernel')
['echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled']
get(value)[source]

Returns the lines containing string value.

parse_content(content)[source]

This method must be implemented by classes based on this class.

RdmaConfig - file /etc/rdma/rdma.conf

class insights.parsers.rdma_config.RdmaConfig(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

This class parses the file /etc/rdma/rdma.conf.

The rdma service reads the /etc/rdma/rdma.conf file to find out which kernel-level and user-level RDMA protocols the administrator wants loaded by default.

data

Dictionary of keys with values in dict.

Type

dict

Sample configuration file:

IPOIB_LOAD=yes
# Load SRP (SCSI Remote Protocol initiator support) module
SRP_LOAD=yes
# Load SRPT (SCSI Remote Protocol target support) module
SRPT_LOAD=yes
# Load iSER (iSCSI over RDMA initiator support) module
ISER_LOAD=yes
# Load iSERT (iSCSI over RDMA target support) module
ISERT_LOAD=yes
# Load RDS (Reliable Datagram Service) network protocol
RDS_LOAD=no
# Load NFSoRDMA client transport module
XPRTRDMA_LOAD=yes
# Load NFSoRDMA server transport module
SVCRDMA_LOAD=no
# Load Tech Preview device driver modules
TECH_PREVIEW_LOAD=no
# Should we modify the system mtrr registers?  We may need to do this if you
# get messages from the ib_ipath driver saying that it couldn't enable
# write combining for the PIO buffs on the card.
#
# Note: recent kernels should do this for us, but in case they don't, we'll
# leave this option
FIXUP_MTRR_REGS=no

Examples

>>> rdma_conf['IPOIB_LOAD']
'yes'
>>> rdma_conf["SRP_LOAD"]
'yes'
>>> rdma_conf["SVCRDMA_LOAD"]
'no'
parse_content(content)[source]

This method must be implemented by classes based on this class.

redhat-release - File /etc/redhat-release

This module provides plugins access to file /etc/redhat-release

Typical content of file /etc/redhat-release is:

Red Hat Enterprise Linux Server release 7.2 (Maipo)

This module parses the file content and stores the data in the dict self.parsed. The version info can also be obtained via obj.major and obj.minor. The properties is_rhel and is_hypervisor specify the host type.
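A minimal sketch of how such a release line can be split into product and version parts (assumed regex for illustration, not the insights implementation):

```python
import re

def parse_release(line):
    """Split a redhat-release line into product, version and code name."""
    m = re.match(r'(?P<product>.+) release (?P<version>[\d.]+)\s*'
                 r'(?:\((?P<code>[^)]+)\))?', line)
    if not m:
        return None
    version = m.group('version')
    parts = version.split('.')
    return {
        'product': m.group('product'),
        'version': version,
        'major': int(parts[0]),
        'minor': int(parts[1]) if len(parts) > 1 else None,
        'code_name': m.group('code'),
    }

print(parse_release('Red Hat Enterprise Linux Server release 7.2 (Maipo)'))
```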

Examples

>>> rh_rls_content = '''
... Red Hat Enterprise Linux Server release 7.2 (Maipo)
... '''.strip()
>>> from insights.tests import context_wrap
>>> shared = {RedhatRelease: RedhatRelease(context_wrap(rh_rls_content))}
>>> release = shared[RedhatRelease]
>>> assert release.raw == rh_rls_content
>>> assert release.major == 7
>>> assert release.minor == 2
>>> assert release.version == "7.2"
>>> assert release.is_rhel
>>> assert release.product == "Red Hat Enterprise Linux Server"
class insights.parsers.redhat_release.RedhatRelease(context)[source]

Bases: insights.core.Parser

Parses the content of file /etc/redhat-release.

property is_rhel

True if this OS belongs to RHEL, else False.

Type

bool

property major

the major version of this OS.

Type

int

property minor

the minor version of this OS.

Type

int

parse_content(content)[source]

This method must be implemented by classes based on this class.

property product

product of this OS.

Type

string

property version

version of this OS.

Type

string

ResolvConf - file /etc/resolv.conf

class insights.parsers.resolv_conf.ResolvConf(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

Parse the /etc/resolv.conf file into a dictionary of keywords and their values. This is made available via the data property but the object itself can be used as a dictionary thanks to the insights.core.LegacyItemAccess mixin class.

Each keyword found in the file is stored as a key in the data dictionary, storing the list of values given (in order) on all occurrences of that keyword in the file.

According to the man page, the ‘domain’ and ‘search’ keywords are mutually exclusive. If more than one instance of these keywords is present, whichever is last becomes the active resolution method. So, the active key stores which of these keywords was the last present in the file.

Sample file content:

; generated by /sbin/dhclient-script
# This file is being maintained by Puppet.
# DO NOT EDIT
search a.b.com b.c.com
options timeout:2 attempts:2
nameserver 10.160.224.51
nameserver 10.61.193.11

Examples

>>> resolv = shared[ResolvConf]
>>> resolv['active']
'search'
>>> resolv['nameserver']
["10.160.224.51", "10.61.193.11" ]
>>> resolv['search']
["a.b.com", "b.c.com"]
>>> resolv.data["options"]  # old style access
["timeout:2", "attempts:2"]
parse_content(content)[source]

This method must be implemented by classes based on this class.

rhev_data_center - datasource rhev_data_center

class insights.parsers.rhev_data_center.RhevDataCenter(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Walk through the /rhev/data-center directory of a RHEV host and return the full paths of files that do not have the correct file ownership, i.e. vdsm:kvm. See the rhev_data_center Datasource for more info.

data

A list of the parsed output returned by rhev_data_center Datasource.

Type

list

incorrect_volume_ownership

Volumes attached to the RHEV VMs in the Data Center having incorrect file ownership.

Type

list

Raises

SkipException -- If no files are found with incorrect ownership.

The following are available in data and incorrect_volume_ownership:

  • name - file owner

  • group - file group

  • path - full path of a file

Examples

>>> assert len(rhev_dc.data) == 4
>>> assert len(rhev_dc.incorrect_volume_ownership) == 1
>>> assert rhev_dc.incorrect_volume_ownership[0]['path'] == '/rhev/data-center/mnt/host1.example.com:_nfsshare_data/a384bf5d-db92-421e-926d-bfb99a6b4b28/images/b7d6cc07-d1f1-44b3-b3c0-7067ec7056a3/4d6e5dea-995f-4a4e-b487-0f70361f6137'
>>> assert rhev_dc.incorrect_volume_ownership[0]['name'] == 'root'
>>> assert rhev_dc.incorrect_volume_ownership[0]['group'] == 'root'
parse_content(content)[source]

This method must be implemented by classes based on this class.

RHNConf - file /etc/rhn/rhn.conf

class insights.parsers.rhn_conf.RHNConf(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

Class to parse the configuration file rhn.conf.

The special feature of rhn.conf is that values can span multiple lines with each intermediate line ending with a comma.

This parser uses the insights.core.LegacyItemAccess mix-in to provide access to its data directly.

data

A dictionary of values keyed by the configuration item. Values spanning multiple lines are compacted together. Values that include a comma are turned into lists.

Type

dict

Sample rhn.conf input:

# Corporate gateway (hostname:PORT):
server.satellite.http_proxy = corporate_gateway.example.com:8080
server.satellite.http_proxy_username =
server.satellite.http_proxy_password =
traceback_mail = test@example.com, test@redhat.com

web.default_taskmaster_tasks = RHN::Task::SessionCleanup,
                               RHN::Task::ErrataQueue,
                               RHN::Task::ErrataEngine,
                               RHN::Task::DailySummary,
                               RHN::Task::SummaryPopulation,
                               RHN::Task::RHNProc,
                               RHN::Task::PackageCleanup
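The continuation handling described above can be sketched as follows (a hedged approximation, not the parser's own code): a value whose line does not contain '=' continues the previous value, and values containing commas become lists.

```python
def parse_rhn_conf(lines):
    """Parse rhn.conf, joining continuation lines and splitting on commas."""
    data = {}
    key = None
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        if '=' in line:
            key, _, value = line.partition('=')
            key = key.strip()
            data[key] = value.strip()
        elif key:  # continuation of the previous value
            data[key] += line
    # values containing commas become lists
    for k, v in data.items():
        if ',' in v:
            data[k] = [item.strip() for item in v.split(',') if item.strip()]
    return data

sample = """\
traceback_mail = test@example.com, test@redhat.com
web.default_taskmaster_tasks = RHN::Task::SessionCleanup,
                               RHN::Task::ErrataQueue,
                               RHN::Task::PackageCleanup
""".splitlines()
print(parse_rhn_conf(sample)['web.default_taskmaster_tasks'])
```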

Examples

>>> conf = shared[RHNConf]
>>> conf.data['server.satellite.http_proxy']  # Long form access
'corporate_gateway.example.com:8080'
>>> conf['server.satellite.http_proxy']  # Short form access
'corporate_gateway.example.com:8080'
>>> conf['traceback_mail']  # split into a list
['test@example.com', 'test@redhat.com']
>>> conf['web.default_taskmaster_tasks'][3]  # Values can span multiple lines
'RHN::Task::DailySummary'
parse_content(content)[source]

This method must be implemented by classes based on this class.

RHN Logs - Files /var/log/rhn/*.log

Module for parsing the content of log files under the /rhn-logs/rhn directory in spacewalk-debug or sosreport archives of Satellite 5.x.

Note

Please refer to the super-class insights.core.LogFileOutput

class insights.parsers.rhn_logs.SatelliteServerLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing the var/log/rhn/rhn_server_satellite.log file

Sample log contents:

2016/11/19 01:13:35 -04:00 Downloading errata data complete
2016/11/19 01:13:35 -04:00 Downloading kickstartable trees metadata
2016/11/19 01:13:35 -04:00    Retrieving / parsing kickstart tree files: rhel-x86_64-server-optional-6-debuginfo (NONE RELEVANT)
2016/11/19 01:13:39 -04:00    debug/output level: 1
2016/11/19 01:13:39 -04:00    db:  rhnsat/<password>@rhnsat
2016/11/19 01:13:39 -04:00
2016/11/19 01:13:39 -04:00 Retrieving / parsing channel-families data
2016/11/19 01:13:44 -04:00 channel-families data complete
2016/11/19 01:13:44 -04:00
2016/11/19 01:13:44 -04:00 RHN Entitlement Certificate sync

Examples

>>> log = shared[SatelliteServerLog]
>>> log.get('Downloading')[0]['raw_message']
'2016/11/19 01:13:35 -04:00 Downloading errata data complete'
>>> list(log.get_after(datetime(2016, 11, 19, 1, 13, 44)))[0]['raw_message']
'2016/11/19 01:13:44 -04:00 channel-families data complete'
class insights.parsers.rhn_logs.SearchDaemonLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing the /var/log/rhn/search/rhn_search_daemon.log file.

Sample log contents:

STATUS | wrapper  | 2013/01/28 14:41:58 | --> Wrapper Started as Daemon
STATUS | wrapper  | 2013/01/28 14:41:58 | Launching a JVM...
INFO   | jvm 1    | 2013/01/28 14:41:59 | Wrapper (Version 3.2.1) http://wrapper.tanukisoftware.org
STATUS | wrapper  | 2013/01/29 17:04:25 | TERM trapped.  Shutting down.
lines

All lines captured in this file.

Type

list

Examples

>>> log = shared[SearchDaemonLog]
>>> log.file_path
'var/log/rhn/search/rhn_search_daemon.log'
>>> log.get('Launching a JVM')[0]['raw_message']
'STATUS | wrapper  | 2013/01/28 14:41:58 | Launching a JVM...'
>>> list(log.get_after(datetime(2013, 1, 29, 0, 0, 0)))[0]['raw_message']
'STATUS | wrapper  | 2013/01/29 17:04:25 | TERM trapped.  Shutting down.'
class insights.parsers.rhn_logs.ServerXMLRPCLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing the rhn_server_xmlrpc.log file.

Sample log line:

2016/04/11 05:52:01 -04:00 23630 10.4.4.17: xmlrpc/registration.welcome_message('lang: None',)
lines

All lines captured in this file.

Type

list

last

Dict of the last log line.

Type

dict

Examples

>>> log = shared[ServerXMLRPCLog]
>>> log.file_path
'var/log/rhn/rhn_server_xmlrpc.log'
>>> log.get('two')
[{'timestamp':'2016/04/11 05:52:01 -04:00',
  'datetime': datetime(2016, 4, 11, 5, 52, 1),
  'pid': '23630',
  'client_ip': '10.4.4.17',
  'module': 'xmlrpc',
  'function': 'registration.welcome_message',
  'client_id': None,
  'args': "'lang: None'",
  'raw_message': "...two..."}]
>>> log.last
{'timestamp':'2016/04/11 05:52:01 -04:00',
 'datetime': datetime(2016, 4, 11, 5, 52, 1),
 'pid': '23630',
 'client_ip': '10.4.4.17',
 'module': 'xmlrpc',
 'function': 'registration.welcome_message',
 'client_id': None,
 'args': "'lang: None'",
 'raw_message': "..."}
parse_content(content)[source]

Parse the logs as in the super class LogFileOutput, then record the last complete log entry. If the last line is incomplete, the entry is taken from the preceding complete line.
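The last-complete-line behaviour can be sketched as follows; this is an illustrative re-implementation, not the parser's actual code, and the trailing line is a hypothetical example of incomplete output:

```python
import re

# Pick the last line that begins with a full timestamp, falling back to
# earlier lines when the final line is incomplete (illustrative only).
TIMESTAMP = re.compile(r'^\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}')

lines = [
    "2016/04/11 05:52:01 -04:00 23630 10.4.4.17: xmlrpc/registration.welcome_message('lang: None',)",
    "trailing output without a timestamp",
]

last = next((line for line in reversed(lines) if TIMESTAMP.match(line)), None)
```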

class insights.parsers.rhn_logs.TaskomaticDaemonLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing the rhn_taskomatic_daemon.log file.

Note

Because the datetime of the last log line is needed, please DO NOT filter this file.

lines

All lines captured in this file.

Type

list

last_log_date

The datetime of the last log entry, taken from the last line.

Type

datetime

Examples

>>> td_log = shared[TaskomaticDaemonLog]
>>> td_log.file_path
'var/log/rhn/rhn_taskomatic_daemon.log'
>>> td_log.get('two')[0]['raw_message']
'Log file line two'
>>> 'three' in td_log
True
>>> td_log.last_log_date
2016-05-18 15:13:40
parse_content(content)[source]

Once the logs are parsed, retrieve the last log date from the last line which has a complete timestamp in the third field.
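The timestamp extraction described above can be sketched like this; it is an illustrative re-implementation rather than the module's code, and the sample line is hypothetical:

```python
from datetime import datetime

# Split on '|' and parse the complete timestamp in the third field.
line = 'STATUS | wrapper  | 2016/05/18 15:13:40 | --> Wrapper Stopped'
fields = [f.strip() for f in line.split('|')]
last_log_date = datetime.strptime(fields[2], '%Y/%m/%d %H:%M:%S')
```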

rhn_schema_version - Command /usr/bin/rhn-schema-version

Parse the output of command /usr/bin/rhn-schema-version.

insights.parsers.rhn_schema_version.rhn_schema_version(context)[source]

Function to parse the output of command /usr/bin/rhn-schema-version.

Sample input:

5.6.0.10-2.el6sat

Examples

>>> db_ver = shared[rhn_schema_version]
>>> db_ver
'5.6.0.10-2.el6sat'

RhospRelease - file /etc/rhosp-release

This module provides plugins with access to the file /etc/rhosp-release.

Typical content of file /etc/rhosp-release is:

Red Hat OpenStack Platform release 14.0.0 RC (Rocky)

This module parses the file content and stores data in the dict self.release with keys product, version, and code_name.

Examples

>>> release.product
'Red Hat OpenStack Platform'
>>> release.version
'14.0.0'
>>> release.code_name
'Rocky'
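A minimal sketch of how the release line splits into the three fields, using an illustrative regular expression rather than the parser's own:

```python
import re

# Capture product, version, and code_name from the release line.
line = 'Red Hat OpenStack Platform release 14.0.0 RC (Rocky)'
m = re.match(r'(?P<product>.+) release (?P<version>[\d.]+).*\((?P<code_name>.+)\)$', line)
release = m.groupdict()
```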
class insights.parsers.rhosp_release.RhospRelease(context)[source]

Bases: insights.core.Parser

Parses the content of file /etc/rhosp-release.

property code_name

Release code name.

Type

string

parse_content(content)[source]

This method must be implemented by classes based on this class.

property product

Product full name.

Type

string

property version

Version of RHOSP.

Type

string

RHSM Release Version - file /var/lib/rhsm/cache/releasever.json

Parses Red Hat Subscription Manager release information.

class insights.parsers.rhsm_releasever.RhsmReleaseVer(context)[source]

Bases: insights.core.JSONParser

Class for parsing the file: /var/lib/rhsm/cache/releasever.json.

This information mirrors the information provided by the subscription-manager release --show command.

Note

Please refer to the super-class insights.core.JSONParser for additional information on attributes and methods.

Sample input data:

{"releaseVer": "6.10"}

Examples

>>> type(rhsm_releasever)
<class 'insights.parsers.rhsm_releasever.RhsmReleaseVer'>
>>> rhsm_releasever['releaseVer'] == '6.10'
True
>>> rhsm_releasever.set == '6.10'
True
>>> rhsm_releasever.major
6
>>> rhsm_releasever.minor
10
parse_content(content)[source]

Parse the contents of file /var/lib/rhsm/cache/releasever.json.

RHV Log Collector Analyzer

RHV Log Collector Analyzer is a tool that analyzes RHV sosreports and live systems.

This module provides processing for the output of rhv-log-collector-analyzer --json, which is run on a live system to detect possible issues.

class insights.parsers.rhv_log_collector_analyzer.RhvLogCollectorJson(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.JSONParser

Class to parse the output of rhv-log-collector-analyzer --json.
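Since the class combines CommandParser and JSONParser, its data is simply the deserialized JSON emitted by the command. The payload below is a hypothetical sketch; the tool's real JSON schema is not shown in this document:

```python
import json

# Deserialize the command's JSON output, as the JSONParser base does.
# The keys here ('hostname', 'checks') are invented for illustration.
output = '{"hostname": "rhvm.example.com", "checks": []}'
data = json.loads(output)
```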

RndcStatus - Command rndc status

class insights.parsers.rndc_status.RndcStatus(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, dict

Class for parsing the output of rndc status command.

Typical output of the command is:

version: BIND 9.11.4-P2-RedHat-9.11.4-9.P2.el7 (Extended Support Version) <id:7107deb>
running on rhel7: Linux x86_64 3.10.0-957.10.1.el7.x86_64 #1 SMP Thu Feb 7 07:12:53 UTC 2019
boot time: Mon, 26 Aug 2019 02:17:03 GMT
last configured: Mon, 26 Aug 2019 02:17:03 GMT
configuration file: /etc/named.conf
CPUs found: 4
worker threads: 4
UDP listeners per interface: 3
number of zones: 103 (97 automatic)
debug level: 0
xfers running: 0
xfers deferred: 0
soa queries in progress: 0
query logging is OFF
recursive clients: 0/900/1000
tcp clients: 1/150
server is up and running

Examples

>>> type(rndc_status)
<class 'insights.parsers.rndc_status.RndcStatus'>
>>> rndc_status['CPUs found']
'4'
>>> rndc_status['server']
'up and running'
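A sketch of how the sample output maps onto the dict entries shown above: "key: value" lines split on the first colon, while the "server is ..." line yields the 'server' key. This mirrors the examples, not necessarily the parser's real implementation:

```python
# Build a dict from "key: value" lines, with the "server is ..." line
# handled separately (illustrative only).
lines = ['CPUs found: 4', 'server is up and running']
data = {}
for line in lines:
    if ':' in line:
        key, _, value = line.partition(':')
        data[key.strip()] = value.strip()
    else:
        key, _, value = line.partition(' is ')
        data[key] = value
```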
parse_content(content)[source]

This method must be implemented by classes based on this class.

RsyslogConf - file /etc/rsyslog.conf

The rsyslog configuration files can include statements in two different line-based formats, along with snippets of ‘RainerScript’ that can span multiple lines.

See http://www.rsyslog.com/doc/master/configuration/basic_structure.html#statement-types

Due to high parsing complexity, this parser presents a simple line-based view of the file that meets the needs of the current rules.

Example

>>> content = '''
... :fromhost-ip, regex, "10.0.0.[0-9]" /tmp/my_syslog.log
... $ModLoad imtcp
... $InputTCPServerRun 10514"
... '''.strip()
>>> from insights.tests import context_wrap
>>> rsl = RsyslogConf(context_wrap(content))
>>> len(rsl)
3
>>> len(list(rsl))
3
>>> any('imtcp' in n for n in rsl)
True
class insights.parsers.rsyslog_conf.RsyslogConf(context)[source]

Bases: insights.core.Parser

Parses /etc/rsyslog.conf into simple lines.

Skips lines that begin with hash (“#”) or are only whitespace.

data

List of lines in the file that don’t start with ‘#’ and aren’t whitespace.

Type

list

config_items

Configuration items opportunistically found in the configuration file, with their values as given.

Type

dict

config_val(item, default=None)[source]

Return the given configuration item, or the default if not defined.

Parameters
  • item (str) -- The configuration item name

  • default -- The default if the item is not found (defaults to None)

Returns

The related value in the config_items dictionary.
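The config_items / config_val behaviour can be sketched as follows; this is an illustrative re-implementation of the opportunistic "$Directive value" collection, not the parser's code:

```python
# Collect classic "$Directive value" lines into a dict, then look
# items up with an optional default (illustrative only).
content = [
    '$ModLoad imtcp',
    '$InputTCPServerRun 10514',
]
config_items = {}
for line in content:
    if line.startswith('$'):
        name, _, value = line[1:].partition(' ')
        config_items[name] = value.strip()

def config_val(item, default=None):
    return config_items.get(item, default)
```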

parse_content(content)[source]

This method must be implemented by classes based on this class.

SambaConfig - file /etc/samba/smb.conf

This parser reads the SaMBa configuration file /etc/samba/smb.conf, which is in standard .ini format, with a couple of notable features:

  • SaMBa ignores spaces at the start of options, which the ConfigParser class normally does not. This spacing is stripped by this parser.

  • SaMBa likewise ignores spaces in section heading names.

  • SaMBa allows the same section to be defined multiple times, with the options therein being merged as if they were one section.

  • SaMBa allows options to be declared before the first section marker. This parser puts these options in a global section.

  • SaMBa treats ‘;’ as a comment prefix, similar to ‘#’.

Sample configuration file:

# This is the main Samba configuration file. You should read the
# smb.conf(5) manual page in order to understand the options listed
#...
#======================= Global Settings =====================================

[global]
    workgroup = MYGROUP
    server string = Samba Server Version %v
    max log size = 50

[homes]
    comment = Home Directories
    browseable = no
    writable = yes
;   valid users = %S
;   valid users = MYDOMAIN\%S

[printers]
    comment = All Printers
    path = /var/spool/samba
    browseable = no
    guest ok = no
    writable = no
    printable = yes

# A publicly accessible directory, but read only, except for people in
# the "staff" group
[public]
   comment = Public Stuff
   path = /home/samba
   public = yes
   writable = yes
   printable = no
   write list = +staff

Examples

>>> type(conf)
<class 'insights.parsers.samba.SambaConfig'>
>>> sorted(conf.sections()) == [u'global', u'homes', u'printers', u'public']
True
>>> global_options = conf.items('global')  # get a section as a dictionary
>>> type(global_options) == type({})
True
>>> conf.get('public', 'comment') == u'Public Stuff'  # Accessor for section and option
True
>>> conf.getboolean('public', 'writable')  # Type conversion, but no default
True
>>> conf.getint('global', 'max log size')  # Same for integer conversion
50
class insights.parsers.samba.SambaConfig(context)[source]

Bases: insights.core.IniConfigFile

This parser reads the SaMBa configuration file /etc/samba/smb.conf.

parse_content(content)[source]

Parses content of the config file.

In child class overload and call super to set flag allow_no_values and allow keys with no value in config file:

def parse_content(self, content):
    super(YourClass, self).parse_content(content,
                                         allow_no_values=True)

samba logs - files matching /var/log/samba/*.log

class insights.parsers.samba_logs.SAMBALog(context)[source]

Bases: insights.core.LogFileOutput

Parser class for reading samba log files. The main work is done by the LogFileOutput super-class.

Sample input:

[2018/12/07 07:09:44.812154, 5, pid=6434, effective(0, 0), real(0, 0)] ../source3/param/loadparm.c:1344(free_param_opts)

Freeing parametrics:

[2018/12/07 07:09:44.812281, 3, pid=6434, effective(0, 0), real(0, 0)] ../source3/param/loadparm.c:547(init_globals)

Initialising global parameters

[2018/12/07 07:09:44.812356, 2, pid=6434, effective(0, 0), real(0, 0)] ../source3/param/loadparm.c:319(max_open_files)

rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)

[2019/45/899 11:11:04.911891, 3, pid=15822, effective(0, 0), real(0, 0)] ../source3/printing/queue_process.c:236

(bq_sig_hup_handler) Reloading pcap cache after SIGHUP.

Each line is parsed into a dictionary with the following keys:

  • timestamp - the date of the log line (as a string)

  • datetime - the date as a datetime object (if conversion is possible)

  • pid - process id of samba process being run

  • function - the function within the module

  • message - the body of the message

  • raw_message - the raw message before being split.

Examples

>>> 'Fake' in samba_logs
True
>>> 'pid=15822, effective(0, 0), real(0, 0)]' in samba_logs
True
>>> len(samba_logs.get('Fake line')) == 1
True

HDBVersion - Commands

Shared parser for parsing output of the sudo -iu <SID>adm HDB version commands.

class insights.parsers.sap_hdb_version.HDBVersion(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LegacyItemAccess

Class for parsing the output of HDB version command.

Typical output of the command is:

# sudo -iu sr1adm HDB version
HDB version info:
  version:             2.00.030.00.1522210459
  branch:              hanaws
  machine config:      linuxx86_64
  git hash:            bb2ff6b25b8eab5ab382c170a43dc95ae6ce298f
  git merge time:      2018-03-28 06:14:19
  weekstone:           2018.13.0
  cloud edition:       0000.00.00
  compile date:        2018-03-28 06:19:13
  compile host:        ld2221
  compile type:        rel
version

the raw HDB version

Type

str

major

the major version

Type

str

minor

the minor version

Type

str

revision

the SAP HANA SPS revision number

Type

str

patchlevel

the patchlevel number of this revision

Type

str

sid

the SID of this SAP HANA

Type

str

Examples

>>> type(hdb_ver)
<class 'insights.parsers.sap_hdb_version.HDBVersion'>
>>> hdb_ver.sid
'sr1'
>>> hdb_ver.version
'2.00.030.00.1522210459'
>>> hdb_ver.major
'2'
>>> hdb_ver.minor
'00'
>>> hdb_ver.revision
'030'
>>> hdb_ver.patchlevel
'00'
>>> hdb_ver['machine config']
'linuxx86_64'
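The dotted version string decomposes into the attributes shown above; a sketch of the splitting, not the parser's code:

```python
# The first four dotted fields give major, minor, revision, patchlevel.
version = '2.00.030.00.1522210459'
major, minor, revision, patchlevel = version.split('.')[:4]
```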
parse_content(content)[source]

This method must be implemented by classes based on this class.

SAPHostProfile - File /usr/sap/hostctrl/exe/host_profile

Shared parser for parsing the /usr/sap/hostctrl/exe/host_profile file.

class insights.parsers.sap_host_profile.SAPHostProfile(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Class for parsing the /usr/sap/hostctrl/exe/host_profile file.

Typical content of the file is:

SAPSYSTEMNAME = SAP
SAPSYSTEM = 99
service/porttypes = SAPHostControl SAPOscol SAPCCMS
DIR_LIBRARY = /usr/sap/hostctrl/exe
DIR_EXECUTABLE = /usr/sap/hostctrl/exe
DIR_PROFILE = /usr/sap/hostctrl/exe
DIR_GLOBAL = /usr/sap/hostctrl/exe
DIR_INSTANCE = /usr/sap/hostctrl/exe
DIR_HOME = /usr/sap/hostctrl/work

Examples

>>> type(hpf)
<class 'insights.parsers.sap_host_profile.SAPHostProfile'>
>>> hpf['SAPSYSTEMNAME']
'SAP'
>>> hpf['DIR_HOME']
'/usr/sap/hostctrl/work'
parse_content(content)[source]

This method must be implemented by classes based on this class.

sapcontrol - Commands sapcontrol

Shared parsers for parsing output of the sapcontrol [option] commands.

SAPControlSystemUpdateList- command sapcontrol -nr <NR> -function GetSystemUpdateList

class insights.parsers.sapcontrol.SAPControlSystemUpdateList(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

This class provides processing for the output of the sapcontrol -nr <NR> -function GetSystemUpdateList command on SAP system.

Sample output of the command:

29.01.2019 01:20:36
GetSystemUpdateList
OK
hostname, instanceNr, status, starttime, endtime, dispstatus
vm37-39, 00, Running, 29.01.2019 00:00:02, 29.01.2019 01:10:11, GREEN
vm37-39, 02, Running, 29.01.2019 00:00:05, 29.01.2019 01:11:11, GREEN
vm37-39, 03, Running, 29.01.2019 00:00:05, 29.01.2019 01:12:36, GREEN

Examples

>>> rks.is_running
True
>>> rks.is_green
True
>>> rks.data[-1]['status'] == 'Running'
True
>>> rks.data[-1]['dispstatus'] == 'GREEN'
True
>>> rks.data[0]['instanceNr'] == '00'
True
is_running

The status of GetSystemUpdateList

Type

Boolean

is_green

The display status of GetSystemUpdateList

Type

Boolean

data

List of dicts whose keys are the column names from the header line and whose values are the corresponding string values.

Type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.

saphostctrl - Commands saphostctrl

Parsers included in this module are:

SAPHostCtrlInstances - Command saphostctrl -function GetCIMObject -enuminstances SAPInstance

class insights.parsers.saphostctrl.SAPHostCtrlInstances(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

This class provides processing for the output of the /usr/sap/hostctrl/exe/saphostctrl -function GetCIMObject -enuminstances SAPInstance command on SAP systems.

Sample output of the command:

*********************************************************
 CreationClassName , String , SAPInstance
 SID , String , D89
 SystemNumber , String , 88
 InstanceName , String , HDB88
 Hostname , String , hdb88
 FullQualifiedHostname , String , hdb88.example.com
 IPAddress , String , 10.0.0.88
 SapVersionInfo , String , 749, patch 211, changelist 1754007
*********************************************************
 CreationClassName , String , SAPInstance
 SID , String , D90
 SystemNumber , String , 90
 InstanceName , String , HDB90
 Hostname , String , hdb90
 FullQualifiedHostname , String , hdb90.example.com
 IPAddress , String , 10.0.0.90
 SapVersionInfo , String , 749, patch 211, changelist 1754007

Examples

>>> type(sap_inst)
<class 'insights.parsers.saphostctrl.SAPHostCtrlInstances'>
>>> sap_inst.data[-1]['CreationClassName']
'SAPInstance'
>>> sap_inst.data[-1]['SID']
'D90'
>>> sap_inst.data[-1]['SapVersionInfo']  # Note: captured as one string
'749, patch 211, changelist 1754007'
>>> sap_inst.data[0]['InstanceType']  # Inferred code from InstanceName
'HDB'
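A sketch of the "name , type , value" splitting: limiting the split to two commas keeps a comma-containing value such as SapVersionInfo as one string, as noted in the example above (illustrative, not the parser's code):

```python
# Split on at most two commas so the value keeps its internal commas.
line = ' SapVersionInfo , String , 749, patch 211, changelist 1754007'
name, type_, value = [part.strip() for part in line.split(',', 2)]
```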
data

List of dicts whose keys are the field names from each line and whose values are the corresponding string values.

Type

list

instances

The list of instances found in the cluster output.

Type

list

sids

The list of SIDs found in the cluster output.

Type

list

types

The list of instance types found in the cluster output.

Type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.

saphostexec - Commands

Shared parsers for parsing output of the saphostexec [option] commands.

SAPHostExecStatus- command saphostexec -status

SAPHostExecVersion - command saphostexec -version

class insights.parsers.saphostexec.SAPHostExecStatus(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LegacyItemAccess

Class for parsing the output of saphostexec -status command.

Typical output of the command is:

saphostexec running (pid = 9159)
sapstartsrv running (pid = 9163)
saposcol running (pid = 9323)
is_running

The SAP Host Agent is running or not.

Type

bool

services

List of services.

Type

list

Examples

>>> type(sha_status)
<class 'insights.parsers.saphostexec.SAPHostExecStatus'>
>>> sha_status.is_running
True
>>> sha_status.services['saphostexec']
'9159'
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.saphostexec.SAPHostExecVersion(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LegacyItemAccess

Class for parsing the output of saphostexec -version command.

Typical output of the command is:

*************************** Component ********************
/usr/sap/hostctrl/exe/saphostexec: 721, patch 1011, changelist 1814854, linuxx86_64, opt (Jan 13 2018, 04:43:56)
/usr/sap/hostctrl/exe/sapstartsrv: 721, patch 1011, changelist 1814854, linuxx86_64, opt (Jan 13 2018, 04:43:56)
/usr/sap/hostctrl/exe/saphostctrl: 721, patch 1011, changelist 1814854, linuxx86_64, opt (Jan 13 2018, 04:43:56)
/usr/sap/hostctrl/exe/xml71d.so: 721, patch 1011, changelist 1814854, linuxx86_64, opt (Jan 13 2018, 01:12:10)
**********************************************************
--------------------
SAPHOSTAGENT information
--------------------
kernel release                721
kernel make variant           721_REL
compiled on                   Linux GNU SLES-9 x86_64 cc4.1.2  for linuxx86_64
compiled for                  64 BIT
compilation mode              Non-Unicode
compile time                  Jan 13 2018 04:40:52
patch number                  33
latest change number          1814854
---------------------
supported environment
---------------------
operating system
Linux 2.6
Linux 3
Linux
components

Dict of SAPComponent instances.

Type

dict

Examples

>>> type(sha_version)
<class 'insights.parsers.saphostexec.SAPHostExecVersion'>
>>> sha_version.components['saphostexec'].version
'721'
>>> sha_version.components['saphostexec'].patch
'1011'
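One component line can be reduced to the SAPComponent(version, patch, changelist) tuple described below; this is an illustrative sketch of the field extraction, not the module's code:

```python
from collections import namedtuple

# Take the version from the first field and the trailing numbers from
# the "patch NNN" and "changelist NNN" fields (illustrative only).
SAPComponent = namedtuple('SAPComponent', ['version', 'patch', 'changelist'])

line = ('/usr/sap/hostctrl/exe/saphostexec: 721, patch 1011, '
        'changelist 1814854, linuxx86_64, opt (Jan 13 2018, 04:43:56)')
path, _, rest = line.partition(': ')
fields = [f.strip() for f in rest.split(',')]
component = SAPComponent(fields[0], fields[1].split()[-1], fields[2].split()[-1])
name = path.rsplit('/', 1)[-1]
```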
class SAPComponent(version, patch, changelist)

Bases: tuple

namedtuple: Type for storing the SAP components

property changelist
property patch
property version
parse_content(content)[source]

This method must be implemented by classes based on this class.

Sat5InsightsProperties - File redhat-access-insights.properties

class insights.parsers.sat5_insights_properties.Sat5InsightsProperties(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

Class to parse configuration file /etc/redhat-access/redhat-access-insights.properties on Satellite 5 Server.

The typical content is:

portalurl = https://cert-api.access.redhat.com/r/insights
enabled = true
debug = true
rpmname = redhat-access-insights

Examples

>>> insights_props.enabled
True
>>> insights_props['debug']
'true'
>>> insights_props['rpmname']
'redhat-access-insights'
enabled

True when Insights is enabled on the Satellite 5 server, otherwise False.

Type

bool

Raises

SkipException -- When file content is empty.

parse_content(content)[source]

This method must be implemented by classes based on this class.

SatelliteEnabledFeatures - command curl -sk https://localhost:9090/features --connect-timeout 5

The satellite enabled features parser reads the output of curl -sk https://localhost:9090/features --connect-timeout 5 and converts it into a list.

Sample output of curl -sk https://localhost:9090/features --connect-timeout 5:

["ansible","dhcp","discovery","dynflow","logs","openscap","pulp","puppet","puppetca","ssh","templates","tftp"]

Examples

>>> type(satellite_features)
<class 'insights.parsers.satellite_enabled_features.SatelliteEnabledFeatures'>
>>> 'dhcp' in satellite_features
True
>>> 'dns' in satellite_features
False
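The features endpoint returns a JSON array, so deserializing it yields the list behaviour shown above; a sketch using the standard library rather than the parser itself:

```python
import json

# Deserialize the endpoint's JSON array into a plain list.
output = '["ansible","dhcp","discovery","dynflow","logs","openscap"]'
features = json.loads(output)
```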
class insights.parsers.satellite_enabled_features.SatelliteEnabledFeatures(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, list

Read the curl -sk https://localhost:9090/features --connect-timeout 5 command and convert it to a list.

parse_content(content)[source]

This method must be implemented by classes based on this class.

Satellite installer configuration files

Parsers included in this module are:

CustomHiera - file /etc/foreman-installer/custom-hiera.yaml

Parses the file /etc/foreman-installer/custom-hiera.yaml

class insights.parsers.satellite_installer_configurations.CustomHiera(context)[source]

Bases: insights.core.YAMLParser

Class to parse /etc/foreman-installer/custom-hiera.yaml

Examples

>>> 'apache::mod::prefork::serverlimit' in custom_hiera
True
>>> custom_hiera['apache::mod::prefork::serverlimit']
582
parse_content(content)[source]

This method must be implemented by classes based on this class.

Satellite MongoDB Commands

Parsers included in this module are:

MongoDBStorageEngine - command mongo pulp_database --eval 'db.serverStatus().storageEngine'

The satellite mongodb storage engine parser reads the output of mongo pulp_database --eval 'db.serverStatus().storageEngine' and saves the storage engine attributes to a dict.

class insights.parsers.satellite_mongodb.MongoDBStorageEngine(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, dict

Read the mongo pulp_database --eval 'db.serverStatus().storageEngine' command and save the storage engine attributes to a dict.

Sample Output:

MongoDB shell version v3.4.9
connecting to: mongodb://127.0.0.1:27017/pulp_database
MongoDB server version: 3.4.9
{
        "name" : "wiredTiger",
        "supportsCommittedReads" : true,
        "readOnly" : false,
        "persistent" : true
}

Examples

>>> type(satellite_storage_engine)
<class 'insights.parsers.satellite_mongodb.MongoDBStorageEngine'>
>>> satellite_storage_engine['name']
'wiredTiger'

Raises

SkipException -- When there is no attribute in the output

ParseException -- When the storage engine attributes aren't in the expected format
parse_content(content)[source]

This method must be implemented by classes based on this class.

Satellite6Version - file /usr/share/foreman/lib/satellite/version.rb

Module for parsing the content of file version.rb or satellite_version, which is a simple file in foreman-debug or sosreport archives of Satellite 6.x.

Typical content of “satellite_version” is:

COMMAND> cat /usr/share/foreman/lib/satellite/version.rb

module Satellite
  VERSION = "6.1.3"
end

Warning

This module only works for Satellite 6.0.x and 6.1.x. Please use the combiner insights.combiners.satellite_version.SatelliteVersion class to cover all versions.

Examples

>>> sat6_ver = shared[SatelliteVersion]
>>> sat6_ver.full
"6.1.3"
>>> sat6_ver.version
"6.1.3"
>>> sat6_ver.major
6
>>> sat6_ver.minor
1
>>> sat6_ver.release
None
class insights.parsers.satellite_version.Satellite6Version(context)[source]

Bases: insights.core.Parser

Class for parsing the content of satellite_version.

parse_content(content)[source]

This method must be implemented by classes based on this class.

SCSIEhDead - file /sys/class/scsi_host/host[0-9]*/eh_deadline

This parser parses the content of the eh_deadline file for individual SCSI hosts and returns the data in dictionary format.

Sample content from /sys/class/scsi_host/host0/eh_deadline:

off/10/-1/0

Examples

>>> type(scsi_obj0)
<class 'insights.parsers.scsi_eh_deadline.SCSIEhDead'>
>>> scsi_obj0.data
{'host0': 'off'}
>>> scsi_obj0.scsi_host
'host0'
>>> type(scsi_obj1)
<class 'insights.parsers.scsi_eh_deadline.SCSIEhDead'>
>>> scsi_obj1.data
{'host1': '10'}
>>> scsi_obj1.scsi_host
'host1'
>>> type(scsi_obj2)
<class 'insights.parsers.scsi_eh_deadline.SCSIEhDead'>
>>> scsi_obj2.data
{'host2': '-1'}
>>> scsi_obj2.scsi_host
'host2'
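As the examples show, the host name comes from the file path and is paired with the file's single value; an illustrative sketch of that derivation:

```python
# Derive the scsi_host key from the second-to-last path component and
# pair it with the file content (illustrative only).
path = '/sys/class/scsi_host/host0/eh_deadline'
content = 'off'
scsi_host = path.split('/')[-2]
data = {scsi_host: content}
```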
class insights.parsers.scsi_eh_deadline.SCSIEhDead(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

Parse the /sys/class/scsi_host/host[0-9]*/eh_deadline file and return a dict mapping the SCSI host name to its eh_deadline setting. The scsi_host key is derived from the SCSI host name in the file path.

Properties:

scsi_host (str): scsi host file name derived from file path.

property host_eh_deadline

Returns the SCSI host's eh_deadline setting when set, else None.

Type

(list)

parse_content(content)[source]

This method must be implemented by classes based on this class.

SCSIFWver - file /sys/class/scsi_host/host[0-9]*/fwrev

This parser parses the content of the fwrev file for individual SCSI hosts and returns the data in dictionary format.

Sample Content from /sys/class/scsi_host/host0/fwrev:

2.02X12 (U3H2.02X12), sli-3

Examples

>>> type(scsi_obj)
<class 'insights.parsers.scsi_fwver.SCSIFWver'>
>>> scsi_obj.data
{'host0': ['2.02X12 (U3H2.02X12)', 'sli-3']}
>>> scsi_obj.scsi_host
'host0'
class insights.parsers.scsi_fwver.SCSIFWver(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

Parse the /sys/class/scsi_host/host[0-9]*/fwrev file and return a dict mapping the SCSI host name to its firmware revision information. The scsi_host key is derived from the SCSI host name in the file path.

Properties:

scsi_host (str): scsi host file name derived from file path.

property host_mode

Returns the SCSI host modes when set, else None.

Type

(list)

parse_content(content)[source]

This method must be implemented by classes based on this class.

SCTP Socket State Parser

Parsers provided by this module include:

SCTPEps - file /proc/net/sctp/eps

SCTPAsc - file /proc/net/sctp/assocs on RHEL-6

SCTPAsc7 - file /proc/net/sctp/assocs on RHEL-7

SCTPSnmp - file /proc/net/sctp/snmp

class insights.parsers.sctp.SCTPAsc(*args, **kwargs)[source]

Bases: insights.parsers.sctp.SCTPAscBase

This parser parses the file /proc/net/sctp/assocs from RHEL-6. It has different columns compared to RHEL-7.

Typical contents of /proc/net/sctp/assocs on RHEL-6 file are:

ASSOC     SOCK   STY SST ST HBKT ASSOC-ID TX_QUEUE RX_QUEUE UID INODE LPORT RPORT LADDRS <-> RADDRS HBINT INS OUTS MAXRT T1X T2X RTXC
ffff88045ac7e000 ffff88062077aa00 2   1   4  1205  963        0        0     200 273361167 11567 11166  10.0.0.102 10.0.0.70 <-> *10.0.0.109 10.0.0.77      1000     2     2   10    0    0        0
ffff88061fbf2000 ffff88060ff92500 2   1   4  1460  942        0        0     200 273360669 11566 11167  10.0.0.102 10.0.0.70 <-> *10.0.0.109 10.0.0.77      1000     2     2   10    0    0        0

Output data is stored in the list of dictionaries

Examples

>>> type(sctp_asc)
<class 'insights.parsers.sctp.SCTPAsc'>
>>> sorted(sctp_asc.sctp_local_ports) == sorted(['11567','11566'])
True
>>> sorted(sctp_asc.sctp_remote_ports) == sorted(['11166','11167'])
True
>>> sorted(sctp_asc.sctp_local_ips) == sorted(['10.0.0.102', '10.0.0.70'])
True
>>> sorted(sctp_asc.sctp_remote_ips) == sorted(['*10.0.0.109', '10.0.0.77'])
True
>>> sorted(sctp_asc.search(local_port='11566')) == sorted([{'init_chunks_send': '0', 'uid': '200', 'shutdown_chunks_send': '0', 'max_outstream': '2', 'tx_que': '0', 'inode': '273360669', 'hrtbt_intrvl': '1000', 'sk_type': '2', 'remote_addr': ['*10.0.0.109', '10.0.0.77'], 'data_chunks_retrans': '0', 'local_addr': ['10.0.0.102', '10.0.0.70'], 'asc_id': '942', 'max_instream': '2', 'remote_port': '11167', 'asc_state': '4', 'max_retrans_atmpt': '10', 'sk_state': '1', 'socket': 'ffff88060ff92500', 'asc_struct': 'ffff88061fbf2000', 'local_port': '11566', 'hash_bkt': '1460', 'rx_que': '0'}])
True
class insights.parsers.sctp.SCTPAsc7(*args, **kwargs)[source]

Bases: insights.parsers.sctp.SCTPAscBase

This parser parses the file /proc/net/sctp/assocs from RHEL-7. It has different columns compared to RHEL-6.

Typical contents of /proc/net/sctp/assocs on RHEL-7 file are:

ASSOC     SOCK   STY SST ST HBKT ASSOC-ID TX_QUEUE RX_QUEUE UID INODE LPORT RPORT LADDRS <-> RADDRS HBINT INS OUTS MAXRT T1X T2X RTXC wmema wmemq sndbuf rcvbuf
ffff8805d36b3000 ffff880f8911f380 0   10  3  0    12754        0        0       0 496595 3868   3868  10.131.222.5 <-> *10.131.160.81 10.131.176.81        30000    17    10   10    0    0        0        11        12  1000000  2000000
ffff8805f17e1000 ffff881004aff380 0   10  3  0    12728        0        0       0 532396 3868   3868  10.131.222.3 <-> *10.131.160.81 10.131.176.81        30000    17    10   10    0    0        0        13        14  3000000  4000000

Output data is stored in the list of dictionaries

Examples

>>> type(sctp_asc_7)
<class 'insights.parsers.sctp.SCTPAsc7'>
>>> sctp_asc_7.sctp_local_ips == sorted(['10.131.222.5', '10.131.222.3'])
True
>>> sctp_asc_7.data[0]['rcvbuf']
'2000000'
class insights.parsers.sctp.SCTPAscBase(context)[source]

Bases: insights.core.Parser

This parser parses the content of the /proc/net/sctp/assocs file and returns a list of dictionaries. Each dictionary holds the details of an individual SCTP association: association struct, socket, socket type, socket state, association state, hash bucket, association id, tx queue, rx queue, uid, inode, local port, remote port, local addresses, remote addresses, heartbeat interval, max in-stream, max out-stream, max retransmission attempts, number of init chunks sent, number of shutdown chunks sent, and data chunks retransmitted.

parse_content(content)[source]

This method must be implemented by classes based on this class.

property sctp_local_ips

This function returns a list of all local peers' IP addresses if SCTP endpoints are created, else [].

Type

(list)

property sctp_local_ports

This function returns a list of SCTP local peer ports if SCTP endpoints are created, else [].

Type

(list)

property sctp_remote_ips

This function returns a list of all remote peers' IP addresses if SCTP endpoints are created, else [].

Type

(list)

property sctp_remote_ports

This function returns a list of SCTP remote peer ports if SCTP endpoints are created, else [].

Type

(list)

search(**args)[source]
(list): Returns a list of all SCTP associations matching the keyword arguments, or [] when nothing matches.

class insights.parsers.sctp.SCTPEps(context)[source]

Bases: insights.core.Parser

This parser parses the content of /proc/net/sctp/eps file. It returns a list of dictionaries. The dictionary contains detail information of individual SCTP endpoint, which includes Endpoints, Socket, Socket type, Socket State, hash bucket, bind port, UID, socket inodes, Local IP address.

Typical contents of /proc/net/sctp/eps file are:

ENDPT            SOCK             STY SST HBKT LPORT   UID INODE     LADDRS
ffff88017e0a0200 ffff880300f7fa00 2   10  29   11165   200 299689357 10.0.0.102 10.0.0.70
ffff880612e81c00 ffff8803c28a1b00 2   10  30   11166   200 273361203 10.0.0.102 10.0.0.70 172.31.1.2

Output data is stored in the list of dictionaries

Examples

>>> type(sctp_info)
<class 'insights.parsers.sctp.SCTPEps'>
>>> sorted(sctp_info.sctp_local_ports) == sorted(['11165', '11166'])
True
>>> sorted(sctp_info.sctp_local_ips) == sorted(['10.0.0.102', '10.0.0.70', '172.31.1.2'])
True
>>> sorted(sctp_info.search(local_port="11165")) == sorted([{'endpoints': 'ffff88017e0a0200', 'socket': 'ffff880299f7fa00', 'sk_type': '2', 'sk_state': '10', 'hash_bkt': '29', 'local_port': '11165', 'uid': '200', 'inode': '299689357', 'local_addr': ['10.0.0.102', '10.0.0.70']}])
True
>>> len(sctp_info.search(local_port="11165")) == 1
True
>>> len(sctp_info.search(endpoints="ffff88017e0a0200")) == 1
True
>>> sctp_info.sctp_eps_ips
{'ffff88017e0a0200': ['10.0.0.102', '10.0.0.70'], 'ffff880612e81c00': ['10.0.0.102', '10.0.0.70', '172.31.1.2']}
parse_content(content)[source]

This method must be implemented by classes based on this class.

property sctp_eps_ips

This function returns a dict of all endpoints and corresponding local ip addresses used by SCTP endpoints if SCTP endpoints are created, else {}.

Type

(dict)

property sctp_local_ips

This function returns a list of all local ip addresses if SCTP endpoints are created, else [].

Type

(list)

property sctp_local_ports

This function returns a list of SCTP ports if SCTP endpoints are created, else [].

Type

(list)

search(**args)[source]
(list): Returns a list of all endpoints that match the given keyword arguments, or [] when nothing matches.

class insights.parsers.sctp.SCTPSnmp(context)[source]

Bases: insights.core.Parser, dict

This parser parses the content of /proc/net/sctp/snmp file, which contains statistics related to SCTP states, packets and chunks.

Sample content:

SctpCurrEstab                           5380
SctpActiveEstabs                        12749
SctpPassiveEstabs                       55
SctpAborteds                            2142
SctpShutdowns                           5295
SctpOutOfBlues                          36786
SctpChecksumErrors                      0
SctpOutCtrlChunks                       1051492

Data is stored in a dictionary.

Examples

>>> type(sctp_snmp)
<class 'insights.parsers.sctp.SCTPSnmp'>
>>> sctp_snmp.get('SctpCurrEstab')
5380
>>> sctp_snmp.get('SctpChecksumErrors') == 0
True
>>> 'SctpShutdowns' in sctp_snmp
True
>>> len(sctp_snmp)
8
>>> sorted(sctp_snmp.keys())
['SctpAborteds', 'SctpActiveEstabs', 'SctpChecksumErrors', 'SctpCurrEstab', 'SctpOutCtrlChunks', 'SctpOutOfBlues', 'SctpPassiveEstabs', 'SctpShutdowns']

Resultant Data:

{
    'SctpCurrEstab': 5380,
    'SctpActiveEstabs': 12749,
    'SctpPassiveEstabs': 55,
    'SctpAborteds': 2142,
    'SctpShutdowns': 5295,
    'SctpOutOfBlues': 36786,
    'SctpChecksumErrors': 0,
    ...
    ...
}
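The name/counter layout shown above is simple enough to sketch in standalone Python (an illustrative approximation, not the actual SCTPSnmp implementation; the function name is made up for this sketch):

```python
# Standalone sketch: each line of /proc/net/sctp/snmp is
# "<CounterName>   <integer>"; collect the pairs as {name: int}.
def parse_sctp_snmp(content):
    data = {}
    for line in content.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1].lstrip('-').isdigit():
            data[parts[0]] = int(parts[1])
    return data

sample = """\
SctpCurrEstab                           5380
SctpActiveEstabs                        12749
SctpChecksumErrors                      0
"""
stats = parse_sctp_snmp(sample)
```

With this sketch, stats['SctpCurrEstab'] yields 5380, mirroring the get() example above.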
parse_content(content)[source]

This method must be implemented by classes based on this class.

Sealert - command /usr/bin/sealert -l "*"

class insights.parsers.sealert.Sealert(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Reads the output of /usr/bin/sealert -l "*".

Sample output:

SELinux is preventing sh from entrypoint access on the file /usr/bin/podman.

*****  Plugin catchall (100. confidence) suggests **************************

If you believe that sh should be allowed entrypoint access on the podman file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'sh' --raw | audit2allow -M my-sh
# semodule -X 300 -i my-sh.pp

Additional Information:
Source Context unconfined_u:system_r:rpm_script_t:s0-s0:c0.c1023
Target Context system_u:object_r:container_runtime_exec_t:s0
Target Objects                /usr/bin/podman [ file ]
Source                        sh
Source Path                   sh
Port                          <Unknown>
Host                          localhost.localdomain
Source RPM Packages
Target RPM Packages           podman-1.1.2-1.git0ad9b6b.fc28.x86_64
Policy RPM                    selinux-policy-3.14.1-54.fc28.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     localhost.localdomain
Platform                      Linux localhost.localdomain 4.20.7-100.fc28.x86_64
                              #1 SMP Wed Feb 6 19:17:09 UTC 2019 x86_64 x86_64
Alert Count                   1
First Seen                    2019-07-30 11:15:04 CEST
Last Seen                     2019-07-30 11:15:04 CEST
Local ID                      39a7094b-e402-4d87-9af9-e97eda41219a

Raw Audit Messages
type=AVC msg=audit(1564478104.911:4631): avc:  denied  { entrypoint } for  pid=29402 comm="sh" path="/usr/bin/podman" dev="dm-1" ino=955465 scontext=unconfined_u:system_r:rpm_script_t:s0-s0:c0.c1023 tcontext=system_u:object_r:container_runtime_exec_t:s0 tclass=file permissive=0

Hash: sh,rpm_script_t,container_runtime_exec_t,file,entrypoint

Examples

>>> type(sealert)
<class 'insights.parsers.sealert.Sealert'>
>>> sealert.raw_lines[0]
'SELinux is preventing rngd from using the dac_override capability.'
>>> sealert.reports[1].lines_stripped()[0]
'SELinux is preventing sh from entrypoint access on the file /usr/bin/podman.'
>>> str(sealert.reports[1]).split('\n')[0]
'SELinux is preventing sh from entrypoint access on the file /usr/bin/podman.'
raw_lines

Unparsed output as list of lines

Type

list[str]

reports

Sealert reports

Type

list[Report]

Raises

SkipException -- When output is empty

parse_content(content)[source]

This method must be implemented by classes based on this class.

Secure - file /var/log/secure

class insights.parsers.secure.Secure(context)[source]

Bases: insights.core.Syslog

Class for parsing the /var/log/secure file.

Sample log text:

Aug 24 09:31:39 localhost polkitd[822]: Finished loading, compiling and executing 6 rules
Aug 24 09:31:39 localhost polkitd[822]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 25 13:52:54 localhost sshd[23085]: pam_unix(sshd:session): session opened for user zjj by (uid=0)
Aug 25 13:52:54 localhost sshd[23085]: error: openpty: No such file or directory

Note

Please refer to its super-class insights.core.Syslog

Note

Because timestamps in the secure log by default have no year, the year of the logs will be inferred from the year in your timestamp. This will also work around December/January crossovers.
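One way the year inference and December/January crossover described above can be handled is sketched below (an illustration of the idea only; infer_timestamp is a hypothetical helper, not part of insights.core.Syslog):

```python
from datetime import datetime

# Sketch: parse a year-less syslog timestamp such as "Aug 25 13:52:54".
# If the resulting date would lie in the future relative to a reference
# date (e.g. a "Dec 31" entry read on Jan 2), assume it is from the
# previous year -- this handles the December/January crossover.
def infer_timestamp(stamp, reference):
    ts = datetime.strptime(stamp, "%b %d %H:%M:%S").replace(year=reference.year)
    if ts > reference:
        ts = ts.replace(year=reference.year - 1)
    return ts
```

With a reference date of 2018-01-02, a "Dec 31 23:59:00" entry is placed in 2017, while an "Aug 25" entry referenced from September 2018 stays in 2018.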

Examples

>>> secure = shared[Secure]
>>> secure.get('session opened')
[{'timestamp':'Aug 25 13:52:54',
  'hostname':'localhost',
  'procname': 'sshd[23085]',
  'message': 'pam_unix(sshd:session): session opened for user zjj by (uid=0)',
  'raw_message': 'Aug 25 13:52:54 localhost sshd[23085]: pam_unix(sshd:session): session opened for user zjj by (uid=0)'
 }]
>>> len(list(secure.get_after(datetime(2017, 8, 25, 0, 0, 0))))
2

SelinuxConfig - file /etc/selinux/config

class insights.parsers.selinux_config.SelinuxConfig(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Parse the SELinux configuration file.

Produces a simple dictionary of keys and values from the configuration file contents, stored in the data attribute. The object also functions as a dictionary itself thanks to the insights.core.LegacyItemAccess mixin class.

Sample configuration file:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Examples

>>> conf = shared[SelinuxConfig]
>>> conf['SELINUX']
'enforcing'
>>> 'AUTORELABEL' in conf
False
parse_content(content)[source]

This method must be implemented by classes based on this class.

SetupNamedChroot - file /usr/libexec/setup-named-chroot.sh

This module provides class SetupNamedChroot for parsing the output of file /usr/libexec/setup-named-chroot.sh.

class insights.parsers.setup_named_chroot.SetupNamedChroot(context)[source]

Bases: insights.core.Parser, dict

Class for parsing the /usr/libexec/setup-named-chroot.sh file.

Typical content of the filtered file is:

#!/bin/bash
# it MUST be listed last. (/var/named contains /var/named/chroot)
ROOTDIR_MOUNT='/etc/localtime /etc/named /etc/pki/dnssec-keys /etc/named.root.key /etc/named.conf
/etc/named.dnssec.keys /etc/named.rfc1912.zones /etc/rndc.conf /etc/rndc.key /usr/lib64/bind
/usr/lib/bind /etc/named.iscdlv.key /run/named /var/named /etc/protocols /etc/services'
    for all in $ROOTDIR_MOUNT; do
    # Check if file is mount target. Do not use /proc/mounts because detecting
raw

A list of all the active lines present

Type

list

Raises

SkipException -- When the file is empty or when the input content is not empty but there is no useful parsed data

Examples

>>> len(snc)
2
>>> snc['ROOTDIR_MOUNT']
['/etc/localtime', '/etc/named', '/etc/pki/dnssec-keys', '/etc/named.root.key', '/etc/named.conf', '/etc/named.dnssec.keys', '/etc/named.rfc1912.zones', '/etc/rndc.conf', '/etc/rndc.key', '/etc/named.iscdlv.key', '/etc/protocols', '/etc/services', '/usr/lib64/bind', '/usr/lib/bind', '/run/named', '/var/named']
parse_content(content)[source]

This method must be implemented by classes based on this class.

Slab allocator’s details.

SlabInfo - File /proc/slabinfo

class insights.parsers.slabinfo.SlabInfo(context)[source]

Bases: insights.core.Parser

Parse the content of the /proc/slabinfo file

Sample input data looks like:

slabinfo - version: 2.1
# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
sw_flow                0      0   1256   13    4 : tunables    0    0    0 : slabdata      0      0      0
nf_conntrack_ffffffffaf313a40     12     12    320   12    1 : tunables    0    0    0 : slabdata      1      1      0
xfs_dqtrx              0      0    528   15    2 : tunables    0    0    0 : slabdata      0      0      0
xfs_dquot              0      0    488    8    1 : tunables    0    0    0 : slabdata      0      0      0
xfs_ili             2264   2736    168   24    1 : tunables    0    0    0 : slabdata    114    114      0
xfs_inode           4845   5120    960    8    2 : tunables    0    0    0 : slabdata    640    640      0
xfs_efd_item          76     76    416   19    2 : tunables    0    0    0 : slabdata      4      4      0
xfs_btree_cur         18     18    216   18    1 : tunables    0    0    0 : slabdata      1      1      0
xfs_log_ticket        22     22    184   22    1 : tunables    0    0    0 : slabdata      1      1      0
bio-3                 60     60    320   12    1 : tunables    0    0    0 : slabdata      5      5      0
kcopyd_job             0      0   3312    9    8 : tunables    0    0    0 : slabdata      0      0      0
dm_uevent              0      0   2608   12    8 : tunables    0    0    0 : slabdata      0      0      0
dm_rq_target_io        0      0    136   30    1 : tunables    0    0    0 : slabdata      0      0      0
ip6_dst_cache         72     72    448    9    1 : tunables    0    0    0 : slabdata      8      8      0
RAWv6                 13     13   1216   13    4 : tunables    0    0    0 : slabdata      1      1      0
UDPLITEv6              0      0   1216   13    4 : tunables    0    0    0 : slabdata      0      0      0
UDPv6                 13     13   1216   13    4 : tunables    0    0    0 : slabdata      1      1      0
tw_sock_TCPv6          0      0    256   16    1 : tunables    0    0    0 : slabdata      0      0      0
TCPv6                 15     15   2112   15    8 : tunables    0    0    0 : slabdata      1      1      0
cfq_queue              0      0    232   17    1 : tunables    0    0    0 : slabdata      0      0      0
bsg_cmd                0      0    312   13    1 : tunables    10   20   30 : slabdata     40     50     60

Examples

>>> type(pslabinfo)
<class 'insights.parsers.slabinfo.SlabInfo'>
>>> len(pslabinfo.data.keys())
21
>>> pslabinfo.slab_object('bsg_cmd', 'active_slabs')
40
>>> pslabinfo.slab_object('bsg_cmd', 'limit')
10
parse_content(content)[source]

This method must be implemented by classes based on this class.

slab_details(slab_name)[source]

(dict): On success, returns the details of the given slab; otherwise returns None.

slab_object(slab_name, slab_obj)[source]

(int): On success, returns the value of the given slab object field; otherwise returns 0.

property slab_version

On success, returns the slab version; otherwise returns None.

Type

(str)
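The per-slab statistics row can be decoded with a plain whitespace split; the following standalone sketch mirrors the header's field names but is not the parser's actual implementation (parse_slab_row and FIELDS are made up for illustration):

```python
# Sketch: parse one /proc/slabinfo data row (version 2.1 layout shown in
# the sample above) into a dict keyed by the header's field names.
FIELDS = ['active_objs', 'num_objs', 'objsize', 'objperslab', 'pagesperslab',
          'limit', 'batchcount', 'sharedfactor',
          'active_slabs', 'num_slabs', 'sharedavail']

def parse_slab_row(line):
    tokens = line.split()
    # Keep only the numeric columns; the ':' separators and the words
    # 'tunables' / 'slabdata' are skipped.
    values = [int(t) for t in tokens[1:] if t.lstrip('-').isdigit()]
    return tokens[0], dict(zip(FIELDS, values))

name, row = parse_slab_row(
    "bsg_cmd 0 0 312 13 1 : tunables 10 20 30 : slabdata 40 50 60")
```

Here row['active_slabs'] is 40 and row['limit'] is 10, matching the slab_object('bsg_cmd', ...) examples above.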

SMARTctl - command /sbin/smartctl -a {device}

class insights.parsers.smartctl.SMARTctl(context)[source]

Bases: insights.core.CommandParser

Parser for output of smartctl -a for each drive in system.

This stores the information from the output of smartctl in the following properties:

  • device - the name of the device after /dev/ - e.g. sda

  • information - the -i info (vendor, product, etc)

  • health - overall health assessment (-H)

  • values - the SMART values (-c) - SMART config on drive firmware

  • attributes - the SMART attributes (-A) - run time data

For legacy access, these are also available as values in the info dictionary property, keyed to their name (i.e. info[‘device’])

Each object contains a different device; the shared information for this parser in Insights will be one or more devices, so see the example below for how to iterate through the available SMARTctl information for each device.

Sample (abbreviated) output:

smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.10.0-267.el7.x86_64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     ST500LM021-1KJ152
Serial Number:    W620AT02
LU WWN Device Id: 5 000c50 07817bb36
...

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                    was never started.
                    Auto Offline Data Collection: Disabled.
...

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   118   099   034    Pre-fail  Always       -       179599704
  3 Spin_Up_Time            0x0003   098   098   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       546
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
...

Examples

>>> for drive in shared[SMARTctl]:
...     print "Device:", drive.device
...     print "Model:", drive.information['Device Model']
...     print "Health check:", drive.health
...     print "Last self-test status:", drive.values['Self-test execution status']
...     print "Raw read error rate:", drive.attributes['Raw_Read_Error_Rate']['RAW_VALUE']
...
Device: /dev/sda
Model: ST500LM021-1KJ152
Health check: PASSED
Last self-test status: 0
Raw read error rate: 179599704
parse_content(content)[source]

This method must be implemented by classes based on this class.

SmartpdcSettings - file /etc/smart_proxy_dynflow_core/settings.yml

This module provides parsing for the smart_proxy_dynflow_core settings file. SmartpdcSettings is a parser for /etc/smart_proxy_dynflow_core/settings.yml files.

Typical output is:

# Path to dynflow database, leave blank for in-memory non-persistent database
:database:
:console_auth: true

# URL of the foreman, used for reporting back
:foreman_url: https://test.example.com

# SSL settings for client authentication against foreman.
:foreman_ssl_ca: /etc/foreman-proxy/foreman_ssl_ca.pem
:foreman_ssl_cert: /etc/foreman-proxy/foreman_ssl_cert.pem
:foreman_ssl_key: /etc/foreman-proxy/foreman_ssl_key.pem

# Listen on address
:listen: 0.0.0.0

# Listen on port
:port: 8008

Examples

>>> smartpdc_settings.data[':foreman_url']
'https://test.example.com'
>>> "/etc/foreman-proxy/foreman_ssl_ca.pem" in smartpdc_settings.data[':foreman_ssl_ca']
True
class insights.parsers.smartpdc_settings.SmartpdcSettings(context)[source]

Bases: insights.core.YAMLParser

Class for parsing the content of /etc/smart_proxy_dynflow_core/settings.yml.

Samba status commands

This module provides processing for the smbstatus command using the following parsers:

SmbstatusS - command /usr/bin/smbstatus -S

Smbstatusp - command /usr/bin/smbstatus -p

class insights.parsers.smbstatus.SmbstatusS(context, extra_bad_lines=[])[source]

Bases: insights.parsers.smbstatus.Statuslist

Class SmbstatusS parses the output of the smbstatus -S command.

Sample output of this command looks like:

Service      pid     Machine       Connected at                     Encryption   Signing
----------------------------------------------------------------------------------------
share_test   13668   10.66.208.149 Wed Sep 27 10:33:55 AM 2017 CST  -            -

The output of smbstatus -S is a fixed-width table, which can be parsed by the parse_fixed_table function.

Examples

>>> smbstatuss_info.data[0] == {'Signing': '-', 'Service': 'share_test', 'Encryption': '-', 'pid': '13668', 'Machine': '10.66.208.149', 'Connected_at': 'Wed Sep 27 10:33:55 AM 2017 CST'}
True
>>> smbstatuss_info.data[0]['pid']
'13668'
Raises

ParseException -- When there is no useful data, the input content is empty, or the content does not contain the header line.

data

List of dicts, where the keys in each dict are the column headers and each item in the list represents a connection.

Type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.
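The fixed-width layout mentioned above can be handled by slicing each row at the header's column offsets; this standalone sketch approximates the idea behind parse_fixed_table (parse_fixed is a hypothetical name, not the real function):

```python
import re

# Sketch: derive column names and start offsets from the header line
# (column names may contain single spaces, e.g. "Connected at"; columns
# are separated by runs of two or more spaces), then slice each data row
# at those offsets.
def parse_fixed(header, row):
    names, starts = [], []
    for m in re.finditer(r'\S+( \S+)*', header):
        names.append(m.group().replace(' ', '_'))
        starts.append(m.start())
    ends = starts[1:] + [len(row)]
    return {n: row[s:e].strip() for n, s, e in zip(names, starts, ends)}

header = "Service      pid     Machine       Connected at                     Encryption   Signing"
row    = "share_test   13668   10.66.208.149 Wed Sep 27 10:33:55 AM 2017 CST  -            -"
record = parse_fixed(header, row)
```

record['Connected_at'] then holds the whole timestamp even though it contains spaces, because the slice boundaries come from the header, not from the value.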

class insights.parsers.smbstatus.Smbstatusp(context, extra_bad_lines=[])[source]

Bases: insights.parsers.smbstatus.Statuslist

Class Smbstatusp parses the output of the smbstatus -p command.

Sample output of this command looks like:

Samba version 4.6.2
PID     Username     Group        Machine                                   Protocol Version  Encryption           Signing
--------------------------------------------------------------------------------------------------------------------------
12668   testsmb       testsmb       10.66.208.149 (ipv4:10.66.208.149:44376)  SMB2_02           -                    -

The output of smbstatus -p is a fixed-width table, which can be parsed by the parse_fixed_table function.

Examples

>>> smbstatusp_info.data[0] == {'Username': 'testsmb', 'Signing': '-', 'Group': 'testsmb', 'Encryption': '-', 'PID': '12668', 'Machine': '10.66.208.149 (ipv4:10.66.208.149:44376)', 'Protocol_Version': 'SMB2_02'}
True
>>> smbstatusp_info.data[0]['PID']
'12668'
Raises

ParseException -- When there is no useful data, the input content is empty, or the content does not contain the header line.

data

List of dicts, where the keys in each dict are the column headers and each item in the list represents a connection.

Type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.smbstatus.Statuslist(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Base class implementing shared code.

parse_content(content)[source]

This method must be implemented by classes based on this class.

Simultaneous Multithreading (SMT) parsers

Parsers included in this module are:

CpuSMTActive - file /sys/devices/system/cpu/smt/active

CpuCoreOnline - files matching /sys/devices/system/cpu/cpu[0-9]*/online

CpuSiblings - files matching /sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list

class insights.parsers.smt.CpuCoreOnline(context)[source]

Bases: insights.core.Parser

Class for parsing /sys/devices/system/cpu/cpu[0-9]*/online matching files. Reports whether a CPU core is online. Cpu0 is always online, so it does not have the “online” file.

Typical output of this command is:

1
1
1
Raises

SkipException -- When content is empty or cannot be parsed

Examples

>>> cpu_core.core_id
0
>>> cpu_core.on
True
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.smt.CpuSMTActive(context)[source]

Bases: insights.core.Parser

Class for parsing /sys/devices/system/cpu/smt/active file. Reports whether SMT is enabled and active.

Typical output of this command is:

1
Raises

SkipException -- When content is empty or cannot be parsed.

Examples

>>> cpu_smt.on
True
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.smt.CpuSiblings(context)[source]

Bases: insights.core.Parser

Class for parsing /sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list matching files. Reports CPU core siblings.

Typical output of this command is:

0,2
1,3
0,2
1,3
Raises

SkipException -- When content is empty or cannot be parsed

Examples

>>> cpu_siblings.core_id
0
>>> cpu_siblings.siblings
[0, 2]
parse_content(content)[source]

This method must be implemented by classes based on this class.

TcpIpStats - file /proc/net/snmp

The TcpIpStats class implements the parsing of the /proc/net/snmp file, which contains TCP/IP stats for the individual layers.

TcpIpStatsIPV6 - file /proc/net/snmp6

The TcpIpStatsIPV6 class implements the parsing of the /proc/net/snmp6 file, which contains TCP/IP stats for the individual layers.

class insights.parsers.snmp.TcpIpStats(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Parser for /proc/net/snmp file.

Sample input is provided in the Examples.

Examples

>>> SNMP_CONTENT = '''
... Ip: Forwarding DefaultTTL InReceives InHdrErrors InAddrErrors ForwDatagrams InUnknownProtos InDiscards InDelivers OutRequests OutDiscards OutNoRoutes ReasmTimeout ReasmReqds ReasmOKs ReasmFails FragOKs FragFails FragCreates
... Ip: 2 64 43767 0 0 0 0 0 41807 18407 12 73 0 0 0 10 0 0 0
... Icmp: InMsgs InErrors InCsumErrors InDestUnreachs InTimeExcds InParmProbs InSrcQuenchs InRedirects InEchos InEchoReps InTimestamps InTimestampReps InAddrMasks InAddrMaskReps OutMsgs OutErrors OutDestUnreachs OutTimeExcds OutParmProbs OutSrcQuenchs OutRedirects OutEchos OutEchoReps OutTimestamps OutTimestampReps OutAddrMasks OutAddrMaskReps
... Icmp: 34 0 0 34 0 0 0 0 0 0 0 0 0 0 44 0 44 0 0 0 0 0 0 0 0 0 0
... IcmpMsg: InType3 OutType3
... IcmpMsg: 34 44
... Tcp: RtoAlgorithm RtoMin RtoMax MaxConn ActiveOpens PassiveOpens AttemptFails EstabResets CurrEstab InSegs OutSegs RetransSegs InErrs OutRsts InCsumErrors
... Tcp: 1 200 120000 -1 444 0 0 6 7 19269 17050 5 4 234 0
... Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors InCsumErrors IgnoredMulti
... Udp: 18905 34 0 1348 0 0 0 3565
... UdpLite: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors InCsumErrors IgnoredMulti
... UdpLite: 0 0 0 0 0 0 0 0
... '''.strip()
>>> from insights.tests import context_wrap
>>> shared = {TcpIpStats: TcpIpStats(context_wrap(SNMP_CONTENT))}
>>> stats = shared[TcpIpStats]
>>> snmp_stats = stats.get("Ip")
>>> print snmp_stats["DefaultTTL"]
64
>>> snmp_stats = stats.get("Udp")
>>> print snmp_stats["InDatagrams"]
18905

Resultant Data:

{
    'Ip':
        {
            'FragCreates': 0,
            'ReasmFails': 10,
            'Forwarding': 2,
            'ReasmOKs': 0,
            'ReasmReqds': 0,
            'ReasmTimeout': 0,
            ...
            ...
        },
    'Icmp':
        {
            'InRedirects': 0,
            'InMsgs': 34,
            'InSrcQuenchs': 0,
            ...
            ...
        }
    ...
    ...
}
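The header/value line pairing in /proc/net/snmp can be sketched standalone (an approximation of the idea, not the TcpIpStats implementation; parse_proc_net_snmp is a made-up name):

```python
# Sketch: /proc/net/snmp pairs a header line and a value line per
# protocol prefix ("Ip:", "Tcp:", ...); zip each pair into a dict.
def parse_proc_net_snmp(content):
    data = {}
    lines = content.splitlines()
    for head, vals in zip(lines[::2], lines[1::2]):
        proto = head.split(':')[0]
        names = head.split()[1:]
        numbers = [int(v) for v in vals.split()[1:]]
        data[proto] = dict(zip(names, numbers))
    return data

sample = "\n".join([
    "Ip: Forwarding DefaultTTL InReceives",
    "Ip: 2 64 43767",
    "Udp: InDatagrams NoPorts InErrors",
    "Udp: 18905 34 0",
])
stats = parse_proc_net_snmp(sample)
```

stats['Ip']['DefaultTTL'] then yields 64, matching the doctest above.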
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.snmp.TcpIpStatsIPV6(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Parser for /proc/net/snmp6 file.

Sample input is provided in the Examples.

Examples

>>> SNMP_CONTENT = '''
... Ip6InReceives                       757
... Ip6InHdrErrors                      0
... Ip6InTooBigErrors                   0
... Ip6InNoRoutes                       0
... Ip6InAddrErrors                     0
... Ip6InDiscards                       10
... Ip6OutForwDatagrams                 0
... Ip6OutDiscards                      0
... Ip6OutNoRoutes                      0
... Ip6InOctets                         579410
... Icmp6OutErrors                      0
... Icmp6InCsumErrors                   0
...'''.strip()
>>> from insights.tests import context_wrap
>>> shared = {TcpIpStatsIPV6: TcpIpStatsIPV6(context_wrap(SNMP_CONTENT))}
>>> stats = shared[TcpIpStatsIPV6]
>>> IP6_RX_stats = stats.get("Ip6InReceives")
>>> print IP6_RX_stats
757
>>> IP6_In_Disc = stats.get("Ip6InDiscards")
>>> print IP6_In_Disc
10

Resultant Data:

{
    'Ip6InReceives': 757,
    'Ip6InHdrErrors': 0,
    'Ip6InTooBigErrors': 0,
    'Ip6InNoRoutes': 0,
    'Ip6InAddrErrors': 0,
    'Ip6InDiscards': 10,
    ...
    ...
}
parse_content(content)[source]

This method must be implemented by classes based on this class.

SockStats - file /proc/net/sockstat

The SockStats class implements the parsing of the /proc/net/sockstat file, which contains TCP/IP stats for the individual layers.

class insights.parsers.sockstat.SockStats(context)[source]

Bases: insights.core.Parser, dict

Parser for /proc/net/sockstat file.

Sample input is provided in the Examples.

Sample content:

sockets: used 3037
TCP: inuse 1365 orphan 17 tw 2030 alloc 2788 mem 4109
UDP: inuse 6 mem 3
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0

Examples

>>> type(sock_obj)
<class 'insights.parsers.sockstat.SockStats'>
>>> sock_obj.seg_details('tcp')['mem']
'4109'
>>> sock_obj.seg_element_details('tcp', 'mem')
4109
>>> sock_obj.seg_element_details('frag', 'inuse')
0
>>> sock_obj.get('sockets')
{'used': '3037'}
>>> sock_obj.get('sockets').get('used')
'3037'
>>> sock_obj.seg_element_details('tcp', 'abc') is None
True

Resultant Data:

{
    'sockets':
            {
                'used': '3037'
            },
    'tcp':
         {
             'inuse': '1365',
             'orphan': '17',
             'tw': '2030',
             'alloc': '2788',
             'mem': '4109'
         },
    'udp':
         {
             'inuse': '6',
             'mem': '3'
         },
    'udplite':
             {
                'inuse': '0'
             },
    'raw':
         {
            'inuse': '0'
         },
    'frag':
          {
            'inuse': '0',
            'memory': '0'
          }
}
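The key/value pairs per segment line can be sketched standalone (illustrative only; parse_sockstat is a hypothetical name, and values are kept as strings to match the resultant data above):

```python
# Sketch: each /proc/net/sockstat line is "<SEG>: key1 val1 key2 val2 ...";
# store as {segment_lowercased: {key: value}} with values left as strings.
def parse_sockstat(content):
    data = {}
    for line in content.splitlines():
        seg, _, rest = line.partition(':')
        tokens = rest.split()
        data[seg.lower()] = dict(zip(tokens[::2], tokens[1::2]))
    return data

sample = "\n".join([
    "sockets: used 3037",
    "TCP: inuse 1365 orphan 17 tw 2030 alloc 2788 mem 4109",
    "FRAG: inuse 0 memory 0",
])
socks = parse_sockstat(sample)
```

socks['tcp']['mem'] then yields '4109', matching the seg_details('tcp') example above.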
Raises

SkipException -- When contents are empty

parse_content(content)[source]

This method must be implemented by classes based on this class.

seg_details(seg)[source]

Returns (dict): On success, returns the detailed memory consumption of the given segment (TCP/IP layer); on failure returns None.

seg_element_details(seg, elem)[source]

Returns (int): On success, returns the value of the given element of the segment (TCP/IP layer); on failure returns None.

property sock_stats

On success, returns the detailed memory consumption of all TCP/IP layers as a single dict; on failure returns None.

Type

Returns (dict)

SoftNetStats - file /proc/net/softnet_stat

This parser parses the per-device network stats in /proc/net/softnet_stat. Each row holds the stats for one CPU, including: the number of packets processed, packet_process (first column); the number of packet drops, packet_drops (second column); the time squeeze, i.e. net_rx_action performed, time_squeeze (third column); CPU collisions occurring while obtaining the device lock during transmit, cpu_collision (eighth column); the number of times the CPU was woken up, received_rps (ninth column); and the number of times the flow limit was reached, flow_limit_count (tenth column).

The typical contents of cat /proc/net/softnet_stat file is as below:

00008e78 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
000040ee 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
0001608c 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
0000372f 00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000

Column-01: packet_process: Packet processed by each CPU.
Column-02: packet_drop: Packets dropped.
Column-03: time_squeeze: net_rx_action.
Column-08: cpu_collision: collision occur while obtaining device lock while transmitting.
Column-09: received_rps: number of times cpu woken up received_rps.
Column-10: flow_limit_count: number of times reached flow limit count.

Note

There is minimal documentation about these fields in the file; the columns are not labeled and could change between kernel releases. This format is compatible with RHEL-6 and RHEL-7 releases, and it is unlikely that the positions of these values will change in the short term.

Examples

>>> SOFTNET_STATS = '''
... 00008e78 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
... 000040ee 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
... 0001608c 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
... 0000372f 00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
...'''.strip()
>>> softnet_obj = SoftNetStats(context_wrap(SOFTNET_STATS))
>>> softnet_obj.is_packet_drops()
True
>>> softnet_obj.cpu_instances
4
>>> softnet_obj.per_cpu_nstat('packet_drops')
[0, 0, 0, 1]
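The hexadecimal column decoding can be sketched standalone (illustrative only; per_cpu_drops is a made-up helper extracting the second column, packet_drops):

```python
# Sketch: each /proc/net/softnet_stat row is one CPU and every field is
# hexadecimal; packet_drops is the second column (index 1).
def per_cpu_drops(content):
    return [int(line.split()[1], 16) for line in content.splitlines()]

sample = "\n".join([
    "00008e78 00000000 00000000 00000000 00000000",
    "0000372f 00000001 00000000 00000000 00000000",
])
drops = per_cpu_drops(sample)
```

Here drops is [0, 1], and any(drops) mirrors the idea behind is_packet_drops.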
class insights.parsers.softnet_stat.SoftNetStats(context)[source]

Bases: insights.core.Parser

Parses the contents of the /proc/net/softnet_stat file.

cpu_instances = None

List of network stats per CPU instance

property is_packet_drops

Checks whether a packet drop has occurred on any CPU.

Parameters

None -- No input argument for the function

Returns

True if any packet drops were observed, False otherwise.

Return type

(bool)

parse_content(contents)[source]

This method must be implemented by classes based on this class.

per_cpu_nstat(key)[source]

Get network stats per column for all cpu.

Parameters

key (str) -- Column name, e.g. packet_drops or received_rps.

Returns

Column stats per CPU.

Return type

(list)

Software Collections list output - command scl --list RHEL-6/7

This module provides a parser for the list output of scl. This spec is valid for RHEL-6/7 only; -l|--list is deprecated in RHEL-8, where the same functionality is provided by the scl list-collections command.

Parser provided by this module is:

SoftwareCollectionsListInstalled - command scl --list

class insights.parsers.software_collections_list.SoftwareCollectionsListInstalled(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

An object for parsing the output of scl --list.

Sample input file:

devtoolset-7
httpd24
python27
rh-mysql57
rh-nodejs8
rh-php71
rh-python36
rh-ruby24

Examples

>>> type(collections)
<class 'insights.parsers.software_collections_list.SoftwareCollectionsListInstalled'>
>>> len(collections.records)
8
>>> coll1 = collections.records[0]
>>> coll1
'devtoolset-7'
>>> coll2 = collections.records[4]
>>> coll2
'rh-nodejs8'
>>> collections.exists('rh-ruby24')
True
>>> collections.exists('rh-missing_colection')
False
exists(collection_name)[source]

Checks if the collection is installed on the system.

Parameters

collection_name (str) -- collection name

Returns

True if the collection exists, False otherwise.

Return type

bool

parse_content(content)[source]

This method must be implemented by classes based on this class.

SshDConfig - file /etc/ssh/sshd_config

The ssh module provides parsing for the sshd_config file. The SshDConfig class implements the parsing and provides a list of all configuration lines present in the file.

Sample input is provided in the Examples.

Examples

>>> sshd_config_input = '''
... #       $OpenBSD: sshd_config,v 1.93 2014/01/10 05:59:19 djm Exp $
...
... Port 22
... #AddressFamily any
... ListenAddress 10.110.0.1
... Port 22
... ListenAddress 10.110.1.1
... #ListenAddress ::
...
... # The default requires explicit activation of protocol 1
... #Protocol 2
... Protocol 1
... '''.strip()
>>> from insights.tests import context_wrap
>>> shared = {SshDConfig: SshDConfig(context_wrap(sshd_config_input))}
>>> sshd_config = shared[SshDConfig]
>>> 'Port' in sshd_config
True
>>> 'PORT' in sshd_config
True
>>> 'AddressFamily' in sshd_config
False
>>> sshd_config['port']
['22', '22']
>>> sshd_config['Protocol']
['1']
>>> [line for line in sshd_config if line.keyword == 'Port']
[KeyValue(keyword='Port', value='22', kw_lower='port'), KeyValue(keyword='Port', value='22', kw_lower='port')]
>>> sshd_config.last('ListenAddress')
'10.110.1.1'
>>> sshd_config.get_line('ListenAddress')
'ListenAddress 10.110.1.1'
>>> sshd_config.get_values('ListenAddress')
['10.110.0.1', '10.110.1.1']
>>> sshd_config.get_values('ListenAddress', default='0.0.0.0')
['10.110.0.1', '10.110.1.1']
>>> sshd_config.get_values('ListenAddress', join_with=',')
'10.110.0.1,10.110.1.1'
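The parse into KeyValue namedtuples can be sketched roughly as follows (a simplified standalone illustration, not the actual SshDConfig implementation):

```python
from collections import namedtuple

# Simplified sketch: skip blanks and comments, split keyword from value,
# and keep a lowercased keyword for case-insensitive lookup.
KeyValue = namedtuple('KeyValue', ['keyword', 'value', 'kw_lower', 'line'])

def parse_sshd_lines(content):
    result = []
    for raw in content:
        line = raw.strip()
        if not line or line.startswith('#'):
            continue
        kw, _, value = line.partition(' ')
        result.append(KeyValue(kw, value.strip(), kw.lower(), line))
    return result
```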
class insights.parsers.ssh.SshDConfig(context)[source]

Bases: insights.core.Parser

Parsing for /etc/ssh/sshd_config file.

Properties:

lines (list): List of KeyValue namedtuples for each line in the configuration file.

keywords (set): Set of keywords present in the configuration file, each keyword converted to lowercase.

class KeyValue(keyword, value, kw_lower, line)

Bases: tuple

namedtuple: Represents a keyword-value pair as a namedtuple, with the keyword also stored in lowercase for case-insensitive lookup.

property keyword
property kw_lower
property line
property value
get(keyword)[source]

Get all declarations of this keyword in the configuration file.

Returns

a list of named tuples with the following properties:
  • keyword - the keyword as given on that line

  • value - the value of the keyword

  • kw_lower - the keyword converted to lower case

  • line - the complete line as found in the config file

Return type

(list)

get_line(keyword, default='')[source]

(str): Get the line with the last declaration of this keyword in the configuration file, optionally pretending that we had a line with the default value and a comment informing the user that this was a created default line.

This is a hack, but it’s commonly used in the sshd configuration because of the many lines that are commonly omitted because they have their default value.

Parameters
  • keyword (str) -- Keyword to find

  • default -- optional value to supply if not found

get_values(keyword, default='', join_with=None, split_on=None)[source]

Get all the values assigned to this keyword.

Firstly, if the keyword is not found in the configuration file, the value of the default option is used (defaulting to '').

Then, if the join_with option is given, this string is used to join the values found on each separate definition line. Otherwise, each separate definition line is returned as a string.

Finally, if the split_on option is given, this string is used to split the combined string above into a list. Otherwise, the combined string is returned as is.
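The three steps above can be sketched as a standalone function (illustrative only, assuming lines is a list of (keyword, value) pairs; not the real implementation):

```python
def get_values(lines, keyword, default='', join_with=None, split_on=None):
    # Collect values from every definition line matching the keyword
    values = [v for k, v in lines if k.lower() == keyword.lower()]
    if not values:
        values = [default]          # fall back to the supplied default
    if join_with is None:
        return values               # one string per definition line
    combined = join_with.join(values)
    if split_on is not None:
        return combined.split(split_on)
    return combined
```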

last(keyword, default=None)[source]

str: Returns the value of the last keyword found in config.

Parameters
  • keyword (str) -- Keyword to find

  • default -- optional value to supply if not found

line_uses_plus(keyword)[source]

(Union[bool, None]): Get the line with the last declaration of this keyword in the configuration file and return whether the “+” option syntax is used.

A “+” before the list of values denotes that the values are appended to the openssh defaults for the particular keyword.

Returns True if the “+” is used, False if a line with the keyword was found but it doesn’t use the “+” or None if such a line doesn’t exist.

Reasoning for the implementation:

  • The “+” means “added to the defaults”.

  • The defaults depend on the particular openssh-server version and the parser doesn’t know the version.

  • Therefore, it is infeasible to add the evaluation logic for “+” into get_values().

  • Adding the logic into a combiner would require the combiner to have a complete database of all defaults in all openssh-server versions - infeasible again.

  • Not every keyword allows the use of “+” - it wouldn’t make sense to parse “+” into KeyValue, as the parse would be meaningful for some options and meaningless for others. Building a database of which openssh-server versions support it for which options would likewise be infeasible.

  • The approach chosen as the most sensible is this: line_uses_plus() is used selectively by a rule for those options that support it, and it is up to the developer of such a rule to check those options manually.

Parameters

keyword (str) -- Keyword to find
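The check itself reduces to inspecting the last matching value for a leading “+” (a hedged standalone sketch over (keyword, value) pairs, not the actual implementation):

```python
def line_uses_plus(lines, keyword):
    # lines: (keyword, value) pairs; returns True/False, or None when
    # the keyword never appears in the configuration
    matches = [v for k, v in lines if k.lower() == keyword.lower()]
    if not matches:
        return None
    return matches[-1].lstrip().startswith('+')
```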

parse_content(content)[source]

This method must be implemented by classes based on this class.

SshConfig - file for ssh client config

This module contains parsers that check the ssh client config files.

Parsers provided by this module are:

EtcSshConfig - file /etc/ssh/ssh_config

ForemanSshConfig - file /usr/share/foreman/.ssh/ssh_config

ForemanProxySshConfig - file /usr/share/foreman-proxy/.ssh/ssh_config

class insights.parsers.ssh_client_config.EtcSshConfig(context)[source]

Bases: insights.parsers.ssh_client_config.SshClientConfig

This parser reads the file /etc/ssh/ssh_config

Sample output:

#   ProxyCommand ssh -q -W %h:%p gateway.example.com
#   RekeyLimit 1G 1h
#
# Uncomment this if you want to use .local domain
# Host *.local
#   CheckHostIP no
ProxyCommand ssh -q -W %h:%p gateway.example.com

Host *
    GSSAPIAuthentication yes
# If this option is set to yes then remote X11 clients will have full access
# to the original X11 display. As virtually no X11 client supports the untrusted
# mode correctly we set this to yes.
    ForwardX11Trusted yes
# Send locale-related environment variables
    SendEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
    SendEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
    SendEnv LC_IDENTIFICATION LC_ALL LANGUAGE
    SendEnv XMODIFIERS

Host proxytest
    HostName 192.168.122.2
global_lines

The list of site-wide configuration, as namedtuple(‘KeyValue’, [‘keyword’, ‘value’, ‘line’]).

Type

list

host_lines

The dict of all host-specific definitions, as {‘Host_name’: [namedtuple(‘KeyValue’, [‘keyword’, ‘value’, ‘line’])]}

Type

dict

Examples

>>> len(etcsshconfig.global_lines)
1
>>> etcsshconfig.global_lines[0].keyword
'ProxyCommand'
>>> etcsshconfig.global_lines[0].value
'ssh -q -W %h:%p gateway.example.com'
>>> 'Host_*' in etcsshconfig.host_lines
True
>>> etcsshconfig.host_lines['Host_*'][0].keyword
'GSSAPIAuthentication'
>>> etcsshconfig.host_lines['Host_*'][0].value
'yes'
>>> etcsshconfig.host_lines['Host_*'][1].keyword
'ForwardX11Trusted'
>>> etcsshconfig.host_lines['Host_*'][1].value
'yes'
>>> etcsshconfig.host_lines['Host_proxytest'][0].keyword
'HostName'
>>> etcsshconfig.host_lines['Host_proxytest'][0].value
'192.168.122.2'
class insights.parsers.ssh_client_config.ForemanProxySshConfig(context)[source]

Bases: insights.parsers.ssh_client_config.SshClientConfig

This parser reads the file /usr/share/foreman-proxy/.ssh/ssh_config

Sample output:

#   ProxyCommand ssh -q -W %h:%p gateway.example.com
#   RekeyLimit 1G 1h
#
# Uncomment this if you want to use .local domain
# Host *.local
#   CheckHostIP no
ProxyCommand ssh -q -W %h:%p gateway.example.com

Host *
    GSSAPIAuthentication yes
# If this option is set to yes then remote X11 clients will have full access
# to the original X11 display. As virtually no X11 client supports the untrusted
# mode correctly we set this to yes.
    ForwardX11Trusted yes
# Send locale-related environment variables
    SendEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
    SendEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
    SendEnv LC_IDENTIFICATION LC_ALL LANGUAGE
    SendEnv XMODIFIERS

Host proxytest
    HostName 192.168.122.2
global_lines

The list of site-wide configuration, as namedtuple(‘KeyValue’, [‘keyword’, ‘value’, ‘line’]).

Type

list

host_lines

The dict of all host-specific definitions, as {‘Host_name’: [namedtuple(‘KeyValue’, [‘keyword’, ‘value’, ‘line’])]}

Type

dict

Examples

>>> len(foreman_proxy_ssh_config.global_lines)
1
>>> foreman_proxy_ssh_config.global_lines[0].keyword
'ProxyCommand'
>>> foreman_proxy_ssh_config.global_lines[0].value
'ssh -q -W %h:%p gateway.example.com'
>>> 'Host_*' in foreman_proxy_ssh_config.host_lines
True
>>> foreman_proxy_ssh_config.host_lines['Host_*'][0].keyword
'GSSAPIAuthentication'
>>> foreman_proxy_ssh_config.host_lines['Host_*'][0].value
'yes'
>>> foreman_proxy_ssh_config.host_lines['Host_*'][1].keyword
'ForwardX11Trusted'
>>> foreman_proxy_ssh_config.host_lines['Host_*'][1].value
'yes'
>>> foreman_proxy_ssh_config.host_lines['Host_proxytest'][0].keyword
'HostName'
>>> foreman_proxy_ssh_config.host_lines['Host_proxytest'][0].value
'192.168.122.2'
class insights.parsers.ssh_client_config.ForemanSshConfig(context)[source]

Bases: insights.parsers.ssh_client_config.SshClientConfig

This parser reads the file /usr/share/foreman/.ssh/ssh_config

Sample output:

#   ProxyCommand ssh -q -W %h:%p gateway.example.com
#   RekeyLimit 1G 1h
#
# Uncomment this if you want to use .local domain
# Host *.local
#   CheckHostIP no
ProxyCommand ssh -q -W %h:%p gateway.example.com

Host *
    GSSAPIAuthentication yes
# If this option is set to yes then remote X11 clients will have full access
# to the original X11 display. As virtually no X11 client supports the untrusted
# mode correctly we set this to yes.
    ForwardX11Trusted yes
# Send locale-related environment variables
    SendEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
    SendEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
    SendEnv LC_IDENTIFICATION LC_ALL LANGUAGE
    SendEnv XMODIFIERS

Host proxytest
    HostName 192.168.122.2
global_lines

The list of site-wide configuration, as namedtuple(‘KeyValue’, [‘keyword’, ‘value’, ‘line’]).

Type

list

host_lines

The dict of all host-specific definitions, as {‘Host_name’: [namedtuple(‘KeyValue’, [‘keyword’, ‘value’, ‘line’])]}

Type

dict

Examples

>>> len(foremansshconfig.global_lines)
1
>>> foremansshconfig.global_lines[0].keyword
'ProxyCommand'
>>> foremansshconfig.global_lines[0].value
'ssh -q -W %h:%p gateway.example.com'
>>> 'Host_*' in foremansshconfig.host_lines
True
>>> foremansshconfig.host_lines['Host_*'][0].keyword
'GSSAPIAuthentication'
>>> foremansshconfig.host_lines['Host_*'][0].value
'yes'
>>> foremansshconfig.host_lines['Host_*'][1].keyword
'ForwardX11Trusted'
>>> foremansshconfig.host_lines['Host_*'][1].value
'yes'
>>> foremansshconfig.host_lines['Host_proxytest'][0].keyword
'HostName'
>>> foremansshconfig.host_lines['Host_proxytest'][0].value
'192.168.122.2'
class insights.parsers.ssh_client_config.SshClientConfig(context)[source]

Bases: insights.core.Parser

Base class for ssh client configuration file.

Raises

SkipException -- When the input content is empty or no parse results are found.

class KeyValue(keyword, value, line)

Bases: tuple

property keyword
property line
property value
parse_content(content)[source]

This method must be implemented by classes based on this class.

SSSD_Config - file /etc/sssd/sssd.config

SSSD’s configuration file is in a standard ‘ini’ format.

The ‘sssd’ section will define one or more active domains, which are then configured in the ‘domain/{domain}’ section of the configuration. These domains are then available via the ‘domains’ method, and the configuration of a domain can be fetched as a dictionary using the ‘domain_config’ method.

Example

>>> sssd_conf = shared[SSSD_Config]
>>> sssd_conf.getint('nss', 'reconnection_retries')
3
>>> sssd_conf.domains()
['example.com']
>>> domain = sssd_conf.domain_config('example.com')
>>> 'ldap_uri' in domain
True
class insights.parsers.sssd_conf.SSSD_Config(context)[source]

Bases: insights.core.IniConfigFile

Parse the content of the /etc/sssd/sssd.config file.

The ‘sssd’ section must always exist. Within that, the ‘domains’ parameter is usually defined to give a comma-separated list of the domains that sssd is to manage.

domain_config(domain)[source]

Return the configuration dictionary for a specific domain, given its raw name as listed in the ‘domains’ parameter of the ‘sssd’ section. This looks up the equivalent ‘domain/{domain}’ section of the config file.

property domains

Returns the list of domains defined in the ‘sssd’ section. This is used to refer to the domain-specific sections of the configuration.
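Since the file is standard ‘ini’, the domains/domain_config lookups can be sketched with stdlib configparser (illustrative only; the real class builds on insights.core.IniConfigFile):

```python
import configparser

def domains(cfg):
    # Comma-separated 'domains' parameter of the [sssd] section
    return [d.strip() for d in cfg['sssd']['domains'].split(',')]

def domain_config(cfg, domain):
    # Configuration of one domain lives in its 'domain/{domain}' section
    return dict(cfg['domain/' + domain])
```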

SSSDLog - files matching /var/log/sssd/*.log

class insights.parsers.sssd_logs.SSSDLog(context)[source]

Bases: insights.core.LogFileOutput

Parser class for reading SSSD log files. The main work is done by the LogFileOutput super-class.

Note

Please refer to its super-class insights.core.LogFileOutput

Sample input:

(Tue Feb 14 09:45:02 2017) [sssd] [sbus_remove_timeout] (0x2000): 0x7f5aceb6a970
(Tue Feb 14 09:45:02 2017) [sssd] [sbus_dispatch] (0x4000): dbus conn: 0x7f5aceb5cff0
(Tue Feb 14 09:45:02 2017) [sssd] [sbus_dispatch] (0x4000): Dispatching.
(Tue Feb 14 09:45:02 2017) [sssd] [sbus_remove_timeout] (0x2000): 0x7f5aceb63eb0
(Tue Feb 14 09:45:02 2017) [sssd] [sbus_dispatch] (0x4000): dbus conn: 0x7f5aceb578b0
(Tue Feb 14 09:45:02 2017) [sssd] [sbus_dispatch] (0x4000): Dispatching.
(Tue Feb 14 09:45:02 2017) [sssd] [sbus_remove_timeout] (0x2000): 0x7f5aceb60f30
(Tue Feb 14 09:45:02 2017) [sssd] [sbus_dispatch] (0x4000): dbus conn: 0x7f5aceb58360
(Tue Feb 14 09:45:06 2015) [sssd] [monitor_hup] (0x0020): Received SIGHUP.
(Tue Feb 14 09:45:07 2015) [sssd] [te_server_hup] (0x0020): Received SIGHUP. Rotating logfiles.

Each line is parsed into a dictionary with the following keys:

  • timestamp - the date of the log line (as a string)

  • datetime - the date as a datetime object (if conversion is possible)

  • module - the module logging the message

  • function - the function within the module

  • level - the debug level (as a string)

  • message - the body of the message

  • raw_message - the raw message before being split.

Examples

>>> logs = shared[SSSDLog]
>>> hups = logs.get("SIGHUP")
>>> len(hups)
2
>>> hups[0]['module']
'monitor_hup'

Subscription manager list outputs - command subscription-manager list

This module provides parsers for various list outputs of subscription-manager.

Parsers provided by this module are:

SubscriptionManagerListConsumed - command subscription-manager list --consumed

SubscriptionManagerListInstalled - command subscription-manager list --installed

class insights.parsers.subscription_manager_list.SubscriptionManagerList(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

A general object for parsing the output of subscription-manager list. This should be subclassed to read the specific output - e.g. --consumed or --installed.

parse_content(content)[source]

This method must be implemented by classes based on this class.

search(*args, **kwargs)[source]

Search for records that match the given keys and values. See the insights.parsers.keyword_search() function for more details on usage.

Parameters

**kwargs -- Key-value pairs of search parameters.

Returns

A list of records that matched the search criteria.

Return type

(list)

Examples

>>> len(consumed.search(Service_Level='PREMIUM'))
1
>>> consumed.search(Provides__contains='Red Hat Enterprise Virtualization')
[]
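The keyword_search-style matching shown above can be approximated as follows (a rough sketch; the real helper is insights.parsers.keyword_search, and the underscore-to-space mapping of key names is an assumption based on the examples):

```python
def search(records, **kwargs):
    # 'Service_Level' matches the 'Service Level' field; a '__contains'
    # suffix tests membership instead of equality.
    def matches(rec, key, val):
        if key.endswith('__contains'):
            field = key[:-len('__contains')].replace('_', ' ')
            return val in rec.get(field, '')
        return rec.get(key.replace('_', ' ')) == val
    return [r for r in records
            if all(matches(r, k, v) for k, v in kwargs.items())]
```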
class insights.parsers.subscription_manager_list.SubscriptionManagerListConsumed(context, extra_bad_lines=[])[source]

Bases: insights.parsers.subscription_manager_list.SubscriptionManagerList

Read the output of subscription-manager list --consumed.

Sample input file:

+-------------------------------------------+
   Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Enterprise Linux Server, Premium (1-2 sockets) (Up to 1 guest)
Provides:          Oracle Java (for RHEL Server)
                   Red Hat Software Collections Beta (for RHEL Server)
                   Red Hat Enterprise Linux Server
                   Red Hat Beta
SKU:               RH0155783S
Contract:          12345678
Account:           1000001
Serial:            0102030405060708090
Pool ID:           8a85f981477e5284014783abaf5d4dcd
Active:            True
Quantity Used:     1
Service Level:     PREMIUM
Service Type:      L1-L3
Status Details:    Subscription is current
Subscription Type: Standard
Starts:            11/14/14
Ends:              07/06/15
System Type:       Physical

Examples

>>> type(consumed)
<class 'insights.parsers.subscription_manager_list.SubscriptionManagerListConsumed'>
>>> len(consumed.records)
1
>>> sub1 = consumed.records[0]
>>> sub1['SKU']
'RH0155783S'
>>> sub1['Active']  # Type conversion on Active field
True
>>> sub1['Status Details']  # Keys appear as given
'Subscription is current'
>>> sub1['Provides'][1]
'Red Hat Software Collections Beta (for RHEL Server)'
>>> sub1['Starts']  # Basic field as text - note month/day/year
'11/14/14'
>>> sub1['Starts timestamp'].year
2014
>>> consumed.all_current  # Are all subscriptions listed as current?
True
property all_current

(bool) Does every subscription record have the Status Details value set to ‘Subscription is current’?

class insights.parsers.subscription_manager_list.SubscriptionManagerListInstalled(context, extra_bad_lines=[])[source]

Bases: insights.parsers.subscription_manager_list.SubscriptionManagerList

Read the output of subscription-manager list --installed.

Sample input file:

+-------------------------------------------+
Installed Product Status
+-------------------------------------------+
Product Name:   Red Hat Software Collections (for RHEL Server)
Product ID:     201
Version:        2
Arch:           x86_64
Status:         Subscribed
Status Details:
Starts:         04/27/15
Ends:           04/27/16

Product Name:   Red Hat Enterprise Linux Server
Product ID:     69
Version:        7.1
Arch:           x86_64
Status:         Subscribed
Status Details:
Starts:         04/27/15
Ends:           04/27/16

Examples

>>> type(installed)
<class 'insights.parsers.subscription_manager_list.SubscriptionManagerListInstalled'>
>>> len(installed.records)
2
>>> prod1 = installed.records[0]
>>> prod1['Product ID']  # Note - not converted to number
'201'
>>> prod1['Starts']  # String date as is
'04/27/15'
>>> prod1['Starts timestamp'].year  # Extra converted to date
2015
>>> installed.all_subscribed
True
property all_subscribed

(bool) Does every product record have the Status value set to ‘Subscribed’?

Subscription manager release commands

Shared parser for parsing output of the subscription-manager release commands.

SubscriptionManagerReleaseShow - command subscription-manager release --show

class insights.parsers.subscription_manager_release.SubscriptionManagerReleaseShow(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class for parsing the output of subscription-manager release --show command.

Typical output of the command is:

Release: 7.2
set

the set release.

Type

str

major

the major version of the set release.

Type

int

minor

the minor version of the set release.

Type

int

Examples

>>> type(rhsm_rel)
<class 'insights.parsers.subscription_manager_release.SubscriptionManagerReleaseShow'>
>>> rhsm_rel.set
'7.2'
>>> rhsm_rel.major
7
>>> rhsm_rel.minor
2
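The set/major/minor split can be sketched as follows (illustrative, assuming the 'Release: X.Y' format shown above; a release without a minor part is left as None here, which is an assumption):

```python
def parse_release(line):
    # 'Release: 7.2' -> ('7.2', 7, 2)
    release = line.split(':', 1)[1].strip()
    parts = release.split('.')
    major = int(parts[0])
    minor = int(parts[1]) if len(parts) > 1 else None
    return release, major, minor
```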
parse_content(content)[source]

This method must be implemented by classes based on this class.

Swift Conf Files - file /etc/swift/

This module provides parsers for swift config files under /etc/swift directory.

SwiftObjectExpirerConf - file /etc/swift/object-expirer.conf

SwiftProxyServerConf - file /etc/swift/proxy-server.conf

SwiftConf - file /etc/swift/swift.conf

class insights.parsers.swift_conf.SwiftConf(context)[source]

Bases: insights.core.IniConfigFile

This class parses the content of /etc/swift/swift.conf.

/etc/swift/swift.conf is in the standard ‘ini’ format and is read by the insights.core.IniConfigFile parser class.

Sample configuration file:

[swift-hash]
# random unique strings that can never change (DO NOT LOSE)
# Use only printable chars (python -c "import string; print(string.printable)")
swift_hash_path_prefix = changeme
swift_hash_path_suffix = changeme

[storage-policy:0]
name = gold
policy_type = replication
default = yes

[storage-policy:1]
name = silver
policy_type = replication

[storage-policy:2]
name = ec42
policy_type = erasure_coding
ec_type = liberasurecode_rs_vand
ec_num_data_fragments = 4
ec_num_parity_fragments = 2

Examples

>>> 'swift-hash' in swift_conf.sections()
True
>>> swift_conf.has_option('storage-policy:2', 'policy_type') is True
True
>>> swift_conf.get('storage-policy:2', 'policy_type') == 'erasure_coding'
True
>>> swift_conf.get('storage-policy:2', 'ec_type') == 'liberasurecode_rs_vand'
True
class insights.parsers.swift_conf.SwiftObjectExpirerConf(context)[source]

Bases: insights.core.IniConfigFile

This class parses the content of /etc/swift/object-expirer.conf.

/etc/swift/object-expirer.conf is in the standard ‘ini’ format and is read by the insights.core.IniConfigFile parser class.

Sample configuration file:

[DEFAULT]

[object-expirer]
# auto_create_account_prefix = .
auto_create_account_prefix = .
process=0
concurrency=1
recon_cache_path=/var/cache/swift
interval=300
reclaim_age=604800
report_interval=300
processes=0
expiring_objects_account_name=expiring_objects

[pipeline:main]
pipeline = catch_errors cache proxy-server

[app:proxy-server]
use = egg:swift#proxy

[filter:cache]
use = egg:swift#memcache
memcache_servers = 172.16.64.60:11211

[filter:catch_errors]
use = egg:swift#catch_errors

Examples

>>> 'filter:cache' in object_expirer_conf
True
>>> object_expirer_conf.get('filter:cache', 'memcache_servers') == '172.16.64.60:11211'
True
>>> object_expirer_conf.getint('object-expirer', 'report_interval')
300
class insights.parsers.swift_conf.SwiftProxyServerConf(context)[source]

Bases: insights.core.IniConfigFile

This class parses the content of /etc/swift/proxy-server.conf.

The swift proxy-server configuration file /etc/swift/proxy-server.conf is in the standard ‘ini’ format and is read by the insights.core.IniConfigFile parser class.

Sample configuration file:

[DEFAULT]
bind_port = 8080
bind_ip = 172.20.15.20
workers = 0
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache ratelimit

[app:proxy-server]
use = egg:swift  # proxy
set log_name = proxy-server
set log_facility = LOG_LOCAL1

[filter:catch_errors]
use = egg:swift  # catch_errors

Examples

>>> 'app:proxy-server' in proxy_server_conf
True
>>> proxy_server_conf.get('filter:catch_errors', 'use') == 'egg:swift#catch_errors'
True
>>> proxy_server_conf.getint('DEFAULT', 'bind_port')
8080

SwiftLog - file /var/log/containers/swift/swift.log and /var/log/swift/swift.log

class insights.parsers.swift_log.SwiftLog(context)[source]

Bases: insights.core.Syslog

Class for parsing /var/log/containers/swift/swift.log and /var/log/swift/swift.log file.

Provide access to swift log using the insights.core.Syslog parser class.

Sample swift.log file content:

Sep 29 23:50:29 rh-server object-server: Starting object replication pass.
Sep 29 23:50:29 rh-server object-server: Nothing replicated for 0.01691198349 seconds.
Sep 29 23:50:29 rh-server object-server: Object replication complete. (0.00 minutes)
Sep 29 23:50:38 rh-server container-server: Beginning replication run
Sep 29 23:50:38 rh-server container-server: Replication run OVER
Sep 29 23:50:38 rh-server container-server: Attempted to replicate 0 dbs in 0.00064 seconds (0.00000/s)

Examples

>>> obj_server_lines = swift_log.get("object-server")
>>> len(obj_server_lines)
3
>>> obj_server_lines[0].get("procname")
'object-server'
>>> obj_server_lines[0].get("message")
'Starting object replication pass.'

/sys/bus Device Usage Information

A parser for the usage information of devices connected on /sys/bus.

Parsers included in this module are:

CdcWDM - file /sys/bus/usb/drivers/cdc_wdm/module/refcnt

class insights.parsers.sys_bus.CdcWDM(context)[source]

Bases: insights.core.Parser

The file /sys/bus/usb/drivers/cdc_wdm/module/refcnt contains the device usage count, i.e. if a device is in use, the file contains a non-zero value.

Sample Content:

0 - Not in use.
1 - Device is opened and it is in use.
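The in-use test is just a non-zero check on the single integer in the file (a minimal sketch with a hypothetical helper name, not the actual parser):

```python
def device_in_use(refcnt_content):
    # refcnt holds a single integer usage count; non-zero means in use
    return int(refcnt_content.strip()) > 0
```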

Examples:

>>> type(device_usage)
<class 'insights.parsers.sys_bus.CdcWDM'>
>>> device_usage.device_usage_cnt
1
>>> device_usage.device_in_use
True
property device_in_use

True when the device is in use, False otherwise.

Type

Returns (bool)

property device_usage_cnt

device usage count.

Type

Returns (int)

parse_content(content)[source]

This method must be implemented by classes based on this class.

Sysconfig - files in /etc/sysconfig/

This is a collection of parsers that all deal with the system’s configuration files under the /etc/sysconfig/ folder. Parsers included in this module are:

CorosyncSysconfig - file /etc/sysconfig/corosync

ChronydSysconfig - file /etc/sysconfig/chronyd

DirsrvSysconfig - file /etc/sysconfig/dirsrv

DockerStorageSetupSysconfig - file /etc/sysconfig/docker-storage-setup

DockerSysconfig - file /etc/sysconfig/docker

DockerSysconfigStorage - file /etc/sysconfig/docker-storage

ForemanTasksSysconfig - file /etc/sysconfig/foreman-tasks

HttpdSysconfig - file /etc/sysconfig/httpd

IrqbalanceSysconfig - file /etc/sysconfig/irqbalance

KdumpSysconfig - file /etc/sysconfig/kdump

LibvirtGuestsSysconfig - file /etc/sysconfig/libvirt-guests

MemcachedSysconfig - file /etc/sysconfig/memcached

MongodSysconfig - file /etc/sysconfig/mongod

NetconsoleSysconfig - file /etc/sysconfig/netconsole

NetworkSysconfig - file /etc/sysconfig/network

NtpdSysconfig - file /etc/sysconfig/ntpd

SshdSysconfig - file /etc/sysconfig/sshd

PuppetserverSysconfig - file /etc/sysconfig/puppetserver

Up2DateSysconfig - file /etc/sysconfig/rhn/up2date

VirtWhoSysconfig - file /etc/sysconfig/virt-who

IfCFGStaticRoute - files /etc/sysconfig/network-scripts/route-*
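All of these share the same KEY=value sysconfig format; the common parse can be sketched as follows (a simplified standalone illustration of the SysconfigOptions behavior, including the quote stripping shown in the examples below; not the actual implementation):

```python
def parse_sysconfig(content):
    # Keep simple KEY=value lines, skip blanks and comments, and strip
    # surrounding quotes from the value.
    data = {}
    for raw in content:
        line = raw.strip()
        if not line or line.startswith('#') or '=' not in line:
            continue
        key, _, value = line.partition('=')
        data[key.strip()] = value.strip().strip('"\'')
    return data
```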

class insights.parsers.sysconfig.ChronydSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

This parser analyzes the /etc/sysconfig/chronyd configuration file.

Sample Input:

OPTIONS="-d"
#HIDE="me"

Examples

>>> 'OPTIONS' in chronyd_syscfg
True
>>> 'HIDE' in chronyd_syscfg
False
>>> chronyd_syscfg['OPTIONS']
'-d'
class insights.parsers.sysconfig.CorosyncSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

This parser reads the /etc/sysconfig/corosync file. It uses the SysconfigOptions parser class to convert the file into a dictionary of options. It also provides the options property as a helper to retrieve the COROSYNC_OPTIONS variable.

Sample Input:

# COROSYNC_INIT_TIMEOUT specifies number of seconds to wait for corosync
# initialization (default is one minute).
COROSYNC_INIT_TIMEOUT=60
# COROSYNC_OPTIONS specifies options passed to corosync command
# (default is no options).
# See "man corosync" for detailed descriptions of the options.
COROSYNC_OPTIONS=""

Examples

>>> 'COROSYNC_OPTIONS' in cs_syscfg
True
>>> cs_syscfg.options
''
property options

The value of the COROSYNC_OPTIONS variable.

Type

(str)

class insights.parsers.sysconfig.DirsrvSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

This parser parses the dirsrv service’s start-up configuration /etc/sysconfig/dirsrv.

Sample Input:

#STARTPID_TIME=10 ; export STARTPID_TIME
#PID_TIME=600 ; export PID_TIME
KRB5CCNAME=/tmp/krb5cc_995
KRB5_KTNAME=/etc/dirsrv/ds.keytab

Examples

>>> dirsrv_syscfg.get('KRB5_KTNAME')
'/etc/dirsrv/ds.keytab'
>>> 'PID_TIME' in dirsrv_syscfg
False
class insights.parsers.sysconfig.DockerStorageSetupSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

Parser for parsing /etc/sysconfig/docker-storage-setup

Sample Input:

VG=vgtest
AUTO_EXTEND_POOL=yes
##name = mydomain
POOL_AUTOEXTEND_THRESHOLD=60
POOL_AUTOEXTEND_PERCENT=20

Examples

>>> dss_syscfg['VG'] # Pseudo-dict access
'vgtest'
>>> 'name' in dss_syscfg
False
>>> dss_syscfg.get('POOL_AUTOEXTEND_THRESHOLD')
'60'
class insights.parsers.sysconfig.DockerSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

Parser for parsing the /etc/sysconfig/docker file using the standard SysconfigOptions parser class. The ‘OPTIONS’ variable is also provided in the options property as a convenience.

Sample Input:

OPTIONS="--selinux-enabled"
DOCKER_CERT_PATH="/etc/docker"

Examples

>>> 'OPTIONS' in docker_syscfg
True
>>> docker_syscfg['OPTIONS']
'--selinux-enabled'
>>> docker_syscfg.options
'--selinux-enabled'
>>> docker_syscfg['DOCKER_CERT_PATH']
'/etc/docker'
property options

Return the value of the ‘OPTIONS’ variable, or ‘’ if not defined.

class insights.parsers.sysconfig.DockerSysconfigStorage(context)[source]

Bases: insights.core.SysconfigOptions

A Parser for /etc/sysconfig/docker-storage.

Sample input:

DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/dockervg-docker--pool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true"

Examples

>>> 'DOCKER_STORAGE_OPTIONS' in docker_syscfg_storage
True
>>> docker_syscfg_storage["DOCKER_STORAGE_OPTIONS"]
'--storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/dockervg-docker--pool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true'
>>> docker_syscfg_storage.storage_options
'--storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/dockervg-docker--pool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true'
property storage_options

Return the value of the ‘DOCKER_STORAGE_OPTIONS’ variable, or ‘’ if not defined.

class insights.parsers.sysconfig.ForemanTasksSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

Parse the /etc/sysconfig/foreman-tasks configuration file.

Sample configuration file:

FOREMAN_USER=foreman
BUNDLER_EXT_HOME=/usr/share/foreman
RAILS_ENV=production
FOREMAN_LOGGING=warn

Examples

>>> ft_syscfg['RAILS_ENV']
'production'
>>> 'AUTO' in ft_syscfg
False
class insights.parsers.sysconfig.HttpdSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

This parser analyzes the /etc/sysconfig/httpd configuration file.

Sample Input:

HTTPD=/usr/sbin/httpd.worker
#
# To pass additional options (for instance, -D definitions) to the
# httpd binary at startup, set OPTIONS here.
#
OPTIONS=

Examples

>>> httpd_syscfg['HTTPD']
'/usr/sbin/httpd.worker'
>>> httpd_syscfg.get('OPTIONS')
''
>>> 'NOOP' in httpd_syscfg
False
class insights.parsers.sysconfig.IfCFGStaticRoute(context)[source]

Bases: insights.core.SysconfigOptions

IfCFGStaticRoute is a parser for the static route network interface definition files in /etc/sysconfig/network-scripts. These are pulled into the network scripts using source, so they are mainly bash environment declarations of the form KEY=value. These are stored in the data property as a dictionary. Quotes surrounding the values are stripped.

Because this parser reads multiple files, the interfaces are stored as a list within the parser and need to be iterated through in order to find specific interfaces.

Sample configuration from a static connection in file /etc/sysconfig/network-scripts/route-test-net:

ADDRESS0=10.65.223.0
NETMASK0=255.255.254.0
GATEWAY0=10.65.223.1

Examples

>>> conn_info['ADDRESS0']
'10.65.223.0'
>>> conn_info.static_route_name
'test-net'
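The route name in the example appears to be derived from the file name (an assumption based on the route-* file pattern; this helper is hypothetical):

```python
import os

def static_route_name(path):
    # /etc/sysconfig/network-scripts/route-test-net -> 'test-net'
    name = os.path.basename(path)
    return name[len('route-'):] if name.startswith('route-') else name
```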
static_route_name

static route name

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.sysconfig.IrqbalanceSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

This parser analyzes the /etc/sysconfig/irqbalance configuration file.

Sample Input:

#IRQBALANCE_ONESHOT=yes
#
IRQBALANCE_BANNED_CPUS=f8
IRQBALANCE_ARGS="-d"

Examples

>>> irqb_syscfg['IRQBALANCE_BANNED_CPUS']
'f8'
>>> irqb_syscfg.get('IRQBALANCE_ARGS')  # quotes will be stripped
'-d'
>>> irqb_syscfg.get('IRQBALANCE_ONESHOT') is None
True
>>> 'ONESHOT' in irqb_syscfg
False
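The comment-skipping and quote-stripping behaviour shown in these examples is common to the SysconfigOptions-based parsers; a minimal sketch of that KEY=value parsing (illustrative only, not the actual implementation):

```python
def parse_sysconfig(lines):
    """Parse sysconfig-style KEY=value lines, skipping comment lines and
    stripping surrounding quotes from values."""
    data = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Remove surrounding single or double quotes, keep inner content
        data[key.strip()] = value.strip().strip("\"'")
    return data
```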
class insights.parsers.sysconfig.KdumpSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

This parser reads data from the /etc/sysconfig/kdump file.

This parser sets the following properties for ease of access:

  • KDUMP_COMMANDLINE

  • KDUMP_COMMANDLINE_REMOVE

  • KDUMP_COMMANDLINE_APPEND

  • KDUMP_KERNELVER

  • KDUMP_IMG

  • KDUMP_IMG_EXT

  • KEXEC_ARGS

These are set to the value of the named variable in the kdump sysconfig file, or ‘’ if not found.
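The property behaviour described above amounts to a lookup with an empty-string default; a sketch (assuming a data dict produced by the parser, not the actual implementation):

```python
KDUMP_KEYS = [
    "KDUMP_COMMANDLINE", "KDUMP_COMMANDLINE_REMOVE", "KDUMP_COMMANDLINE_APPEND",
    "KDUMP_KERNELVER", "KDUMP_IMG", "KDUMP_IMG_EXT", "KEXEC_ARGS",
]

def kdump_properties(data):
    # Each property is the named variable's value, or '' when it is absent.
    return {key: data.get(key, "") for key in KDUMP_KEYS}
```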

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.sysconfig.LibvirtGuestsSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

This parser analyzes the /etc/sysconfig/libvirt-guests configuration file.

Sample Input:

# URIs to check for running guests
# example: URIS='default xen:/// vbox+tcp://host/system lxc:///'
#URIS=default
ON_BOOT=ignore

Examples

>>> libvirt_guests_syscfg.get('ON_BOOT')
'ignore'
class insights.parsers.sysconfig.MemcachedSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

This parser analyzes the /etc/sysconfig/memcached configuration file.

Sample Input:

PORT="11211"
USER="memcached"
# max connection 2048
MAXCONN="2048"
# set ram size to 2048 - 2GiB
CACHESIZE="4096"
# disable UDP and listen to loopback ip 127.0.0.1, for network connection use real ip e.g., 10.0.0.5
OPTIONS="-U 0 -l 127.0.0.1"

Examples

>>> memcached_syscfg.get('OPTIONS')
'-U 0 -l 127.0.0.1'
class insights.parsers.sysconfig.MongodSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

A parser for analyzing the mongod service configuration file, like ‘/etc/sysconfig/mongod’ and ‘/etc/opt/rh/rh-mongodb26/sysconfig/mongod’.

Sample Input:

OPTIONS="--quiet -f /etc/mongod.conf"

Examples

>>> mongod_syscfg.get('OPTIONS')
'--quiet -f /etc/mongod.conf'
>>> mongod_syscfg.get('NO_SUCH_OPTION') is None
True
>>> 'NOSUCHOPTION' in mongod_syscfg
False
class insights.parsers.sysconfig.NetconsoleSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

Parse the /etc/sysconfig/netconsole file.

Sample Input:

# The local port number that the netconsole module will use
LOCALPORT=6666

Examples

>>> 'LOCALPORT' in netcs_syscfg
True
>>> 'DEV' in netcs_syscfg
False
class insights.parsers.sysconfig.NetworkSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

This parser parses the /etc/sysconfig/network configuration file

Sample Input:

NETWORKING=yes
HOSTNAME=rhel7-box
GATEWAY=172.31.0.1
NM_BOND_VLAN_ENABLED=no

Examples

>>> 'NETWORKING' in net_syscfg
True
>>> net_syscfg['GATEWAY']
'172.31.0.1'
class insights.parsers.sysconfig.NtpdSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

A parser for analyzing the /etc/sysconfig/ntpd configuration file.

Sample Input:

OPTIONS="-x -g"
#HIDE="me"

Examples

>>> 'OPTIONS' in ntpd_syscfg
True
>>> 'HIDE' in ntpd_syscfg
False
>>> ntpd_syscfg['OPTIONS']
'-x -g'
class insights.parsers.sysconfig.PrelinkSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

A parser for analyzing the /etc/sysconfig/prelink configuration file.

Sample Input:

# Set this to no to disable prelinking altogether
# (if you change this from yes to no prelink -ua
# will be run next night to undo prelinking)
PRELINKING=no

# Options to pass to prelink
# -m    Try to conserve virtual memory by allowing overlapping
#       assigned virtual memory slots for libraries which
#       never appear together in one binary
# -R    Randomize virtual memory slot assignments for libraries.
#       This makes it slightly harder for various buffer overflow
#       attacks, since library addresses will be different on each
#       host using -R.
PRELINK_OPTS=-mR

Examples

>>> prelink_syscfg.get('PRELINKING')
'no'
class insights.parsers.sysconfig.PuppetserverSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

Parse the /etc/sysconfig/puppetserver configuration file.

Sample configuration file:

USER="puppet"
GROUP="puppet"
INSTALL_DIR="/opt/puppetlabs/server/apps/puppetserver"
CONFIG="/etc/puppetlabs/puppetserver/conf.d"
START_TIMEOUT=300

Examples

>>> pps_syscfg['START_TIMEOUT']
'300'
>>> 'AUTO' in pps_syscfg
False
class insights.parsers.sysconfig.SshdSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

A parser for analyzing the /etc/sysconfig/sshd configuration file.

Sample Input:

# Configuration file for the sshd service.

# The server keys are automatically generated if they are missing.
# To change the automatic creation, adjust sshd.service options for
# example using  systemctl enable sshd-keygen@dsa.service  to allow creation
# of DSA key or  systemctl mask sshd-keygen@rsa.service  to disable RSA key
# creation.

# System-wide crypto policy:
# To opt-out, uncomment the following line
# CRYPTO_POLICY=
CRYPTO_POLICY=

Examples

>>> sshd_syscfg.get('CRYPTO_POLICY')
''
>>> 'NONEXISTENT_VAR' in sshd_syscfg
False
>>> 'CRYPTO_POLICY' in sshd_syscfg
True
class insights.parsers.sysconfig.Up2DateSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

Class to parse the /etc/sysconfig/rhn/up2date file.

Typical content example:

serverURL[comment]=Remote server URL
#serverURL=https://rhnproxy.glb.tech.markit.partners
serverURL=https://rhnproxy.glb.tech.markit.partners/XMLRPC

Examples

>>> 'serverURL' in u2d_syscfg
True
>>> u2d_syscfg['serverURL']
'https://rhnproxy.glb.tech.markit.partners/XMLRPC'
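The typical content above mixes real settings with KEY[comment]=... annotation lines; a rough sketch of how those can be filtered out (not the actual parse_content implementation):

```python
def parse_up2date(lines):
    """Keep plain KEY=value pairs; drop comments and '[comment]' annotations."""
    data = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        if "[comment]" in key:  # annotation lines carry no configuration value
            continue
        data[key.strip()] = value.strip()
    return data
```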
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.sysconfig.VirtWhoSysconfig(context)[source]

Bases: insights.core.SysconfigOptions

A parser for analyzing the /etc/sysconfig/virt-who configuration file.

Sample Input:

# Register ESX machines using vCenter
# VIRTWHO_ESX=0
# Register guests using RHEV-M
VIRTWHO_RHEVM=1

# Options for RHEV-M mode
VIRTWHO_RHEVM_OWNER=
TEST_OPT="A TEST"

Examples

>>> vwho_syscfg['VIRTWHO_RHEVM']
'1'
>>> vwho_syscfg.get('VIRTWHO_RHEVM_OWNER')
''
>>> vwho_syscfg.get('NO_SUCH_OPTION') is None
True
>>> 'NOSUCHOPTION' in vwho_syscfg
False
>>> vwho_syscfg.get('TEST_OPT')  # Quotes are stripped
'A TEST'

Kernel system control information

Shared parsers for parsing file /etc/sysctl.conf and command sysctl -a.

Parsers included in this module are:

Sysctl - command sysctl -a

SysctlConf - file /etc/sysctl.conf

SysctlConfInitramfs - command lsinitrd

class insights.parsers.sysctl.Sysctl(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

Parse the output of sysctl -a command.

Sample input:

kernel.domainname = example.com
kernel.modprobe = /sbin/modprobe

Examples

>>> type(sysctl)
<class 'insights.parsers.sysctl.Sysctl'>
>>> sysctl['kernel.domainname']
'example.com'
>>> sysctl.get('kernel.modprobe')
'/sbin/modprobe'
>>> 'kernel.modules_disabled' in sysctl
False
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.sysctl.SysctlConf(context)[source]

Bases: insights.core.Parser

Parse /etc/sysctl.conf file

Sample input:

# sysctl.conf sample
#
  kernel.domainname = example.com

; this one has a space which will be written to the sysctl!
  kernel.modprobe = /sbin/mod probe
data

Dictionary containing key/value pairs for the lines in the configuration file. The dictionary is in the order in which keywords first appear in the lines.

Type

OrderedDict

Examples

>>> type(sysctl_conf)
<class 'insights.parsers.sysctl.SysctlConf'>
>>> sysctl_conf.data['kernel.domainname']
'example.com'
>>> sysctl_conf.data['kernel.modprobe']
'/sbin/mod probe'
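The data attribute described above can be built with a small loop that skips '#' and ';' comment lines and preserves insertion order; a sketch, not the actual implementation (note that internal whitespace in a value, as in '/sbin/mod probe', is preserved):

```python
from collections import OrderedDict

def parse_sysctl_conf(lines):
    """Parse sysctl.conf lines into an OrderedDict; '#' and ';' start comments."""
    data = OrderedDict()
    for line in lines:
        line = line.strip()
        if not line or line.startswith(("#", ";")) or "=" not in line:
            continue
        key, _, value = line.partition("=")
        data[key.strip()] = value.strip()
    return data
```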
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.sysctl.SysctlConfInitramfs(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.LogFileOutput

Shared parser for the output of lsinitrd applied to kdump initramfs images to view sysctl.conf and sysctl.d configurations.

For now, the file is treated as raw lines (as a LogFileOutput parser), because the output of the command, applied across multiple files, does not seem to be unambiguously parsable.

Since the only plugins requiring this file to date simply “grep out” certain strings, this approach suffices.

Note

Please refer to its super-class insights.core.LogFileOutput

Sample input:

initramfs:/etc/sysctl.conf
========================================================================
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
fs.inotify.max_user_watches=524288
========================================================================

initramfs:/etc/sysctl.d/*.conf
========================================================================
========================================================================

Examples

>>> type(sysctl_initramfs)
<class 'insights.parsers.sysctl.SysctlConfInitramfs'>
>>> sysctl_initramfs.get('max_user_watches')
[{'raw_message': 'fs.inotify.max_user_watches=524288'}]
parse_content(content)[source]

This method must be implemented by classes based on this class.

System time configuration

ChronyConf - file /etc/chrony.conf

class insights.parsers.system_time.ChronyConf(context)[source]

Bases: insights.parsers.system_time.NTPConfParser

A parser for analyzing the chrony service config file /etc/chrony.conf

Uses the NTPConfParser class defined in this module.

LocalTime - command file -L /etc/localtime

class insights.parsers.system_time.LocalTime(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

A parser for working with the output of command: file -L /etc/localtime

Sample Input:

/etc/localtime: timezone data, version 2, 5 gmt time flags, 5 std time flags, no leap seconds, 69 transition times, 5 abbreviation chars

Examples

>>> localtime = shared[LocalTime]
>>> localtime.data['name']
'/etc/localtime'
>>> localtime.data['version']
'2'
>>> localtime.data['gmt_time_flag']
'5'
>>> localtime.data['leap_second']
'no'
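The comma-separated fields of the sample line can be mapped to the keys used in the example roughly as follows; the key-naming convention here is inferred from the example, so treat it as an approximation rather than the parser's actual logic:

```python
def parse_localtime(line):
    """Rough parse of `file -L /etc/localtime` output into named fields."""
    name, _, rest = line.partition(":")
    data = {"name": name.strip()}
    parts = [p.strip() for p in rest.split(",")]
    for part in parts[1:]:  # parts[0] is the 'timezone data' label
        words = part.split()
        if words[0] == "version":
            data["version"] = words[1]
        else:
            # e.g. '5 gmt time flags' -> key 'gmt_time_flag', value '5'
            key = "_".join(words[1:]).rstrip("s")
            data[key] = words[0]
    return data
```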
parse_content(content)[source]

This method must be implemented by classes based on this class.

NTPConfParser base class

class insights.parsers.system_time.NTPConfParser(context)[source]

Bases: insights.core.Parser

NTP and Chrony both use the same format for their configuration file - a series of keywords with optional values. Some keywords can appear more than once, so all keyword values are stored as a list of strings. Keywords that have no value, like ‘iburst’ or ‘rtcsync’, are left as keys but have None as a value.

Also provides the servers and peers properties as (sorted) lists of the found ‘server’ and ‘peer’ data (respectively).

Sample Input:
>>> ntp_conf_data = '''
... server 0.rhel.pool.ntp.org iburst
... server 1.rhel.pool.ntp.org iburst
... server 2.rhel.pool.ntp.org iburst
... server 3.rhel.pool.ntp.org iburst
... # Enable kernel RTC synchronization.
... rtcsync
... leapsecmode slew
... maxslewrate 1000
... smoothtime 400 0.001 leaponly
... tinker step 0.9
... '''

Examples

>>> ntp = shared[NTP_conf]
>>> 'rtcsync' in ntp.data # Single word options are present but None
True
>>> ntp.data['rtcsync'] # Not in dictionary if option not set
None
>>> len(ntp.data['server'])
4
>>> ntp.data['server'][0]
'0.rhel.pool.ntp.org iburst'
>>> ntp.servers[0] # same data as above
'0.rhel.pool.ntp.org iburst'
>>> ntp.data['maxslewrate']
'1000'
>>> ntp.get_last('rtcsync') # See above for fetching single-word options
None
>>> ntp.get_last('leapsecmode')
'slew'
>>> ntp.get_last('tinker', 'panic', 'none') # Use default value
'none'
>>> ntp.get_last('tinker', 'step', '1') # Use value set in file
'0.9'
>>> ntp.get_param('tinker', 'step') # Get list of all settings
['0.9']
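The keyword-list storage and the get_last lookup described above can be sketched roughly as follows (a simplified illustration, not the actual NTPConfParser implementation):

```python
def parse_ntp_conf(lines):
    """Store each keyword's parameter strings in a list; bare keywords map to None."""
    data = {}
    for line in lines:
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        keyword, _, rest = line.partition(" ")
        if rest.strip():
            if data.get(keyword) is None:
                data[keyword] = []
            data[keyword].append(rest.strip())
        else:
            data.setdefault(keyword, None)
    return data

def get_last(data, keyword, param=None, default=None):
    """Last value of `param` under `keyword`, or `default` when absent."""
    values = data.get(keyword)
    if not values:
        return default
    matches = [v for v in values if param is None or v.split()[0] == param]
    if not matches:
        return default
    return matches[-1] if param is None else matches[-1].split(None, 1)[1]
```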
get_last(keyword, param=None, default=None)[source]

Get the parameters for a given keyword, or default if keyword or parameter are not present in the configuration.

This finds the last declaration of the given parameter (which is the one which takes effect). If no parameter is given, then the entire line is treated as the parameter and returned.

Parameters
  • keyword (str) -- The keyword name, e.g. ‘tinker’ or ‘driftfile’

  • param (str) -- The parameter name, e.g. ‘panic’ or ‘step’. If not given, the last definition of that keyword is given.

Returns

The value of the given parameter, or None if not found.

Return type

str or None

get_param(keyword, param=None, default=None)[source]

Get all the parameters for a given keyword, or default if keyword or parameter are not present in the configuration.

This finds every declaration of the given parameter. If no parameter is given, then the entire line is treated as the parameter. If the keyword or parameter is not found, the default is returned.

Parameters
  • keyword (str) -- The keyword name, e.g. ‘tinker’ or ‘driftfile’

  • param (str) -- The parameter name, e.g. ‘panic’ or ‘step’. If not given, all the definitions of that keyword are given.

  • default (str) -- The default (singular) value if the keyword or parameter is not found. If not given, None is used.

Returns

All the values of the given parameter, or an empty list if not found.

Return type

list

parse_content(content)[source]

This method must be implemented by classes based on this class.

NTPConf - file /etc/ntp.conf

class insights.parsers.system_time.NTPConf(context)[source]

Bases: insights.parsers.system_time.NTPConfParser

A parser for analyzing the ntpd service config file /etc/ntp.conf

Uses the NTPConfParser class defined in this module.

NtpTime - command ntptime

class insights.parsers.system_time.NtpTime(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

A parser for working with the output of the ntptime command.

This doesn’t attempt to get much out of the output; useful things that it retrieves are:

  • ntp_gettime - the return code of the ntp_gettime() call.

  • ntp_adjtime - the return code of the ntp_adjtime() call.

  • status - the hexadecimal status code as a string.

  • flags - the flags in brackets after the status code.

Sample Input:

ntp_gettime() returns code 0 (OK)
  time dbbc595d.1adbd720  Thu, Oct 27 2016 18:45:49.104, (.104917550),
  maximum error 263240 us, estimated error 102 us, TAI offset 0
ntp_adjtime() returns code 0 (OK)
  modes 0x0 (),
  offset 0.000 us, frequency 4.201 ppm, interval 1 s,
  maximum error 263240 us, estimated error 102 us,
  status 0x2011 (PLL,INS,NANO),
  time constant 2, precision 0.001 us, tolerance 500 ppm,

Examples

>>> ntptime = shared[NtpTime]
>>> ntptime.data['status']
'0x2011'
>>> ntptime.data['ntp_gettime']
'0'
>>> ntptime.data['flags']
['PLL', 'INS', 'NANO']
>>> ntptime.data['interval']  # Other values are integers or floats
1
>>> ntptime.data['precision']  # Note floats may not be exact
0.001
>>> ntptime.data['maximum error']
263240
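The fields listed above can be pulled out of the raw output with a couple of regular expressions; a simplified sketch (a hypothetical helper, not the parser's actual implementation):

```python
import re

def parse_ntptime(output):
    """Extract return codes, the status word, and its flags from ntptime output."""
    data = {}
    for line in output.splitlines():
        m = re.match(r"(ntp_gettime|ntp_adjtime)\(\) returns code (\d+)", line)
        if m:
            data[m.group(1)] = m.group(2)
        m = re.search(r"status (0x[0-9a-fA-F]+) \(([^)]*)\)", line)
        if m:
            data["status"] = m.group(1)
            data["flags"] = m.group(2).split(",")
    return data
```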
parse_content(content)[source]

This method must be implemented by classes based on this class.

SystemctlShow - command systemctl show

Parsers included in this module are:

SystemctlShowServiceAll - command systemctl show *.service

SystemctlShowTarget - command systemctl show *.target

class insights.parsers.systemctl_show.SystemctlShow(*args, **kwargs)[source]

Bases: insights.core.CommandParser, dict

Warning

This class is deprecated, please use SystemctlShowServiceAll instead.

Class for parsing systemctl show <Service_Name> command output. Empty properties are suppressed.

Sample Input:

TimeoutStartUSec=1min 30s
LimitNOFILE=65536
LimitMEMLOCK=
LimitLOCKS=18446744073709551615

Sample Output:

{"LimitNOFILE"     : "65536",
"TimeoutStartUSec" : "1min 30s",
"LimitLOCKS"       : "18446744073709551615"}

In the command's output, empty properties are suppressed by default.

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.systemctl_show.SystemctlShowCinderVolume(*args, **kwargs)[source]

Bases: insights.parsers.systemctl_show.SystemctlShow

Warning

This parser is deprecated, please use SystemctlShowServiceAll instead.

Class for systemctl show openstack-cinder-volume.

Typical output of /bin/systemctl show openstack-cinder-volume command is:

Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
WatchdogUSec=0
LimitCORE=18446744073709551615
LimitRSS=18446744073709551615
LimitNOFILE=65536
LimitAS=18446744073709551615
LimitNPROC=63391
Transient=no
LimitNOFILE=4096
...

Examples

>>> systemctl_show_cinder_volume["LimitNOFILE"]
'4096'
class insights.parsers.systemctl_show.SystemctlShowHttpd(*args, **kwargs)[source]

Bases: insights.parsers.systemctl_show.SystemctlShow

Warning

This parser is deprecated, please use SystemctlShowServiceAll instead.

Class for systemctl show httpd.

Typical output of /bin/systemctl show httpd command is:

Type=simple
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
ExecMainStartTimestamp=Thu 2018-01-11 14:22:32 CST
ExecMainStartTimestampMonotonic=104261679
ExecMainExitTimestampMonotonic=0
ExecMainPID=2747
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/sbin/httpd ; argv[]=/usr/sbin/httpd $OPTIONS -DFOREGROUND ; ignore_errors=no ; start_time=[Tue 2018-05-15 09:30:08 CST] ; stop_time=[n/a] ; pid=1605 ; code=(null) ; status=0/0 }
ExecReload={ path=/usr/sbin/httpd ; argv[]=/usr/sbin/httpd $OPTIONS -k graceful ; ignore_errors=no ; start_time=[Wed 2018-05-16 03:07:01 CST] ; stop_time=[Wed 2018-05-16 03:07:01 CST] ; pid=21501 ; code=exited ; status=0 }
ExecStop={ path=/bin/kill ; argv[]=/bin/kill -WINCH ${MAINPID} ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/httpd.service
LimitNOFILE=4096
...

Examples

>>> systemctl_show_httpd["LimitNOFILE"]
'4096'
class insights.parsers.systemctl_show.SystemctlShowMariaDB(*args, **kwargs)[source]

Bases: insights.parsers.systemctl_show.SystemctlShow

Warning

This parser is deprecated, please use SystemctlShowServiceAll instead.

Class for systemctl show mariadb.

Typical output of /bin/systemctl show mariadb command is:

Type=simple
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStopUSec=5min
ExecStartPre={ path=/usr/libexec/mariadb-prepare-db-dir ; argv[]=/usr/libexec/mariadb-prepare-db-dir %n ; ignore_errors=no ; start_time=[Mon 2017-05-22 06:49:01 EDT] ; stop_time=[Mon 2017-05-22 06:49:02 EDT] ; pid=1946 ; code=exited ; status=0 }
ExecStart={ path=/usr/bin/mysqld_safe ; argv[]=/usr/bin/mysqld_safe --basedir=/usr ; ignore_errors=no ; start_time=[Mon 2017-05-22 06:49:02 EDT] ; stop_time=[n/a] ; pid=2304 ; code=(null) ; status=0/0 }
ExecStartPost={ path=/usr/libexec/mariadb-wait-ready ; argv[]=/usr/libexec/mariadb-wait-ready $MAINPID ; ignore_errors=no ; start_time=[Mon 2017-05-22 06:49:02 EDT] ; stop_time=[Mon 2017-05-22 06:49:12 EDT] ; pid=2305 ; code=exited ; status=0 }
Slice=system.slice
ControlGroup=/system.slice/mariadb.service
After=network.target -.mount systemd-journald.socket tmp.mount basic.target syslog.target system.slice
MemoryCurrent=18446744073709551615
LimitNOFILE=4096
...

Examples

>>> systemctl_show_mariadb["LimitNOFILE"]
'4096'
class insights.parsers.systemctl_show.SystemctlShowNginx(*args, **kwargs)[source]

Bases: insights.parsers.systemctl_show.SystemctlShow

Warning

This parser is deprecated, please use SystemctlShowServiceAll instead.

Class for systemctl show nginx.

Typical output of /bin/systemctl show nginx command is:

Type=forking
Restart=no
PIDFile=/run/nginx.pid
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=5s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestampMonotonic=0
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=0
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestampMonotonic=0
ExecMainExitTimestampMonotonic=0
ExecMainPID=0
ExecMainCode=0
ExecMainStatus=0
ExecStartPre={ path=/usr/bin/rm ; argv[]=/usr/bin/rm -f /run/nginx.pid ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStartPre={ path=/usr/sbin/nginx ; argv[]=/usr/sbin/nginx -t ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecStart={ path=/usr/sbin/nginx ; argv[]=/usr/sbin/nginx ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
...

Examples

>>> systemctl_show_nginx["LimitNOFILE"]
'4096'
class insights.parsers.systemctl_show.SystemctlShowPulpCelerybeat(*args, **kwargs)[source]

Bases: insights.parsers.systemctl_show.SystemctlShow

Warning

This parser is deprecated, please use SystemctlShowServiceAll instead.

Class for systemctl show pulp_celerybeat.

Typical output of /bin/systemctl show pulp_celerybeat command is:

Type=simple
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
ExecMainStartTimestamp=Thu 2018-01-11 14:22:32 CST
ExecMainStartTimestampMonotonic=104261679
ExecMainExitTimestampMonotonic=0
ExecMainPID=2747
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/bin/celery ; argv[]=/usr/bin/celery beat --scheduler=pulp.server.async.scheduler.Schedul
Slice=system.slice
After=basic.target network.target system.slice -.mount systemd-journald.socket
LimitNOFILE=4096
...

Examples

>>> systemctl_show_pulp_celerybeat["LimitNOFILE"]
'4096'
class insights.parsers.systemctl_show.SystemctlShowPulpResourceManager(*args, **kwargs)[source]

Bases: insights.parsers.systemctl_show.SystemctlShow

Warning

This parser is deprecated, please use SystemctlShowServiceAll instead.

Class for systemctl show pulp_resource_manager.

Typical output of /bin/systemctl show pulp_resource_manager command is:

Type=simple
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
ExecMainStartTimestamp=Thu 2018-01-11 14:22:33 CST
ExecMainStartTimestampMonotonic=105028117
ExecMainExitTimestampMonotonic=0
ExecMainPID=2810
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/bin/celery ; argv[]=/usr/bin/celery worker -A pulp.server.async.app -n resource_manager@
Slice=system.slice
After=basic.target network.target system.slice -.mount systemd-journald.socket
LimitNOFILE=4096
...

Examples

>>> systemctl_show_pulp_resource_manager["LimitNOFILE"]
'4096'
class insights.parsers.systemctl_show.SystemctlShowPulpWorkers(*args, **kwargs)[source]

Bases: insights.parsers.systemctl_show.SystemctlShow

Warning

This parser is deprecated, please use SystemctlShowServiceAll instead.

Class for systemctl show pulp_workers.

Typical output of /bin/systemctl show pulp_workers command is:

Type=oneshot
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=0
TimeoutStopUSec=1min 30s
WatchdogUSec=0
WatchdogTimestampMonotonic=0
ExecMainStartTimestamp=Thu 2018-01-11 14:22:33 CST
ExecMainStartTimestampMonotonic=105521850
ExecMainExitTimestamp=Thu 2018-01-11 14:22:33 CST
ExecMainExitTimestampMonotonic=105593405
ExecStart={ path=/usr/libexec/pulp-manage-workers ; argv[]=/usr/libexec/pulp-manage-workers start ; ignore_err
ExecStop={ path=/usr/libexec/pulp-manage-workers ; argv[]=/usr/libexec/pulp-manage-workers stop ; ignore_error
Slice=system.slice
After=systemd-journald.socket system.slice network.target basic.target
LimitNOFILE=4096
...

Examples

>>> systemctl_show_pulp_workers["LimitNOFILE"]
'4096'
class insights.parsers.systemctl_show.SystemctlShowQdrouterd(*args, **kwargs)[source]

Bases: insights.parsers.systemctl_show.SystemctlShow

Warning

This parser is deprecated, please use SystemctlShowServiceAll instead.

Class for systemctl show qdrouterd.

Typical output of /bin/systemctl show qdrouterd command is:

Type=simple
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
ExecMainStartTimestamp=Thu 2018-01-11 14:22:32 CST
ExecMainStartTimestampMonotonic=104261679
ExecMainExitTimestampMonotonic=0
ExecMainPID=2747
ExecMainCode=0
ExecMainStatus=0
Slice=system.slice
LimitNOFILE=4096
...

Examples

>>> systemctl_show_qdrouterd["LimitNOFILE"]
'4096'
class insights.parsers.systemctl_show.SystemctlShowQpidd(*args, **kwargs)[source]

Bases: insights.parsers.systemctl_show.SystemctlShow

Warning

This parser is deprecated, please use SystemctlShowServiceAll instead.

Class for systemctl show qpidd.

Typical output of /bin/systemctl show qpidd command is:

Type=simple
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
ExecMainStartTimestamp=Thu 2018-01-11 14:22:32 CST
ExecMainStartTimestampMonotonic=104261679
ExecMainExitTimestampMonotonic=0
ExecMainPID=2747
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/sbin/qpidd ; argv[]=/usr/sbin/qpidd --config /etc/qpid/qpi
Slice=system.slice
ControlGroup=/system.slice/qpidd.service
LimitNOFILE=4096
...

Examples

>>> systemctl_show_qpidd["LimitNOFILE"]
'4096'
class insights.parsers.systemctl_show.SystemctlShowServiceAll(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, dict

Class for parsing systemctl show *.service command output. Empty properties are suppressed.

Sample Input:

Id=postfix.service
Names=postfix.service
TimeoutStartUSec=1min 30s
LimitNOFILE=65536
LimitMEMLOCK=
LimitLOCKS=18446744073709551615

Id=postgresql.service
Names=postgresql.service
Requires=basic.target
LimitMSGQUEUE=819200
LimitNICE=0

Sample Output:

{
    "postfix.service": {
        "Id"               : "postfix.service",
        "Names"            : "postfix.service",
        "LimitNOFILE"      : "65536",
        "TimeoutStartUSec" : "1min 30s",
        "LimitLOCKS"       : "18446744073709551615",
    },
    "postgresql.service": {
        "Id"               : "postgresql.service",
        "Names"            : "postgresql.service",
        "Requires"         : "basic.target",
        "LimitMSGQUEUE"    : "819200",
        "LimitNICE"        : "0",
    }
}

Examples

>>> 'postfix' in systemctl_show_all  # ".service" is needed
False
>>> 'postfix.service' in systemctl_show_all
True
>>> systemctl_show_all['postfix.service']['Id']
'postfix.service'
>>> 'LimitMEMLOCK' in systemctl_show_all['postfix.service']
False
>>> systemctl_show_all['postfix.service']['LimitLOCKS']
'18446744073709551615'
>>> 'postgresql.service' in systemctl_show_all
True
>>> systemctl_show_all['postgresql.service']['LimitNICE']
'0'
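The transformation from the multi-unit input to the nested dictionary above can be sketched by splitting the output on blank lines and keying each block by its Id (illustrative only, not the actual parse_content implementation):

```python
def parse_systemctl_show_all(output):
    """Split `systemctl show *.service` output into per-unit dicts, keyed by Id.
    Properties with empty values are dropped."""
    units = {}
    current = {}
    for line in output.splitlines() + [""]:  # trailing "" flushes the last block
        line = line.strip()
        if not line:
            if current.get("Id"):
                units[current["Id"]] = current
            current = {}
            continue
        key, _, value = line.partition("=")
        if value:  # suppress empty properties such as 'LimitMEMLOCK='
            current[key] = value
    return units
```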
parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.systemctl_show.SystemctlShowSmartpdc(*args, **kwargs)[source]

Bases: insights.parsers.systemctl_show.SystemctlShow

Warning

This parser is deprecated, please use SystemctlShowServiceAll instead.

Class for systemctl show smart_proxy_dynflow_core.

Typical output of /bin/systemctl show smart_proxy_dynflow_core command is:

Type=simple
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
ExecMainStartTimestamp=Thu 2018-01-11 14:22:32 CST
ExecMainStartTimestampMonotonic=104261679
ExecMainExitTimestampMonotonic=0
ExecMainPID=2747
ExecMainCode=0
ExecMainStatus=0
Slice=system.slice
LimitNOFILE=4096
...

Examples

>>> systemctl_show_smartpdc["LimitNOFILE"]
'4096'
class insights.parsers.systemctl_show.SystemctlShowTarget(context, extra_bad_lines=[])[source]

Bases: insights.parsers.systemctl_show.SystemctlShowServiceAll

Class for parsing systemctl show *.target command output. Empty properties are suppressed.

This class is inherited from SystemctlShowServiceAll.

Sample Input:

Id=network.target
Names=network.target
WantedBy=NetworkManager.service
Conflicts=shutdown.target
Before=tuned.service network-online.target rhsmcertd.service kdump.service httpd.service rsyslog.service rc-local.service insights-client.timer insights-client.service sshd.service postfix.service
After=firewalld.service network-pre.target network.service NetworkManager.service
Documentation=man:systemd.special(7) http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget
Description=Network
LoadState=loaded
ActiveState=active
SubState=active
FragmentPath=/usr/lib/systemd/system/network.target
UnitFileState=static
UnitFilePreset=disabled
InactiveExitTimestamp=Tue 2020-02-25 10:39:46 GMT
InactiveExitTimestampMonotonic=15332468
ActiveEnterTimestamp=Tue 2020-02-25 10:39:46 GMT
ActiveEnterTimestampMonotonic=15332468
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=no

Sample Output:

{'network.target': {'ActiveEnterTimestamp': 'Tue 2020-02-25 10:39:46 GMT',
                    'ActiveEnterTimestampMonotonic': '15332468',
                    'ActiveExitTimestampMonotonic': '0',
                    'ActiveState': 'active',
                    'After': 'firewalld.service network-pre.target '
                             'network.service NetworkManager.service',
                    'Before': 'tuned.service network-online.target '
                              'rhsmcertd.service kdump.service httpd.service '
                              'rsyslog.service rc-local.service '
                              'insights-client.timer insights-client.service '
                              'sshd.service postfix.service',
                    'CanStart': 'no',
                    'Conflicts': 'shutdown.target',
                    'Description': 'Network',
                    'Documentation': 'man:systemd.special(7) '
                                     'http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget',
                    'FragmentPath': '/usr/lib/systemd/system/network.target',
                    'Id': 'network.target',
                    'InactiveEnterTimestampMonotonic': '0',
                    'InactiveExitTimestamp': 'Tue 2020-02-25 10:39:46 GMT',
                    'InactiveExitTimestampMonotonic': '15332468',
                    'LoadState': 'loaded',
                    'Names': 'network.target',
                    'SubState': 'active',
                    'UnitFilePreset': 'disabled',
                    'UnitFileState': 'static',
                    'WantedBy': 'NetworkManager.service'}}

Examples

>>> 'network.target' in systemctl_show_target
True
>>> systemctl_show_target.get('network.target').get('WantedBy', None)
'NetworkManager.service'
>>> systemctl_show_target.get('network.target').get('RequiredBy', None)

Command systool outputs - Commands

Command systool uses APIs provided by libsysfs to gather information.

Parser included in this module is:

SystoolSCSIBus - command /bin/systool -b scsi -v

class insights.parsers.systool.SystoolSCSIBus(context, extra_bad_lines=[])[source]

Bases: insights.core.LegacyItemAccess, insights.core.CommandParser

Class for parsing /bin/systool -b scsi -v command output

Typical command output:

Bus = "scsi"

  Device = "1:0:0:0"
  Device path = "/sys/devices/pci0000:00/0000:00:01.1/ata2/host1/target1:0:0/1:0:0:0"
    delete              = <store method only>
    device_blocked      = "0"
    device_busy         = "0"
    dh_state            = "detached"
    eh_timeout          = "10"
    evt_capacity_change_reported= "0"
    evt_inquiry_change_reported= "0"
    evt_lun_change_reported= "0"
    evt_media_change    = "0"
    evt_mode_parameter_change_reported= "0"
    evt_soft_threshold_reached= "0"
    iocounterbits       = "32"
    iodone_cnt          = "0x15b"
    ioerr_cnt           = "0x3"
    iorequest_cnt       = "0x16c"
    modalias            = "scsi:t-0x05"
    model               = "CD-ROM          "
    queue_depth         = "1"
    queue_type          = "none"
    rescan              = <store method only>
    rev                 = "1.0 "
    scsi_level          = "6"
    state               = "running"
    timeout             = "30"
    type                = "5"
    uevent              = "DEVTYPE=scsi_device
DRIVER=sr
MODALIAS=scsi:t-0x05"
    unpriv_sgio         = "0"
    vendor              = "VBOX    "

  Device = "2:0:0:0"
  ...

Examples

>>> len(res.data)
2
>>> res.data.keys()
['1:0:0:0', '2:0:0:0']
>>> res.device_names
['1:0:0:0', '2:0:0:0']
>>> res.data['1:0:0:0'] == res.devices[0]
True
>>> res.get_device_state('1:0:0:0')
'running'
>>> res.get_device_state('2:0:0:0') is None
True
get_device_state(device_name)[source]

Return the state value of the given device_name, or None if the device_name does not exist or has no state value.

parse_content(content)[source]

This method must be implemented by classes based on this class.
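The attribute blocks above follow a regular shape: a `Device = "…"` header, then indented `name = "value"` lines. A minimal sketch of that grouping logic (`parse_systool` is a hypothetical helper, not the parser's actual code; multi-line values such as `uevent` are ignored for brevity):

```python
import re

def parse_systool(output):
    """Group `name = "value"` attribute lines under their Device header."""
    devices = {}
    current = None
    for line in output.splitlines():
        dev = re.match(r'\s*Device = "(.+)"\s*$', line)
        if dev:
            # Start collecting attributes for a new device block.
            current = devices.setdefault(dev.group(1), {})
            continue
        attr = re.match(r'\s*(\S+)\s+= "(.*)"\s*$', line)
        if attr and current is not None:
            current[attr.group(1)] = attr.group(2)
    return devices

sample = '\n'.join([
    'Bus = "scsi"',
    '',
    '  Device = "1:0:0:0"',
    '    state               = "running"',
    '    timeout             = "30"',
])
devices = parse_systool(sample)
```

Attributes whose names contain spaces (e.g. `Device path`) and store-method-only entries fall through the second regex, which is acceptable for a sketch.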

TeamdctlConfigDump - command teamdctl {team interface} config dump

This module provides processing for the output of the command teamdctl {team interface} config dump.

insights.parsers.teamdctl_config_dump.data

Dictionary of the parsed configuration keys and their values.

Type

dict

Sample configuration file:

{
    "device": "team0",
    "hwaddr": "DE:5D:21:A8:98:4A",
    "link_watch": [
        {
            "delay_up": 5,
            "name": "ethtool"
        },
        {
            "name": "nsna_ping",
            "target_host ": "target.host"
        }
    ],
    "mcast_rejoin": {
        "count": 1
    },
    "notify_peers": {
        "count": 1
    },
    "runner": {
        "hwaddr_policy": "only_active",
        "name": "activebackup"
    }
}

Examples

>>> str(teamdctl_config_dump.device_name)
'team0'
>>> str(teamdctl_config_dump.runner_name)
'activebackup'
>>> str(teamdctl_config_dump.runner_hwaddr_policy)
'only_active'
class insights.parsers.teamdctl_config_dump.TeamdctlConfigDump(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.JSONParser

Class to parse the output of teamdctl {team interface} config dump

property device_name

Return the teaming device name

Type

str

property runner_hwaddr_policy

Return the teaming runner hwaddr policy

Type

str

property runner_name

Return the teaming runner name

Type

str
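Since the config dump is plain JSON, the three properties above amount to dictionary lookups. A stdlib-only sketch of the equivalent access, using the key names from the sample configuration:

```python
import json

# Trimmed-down version of the sample `teamdctl ... config dump` output.
config = json.loads("""
{
    "device": "team0",
    "runner": {
        "hwaddr_policy": "only_active",
        "name": "activebackup"
    }
}
""")

device_name = config.get("device")
runner_name = config.get("runner", {}).get("name")
hwaddr_policy = config.get("runner", {}).get("hwaddr_policy")
```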

TeamdctlStateDump - command teamdctl {team interface} state dump

This module provides processing for the output of the command teamdctl {team interface} state dump, which is in JSON format.

Examples

>>> teamdctl_state_dump_content = '''
... {
...     "runner": {
...         "active_port": "eno1"
...     },
...     "setup": {
...         "daemonized": false,
...         "dbus_enabled": true,
...         "debug_level": 0,
...         "kernel_team_mode_name": "activebackup",
...         "pid": 4464,
...         "pid_file": "/var/run/teamd/team0.pid",
...         "runner_name": "activebackup",
...         "zmq_enabled": false
...     },
...     "team_device": {
...         "ifinfo": {
...             "dev_addr": "2c:59:e5:47:a9:04",
...             "dev_addr_len": 6,
...             "ifindex": 5,
...             "ifname": "team0"
...         }
...     }
... }
... '''.strip()
>>> from insights.parsers.teamdctl_state_dump import TeamdctlStateDump
>>> from insights.tests import context_wrap
>>> shared = {TeamdctlStateDump: TeamdctlStateDump(context_wrap(teamdctl_state_dump_content))}
>>> result = shared[TeamdctlStateDump]
>>> result['runner']['active_port']
'eno1'
>>> result.runner_type
'activebackup'
>>> result.team_ifname
'team0'
class insights.parsers.teamdctl_state_dump.TeamdctlStateDump(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, insights.core.JSONParser

Class to parse the output of teamdctl {team interface} state dump.

property runner_type

Return the type of the teaming runner

Type

str

property team_ifname

Return the teaming device name

Type

str

Tmpfiles.d configuration - files in /etc/tmpfiles.d

The TmpfilesD class parser provides a ‘rules’ list, as well as a find_file method to find rules for a particular tmp file.

class insights.parsers.tmpfilesd.TmpFilesD(context)[source]

Bases: insights.core.Parser

Parse files in /etc/tmpfiles.d, /usr/lib/tmpfiles.d/, and /run/tmpfiles.d.

This parser reads the files and records the filename in each line.

files

A list of the files that tmpfiles.d is managing.

Type

list

rules

A list of dictionaries with the values of each rule.

Type

list

Sample input:

# /usr/lib/tmpfiles.d/dnf.conf
r! /var/cache/dnf/*/*/download_lock.pid
e  /var/cache/dnf/ - - - 30d

Examples

>>> tmpfiles = shared[TmpFilesd][0] # List is per filename
>>> len(tmpfiles.rules)
2
>>> tmpfiles.files
['/var/cache/dnf/*/*/download_lock.pid', '/var/cache/dnf/']
>>> tmpfiles.rules[1]
{'path': '/var/cache/dnf/', 'type': 'e', 'mode': '-', 'age': '30d',
 'gid': '-', 'uid': '-', 'argument': None}
>>> tmpfiles.find_file('download_lock.pid')
[{'path': '/var/cache/dnf/*/*/download_lock.pid', 'type': 'r!',
  'mode': None, 'age': None, 'gid': None, 'uid': None, 'argument': None}]
find_file(filename)[source]

Find any rules containing the file being searched.

This method returns a list of dictionaries for the rules in which the managed file is found.

parse_content(content)[source]

This method must be implemented by classes based on this class.
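Each tmpfiles.d(5) rule line carries up to seven whitespace-separated fields, in the order type, path, mode, uid, gid, age, argument. A sketch of splitting one line into the dictionary shape shown in the example above (`parse_tmpfiles_rule` is a hypothetical helper, not the parser's actual code):

```python
def parse_tmpfiles_rule(line):
    """Split a tmpfiles.d(5) rule into its fields; missing ones become None."""
    keys = ('type', 'path', 'mode', 'uid', 'gid', 'age', 'argument')
    rule = dict(zip(keys, line.split(None, 6)))
    for key in keys:
        # Trailing fields absent from the line default to None.
        rule.setdefault(key, None)
    return rule

rule = parse_tmpfiles_rule('e  /var/cache/dnf/ - - - 30d')
```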

Parsers for usage of VirtualDirContext option in Tomcat config files

This module provides the following parsers:

TomcatVirtualDirContextFallback

This is a parser for a command for finding config files in default location:

/usr/bin/find /usr/share -maxdepth 1 -name 'tomcat*' -exec grep -R -s 'VirtualDirContext' --include '*.xml' '{}' +

It is especially useful if the Tomcat server is not running.

TomcatVirtualDirContextTargeted

This is a parser for a command for finding config files in the custom location defined in a command line:

/bin/grep -R -s 'VirtualDirContext' --include '*.xml' {catalina}

where the catalina variable is computed as follows:

/bin/ps auxww | awk '/java/ { match($0, "\-Dcatalina\.home=([^[:space:]]+)", a); match($0, "\-Dcatalina\.base=([^[:space:]]+)", b); if (a[1] != "" || b[1] != "") print a[1] " " b[1] }'

Both parsers detect whether there are any config files which contain VirtualDirContext.

Sample input:

/usr/share/tomcat/conf/server.xml:    <Resources className="org.apache.naming.resources.VirtualDirContext"

Examples:

>>> shared[TomcatVirtualDirContextFallback].data
{'/usr/share/tomcat/conf/server.xml':
 ['    <Resources className="org.apache.naming.resources.VirtualDirContext"'],
 }
class insights.parsers.tomcat_virtual_dir_context.TomcatVirtualDirContextBase(*args, **kwargs)[source]

Bases: insights.core.CommandParser

Generic parser which detects whether the VirtualDirContext option is used in a Tomcat configuration file.

parse_content(content)[source]

This method must be implemented by classes based on this class.

class insights.parsers.tomcat_virtual_dir_context.TomcatVirtualDirContextFallback(*args, **kwargs)[source]

Bases: insights.parsers.tomcat_virtual_dir_context.TomcatVirtualDirContextBase

Reports whether the VirtualDirContext option is used in a Tomcat configuration file. Looks for the configuration files in the default location.

class insights.parsers.tomcat_virtual_dir_context.TomcatVirtualDirContextTargeted(*args, **kwargs)[source]

Bases: insights.parsers.tomcat_virtual_dir_context.TomcatVirtualDirContextBase

Reports whether the VirtualDirContext option is used in a Tomcat configuration file. Looks for the configuration files in the location derived from the running Tomcat command.

tomcat_xml - XML files for Tomcat

Classes to parse Tomcat XML configuration files:

TomcatWebXml - files from /etc/tomcat*/web.xml and /conf/tomcat/tomcat*/web.xml

TomcatServerXml - files from (tomcat base directory)/conf/server.xml or conf/tomcat/tomcat*/server.xml

Note

The Tomcat XML files are found in the directory specified in the Java command line.

class insights.parsers.tomcat_xml.TomcatServerXml(context)[source]

Bases: insights.core.XMLParser

Parse the server.xml of Tomcat.

Sample input:

<?xml version='1.0' encoding='utf-8'?>
<Server port="8005" shutdown="SHUTDOWN">

  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JasperListener" />
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
<Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
  <GlobalNamingResources>
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>
  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
               maxThreads="150" scheme="https" secure="true"
               clientAuth="want"
               sslProtocols="TLSv1.2,TLSv1.1,TLSv1"
               keystoreFile="conf/keystore"
               truststoreFile="conf/keystore"
               keystorePass="oXQ8LfAGsf97KQxwwPta2X3vnUv7P5QM"
               keystoreType="PKCS12"
               ciphers="SSL_RSA_WITH_3DES_EDE_CBC_SHA,
                    TLS_RSA_WITH_AES_256_CBC_SHA,
                    TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA,
                    TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,
                    TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA,
                    TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
                    TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
                    TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA"
               truststorePass="oXQ8LfAGsf97KQxwwPta2X3vnUv7P5QM" />

    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
    <Engine name="Catalina" defaultHost="localhost">
      <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
             resourceName="UserDatabase"/>
      <Host name="localhost"  appBase="webapps"
            unpackWARs="true" autoDeploy="true"
            xmlValidation="false" xmlNamespaceAware="false">
      </Host>
    </Engine>
  </Service>
</Server>

Examples

>>> type(server_xml)
<class 'insights.parsers.tomcat_xml.TomcatServerXml'>
>>> server_xml.file_path
'/usr/share/tomcat/server.xml'
>>> hosts = server_xml.get_elements(".//Service/Engine/Host")
>>> len(hosts)
1
>>> hosts[0].get('name')
'localhost'
class insights.parsers.tomcat_xml.TomcatWebXml(context)[source]

Bases: insights.core.XMLParser

Parse the web.xml of Tomcat.

Currently it only stores the setting of session-timeout.

data

Special settings, e.g. session-timeout, read from the XML file.

Type

dict

Sample input:

<?xml version="1.0" encoding="ISO-8859-1"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
    version="2.5">

    <servlet>
        <servlet-name>default</servlet-name>
        <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
        <init-param>
            <param-name>debug</param-name>
            <param-value>0</param-value>
        </init-param>
        <init-param>
            <param-name>listings</param-name>
            <param-value>false</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>

    <welcome-file-list>
        <welcome-file>index.html</welcome-file>
        <welcome-file>index.htm</welcome-file>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>

</web-app>

Examples

>>> type(web_xml)
<class 'insights.parsers.tomcat_xml.TomcatWebXml'>
>>> web_xml.get('session-timeout') == 30
True
parse_dom()[source]

Get the setting of ‘session-timeout’ from the parsed Elements in data and return.

Returns

Currently only ‘session-timeout’ is added to the dictionary; an empty dict is returned when the ‘session-timeout’ setting cannot be found.

Return type

(dict)
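parse_dom boils down to locating the session-timeout element in the parsed DOM. A stdlib sketch of that lookup; note that web.xml declares a default namespace, so every tag is namespace-qualified and the `{*}` wildcard form (available since Python 3.8) is used to match it:

```python
import xml.etree.ElementTree as ET

# Trimmed-down version of the sample web.xml above.
web_xml = """<?xml version="1.0" encoding="ISO-8859-1"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>
</web-app>"""

root = ET.fromstring(web_xml)
# The default namespace prefixes every tag, so match any namespace.
node = root.find(".//{*}session-timeout")
session_timeout = int(node.text) if node is not None else None
```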

transparent_hugepage sysfs settings

Module for parsing the sysfs settings for transparent_hugepage:

ThpUseZeroPage - file /sys/kernel/mm/transparent_hugepage/use_zero_page

Gets the contents of /sys/kernel/mm/transparent_hugepage/use_zero_page, which is either 0 or 1.

Sample input:

0

Examples

>>> shared[ThpUseZeroPage].use_zero_page
'0'

ThpEnabled - file /sys/kernel/mm/transparent_hugepage/enabled

Gets the contents of /sys/kernel/mm/transparent_hugepage/enabled, which is something like always [madvise] never where the active value is in brackets.

If no option is active (that should never happen), active_option will contain None.

Sample input:

always [madvise] never

Examples

>>> shared[ThpEnabled].line
'always [madvise] never'
>>> shared[ThpEnabled].active_option
'madvise'
class insights.parsers.transparent_hugepage.ThpEnabled(context)[source]

Bases: insights.core.Parser

Gets the contents of /sys/kernel/mm/transparent_hugepage/enabled, which is something like always [madvise] never where the active value is in brackets. If no option is active (that should never happen), active_option will contain None.

line

Contents of the input file.

Type

str

active_option

The active option for transparent huge pages, or None if not present.

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.
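Extracting the active option from a line like `always [madvise] never` is a one-regex job; a minimal sketch (hypothetical helper name, not the parser's actual code):

```python
import re

def active_thp_option(line):
    """Return the option in square brackets, or None if nothing is active."""
    match = re.search(r'\[(\w+)\]', line)
    return match.group(1) if match else None
```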

class insights.parsers.transparent_hugepage.ThpUseZeroPage(context)[source]

Bases: insights.core.Parser

Gets the contents of /sys/kernel/mm/transparent_hugepage/use_zero_page, which is either 0 or 1.

use_zero_page

The setting, should be 0 or 1.

Type

str

parse_content(content)[source]

This method must be implemented by classes based on this class.

Tuned - command /usr/sbin/tuned-adm list

This parser reads the output of the /usr/sbin/tuned-adm list command into a simple dictionary in the data property, with two of the following three keys:

  • available - the list of available profiles

  • active - the active profile name

  • preset - the profile name that’s preset to be used when tuned is active

The active key is only present when tuned is running, because the active profile is only listed while the daemon is active. If tuned is not running, the tuned-adm command lists the profile that will be used once the daemon starts, and this is given in the preset key.

Sample data:

Available profiles:
- balanced
- desktop
- latency-performance
- network-latency
- network-throughput
- powersave
- throughput-performance
- virtual-guest
- virtual-host
Current active profile: virtual-guest

Examples

>>> result = shared[Tuned]
>>> 'active' in result.data
True
>>> result.data['active']
'virtual-guest'
>>> len(result.data['available'])
9
>>> 'balanced' in result.data['available']
True
class insights.parsers.tuned.Tuned(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse data from the /usr/sbin/tuned-adm list command.

parse_content(content)[source]

This method must be implemented by classes based on this class.
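The shape of that parse can be sketched as follows. This is a simplified stand-in, not the parser's actual code, and the `Preset profile:` prefix for the daemon-off case is an assumption based on the description above:

```python
def parse_tuned_adm(output):
    """Collect available/active/preset keys from `tuned-adm list` output."""
    data = {'available': []}
    for line in output.splitlines():
        line = line.strip()
        if line.startswith('- '):
            data['available'].append(line[2:])
        elif line.startswith('Current active profile:'):
            data['active'] = line.split(':', 1)[1].strip()
        elif line.startswith('Preset profile:'):  # assumed daemon-off prefix
            data['preset'] = line.split(':', 1)[1].strip()
    return data

sample = """Available profiles:
- balanced
- virtual-guest
Current active profile: virtual-guest"""
result = parse_tuned_adm(sample)
```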

TunedConfIni - file /etc/tuned.conf

class insights.parsers.tuned_conf.TunedConfIni(context)[source]

Bases: insights.core.IniConfigFile

This class parses the /etc/tuned.conf file using the IniConfigFile base parser.

Sample configuration file:

#
# Net tuning section
#
[NetTuning]
# Enabled or disable the plugin. Default is True. Any other value
# disables it.
enabled=False

#
# CPU monitoring section
#
[CPUMonitor]
# Enabled or disable the plugin. Default is True. Any other value
# disables it.
# enabled=False

Examples

>>> 'NetTuning' in tuned_obj.sections()
True
>>> tuned_obj.get('NetTuning', 'enabled') == "False"
True
>>> tuned_obj.getboolean('NetTuning', 'enabled') == False
True
>>> sorted(tuned_obj.sections())==sorted(['CPUMonitor', 'NetTuning'])
True
parse_content(content, allow_no_value=True)[source]

Parses content of the config file.

In a child class, overload and call super to set the allow_no_value flag and allow keys with no value in the config file:

def parse_content(self, content):
    super(YourClass, self).parse_content(content,
                                         allow_no_value=True)

UdevRules - file /usr/lib/udev/rules.d/

The parsers included in this module are:

UdevRulesFCWWPN - file /usr/lib/udev/rules.d/59-fc-wwpn-id.rules

class insights.parsers.udev_rules.UdevRulesFCWWPN(context)[source]

Bases: insights.core.LogFileOutput

Read the content of /usr/lib/udev/rules.d/59-fc-wwpn-id.rules file.

Note

The syntax of the .rules file is complex, and currently no rule requires a fully serialized parse of it. The only existing rule checks the syntax of specific lines, so insights.core.LogFileOutput is used as the base class.

Examples

>>> type(udev_rules)
<class 'insights.parsers.udev_rules.UdevRulesFCWWPN'>
>>> 'ENV{FC_TARGET_WWPN}!="$*"; GOTO="fc_wwpn_end"' in udev_rules.lines
True

Uname - command uname -a

The Uname class reads the output of the uname -a command and interprets it. It also does a number of handy extra things, like deriving the RHEL release from the kernel version.

Uname objects can also be compared by their kernel versions.

An example from the following uname -a output:

Linux server1.example.com 2.6.32-504.el6.x86_64 #1 SMP Tue Sep 16 01:56:35 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux

Example

>>> type(uname)
<class 'insights.parsers.uname.Uname'>
>>> uname.version
'2.6.32'
>>> uname.release
'504.el6'
>>> uname.arch
'x86_64'
>>> uname.nodename
'server1.example.com'

Uname objects can be created from, and compared to, other Uname objects or kernel strings:

>>> early_rhel6 = Uname.from_kernel('2.6.32-71')
>>> late_rhel6 = Uname.from_release('6.7')
>>> late_rhel6 > early_rhel6
True
>>> early_rhel6 > '2.6.32-279.el6.x86_64'
False
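The kernel-version comparison above can be approximated by reducing an NVR to a numeric tuple. This is a deliberately crude sketch, not Uname's real logic: non-numeric segments such as `el6` and the architecture are simply ignored:

```python
def kernel_key(nvr):
    """Numeric sort key for strings like '2.6.32-504.el6.x86_64'."""
    version, _, release = nvr.partition('-')
    # Keep only numeric dotted segments; 'el6', 'x86_64' etc. are dropped.
    return tuple(int(part)
                 for part in version.split('.') + release.split('.')
                 if part.isdigit())
```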
class insights.parsers.uname.RedhatRelease(major, minor)

Bases: tuple

property major
property minor
class insights.parsers.uname.Uname(context)[source]

Bases: insights.core.CommandParser

A utility class to parse uname content data and compare version and release information.

The input is a uname content string. The content is parsed into specific uname elements that are made available through instance variables. Operators are provided for comparison of version and release information. Uname content is expected to be in the format returned by the uname -a command. The following instance variables are provided by this class:

  • kernel: Provides an unparsed copy of the full version and release string provided in the uname content input. No validation is performed on this information. Generally in the format #.#.#-#.#.#.el#.arch.

  • name: The kernel name, usually Linux.

  • nodename: Hostname of the computer where the uname command was executed. This information may be obfuscated for security.

  • version: The major identification number for the kernel release. It should be in the format #.#.#[.#] or a UnameError exception will be raised.

  • release: The minor identification number for the kernel release. This information is generally in the format #.#.#.el#, however this is not strictly enforced. If the release.arch information cannot be reliably parsed then release and release_arch will be the same value.

  • release_arch: This is the release plus the kernel architecture information as provided in arch.

  • arch: This contains the kernel architecture information like x86_64 or s390. A list of known architectures is provided by the global variable KNOWN_ARCHITECTURES. This information is not always present in the uname content.

  • ver_rel: This is a combination of version and release in the format version-release.

  • rhel_release: A list of two elements, the major and minor RHEL product release numbers.

fixed_by(*fixes, **kwargs)[source]

Determine whether the Uname object is fixed by a range of releases or by a specific release.

Parameters
  • fixes: List of one or more Uname objects to compare to the current object. fixes is a list of one or more Uname objects and each will be compared with the current object to determine a match.

  • kwargs: List of key word argument/Uname object pairs. Currently only introduced_in is supported as a keyword. When used the current Uname object is checked to see if it is prior to the introduced_in release. It will be further checked against fixes only if it is the same as or newer than the introduced_in release.

classmethod from_kernel(kernel)[source]

Create a Uname object from a kernel NVR (e.g. ‘2.6.32-504.el6.x86_64’).

Parameters
  • kernel - the kernel version and release string.

classmethod from_release(release)[source]

Attempt to create a Uname object from a release (e.g. ‘7.2’).

This translates from the release to the kernel version for that release, and then uses that to generate a Uname object using the class from_kernel method. If the release does not match a known release, it returns None.

Parameters
  • release: RHEL release version.

classmethod from_uname_str(uname_str)[source]

Create a Uname object from a string containing the output of ‘uname -a’.

Parameters
  • uname_str - the string output of uname -a

parse_content(content)[source]

Parses uname content into individual uname components.

Parameters
  • content: Uname content from Insights to be parsed.

Exceptions
  • UnameError: Raised when there are any errors evaluating the uname content.

classmethod parse_nvr(nvr, data=None, arch=True)[source]

Called by parse_uname_line to separate the version, release and arch information.

Parameters
  • nvr: Uname content to parse.

  • arch: Flag to indicate whether there is architecture information in the release.

Exceptions
  • UnameError: Raised on errors in evaluating the uname content.

exception insights.parsers.uname.UnameError(msg, uname_line)[source]

Bases: Exception

Exception subclass for errors related to uname content data and the Uname class.

This exception should not be caught by rules plugins unless it is necessary for the plugin to return a particular answer when a problem occurs with uname data. If a plugin catches this exception it must reraise it so that the engine has the opportunity to handle it/log it as necessary.

insights.parsers.uname.pad_release(release_to_pad, num_sections=4)[source]

Pad out package and kernel release versions so that LooseVersion comparisons will be correct.

Release versions with less than num_sections will be padded in front of the last section with zeros.

For example

pad_release("390.el6", 4)

will return 390.0.0.el6 and

pad_release("390.11.el6", 4)

will return 390.11.0.el6.

If the number of sections of the release to be padded is greater than num_sections, a ValueError will be raised.
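The documented behaviour can be sketched as follows (a sketch of the contract described above, not the library's exact implementation):

```python
def pad_release(release_to_pad, num_sections=4):
    """Insert zero sections before the last section until num_sections."""
    parts = release_to_pad.split('.')
    if len(parts) > num_sections:
        raise ValueError(
            "%s has more than %d sections" % (release_to_pad, num_sections))
    padding = ['0'] * (num_sections - len(parts))
    # Zeros go in front of the last section, e.g. '390.el6' -> '390.0.0.el6'.
    return '.'.join(parts[:-1] + padding + parts[-1:])
```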

Units Managed By Systemctl (services)

Parsers included in this module are:

ListUnits - command /bin/systemctl list-units

UnitFiles - command /bin/systemctl list-unit-files

class insights.parsers.systemd.unitfiles.ListUnits(*args, **kwargs)[source]

Bases: insights.core.Parser

The ListUnits class parses the output of /bin/systemctl list-units and provides information about all the services listed under it.

Output of Command:

UNIT                                LOAD   ACTIVE SUB       DESCRIPTION
sockets.target                      loaded active active    Sockets
swap.target                         loaded active active    Swap
systemd-shutdownd.socket            loaded active listening Delayed Shutdown Socket
neutron-dhcp-agent.service          loaded active running   OpenStack Neutron DHCP Agent
neutron-openvswitch-agent.service   loaded active running   OpenStack Neutron Open vSwitch Agent
...
unbound-anchor.timer                loaded active waiting   daily update of the root trust anchor for DNSSEC

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

161 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

Example

>>> units.get_service_details('swap.target') == {'LOAD': 'loaded', 'ACTIVE': 'active', 'SUB': 'active', 'UNIT': 'swap.target', 'DESCRIPTION': 'Swap'}
True
>>> units.unit_list['swap.target'] == {'LOAD': 'loaded', 'ACTIVE': 'active', 'SUB': 'active', 'UNIT': 'swap.target', 'DESCRIPTION': 'Swap'}
True
>>> units.is_active('swap.target')
True
>>> units.get_service_details('random.service') == {'LOAD': None, 'ACTIVE': None, 'SUB': None, 'UNIT': None, 'DESCRIPTION': None}
True
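The row format above, four whitespace-separated status columns followed by a free-text description, can be split as sketched below. This is a simplified stand-in for the parser; the `'.'` check is a crude way to skip the header and the trailing summary lines:

```python
def parse_list_units(output):
    """Map unit names to their LOAD/ACTIVE/SUB/DESCRIPTION columns."""
    units = {}
    for line in output.splitlines():
        parts = line.split(None, 4)
        # Real unit names always contain a '.', which skips the header
        # ('UNIT LOAD ...') and the trailing summary lines.
        if len(parts) < 4 or '.' not in parts[0]:
            continue
        unit, load, active, sub = parts[:4]
        units[unit] = {'UNIT': unit, 'LOAD': load, 'ACTIVE': active,
                       'SUB': sub,
                       'DESCRIPTION': parts[4] if len(parts) > 4 else ''}
    return units

sample = """UNIT                     LOAD   ACTIVE SUB       DESCRIPTION
swap.target              loaded active active    Swap
systemd-shutdownd.socket loaded active listening Delayed Shutdown Socket

161 loaded units listed. Pass --all to see loaded but inactive units, too."""
units = parse_list_units(sample)
```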
get_service_details(service_name)[source]

Return the service details collected by systemctl.

Parameters

service_name (str) -- service name including its extension.

Returns

Dictionary containing details for the service. If the service is not present, the dictionary values will be None:

{'LOAD': 'loaded', 'ACTIVE': 'active', 'SUB': 'running', 'UNIT': 'neutron-dhcp-agent.service'}

Return type

dict

is_active(service_name)[source]

Return the ACTIVE state of service managed by systemd.

Parameters

service_name (str) -- service name including its extension.

Returns

True if the service is active, False if inactive.

Return type

bool

is_failed(service_name)[source]

Return whether the ACTIVE state of the service managed by systemd is failed.

Parameters

service_name (str) -- service name including its extension.

Returns

True if the service is failed, False in all other states.

Return type

bool

is_loaded(service_name)[source]

Return the LOAD state of service managed by systemd.

Parameters

service_name (str) -- service name including its extension.

Returns

True if the service is loaded, False if not loaded.

Return type

bool

is_running(service_name)[source]

Return the SUB state of service managed by systemd.

Parameters

service_name (str) -- service name including its extension.

Returns

True if the service is running, False in all other states.

Return type

bool

parse_content(content)[source]

Main parsing class method which stores all interesting data from the content.

Parameters

content (context.content) -- Parser context content

property service_names

Returns a list of all UNIT names.

Type

list

unit_list = None

Dictionary of service details such as active, running, exited, dead.

Type

dict

class insights.parsers.systemd.unitfiles.UnitFiles(*args, **kwargs)[source]

Bases: insights.core.Parser

The UnitFiles class parses the output of /bin/systemctl list-unit-files and provides information about enabled services.

Output of Command:

UNIT FILE                          STATE
mariadb.service                    enabled
neutron-openvswitch-agent.service  enabled
neutron-ovs-cleanup.service        enabled
neutron-server.service             enabled
runlevel0.target                   disabled
runlevel1.target                   disabled
runlevel2.target                   enabled

Example

>>> conf.is_on('mariadb.service')
True
>>> conf.is_on('runlevel0.target')
False
>>> conf.exists('neutron-server.service')
True
>>> conf.exists('runlevel1.target')
True
>>> 'mariadb.service' in conf.services
True
>>> 'runlevel0.target' in conf.services
True
>>> 'nonexistent-service.service' in conf.services
False
>>> conf.services['mariadb.service']
True
>>> conf.services['runlevel1.target']
False
>>> conf.services['nonexistent-service.service']
Traceback (most recent call last):
  File "<doctest insights.parsers.systemd.unitfiles.UnitFiles[11]>", line 1, in <module>
    conf.services['nonexistent-service.service']
KeyError: 'nonexistent-service.service'
exists(service_name)[source]

Checks if the service is listed in systemctl.

Parameters

service_name (str) -- service name including ‘.service’

Returns

True if service exists, False otherwise.

Return type

bool

is_on(service_name)[source]

Checks if the service is enabled in systemctl.

Parameters

service_name (str) -- service name including ‘.service’

Returns

True if service is enabled, False if it is disabled. None if the service doesn’t exist.

Return type

Union[bool, None]

parse_content(content)[source]

Main parsing class method which stores all interesting data from the content.

Parameters

content (context.content) -- Parser context content

parsed_lines = None

Dictionary of content lines, accessed by service name.

Type

dict

service_list = None

List of service names in order of appearance.

Type

list

services = None

Dictionary of bool indicating whether each service is enabled, accessed by service name.

Type

dict

up2date Logs - Files /var/log/up2date

Module for parsing the content of the log file /var/log/up2date in sosreport archives of RHEL.

class insights.parsers.up2date_log.Up2dateLog(context)[source]

Bases: insights.core.LogFileOutput

Class for parsing the log file: /var/log/up2date.

Note

Please refer to its super-class insights.core.LogFileOutput

Example content of /var/log/up2date command is:

[Thu Feb  1 02:46:25 2018] rhn_register updateLoginInfo() login info
[Thu Feb  1 02:46:35 2018] rhn_register A socket error occurred: (-3, 'Temporary failure in name resolution'), attempt #1
[Thu Feb  1 02:46:40 2018] rhn_register A socket error occurred: (-3, 'Temporary failure in name resolution'), attempt #2
[Thu Feb  1 02:46:45 2018] rhn_register A socket error occurred: (-3, 'Temporary failure in name resolution'), attempt #3
[Thu Feb  1 02:46:50 2018] rhn_register A socket error occurred: (-3, 'Temporary failure in name resolution'), attempt #4
[Thu Feb  1 02:46:55 2018] rhn_register A socket error occurred: (-3, 'Temporary failure in name resolution'), attempt #5

...

Examples

>>> ulog.get('Temporary failure in name resolution')[0]['raw_message']
"[Thu Feb  1 02:46:35 2018] rhn_register A socket error occurred: (-3, 'Temporary failure in name resolution'), attempt #1"

Uptime - command /usr/bin/uptime

Parse the output of the uptime command into six attributes:

  • currtime: the time on the system as a string.

  • loadavg: a three element array of strings for the one, five and fifteen minute load averages.

  • updays: a string of the number of days the system has been up, or ‘’ if the system has been running for less than a day.

  • uphhmm: a string of the fraction of a day in hours and minutes that the system has been running. Times reported by uptime as e.g. ‘30 mins’ are converted into hh:mm format.

  • users: a string containing the number of users uptime reports as using the system.

  • uptime: a datetime.timedelta object of the total duration of uptime.

These can also be queried as named keys in the data attribute.

Sample output:

11:51:06 up  3:17,  1 user,  load average: 0.12, 0.20, 0.28

Examples

>>> uptime = shared[Uptime]
>>> from datetime import timedelta
>>> uptime.uptime > timedelta(days=1)
False
>>> uptime.updays
''
>>> uptime.users
'1'
>>> uptime.loadavg[1]
'0.20'
>>> uptime.data['currtime']
'11:51:06'
class insights.parsers.uptime.Uptime(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parser class to parse the output of uptime.

parse_content(content)[source]

This method must be implemented by classes based on this class.
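A sketch of that parse for the common single-line formats (`up 3:17`, `up 2 days, 4:05`, `up 30 min`); this is a simplified stand-in, and corner cases of real uptime output are glossed over:

```python
import re
from datetime import timedelta

UPTIME_RE = re.compile(
    r'\s*(?P<curr>\d{1,2}:\d{2}:\d{2})\s+up\s+'
    r'(?:(?P<days>\d+)\s+days?,\s*)?'
    r'(?:(?P<hh>\d+):(?P<mm>\d+)|(?P<mins>\d+)\s+min)[^,]*,\s*'
    r'(?P<users>\d+)\s+users?,\s*load average:\s*(?P<load>.+)')

def parse_uptime(line):
    """Pull current time, uptime, user count and load averages from one line."""
    match = UPTIME_RE.match(line)
    if not match:
        return None
    days = int(match.group('days') or 0)
    if match.group('mins'):
        hours, minutes = 0, int(match.group('mins'))
    else:
        hours, minutes = int(match.group('hh')), int(match.group('mm'))
    return {
        'currtime': match.group('curr'),
        'uptime': timedelta(days=days, hours=hours, minutes=minutes),
        'users': match.group('users'),
        'loadavg': [avg.strip() for avg in match.group('load').split(',')],
    }

info = parse_uptime(' 11:51:06 up  3:17,  1 user,  load average: 0.12, 0.20, 0.28')
```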

VDOStatus - command /usr/bin/vdo status

Module for parsing the output of command vdo status. The bulk of the content is split on the colon and keys are kept as is.

class insights.parsers.vdo_status.VDOStatus(context)[source]

Bases: insights.core.YAMLParser

Class for parsing vdo status command output.

Typical output of command vdo status looks like:

VDO status:
  Date: '2019-07-27 04:40:40-04:00'
  Node: rdma-qe-04.lab.bos.redhat.com
Kernel module:
  Loaded: true
  Name: kvdo
  Version information:
    kvdo version: 6.1.0.153
Configuration:
  File: /etc/vdoconf.yml
  Last modified: '2019-07-26 05:07:48'
VDOs:
  vdo1:
    Acknowledgement threads: 1
    Activate: enabled
    Device mapper status: 0 8370216 vdo /dev/sda5 albserver online cpu=2,bio=4,ack=1,bioRotationInterval=64
    Physical size: 7G
    Slab size: 2G
    Storage device: /dev/sda5
    VDO statistics:
      /dev/mapper/vdo1:
        overhead blocks used: 787140
        physical blocks: 1835008
        data blocks used: 0
  vdo2:
    Acknowledgement threads: 1
    Activate: enabled
    Device mapper status: 0 8370212 vdo /dev/sda6 albserver online cpu=2,bio=4,ack=1,bioRotationInterval=64
    VDO statistics:
      /dev/mapper/vdo1:
        1K-blocks: 7340032

Examples

>>> type(vdo)
<class 'insights.parsers.vdo_status.VDOStatus'>
>>> vdo['Kernel module']['Name']
'kvdo'
>>> vdo['Configuration']['File']
'/etc/vdoconf.yml'
>>> vdo['VDOs']['vdo1']['Activate']
'enabled'
>>> vdo['VDOs']['vdo1']['VDO statistics']['/dev/mapper/vdo1']['1K-blocks']
7340032
>>> vdo['VDO status']
{'Date': '2019-07-27 04:40:40-04:00', 'Node': 'rdma-qe-04.lab.bos.redhat.com'}
>>> vdo['VDOs']['vdo2']['Acknowledgement threads']
1
>>> vdo.get_slab_size_of_vol('vdo1')
'2G'
>>> vdo.volumns
['vdo1', 'vdo2']
>>> vdo.get_physical_blocks_of_vol('vdo1')
1835008
>>> vdo.get_physical_used_of_vol('vdo1')
0
>>> vdo.get_physical_free_of_vol('vdo1')
1047868
>>> vdo.get_logical_used_of_vol('vdo1')
0
>>> vdo.get_overhead_used_of_vol('vdo1')
787140
Raises

ParseException -- When input content is not available to parse

data

The parsed result of the ‘vdo status’ command

Type

dict

volumns

The list of the VDO volumes involved

Type

list

get_logical_blocks_of_vol(vol)[source]

The logical blocks of a specified volume

Parameters

vol (str) -- The VDO volume name specified

Returns

Returns size of logical blocks

Return type

int

get_logical_free_of_vol(vol)[source]

The logical free blocks of a specified volume

Parameters

vol (str) -- The VDO volume name specified

Returns

Returns size of logical free

Return type

int

get_logical_used_of_vol(vol)[source]

The logical used blocks of a specified volume

Parameters

vol (str) -- The VDO volume name specified

Returns

Returns size of logical blocks used

Return type

int

get_overhead_used_of_vol(vol)[source]

The overhead used blocks of a specified volume

Parameters

vol (str) -- The VDO volume name specified

Returns

Returns size of overhead blocks used

Return type

int

get_physical_blocks_of_vol(vol)[source]

The physical blocks of a specified volume

Parameters

vol (str) -- The VDO volume name specified

Returns

physical blocks size

Return type

int

get_physical_free_of_vol(vol)[source]

The physical free blocks of a specified volume

Parameters

vol (str) -- The VDO volume name specified

Returns

Returns size of physical free

Return type

int

get_physical_used_of_vol(vol)[source]

The physical used blocks of a specified volume

Parameters

vol (str) -- The VDO volume name specified

Returns

Returns size of physical blocks used

Return type

int

get_slab_size_of_vol(vol)[source]

The slab size of a specified volume

Parameters

vol (str) -- The VDO volume name specified

Returns

Slab size of the specified VDO volume

Return type

str

property volumns

The volumes that appear in the vdo status output

Returns

The VDO volume names

Return type

list
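The block arithmetic behind the get_physical_free_of_vol example above can be sketched directly from the sample statistics. The figures are taken from the sample output; this is not the parser's actual code:

```python
# Free-space arithmetic implied by the sample 'VDO statistics' above.
# Values are from the sample output; this is not the parser's implementation.
stats = {
    "physical blocks": 1835008,
    "overhead blocks used": 787140,
    "data blocks used": 0,
}

def physical_free(s):
    # free physical blocks = total physical - overhead - data in use
    return s["physical blocks"] - s["overhead blocks used"] - s["data blocks used"]

print(physical_free(stats))  # 1047868, matching get_physical_free_of_vol('vdo1')
```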

This module contains the following parsers:

VDSMConfIni - file /etc/vdsm/vdsm.conf

VDSMLoggerConf - file /etc/vdsm/logger.conf

class insights.parsers.vdsm_conf.VDSMConfIni(context)[source]

Bases: insights.core.IniConfigFile

This class parses the /etc/vdsm/vdsm.conf file using the IniConfigFile base parser.

Sample configuration file:

[vars]
ssl = true
cpu_affinity = 1

[addresses]
management_port = 54321
qq = 345

Examples

>>> 'vars' in conf
True
>>> conf.get('addresses', 'qq') == '345'
True
>>> conf.getboolean('vars', 'ssl')
True
>>> conf.getint('addresses', 'management_port')
54321
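The lookups shown above mirror what the stdlib configparser offers; a minimal sketch against the sample file (not insights' IniConfigFile itself):

```python
import configparser

# Reading the sample vdsm.conf with the stdlib configparser; a sketch of
# the lookup behaviour the IniConfigFile base parser provides.
SAMPLE = """\
[vars]
ssl = true
cpu_affinity = 1

[addresses]
management_port = 54321
qq = 345
"""

conf = configparser.ConfigParser()
conf.read_string(SAMPLE)
print(conf.getboolean("vars", "ssl"))               # True
print(conf.getint("addresses", "management_port"))  # 54321
```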
class insights.parsers.vdsm_conf.VDSMLoggerConf(context)[source]

Bases: insights.core.IniConfigFile

This class parses the /etc/vdsm/logger.conf file using the IniConfigFile base parser.

Sample configuration file:

[loggers]
keys=root,vds,storage,virt,ovirt_hosted_engine_ha,ovirt_hosted_engine_ha_config,IOProcess,devel

[formatter_long]
format: %(asctime)s %(levelname)-5s (%(threadName)s) [%(name)s] %(message)s (%(module)s:%(lineno)d)
class: vdsm.logUtils.TimezoneFormatter

[logger_root]
level=DEBUG
handlers=syslog,logfile
propagate=0

[formatters]
keys=long,simple,none,sysform

[logger_ovirt_hosted_engine_ha]
level=DEBUG
handlers=
qualname=ovirt_hosted_engine_ha
propagate=1

[formatter_sysform]
format= vdsm %(name)s %(levelname)s %(message)s
datefmt=

Examples

>>> len(vdsm_logger_conf.sections())
18
>>> vdsm_logger_conf.has_option('formatter_long', 'class')
True
>>> vdsm_logger_conf.has_option('loggers', 'keys')
True
>>> vdsm_logger_conf.getboolean('logger_root', 'propagate')
False
>>> vdsm_logger_conf.items('loggers') == {'keys': 'root,vds,storage,virt,ovirt_hosted_engine_ha,ovirt_hosted_engine_ha_config,IOProcess,devel'}
True
>>> vdsm_logger_conf.get('logger_ovirt_hosted_engine_ha', 'level') == 'DEBUG'
True
>>> vdsm_logger_conf.get('formatter_sysform', 'datefmt') == ''
True

VDSMLog - file /var/log/vdsm/vdsm.log and /var/log/vdsm/import/import-*.log

class insights.parsers.vdsm_log.VDSMImportLog(context)[source]

Bases: insights.core.LogFileOutput

Parser for the log file detailing virtual machine imports.

Sample log file:

[    0.2] preparing for copy
[    0.2] Copying disk 1/1 to /rhev/data-center/958ca292-9126/f524d2ba-155a/images/502f5598-335d-/d4b140c8-9cd5
[    0.0] >>> source, dest, and storage-type have different lengths

Example

>>> log = vdsm_import_logs.get('preparing for copy')
>>> len(log)
1
>>> log[0].get('raw_message', None)
'[    0.2] preparing for copy'
>>> vdsm_import_logs.vm_uuid              # file: import-1f9efdf5-2584-4a2a-8f85-c3b6f5dac4e0-20180130T154807.log
'1f9efdf5-2584-4a2a-8f85-c3b6f5dac4e0'
>>> vdsm_import_logs.file_datetime
datetime.datetime(2018, 1, 30, 15, 48, 7)
vm_uuid

UUID of imported VM

Type

str

file_datetime

Date and time that import began.

Type

datetime

get_after(timestamp, s=None)[source]

Find all the (available) logs that are after the given time stamp.

If s is not supplied, all lines are used. Otherwise, only the lines containing s are used. s can be either a single string or a list of strings; for a list, all keywords in the list must be found in the line.

Parameters
  • timestamp (float) -- log lines after this time are returned.

  • s (str or list) -- one or more strings to search for. If not supplied, all available lines are searched.

Yields

Log lines with time stamps after the given time.

Raises

TypeError -- The timestamp should be in float type, otherwise a TypeError will be raised.

parse_content(content)[source]

Parse import-@UUID-@datetime.log log file.

class insights.parsers.vdsm_log.VDSMLog(context)[source]

Bases: insights.core.LogFileOutput

Logs from the Virtual Desktop and Server Manager.

Uses LogFileOutput as the base class - see its documentation for more specific usage details.

Sample logs from VDSM version 3:

Thread-60::DEBUG::2015-05-08 18:01:03,071::blockSD::600::Storage.Misc.excCmd::(getReadDelay) '/bin/dd if=/dev/5a30691d-4fae-4023-ae96-50704f6b253c/metadata iflag=direct of=/dev/null bs=4096 count=1' (cwd None)
Thread-60::DEBUG::2015-05-08 18:01:03,090::blockSD::600::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.00038933 s, 10.5 MB/s\n'; <rc> = 0
Thread-65::DEBUG::2015-05-08 18:01:04,835::blockSD::600::Storage.Misc.excCmd::(getReadDelay) '/bin/dd if=/dev/e70cce65-0d02-4da4-8781-6aeeef5c86ff/metadata iflag=direct of=/dev/null bs=4096 count=1' (cwd None)
Thread-65::DEBUG::2015-05-08 18:01:04,857::blockSD::600::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000157193 s, 26.1 MB/s\n'; <rc> = 0
Thread-4662::DEBUG::2015-05-08 18:01:05,560::task::595::TaskManager.Task::(_updateState) Task=`9a7948f6-b6d9-42c2-b91f-7e0346dfc1d6`::moving from state init -> state preparing

Example

>>> from insights.parsers.vdsm_log import VDSMLog
>>> from insights.tests import context_wrap
>>> vdsm_log = VDSMLog(context_wrap(VDSM_LOG))
>>> vdsm_log.get('TaskManager')
'Thread-4662::DEBUG::2015-05-08 18:01:05,560::task::595::TaskManager.Task::(_updateState) Task=`9a7948f6-b6d9-42c2-b91f-7e0346dfc1d6`::moving from state init -> state preparing'
>>> list(vdsm_log.parse_lines(vdsm_log.get('TaskManager')))[0]  # from generator to list to subscript
{'thread': 'Thread-4662',
 'level': 'DEBUG',
 'asctime': datetime(2015, 5, 8, 18, 1, 5, 560000),
 'module': 'task',
 'line': '595',
 'logname': 'TaskManager.Task',
 'message': 'Task=`9a7948f6-b6d9-42c2-b91f-7e0346dfc1d6`::moving from state init -> state preparing'
}

Note: VDSM version 4 has a different log format than version 3. The VDSM version 4 log parser is designed to closely match the log format referred to in /etc/vdsm/logger.conf, which is as below:

format: %(asctime)s %(levelname)-5s (%(threadName)s) [%(name)s] %(message)s (%(module)s:%(lineno)d)

Sample logs from VDSM version 4:

2017-04-18 14:00:00,000+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.02 seconds (__init__:515)
2017-04-18 14:00:01,807+0200 INFO  (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::ffff:10.34.60.219:49213 (protocoldetector:72)
2017-04-18 14:00:01,808+0200 ERROR (Reactor thread) [ProtocolDetector.SSLHandshakeDispatcher] Error during handshake: unexpected eof (m2cutils:304)
2017-04-18 14:00:03,304+0200 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
2017-04-18 14:00:05,870+0200 INFO  (jsonrpc/7) [dispatcher] Run and protect: getSpmStatus(spUUID=u'00000002-0002-0002-0002-00000000024f', options=None) (logUtils:51)

Example

>>> from insights.parsers.vdsm_log import VDSMLog
>>> from insights.tests import context_wrap
>>> vdsm_log = VDSMLog(context_wrap(VDSM_LOG))
>>> lines_with_error = vdsm_log.get('ERROR')
>>> list(vdsm_log.parse_lines(lines_with_error))[0]
{'asctime': datetime(2017, 4, 18, 14, 0, 1, 808000),
 'levelname': 'ERROR',
 'thread_name': 'Reactor thread',
 'name': 'ProtocolDetector.SSLHandshakeDispatcher',
 'message': 'Error during handshake: unexpected eof',
 'module': 'm2cutils',
 'lineno': '304'
}
parse_lines(lines)[source]

Parse log lines to be used with keep_scan or get

Parameters

lines (list) -- Lines to be parsed

Yields

Dictionary with following keys

  • asctime(datetime) - date and time as datetime object

  • level(str) - log level. Can be INFO, ERROR, WARN or DEBUG

  • thread(str) - thread name

  • logname(str) - filter name

  • message(str) - the body of the message

  • module(str) - module name

  • lineno(str) - line number which triggered the log

This will NOT parse Python tracebacks. Any unparsed lines will be yielded as a list.
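The version-4 line format above can be matched with a single regular expression. This is a sketch keyed to the example dict shown earlier; the field names are the ones from that example, not the parser's internals:

```python
import re
from datetime import datetime

# Sketch: parse one VDSM v4 log line per the logger.conf format string.
# Field names mirror the example dict above, not the parser's internals.
LINE = ("2017-04-18 14:00:01,808+0200 ERROR (Reactor thread) "
        "[ProtocolDetector.SSLHandshakeDispatcher] "
        "Error during handshake: unexpected eof (m2cutils:304)")

m = re.match(
    r"(?P<date>\S+) (?P<time>[\d:]+),(?P<ms>\d+)\S*\s+(?P<levelname>\w+)\s+"
    r"\((?P<thread_name>[^)]+)\)\s+\[(?P<name>[^\]]+)\]\s+"
    r"(?P<message>.*)\s+\((?P<module>\w+):(?P<lineno>\d+)\)$",
    LINE,
)
entry = m.groupdict()
# Combine date, time and milliseconds into a single datetime object.
entry["asctime"] = datetime.strptime(
    "%s %s.%s" % (entry.pop("date"), entry.pop("time"), entry.pop("ms")),
    "%Y-%m-%d %H:%M:%S.%f",
)

print(entry["levelname"], entry["module"], entry["lineno"])
```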

VgDisplay - command vgdisplay

class insights.parsers.vgdisplay.VgDisplay(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parse the output of the vgdisplay -vv or vgdisplay commands.

The vg_list property is the main access to the list of volume groups in the command output. Each volume group is stored as a dictionary of keys and values drawn from the property list of the volume group. The volume group’s logical and physical volumes are stored in the ‘Logical Volumes’ and ‘Physical Volumes’ sub-keys, respectively.

Sample command output of vgdisplay -vv (pruned for clarity):

Couldn't find device with uuid VVLmw8-e2AA-ECfW-wDPl-Vnaa-0wW1-utv7tV.

--- Volume group ---
VG Name               RHEL7CSB
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  13
...

--- Logical volume ---
LV Path                /dev/RHEL7CSB/Root
LV Name                Root
VG Name                RHEL7CSB
LV Size                29.30 GiB
...

--- Physical volumes ---
PV Name               /dev/mapper/luks-96c66446-77fd-4431-9508-f6912bd84194
PV UUID               EfWV9V-03CX-E6zc-JkMw-yQae-wdzp-Je1KUn
PV Status             allocatable
Total PE / Free PE    118466 / 4036

Volume groups are kept in the vg_list property in the order they were found in the file.

Lines containing ‘Couldn’t find device with uuid’ and ‘missing physical volume’ are stored in a debug_info property.

Examples

>>> vg_info = shared[VgDisplay]
>>> len(vg_info.vg_list)
1
>>> vgdata = vg_info.vg_list[0]
>>> vgdata['VG Name']
'RHEL7CSB'
>>> vgdata['VG Size']
'462.76 GiB'
>>> 'Logical Volumes' in vgdata
True
>>> lvs = vgdata['Logical Volumes']
>>> type(lvs)
dict
>>> lvs.keys()  # Note - keyed by device name
['/dev/RHEL7CSB/Root']
>>> lvs['/dev/RHEL7CSB/Root']['LV Name']
'Root'
>>> lvs['/dev/RHEL7CSB/Root']['LV Size']
'29.30 GiB'
>>> 'Physical Volumes' in vgdata
True
>>> vgdata['Physical Volumes'].keys()
['/dev/mapper/luks-96c66446-77fd-4431-9508-f6912bd84194']
>>> vgdata['Physical Volumes']['/dev/mapper/luks-96c66446-77fd-4431-9508-f6912bd84194']['PV UUID']
'EfWV9V-03CX-E6zc-JkMw-yQae-wdzp-Je1KUn'
>>> vg_info.debug_info
["Couldn't find device with uuid VVLmw8-e2AA-ECfW-wDPl-Vnaa-0wW1-utv7tV."]
parse_content(content)[source]

This method must be implemented by classes based on this class.
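The sectioning described above can be sketched by splitting on the ‘--- ... ---’ headers and reading the aligned key/value pairs. This is a simplified sketch over a pruned sample, not the parser's actual implementation:

```python
# Sketch: split vgdisplay output into sections on the "--- ... ---" headers
# and read key/value pairs split on the first run of two spaces.
OUTPUT = """\
--- Volume group ---
VG Name               RHEL7CSB
Format                lvm2

--- Logical volume ---
LV Name                Root
LV Size                29.30 GiB
"""

sections = {}
current = None
for line in OUTPUT.splitlines():
    line = line.strip()
    if line.startswith("---"):
        current = sections.setdefault(line.strip("- "), {})
    elif current is not None and line:
        key, _, value = line.partition("  ")
        current[key] = value.strip()

print(sections["Volume group"]["VG Name"])  # RHEL7CSB
```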

VirshListAll - command virsh --readonly list --all

This module provides VM status using output of command virsh --readonly list --all.

class insights.parsers.virsh_list_all.VirshListAll(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Parsing output of virsh --readonly list --all.

Typical output of virsh --readonly list --all command is:

Id    Name                           State
----------------------------------------------------
2     rhel7.4                        running
4     rhel7.0                        paused
-     centos6.8-router               shut off
-     cfme-5.7.13                    shut off
-     cfme-rhos-5.9.0.15             shut off
-     fedora-24-kernel               shut off
-     fedora-saio_fedoraSaio         shut off
-     fedora24-misc                  shut off
-     freebsd11.0                    shut off
-     guixSD                         shut off
-     miq-gap-1                      shut off
-     rhel7.2                        shut off
-     RHOSP10                        shut off

Examples

>>> len(output.search(state='shut off')) == 11
True
>>> len(output.search(id=None)) == 11
True
>>> len(output.search(id=2)) == 1
True
>>> output.search(name='rhel7.4') == [{'state': 'running', 'id': 2, 'name': 'rhel7.4'}]
True
>>> output.get_vm_state('rhel7.0') == 'paused'
True
>>> output.get_vm_state('rhel9.0') is None
True
>>> 'cfme' in output
False
>>> 'cfme-5.7.13' in output
True
fields

List of KeyValue namedtuples for each line in the command output.

Type

list

cols

List of key-value pairs derived from the command output.

Type

list

keywords

Keywords present in the command output; each keyword is converted to lowercase.

Type

list

get_vm_state(vmname)[source]

Get VM state associated with vmname

Typical output is virsh --readonly list --all command:

 Id    Name                           State
----------------------------------------------------
 2     rhel7.4                        running
 4     rhel7.0                        paused

Example

>>> output.get_vm_state('rhel7.0')
'paused'
Parameters

vmname (str) -- The VM name, e.g. rhel7.0.

Returns

State of the VM. Returns None if vmname does not exist.

Return type

str

keyvalue

namedtuple: Represents a name-value pair as a namedtuple.

alias of KeyValue

parse_content(content)[source]

This method must be implemented by classes based on this class.

search(**kw)[source]

Search item based on key value pair.

Example

>>> len(output.search(state='shut off')) == 11
True
>>> len(output.search(id=None)) == 11
True
>>> len(output.search(id=2)) == 1
True
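Parsing the fixed-width table above reduces to splitting each data row into at most three fields, with ‘-’ ids mapped to None and keys lower-cased as in the search examples. A simplified sketch, not the parser's actual implementation:

```python
# Sketch: read the virsh list table into dicts; '-' ids become None.
TABLE = """\
 Id    Name                           State
----------------------------------------------------
 2     rhel7.4                        running
 -     centos6.8-router               shut off
"""

rows = []
for line in TABLE.splitlines()[2:]:  # skip the header and separator lines
    vm_id, name, state = line.split(None, 2)
    rows.append({
        "id": None if vm_id == "-" else int(vm_id),
        "name": name,
        "state": state.strip(),
    })

print(rows[1])  # {'id': None, 'name': 'centos6.8-router', 'state': 'shut off'}
```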

VirtUuidFacts - files /etc/rhsm/facts/virt_uuid.facts

This module provides parsing for the /etc/rhsm/facts/virt_uuid.facts file. The VirtUuidFacts class is based on the shared JSONParser class, which processes the JSON information into a dictionary.

Sample input data looks like:

{"virt.uuid": "4546B285-6C41-5D6R-86G5-0BFR4B3625FS", "uname.machine": "x86"}

Examples

>>> len(virt_uuid_facts.data)
2
class insights.parsers.virt_uuid_facts.VirtUuidFacts(*args, **kwargs)[source]

Bases: insights.core.JSONParser

Warning

This parser is deprecated, please use insights.parsers.subscription_manager_list.SubscriptionManagerFactsList instead.

VirtWhat - Command virt-what

Parses the output of the virt-what command to check if the host is running in a virtual machine.

Sample input:

kvm

Examples

>>> vw = shared[VirtWhat]
>>> vw.is_virtual
True
>>> vw.is_physical
False
>>> vw.generic
'kvm'
>>> 'aws' in vw
False

Note

For virt-what-1.13-8 or older on RHEL7, the command fails when run without an environment (or with a very restricted environment), and reports the error below:

virt-what: virt-what-cpuid-helper program not found in $PATH
class insights.parsers.virt_what.VirtWhat(*args, **kwargs)[source]

Bases: insights.core.CommandParser

Class for parsing virt-what command.

generic

The type of the virtual machine, or ‘baremetal’ for a physical machine.

Type

str

errors

List of the error information if any error occurs.

Type

list

specifics

List of the specific information if the command outputs.

Type

list

property is_physical

Is the host running in a physical machine? None when something is wrong.

Type

bool

property is_virtual

Is the host running in a virtual machine? None when something is wrong.

Type

bool

parse_content(content)[source]

This method must be implemented by classes based on this class.

VirtWhoConf - File /etc/virt-who.conf and /etc/virt-who.d/*.conf

The VirtWhoConf class parses the virt-who configuration files in ini-like format.

Note

The configuration files under /etc/virt-who.d/ may contain sensitive information, such as passwords, so it must be filtered.

class insights.parsers.virt_who_conf.VirtWhoConf(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.IniConfigFile

Parse the virt-who configuration files /etc/virt-who.conf and /etc/virt-who.d/*.conf.

Sample configuration file:

## This is a template for virt-who global configuration files. Please see
## virt-who-config(5) manual page for detailed information.
##
## virt-who checks /etc/virt-who.conf for sections 'global' and 'defaults'.
## The sections and their values are explained below.
## NOTE: These sections retain their special meaning and function only when present in /etc/virt-who.conf
##
## You can uncomment and fill following template or create new file with
## similar content.

#Terse version of the general config template:
[global]

interval=3600
#reporter_id=
debug=False
oneshot=False
#log_per_config=False
#log_dir=
#log_file=
#configs=

[defaults]
owner=Satellite
env=Satellite

Examples

>>> vwho_conf = shared[VirtWhoConf]
>>> 'global' in vwho_conf
True
>>> vwho_conf.has_option('global', 'debug')
True
>>> vwho_conf.get('global', 'oneshot')
"False"
>>> vwho_conf.getboolean('global', 'oneshot')
False
>>> vwho_conf.get('global', 'interval')
"3600"
>>> vwho_conf.getint('global', 'interval')
3600
>>> vwho_conf.items('defaults')
{'owner': 'Satellite', 'env': 'Satellite'}

VirtlogdConf - file /etc/libvirt/virtlogd.conf

The VirtlogdConf class parses the file /etc/libvirt/virtlogd.conf.

class insights.parsers.virtlogd_conf.VirtlogdConf(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

Parse content of /etc/libvirt/virtlogd.conf. The virtlogd.conf is in the standard conf file format and is read by the base parser class LegacyItemAccess.

Sample /etc/libvirt/virtlogd.conf file:

# Master virtlogd daemon configuration file
#

#################################################################
#
# Logging controls
#

# Logging level: 4 errors, 3 warnings, 2 information, 1 debug
# basically 1 will log everything possible
#log_level = 3

# Logging filters:
# A filter allows to select a different logging level for a given category
# of logs
# The format for a filter is one of:
#    x:name
#    x:+name
#      where name is a string which is matched against source file name,
#      e.g., "remote", "qemu", or "util/json", the optional "+" prefix
#      tells libvirt to log stack trace for each message matching name,
#      and x is the minimal level where matching messages should be logged:
#    1: DEBUG
#    2: INFO
#    3: WARNING
#    4: ERROR
#
# Multiple filter can be defined in a single @filters, they just need to be
# separated by spaces.
#
# e.g. to only get warning or errors from the remote layer and only errors
# from the event layer:
#log_filters="3:remote 4:event"

# Logging outputs:
# An output is one of the places to save logging information
# The format for an output can be:
#    x:stderr
#      output goes to stderr
#    x:syslog:name
#      use syslog for the output and use the given name as the ident
#    x:file:file_path
#      output to a file, with the given filepath
#    x:journald
#      ouput to the systemd journal
# In all case the x prefix is the minimal level, acting as a filter
#    1: DEBUG
#    2: INFO
#    3: WARNING
#    4: ERROR
#
# Multiple output can be defined, they just need to be separated by spaces.
# e.g. to log all warnings and errors to syslog under the virtlogd ident:
#log_outputs="3:syslog:virtlogd"
#

# The maximum number of concurrent client connections to allow
# over all sockets combined.
#max_clients = 1024


# Maximum file size before rolling over. Defaults to 2 MB
#max_size = 2097152

# Maximum number of backup files to keep. Defaults to 3,
# not including the primary active file
max_backups = 3

Examples

>>> conf.get('max_backups')
'3'
data

Ex: {'max_backups': '3'}

Type

dict

parse_content(content)[source]

This method must be implemented by classes based on this class.

VmaRaEnabledS390x - file /sys/kernel/mm/swap/vma_ra_enabled

Parser to parse the output of file /sys/kernel/mm/swap/vma_ra_enabled

class insights.parsers.vma_ra_enabled_s390x.VmaRaEnabledS390x(context)[source]

Bases: insights.core.Parser

Base class to parse /sys/kernel/mm/swap/vma_ra_enabled file, the file content will be stored in a string.

Sample output for file:

True

Examples

>>> type(vma)
<class 'insights.parsers.vma_ra_enabled_s390x.VmaRaEnabledS390x'>
>>> vma.ra_enabled
True
ra_enabled

The result parsed

Type

bool

Raises

SkipException -- When file content is empty

parse_content(content)[source]

This method must be implemented by classes based on this class.

Crash log vmcore-dmesg.txt

This module provides parsing for the vmcore-dmesg.txt crash logs stored in /var/crash/[host]-YYYY-MM-DD-HH:MM:SS/.

A typical vmcore-dmesg.txt looks like:

[  345.691798] device-mapper: raid: Failed to read superblock of device at position 0
[  345.693497] device-mapper: raid: Discovered old metadata format; upgrading to extended metadata format
[  345.701166] md/raid1:mdX: active with 1 out of 2 mirrors
[  345.726870] BUG: unable to handle kernel NULL pointer dereference at 00000000000005ec
[  345.727782] IP: [<ffffffffc0852ffb>] read_balance+0x1db/0x4e0 [raid1]
[  345.728570] PGD 0
[  345.729279] Oops: 0000 [#1] SMP
[  345.729950] Modules linked in: raid1 dm_raid raid456 async_raid6_recov async_memcpy async_pq raid6_pq async_xor xor async_tx
[  345.734752]  drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm bfa ahci crct10dif_pclmul crct10dif_common
[  345.737334] CPU: 6 PID: 952 Comm: systemd-udevd Kdump: loaded Not tainted 3.10.0-862.9.1.el7.x86_64 #1
[  345.738198] Hardware name: Dell Inc. PowerEdge R730xd/072T6D, BIOS 2.4.3 01/17/2017
[  345.739334] task: ffff9f888d030fd0 ti: ffff9f86b76b4000 task.ti: ffff9f86b76b4000

The class inherits LogFileOutput and doesn’t implement new methods or attributes. The vmcore_dmesg spec defined in Specs allows multi-output parsing. Filtering is allowed.

Examples

>>> vmcore_dmesg.get("Modules linked in:")
[{'raw_message': '[  345.729950] Modules linked in: raid1 dm_raid raid456 async_raid6_recov async_memcpy async_pq raid6_pq async_xor xor async_tx'}]
>>> vmcore_dmesg.get("device-mapper")
[{'raw_message': '[  345.691798] device-mapper: raid: Failed to read superblock of device at position 0'}, {'raw_message': '[  345.693497] device-mapper: raid: Discovered old metadata format; upgrading to extended metadata format'}]
class insights.parsers.vmcore_dmesg.VMCoreDmesg(context)[source]

Bases: insights.core.LogFileOutput

This class parses data in /var/crash/[host]-YYYY-MM-DD-HH:MM:SS/vmcore-dmesg.txt. It inherits from LogFileOutput (see LogFileOutput for details) and adds no new methods or attributes. MultiOutput is allowed. Filtering is allowed.
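The get() behaviour shown in the examples can be sketched as a substring scan returning one dict per matching line. This is a simplification of what LogFileOutput provides, not insights' implementation:

```python
# Sketch of the LogFileOutput.get() behaviour: one dict per line
# containing the given substring (lines are from the sample above).
LOG = [
    "[  345.691798] device-mapper: raid: Failed to read superblock of device at position 0",
    "[  345.726870] BUG: unable to handle kernel NULL pointer dereference at 00000000000005ec",
]

def get(lines, s):
    return [{"raw_message": line} for line in lines if s in line]

print(len(get(LOG, "device-mapper")))  # 1
```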

VMwareToolsConf - file /etc/vmware-tools/tools.conf

The VMware tools configuration file /etc/vmware-tools/tools.conf is in the standard ‘ini’ format and is read by the IniConfigFile parser. vmtoolsd.service provided by open-vm-tools package is configured using /etc/vmware-tools/tools.conf.

Sample /etc/vmware-tools/tools.conf file:

[guestinfo]
disable-query-diskinfo = true

[logging]
log = true

vmtoolsd.level = debug
vmtoolsd.handler = file
vmtoolsd.data = /tmp/vmtoolsd.log

Examples

>>> list(conf.sections()) == [u'guestinfo', u'logging']
True
>>> conf.has_option('guestinfo', 'disable-query-diskinfo')
True
>>> conf.getboolean('guestinfo', 'disable-query-diskinfo')
True
>>> conf.get('guestinfo', 'disable-query-diskinfo') == u'true'
True
class insights.parsers.vmware_tools_conf.VMwareToolsConf(context)[source]

Bases: insights.core.IniConfigFile

Class for VMware tool configuration file content.

Parsers for VSFTPD configuration

This module contains two parsers:

VsftpdPamConf - file /etc/pam.d/vsftpd

VsftpdConf - file /etc/vsftpd.conf

class insights.parsers.vsftpd.VsftpdConf(context)[source]

Bases: insights.core.Parser, insights.core.LegacyItemAccess

Parsing for /etc/vsftpd.conf. Key=value pairs are stored in a dictionary, made available directly through the object itself thanks to the insights.core.LegacyItemAccess mixin.

Reference:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/3/html/Reference_Guide/s1-ftp-vsftpd-conf.html

Sample content:

# No anonymous login
anonymous_enable=NO
# Let local users login
local_enable=YES

# Write permissions
write_enable=YES

Examples

>>> conf = shared[VsftpdConf]
>>> 'anonymous_enable' in conf
True
>>> 'chmod_enable' in conf
False
>>> conf['anonymous_enable']
'NO'
parse_content(content)[source]

This method must be implemented by classes based on this class.
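The key=value storage described above can be sketched in a few lines over the sample content, skipping comments and blank lines. A simplified sketch, not the parser's actual code:

```python
# Sketch: parse vsftpd.conf key=value pairs into a dict,
# ignoring comment and blank lines (sample content from above).
SAMPLE = """\
# No anonymous login
anonymous_enable=NO
# Let local users login
local_enable=YES

# Write permissions
write_enable=YES
"""

data = {}
for line in SAMPLE.splitlines():
    line = line.strip()
    if not line or line.startswith("#"):
        continue
    key, _, value = line.partition("=")
    data[key.strip()] = value.strip()

print(data["anonymous_enable"])  # NO
```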

class insights.parsers.vsftpd.VsftpdPamConf(context)[source]

Bases: insights.parsers.pam.PamDConf

Parsing for the /etc/pam.d/vsftpd PAM configuration.

See the insights.parsers.pam.PamDConf class documentation for more information on how this class is used.

Sample PAM configuration for vsftpd:

#%PAM-1.0
session    optional     pam_keyinit.so    force revoke
auth       required     pam_listfile.so item=user sense=deny file=/etc/vsftpd/ftpusers onerr=succeed
auth       required     pam_shells.so
auth       include      password-auth
account    include      password-auth
session    required     pam_loginuid.so
session    include      password-auth

Examples

>>> vs_pam = shared[VsftpdPamConf]
>>> vs_pam[0]
<insights.parsers.pam.PamConfEntry at 0x15c6cd0>
>>> vs_pam[0].interface
'session'
>>> vs_pam[0].control_flags
[ControlFlag(flag='optional', value=None)]
>>> vs_pam[0].control_flags[0].flag
'optional'
>>> vs_pam[0].module_name
'pam_keyinit.so'
>>> vs_pam[0].module_args
'force revoke'

Parsers for file /sys/kernel/debug/x86/*_enabled outputs

This module provides the following parsers:

X86PTIEnabled - file /sys/kernel/debug/x86/pti_enabled

X86IBPBEnabled - file /sys/kernel/debug/x86/ibpb_enabled

X86IBRSEnabled - file /sys/kernel/debug/x86/ibrs_enabled

X86RETPEnabled - file /sys/kernel/debug/x86/retp_enabled

class insights.parsers.x86_debug.X86DebugEnabled(context)[source]

Bases: insights.core.Parser

Class for parsing file /sys/kernel/debug/x86/*_enabled

value

The parsed value of /sys/kernel/debug/x86/*_enabled

Type

int

Raises

SkipException -- When input content is empty

parse_content(content)[source]

This method must be implemented by classes based on this class.
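Since these debugfs files hold a single integer flag, parsing reduces to stripping and converting the only non-empty line. A minimal sketch of that idea, not the parser's actual code (the real parser raises SkipException on empty content):

```python
# Minimal sketch: a *_enabled debugfs file holds one integer flag.
def parse_enabled(content):
    lines = [line.strip() for line in content if line.strip()]
    if not lines:
        # insights raises SkipException here; ValueError stands in for it
        raise ValueError("input content is empty")
    return int(lines[0])

print(parse_enabled(["1"]))  # 1
```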

class insights.parsers.x86_debug.X86IBPBEnabled(context)[source]

Bases: insights.parsers.x86_debug.X86DebugEnabled

Class for parsing file /sys/kernel/debug/x86/ibpb_enabled. Typical output of file /sys/kernel/debug/x86/ibpb_enabled looks like:

1

Examples

>>> type(dva)
<class 'insights.parsers.x86_debug.X86IBPBEnabled'>
>>> dva.value
1
value

The parsed value of /sys/kernel/debug/x86/ibpb_enabled

Type

int

Raises

SkipException -- When input content is empty

class insights.parsers.x86_debug.X86IBRSEnabled(context)[source]

Bases: insights.parsers.x86_debug.X86DebugEnabled

Class for parsing file /sys/kernel/debug/x86/ibrs_enabled. Typical output of file /sys/kernel/debug/x86/ibrs_enabled looks like:

0

Examples

>>> type(dl)
<class 'insights.parsers.x86_debug.X86IBRSEnabled'>
>>> dl.value
0
value

The parsed value of /sys/kernel/debug/x86/ibrs_enabled

Type

int

Raises

SkipException -- When input content is empty

class insights.parsers.x86_debug.X86PTIEnabled(context)[source]

Bases: insights.parsers.x86_debug.X86DebugEnabled

Class for parsing file /sys/kernel/debug/x86/pti_enabled. Typical output of file /sys/kernel/debug/x86/pti_enabled looks like:

0

Examples

>>> type(dv)
<class 'insights.parsers.x86_debug.X86PTIEnabled'>
>>> dv.value
0
value

The parsed value of /sys/kernel/debug/x86/pti_enabled

Type

int

Raises

SkipException -- When input content is empty

class insights.parsers.x86_debug.X86RETPEnabled(context)[source]

Bases: insights.parsers.x86_debug.X86DebugEnabled

Class for parsing file /sys/kernel/debug/x86/retp_enabled. Typical output of file /sys/kernel/debug/x86/retp_enabled looks like:

1

Examples

>>> type(dval)
<class 'insights.parsers.x86_debug.X86RETPEnabled'>
>>> dval.value
1
value

The parsed value of /sys/kernel/debug/x86/retp_enabled

Type

int

Raises

SkipException -- When input content is empty

XFSInfo - command /usr/sbin/xfs_info {mount}

The XFSInfo parser reads the output of the xfs_info command and turns it into a dictionary of keys and values in several sections, as given in the output of the command:

meta-data=/dev/sda      isize=256    agcount=32, agsize=16777184 blks
         =              sectsz=512   attr=2
data     =              bsize=4096   blocks=536869888, imaxpct=5
         =              sunit=32     swidth=128 blks
naming   =version 2     bsize=4096
log      =internal      bsize=4096   blocks=32768, version=2
         =              sectsz=512   sunit=32 blks, lazy-count=1
realtime =none          extsz=524288 blocks=0, rtextents=0

The main sections are meta-data, data, naming, log and realtime, stored under those keys in the object’s xfs_info property. Each section can optionally have a ‘specifier’, which is the first thing after the section name (e.g. version or /dev/sda). If no specifier is found on the line, none is recorded for the section. The specifier can also have a value (e.g. 2 for the version), which is recorded in the specifier_value key in the section.

Each ‘key=value’ pair until the next given section start (or end of file), is recorded as an entry in the section dictionary, with all values that are numeric being converted to integers (i.e. usually anything without a ‘blks’ suffix).

Because the spec for this parser can collect multiple files, the shared parser information contains a list of XFSInfo objects, one per file system.

In addition, the data_size and log_size values are calculated as properties from the block size and blocks in the data and log, respectively.
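The parsing rules described above (sections with optional specifiers, integer conversion, and 'blks' values kept as strings) can be sketched as a standalone function. This is a rough approximation of the behaviour, not the insights implementation:

```python
def parse_xfs_info(output):
    """Rough standalone sketch of the xfs_info section parsing described above."""
    sections = {}
    current = None
    for line in output.splitlines():
        if '=' not in line:
            continue
        name, _, rest = line.partition('=')
        name = name.strip()
        tokens = rest.replace(',', ' ').split()
        merged = []
        for tok in tokens:
            # re-attach a bare 'blks' unit to the preceding key=value token
            if tok == 'blks' and merged:
                merged[-1] += ' blks'
            else:
                merged.append(tok)
        i = 0
        if name:  # text before the first '=' opens a new section
            current = {}
            sections[name] = current
            # leading tokens without '=' are the specifier and its value
            if merged and '=' not in merged[0]:
                current['specifier'] = merged[0]
                i = 1
                if i < len(merged) and '=' not in merged[i]:
                    val = merged[i]
                    current['specifier_value'] = int(val) if val.isdigit() else val
                    i += 1
        if current is None:
            continue
        for tok in merged[i:]:
            key, _, val = tok.partition('=')
            current[key] = int(val) if val.isdigit() else val
    return sections


SAMPLE = """\
meta-data=/dev/sda      isize=256    agcount=32, agsize=16777184 blks
         =              sectsz=512   attr=2
data     =              bsize=4096   blocks=536869888, imaxpct=5
         =              sunit=32     swidth=128 blks
naming   =version 2     bsize=4096
log      =internal      bsize=4096   blocks=32768, version=2
         =              sectsz=512   sunit=32 blks, lazy-count=1
realtime =none          extsz=524288 blocks=0, rtextents=0
"""

info = parse_xfs_info(SAMPLE)
data_size = info['data']['blocks'] * info['data']['bsize']  # bytes
```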

insights.parsers.xfs_info.xfs_info

A dictionary of dictionaries containing the data from the report, keyed on the five section names in the output: ‘meta-data’, ‘data’, ‘naming’, ‘log’, and ‘realtime’. ‘meta-data’, ‘data’ and ‘log’ are always present. Within each dictionary a special key ‘specifier’ stores any data immediately after the section name - e.g. ‘/dev/sda’ or ‘version’ in the case of the output below. Any data immediately following that is stored in the specifier_value key. Otherwise, data is read in key=value pairs - e.g. from the output below, the isize key will have the value 256 (an integer). Data values given in blocks are left as is, so the value of the agsize key is ‘16777184 blks’ as a string.

Type

dict

insights.parsers.xfs_info.mount

If the mount point can be derived from the file name of the original output, then this attribute contains the reconstructed mount point name.

Type

str

insights.parsers.xfs_info.device

The device name immediately after the ‘meta-data’ section heading.

Type

str

insights.parsers.xfs_info.data_size

The size of the data segment in bytes, from multiplying the blocks and bsize values of the data section together.

Type

int

insights.parsers.xfs_info.log_size

The size of the log segment in bytes, from multiplying the blocks and bsize values of the log section together.

Type

int

Sample output (from file ‘sos_commands/xfs/xfs_info_.data’):

meta-data=/dev/sda      isize=256    agcount=32, agsize=16777184 blks
         =              sectsz=512   attr=2
data     =              bsize=4096   blocks=536869888, imaxpct=5
         =              sunit=32     swidth=128 blks
naming   =version 2     bsize=4096
log      =internal      bsize=4096   blocks=32768, version=2
         =              sectsz=512   sunit=32 blks, lazy-count=1
realtime =none          extsz=524288 blocks=0, rtextents=0

Examples

>>> xfs = shared[XFSInfo][0]  # first XFS filesystem as an example
>>> xfs.xfs_info['meta-data']['specifier']
'/dev/sda'
>>> 'specifier_value' in xfs.xfs_info['meta-data']
False
>>> xfs.xfs_info['meta-data']['agcount']
32
>>> xfs.xfs_info['meta-data']['agsize']
'16777184 blks'
>>> xfs.data_size
2199019061248
>>> 'crc' in xfs.xfs_info['data']
False
class insights.parsers.xfs_info.XFSInfo(context)[source]

Bases: insights.core.CommandParser

This mapper reads the output of the xfs_info command.

As this spec can collect more than one file, the mapper will return a list of XFSInfo objects, which need to be iterated through to find the information on the mount point or device you need.

parse_content(content)[source]

In general the pattern is:

section =sectionkey key1=value1 key2=value2, key3=value3
        = key4=value4
nextsec =sectionkey sectionvalue  key=value otherkey=othervalue

Sections are continued over lines as per RFC822. The first equals sign is column-aligned, and the first key=value is too, but the rest seems to be comma separated. Specifiers come after the first equals sign, and sometimes have a value property, but sometimes not.

XinetdConf - files /etc/xinetd.conf and in /etc/xinetd.d/

This module provides parsing for the /etc/xinetd.conf and /etc/xinetd.d/* files.

Sample input data of file /etc/xinetd.conf looks like:

defaults
{
        enabled                 =
        no_access               = 0.0.0.0/0
        instances               = 60
        per_source              = 128
        log_type                = SYSLOG authpriv
        log_on_success          = HOST PID DURATION EXIT
        log_on_failure          = HOST
        cps                     = 25 30
        max_load                = 2
}

includedir /etc/xinetd.d

Sample input data of file /etc/xinetd.d/tftp looks like:

service tftp
{
        socket_type             = dgram
        protocol                = udp
        wait                    = yes
        user                    = root
        server                  = /usr/sbin/in.tftpd
        server_args             = -s /var/lib/tftpboot
        disable                 = yes
        per_source              = 11
        cps                     = 100 2
        flags                   = IPv4
}
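A rough standalone sketch of the brace-block parsing follows (the real XinetdConf parser also tracks validity flags such as is_valid, which are omitted here):

```python
def parse_xinetd(content):
    """Rough sketch of parsing 'section { key = value }' blocks."""
    data = {}
    section = None
    for raw in content.splitlines():
        line = raw.strip()
        if not line or line.startswith('#'):
            continue
        if line.startswith('includedir'):
            data['includedir'] = line.split(None, 1)[1]
        elif line == '{':
            continue
        elif line == '}':
            section = None
        elif section is not None and '=' in line:
            key, _, val = line.partition('=')
            data[section][key.strip()] = val.strip()
        else:
            # 'defaults' or 'service NAME' opens a block; use the last word
            section = line.split()[-1]
            data[section] = {}
    return data


XINETD = """\
defaults
{
        enabled                 =
        instances               = 60
        cps                     = 25 30
}

includedir /etc/xinetd.d

service tftp
{
        protocol                = udp
        server_args             = -s /var/lib/tftpboot
        disable                 = yes
}
"""

conf = parse_xinetd(XINETD)
```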

Examples

>>> xinetd_conf = shared[XinetdConf].data
>>> assert xinetd_conf.get('is_valid') == True
>>> assert xinetd_conf.get('is_includedir') == True
>>> xinetd_conf.get('includedir')
'/etc/xinetd.d'
>>> 'defaults' in xinetd_conf
True
>>> xinetd_conf.get('defaults')
{   'enabled': '',
    'v6only': 'no',
    'log_on_failure': 'HOST',
    'umask': '002',
    'log_on_success': 'PID HOST DURATION EXIT',
    'instances': '50',
    'per_source': '10',
    'groups': 'yes',
    'cps': '50 10',
    'log_type': 'SYSLOG daemon info'
}
>>> 'tftp' in xinetd_conf
True
>>> xinetd_conf.get('tftp')
{   'protocol': 'udp',
    'socket_type': 'dgram',
    'server': '/usr/sbin/in.tftpd',
    'server_args': '-s /var/lib/tftpboot',
    'disable': 'yes',
    'flags': 'IPv4',
    'user': 'root',
    'per_source': '11',
    'cps': '100 2',
    'wait': 'yes'
}
>>> 'abc' in xinetd_conf
False
>>> xinetd_conf.get('abc')
>>>
>>>
>>> XINETD_CONF_BAD = '''
... defaults {
...         umask           = 002
... }
...
... includedir /etc/xinetd.d
... '''
>>> xinetd_conf = shared[XinetdConf].data
>>> assert xinetd_conf.get('is_valid') == False
>>> 'defaults' in xinetd_conf
False
class insights.parsers.xinetd_conf.XinetdConf(context)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

Parse contents of file /etc/xinetd.conf and /etc/xinetd.d/*.

parse_content(content)[source]

This method must be implemented by classes based on this class.

Yum - Commands

Parsers for yum commands.

This module contains the classes that parse the output of the command yum -C --noplugins repolist.

YumRepoList - command yum -C --noplugins repolist

class insights.parsers.yum.YumRepoList(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser

Class for parsing the output of yum -C --noplugins repolist command.

Typical output of the command is:

repo id                                             repo name                                                                                                    status
rhel-7-server-e4s-rpms/x86_64                       Red Hat Enterprise Linux 7 Server - Update Services for SAP Solutions (RPMs)                                 12,250
!rhel-ha-for-rhel-7-server-e4s-rpms/x86_64          Red Hat Enterprise Linux High Availability (for RHEL 7 Server) Update Services for SAP Solutions (RPMs)         272
*rhel-sap-hana-for-rhel-7-server-e4s-rpms/x86_64    RHEL for SAP HANA (for RHEL 7 Server) Update Services for SAP Solutions (RPMs)                                   21
repolist: 12,768

Or sometimes it just outputs repo id and status:

repo id                                                                               status
LME_EPEL_6_x86_64                                                                        26123
LME_FSMLabs_Timekeeper_timekeeper                                                            2
LME_HP_-_Software_Delivery_Repository_Firmware_Pack_for_ProLiant_-_6Server_-_Current      1163
LME_HP_-_Software_Delivery_Repository_Scripting_Took_Kit_-_6Server                          17
LME_HP_-_Software_Delivery_Repository_Service_Pack_for_ProLiant_-_6Server_-_Current       1915
LME_HP_-_Software_Delivery_Repository_Smart_Update_Manager_-_6Server                        30
LME_LME_Custom_Product_Mellanox_OFED                                                       114
LME_LME_Custom_Product_OMD_RPMS                                                             14
LME_LME_Custom_Product_RPMs                                                                  5
LME_LME_Custom_Product_SNOW_Repository                                                       2
rhel-6-server-optional-rpms                                                            10400+1
rhel-6-server-rpms                                                                    18256+12
rhel-6-server-satellite-tools-6.2-rpms                                                      55
repolist: 58096
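The column layout above can be handled with a simple heuristic: fields are separated by two or more spaces, while repo names use single spaces internally. The sketch below is an approximation of that idea, not the insights implementation; the helper names are illustrative:

```python
import re


def parse_repolist(output):
    """Sketch: columns in repolist output are separated by two or more spaces."""
    repos = {}
    for line in output.splitlines():
        line = line.strip()
        if not line or line.startswith('repo id') or line.startswith('repolist:'):
            continue
        parts = re.split(r'\s{2,}', line)
        entry = {'id': parts[0]}
        if len(parts) == 3:
            entry['name'], entry['status'] = parts[1], parts[2]
        elif len(parts) == 2:
            entry['status'] = parts[1]
        # '!' and '*' markers are stripped from the key but kept in 'id'
        repos[parts[0].lstrip('!*')] = entry
    return repos


def rhel_repos(repos):
    """RHEL repo ids with the '/arch' suffix removed."""
    return sorted({r.split('/')[0] for r in repos if r.split('/')[0].startswith('rhel')})


SAMPLE = """\
repo id                                             repo name                                                                                                    status
rhel-7-server-e4s-rpms/x86_64                       Red Hat Enterprise Linux 7 Server - Update Services for SAP Solutions (RPMs)                                 12,250
!rhel-ha-for-rhel-7-server-e4s-rpms/x86_64          Red Hat Enterprise Linux High Availability (for RHEL 7 Server) Update Services for SAP Solutions (RPMs)         272
*rhel-sap-hana-for-rhel-7-server-e4s-rpms/x86_64    RHEL for SAP HANA (for RHEL 7 Server) Update Services for SAP Solutions (RPMs)                                   21
repolist: 12,768
"""

repos = parse_repolist(SAMPLE)
```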

Examples

>>> len(repolist)
3
>>> 'rhel-7-server-e4s-rpms/x86_64' in repolist.repos
True
>>> 'rhel-7-server-e4s-rpms' in repolist.repos
False
>>> 'rhel-7-server-e4s-rpms' in repolist.rhel_repos
True
>>> repolist['rhel-7-server-e4s-rpms/x86_64']['name']
'Red Hat Enterprise Linux 7 Server - Update Services for SAP Solutions (RPMs)'
>>> repolist[0]['name']
'Red Hat Enterprise Linux 7 Server - Update Services for SAP Solutions (RPMs)'
>>> repolist['rhel-ha-for-rhel-7-server-e4s-rpms/x86_64']['id']
'!rhel-ha-for-rhel-7-server-e4s-rpms/x86_64'
>>> len(repolist_no_reponame)
13
>>> len(repolist_no_reponame.rhel_repos)
3
>>> 'rhel-6-server-rpms' in repolist_no_reponame.repos
True
>>> 'rhel-6-server-optional-rpms' in repolist_no_reponame.rhel_repos
True
>>> repolist_no_reponame[0]['id']
'LME_EPEL_6_x86_64'
>>> repolist_no_reponame[0].get('name', '')
''
data

list of repos wrapped in dictionaries

Type

list

repos

dict of all listed repos, where the key is the full repo id without the leading “!” or “*” marker. The original id, marker included, is kept in the ‘id’ value. For example:

self.repos = {
    'rhel-7-server-e4s-rpms/x86_64': {
        'id': 'rhel-7-server-e4s-rpms/x86_64',
        'name': 'Red Hat Enterprise Linux 7 Server - Update Services for SAP Solutions (RPMs)',
        'status': '12,250'
    },
    'rhel-ha-for-rhel-7-server-e4s-rpms/x86_64': {
        'id': '!rhel-ha-for-rhel-7-server-e4s-rpms/x86_64',
        'name': 'Red Hat Enterprise Linux High Availability (for RHEL 7 Server) Update Services for SAP Solutions (RPMs)',
        'status': '272'
    },
    'rhel-sap-hana-for-rhel-7-server-e4s-rpms/x86_64': {
        'id': '*rhel-sap-hana-for-rhel-7-server-e4s-rpms/x86_64',
        'name': 'RHEL for SAP HANA (for RHEL 7 Server) Update Services for SAP Solutions (RPMs)',
        'status': '21'
    }
}
Type

dict

rhel_repos

list of all the RHEL repos; each item is just the repo id without the server and arch info. For example:

self.rhel_repos = ['rhel-7-server-e4s-rpms', 'rhel-ha-for-rhel-7-server-e4s-rpms', 'rhel-sap-hana-for-rhel-7-server-e4s-rpms']
Type

list

property eus

list of the EUS part of each repo

parse_content(content)[source]

This method must be implemented by classes based on this class.

property rhel_repos

list of RHEL repos/Repo IDs

YumConf - file /etc/yum.conf

This module provides parsing for the /etc/yum.conf file. The YumConf class parses the information in the file /etc/yum.conf. See the IniConfigFile class for more information on attributes and methods.

Sample input data looks like:

[main]

cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=3

[rhel-7-server-rpms]

metadata_expire = 86400
baseurl = https://cdn.redhat.com/content/rhel/server/7/$basearch
name = Red Hat Enterprise Linux 7 Server (RPMs)
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled = 1
gpgcheck = 1
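Since YumConf builds on IniConfigFile behaviour, a rough standalone analogue can use the stdlib configparser; disabling interpolation keeps values such as $basearch literal, and allow_no_value mirrors the flag mentioned under parse_content below (the trimmed sample is for illustration):

```python
import configparser

YUM_CONF = """\
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
gpgcheck=1

[rhel-7-server-rpms]
enabled = 1
gpgcheck = 1
"""

# interpolation=None keeps '$basearch' literal; allow_no_value permits bare keys
cp = configparser.ConfigParser(interpolation=None, allow_no_value=True)
cp.read_string(YUM_CONF)
```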

Examples

>>> yconf = shared[YumConf]
>>> 'main' in yconf
True
>>> 'rhel-7-server-rpms' in yconf
True
>>> yconf.has_option('main', 'gpgcheck')
True
>>> yconf.has_option('main', 'foo')
False
>>> yconf.get('rhel-7-server-rpms', 'enabled')
'1'
>>> yconf.items('main')
{'plugins': '1',
 'keepcache': '0',
 'cachedir': '/var/cache/yum/$basearch/$releasever',
 'exactarch': '1',
 'obsoletes': '1',
 'installonly_limit': '3',
 'debuglevel': '2',
 'gpgcheck': '1',
 'logfile': '/var/log/yum.log'}
class insights.parsers.yum_conf.YumConf(context)[source]

Bases: insights.core.IniConfigFile

Parse contents of file /etc/yum.conf.

parse_content(content)[source]

Parses content of the config file.

In child class overload and call super to set flag allow_no_values and allow keys with no value in config file:

def parse_content(self, content):
    super(YourClass, self).parse_content(content,
                                         allow_no_values=True)

YumListInstalled - Command yum list installed

The YumListInstalled class parses the output of the yum list installed command. Each line is parsed and stored in a YumInstalledRpm object.

Sample input data:

Loaded plugins: product-id, search-disabled-repos, subscription-manager
Installed Packages
GConf2.x86_64                    3.2.6-8.el7             @rhel-7-server-rpms
GeoIP.x86_64                     1.5.0-11.el7            @anaconda/7.3
ImageMagick.x86_64               6.7.8.9-15.el7_2        @rhel-7-server-rpms
NetworkManager.x86_64            1:1.4.0-17.el7_3        installed
NetworkManager.x86_64            1:1.8.0-9.el7           installed
NetworkManager-config-server.noarch
                                 1:1.8.0-9.el7           installed
Uploading Enabled Repositories Report
Loaded plugins: priorities, product-id, rhnplugin, rhui-lb, subscription-
              : manager, versionlock

Examples

>>> type(rpms)
<class 'insights.parsers.yum_list_installed.YumListInstalled'>
>>> 'GeoIP' in rpms
True
>>> rpms.get_max('GeoIP')
0:GeoIP-1.5.0-11.el7
>>> rpms.expired_cache
True
>>> type(rpms.get_max('GeoIP'))
<class 'insights.parsers.yum_list_installed.YumInstalledRpm'>
>>> rpm = rpms.get_max('GeoIP')
>>> rpm.package
'GeoIP-1.5.0-11.el7'
>>> rpm.nvr
'GeoIP-1.5.0-11.el7'
>>> rpm.source
>>> rpm.name
'GeoIP'
>>> rpm.version
'1.5.0'
>>> rpm.release
'11.el7'
>>> rpm.arch
'x86_64'
>>> rpm.epoch
'0'
>>> from insights.parsers.yum_list_installed import YumInstalledRpm
>>> rpm2 = YumInstalledRpm.from_package('GeoIP-1.6.0-11.el7.x86_64')
>>> rpm == rpm2
False
>>> rpm > rpm2
False
>>> rpm < rpm2
True
class insights.parsers.yum_list_installed.YumInstalledRpm(data)[source]

Bases: insights.parsers.installed_rpms.InstalledRpm

The same as insights.parsers.installed_rpms.InstalledRpm but with an additional .repo attribute.

repo = None

yum / dnf repository name, if available.

Type

str

class insights.parsers.yum_list_installed.YumListInstalled(context)[source]

Bases: insights.core.CommandParser, insights.parsers.installed_rpms.RpmList

YumListInstalled shares the insights.parsers.installed_rpms.RpmList interface with insights.parsers.installed_rpms.InstalledRpms. The only difference is YumListInstalled takes the output of yum list installed as its source data, and the YumInstalledRpm instances it produces contain a .repo attribute.

expired_cache = None

Indicates if the yum repo cache is expired.

Type

bool

parse_content(content)[source]

yum list installed output is basically tabular with an ignorable set of rows at the top and a line “Installed Packages” that designates the following rows as data. Each column has a maximum width, and if any column overflows, the following columns wrap to the next line and indent to their usual starting positions. It’s also possible for the data rows to be followed by more lines that should be ignored. Since yum list installed is for human consumption, the footer lines can be syntactically ambiguous with data lines. We use heuristics to check for an invalid row to signal the end of data.
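The wrap handling described above can be sketched with a simple heuristic: indented lines continue the previous row, and only rows with exactly three fields survive as package data. This is an approximation of the approach, not the insights code:

```python
def merge_wrapped_rows(lines):
    """Heuristic sketch: indented lines continue the previous row; only
    rows with exactly three fields (package, version, repo) are kept."""
    rows, buf = [], []
    for line in lines:
        if line.startswith(' '):
            buf.extend(line.split())
        else:
            if buf:
                rows.append(buf)
            buf = line.split()
    if buf:
        rows.append(buf)
    return [row for row in rows if len(row) == 3]


SAMPLE = """\
Loaded plugins: product-id, search-disabled-repos, subscription-manager
Installed Packages
GConf2.x86_64                    3.2.6-8.el7             @rhel-7-server-rpms
GeoIP.x86_64                     1.5.0-11.el7            @anaconda/7.3
NetworkManager-config-server.noarch
                                 1:1.8.0-9.el7           installed
Uploading Enabled Repositories Report
"""

rows = merge_wrapped_rows(SAMPLE.splitlines())
```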

YumLog - file /var/log/yum.log

This module provides parsing for the /var/log/yum.log file. The YumLog class implements parsing for the file, which looks like:

May 13 15:54:49 Installed: libevent-2.0.21-4.el7.x86_64
May 13 15:54:49 Installed: tmux-1.8-4.el7.x86_64
May 23 18:06:24 Installed: wget-1.14-10.el7_0.1.x86_64
May 23 18:10:05 Updated: 1:openssl-libs-1.0.1e-51.el7_2.5.x86_64
May 23 18:10:05 Installed: 1:perl-parent-0.225-244.el7.noarch
May 23 18:10:05 Installed: perl-HTTP-Tiny-0.033-3.el7.noarch
May 23 16:09:09 Erased: redhat-access-insights-batch
May 23 18:10:05 Installed: perl-podlators-2.5.1-3.el7.noarch
May 23 18:10:05 Installed: perl-Pod-Perldoc-3.20-4.el7.noarch
May 23 18:10:05 Installed: 1:perl-Pod-Escapes-1.04-286.el7.noarch
May 23 18:10:06 Installed: perl-Text-ParseWords-3.29-4.el7.noarch

The information is stored as a list of Entry objects, each of which contains attributes for the position in the log, timestamp of the action, the package’s state in the system, and the affected package as an InstalledRpm.
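The line format can be sketched with a small standalone parser; the Entry namedtuple mirrors the attributes described above, while the InstalledRpm parsing of the package field is omitted:

```python
from collections import namedtuple

Entry = namedtuple('Entry', ['idx', 'timestamp', 'state', 'pkg'])


def parse_yum_log(lines):
    """Sketch: split each line into month, day, time, state, and package."""
    entries = []
    for idx, line in enumerate(lines):
        month, day, time, rest = line.split(None, 3)
        state, _, pkg = rest.partition(': ')
        entries.append(Entry(idx, ' '.join((month, day, time)), state, pkg))
    return entries
```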

Note

The examples in this module may be executed with the following command:

python -m insights.parsers.yumlog

Examples

>>> content = '''
... May 23 18:06:24 Installed: wget-1.14-10.el7_0.1.x86_64
... Jan 24 00:24:00 Updated: glibc-2.12-1.149.el6_6.4.x86_64
... Jan 24 00:24:09 Updated: glibc-devel-2.12-1.149.el6_6.4.x86_64
... Jan 24 00:24:10 Updated: nss-softokn-3.14.3-19.el6_6.x86_64
... Jan 24 18:10:05 Updated: 1:openssl-libs-1.0.1e-51.el7_2.5.x86_64
... Jan 24 00:24:11 Updated: glibc-2.12-1.149.el6_6.4.i686
... May 23 16:09:09 Erased: redhat-access-insights-batch
... Jan 24 00:24:11 Updated: glibc-devel-2.12-1.149.el6_6.4.i686
... '''.strip()
>>> from insights.tests import context_wrap
>>> yl = YumLog(context_wrap(content))
>>> e = yl.present_packages.get('nss-softokn')
>>> e.pkg.release
'19.el6_6'
>>> e = yl.present_packages.get('openssl-libs')
>>> e.pkg.name
'openssl-libs'
>>> e.pkg.version
'1.0.1e'
>>> len(yl)
8
>>> indices = [e.idx for e in yl]
>>> indices == list(range(len(yl)))
True
class insights.parsers.yumlog.Entry(idx, timestamp, state, pkg)

Bases: tuple

namedtuple: Represents a line in /var/log/yum.log.

property idx
property pkg
property state
property timestamp
class insights.parsers.yumlog.YumLog(context)[source]

Bases: insights.core.Parser

Class for parsing /var/log/yum.log

ERASED = 'Erased'

Package Erased

INSTALLED = 'Installed'

Package Installed

UPDATED = 'Updated'

Package Updated

parse_content(content)[source]

Parses contents of each line in /var/log/yum.log.

Each line in the file contains 5 fields that are parsed into the attributes of Entry instances:

  • month

  • day

  • time

  • state

  • package

The month, day, and time form the Entry.timestamp. Entry.state contains the state of the package, one of ERASED, INSTALLED, or UPDATED. Entry.pkg contains the InstalledRpm instance corresponding to the parsed package. Entry.idx is the zero-based line number of the Entry in the file. It can be used to determine the ordering of events.

Parameters

content (list) -- Lines of /var/log/yum.log to be parsed.

Raises

ParseException -- if a line can’t be parsed for any reason.

property present_packages

list of latest Entry instances for installed packages.

ZdumpV - command /usr/sbin/zdump -v /etc/localtime -c 2019,2039

The /usr/sbin/zdump -v /etc/localtime -c 2019,2039 command provides information about ‘Daylight Saving Time’ in file /etc/localtime from 2019 to 2039.

Sample content from command zdump -v /etc/localtime -c 2019,2039 is:

/etc/localtime  Sun Mar 10 06:59:59 2019 UTC = Sun Mar 10 01:59:59 2019 EST isdst=0 gmtoff=-18000
/etc/localtime  Sun Mar 10 07:00:00 2019 UTC = Sun Mar 10 03:00:00 2019 EDT isdst=1 gmtoff=-14400
/etc/localtime  Sun Nov  7 05:59:59 2038 UTC = Sun Nov  7 01:59:59 2038 EDT isdst=1 gmtoff=-14400
/etc/localtime  Sun Nov  7 06:00:00 2038 UTC = Sun Nov  7 01:00:00 2038 EST isdst=0 gmtoff=-18000
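One zdump -v line can be decomposed into these fields with a standalone sketch; both timestamps are parsed without their zone abbreviations, which matches the warning below that local_time carries no time zone information:

```python
from datetime import datetime

FMT = '%a %b %d %H:%M:%S %Y'


def parse_zdump_line(line):
    """Sketch: decompose one `zdump -v` line into its component fields.
    Both times are parsed without zone abbreviations (naive datetimes)."""
    body = line.split(None, 1)[1]          # drop the leading file name
    utc_raw, local_part = body.split(' = ')
    fields = local_part.split()
    return {
        'utc_time_raw': utc_raw,
        'utc_time': datetime.strptime(' '.join(utc_raw.split()[:5]), FMT),
        'local_time_raw': ' '.join(fields[:6]),
        'local_time': datetime.strptime(' '.join(fields[:5]), FMT),
        'isdst': int(fields[6].split('=')[1]),
        'gmtoff': int(fields[7].split('=')[1]),
    }
```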

Examples

>>> dst = zdump[0]
>>> dst.get('utc_time')
datetime.datetime(2019, 3, 10, 6, 59, 59)
>>> dst.get('utc_time_raw')
'Sun Mar 10 06:59:59 2019 UTC'
>>> dst.get('local_time')
datetime.datetime(2019, 3, 10, 1, 59, 59)
>>> dst.get('local_time_raw')
'Sun Mar 10 01:59:59 2019 EST'
>>> dst.get('isdst')
0
>>> dst.get('gmtoff')
-18000
class insights.parsers.zdump_v.ZdumpV(context, extra_bad_lines=[])[source]

Bases: insights.core.CommandParser, list

Parse the output from the /usr/sbin/zdump -v /etc/localtime -c 2019,2039 command and store the ‘Daylight Saving Time’ information into a list.

Raises

SkipException -- When nothing is parsed.

Warning

The value in the local_time key doesn’t include time zone information.

parse_content(content)[source]

This method must be implemented by classes based on this class.

insights.parsers.zdump_v.str2datetime(timestamp, tz=False)[source]

This function translates the time stamp into a datetime object.

Parameters
  • timestamp (str) -- the time stamp from command zdump -v

  • tz (bool) -- True if it’s UTC TimeZone.

Returns

  • time (datetime) -- the datetime object for the time stamp

  • time_string (str) -- the formatted time stamp

ZiplConf - configuration file for zipl

A parser file for parsing and extracting data from /etc/zipl.conf file.

Sample input:

[defaultboot]
defaultauto
prompt=1
timeout=5
default=linux
target=/boot
[linux]
    image=/boot/vmlinuz-3.10.0-693.el7.s390x
    ramdisk=/boot/initramfs-3.10.0-693.el7.s390x.img
    parameters="root=/dev/mapper/rhel_gss5-root crashkernel=auto rd.dasd=0.0.0100 rd.dasd=0.0.0101 rd.dasd=0.0.0102 rd.lvm.lv=rhel_gss5/root rd.lvm.lv=rhel_gss5/swap net.ifnames=0 rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=0,portname=gss5,portno=0 LANG=en_US.UTF-8"
[linux-0-rescue-a27932c8d57248e390cee3798bbd3709]
    image=/boot/vmlinuz-0-rescue-a27932c8d57248e390cee3798bbd3709
    ramdisk=/boot/initramfs-0-rescue-a27932c8d57248e390cee3798bbd3709.img
    parameters="root=/dev/mapper/rhel_gss5-root crashkernel=auto rd.dasd=0.0.0100 rd.dasd=0.0.0101 rd.dasd=0.0.0102 rd.lvm.lv=rhel_gss5/root rd.lvm.lv=rhel_gss5/swap net.ifnames=0 rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=0,portname=gss5,portno=0"
# Configuration for dumping to SCSI disk
# Separate IPL and dump partitions
[dumpscsi]
target=/boot
dumptofs=/dev/sda2
parameters="dump_dir=/mydumps dump_compress=none dump_mode=auto"
# Menu containing two DASD boot configurations
:menu1
1=linux
2=linux-0-rescue-a27932c8d57248e390cee3798bbd3709
default=1
prompt=1
timeout=30

This module contains one parser:

ZiplConf - file /etc/zipl.conf

Examples

>>> zipl_info['linux']['image']
'/boot/vmlinuz-3.10.0-693.el7.s390x'
>>> zipl_info.images
{'linux':'/boot/vmlinuz-3.10.0-693.el7.s390x','linux-0-rescue-a27932c8d57248e390cee3798bbd3709':'/boot/vmlinuz-0-rescue-a27932c8d57248e390cee3798bbd3709'}
>>> zipl_info.dumptofses
{'dumpscsi':'/dev/sda2'}
>>> zipl_info[':menu1']['1']
'linux'
>>> 'defaultauto' in zipl_info['global']
True
>>> zipl_info['global']['defaultauto']
None
class insights.parsers.zipl_conf.ZiplConf(*args, **kwargs)[source]

Bases: insights.core.LegacyItemAccess, insights.core.Parser

The zipl.conf file basically contains key-value pairs or single commands, one per line. Section names are enclosed in ‘[]’ and menu names start with ‘:’.

Raises

ParseException -- when the first active line is not a section

property dumptofses

Get all dumptofs items referenced in zipl configuration file

Returns

Returns a dict of the section and dumptofs names referenced in the zipl configuration file

Return type

(dict)

property images

Get all image items referenced in zipl configuration file

Returns

Returns a dict of the section and image names referenced in the zipl configuration file

Return type

(dict)

parse_content(content)[source]

This method must be implemented by classes based on this class.

Shared Combiners Catalog

Contents:

CephOsdTree

This combiner provides information about the Ceph OSD tree. It uses the results of the CephOsdTree, CephInsights and CephOsdTreeText parsers. The order from most preferred to least preferred is CephOsdTree, CephInsights, CephOsdTreeText.

Examples

>>> type(cot)
<class 'insights.combiners.ceph_osd_tree.CephOsdTree'>
>>> cot['nodes'][0]['children']
[-7, -3, -5, -9]
class insights.combiners.ceph_osd_tree.CephOsdTree(cot, ci, cott)[source]

Bases: insights.core.LegacyItemAccess

This combiner provides information about the Ceph OSD tree. It uses the results of the CephOsdTree, CephInsights and CephOsdTreeText parsers. The order from most preferred to least preferred is CephOsdTree, CephInsights, CephOsdTreeText.

Ceph Version

Combiner for Ceph Version information. It uses the results of the CephVersion, CephInsights and CephReport parsers. The order from most preferred to least preferred is CephVersion, CephInsights, CephReport.

Examples

>>> type(cv)
<class 'insights.combiners.ceph_version.CephVersion'>
>>> cv.version
'3.2'
>>> cv.major
'3'
>>> cv.minor
'2'
>>> cv.downstream_release
'0'
>>> cv.upstream_version["release"]
12
>>> cv.upstream_version["major"]
2
>>> cv.upstream_version["minor"]
8
class insights.combiners.ceph_version.CephVersion(cv, ci, cr)[source]

Bases: object

Combiner for Ceph Version information. It uses the results of the CephVersion, CephInsights and CephReport parsers. The order from most preferred to least preferred is CephVersion, CephInsights, CephReport.

Cloud Provider

Combiner for Cloud information. It uses the results of the multiple parsers:

  • InstalledRpms,

  • YumRepoList and

  • DMIDecode parsers

The combiner uses these parsers to determine the Cloud Provider based on a set of criteria that is unique to each cloud provider.
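As an illustration only, a much-simplified detector can key off the indicator values that appear in the examples below; the real combiner's criteria are broader and more robust, and this sketch is not its implementation:

```python
def detect_cloud_provider(bios_version='', rpms=(), yum_repos=(),
                          asset_tag='', manufacturer=''):
    """Illustrative sketch: map a few indicator values to a provider name.
    The specific strings below are taken from the examples in this section."""
    if 'amazon' in bios_version or any(r.startswith('rh-amazon-rhui-client') for r in rpms):
        return 'aws'
    if asset_tag == '7783-7084-3265-9085-8269-3286-77' or \
            any(r.startswith('rhui-microsoft-azure') for r in yum_repos):
        return 'azure'
    if manufacturer == 'Alibaba Cloud':
        return 'alibaba'
    if 'Google' in bios_version:
        return 'google'
    return None
```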

Examples

>>> cp_aws.cloud_provider
'aws'
>>> cp_aws.cp_bios_version == {'aws': '4.2.amazon', 'google': '', 'azure': '', 'alibaba': ''}
True
>>> cp_aws.cp_rpms == {'aws': ['rh-amazon-rhui-client-2.2.124-1.el7'], 'google': [], 'azure': [], 'alibaba': []}
True
>>> cp_aws.cp_uuid['aws']
'EC2F58AF-2DAD-C57E-88C0-A81CB6084290'
>>> cp_azure.cloud_provider
'azure'
>>> cp_azure.cp_yum == {'aws': [], 'google': [], 'azure': ['rhui-microsoft-azure-rhel7-2.2-74'], 'alibaba': []}
True
>>> cp_azure.cp_asset_tag['azure']
'7783-7084-3265-9085-8269-3286-77'
>>> cp_alibaba.cloud_provider
'alibaba'
>>> cp_alibaba.cp_manufacturer == {'aws': '', 'google': '', 'azure': '', 'alibaba': 'Alibaba Cloud'}
True
class insights.combiners.cloud_provider.CloudProvider(rpms, dmidcd, yrl)[source]

Bases: object

Combiner class to provide cloud vendor facts

cp_bios_vendor

Dictionary containing, for each provider, the BIOS vendor value used to determine the cloud provider. A provider’s value is empty if none was found.

Type

dict

cp_bios_version

Dictionary containing, for each provider, the BIOS version value used to determine the cloud provider. A provider’s value is empty if none was found.

Type

dict

cp_rpms

Dictionary containing, for each provider, a list of the RPM information used to determine the cloud provider. A provider’s list is empty if no matches were found.

Type

dict

cp_yum

Dictionary containing, for each provider, a list of the yum repo information used to determine the cloud provider. A provider’s list is empty if no matches were found.

Type

dict

cp_asset_tag

Dictionary containing, for each provider, the asset tag value used to determine the cloud provider. A provider’s value is empty if no matches were found.

Type

dict

cp_uuid

Dictionary containing, for each provider, the UUID value used to determine the cloud provider. A provider’s value is empty if no matches were found.

Type

dict

cp_manufacturer

Dictionary containing, for each provider, the system manufacturer information used to determine the cloud provider. A provider’s value is empty if no matches were found.

Type

dict

cloud_provider

String representing the cloud provider that was detected. If none are detected then it will have the default value None.

Type

str

ALIBABA = 'alibaba'

Alibaba Cloud Provider Constant

AWS = 'aws'

AWS Cloud Provider Constant

AZURE = 'azure'

AZURE Cloud Provider Constant

GOOGLE = 'google'

GOOGLE Cloud Provider Constant

CpuVulnsAll - combiner for CPU vulnerabilities

This combiner provides an interface to CPU vulnerabilities parsers for cpu vulnerabilities

class insights.combiners.cpu_vulns_all.CpuVulnsAll(cpu_vulns)[source]

Bases: dict

Class to encapsulate the cpu_vulns parsers. The file information is stored as a dictionary: each key is a vulnerability file name and each value is the content of that file.

Sample output for files:
/sys/devices/system/cpu/vulnerabilities/spectre_v1:

Mitigation: Load fences

/sys/devices/system/cpu/vulnerabilities/meltdown:

Mitigation: PTI

Examples

>>> type(cvb)
<class 'insights.combiners.cpu_vulns_all.CpuVulnsAll'>
>>> list(cvb.keys())
['meltdown', 'spectre_v1']
>>> cvb['meltdown']
'Mitigation: PTI'
Raises

SkipComponent -- When no data is available

Dmesg

Combiner for Dmesg information. It uses the results of the following parsers (if they are present): insights.parsers.dmesg.DmesgLineList, insights.parsers.dmesg_log.DmesgLog

Typical output of the /var/log/dmesg file is:

[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.10.0-862.el7.x86_64 (mockbuild@x86-034.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC) ) #1 SMP Wed Mar 21 18:14:51 EDT 2018
[    2.090905] SELinux:  Completing initialization.
[    2.090907] SELinux:  Setting up existing superblocks.
[    2.099684] systemd[1]: Successfully loaded SELinux policy in 82.788ms.
[    2.117410] ip_tables: (C) 2000-2006 Netfilter Core Team
[    2.117429] systemd[1]: Inserted module 'ip_tables'
[    2.376551] systemd-journald[441]: Received request to flush runtime journal from PID 1
[    2.716874] cryptd: max_cpu_qlen set to 100
[    2.804152] AES CTR mode by8 optimization enabled

Typical output of the dmesg command is:

[    2.939498] [TTM] Initializing pool allocator
[    2.939502] [TTM] Initializing DMA pool allocator
[    2.940800] [drm] fb mappable at 0xFC000000
[    2.940947] fbcon: cirrusdrmfb (fb0) is primary device
[    2.957375] Console: switching to colour frame buffer device 128x48
[    2.959322] cirrus 0000:00:02.0: fb0: cirrusdrmfb frame buffer device
[    2.959334] [drm] Initialized cirrus 1.0.0 20110418 for 0000:00:02.0 on minor 0
[    3.062459] XFS (vda1): Ending clean mount
[    5.048484] ip6_tables: (C) 2000-2006 Netfilter Core Team
[    5.102434] Ebtables v2.0 registered

Examples

>>> dmesg.dmesg_cmd_available
True
>>> dmesg.dmesg_log_available
True
>>> dmesg.dmesg_log_wrapped
False
class insights.combiners.dmesg.Dmesg(dmesg_cmd, dmesg_log)[source]

Bases: object

Combiner for dmesg command and /var/log/dmesg file.

GrubConf - The valid GRUB configuration

Combiner for Red Hat Grub v1 and Grub v2 information.

This combiner uses the parsers: insights.parsers.grub_conf.Grub1Config, insights.parsers.grub_conf.Grub1EFIConfig, insights.parsers.grub_conf.Grub2Config, insights.parsers.grub_conf.Grub2EFIConfig, and insights.parsers.grub_conf.BootLoaderEntries.

It determines which parser was used by checking one of the following parsers/combiners: insights.parsers.installed_rpms.InstalledRpms, insights.parsers.cmdline.CmdLine, insights.parsers.ls_sys_firmware.LsSysFirmware, and insights.combiners.redhat_release.RedHatRelease.

class insights.combiners.grub_conf.BootLoaderEntries(grub_bles, sys_firmware)[source]

Bases: object

Combine all insights.parsers.grub_conf.BootLoaderEntries parsers into one Combiner

Raises

SkipComponent -- when no BootLoaderEntries parsers are available.

version

The version of the GRUB configuration, 1 or 2

Type

int

is_efi

True if the host boots with EFI

Type

bool

entries

List of all boot entries in GRUB configuration.

Type

list

boot_entries

List of all boot entries which only contains the name and cmdline.

Type

list

kernel_initrds

Dict of the kernel and initrd files referenced in GRUB configuration files

Type

dict

is_kdump_iommu_enabled

True if any kernel entry contains “intel_iommu=on”

Type

bool

class insights.combiners.grub_conf.GrubConf(grub1, grub2, grub1_efi, grub2_efi, grub_bles, rpms, cmdline, sys_firmware, rh_rel)[source]

Bases: object

Process Grub configuration v1 or v2 based on which type is passed in.

Examples

>>> type(grub_conf)
<class 'insights.combiners.grub_conf.GrubConf'>
>>> grub_conf.version
2
>>> grub_conf.is_efi
False
>>> grub_conf.kernel_initrds
{'grub_initrds': ['/initramfs-3.10.0-327.36.3.el7.x86_64.img'],
 'grub_kernels': ['/vmlinuz-3.10.0-327.36.3.el7.x86_64']}
>>> grub_conf.is_kdump_iommu_enabled
False
>>> grub_conf.get_grub_cmdlines('')
[]
Raises

Exception -- when no valid grub configuration can be found.

version

returns 1 or 2, version of the GRUB configuration

Type

int

is_efi

returns True if the host is booted with EFI

Type

bool

kernel_initrds

returns a dict of the kernel and initrd files referenced in GRUB configuration files

Type

dict

is_kdump_iommu_enabled

returns True if any kernel entry contains “intel_iommu=on”

Type

bool

get_grub_cmdlines(search_text=None)[source]

Get the boot entries whose cmdline contains the search_text; returns all the boot entries by default.

Parameters

search_text (str) -- keyword to find in the cmdline, being set to None by default.

Returns

A list of insights.parsers.grub_conf.BootEntry objects for each boot entry in which the cmdline contains the search_text. When search_text is None, returns the objects of all of the boot entries.
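
The filtering behaviour of get_grub_cmdlines can be sketched in plain Python. The boot-entry names and cmdlines below are invented examples, and real entries are insights.parsers.grub_conf.BootEntry objects rather than plain tuples:

```python
# Sketch of the get_grub_cmdlines() search, assuming each boot entry is
# a (name, cmdline) pair.  The entries below are invented examples.
def get_grub_cmdlines(boot_entries, search_text=None):
    if search_text is None:
        # Default: return every boot entry.
        return list(boot_entries)
    # Keep only the entries whose cmdline contains the search text.
    return [entry for entry in boot_entries if search_text in entry[1]]

entries = [
    ("Red Hat Enterprise Linux", "ro crashkernel=auto rhgb quiet"),
    ("Rescue", "ro rd.lvm.lv=rhel/root intel_iommu=on"),
]

print(get_grub_cmdlines(entries, "intel_iommu=on"))  # only the Rescue entry
print(len(get_grub_cmdlines(entries)))               # 2
```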

Hostname

Combiner for hostname information. It uses the results of all the Hostname parsers, Facter and the SystemID parser to get the fqdn, hostname and domain information.
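
The precedence (prefer the hostname parsers, then Facter, then SystemID) amounts to a simple fallback chain. This sketch uses plain strings in place of the real parser results; the values are invented:

```python
# Fallback sketch of the hostname precedence: hostname parsers first,
# then Facter, then SystemID.  The values here are invented examples.
def resolve_fqdn(hostname=None, facter=None, systemid=None):
    for source in (hostname, facter, systemid):
        if source:
            return source
    raise Exception("No hostname found in any of the source parsers")

fqdn = resolve_fqdn(facter="rhel7.example.com")
hostname, _, domain = fqdn.partition(".")
print(hostname, domain)  # rhel7 example.com
```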

class insights.combiners.hostname.Hostname(hf, hd, hs, ft, sid)[source]

Bases: object

Check hostname, facter and systemid to get the fqdn, hostname and domain.

Prefer hostname to facter and systemid.

Examples

>>> type(hostname)
<class 'insights.combiners.hostname.Hostname'>
>>> hostname.fqdn
'rhel7.example.com'
>>> hostname.hostname
'rhel7'
>>> hostname.domain
'example.com'
Raises

Exception -- If no hostname can be found in any of the source parsers.

insights.combiners.hostname.hostname(hf, hd, hs, ft, sid)[source]

Warning

This combiner method is deprecated, please use insights.combiners.hostname.Hostname instead.

Check hostname, facter and systemid to get the fqdn, hostname and domain.

Prefer hostname to facter and systemid.

Examples

>>> hn.fqdn
'rhel7.example.com'
>>> hn.hostname
'rhel7'
>>> hn.domain
'example.com'
Returns

A class with fqdn, hostname and domain attributes.

Return type

insights.combiners.hostname.Hostname

Raises

Exception -- If no hostname can be found in any of the source parsers.

Combiner for httpd configurations

Combiner for parsing part of httpd configurations. It collects all HttpdConf generated from each configuration file and combines them to expose a consolidated configuration tree.

Note

At this point in time, you should NOT filter the httpd configurations to avoid finding directives in incorrect sections.

class insights.combiners.httpd_conf.HttpdConfAll(httpd_conf)[source]

Bases: object

Warning

This combiner class is deprecated, please use insights.combiners.httpd_conf.HttpdConfTree instead.

A combiner for parsing all httpd configurations. It parses all sources and makes a composition to store actual loaded values of the settings as well as information about parsed configuration files and raw values.

Note

ParsedData is a named tuple with the following properties:
  • value - the value of the option.

  • line - the complete line as found in the config file.

  • section - the section type that the option belongs to.

  • section_name - the section name that the option belongs to.

  • file_name - the config file name.

  • file_path - the complete config file path.

ConfigData is a named tuple with the following properties:
  • file_name - the config file name.

  • file_path - the complete config file path.

  • data_dict - original data dictionary from parser.

data

Dictionary of parsed settings in the format {option: [ParsedData, ParsedData]}. It stores a list of parsed values; usually only the last value is needed, except in situations where directives that support selective overriding, such as UserDir, are used.

Type

dict

config_data

List of parsed config files as ConfigData named tuples.

Type

list

class ConfigData(file_name, file_path, full_data_dict)

Bases: tuple

property file_name
property file_path
property full_data_dict
get_active_setting(directive, section=None)[source]

Returns the parsed data of the specified directive as a list of named tuples.

Parameters
  • directive (str) -- The directive to look for

  • section (str or tuple) --

    The section the directive belongs to

    • str: The section type, e.g. “IfModule”

    • tuple(section, section_name): e.g. (“IfModule”, “prefork”)

    Note:

    section_name can be omitted, or can be a part of the actual name.

Returns

When section is not None, returns the list of ParsedData named tuples, in the order in which they were parsed. If the directive or section does not exist, returns an empty list.

When section is None, returns the ParsedData named tuple of the directive directly. If the directive or section does not exist, returns None.

Return type

(list or named tuple ParsedData)

get_section_list(section)[source]

Returns the specified sections.

Parameters

section (str) -- The section to look for, e.g. “Directory”

Returns

List of tuples, each tuple has three elements - the first being a tuple of the section and section name, the second being the file name of the file where that section resides, the third being the full file path of the file. Therefore, the result looks like this: [((‘VirtualHost’, ‘192.0.2.1’), ‘00-z.conf’, ‘/etc/httpd/conf.d/00-z.conf’)]

If section does not exist, returns empty list.

Return type

(list of tuple)

get_setting_list(directive, section=None)[source]

Returns the parsed data of the specified directive as a list

Parameters
  • directive (str) -- The directive to look for

  • section (str or tuple) --

    The section the directive belongs to

    • str: The section type, e.g. “IfModule”

    • tuple(section, section_name): e.g. (“IfModule”, “prefork”)

    Note:

    section_name can be omitted, or can be a part of the actual name.

Returns

When section is not None, returns the list of dicts that wrap the section and the directive’s ParsedData named tuples, in the order in which they were parsed.

When section is None, returns the list of ParsedData named tuples, in the order in which they were parsed.

If the directive or section does not exist, returns an empty list.

Return type

(list of dict or named tuple ParsedData)

class insights.combiners.httpd_conf.HttpdConfSclHttpd24Tree(confs)[source]

Bases: insights.core.ConfigCombiner

Exposes the httpd configuration from the httpd24 Software Collection through the parsr query interface. Correctly handles all include directives.

See the insights.core.ConfigComponent class for example usage.

class insights.combiners.httpd_conf.HttpdConfSclJbcsHttpd24Tree(confs)[source]

Bases: insights.core.ConfigCombiner

Exposes the httpd configuration from the jbcs-httpd24 Software Collection through the parsr query interface. Correctly handles all include directives.

See the insights.core.ConfigComponent class for example usage.

class insights.combiners.httpd_conf.HttpdConfTree(confs)[source]

Bases: insights.core.ConfigCombiner

Exposes httpd configuration through the parsr query interface. Correctly handles all include directives.

See the insights.core.ConfigComponent class for example usage.

insights.combiners.httpd_conf.get_tree(root=None)[source]

This is a helper function to get an httpd configuration component for your local machine or an archive. Use it in interactive sessions.

insights.combiners.httpd_conf.in_network(val)

Predicate to check if an ip address is in a given network.

Example

conf[“VirtualHost”, in_network(“128.39.0.0/16”)]

insights.combiners.httpd_conf.is_private = <insights.parsr.query.boolean.Predicate object>

Predicate to check if an ip address is private.

Example

conf[“VirtualHost”, is_private]

insights.combiners.httpd_conf.parse_doc(content, ctx=None)[source]

Parse a configuration document into a tree that can be queried.

IPCS Semaphores

Combiner for parsing all semaphores. It uses the results of the IpcsS and IpcsSI parsers to collect complete semaphore information, and the PsAuxcww parser to determine whether a semaphore is an orphan.

class insights.combiners.ipcs_semaphores.IpcsSemaphore(data)[source]

Bases: object

Class for holding information about one semaphore.

is_orphan = None

Is it an orphan semaphore?

Type

bool

owner = None

Owner of the semaphore.

Type

str

pid_list = None

List of the related PID.

Type

list

semid = None

Semaphore ID.

Type

str

class insights.combiners.ipcs_semaphores.IpcsSemaphores(sem_s, sem_si, ps)[source]

Bases: object

Class for parsing all semaphores. Generates an IpcsSemaphore object for each semaphore.

Below is the logic used to determine whether a semaphore is an orphan:

- PID 0 is not included in its related PID list
- None of the related PIDs can be found among the running PIDs
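
The orphan test can be sketched as follows; the PID values are invented, and the real combiner reads them from the IpcsSI and PsAuxcww parsers:

```python
# Sketch of the orphan test: a semaphore is an orphan when PID 0 is not
# among its related PIDs and none of the related PIDs is running.
def is_orphan(related_pids, running_pids):
    if "0" in related_pids:
        return False
    return not any(pid in running_pids for pid in related_pids)

running = {"1", "1833", "23566"}
print(is_orphan(["1833"], running))  # False - a related PID is running
print(is_orphan(["4242"], running))  # True - no related PID is alive
print(is_orphan(["0"], running))     # False - PID 0 is present
```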

Examples

>>> oph_sem.count_of_all_sems()
5
>>> oph_sem.count_of_all_sems(owner='apache')
4
>>> oph_sem.count_of_orphan_sems()
2
>>> oph_sem.count_of_orphan_sems('apache')
2
>>> oph_sem.get_sem('65536').is_orphan
False
count_of_all_sems(owner=None)[source]

Return the count of all semaphores by default; when owner is provided, return the count of semaphores belonging to owner.

Parameters

owner (str) -- Owner of semaphores.

Returns

the count of semaphores.

Return type

(int)

count_of_orphan_sems(owner=None)[source]

Return the count of orphan semaphores by default; when owner is provided, return the count of orphan semaphores belonging to owner.

Parameters

owner (str) -- Owner of semaphores.

Returns

the count of orphan semaphores

Return type

(int)

get_sem(semid)[source]

Return the IpcsSemaphore instance whose semaphore ID matches semid.

Returns

the instance of IpcsSemaphore

Return type

(IpcsSemaphore)

orphan_sems(owner=None)[source]

Return all the orphan semaphores by default; when owner is provided, return the orphan semaphores belonging to owner.

Parameters

owner (str) -- Owner of semaphores.

Returns

the ID list of orphan semaphores

Return type

(list)

IPCS Shared Memory Segments

Combiner for parsing the shared memory segments reported by the ipcs command. It uses the results of the IpcsM and IpcsMP parsers to get the shared memory size for a specific PID.

class insights.combiners.ipcs_shared_memory.IpcsSharedMemory(shm, shmp)[source]

Bases: insights.core.LegacyItemAccess

Class for parsing the shared memory segments output by the commands ipcs -m and ipcs -m -p.

Typical output of command ipcs -m is:

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x0052e2c1 0          postgres   600        37879808   26
0x0052e2c2 1          postgres   600        41222144   24

Typical output of command ipcs -m -p is:

------ Shared Memory Creator/Last-op --------
shmid      owner      cpid       lpid
0          postgres   1833       23566
1          postgres   1105       9882
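
The combiner effectively joins the two tables above on shmid. A sketch using the sample values (the join key and column names follow the sample output; everything else is simplified):

```python
# Join sketch: segment sizes from `ipcs -m` and creator PIDs from
# `ipcs -m -p`, keyed by shmid, using the sample values above.
sizes = {"0": 37879808, "1": 41222144}   # shmid -> bytes
creators = {"0": "1833", "1": "1105"}    # shmid -> cpid

def get_shm_size_of_pid(pid):
    # Sum the sizes of all segments created by this PID; 0 when none.
    return sum(sizes[shmid] for shmid, cpid in creators.items() if cpid == pid)

print(get_shm_size_of_pid("1105"))  # 41222144
print(get_shm_size_of_pid("9999"))  # 0
```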

Examples

>>> type(ism)
<class 'insights.combiners.ipcs_shared_memory.IpcsSharedMemory'>
>>> ism.get_shm_size_of_pid('1105')
41222144
get_shm_size_of_pid(pid)[source]

Return the shared memory size of the specified pid.

Returns

size of the shared memory, 0 by default.

Return type

(int)

IPv6 - Check whether IPv6 is disabled

This combiner reports whether the user has disabled IPv6 via one of the many means available to do so. At present, only whether IPv6 is disabled on the running system is reported; it provides no information regarding whether it would continue to be after a reboot.

Per https://access.redhat.com/solutions/8709 , IPv6 may be disabled by

RHEL 7:
  • ipv6.disable=1 Kernel command line argument

  • disable_ipv6 option in sysctl

RHEL 6:
  • option ipv6 disable=1 in modprobe.d

  • install ipv6 /bin/true (fake install) in modprobe.d

  • disable_ipv6 option in sysctl

While they aren’t tested explicitly, there are some means by which you can attempt to disable IPv6 that are ineffective, such as setting blacklist ipv6 in modprobe.d; those methods will yield no result from this combiner.

The only requirement of this combiner is Uname, but accurate detection relies on information from combiners marked optional. If, for example, it’s run against a RHEL6 system without information from ModProbe, it will miss any of those disabling options and possibly return a false negative. For that reason, this combiner shouldn’t be relied on to state definitively that IPv6 is enabled or disabled.
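
For the RHEL 7 checks listed above, the detection logic can be sketched like this. The input strings are invented, and the real combiner works from the CmdLine and Sysctl parser results rather than raw text:

```python
# Sketch of two of the RHEL 7 detection methods: the kernel command
# line argument and the sysctl setting.  Inputs are invented examples.
def ipv6_disabled_by(cmdline, sysctl):
    found = set()
    if "ipv6.disable=1" in cmdline.split():
        found.add("cmdline")
    for line in sysctl.splitlines():
        key, _, value = line.partition("=")
        if "disable_ipv6" in key and value.strip() == "1":
            found.add("sysctl")
    return found

print(ipv6_disabled_by("ro quiet ipv6.disable=1",
                       "net.ipv6.conf.all.disable_ipv6 = 0"))  # {'cmdline'}
```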

Examples

>>> from insights.tests import context_wrap
>>> from insights.parsers.uname import Uname
>>> from insights.parsers.sysctl import Sysctl
>>> from insights.combiners.ipv6 import IPv6
>>> my_uname = '''
...  Linux localhost.localdomain 3.10.0-514.10.2.el7.x86_64 #1 SMP Mon Feb 20 02:37:52 EST 2017 x86_64 x86_64 x86_64 GNU/Linux
... '''.strip()
>>> my_sysctl = '''
... net.ipv6.conf.all.autoconf = 1
... net.ipv6.conf.all.dad_transmits = 1
... net.ipv6.conf.all.disable_ipv6 = 0
... net.ipv6.conf.all.force_mld_version = 0
... net.ipv6.conf.all.force_tllao = 0
... net.ipv6.conf.all.forwarding = 0
... '''.strip()
>>> shared = {Uname: Uname(context_wrap(my_uname)), Sysctl: Sysctl(context_wrap(my_sysctl))}
>>> my_ipv6 = IPv6({},shared)
>>> my_ipv6.disabled()
False
>>> my_ipv6.disabled_by()
set([])
>>> my_sysctl = '''
... net.ipv6.conf.all.autoconf = 1
... net.ipv6.conf.all.dad_transmits = 1
... net.ipv6.conf.all.disable_ipv6 = 1
... net.ipv6.conf.all.force_mld_version = 0
... net.ipv6.conf.all.force_tllao = 0
... net.ipv6.conf.all.forwarding = 0
... '''.strip()
>>> shared[Sysctl] = Sysctl(context_wrap(my_sysctl))
>>> my_ipv6 = IPv6({},shared)
>>> my_ipv6.disabled()
True
>>> my_ipv6.disabled_by()
set(['sysctl'])
class insights.combiners.ipv6.IPv6(uname, mod_probe, lsmod, cmdline, sysctl)[source]

Bases: object

A combiner which detects disabled IPv6 networking.

disabled()[source]

Determine whether IPv6 has been disabled on this system.

Returns

True if a configuration that disables IPv6 was found.

Return type

bool

disabled_by()[source]

Get the means by which IPv6 was disabled on this system.

Returns

Zero or more of cmdline, modprobe_disable, fake_install, or sysctl, depending on which methods to disable IPv6 have been found.

Return type

set

Journald configuration

Combiner for parsing journald configuration. man journald.conf describes where the various journald config files can reside and how they take precedence over one another. The combiner implements this logic and provides an interface for querying active settings.

The journald.conf file is a key=value file with hash comments.

The parsers this combiner uses process only active settings (lines that are not commented out). The resulting settings (after being processed by the precedence evaluation algorithm) are then provided by the get_active_settings_value method and active_settings dictionary and by the get_active_setting_value_and_file_name method and active_settings_with_file_name dictionary.

Options that are commented out are not returned - a rule using this parser has to be aware of which default value is assumed by systemd if the particular option is not specified.

Priority from lowest to highest:

  • built-in defaults (the same as the default commented entries in /etc/systemd/journald.conf)

  • /etc/systemd/journald.conf

  • *.conf files in the configuration directories, in lexicographic order from lowest to highest

  • if two *.conf files with the same name exist in both /usr/lib and /etc, the file in /etc wholly overwrites the file in /usr/lib

from man journald.conf in RHEL 7.3:

CONFIGURATION DIRECTORIES AND PRECEDENCE

Default configuration is defined during compilation, so a configuration file is only needed when it is necessary to deviate from those defaults. By default the configuration file in /etc/systemd/ contains commented out entries showing the defaults as a guide to the administrator. This file can be edited to create local overrides.

When packages need to customize the configuration, they can install configuration snippets in /usr/lib/systemd/*.conf.d/. Files in /etc/ are reserved for the local administrator, who may use this logic to override the configuration files installed by vendor packages. The main configuration file is read before any of the configuration directories, and has the lowest precedence; entries in a file in any configuration directory override entries in the single configuration file. Files in the *.conf.d/ configuration subdirectories are sorted by their filename in lexicographic order, regardless of which of the subdirectories they reside in. If multiple files specify the same option, the entry in the file with the lexicographically latest name takes precedence. It is recommended to prefix all filenames in those subdirectories with a two-digit number and a dash, to simplify the ordering of the files.

To disable a configuration file supplied by the vendor, the recommended way is to place a symlink to /dev/null in the configuration directory in /etc/, with the same filename as the vendor configuration file.

Examples

>>> conf = shared[JournaldConfAll]
>>> conf.get_active_setting_value('Storage')
'auto'
>>> 'Storage' in conf.active_settings_with_file_name
True
>>> conf.get_active_setting_value_and_file_name('Storage')
('auto', '/etc/systemd/journald.conf')
class insights.combiners.journald_conf.JournaldConfAll(journal_conf, journal_conf_d, usr_journal_conf_d)[source]

Bases: object

Combiner for accessing files from the parsers EtcJournaldConf, EtcJournaldConfD, UsrJournaldConfD and evaluating effective active settings based on the rules of file priority and file shadowing as described in man journald.conf.

Can be later refactored to a combiner for parsing all configuration files with key=option lines, like journald files.

Rules of evaluation:

  • Files from EtcJournaldConfD wholly shadow/overwrite files from UsrJournaldConfD with identical names.

  • Files ordered by name from lowest priority to highest (a.conf has lower priority than b.conf).

  • Option values overwritten by the file with the highest priority.

  • The one central file has either the lowest priority or the highest priority, based on the central_file_lowest_prio argument.

That is:

  • An entire file in UsrJournaldConfD is overwritten by a same-named file from EtcJournaldConfD.

  • A single option value is overwritten when another file with a higher priority has an option with the same option name.

Example of file precedence:

/etc/systemd/journald.conf:
    key0=value0
    key1=value1

/usr/lib/systemd/journald.conf.d/a.conf:
    key2=value2
    key3=value3
    key4=value4
    key1=value5

/usr/lib/systemd/journald.conf.d/b.conf:
    key5=value6
    key6=value7
    key1=value8
    key2=value9
    key4=value10

/usr/lib/systemd/journald.conf.d/c.conf:
    key7=value11
    key5=value12
    key1=value13

/etc/systemd/journald.conf.d/b.conf:
    key1=value14
    key5=value15

the resulting configuration:
    key0=value0
    key1=value13 # c.conf has highest priority
    key2=value2 # b.conf from /usr is shadowed by b.conf from /etc so value from a.conf is used
    key3=value3
    key4=value4 # b.conf from /usr is shadowed by b.conf from /etc so value from a.conf is used
    key5=value12 # c.conf has higher priority than b.conf
    # key6 doesn't exist because b.conf from /usr is shadowed by b.conf from /etc
    key7=value11
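
The precedence rules behind this worked example can be sketched directly with plain dicts standing in for file contents (an illustration only, not the combiner's real data structures):

```python
# Sketch of the journald precedence: /etc drop-ins wholly shadow
# same-named /usr/lib drop-ins, drop-ins merge in lexicographic order,
# and the central file has the lowest priority.
usr_conf_d = {
    "a.conf": {"key2": "value2", "key3": "value3",
               "key4": "value4", "key1": "value5"},
    "b.conf": {"key5": "value6", "key6": "value7", "key1": "value8",
               "key2": "value9", "key4": "value10"},
    "c.conf": {"key7": "value11", "key5": "value12", "key1": "value13"},
}
etc_conf_d = {"b.conf": {"key1": "value14", "key5": "value15"}}
central = {"key0": "value0", "key1": "value1"}

def effective_settings(central, usr_conf_d, etc_conf_d):
    dropins = dict(usr_conf_d)
    dropins.update(etc_conf_d)      # /etc file shadows the whole /usr file
    result = dict(central)          # central file: lowest priority
    for name in sorted(dropins):    # lexicographic order, later wins
        result.update(dropins[name])
    return result

print(effective_settings(central, usr_conf_d, etc_conf_d)["key1"])  # value13
```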
get_active_setting_value(setting_name)[source]

Access active setting value by setting name.

Parameters

setting_name (string) -- Setting name

get_active_setting_value_and_file_name(setting_name)[source]

Access active setting value by setting name. Returns the active setting value and file name of the file in which it is defined. Other files that also specify the setting but are shadowed are ignored and not reported.

Parameters

setting_name (string) -- Setting name

Returns

setting value, file name

Return type

tuple[str, str]

krb5 configuration

The krb5 files are normally available to rules as a list of Krb5Configuration objects.

class insights.combiners.krb5.AllKrb5Conf(krb5configs)[source]

Bases: insights.core.LegacyItemAccess

Combiner for accessing all the krb5 configuration files as a dict. There may be multiple krb5 configuration files, and the main config file is krb5.conf. When the same section appears both in krb5.conf and in another configuration file, the section in krb5.conf is the effective setting. Data from the krb5 parser is a list of dicts; this combiner parses that list and returns a dict containing all valid data.

Sample files:

/etc/krb5.conf:

    includedir /etc/krb5.conf.d/
    include /etc/krb5test.conf
    module /etc/krb5test.conf:residual

    [logging]
        default = FILE:/var/log/krb5libs.log
        kdc = FILE:/var/log/krb5kdc.log

/etc/krb5.d/krb5_more.conf:

    [logging]
        default = FILE:/var/log/krb5.log
        kdc = FILE:/var/log/krb5.log
        admin_server = FILE:/var/log/kadmind.log

    [realms]
        dns_lookup_realm = false
        default_ccache_name = KEYRING:persistent:%{uid}
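
The "krb5.conf wins" rule, applied to the sample files above, can be sketched with plain dicts. This is a simplification: the real combiner also records the includedir, include, and module directives:

```python
# Merge sketch: options from other config files are kept, but on a
# conflict the option from the main krb5.conf takes effect.  Contents
# mirror the sample files above.
main = {"logging": {"default": "FILE:/var/log/krb5libs.log",
                    "kdc": "FILE:/var/log/krb5kdc.log"}}
extra = {"logging": {"default": "FILE:/var/log/krb5.log",
                     "kdc": "FILE:/var/log/krb5.log",
                     "admin_server": "FILE:/var/log/kadmind.log"},
         "realms": {"dns_lookup_realm": "false",
                    "default_ccache_name": "KEYRING:persistent:%{uid}"}}

def merge(main, extra):
    merged = {section: dict(opts) for section, opts in extra.items()}
    for section, opts in main.items():
        merged.setdefault(section, {}).update(opts)  # krb5.conf wins
    return merged

conf = merge(main, extra)
print(conf["logging"]["kdc"])  # FILE:/var/log/krb5kdc.log
```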

Examples

>>> all_krb5 = shared[AllKrb5Conf]
>>> all_krb5.include
['/etc/krb5test.conf']
>>> all_krb5.sections()
['logging', 'realms']
>>> all_krb5.options('logging')
['default', 'kdc', 'admin_server']
>>> all_krb5['logging']['kdc']
'FILE:/var/log/krb5kdc.log'
>>> all_krb5.has_option('logging', 'admin_server')
True
>>> all_krb5['realms']['dns_lookup_realm']
'false'
includedir

The directory list that krb5.conf includes via the includedir directive

Type

list

include

The configuration file list that krb5.conf includes via the include directive

Type

list

module

The module list that krb5.conf specified via the module directive

Type

list

has_option(section, option)[source]

Check for the existence of a given option in a given section. Return True if the given option is present, and False if not present.

has_section(section)[source]

Indicate whether the named section is present in the configuration. Return True if the given section is present, and False if not present.

options(section)[source]

Return a list of option names for the given section name.

sections()[source]

Return a list of section names.

Limits configuration

The limits files are normally available to rules as a list of LimitsConf objects. This combiner turns those into one set of data, and provides a find_all() method to search the rules from all the files.
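
The combining strategy is essentially concatenation of each file's find_all() results. A sketch with an invented stand-in for the real LimitsConf parser:

```python
# Sketch of AllLimitsConf.find_all(): delegate to each file parser's
# find_all() and concatenate.  LimitsParser and the rule dicts are
# invented stand-ins for the real LimitsConf parsers.
class LimitsParser:
    def __init__(self, rules):
        self.rules = rules

    def find_all(self, **kwargs):
        return [r for r in self.rules
                if all(r.get(k) == v for k, v in kwargs.items())]

parsers = [
    LimitsParser([{"domain": "nproc", "type": "soft", "value": 4096}]),
    LimitsParser([{"domain": "*", "type": "hard", "value": 65536}]),
]

def find_all(**kwargs):
    return [rule for p in parsers for rule in p.find_all(**kwargs)]

print(find_all(domain="nproc")[0]["domain"])  # nproc
```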

class insights.combiners.limits_conf.AllLimitsConf(limits)[source]

Bases: object

Combiner for accessing all the limits configuration files.

domains

the set of domains found in all data files.

Type

set

limits

a list of the original LimitsConf parser results.

Type

list

rules

the entire list of rules.

Type

list

find_all(**kwargs)[source]

Find all the rules that match the given parameters. We cheat a bit here and combine the results from the find_all() method from the original parsers. Otherwise we’d have to reimplement the _matches method from the LimitsConf class.

Examples

>>> data = limits
>>> results = data.find_all(domain='nproc')
>>> len(results)
1
>>> results[0]['domain']
'nproc'
Parameters

**kwargs (dict) -- key-value pairs for the search data.

Returns

a list of the rules matching the given keywords, as determined by the _matches() method in the LimitsConf class.

Return type

(list)

LogrotateConfAll - Combiner for logrotate configuration

Combiner for accessing all the logrotate configuration files. It collects all LogrotateConf generated from each single logrotate configuration file.

There may be multiple logrotate configuration files, and the main configuration file is /etc/logrotate.conf. Only the options defined in this file are global options; any other global options (if present) are discarded.
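
The split between global and per-file options can be sketched as below. The values mirror the LogrotateConfAll sample files; the flat dict layout is a simplification of the combiner's data attribute:

```python
# Sketch of the global/per-file option split: globals come only from
# /etc/logrotate.conf; each log file keeps its own option dict.
data = {
    "compress": None,    # global option from /etc/logrotate.conf
    "rotate": "7",       # global option from /etc/logrotate.conf
    "/var/log/httpd/access.log": {"rotate": "5", "mail": "www@my.org"},
}
log_files = [key for key in data if key.startswith("/")]
global_options = sorted(key for key in data if not key.startswith("/"))

print(global_options)                               # ['compress', 'rotate']
print(data["/var/log/httpd/access.log"]["rotate"])  # 5
```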

class insights.combiners.logrotate_conf.LogRotateConfTree(confs)[source]

Bases: insights.core.ConfigCombiner

Exposes logrotate configuration through the parsr query interface.

See the insights.core.ConfigComponent class for example usage.

class insights.combiners.logrotate_conf.LogrotateConfAll(lrt_conf)[source]

Bases: object

Class for combining all the logrotate configuration files.

Sample files:

# /etc/logrotate.conf:
    compress
    rotate 7

    /var/log/messages {
        rotate 5
        weekly
        postrotate
                    /sbin/killall -HUP syslogd
        endscript
    }

# /etc/logrotate.d/httpd
    "/var/log/httpd/access.log" /var/log/httpd/error.log {
        rotate 5
        mail www@my.org
        size=100k
        sharedscripts
        postrotate
                    /sbin/killall -HUP httpd
        endscript
    }

# /etc/logrotate.d/newscrit
    /var/log/news/*.crit  {
        monthly
        rotate 2
        olddir /var/log/news/old
        missingok
        postrotate
                    kill -HUP `cat /var/run/inn.pid`
        endscript
        nocompress
    }

Examples

>>> all_lrt.global_options
['compress', 'rotate']
>>> all_lrt['rotate']
'7'
>>> '/var/log/httpd/access.log' in all_lrt.log_files
True
>>> all_lrt['/var/log/httpd/access.log']['rotate']
'5'
>>> all_lrt.configfile_of_logfile('/var/log/news/olds.crit')
'/etc/logrotate.d/newscrit'
>>> all_lrt.options_of_logfile('/var/log/httpd/access.log')['mail']
'www@my.org'
data

All parsed options and log files are stored in this dictionary

Type

dict

global_options

List of global options defined in /etc/logrotate.conf

Type

list

log_files

List of log files in all logrotate configuration files

Type

list

configfile_of_logfile(log_file)[source]

Get the configuration file path in which the log_file is configured.

Parameters

log_file (str) -- The log file to check.

Returns

The configuration file path of log_file, or None when there is no such log_file.

Return type

str

options_of_logfile(log_file)[source]

Get the options of log_file.

Parameters

log_file (str) -- The log file to check.

Returns

Dictionary containing the options of log_file, or None when there is no such log_file.

Return type

dict

insights.combiners.logrotate_conf.get_tree(root=None)[source]

This is a helper function to get a logrotate configuration component for your local machine or an archive. It’s for use in interactive sessions.

insights.combiners.logrotate_conf.parse_doc(content, ctx=None)[source]

Parse a configuration document into a tree that can be queried.

Lvm - Combiner for lvm information

This shared combiner for LVM parsers consolidates all of the information for the following information:

  • LVS

  • PVS

  • VGS

The parsers gather this information from multiple locations such as Insights data and SOS Report data and combines the data. Sample input data and examples are shown for LVS, with PVS and VGS being similar.

Sample input data for LVS commands as parsed by the parsers:

# Output of the command:
# /sbin/lvs -a -o +lv_tags,devices --config="global{locking_type=0}"
WARNING: Locking disabled. Be careful! This could corrupt your metadata.
LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert LV Tags Devices
root rhel -wi-ao---- 17.47g                                                             /dev/vda2(512)
swap rhel -wi-ao----  2.00g                                                             /dev/vda2(0)

# Output of the command:
# /sbin/lvs --nameprefixes --noheadings --separator='|' -a -o lv_name,vg_name,lv_size,region_size,mirror_log,lv_attr,devices,region_size --config="global{locking_type=0}"
WARNING: Locking disabled. Be careful! This could corrupt your metadata.
LVM2_LV_NAME='root'|LVM2_VG_NAME='rhel'|LVM2_LV_SIZE='17.47g'|LVM2_REGION_SIZE='0 '|LVM2_MIRROR_LOG=''|LVM2_LV_ATTR='-wi-ao----'|LVM2_DEVICES='/dev/vda2(512)'|LVM2_REGION_SIZE='0 '
LVM2_LV_NAME='swap'|LVM2_VG_NAME='rhel'|LVM2_LV_SIZE='2.00g'|LVM2_REGION_SIZE='0 '|LVM2_MIRROR_LOG=''|LVM2_LV_ATTR='-wi-ao----'|LVM2_DEVICES='/dev/vda2(0)'|LVM2_REGION_SIZE='0 '

Because logical volume names may be duplicated on different volume groups, the key used for the logical volume information is a named tuple of type LvVgName. Physical volumes and volume groups do not have the same limitation so the key used for that information is simply the string name of the physical device or volume group.
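
This keying scheme can be demonstrated with collections.namedtuple, using the same field names as the documented LvVgName; the second volume group and its size are invented:

```python
# Why a (LV, VG) named tuple key: the same logical volume name may
# exist in more than one volume group.
from collections import namedtuple

LvVgName = namedtuple("LvVgName", ["LV", "VG"])

logical_volumes = {
    LvVgName(LV="root", VG="rhel"): {"LSize": "17.47g"},
    LvVgName(LV="root", VG="data"): {"LSize": "50.00g"},  # same LV name
}

print(logical_volumes[LvVgName("root", "rhel")]["LSize"])  # 17.47g
```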

Examples

>>> lvm_info = shared[Lvm]
>>> lvm_info.logical_volumes[LvVgName(LV='root', VG='rhel')]
{
    'Log': '', 'LPerms': None, 'Health': None, 'MaxSync': None, 'Pool_UUID': None, 'DevOpen': None, 'SkipAct': None,
    'Parent': None, 'Descendants': None, 'WhenFull': None, 'Lock_Args': None, 'CacheReadMisses': None, 'Host': None,
    'CacheWriteHits': None, 'Active': None, 'Path': None, 'LV_UUID': None, 'Data': None, 'LV_Tags': None, 'Pool': None,
    'CacheDirtyBlocks': None, 'InitImgSync': None, 'Region': '0', 'LiveTable': None, 'MinSync': None,
    'Devices': '/dev/vda2(512)', 'ActLocal': None, 'Time': None, 'Cpy%Sync': None, 'Modules': None, 'Data_UUID': None, 'Origin': None,
    'Move': None, 'Origin_UUID': None, 'Converting': None, 'LSize': '17.47g', '#Seg': None, 'Ancestors': None, 'Layout': None,
    'Meta%': None, 'Min': None, 'Data%': None, 'AllocLock': None, 'CacheWriteMisses': None, 'AllocPol': None,
    'CacheTotalBlocks': None, 'MergeFailed': None, 'Mismatches': None, 'WBehind': None, 'ActExcl': None, 'ActRemote': None,
    'OSize': None, 'KMin': None, 'LV': 'root', 'InactiveTable': None, 'Move_UUID': None, 'Maj': None, 'Role': None, 'KMaj': None,
    'Convert': None, 'LProfile': None, 'Attr': '-wi-ao----', 'VG': 'rhel', 'KRahead': None, 'Rahead': None, 'Log_UUID': None,
    'MSize': None, 'Merging': None, 'DMPath': None, 'Meta_UUID': None, 'SnapInvalid': None, 'ImgSynced': None,
    'CacheReadHits': None, 'Meta': None, 'Snap%': None, 'Suspended': None, 'FixMin': None, 'CacheUsedBlocks': None, 'SyncAction': None
}
>>> lvm_info.logical_volumes[LvVgName('root','rhel')]['LSize']
'17.47g'
>>> lvm_info.logical_volume_names
{LvVgName(LV='root', VG='rhel'), LvVgName(LV='swap', VG='rhel')}
>>> lvm_info.filter_logical_volumes(lv_filter='root')
{LvVgName(LV='root', VG='rhel'): {
    'Log': '', 'LPerms': None, 'Health': None, 'MaxSync': None, 'Pool_UUID': None, 'DevOpen': None, 'SkipAct': None,
    'Parent': None, 'Descendants': None, 'WhenFull': None, 'Lock_Args': None, 'CacheReadMisses': None, 'Host': None,
    'CacheWriteHits': None, 'Active': None, 'Path': None, 'LV_UUID': None, 'Data': None, 'LV_Tags': None, 'Pool': None,
    'CacheDirtyBlocks': None, 'InitImgSync': None, 'Region': '0', 'LiveTable': None, 'MinSync': None,
    'Devices': '/dev/vda2(512)', 'ActLocal': None, 'Time': None, 'Cpy%Sync': None, 'Modules': None, 'Data_UUID': None, 'Origin': None,
    'Move': None, 'Origin_UUID': None, 'Converting': None, 'LSize': '17.47g', '#Seg': None, 'Ancestors': None, 'Layout': None,
    'Meta%': None, 'Min': None, 'Data%': None, 'AllocLock': None, 'CacheWriteMisses': None, 'AllocPol': None,
    'CacheTotalBlocks': None, 'MergeFailed': None, 'Mismatches': None, 'WBehind': None, 'ActExcl': None, 'ActRemote': None,
    'OSize': None, 'KMin': None, 'LV': 'root', 'InactiveTable': None, 'Move_UUID': None, 'Maj': None, 'Role': None, 'KMaj': None,
    'Convert': None, 'LProfile': None, 'Attr': '-wi-ao----', 'VG': 'rhel', 'KRahead': None, 'Rahead': None, 'Log_UUID': None,
    'MSize': None, 'Merging': None, 'DMPath': None, 'Meta_UUID': None, 'SnapInvalid': None, 'ImgSynced': None,
    'CacheReadHits': None, 'Meta': None, 'Snap%': None, 'Suspended': None, 'FixMin': None, 'CacheUsedBlocks': None, 'SyncAction': None
}}
class insights.combiners.lvm.Lvm(lvs, lvs_headings, pvs, pvs_headings, vgs, vgs_headings)[source]

Bases: object

Class implements shared combiner for LVM information.

class LvVgName(LV, VG)

Bases: tuple

Named tuple used as key for logical volumes.

property LV
property VG
filter_logical_volumes(lv_filter, vg_filter=None)[source]

dict: Returns a dictionary of logical volume information for entries whose logical volume name contains lv_filter and, if specified, whose volume group name contains vg_filter.

filter_physical_volumes(pv_filter)[source]

dict: Returns a dictionary of physical volume information whose keys contain pv_filter.

filter_volume_groups(vg_filter)[source]

dict: Returns a dictionary of volume group information whose keys contain vg_filter.

property logical_volume_names

Returns a set of tuple keys from the logical volume information.

Type

set

logical_volumes = None

Contains a dictionary of logical volume data with keys from the original output. The key is a tuple of the logical volume name and the volume group name. This tuple avoids the case where logical volume names are the same across volume groups.

Type

dict

physical_volumes = None

Contains a dictionary of physical volume data with keys from the original output.

Type

dict

property volume_group_names

Returns a set of keys from the volume group information.

Type

set

volume_groups = None

Contains a dictionary of volume group data with keys from the original output.

Type

dict
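The filter methods of this class do simple substring matching over the keys. A minimal sketch of the volume-group variant, written as a hypothetical standalone helper rather than the combiner's actual code:

```python
def filter_volume_groups(volume_groups, vg_filter):
    """Return the subset of volume group entries whose name
    contains the vg_filter substring."""
    return {name: data for name, data in volume_groups.items()
            if vg_filter in name}
```

The logical-volume variant works the same way, except that it matches against the LV component of the (LV, VG) named-tuple key and, optionally, the VG component.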

class insights.combiners.lvm.LvmAll(lvsall, pvsall, vgsall)[source]

Bases: insights.combiners.lvm.Lvm

An Lvm-like shared combiner for processing LVM information, including all rejected and accepted devices.

logical_volumes = None

Contains a dictionary of logical volume data with keys from the original output. The key is a tuple of the logical volume name and the volume group name. This tuple avoids the case where logical volume names are the same across volume groups.

Type

dict

physical_volumes = None

Contains a dictionary of physical volume data with keys from the original output.

Type

dict

volume_groups = None

Contains a dictionary of volume group data with keys from the original output.

Type

dict

insights.combiners.lvm.get_shared_data(component)[source]

Returns the actual list of component data based on how the data is stored in the component: either from the data attribute or from the data[‘content’] attribute.

Returns

List of component data.

Return type

list
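The two storage layouts mentioned above can be handled as in this illustrative sketch (not the combiner's actual source):

```python
def get_shared_data(component):
    """Return the list of rows held by a parser component.

    Some parsers expose their rows directly on a `data` attribute;
    others wrap them as `data['content']`. Normalize both to a list.
    """
    if component is None:
        return []
    data = getattr(component, 'data', None)
    if isinstance(data, dict) and 'content' in data:
        return list(data['content'])
    return list(data) if data is not None else []
```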

insights.combiners.lvm.merge_lvm_data(primary, secondary, name_key)[source]

Returns a dictionary containing the set of data from primary and secondary where values in primary will always be returned if present, and values in secondary will only be returned if not present in primary, or if the value in primary is None.

Sample input Data:

primary = [
    {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'name_key': 'xyz'},
    {'a': None, 'b': 12, 'c': 13, 'd': 14, 'name_key': 'qrs'},
    {'a': None, 'b': 12, 'c': 13, 'd': 14, 'name_key': 'def'},
]
secondary = [
    {'a': 31, 'e': 33, 'name_key': 'xyz'},
    {'a': 11, 'e': 23, 'name_key': 'qrs'},
    {'a': 1, 'e': 3, 'name_key': 'ghi'},
]
Returns

Dictionary of key/value pairs merged from primary and secondary:

{
    'xyz': {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 33, 'name_key': 'xyz'},
    'qrs': {'a': 11, 'b': 12, 'c': 13, 'd': 14, 'e': 23, 'name_key': 'qrs'},
    'def': {'a': None, 'b': 12, 'c': 13, 'd': 14, 'name_key': 'def'},
    'ghi': {'a': 1, 'e': 3, 'name_key': 'ghi'}
}

Return type

dict
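The merge semantics described above can be sketched as follows; this is an illustrative reimplementation, not the combiner's actual source:

```python
def merge_lvm_data(primary, secondary, name_key):
    """Merge two lists of dicts, indexing the result by name_key.

    Values from primary win unless they are None, in which case the
    secondary value (if any) is kept. Entries present in only one
    list are kept as-is.
    """
    combined = {}
    for row in secondary:
        combined[row[name_key]] = dict(row)
    for row in primary:
        merged = combined.setdefault(row[name_key], {})
        for key, value in row.items():
            # Keep the secondary value only when primary has None here.
            if value is not None or key not in merged:
                merged[key] = value
    return combined
```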

insights.combiners.lvm.set_defaults(lvm_data)[source]

dict: Sets all existing null string values to None.

insights.combiners.lvm.to_name_key_dict(data, name_key)[source]

Iterates a list of dictionaries where each dictionary has a name_key value that is used to return a single dictionary indexed by those values.

Returns

Dictionary keyed by name_key values, containing the information from the original input list data.

Return type

dict
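Both helpers above are small; a sketch of their behavior (illustrative, not the actual source):

```python
def set_defaults(lvm_data):
    """Replace empty-string values with None in each row."""
    for row in lvm_data:
        for key, value in row.items():
            if value == '':
                row[key] = None
    return lvm_data

def to_name_key_dict(data, name_key):
    """Index a list of dicts by the value each holds under name_key."""
    return {row[name_key]: row for row in data}
```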

NormalMD5 Combiner for the NormalMD5 Parser

Combiner for the insights.parsers.md5check.NormalMD5 parser.

This parser is multioutput, with one parser instance for each file's md5sum. This combiner puts them all back together and presents them as a dict where the keys are the filenames and the md5sums are the values.

This class inherits all methods and attributes from the dict object.

Examples

>>> type(md5sums)
<class 'insights.combiners.md5check.NormalMD5'>
>>> sorted(md5sums.keys())
['/etc/localtime1', '/etc/localtime2']
>>> md5sums['/etc/localtime2']
'd41d8cd98f00b204e9800998ecf8427e'
class insights.combiners.md5check.NormalMD5(md5_checksums)[source]

Bases: dict

Combiner for the NormalMD5 parser.
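The combining step is a simple gather into a dict. A minimal sketch, assuming each parser instance exposes filename and md5sum attributes:

```python
class NormalMD5(dict):
    """Gather one {filename: md5sum} entry per parser instance."""
    def __init__(self, md5_checksums):
        super().__init__()
        for checksum in md5_checksums:
            self[checksum.filename] = checksum.md5sum
```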

Mlx4Port Combiner for the Mlx4Port Parser

Combiner for the insights.parsers.mlx4_port.Mlx4Port parser.

This parser is multioutput, one parser instance for each port file. This combiner puts all of them back together and presents them as a dict where the keys are the port names, and the contents of the port name file are the lines in each file stored as a list.

This class inherits all methods and attributes from the dict object.

Examples

>>> type(mlx4port)
<class 'insights.combiners.mlx4_port.Mlx4Port'>
>>> mlx4port['mlx4_port1']
['ib']
>>> sorted(mlx4port.keys())
['mlx4_port1', 'mlx4_port2']
class insights.combiners.mlx4_port.Mlx4Port(mlx4_port)[source]

Bases: dict

Combiner for the mlx4_port parser.

ModInfo

The ModInfo combiner gathers all the ModInfoEach parsers into a dictionary indexed by the module name.

class insights.combiners.modinfo.ModInfo(mi_all, mi_each)[source]

Bases: dict

Combiner for accessing all the modinfo outputs.

Examples

>>> type(modinfo_obj)
<class 'insights.combiners.modinfo.ModInfo'>
>>> type(modinfo_obj['i40e'])
<class 'insights.parsers.modinfo.ModInfoEach'>
>>> modinfo_obj['i40e'].module_name
'i40e'
>>> modinfo_obj['i40e']['retpoline']
'Y'
>>> modinfo_obj['i40e'].module_version
'2.3.2-k'
>>> modinfo_obj['i40e'].module_path
'/lib/modules/3.10.0-993.el7.x86_64/kernel/drivers/net/ethernet/intel/i40e/i40e.ko.xz'
>>> "i40e" in modinfo_obj.retpoline_y
True
>>> "bnx2x" in modinfo_obj.retpoline_y
False
>>> "bnx2x" in modinfo_obj.retpoline_n
True
Raises

SkipComponent -- When content is empty.

retpoline_y

A set of names of the modules with the attribute “retpoline: Y”.

Type

set

retpoline_n

A set of names of the modules with the attribute “retpoline: N”.

Type

set

property data

Dict with the module name as the key and the module details as the value.

Type

(dict)

Modprobe configuration

The modprobe configuration files are normally available to rules as a list of ModProbe objects. This combiner turns those into one set of data, using a tuple to preserve the name of the original file that defined each modprobe configuration line.

class insights.combiners.modprobe.AllModProbe(modprobe)[source]

Bases: insights.core.LegacyItemAccess

Combiner for accessing all the modprobe configuration files in one structure.

It’s important for our reporting and information purposes to know not only what the configuration was but where it was defined. Therefore, the format of the data in this combiner is slightly different compared to the ModProbe parser. Here, each ‘value’ is actually a 2-tuple, with the actual data first and the file name from whence the value came second. This does mean that you need to pull the value out of each item - e.g. using a list comprehension - but it means that every item is associated with the file it was defined in.

In line with the ModProbe configuration parser, the actual value is usually a list of the space-separated parts on the line, and the definitions for each module are similarly kept in a list, which makes multiple definitions of the same module straightforward to handle.

Thanks to the LegacyItemAccess class, this can also be treated as a dictionary for look-ups of data in the data attribute.

data

The combined data structures, with each item as a 2-tuple, as described above.

Type

dict

bad_lines

The list of unparseable lines from all files, with each line as a 2-tuple as described above.

Type

list

Sample data files:

/etc/modprobe.conf:
    # watchdog drivers
    blacklist i8xx_tco

    # Don't install the Firewire ethernet driver
    install eth1394 /bin/true

/etc/modprobe.conf.d/no_ipv6.conf:
    options ipv6 disable=1
    install ipv6 /bin/true

Examples

>>> all_modprobe = shared[AllModProbe]
>>> all_modprobe['alias']
[]
>>> all_modprobe['blacklist']
{'i8xx_tco': ModProbeValue(True, '/etc/modprobe.conf')}
>>> all_modprobe['install']
{'eth1394': ModProbeValue(['/bin/true'], '/etc/modprobe.conf'),
 'ipv6': ModProbeValue(['/bin/true'], '/etc/modprobe.conf.d/no_ipv6.conf')}
class insights.combiners.modprobe.ModProbeValue(value, source)

Bases: tuple

A value from a ModProbe source

property source
property value
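As noted above, each value carries its source file, so rules typically unpack the 2-tuple, e.g. with a comprehension. A hypothetical usage sketch over data shaped like the example output:

```python
from collections import namedtuple

# Stand-in for the combiner's 2-tuple value type.
ModProbeValue = namedtuple('ModProbeValue', ['value', 'source'])

blacklist = {'i8xx_tco': ModProbeValue(True, '/etc/modprobe.conf')}

# Pull out just the module names, and separately report provenance.
blacklisted = sorted(blacklist)
defined_in = [entry.source for entry in blacklist.values()]
```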

Combined NFS exports

The NFS exports files are normally available to rules from both a single NFSExports object and zero or more NFSExportsD objects. This combiner turns those into one set of data.

Examples

>>> type(all_nfs)
<class 'insights.combiners.nfs_exports.AllNFSExports'>
>>> all_nfs.files  # List of files exporting NFS shares
['/etc/exports', '/etc/exports.d/mnt.exports']
>>> '/home/example' in all_nfs.exports  # All exports stored by path
True
>>> sorted(all_nfs.exports['/home/example'].keys())  # Each path is a dictionary of host specs and flags.
['@group', 'ins1.example.com', 'ins2.example.com']
>>> all_nfs.exports['/home/example']['ins2.example.com']  # Each host contains a list of flags.
['rw', 'sync', 'no_root_squash']
>>> '/home/example' in all_nfs.ignored_exports  # Ignored exports are remembered within one file
True
>>> list(all_nfs.ignored_exports['/home/example'].keys())  # Each ignored export is then stored by source file...
['/etc/exports']
>>> list(all_nfs.ignored_exports['/home/example']['/etc/exports'].keys())  # ... and then by host spec...
['ins2.example.com']
>>> all_nfs.ignored_exports['/home/example']['/etc/exports']['ins2.example.com']  # ... holding the values that were duplicated
['rw', 'sync', 'no_root_squash']
>>> '/home/insights/shared/rw'  in all_nfs.ignored_exports  # Ignored exports are remembered across files
True
class insights.combiners.nfs_exports.AllNFSExports(nfsexports, nfsexportsd)[source]

Bases: object

Combiner for accessing all the NFS export configuration files.

Exports are allowed to be listed multiple times, with every duplicate host after the first causing exportfs to emit a warning and ignore that host. So we combine the raw lines and ignored exports into structures listing the source file for each export.

files

the list of source files that contained NFS export definitions.

Type

list

exports

the NFS exports stored by export path, with each path storing a dictionary of host flag lists.

Type

dict of dicts

ignored_exports

A dictionary of exported paths that have host definitions that conflicted with a previous definition, stored by export path and then path of the file that defined it.

Type

dict

raw_lines

A dictionary of raw lines that define each exported path, with the lines stored by defining file.

Type

dict of dicts
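The duplicate-handling rule described above (the first host spec for a path wins; later duplicates are recorded as ignored, by export path and then by the file that redefined them) can be sketched as follows. This is an illustrative simplification over plain dicts, not the combiner's actual code:

```python
def combine_exports(files):
    """Combine per-file export definitions, ignoring duplicate hosts.

    `files` maps a config file path to {export_path: {host: flags}}.
    Returns (exports, ignored): the first definition of each
    (path, host) pair wins, and later ones are recorded under the
    file that tried to redefine them.
    """
    exports, ignored = {}, {}
    for filename, defs in files.items():
        for path, hosts in defs.items():
            seen = exports.setdefault(path, {})
            for host, flags in hosts.items():
                if host in seen:
                    ignored.setdefault(path, {}) \
                           .setdefault(filename, {})[host] = flags
                else:
                    seen[host] = flags
    return exports, ignored
```

This relies on `files` preserving the order in which the configuration files were read (Python dicts preserve insertion order).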

NginxConfTree - Combiner for nginx configuration

This module models nginx configuration as a tree. It correctly handles include directives by splicing individual document trees into their parents until one document tree is left.

A DSL is provided to query the tree through a select function or brackets []. The brackets allow a more conventional lookup feel but aren’t quite as powerful as using select directly.

class insights.combiners.nginx_conf.EmptyQuotedString(chars)[source]

Bases: insights.parsr.Parser

class insights.combiners.nginx_conf.NginxConfTree(confs)[source]

Bases: insights.core.ConfigCombiner

Exposes nginx configuration through the parsr query interface.

See the insights.core.ConfigComponent class for example usage.

insights.combiners.nginx_conf.get_tree(root=None)[source]

This is a helper function to get an nginx configuration component for your local machine or an archive. It’s for use in interactive sessions.

nmcli_dev_show command

Because the underlying file appears at three different paths across sos packages, this combiner consolidates the parsers for all of them.

class insights.combiners.nmcli_dev_show.AllNmcliDevShow(nmclidevshow, nmclidevshowsos)[source]

Bases: dict

Combiner to combine return values from parser NmcliDevShow into one dict

Examples

>>> allnmclidevshow['eth0']['TYPE']
'ethernet'
>>> allnmclidevshow.connected_devices
['eth0']
property connected_devices

The list of devices whose state is connected and which are managed by NetworkManager

Type

(list)

property data

Dict with the device name as the key and NmcliDevShow details as the value.

Type

(dict)

PackageProvidesHttpdAll - Combiner for packages which provide httpd

Combiner for collecting all the running httpd commands and the corresponding RPM package names, as parsed by the PackageProvidesHttpd parser.

class insights.combiners.package_provides_httpd.PackageProvidesHttpdAll(package_provides_httpd)[source]

Bases: insights.core.LegacyItemAccess

This combiner will receive a list of parsers named PackageProvidesHttpd, one for each running instance of httpd and each parser instance will contain the command information and the RPM package information. It works as a dict with the httpd command information as the key and the corresponding RPM package information as the value.

Examples

>>> sorted(packages.running_httpds)
['/opt/rh/httpd24/root/usr/sbin/httpd', '/usr/sbin/httpd']
>>> packages.get_package("/usr/sbin/httpd")
'httpd-2.4.6-88.el7.x86_64'
>>> packages.get("/opt/rh/httpd24/root/usr/sbin/httpd")
'httpd24-httpd-2.4.34-7.el7.x86_64'
>>> packages["/usr/sbin/httpd"]
'httpd-2.4.6-88.el7.x86_64'
get_package(httpd_command)[source]

Returns the installed httpd package that provides the specified httpd_command.

Parameters

httpd_command (str) -- The specified httpd command, e.g. found in ps command.

Returns

The package that provides the httpd command.

Return type

(str)

property packages

Returns the list of RPM packages that provide the httpd commands running on the system.

property running_httpds

Returns the list of httpd commands which are running on the system.

PackageProvidesJavaAll - Combiner for packages which provide java

Combiner for collecting all the java commands and the corresponding package names, as parsed by the PackageProvidesJava parser.

class insights.combiners.package_provides_java.PackageProvidesJavaAll(package_provides_java)[source]

Bases: insights.core.LegacyItemAccess

Combiner for collecting all the java commands and the corresponding package names, as parsed by the PackageProvidesJava parser. It works as a dict with the java command as the key and the corresponding package name as the value.

Examples

>>> PACKAGE_COMMAND_MATCH_1 = '''/usr/lib/jvm/jre/bin/java java-1.8.0-openjdk-headless-1.8.0.141-3.b16.el6_9.x86_64'''
>>> PACKAGE_COMMAND_MATCH_2 = '''/usr/lib/jvm/java-1.8.0-openjdk-headless-1.8.0.141-3.b16.el6_9.x86_64/bin/java java-1.8.0-openjdk-headless-1.8.0.141-3.b16.el6_9.x86_64'''
>>> pack1 = PackageProvidesJava(context_wrap(PACKAGE_COMMAND_MATCH_1))
>>> pack2 = PackageProvidesJava(context_wrap(PACKAGE_COMMAND_MATCH_2))
>>> shared = [{PackageProvidesJavaAll: [pack1, pack2]}]
>>> packages = shared[PackageProvidesJavaAll]
>>> packages.running_javas
['/usr/lib/jvm/jre/bin/java',
 '/usr/lib/jvm/java-1.8.0-openjdk-headless-1.8.0.141-3.b16.el6_9.x86_64/bin/java']
>>> packages.get_package("/usr/lib/jvm/jre/bin/java")
'java-1.8.0-openjdk-headless-1.8.0.141-3.b16.el6_9.x86_64'
>>> packages.get("/usr/lib/jvm/jre/bin/java")
'java-1.8.0-openjdk-headless-1.8.0.141-3.b16.el6_9.x86_64'
>>> packages["/usr/lib/jvm/jre/bin/java"]
'java-1.8.0-openjdk-headless-1.8.0.141-3.b16.el6_9.x86_64'
get_package(java_command)[source]

Returns the installed java package that provides the specified java_command.

Parameters

java_command (str) -- The specified java command, e.g. found in ps command.

Returns

The package that provides the java command.

Return type

(str)

property running_javas

Returns the list of java commands which are running on the system.

PS

This combiner provides information about running processes based on the ps command. More specifically this consolidates data from insights.parsers.ps.PsEo, insights.parsers.ps.PsAuxcww, insights.parsers.ps.PsEf, insights.parsers.ps.PsAux, insights.parsers.ps.PsAuxww and insights.parsers.ps.PsAlxwww parsers (in that specific order).

Note

The final dataset can vary depending on availability of the parsers for a given ExecutionContext and added filters. The underlying filterable datasources for this combiner can be filtered by passing insights.combiners.ps.Ps to insights.core.filters.add_filter() function along with a filter pattern. Please see insights.core.filters for more information on filtering.

Examples

>>> ps_combiner.pids
[1, 2, 3, 8, 9, 10, 11, 12]
>>> '[kthreadd]' in ps_combiner.commands
True
>>> '[kthreadd]' in ps_combiner
True
>>> ps_combiner[2] == {
... 'PID': 2,
... 'USER': 'root',
... 'UID': 0,
... 'PPID': 0,
... '%CPU': 0.0,
... '%MEM': 0.0,
... 'VSZ': 0.0,
... 'RSS': 0.0,
... 'TTY': '?',
... 'STAT': 'S',
... 'START': '2019',
... 'TIME': '1:04',
... 'COMMAND': '[kthreadd]',
... 'COMMAND_NAME': '[kthreadd]',
... 'ARGS': '',
... 'F': '1',
... 'PRI': 20,
... 'NI': '0',
... 'WCHAN': 'kthrea'
... }
True
class insights.combiners.ps.Ps(ps_alxwww, ps_auxww, ps_aux, ps_ef, ps_auxcww, ps_eo)[source]

Bases: object

Ps combiner consolidates data from the parsers in insights.parsers.ps module.

property commands

Returns the set of full command strings for each command, including optional path and arguments, unless the underlying parser contains command names only.

Returns

the set with command strings.

Return type

set

property pids

Returns the list of running process IDs (integers).

Returns

the PIDs from the PID column.

Return type

list

property processes

Returns the list of dictionaries, where each item in the list represents a process and the keys in each dictionary are the column headers.

Returns

the list of running processes.

Return type

list

search(**kwargs)[source]

Search the process list for matching rows based on key-value pairs.

This uses the insights.parsers.keyword_search() function for searching; see its documentation for usage details. If no search parameters are given, no rows are returned.

Returns

A list of dictionaries of processes that match the given search criteria.

Return type

list

Examples

>>> ps_combiner.search(COMMAND__contains='[rcu_bh]') == [
... {'PID': 9, 'USER': 'root', 'UID': 0, 'PPID': 2, '%CPU': 0.1, '%MEM': 0.0,
...  'VSZ': 0.0, 'RSS': 0.0, 'TTY': '?', 'STAT': 'S', 'START': '2019', 'TIME': '0:00',
...  'COMMAND': '[rcu_bh]', 'COMMAND_NAME': '[rcu_bh]', 'ARGS': '', 'F': '1', 'PRI': 20,
...  'NI': '0', 'WCHAN': 'rcu_gp'}
... ]
True
>>> ps_combiner.search(USER='root', COMMAND='[kthreadd]') == [
... {'PID': 2, 'USER': 'root', 'UID': 0, 'PPID': 0, '%CPU': 0.0, '%MEM': 0.0,
...  'VSZ': 0.0, 'RSS': 0.0, 'TTY': '?', 'STAT': 'S', 'START': '2019', 'TIME': '1:04',
...  'COMMAND': '[kthreadd]', 'COMMAND_NAME': '[kthreadd]', 'ARGS': '', 'F': '1', 'PRI': 20,
...  'NI': '0', 'WCHAN': 'kthrea'}
... ]
True

Red Hat Release

Combiner for Red Hat Release information. It uses the results of the Uname parser and the RedhatRelease parser to determine the release major and minor version. Uname is the preferred source of data. The Red Hat release is obtained from the system in the form major.minor. For example, for a Red Hat Enterprise Linux 7.2 system, the release would be major = 7, minor = 2 and rhel = ‘7.2’.

class insights.combiners.redhat_release.RedHatRelease(uname, rh_rel)[source]

Bases: object

Combiner class to check uname and redhat-release for RHEL major/minor version. Prefer uname to redhat-release.

major

The major RHEL version.

Type

int

minor

The minor RHEL version.

Type

int

rhel

The RHEL version, e.g. ‘7.2’, ‘7.5-0.14’

Type

str

rhel6

The RHEL version when it’s RHEL6, otherwise None

Type

str

rhel7

The RHEL version when it’s RHEL7, otherwise None

Type

str

rhel8

The RHEL version when it’s RHEL8, otherwise None

Type

str

Raises

SkipComponent -- If the version can’t be determined even though a Uname or RedhatRelease was provided.

Examples

>>> rh_rel.rhel
'7.2'
>>> rh_rel.major
7
>>> rh_rel.minor
2
>>> rh_rel.rhel6 is None
True
>>> rh_rel.rhel7
'7.2'
>>> rh_rel.rhel8 is None
True
class insights.combiners.redhat_release.Release(major, minor)

Bases: tuple

namedtuple: Type for storing the release information.

property major
property minor
insights.combiners.redhat_release.redhat_release(rh_release, un)[source]

Warning

This combiner method is deprecated, please use insights.combiners.redhat_release.RedHatRelease instead.

Combiner method to check uname and redhat-release for rhel major/minor version.

Prefer uname to redhat-release.

Returns

A named tuple with the following items:
  • major: integer

  • minor: integer

Return type

Release

Raises

SkipComponent -- If the version can’t be determined even though a Uname or RedhatRelease was provided.

Examples

>>> rh_release.major
7
>>> rh_release.minor
2
>>> rh_release
Release(major=7, minor=2)
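The major/minor split shown above amounts to parsing the leading two components of a dotted version string. A sketch of that step (a hypothetical helper, not the combiner's actual code):

```python
from collections import namedtuple

Release = namedtuple('Release', ['major', 'minor'])

def release_from_version(version):
    """Split a 'major.minor...' version string into a Release tuple."""
    parts = version.split('.')
    return Release(major=int(parts[0]), minor=int(parts[1]))
```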

Red Hat Subscription Manager Release

Combiner provides the Red Hat Subscription Manager release information from the parsers insights.parsers.rhsm_releasever.RhsmReleaseVer and insights.parsers.subscription_manager_release.SubscriptionManagerReleaseShow.

class insights.combiners.rhsm_release.RhsmRelease(rhsm_release, sm_release)[source]

Bases: object

Combiner for parsers RhsmReleaseVer and SubscriptionManagerReleaseShow.

Examples

>>> type(rhsm_release)
<class 'insights.combiners.rhsm_release.RhsmRelease'>
>>> rhsm_release.set == '7.6'
True
>>> rhsm_release.major
7
>>> rhsm_release.minor
6
major = None

Major version of the release

Type

int

minor = None

Minor version of the release

Type

int

set = None

Release version string returned from the parsers

Type

str

Sap

This combiner combines the results of insights.parsers.saphostctrl.SAPHostCtrlInstances and insights.parsers.lssap.Lssap to get the available SAP instances. Prefer SAPHostCtrlInstances to Lssap.

class insights.combiners.sap.SAPInstances(name, hostname, sid, type, number, fqdn, version)

Bases: tuple

namedtuple: Type for storing the SAP instance.

property fqdn
property hostname
property name
property number
property sid
property type
property version
class insights.combiners.sap.Sap(hostname, insts, lssap)[source]

Bases: dict

Combiner for combining the result of insights.parsers.lssap.Lssap generated by command lssap and insights.parsers.saphostctrl.SAPHostCtrlInstances generated by command saphostctrl.

Prefer SAPHostCtrlInstances to Lssap.

Examples

>>> type(saps)
<class 'insights.combiners.sap.Sap'>
>>> 'D16' in saps
True
>>> saps['D16'].number
'16'
>>> saps.sid('HDB16')
'HA2'
>>> saps.hostname('HDB16')
'lu0417'
>>> len(saps.business_instances)
3
>>> saps.is_hana
True
>>> saps.is_netweaver
True
>>> saps.is_ascs
False
all_instances

List of all the SAP instances listed by the command.

Type

list

function_instances

List of functional SAP instances, e.g. Diagnostics Agents SMDA97/SMDA98

Type

list

business_instances

List of business SAP instances, e.g. HANA, NetWeaver, ASCS, or others

Type

list

local_instances

List of all SAP instances running on this host

Type

list

FUNC_INSTS = ('SMDA',)

Tuple of the prefix string of the functional SAP instances

Type

tuple

property data

Dict with the instance name as the key and instance details as the value.

Type

dict

hostname(instance)[source]

str: Returns the hostname of the instance.

property is_ascs

Is any SAP System Central Services instance detected?

Type

bool

property is_hana

Is any SAP HANA instance detected?

Type

bool

property is_netweaver

Is any SAP NetWeaver instance detected?

Type

bool

number(instance)[source]

str: Returns the system number of the instance.

sid(instance)[source]

str: Returns the sid of the instance.

type(instance)[source]

str: Returns the type code of the instance.

version(instance)[source]

str: Returns the version of the instance.
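The functional/business split above is driven by the FUNC_INSTS prefix tuple: instances whose name starts with one of those prefixes (e.g. SMDA97) are functional, the rest are business instances. A sketch of the partitioning (hypothetical helper, not the combiner's actual code):

```python
FUNC_INSTS = ('SMDA',)

def split_instances(instance_names):
    """Partition instance names into (functional, business) lists
    using the FUNC_INSTS name prefixes."""
    functional = [n for n in instance_names if n.startswith(FUNC_INSTS)]
    business = [n for n in instance_names if not n.startswith(FUNC_INSTS)]
    return functional, business
```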

Satellite Version

The following modules are included:

SatelliteVersion - Version of Satellite Server

Combiner to get Satellite Server version information.

CapsuleVersion - Version of Satellite Capsule (>=6.2)

Combiner to get Satellite Capsule version information. ONLY Satellite Capsule 6.2 and newer are supported.

class insights.combiners.satellite_version.CapsuleVersion(rpms)[source]

Bases: object

Check the parser insights.parsers.installed_rpms.InstalledRpms for satellite capsule version information.

Note

ONLY Satellite Capsule 6.2 and newer are supported.

Below is the logic to determine the satellite version:

Check the version of satellite/satellite-capsule directly:
- https://access.redhat.com/solutions/1392633

                        Sat 6.0.x   Sat 6.1.x   Sat 6.2.x
    satellite-capsule   -           -           6.2.x
full

the full version format like version-release.

Type

str

version

the Satellite version, not including the release.

Type

str

release

the release string in the version.

Type

str

major

the major version.

Type

int

minor

the minor version.

Type

int

Raises

SkipComponent -- When it’s not a Satellite Capsule machine or the Satellite Capsule version cannot be determined according to current information.

Examples

>>> cap_ver.full == 'satellite-capsule-6.2.0.11-1.el7sat'
True
>>> cap_ver.major
6
>>> cap_ver.minor
2
>>> cap_ver.version
'6.2.0.11'
>>> cap_ver.release
'1.el7sat'
class insights.combiners.satellite_version.SatelliteVersion(rpms, sat6_ver)[source]

Bases: object

Check the parsers insights.parsers.satellite_version.Satellite6Version and insights.parsers.installed_rpms.InstalledRpms for satellite version information.

Below is the logic to determine the satellite version:

1. For Satellite 6.1:

    a. Check the version information in below files at first
       - https://access.redhat.com/solutions/1392633
       File: /usr/share/foreman/lib/satellite/version.rb

    b. Check the version of package foreman, candlepin and katello, E.g.
       - https://access.redhat.com/articles/1343683

                    Sat 6.0.8   Sat 6.1.10  Sat 6.1.11
       foreman      1.6.0.53    1.7.2.61    1.7.2.62
       candlepin    0.9.23      0.9.49.16   0.9.49.19
       katello      1.5.0       2.2.0       2.2.0

2. For Satellite 6.2 and newer:

   Check the version of satellite package directly:
   - https://access.redhat.com/solutions/1392633

                            Sat 6.0.x   Sat 6.1.x   Sat 6.2.x
        satellite           -           -           6.2.x

3. For Satellite 5.x
   - https://access.redhat.com/solutions/1224043
     NOTE: Because of satellite-branding is not deployed in Satellite
           5.0~5.2, and satellite-schema can also be used for checking
           the version, here checked satellite-schema instead of
           satellite-branding.

   Check the version of package satellite-schema directly:

                                Sat 5.0~5.2     Sat 5.3 ~
        rhn-satellite-schema    ok              -
        satellite-schema        -               ok
full

the full version format like version-release.

Type

str

version

the Satellite version, not including the release.

Type

str

release

the release string in the version.

Type

str

major

the major version.

Type

int

minor

the minor version.

Type

int

Raises

SkipComponent -- When it’s not a Satellite machine or the Satellite version cannot be determined according to current information.

Examples

>>> sat_ver.full == 'satellite-6.2.0.11-1.el7sat'
True
>>> sat_ver.major
6
>>> sat_ver.minor
2
>>> sat_ver.version
'6.2.0.11'
>>> sat_ver.release
'1.el7sat'

SELinux

Combiner for more complex handling of SELinux being disabled by any means available to the users. It uses results of insights.parsers.sestatus.SEStatus, and insights.parsers.selinux_config.SelinuxConfig parsers and insights.combiners.grub_conf.GrubConf combiner.

It contains a dictionary, problems, in which it stores detected problems; the keys are as follows and the values are the parsed lines exhibiting the detected problem:

  • sestatus_disabled - SELinux is disabled on runtime.

  • sestatus_not_enforcing - SELinux is not in enforcing mode.

  • grub_disabled - SELinux is set in Grub to be disabled.

  • grub_not_enforcing - SELinux is set in Grub to not be in enforcing mode.

  • selinux_conf_disabled - SELinux is set in configuration file to be disabled.

  • selinux_conf_not_enforcing - SELinux is set in the configuration file to not be in enforcing mode.

Examples

>>> selinux = shared[SELinux]
>>> selinux.ok()
False
>>> selinux.problems
{'grub_disabled': ['/vmlinuz-2.6.32-642.el6.x86_64 selinux=0 ro root= ...'],
 'selinux_conf_disabled': 'disabled',
 'sestatus_not_enforcing': 'permissive'}
class insights.combiners.selinux.SELinux(se_status, selinux_config, grub_conf)[source]

Bases: object

A combiner for detecting that SELinux is enabled and running and also enabled at boot time.

ok()[source]

Checks if there are any problems with SELinux configuration.

Returns

bool: True if SELinux is enabled and functional, False otherwise.

Services - check ChkConfig and systemd UnitFiles

This combiner provides information about whether a given service is enabled, using parsers for chkconfig (for RHEL 5 and 6) and systemd list-unit-files (for RHEL 7 and above).

Examples

>>> svcs = shared[Services]
>>> svcs.is_on('atd') # Can be 'atd' or 'atd.service'.
True
>>> svcs.is_on('systemd-journald.service')
True
>>> 'atd' in svcs
True
>>> svcs.service_line('atd')
'atd.service                                 enabled'
>>> 'disabled_service' in svcs
False
>>> 'nonexistent_service' in svcs
False
class insights.combiners.services.Services(chk_config, unit_files)[source]

Bases: object

A combiner for working with enabled services, independent of which version of RHEL is in use.

The interface closely follows the models of ChkConfig and UnitFiles:

  • is_on(service_name) and the in operator (service_name in services) both return whether the given service is present and enabled.

  • service_line(service_name) returns the actual line that contained the service name.

is_on(service_name)[source]

Checks if the service is enabled on the system.

Parameters

service_name (str) -- service name (with or without ‘.service’ suffix).

Returns

True if service is enabled, False otherwise.

Return type

bool

service_line(service_name)[source]

Returns the relevant line from the service listing.

Parameters

service_name (str) -- service name (with or without ‘.service’ suffix).

Returns

The line from the service listing that matches the given service name.

Return type

str
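The optional ‘.service’ suffix handling implied above can be sketched as follows, simplified to work over a plain dict of unit-name-to-enabled mappings rather than the ChkConfig/UnitFiles parsers:

```python
def is_on(services, service_name):
    """Check service enablement, accepting names with or without
    the '.service' suffix. `services` maps unit name -> bool."""
    suffix = '.service'
    name = service_name[:-len(suffix)] if service_name.endswith(suffix) \
        else service_name
    return services.get(name, False)
```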

Simultaneous Multithreading (SMT) combiner

Combiner for Simultaneous Multithreading (SMT). It uses the results of the following parsers: insights.parsers.smt.CpuCoreOnline, insights.parsers.smt.CpuSiblings.

class insights.combiners.smt.CpuTopology(cpu_online, cpu_siblings)[source]

Bases: object

Class for collecting the online/siblings status for all CPU cores.

Sample output of the CpuCoreOnline parser is:

[[Core 0: Online], [Core 1: Online], [Core 2: Online], [Core 3: Online]]

Sample output of the CpuSiblings parser is:

[[Core 0 Siblings: [0, 2]], [Core 1 Siblings: [1, 3]], [Core 2 Siblings: [0, 2]], [Core 3 Siblings: [1, 3]]]
cores

List of all cores.

Type

list of dictionaries

all_solitary

True, if hyperthreading is not used.

Type

bool

Examples

>>> type(cpu_topology)
<class 'insights.combiners.smt.CpuTopology'>
>>> cpu_topology.cores == [{'online': True, 'siblings': [0, 2]}, {'online': True, 'siblings': [1, 3]}, {'online': True, 'siblings': [0, 2]}, {'online': True, 'siblings': [1, 3]}]
True
>>> cpu_topology.all_solitary
False
online(core_id)[source]

Returns bool value obtained from “online” file for given core_id.

siblings(core_id)[source]

Returns list of siblings for given core_id.
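The all_solitary flag described above amounts to checking that no online core shares its physical core with another logical CPU. A sketch over the cores structure shown in the example (illustrative, not the actual source):

```python
def all_solitary(cores):
    """True when hyperthreading is not in use, i.e. no online core
    reports more than one entry (itself) in its siblings list."""
    return all(len(core['siblings']) <= 1
               for core in cores if core['online'])
```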

Tmpfilesd configuration

The tmpfilesd files are normally available to rules as a list of TmpFilesd objects. This combiner turns those into one set of data, and provides a find_file() method to search for that filename among all the files.

class insights.combiners.tmpfilesd.AllTmpFiles(tmpfiles)[source]

Bases: object

Combiner for accessing all the tmpfilesd configuration files. Configuration files can be found in three directories: /usr/lib/tmpfiles.d, /run/tmpfiles.d, and /etc/tmpfiles.d. Each directory overrides the settings in the previous directory. This combiner checks for and accounts for this behavior.

files

the set of files found in all data files.

Type

set

active_rules

a dictionary of rules using the config file as the key

Type

dict

file_paths

a list of the file paths for the configurations.

Type

list

find_file(path)[source]

Find all the rules matching a given file. Uses the rules dictionary to search so duplicate files are already removed.

Examples

>>> data = shared[AllTmpFiles]
>>> results = data.find_file('/tmp/sap.conf')
>>> len(results)
1
>>> results
{'/etc/tmpfiles.d/sap.conf': {'path': '/tmp/sap.conf', 'mode': '644', 'type': 'x', 'age': None,
 'gid': None, 'uid': None, 'argument': None}}
Parameters

path (str) -- path to be searched for among the rules.

Returns

a dictionary of rules where the path is found using the config file path as the key.

Return type

(dict)
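The directory-override behavior described above can be sketched as a simple precedence merge. This is an illustrative stand-in, not the combiner's actual code; the paths and the merge logic are assumptions based only on the precedence order stated in the class docstring.

```python
import os

# Later directories in this list win over earlier ones.
PRECEDENCE = ['/usr/lib/tmpfiles.d', '/run/tmpfiles.d', '/etc/tmpfiles.d']

def effective_configs(found_paths):
    """Map base file name -> winning full path, honoring precedence."""
    rank = {d: i for i, d in enumerate(PRECEDENCE)}
    winners = {}
    for path in found_paths:
        d, name = os.path.split(path)
        if (name not in winners or
                rank.get(d, -1) > rank.get(os.path.dirname(winners[name]), -1)):
            winners[name] = path
    return winners

paths = ['/usr/lib/tmpfiles.d/sap.conf', '/etc/tmpfiles.d/sap.conf']
print(effective_configs(paths))  # {'sap.conf': '/etc/tmpfiles.d/sap.conf'}
```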

Uptime

Combiner for uptime information. It uses the results of the Uptime parser and the Facter parser to get the uptime information. Uptime is the preferred source of data.

Examples

>>> ut = shared[uptime]
>>> ut.updays
21
>>> ut.uphhmm
'03:45'
class insights.combiners.uptime.Uptime(currtime, updays, uphhmm, users, loadavg, uptime)

Bases: tuple

namedtuple: Type for storing the uptime information.

property currtime
property loadavg
property updays
property uphhmm
property uptime
property users
insights.combiners.uptime.uptime(ut, facter)[source]

Check uptime and facts to get the uptime information.

Prefer uptime to facts.

Returns

A named tuple with currtime, updays, uphhmm, users, loadavg and uptime components.

Return type

insights.combiners.uptime.Uptime

Raises

Exception -- If no data is available from both of the parsers.
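The "prefer Uptime to Facter" fallback described above can be sketched as below. The parser objects here are plain stand-ins for the real parser results; the raise mirrors the documented Exception when neither source is available.

```python
def combined_uptime(ut, facter):
    """Return uptime data, preferring the Uptime parser to Facter."""
    if ut is not None:
        return ut
    if facter is not None:
        return facter
    raise Exception('Unable to get uptime information from Uptime or Facter.')

print(combined_uptime({'updays': 21}, None))  # {'updays': 21}
```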

UserNamespaces - Check whether user namespaces are enabled

This combiner reports whether user namespaces are enabled for the running system per the kernel command line, and if grub info is available, for which boot configurations they’re enabled.

The user namespaces feature was introduced in kernel version 3.8, first shipped in RHEL 7.x. It is built into the kernel, but requires an option to enable:

user_namespaces.enable=1

There are a few checks which are presently left to callers:
  • enabled() doesn’t check whether the kernel supports user namespaces, only that the command line argument to enable them is present.

  • There is currently no attempt to relate the current kernel to a grub entry, which may be useful to know, if e.g. grub configuration has been changed, but the system needs a reboot.
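Per the first caveat above, the enabled() check only looks for the enabling argument on the kernel command line. A minimal sketch of that check, operating on a raw command-line string rather than the parsed cmdline data the real combiner uses:

```python
def user_namespaces_enabled(cmdline):
    """True if user_namespaces.enable=1 appears on the kernel command line."""
    return 'user_namespaces.enable=1' in cmdline.split()

print(user_namespaces_enabled(
    'BOOT_IMAGE=/vmlinuz root=/dev/sda1 user_namespaces.enable=1'))  # True
```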

class insights.combiners.user_namespaces.UserNamespaces(cmdline, grub2)[source]

Bases: object

A combiner which determines if user namespaces are enabled.

enabled()[source]

Determine whether user namespaces are enabled or not.

Returns

True if user namespaces are enabled, False if they aren’t or if user namespaces aren’t supported by this kernel.

Return type

bool

enabled_configs()[source]

Get boot configs for which user namespaces are enabled.

Returns

A list of grub menu entries in which user namespaces are enabled. An empty list if user namespaces aren’t supported or grub data isn’t available.

Return type

list

VirtWhat

Combiner to check if the host is running on a virtual or physical machine. It uses the results of the DMIDecode and VirtWhat parsers. Prefer VirtWhat to DMIDecode.

Examples

>>> vw = shared[VirtWhat]
>>> vw.is_virtual
True
>>> vw.is_physical
False
>>> vw.generic
'kvm'
>>> vw.amended_generic
'rhev'
>>> 'aws' in vw
False
class insights.combiners.virt_what.VirtWhat(dmi, vw)[source]

Bases: object

A combiner for checking if this machine is virtual or physical by checking virt-what or dmidecode command.

Prefer virt-what to dmidecode.

is_virtual

True if running in a virtual machine.

Type

bool

is_physical

True if running on a physical machine.

Type

bool

generic

The type of the virtual machine, or ‘baremetal’ for a physical machine.

Type

str

specifics

List of the specific information.

Type

list

amended_generic

The type of the virtual machine, or ‘baremetal’ for a physical machine. Added to address an issue with virt_what/dmidecode when identifying ‘ovirt’ vs ‘rhev’. It will match the generic attribute in all other cases.

Type

str

virt-who configuration

virt-who can accept configuration from several sources listed below in order of precedence.

Configuration sources of virt-who:

1. command line             # Ignored
2. environment variables    # Ignored
3. /etc/sysconfig/virt-who  # VirtWhoSysconfig
4. /etc/virt-who.d/*.conf   # VirtWhoConf
5. /etc/virt-who.conf       # VirtWhoConf
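The precedence order above means that, of the parsed sources, options from /etc/sysconfig/virt-who override those from the virt-who configuration files. An illustrative merge (the option names here are examples, not the combiner's actual internals):

```python
def merge_options(conf_options, sysconfig_options):
    """Combine option dicts; the sysconfig values win on any conflict."""
    merged = dict(conf_options)
    merged.update(sysconfig_options)
    return merged

# oneshot=True from /etc/virt-who.conf loses to VIRTWHO_ONE_SHOT=0.
conf = {'oneshot': True, 'interval': 0}
sysconfig = {'oneshot': False, 'interval': 3600}
print(merge_options(conf, sysconfig))  # {'oneshot': False, 'interval': 3600}
```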
class insights.combiners.virt_who_conf.AllVirtWhoConf(svw, vwc)[source]

Bases: object

Combiner for accessing part of valid virt-who configurations

Sample content of /etc/sysconfig/virt-who:

# Start virt-who on background, perform doublefork and monitor for virtual guest
# events (if possible). It is NOT recommended to turn off this option for
# starting virt-who as service.
VIRTWHO_BACKGROUND=1
# Enable debugging output.
VIRTWHO_DEBUG=1
# Send the list of guest IDs and exit immediately.
VIRTWHO_ONE_SHOT=0
# Acquire and send list of virtual guest each N seconds, 0 means default
# configuration.
VIRTWHO_INTERVAL=3600
# Virt-who subscription manager backend. Enable only one option from the following:
# Report to Subscription Asset Manager (SAM) or the Red Hat Customer Portal
# Report to Satellite version 6
VIRTWHO_SATELLITE6=1

Sample content of /etc/virt-who.conf:

[global]
debug=False
oneshot=True
[defaults]
owner=test

Sample content of /etc/virt-who.d/satellite_esx.conf:

[esx_satellite]
type=esx
server=10.0.0.1
owner=Satellite
env=Satellite

Examples

>>> shared = {VirtWhoSysconfig: vw_sys, VirtWhoConf: [vw_conf, vwd_conf]}
>>> all_vw_conf = AllVirtWhoConf(None, shared)
>>> all_vw_conf.background
True
>>> all_vw_conf.oneshot
False  # `/etc/sysconfig/virt-who` has higher priority
>>> all_vw_conf.interval
3600
>>> all_vw_conf.sm_type
'sat6'
>>> all_vw_conf.hypervisors
[{'name': 'esx_satellite', 'server': '10.0.0.1',
  'owner': 'Satellite', 'env': 'Satellite', 'type': 'esx'}]
>>> all_vw_conf.hypervisor_types
['esx']
background

is virt-who running as a service

Type

boolean

oneshot

is virt-who running in one-shot mode or not

Type

boolean

interval

how often to check connected hypervisors for changes (seconds)

Type

int

sm_type

what subscription manager will virt-who report to

Type

str

hypervisors

list of dicts, one per connected hypervisor

Type

list

hypervisor_types

list of the connected hypervisors’ types

Type

list

X86PageBranch - combiner for x86 kernel features:

The x86 kernel features include:
  • PTI (Page Table Isolation)

  • IBPB (Indirect Branch Prediction Barrier)

  • IBRS (Indirect Branch Restricted Speculation)

This combiner reads information from debugfs:

Examples

>>> type(dv)
<class 'insights.combiners.x86_page_branch.X86PageBranch'>
>>> dv.pti
1
>>> dv.ibpb
3
>>> dv.ibrs
2
>>> dv.retp
0
Attributes:

pti (int): The value parsed from ‘/sys/kernel/debug/x86/pti_enabled’

ibpb (int): The value parsed from ‘/sys/kernel/debug/x86/ibpb_enabled’

ibrs (int): The value parsed from ‘/sys/kernel/debug/x86/ibrs_enabled’

retp (int): The value parsed from ‘/sys/kernel/debug/x86/retp_enabled’

class insights.combiners.x86_page_branch.X86PageBranch(pti_enabled, ibpb_enabled, ibrs_enabled, retp_enabled)[source]

Bases: object

This combiner provides an interface to the three X86 Page Table/Branch Prediction parsers. If retp_enabled is not available, self.retp is None.
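A sketch of how one of the debugfs flags above might be read. The real combiner consumes parser results rather than reading debugfs directly; returning None mirrors the documented behavior when retp_enabled is unavailable.

```python
import os

def read_x86_flag(path):
    """Return the integer value in a debugfs file, or None if missing."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return int(f.read().strip())

# Returns None on systems where debugfs is not mounted or readable.
print(read_x86_flag('/sys/kernel/debug/x86/pti_enabled'))
```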

Shared Components Catalog

Contents:

IsOpenStackCompute

The IsOpenStackCompute component uses the PsAuxcww parser to determine whether the host is an OpenStack Compute node. It checks if the ‘nova-compute’ process exists and, if not, raises SkipComponent so that dependent components will not fire. It can be added as a dependency of a parser so that the parser only fires if the IsOpenStackCompute dependency is met.

class insights.components.openstack.IsOpenStackCompute(ps)[source]

Bases: object

The IsOpenStackCompute component uses the PsAuxcww parser to determine whether the host is an OpenStack Compute node. It checks if the nova-compute process exists and raises SkipComponent if it does not.

Raises

SkipComponent -- When nova-compute process does not exist.
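The SkipComponent pattern described above can be sketched as follows. The exception class here is a stand-in for insights.core.dr.SkipComponent, and the function is a simplified stand-in for the component logic, not its actual implementation.

```python
class SkipComponent(Exception):
    """Stand-in for insights.core.dr.SkipComponent."""

def is_openstack_compute(process_names):
    """Raise SkipComponent unless nova-compute is among running processes."""
    if 'nova-compute' not in process_names:
        raise SkipComponent('nova-compute process does not exist')
    return True

print(is_openstack_compute(['sshd', 'nova-compute']))  # True
```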

IsRhel6, IsRhel7 and IsRhel8

An IsRhel* component is valid if the insights.combiners.redhat_release.RedHatRelease combiner indicates the major RHEL version represented by the component. Otherwise, it raises a insights.core.dr.SkipComponent to prevent dependent components from executing.

In particular, an IsRhel* component can be added as a dependency of a parser to limit it to a given version.

class insights.components.rhel_version.IsRhel6(rhel)[source]

Bases: object

This component uses the RedHatRelease combiner to determine the RHEL version. It raises SkipComponent if the version is not RHEL6.

Raises

SkipComponent -- When RHEL version is not RHEL6.

class insights.components.rhel_version.IsRhel7(rhel)[source]

Bases: object

This component uses the RedHatRelease combiner to determine the RHEL version. It raises SkipComponent if the version is not RHEL7.

Raises

SkipComponent -- When RHEL version is not RHEL7.

class insights.components.rhel_version.IsRhel8(rhel)[source]

Bases: object

This component uses the RedHatRelease combiner to determine the RHEL version. It raises SkipComponent if the version is not RHEL8.

Raises

SkipComponent -- When RHEL version is not RHEL8.

Openshift 4 Analysis

OpenShift 4 can generate diagnostic archives with a component called the insights-operator. They are automatically uploaded to Red Hat for analysis.

The openshift-must-gather CLI tool produces more comprehensive archives than the operator. insights-core recognizes them as well.

If you have an insights-operator or must-gather archive, you can write rules for it by using the insights.ocp.ocp() component, which gives you a top level view of all of the collected cluster configuration in a form that’s easy to navigate and query.

You can access the entire configuration interactively with insights inspect insights.ocp.ocp <archive> or insights ocpshell <archive>.

Top level OpenShift 4 component

The conf() component recognizes insights-operator and must-gather archives.

insights.ocp.conf(io, mg)[source]

The conf component parses all configuration in an insights-operator or must-gather archive and returns an object that is part of the parsr common data model. It can be navigated and queried in a standard way. See the tutorial for details.

Documentation Guidelines

Shared Parsers and Combiners developed for the insights-core component of Insights are documented via comments in the code. This makes it easier to produce documentation that is consistent and up to date. The insights-core project uses Sphinx and reStructuredText for documentation generation. Sphinx can create documentation in multiple output formats, including documentation that can be easily published on websites like Read The Docs. Developers should follow a few simple steps when creating or modifying code to be merged into the insights-core project. First, provide useful comments that allow a user of your parser to understand what it does and how to use it. Second, follow the style chosen for the insights-core project. Third, test your docs by generating them and making sure they are correct.

This document demonstrates a parser, but combiners may be documented following the same guidelines and examples.

Goal of Documentation in Code

The goal of these guidelines is to provide a standard for documentation of insights-core code. Having a standard for the code documentation (docstrings) makes it more likely that the code will be used and helps reduce the number of problems encountered by developers. Using a standard format for docstrings also ensures that the project documentation generated from the code will be useful and current with the code.

When adding comments to the code, make the information easy for a developer to find. This means putting it at the top, in the module section, and moving documentation down into each parser only when it is specific to that parser. For instance, if there is only one parser in a module then almost all of the documentation will be in the module section. If there are multiple parsers that are very similar then most of the documentation will be in the module section and only the unique details will be in each parser’s class.

Look at the example code in this article and also review the source files for these parsers and see how the documentation has been organized.

Example Parser

The insights-core project follows the Google Docstring Style for docstring comments. The following code provides an example of how docstring comments should be used in code contributed to the insights-core project:

"""
lspci - Command
===============

This module provides plugins access to the PCI device information gathered from
the ``/usr/sbin/lspci`` command.

Typical output of the ``lspci`` command is::

    00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
    00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
    03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)
    0d:00.0 System peripheral: Ricoh Co Ltd PCIe SDXC/MMC Host Controller (rev 07)

The data is exposed via the ``obj.lines`` attribute which is a list containing
each line in the output.  The data may also be filtered using the
``obj.get("filter string")`` method.  This method will return a list of lines
containing only "filter string".  The ``in`` operator may also be used to test
whether a particular string is in the ``lspci`` output.  Other methods/operators
are also supported, see the :py:class:`insights.core.LogFileOutput` class for more information.

Examples:
    >>> pci_info.get("Intel Corporation")
    ['00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)', '00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)', '03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)']
    >>> len(pci_info.get("Network controller"))
    1
    >>> "Centrino Advanced-N 6205" in pci_info
    True
    >>> "0d:00.0" in pci_info
    True
"""
from .. import LogFileOutput, parser
from insights.specs import Specs


@parser(Specs.lspci)
class LsPci(LogFileOutput):
    """Parses output of the ``lspci`` command."""
    pass

One thing to note here is that the output of each example code line is tested literally against the output given in the documentation. This means you cannot split lines, and a dictionary will almost certainly not be listed in the order you give it.

Docstring Details

Google Docstring Style is used for specific elements of the docstring, but generally reStructuredText is used for all formatting. The following subsections describe details of the docstrings in the example code.

Title

"""
lspci - Command
===============

The module docstring begins at the first line of the file with three double quotes. The second line gives the name of the module and a descriptive phrase. In this case the file is lspci.py, the module is lspci, and it is a command. An example for a file parser would be file fstab.py, module name fstab, and descriptive phrase ‘File /etc/fstab’. The module name line is followed by a line of = characters that is the same length as the entire module line. A blank line follows the module information.

Description

This module provides plugins access to the PCI device information gathered from
the ``/usr/sbin/lspci`` command.

Typical output of the ``lspci`` command is::

    00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
    00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
    03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)
    0d:00.0 System peripheral: Ricoh Co Ltd PCIe SDXC/MMC Host Controller (rev 07)

The data is exposed via the ``obj.lines`` attribute which is a list containing
each line in the output.  The data may also be filtered using the
``obj.get("filter string")`` method.  This method will return a list of lines
containing only "filter string".  The ``in`` operator may also be used to test
whether a particular string is in the ``lspci`` output.  Other methods/operators
are also supported, see the :py:class:`insights.core.LogFileOutput` class for more information.

Next comes the description of the module. Since this description is the first thing a developer sees when viewing the documentation, it is important that it is clear, concise, and useful. Include elements of the module that would not be obvious from looking at the code. This description should provide an overview that complements the detail shown in the Examples section. If there are multiple parsers in the module, this section should provide a brief description of each parser. If parser input is similar for each parser then code samples can be shown in the module description and/or in the Examples. If there are important details in the output for each parser then put that information in the class docstrings instead. You may use multiple Examples sections in the module description if necessary to fully demonstrate usage of the parser.

Notes/References

Note:
    The examples in this module may be executed with the following command:

    ``python -m insights.parsers.lspci``

Module notes and/or references are not necessary unless there is information that should be included to aid a developer in understanding the parser. In this particular case the note is included only to tell the reader of this sample code that the Examples section can be executed using doctest. It is not recommended to include this note in contributed code regardless of whether the code is doctest compatible.

Examples

Examples:
    >>> pci_info.get("Intel Corporation")
    ['00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)', '00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)', '03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)']
    >>> len(pci_info.get("Network controller"))
    1
    >>> "Centrino Advanced-N 6205" in pci_info
    True
    >>> "0d:00.0" in pci_info
    True
"""

This section of the documentation is the most important because of the information it conveys to the reader. Make sure to include examples that show use of the parser to access the facts it provides. You can ensure that the examples are accurate by executing them in the Python interactive shell. If you implement an Examples section including input data as shown in the above code, you can use the doctest utility to execute and test your example documentation. It is not necessary to include the input in both the comments and the examples; simply refer from the comments to the input samples provided in the Examples section.

Testing your examples

To test this documentation automatically, this code should go in the associated tests/test_lspci.py file:

from insights.parsers import lspci
from insights.tests import context_wrap
import doctest

LSPCI_DOCS_EXAMPLE = '''
00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)
0d:00.0 System peripheral: Ricoh Co Ltd PCIe SDXC/MMC Host Controller (rev 07)
'''

def test_lspci_documentation():
    env = {
        'lspci': lspci.LsPci(context_wrap(LSPCI_DOCS_EXAMPLE)),
    }
    failed, total = doctest.testmod(lspci, globs=env)
    assert failed == 0

This causes the tests to fail if the documentation examples fail for any reason. If that occurs the tests will output detailed information about problems in execution or the differences between expected and actual output.

Testing Your Docstring

Note

The documentation build is not supported on versions of Python older than 2.7 because at least one of the modules needed to render the Jupyter notebooks does not support them. Build the documentation with Python 2.7 or later.

Once you have implemented a parser with the recommended documentation style you will need to include it in the insights-core documentation. You can do this by creating a file in the directory insights-core/docs/shared_parsers_catalog/ that has the same name as your parser module name, except with a .rst extension instead of a .py extension. For example if your parser module is named your_parser.py then create a file insights-core/docs/shared_parsers_catalog/your_parser.rst and include the following three lines in the file:

.. automodule:: insights.parsers.your_parser
   :members:
   :show-inheritance:

Once you have created this file, switch to the directory insights-core/docs and type the following commands to create the HTML documentation:

$ make clean
$ make html_debug

If you have errors in your comments you may see them in the output of the make command. Sphinx will only report errors if it cannot parse the comments. If you notice a message similar to the following you may safely ignore it:

"Didn't find BlockIDInfo.data in insights-core.parser.blkid"

Once the make command executes without any error messages the next step is to review the generated HTML and ensure that it looks correct. The generated HTML is located in insights-core/docs/_build/html/. You may view the files in a browser such as Firefox by executing the following command from the html directory:

$ firefox index.html

If you prefer to view the HTML through a web server, you may also start a basic web server on port 8000 in the html directory by executing the following command:

$ python -m SimpleHTTPServer 8000

On Python 3, the equivalent command is python3 -m http.server 8000.

Once you have verified that the documentation was created correctly, check in your code and the .rst file and submit a pull request.

Rendered HTML

The following show how the lspci module documentation is rendered as HTML.

LSPCI Parser Module Web Page

References

Components Cross-Reference

Specs Dependents

Parser Dependents/Dependencies

Combiner Dependents/Dependencies

Embedded Content

insights-core separates the return of results from their rendering. This separation allows applications using insights-core to produce output appropriate to the application. So, for example, the Red Hat Insights product utilizes several content artifacts (general descriptions, specific descriptions, resolutions, etc.) while an internal diagnostic system may have a single description. Each application may also use different rendering or templating systems for their UI layer. Finally, internationalization of responses may be required.

Because of this separation, the methods that return results (insights.core.plugins.make_pass and insights.core.plugins.make_fail) should not provide formatting themselves. Instead, keyword arguments (kwargs) are used to pass information from the plugin to the caller for interpretation.

However, this separation adds unneeded complexity in the case of creating rules for individual, command line use. For this use-case, a conventional approach using a simple string or dictionary is available to embed content into the rule code.

CONTENT Attribute

To embed content within a rule, create a CONTENT attribute on the rule module. This attribute can be a string or a dictionary.

CONTENT as a String

When CONTENT is a string, it is interpreted as a jinja2 template, and the kwargs of the make_* functions are interpolated into it. This would take place for all responses in the module. If you want to scope content to a particular rule, set the content named argument in the @rule declaration to the desired jinja2 string.
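The interpolation described above can be illustrated with a tiny stdlib stand-in for jinja2: the kwargs passed to the make_* functions are substituted into {{name}} placeholders in the template. The real feature uses jinja2 templates, which are far more capable than this sketch.

```python
import re

def render(template, **kwargs):
    """Replace each {{name}} placeholder with the matching kwarg value."""
    return re.sub(r'\{\{\s*(\w+)\s*\}\}',
                  lambda m: str(kwargs.get(m.group(1), '')), template)

CONTENT = "Bash Bug Check: {{bash}}"
print(render(CONTENT, bash='bash-4.4.23-1.fc28'))  # Bash Bug Check: bash-4.4.23-1.fc28
```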

CONTENT as a Dictionary

When CONTENT is a dictionary, there are three possible ways to associate template strings with results. A template can be keyed by an error_key (the first argument of the make_pass or make_fail functions), by the make_pass and make_fail functions themselves, or by both.

Dictionary keys which are references to make_pass or make_fail can be used to specify jinja2 template strings. These would apply to any make_* results in the module.

CONTENT = {
        make_pass: "Good bash version: {{bash}}",
        make_fail: "Bad bash version: {{bash}}"
}

Alternatively, one can use the error_key returned by the respective make_* functions to specify a jinja2 template string.

CONTENT = {
     ERROR_KEY_GOOD_BASH: "Good bash version: {{bash}}",
     ERROR_KEY_BAD_BASH: "Bad bash version: {{bash}}"
}

Finally, one can use the error_key to specify a make_* set of jinja2 strings through an inner dictionary.

CONTENT = {
    ERROR_KEY_BASH: {
            make_pass: "Good bash version: {{bash}}",
            make_fail: "Bad bash version: {{bash}}"
    },
    ERROR_KEY_TUNED: {
            make_pass: "Good tuned version: {{tuned}}",
            make_fail: "Bad tuned version: {{tuned}}"
    }
}

This allows one to use a single error_key with a pass/fail pair.
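The lookup over such a dictionary can be sketched as below. This is an illustration of the three forms, not the library's actual resolution code; the strings 'pass' and 'fail' stand in for the make_pass/make_fail functions used as keys.

```python
def find_template(content, error_key, response_type):
    """Resolve a template: try the error key, descend into nested dicts."""
    entry = content.get(error_key, content.get(response_type))
    if isinstance(entry, dict):
        entry = entry.get(response_type)
    return entry

CONTENT = {
    'SUPER_BASH_BUG': "Super Bash bug found! Version: {{bash}}",
    'BASH_BUG': {
        'pass': "Good bash version: {{bash}}",
        'fail': "Bad bash version: {{bash}}",
    },
}
print(find_template(CONTENT, 'BASH_BUG', 'fail'))  # Bad bash version: {{bash}}
```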

The Examples section provides additional examples of each type of CONTENT usage.

make_metadata Limitation

A limitation of the CONTENT attribute occurs when adding embedded content for the output of the make_metadata function. Since this method does not take an error_key, one must use the string version of the CONTENT attribute instead of the dictionary version.

In the case that you need multiple format strings in the module, you can use the content argument on the @rule decorator to specify the specific formatting for the rule.

Optional Dependencies

To use the CONTENT formatting feature, you will need to install the optional jinja2 module. In addition, when running insights on the command line, you may want to install the colorama package for colorized output. For example,

pip install colorama jinja2

Examples

A single string can be used for all results from a file. That is, the CONTENT attribute is applied without regard to the returned ERROR_KEY.

 from insights import rule, make_pass, make_fail
 from insights.parsers.installed_rpms import InstalledRpm, InstalledRpms

 CONTENT = "Bash Bug Check: {{bash}}"

 @rule(InstalledRpms)
 def check_bash_bug(rpms):
     bug_version = InstalledRpm.from_package('bash-4.4.14-1.any')
     fix_version = InstalledRpm.from_package('bash-4.4.18-1.any')
     current_version = rpms.get_max('bash')
     if bug_version <= current_version < fix_version:
         return make_fail('BASH_BUG', bash=current_version.nvr)
     else:
         return make_pass('BASH_BUG', bash=current_version.nvr)

The CONTENT string will be used for both the make_fail and make_pass responses, substituting the value of the bash kwarg (that is, current_version.nvr). In this case the string acts as a label, and the fail or pass classification indicates whether the version is an issue or not. Putting the above in a file, bash_bug.py, and running it on a system with a version outside the “bug” range results in

 % insights-run -p bash_bug
 ---------
 Progress:
 ---------
 P

 ---------------
 Rules Executed
 ---------------
 bash_bug.check_bash_bug - [PASS]
 -----------------------------------------
 Bash Bug Check: bash-4.4.23-1.fc28

 ...

For a system with the bug, the output would be

 % insights-run -p bash_bug
 ---------
 Progress:
 ---------
 R

 ---------------
 Rules Executed
 ---------------
 bash_bug.check_bash_bug - [FAIL]
 -----------------------------------------
 Bash Bug Check: bash-4.4.15-1.fc28

 ...

To make the distinction more explicit, or to return different output in the case of a pass or a fail, we use a dictionary for the CONTENT attribute.

 from insights import rule, make_pass, make_fail
 from insights.parsers.installed_rpms import InstalledRpm, InstalledRpms

 CONTENT = {
     make_fail: "Bash bug found! Version: {{bash}}",
     make_pass: "Bash bug not found: {{bash}}."
 }

 @rule(InstalledRpms)
 def check_bash_bug(rpms):
     bug_version = InstalledRpm.from_package('bash-4.4.14-1.any')
     fix_version = InstalledRpm.from_package('bash-4.4.18-1.any')
     current_version = rpms.get_max('bash')
     if bug_version <= current_version < fix_version:
         return make_fail('BASH_BUG', bash=current_version.nvr)
     else:
         return make_pass('BASH_BUG', bash=current_version.nvr)

With this version, the “pass” use case would generate output such as

 % insights-run -p bash_bug
 ---------
 Progress:
 ---------
 P

 ---------------
 Rules Executed
 ---------------
 bash_bug.check_bash_bug - [PASS]
 -----------------------------------------
 Bash bug not found: bash-4.4.23-1.fc28.

 ...

and the fail case would output

 % insights-run -p bash_bug
 ---------
 Progress:
 ---------
 R

 ---------------
 Rules Executed
 ---------------
 bash_bug.check_bash_bug - [FAIL]
 -----------------------------------------
 Bash bug found! Version: bash-4.4.15-1.fc28.

 ...

If you had multiple error keys and needed to distinguish between the content for them, instead of using the response classes as the CONTENT keys, you would use the error key values. If you needed to distinguish between the pass and failure states of a single key, use a dictionary with the response class as the keys.

 from insights import rule, make_pass, make_fail
 from insights.parsers.installed_rpms import InstalledRpm, InstalledRpms

 CONTENT = {
     # for any response with error key of 'SUPER_BASH_BUG'
     'SUPER_BASH_BUG': "Super Bash bug found! Version: {{bash}}",

     # distinguish between the response types of the 'BASH_BUG' error key.
     'BASH_BUG': {
         make_fail:"Bash bug found! Version: {{bash}}",
         make_pass: "Bash bug not found! Version: {{bash}}"
     }
 }

 @rule(InstalledRpms)
 def check_bash_bug(rpms):
     super_bug_version = InstalledRpm.from_package('bash-4.4.12-1.any')
     bug_version = InstalledRpm.from_package('bash-4.4.14-1.any')
     fix_version = InstalledRpm.from_package('bash-4.4.18-1.any')
     current_version = rpms.get_max('bash')
     if super_bug_version == current_version:
         return make_fail('SUPER_BASH_BUG', bash=current_version.nvr)

     if bug_version <= current_version < fix_version:
         return make_fail('BASH_BUG', bash=current_version.nvr)
     else:
         return make_pass('BASH_BUG', bash=current_version.nvr)

Tools

Insights Cat

The cat module allows you to execute an insights datasource and write its output to stdout. A string representation of the datasource is written to stderr before the output.

Options:

-c CONFIG --config CONFIG       Configure components.
-p PLUGINS --plugins PLUGINS    Comma-separated list (no spaces) of packages or modules containing plugins.
-q --quiet                      Only show commands or paths.
--no-header                     Don’t print command or path headers.
-D --debug                      Show debug level information.
spec                            Spec to dump.
archive                         Archive or directory to analyze.

Examples:

Outputs the information collected by the SPEC insights.specs.default.DefaultSpecs.redhat_release, including the header describing the SPEC type and value.

insights-cat redhat_release

Outputs the information collected by the SPEC insights.specs.default.DefaultSpecs.redhat_release with no header. This is useful when you want to collect the data in the same form a parser sees it.

insights-cat --no-header redhat_release

Outputs the information collected by the SPEC using the configuration information provided in configfile.yaml. See CONFIG(5) for more information on the specifics of the configuration file options and format.

insights-cat -c configfile.yaml redhat_release

The -D option produces a trace of the operations performed by insights-core as the SPEC is executed. The SPEC data is output after all of the debugging output.

insights-cat -D redhat_release

The -p option allows inclusion of additional modules by insights-core. By default the insights-cat command loads only the insights-core modules. In this example the file examples/rules/stand_alone.py includes a spec Specs.hosts. This command will execute the hosts spec in the examples file rather than the insights spec hosts. The -D option will show each module as it is loaded and the actual spec used for the command.

insights-cat -D -p examples.rules.stand_alone examples.rules.stand_alone.Specs.hosts

Multiple modules can be loaded with the -p option by separating them with commas.

insights-cat -D -p module1,module2,module3 spec_name

The -q switch will inhibit output of the command or file, and only show the spec type and the command to be executed or file to be collected. Use this switch when you are interested in the details of the spec and don’t care about the data.

insights-cat -q installed_rpms

More insights-cat examples can be found in insights.tools.cat

Insights Info

Allow users to interrogate components.

Options:

-c COMPONENTS  --components COMPONENTS     Comma separated list of components to get info about
-i --info                                  Comma separated list to get dependency info about
-p PLUGINS --preload PLUGINS               Comma separated list of packages or modules to preload
-d --pydoc                                 Show pydoc for the given object. E.g.: insights-info -d insights.rule
-k --pkg-query EXPRESSION                  Expression to select rules by package.
-s --source                                Show source for the given object. E.g.:
                                           insights-info -s insights.core.plugins.rule
-S --specs                                 Show specs for the given name. E.g.: insights-info -S uname
-t TYPES --types TYPES                     Filter results based on component type; e.g. 'rule,parser'.
                                           Names without '.' are assumed to be in insights.core.plugins.
--tags EXPRESSION                          An expression for selecting which loaded rules to run based on their tags.
-v --verbose                               Print component dependencies.

Examples:

Search for all datasources that might handle foo, bar, or baz files or commands along with all components that could be activated if they were present and valid.

1
insights-info foo bar baz


Display dependency information about the hosts datasource.

1
insights-info -i insights.specs.Specs.hosts

Display the pydoc information about the Hosts parser.

1
insights-info -d insights.parsers.hosts.Hosts

Insights Inspect

The inspect module allows you to execute an insights component (parser, combiner, rule or datasource) dropping you into an ipython session where you can inspect the outcome.

Options:

-c CONFIG --config  CONFIG      Configure components
-D --debug                      Show debug level information
component                       Component to inspect
archive                         Archive or directory to analyze

Examples:

Creates an ipython session to explore the Hostname parser

insights-inspect insights.parsers.hostname.Hostname

Creates an ipython session to explore the hostname combiner

insights-inspect insights.combiners.hostname.hostname

Creates an ipython session to explore the hostname spec

insights-inspect insights.specs.Specs.hostname

Creates an ipython session to explore the bash_version rule

insights-inspect examples.rules.bash_version.report

More insights-inspect examples can be found in insights.tools.insights_inspect

Component Configuration

Insights core components can be loaded, configured, and run with a yaml file passed to insights-run -c <config.yaml>.

The file contains three top level keys: default_component_enabled, packages, and configs.
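A minimal configuration exercising all three keys might look like the following sketch (the package and component names are illustrative, reusing examples that appear later in this section):

```yaml
default_component_enabled: true

packages:
    - examples.rules

configs:
    - name: examples.rules.stand_alone.report
      enabled: true
```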

default_component_enabled

default_component_enabled controls the enabled state of all components that are loaded either by default or with entries in the packages section. Defaults to true. If false, enable components by giving them entries in the configs section with an enabled key set to true.

packages

packages is a simple list of packages or modules to recursively load. The following example will load all modules beneath insights.parsers:

packages:
    - insights.parsers

configs

configs contains a list of dictionaries, each of which holds some configuration for a given component or set of components.

Each entry has the following keys.

  • name: the prefix to the name of a component or set of components to which this configuration should apply. For example, a name of insights would apply the configuration to all components with names beginning with insights. insights.combiners would scope the configuration to just the components beneath the combiners package. You can specify the exact name of a component if you wish. For example, name: examples.rules.stand_alone.report applies to just that report component in the stand_alone module.

  • enabled: can be true or false. Defaults to the value of default_component_enabled.

  • timeout: will set the timeout of any command-based datasource defined using the helper classes like simple_command and foreach_command.

  • tags: a list of strings you want to associate with the component. If the component has default tags already, they are overridden by what you specify here.

  • metadata: an arbitrary dictionary of data to associate with the component. If the component is a class and has class level attributes with the same names as the metadata keys, those attributes will be updated to the corresponding values. If the component is a function or doesn’t have a corresponding attribute for a metadata key, it can still access the metadata dictionary with meta = dr.get_metadata(component). This provides a way to configure components and parameterize rules with the configuration file instead of code changes.

Entries are applied from top to bottom, so you can have overall configuration for many components at the top and then specialize particular ones further down.
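For example, a broad entry can disable a whole package while a later, more specific entry re-enables one component within it (a sketch using the example names from this section):

```yaml
configs:
    # broad: applies to everything under examples.rules
    - name: examples.rules
      enabled: false

    # more specific entry further down wins for this one component
    - name: examples.rules.stand_alone.report
      enabled: true
```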

Examples

Load and run the stand_alone rules module

packages:
    - examples.rules.stand_alone

Load and run all of the example rules

packages:
    - examples.rules

Load and run all of the example rules with default_component_enabled set to false

Notice you have to explicitly enable all dependencies if default_component_enabled is false. We’d use this if generating a configuration to send to a client to execute so we have complete control over what gets run.

Note that we can even specify a particular class containing datasources to enable (insights.specs.Specs).

default_component_enabled: false

packages:
    - examples.rules

configs:
    - name: examples.rules
      enabled: true

    - name: insights.parsers
      enabled: true

    - name: insights.specs.Specs
      enabled: true

    - name: insights.specs.default
      enabled: true

    - name: insights.core.spec_factory
      enabled: true

Parameterize a rule

Say we had a rule like this

# example.py

from insights import dr, rule, make_fail, make_pass
from insights.parsers.df import DiskFree_AL


@rule(DiskFree_AL)
def report(disks):
    meta = dr.get_metadata(report)
    threshold = meta.get("threshold", 85.0)

    bad = {}
    for disk in disks:
        try:
            used, total = float(disk.used), float(disk.total)
            if total > 0.0:
                usage = (used / total) * 100
                if usage >= threshold:
                    bad[disk.mounted_on] = usage
        except (ValueError, TypeError):
            continue
    if bad:
        return make_fail("DISK_FULL_WARNING", bad=bad)
    return make_pass("DISK_FULL_WARNING")

we could parameterize threshold like this:

packages:
    - example

configs:
    - name: example.report
      metadata:
        threshold: 50.0

Parameterize a class

Say we had a class like this

# example.py

from insights import parser, Parser, rule, make_pass
from insights.specs import Specs


@parser(Specs.hostname)
class Hostname(Parser):
    upcase = False

    def parse_content(self, content):
        self.data = []
        for line in content:
            self.data.append((line.upper() if self.upcase else line).strip())


@rule(Hostname)
def show_hostname(hn):
    return make_pass("HOSTNAME", host=hn.data[0])

we can set the upcase class attribute from the yaml like this

packages:
    - example

configs:
    - name: example.Hostname
      metadata:
        upcase: true

Jupyter Notebook Examples

These jupyter notebooks are located in insights-core/docs/notebooks/ and may be executed by typing jupyter notebook from the insights-core directory and navigating to the docs/notebooks directory.

Configuration Trees

Most configurations can be modeled as trees where a node has a name, some optional attributes, and some optional children. This includes systems that use yaml, json, and ini as well as systems like httpd, nginx, multipath, logrotate and many others that have custom formats. Many also have a primary configuration file with supplementary files included by special directives in the main file.

We have developed parsers for common configuration file formats as well as the custom formats of many systems. These parsers all construct a tree of the same primitive building blocks, and their combiners properly handle include directives. The final configuration for a given system is a composite of the primary and supplementary configuration files.

Since the configurations are parsed to the same primitives to build their trees, we can navigate them all using the same API.

This tutorial will focus on the common API for accessing config trees. It uses httpd configuration as an example, but the API is exactly the same for other systems.

[1]:
from __future__ import print_function
import sys
sys.path.insert(0, "../..")
[2]:
from insights.combiners.httpd_conf import get_tree
from insights.parsr.query import *

conf = get_tree()

conf now contains the consolidated httpd configuration tree from my machine. The API that follows is exactly the same for the nginx, multipath, logrotate, and ini parsers. The YAML and JSON parsers expose the same API through a .doc attribute; they couldn’t do so directly for backward compatibility reasons.

Basic Navigation

The configuration can be treated in some sense like a dictionary:

[3]:
conf["Alias"]
[3]:
Alias: /icons/ /usr/share/httpd/icons/
Alias: /.noindex.html /usr/share/httpd/noindex/index.html
[4]:
conf["Directory"]
[4]:
[Directory /]
    AllowOverride: none
    Require: all denied

[Directory /var/www]
    AllowOverride: None
    Require: all granted

[Directory /var/www/html]
    Options: Indexes FollowSymLinks
    AllowOverride: None
    Require: all granted

[Directory /var/www/cgi-bin]
    AllowOverride: None
    Options: None
    Require: all granted

[Directory /usr/share/httpd/icons]
    Options: Indexes MultiViews FollowSymlinks
    AllowOverride: None
    Require: all granted

[Directory /home/*/public_html]
    AllowOverride: FileInfo AuthConfig Limit Indexes
    Options: MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    Require: method GET POST OPTIONS

[Directory /usr/share/httpd/noindex]
    AllowOverride: None
    Require: all granted
[5]:
conf["Directory"]["Options"]
[5]:
Options: Indexes FollowSymLinks
Options: None
Options: Indexes MultiViews FollowSymlinks
Options: MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec

Notice that the first pair of brackets are a query against the first level of the configuration tree. conf["Alias"] returns all of the “Alias” nodes. conf["Directory"] returns all of the “Directory” nodes.

A set of brackets after another set means to chain the queries using previous query results as the starting point. So, conf["Directory"]["Options"] first finds all of the “Directory” nodes, and then those are queried for their “Options” directives.

Complex Queries

In addition to simple queries that match node names, more complex queries are supported. For example, to get the “Directory” node for “/”, we can do the following:

[6]:
conf["Directory", "/"]
[6]:
[Directory /]
    AllowOverride: none
    Require: all denied

The comma constructs a tuple, so conf["Directory", "/"] and conf[("Directory", "/")] are equivalent. The first element of the tuple exactly matches the node name, and subsequent elements exactly match any of the node’s attributes. Notice that this is still a query, and the result behaves like a list:

[7]:
conf["Directory", "/", "/var/www"]
[7]:
[Directory /]
    AllowOverride: none
    Require: all denied

[Directory /var/www]
    AllowOverride: None
    Require: all granted

That’s asking for Directory nodes with any attribute exactly matching any of “/” or “/var/www”. These can be chained with more brackets just like the simpler queries shown earlier.

Predicates

In addition to exact matches, predicates can be used to better express what you want:

[8]:
conf["Directory", startswith("/var/www")]
[8]:
[Directory /var/www]
    AllowOverride: None
    Require: all granted

[Directory /var/www/html]
    Options: Indexes FollowSymLinks
    AllowOverride: None
    Require: all granted

[Directory /var/www/cgi-bin]
    AllowOverride: None
    Options: None
    Require: all granted
[9]:
conf[contains("Icon")]
[9]:
AddIconByEncoding: (CMP,/icons/compressed.gif) x-compress x-gzip
AddIconByType: (TXT,/icons/text.gif) text/*
AddIconByType: (IMG,/icons/image2.gif) image/*
AddIconByType: (SND,/icons/sound2.gif) audio/*
AddIconByType: (VID,/icons/movie.gif) video/*
AddIcon: /icons/binary.gif .bin .exe
AddIcon: /icons/binhex.gif .hqx
AddIcon: /icons/tar.gif .tar
AddIcon: /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv
AddIcon: /icons/compressed.gif .Z .z .tgz .gz .zip
AddIcon: /icons/a.gif .ps .ai .eps
AddIcon: /icons/layout.gif .html .shtml .htm .pdf
AddIcon: /icons/text.gif .txt
AddIcon: /icons/c.gif .c
AddIcon: /icons/p.gif .pl .py
AddIcon: /icons/f.gif .for
AddIcon: /icons/dvi.gif .dvi
AddIcon: /icons/uuencoded.gif .uu
AddIcon: /icons/script.gif .conf .sh .shar .csh .ksh .tcl
AddIcon: /icons/tex.gif .tex
AddIcon: /icons/bomb.gif core.
AddIcon: /icons/back.gif ..
AddIcon: /icons/hand.right.gif README
AddIcon: /icons/folder.gif ^^DIRECTORY^^
AddIcon: /icons/blank.gif ^^BLANKICON^^
DefaultIcon: /icons/unknown.gif
[10]:
conf[contains("Icon"), contains("zip")]
[10]:
AddIconByEncoding: (CMP,/icons/compressed.gif) x-compress x-gzip
AddIcon: /icons/compressed.gif .Z .z .tgz .gz .zip

Predicates can be combined with boolean logic. Here are all the top level nodes with “Icon” in the name and attributes that contain “CMP” and “zip”. Note the helper any_, which means the predicate must succeed for at least one attribute (there’s also an all_, which requires it to succeed for every attribute).

[11]:
conf[contains("Icon"), any_(contains("CMP")) & any_(contains("zip"))]
[11]:
AddIconByEncoding: (CMP,/icons/compressed.gif) x-compress x-gzip

Here are the entries with all attributes not starting with “/”

[12]:
conf[contains("Icon"), all_(~startswith("/"))]
[12]:
AddIconByEncoding: (CMP,/icons/compressed.gif) x-compress x-gzip
AddIconByType: (TXT,/icons/text.gif) text/*
AddIconByType: (IMG,/icons/image2.gif) image/*
AddIconByType: (SND,/icons/sound2.gif) audio/*
AddIconByType: (VID,/icons/movie.gif) video/*

Several predicates are provided: startswith, endswith, contains, matches, lt, le, gt, ge, and eq. They can all be negated with ~ (not) and combined with & (boolean and) and | (boolean or).
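As a simplified, self-contained model of these combinator semantics (the real insights.parsr.query implementation is richer), predicates can be thought of as callables that support ~, &, and |, and that treat an exception raised while testing a value as False:

```python
class Pred:
    """Toy predicate supporting ~, &, and | combination."""

    def __init__(self, fn):
        self.fn = fn

    def __call__(self, value):
        try:
            return bool(self.fn(value))
        except Exception:
            # a predicate that raises on a value is treated as False for it
            return False

    def __and__(self, other):
        return Pred(lambda v: self(v) and other(v))

    def __or__(self, other):
        return Pred(lambda v: self(v) or other(v))

    def __invert__(self):
        return Pred(lambda v: not self(v))


def startswith(prefix):
    return Pred(lambda v: v.startswith(prefix))


def contains(sub):
    return Pred(lambda v: sub in v)


# names containing "Icon" that do not start with "/"
p = contains("Icon") & ~startswith("/")
print(p("AddIconByType"))    # True
print(p("/icons/text.gif"))  # False
```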

It’s also possible to filter results based on whether they’re a Section or a Directive.

[13]:
conf.find(startswith("Directory"))
[13]:
[Directory /]
    AllowOverride: none
    Require: all denied

[Directory /var/www]
    AllowOverride: None
    Require: all granted

[Directory /var/www/html]
    Options: Indexes FollowSymLinks
    AllowOverride: None
    Require: all granted

DirectoryIndex: index.html

[Directory /var/www/cgi-bin]
    AllowOverride: None
    Options: None
    Require: all granted

[Directory /usr/share/httpd/icons]
    Options: Indexes MultiViews FollowSymlinks
    AllowOverride: None
    Require: all granted

[Directory /home/*/public_html]
    AllowOverride: FileInfo AuthConfig Limit Indexes
    Options: MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    Require: method GET POST OPTIONS

[Directory /usr/share/httpd/noindex]
    AllowOverride: None
    Require: all granted
[14]:
query = startswith("Directory")
print("Directives:")
print(conf.find(query).directives)
print()
print("Sections:")
print(conf.find(query).sections)
print()
print("Chained filtering:")
print(conf.find(query).sections["Options"])
Directives:
DirectoryIndex: index.html

Sections:
[Directory /]
    AllowOverride: none
    Require: all denied

[Directory /var/www]
    AllowOverride: None
    Require: all granted

[Directory /var/www/html]
    Options: Indexes FollowSymLinks
    AllowOverride: None
    Require: all granted

[Directory /var/www/cgi-bin]
    AllowOverride: None
    Options: None
    Require: all granted

[Directory /usr/share/httpd/icons]
    Options: Indexes MultiViews FollowSymlinks
    AllowOverride: None
    Require: all granted

[Directory /home/*/public_html]
    AllowOverride: FileInfo AuthConfig Limit Indexes
    Options: MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    Require: method GET POST OPTIONS

[Directory /usr/share/httpd/noindex]
    AllowOverride: None
    Require: all granted


Chained filtering:
Options: Indexes FollowSymLinks
Options: None
Options: Indexes MultiViews FollowSymlinks
Options: MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec

Notice that conf[startswith("Dir")].sections is not the same as conf.sections[startswith("Dir")]. The first finds all the top level nodes that start with “Dir” and then filters those to just the sections. The second gets all of the top level sections and then searches their children for nodes starting with “Dir.”

[15]:
print("Top level Sections starting with 'Dir':")
print(conf[startswith("Dir")].sections)
print()
print("Children starting with 'Dir' of any top level Section:")
print(conf.sections[startswith("Dir")])
Top level Sections starting with 'Dir':
[Directory /]
    AllowOverride: none
    Require: all denied

[Directory /var/www]
    AllowOverride: None
    Require: all granted

[Directory /var/www/html]
    Options: Indexes FollowSymLinks
    AllowOverride: None
    Require: all granted

[Directory /var/www/cgi-bin]
    AllowOverride: None
    Options: None
    Require: all granted

[Directory /usr/share/httpd/icons]
    Options: Indexes MultiViews FollowSymlinks
    AllowOverride: None
    Require: all granted

[Directory /home/*/public_html]
    AllowOverride: FileInfo AuthConfig Limit Indexes
    Options: MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    Require: method GET POST OPTIONS

[Directory /usr/share/httpd/noindex]
    AllowOverride: None
    Require: all granted


Children starting with 'Dir' of any top level Section:
DirectoryIndex: index.html

Ignoring Case

All of the predicates parsr.query defines take an ignore_case keyword parameter. They also have versions with an i prefix that pass ignore_case=True for you. So startswith("abc", ignore_case=True) is the same as istartswith("abc"), etc.

It’s not possible to ignore case with simple dictionary-like access unless you use a predicate: conf[ieq("ifmodule")] gets all top level elements with a name equal to any case variant of “ifmodule”, whereas conf["ifmodule"] is a strict case match.

Attribute Access

If you don’t have any predicates for a node, and its name doesn’t conflict with an attribute of the underlying object (which should be rare), you can use attribute access to query for it.

[16]:
conf.doc.Directory.Require
[16]:
Require: all denied
Require: all granted
Require: all granted
Require: all granted
Require: all granted
Require: method GET POST OPTIONS
Require: all granted

Query by Children

If you want to query for nodes based on values of their children, you can use a where clause. It has a few different modes of use.

The first is the same name and value queries as before:

[17]:
conf.doc.Directory.where("Require", "denied")
[17]:
[Directory /]
    AllowOverride: none
    Require: all denied

The second is to use the make_child_query helper, which lets you combine multiple “top level” queries that include name and value queries.

[18]:
from insights.parsr.query import make_child_query as q

conf.doc.Directory.where(q("Require", "denied") | q("AllowOverride", "FileInfo"))
[18]:
[Directory /]
    AllowOverride: none
    Require: all denied

[Directory /home/*/public_html]
    AllowOverride: FileInfo AuthConfig Limit Indexes
    Options: MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    Require: method GET POST OPTIONS

Note you can continue the traversal after a where:

[19]:
res = conf.doc.Directory.where(q("Require", "denied") | q("AllowOverride", "FileInfo"))
res.Options
[19]:
Options: MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec

The name and value queries inside of q can contain all of the predicates we’ve seen before, and q instances can be combined with & and | and negated with ~.

If you need to compare multiple attributes with each other or with other parts of the config structure, you can pass a function or lambda to where, and it will be used to test each entry at the current level.

[20]:
conf.doc.Directory.where(lambda d: "denied" in d.Require.value or "FileInfo" in d.AllowOverride.value)
[20]:
[Directory /]
    AllowOverride: none
    Require: all denied

[Directory /home/*/public_html]
    AllowOverride: FileInfo AuthConfig Limit Indexes
    Options: MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    Require: method GET POST OPTIONS

Truth and Iteration

Nodes are “truthy” depending on whether they have children. They’re also iterable and indexable.

[21]:
res = conf["Blah"]
print("Boolean:", bool(res))
print("Length:", len(res))
print()
print("Iteration:")
for c in conf["Directory"]:
    print(c.value)
print()
print("Indexing:")
print(conf["Directory"][0].value)
print(conf["Directory"][first].value)
print(conf["Directory"][-1].value)
print(conf["Directory"][last].value)

Boolean: False
Length: 0

Iteration:
/
/var/www
/var/www/html
/var/www/cgi-bin
/usr/share/httpd/icons
/home/*/public_html
/usr/share/httpd/noindex

Indexing:
/
/
/usr/share/httpd/noindex
/usr/share/httpd/noindex

This is also true of conf itself:

[22]:
sorted(set(c.name for c in conf))
[22]:
['AddDefaultCharset',
 'AddIcon',
 'AddIconByEncoding',
 'AddIconByType',
 'Alias',
 'DNSSDEnable',
 'DefaultIcon',
 'Directory',
 'DocumentRoot',
 'EnableSendfile',
 'ErrorLog',
 'Files',
 'Group',
 'HeaderName',
 'IfModule',
 'IndexIgnore',
 'IndexOptions',
 'Listen',
 'LoadModule',
 'LocationMatch',
 'LogLevel',
 'ReadmeName',
 'ServerAdmin',
 'ServerRoot',
 'User']

Attributes

The individual results in a result set have a name, value, attributes, children, an immediate parent, a root, and context for their enclosing file that includes its path and their line within it. The code below shows different attributes available on individual entries and results.

[23]:
root = conf.find("ServerRoot")[0]
print("Node name:", root.name)
print("Value:", root.value) # gets the value if the entry has only one; raises an exception if it has more than one
print("Values:", conf.find("Options").values) # same as above except collects values of current results.
print()
print("Unique Values:", conf.find("Options").unique_values) # same as above except values are unique.
print()
print("Attributes:", root.attrs) # an entry may have multiple values
print("Children:", len(root.children)) #
print("Parent:", conf.find("Options")[0].parent.name)
print("Parents:", conf.find("Options").parents.values) # go up one level from the results.
print("Root:", "conf.find('LogFormat')[0].root # Omitted due to size") # root of current entry.
print("Roots:", "conf.find('LogFormat').roots # Omitted due to size")  # all roots of current results.
print("File: ", root.file_path) # path of the backing file. Not always available.
print("Original Line:", root.line) # raw line from the original source. Not always available.
print("Line Number:", root.lineno) # line number in source of the element. Not always available.
Node name: ServerRoot
Value: /etc/httpd
Values: ['Indexes FollowSymLinks', 'None', 'Indexes MultiViews FollowSymlinks', 'MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec', '-Indexes']

Unique Values: ['-Indexes', 'Indexes FollowSymLinks', 'Indexes MultiViews FollowSymlinks', 'MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec', 'None']

Attributes: ['/etc/httpd']
Children: 0
Parent: Directory
Parents: ['/var/www/html', '/var/www/cgi-bin', '/usr/share/httpd/icons', '/home/*/public_html', '^/+$']
Root: conf.find('LogFormat')[0].root # Omitted due to size
Roots: conf.find('LogFormat').roots # Omitted due to size
File:  /etc/httpd/conf/httpd.conf
Original Line: ServerRoot "/etc/httpd"
Line Number: 31
[24]:
port = conf.find("Listen").value
print(port)
print(type(port))
80
<class 'int'>

There’s also a .values property that will accumulate all of the attributes of multiple children that match a query. Multiple attributes from a single child are converted to a single string.

[25]:
conf["Directory"]["Options"].values
[25]:
['Indexes FollowSymLinks',
 'None',
 'Indexes MultiViews FollowSymlinks',
 'MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec']

Useful functions

In addition to brackets, config trees support other functions for querying and navigating.

find

find searches the entire tree for the query you provide and returns a Result of all elements that match.

[26]:
conf.find("ServerRoot")
[26]:
ServerRoot: /etc/httpd
[27]:
conf.find("Alias")
[27]:
Alias: /icons/ /usr/share/httpd/icons/
Alias: /.noindex.html /usr/share/httpd/noindex/index.html
[28]:
conf.find("LogFormat")
[28]:
LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" combined
LogFormat: %h %l %u %t "%r" %>s %b common
LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" %I %O combinedio

If you want the first or last match, access them with brackets as you would a list:

[29]:
print(conf.find("Alias")[0])
print(conf.find("Alias")[-1])
Alias: /icons/ /usr/share/httpd/icons/
Alias: /.noindex.html /usr/share/httpd/noindex/index.html
[30]:
r = conf.find("Boom")
print(type(r))
print(r)
<class 'insights.parsr.query.Result'>

find takes an additional parameter, roots, which defaults to False. If it is False, the matching entries are returned. If set to True, the unique set of ancestors of all matching results is returned.

[31]:
print('conf.find("LogFormat"):')
print(conf.find("LogFormat"))
print()
print('conf.find("LogFormat").parents:')
print(conf.find("LogFormat").parents)
conf.find("LogFormat"):
LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" combined
LogFormat: %h %l %u %t "%r" %>s %b common
LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" %I %O combinedio

conf.find("LogFormat").parents:
[IfModule log_config_module]
    LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" combined
    LogFormat: %h %l %u %t "%r" %>s %b common

    [IfModule logio_module]
        LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" %I %O combinedio

    CustomLog: logs/ssl_request_log %t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x  %r  %b
    CustomLog: logs/access_log combined

[IfModule logio_module]
    LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" %I %O combinedio

[32]:
conf.find(("IfModule", "logio_module"), "LogFormat")
[32]:
LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" %I %O combinedio
[33]:
conf.find("IfModule", ("LogFormat", "combinedio"))
[33]:
LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" %I %O combinedio

select

select is the primitive query function on which everything else is built. Its parameters operate just like find, and by default it queries like a find that only searches from the top of the configuration tree instead of walking subtrees.

To support the other cases, it takes two keyword arguments. deep=True causes it to search subtrees (default is deep=False). roots=True causes it to return the unique, top level nodes containing a match. This is true even when deep=True. If roots=False, it returns matching leaves instead of top level roots.

  • conf.find(*queries) = conf.select(*queries, deep=True, roots=False)

  • conf[query] = conf.select(query, deep=False, roots=False)

[34]:
print(conf.select("Alias"))
print()
print(conf.select("LogFormat") or "Nothing")
print(conf.select("LogFormat", deep=True))
print(conf.select("LogFormat", deep=True, roots=False))
print()
print(conf.select("LogFormat", deep=True, roots=False)[0])
print(conf.select("LogFormat", deep=True, roots=False)[-1])
Alias: /icons/ /usr/share/httpd/icons/
Alias: /.noindex.html /usr/share/httpd/noindex/index.html

Nothing
LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" combined
LogFormat: %h %l %u %t "%r" %>s %b common
LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" %I %O combinedio
LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" combined
LogFormat: %h %l %u %t "%r" %>s %b common
LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" %I %O combinedio

LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" combined
LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" %I %O combinedio

upto

Sometimes you’ve navigated down to a piece of data and want to work back up the tree from what you’ve found. You can do that by chaining .parent attributes or by going all the way to the top with .roots. But what if your target isn’t at the top and is several ancestors away? You can pass a query to .upto that will work its way up the parents of the current results and stop when it finds those that match.

[35]:
conf.find(("Options", "Indexes")).upto("Directory")
[35]:
[Directory /var/www/html]
    Options: Indexes FollowSymLinks
    AllowOverride: None
    Require: all granted

[Directory /usr/share/httpd/icons]
    Options: Indexes MultiViews FollowSymlinks
    AllowOverride: None
    Require: all granted

[Directory /home/*/public_html]
    AllowOverride: FileInfo AuthConfig Limit Indexes
    Options: MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    Require: method GET POST OPTIONS

get_crumbs

What if you’ve issued a search and found something interesting but want to refine your query so you can see where the hits are within the structure? get_crumbs will show you the unique paths down the tree to your current results.

[36]:
print(conf.find("LogFormat"))
print()
print(conf.find("LogFormat").get_crumbs())
print()
print(conf.doc.IfModule.IfModule.LogFormat)
print()
print(conf.doc.IfModule.LogFormat)
LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" combined
LogFormat: %h %l %u %t "%r" %>s %b common
LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" %I %O combinedio

['IfModule.IfModule.LogFormat', 'IfModule.LogFormat']

LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" %I %O combinedio

LogFormat: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" combined
LogFormat: %h %l %u %t "%r" %>s %b common

Custom Predicates

It’s easy to create your own predicates to use with config trees. They come in parameterized and unparameterized types and can be used against names or attributes. If used in a name position, they’re passed the node’s name. If used in an attribute position, they’re passed the node’s attributes one at a time. If a predicate raises an exception because an attribute is of the wrong type, it’s considered False for that attribute. Note that other attributes of the node can still cause a True result.

[37]:
from insights.parsr.query.boolean import pred, pred2

is_ifmod = pred(lambda x: x == "IfModule")
is_user_mod = pred(lambda x: "user" in x)
divisible_by = pred2(lambda in_val, divisor: (in_val % divisor) == 0)
[38]:
print("Num IfModules:", len(conf[is_ifmod]))
print("User mod checks:", len(conf.find(("IfModule", is_user_mod))))
print("Div by 10?", conf["Listen", divisible_by(10)] or "No matches")
print("Div by 3?", conf["Listen", divisible_by(3)] or "No matches")
Num IfModules: 8
User mod checks: 1
Div by 10? Listen: 80
Div by 3? No matches

Insights Core Datasource Registry

This notebook shows the recommended method of writing datasources.

It also shows how to register datasources with Insights Core to provide alternative methods of collection while still taking advantage of our parser and combiner catalog.

It assumes familiarity with datasources from the Standard Components section of the Insights Core Tutorial.

[1]:
import sys
sys.path.insert(0, "../..")
[2]:
from insights import run
from insights.core import dr
from insights.core.spec_factory import simple_file, simple_command

Fixing datasource names

The simplest way to define a datasource is with a helper class from insights.core.spec_factory.

However, if you use one of these to define a datasource at the module level, you’ll notice that it doesn’t have a very useful name.

[3]:
hosts = simple_file("/etc/hosts")
print(dr.get_name(hosts))
insights.core.spec_factory.simple_file

We can fix that by including it in a subclass of insights.core.spec_factory.SpecSet.

This is the recommended way of writing datasources.

[4]:
from insights.core.spec_factory import SpecSet

class MySpecs(SpecSet):
    hosts = simple_file("/etc/hosts")

print(dr.get_name(MySpecs.hosts))
__main__.MySpecs.hosts

Making datasources dynamic

What if you have datasources on which many downstream components depend, and you want to provide different ways of collecting the data they represent? Maybe you want to execute a command in one context but read from a file in another. Parsers depend on a single datasource, and jamming multiple collection methods into a single implementation isn’t attractive.

Instead, you can define a subclass of insights.core.spec_factory.SpecSet that has insights.core.spec_factory.RegistryPoint instances instead of regular datasources. Then you can provide implementations for the registry points in the form of datasources that are members of subclasses of your original class. This keeps the alternative implementations cleanly separated while allowing parsers to depend on a single component.

Note that this doesn’t work like normal class inheritance, although it uses the class inheritance mechanism.

[5]:
from insights.core.spec_factory import RegistryPoint
from insights.core.context import ExecutionContext, HostContext

# We'll use HostContext and OtherContext as our alternatives.

class OtherContext(ExecutionContext):
    pass
[6]:
# Define the components that your downstream components should depend on.

class TheSpecs(SpecSet):
    hostname = RegistryPoint()
    fstab = simple_file("/etc/fstab", context=HostContext)


# Provide different implementations for hostname by subclassing TheSpecs and
# giving the datasources names that match their corresponding registry points.

class HostSpecs(TheSpecs):
    hostname = simple_command("/usr/bin/hostname", context=HostContext)

class OtherSpecs(TheSpecs):
    hostname = simple_file("/etc/hostname", context=OtherContext)

# Note that we don't and actually can't provide an alternative for TheSpecs.fstab
# since it's not a RegistryPoint.

Downstream components should depend on TheSpecs.hostname, and the implementation that actually runs and backs that component will depend on the context in which you run.

[7]:
results = run(TheSpecs.hostname, context=HostContext)
print(results[TheSpecs.hostname])
CommandOutputProvider("'/usr/bin/hostname'")
[8]:
results = run(TheSpecs.hostname, context=OtherContext)
print(results[TheSpecs.hostname])
TextFileProvider("'/etc/hostname'")
[9]:
results = run(TheSpecs.fstab, context=HostContext)
print(results[TheSpecs.fstab])
TextFileProvider("'/etc/fstab'")

RegistryPoint instances in SpecSet subclasses are converted to special datasources that simply check their dependencies and return the last one that succeeds. So, TheSpecs.hostname is just a datasource. When HostSpecs subclasses TheSpecs, the class machinery recognizes that HostSpecs.hostname is a datasource with the same name as a RegistryPoint in an immediate superclass. When that happens, the datasource of the subclass is added as a dependency of the datasource in the superclass.

If the datasources in each subclass depend on different contexts, only one of them will fire. That’s why when we ran with HostContext, the command was run, but when we ran with OtherContext, the file was collected.

Notice that the TheSpecs.fstab datasource can be run, too. If a subclass had provided a datasource of the same name, it would not have been registered with the super class but would instead have stayed local to that subclass.

Note also that the datasources in the alternative implementation classes aren’t special in any other way. You can run them directly, and components can depend on them if you want, although if you’re providing them as an implementation to a registry point, components really should depend on that instead of a particular implementation.

What happens if you have multiple subclass implementations for a given registry point, and more than one of them depends on the same context? In that case, the last one to be registered for that context is the one that runs.
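The resolution rule described above can be sketched in plain Python. This is only an illustration of the behavior, not the actual insights-core implementation; the function and implementation names are made up for the example.

```python
# Plain-Python illustration (NOT the insights-core implementation) of the
# registry point resolution rule: try the registered implementations in
# registration order and keep the value of the last one that succeeds.
def registry_point(implementations):
    def resolve():
        result = None
        for impl in implementations:
            try:
                result = impl()  # a later success overrides an earlier one
            except Exception:
                pass  # implementations for other contexts simply fail
        return result
    return resolve

def hostname_from_command():
    # stands in for a datasource whose context isn't active
    raise RuntimeError("not running under HostContext")

def hostname_from_file():
    return "alonzo"

hostname = registry_point([hostname_from_command, hostname_from_file])
print(hostname())  # alonzo
```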

Registering implementations for standard datasources

Providing alternative implementations for the standard Insights Core datasources is easy. The datasources on which the core parsers depend are all defined as RegistryPoints on the Specs class in insights.specs.

[10]:
from insights.specs import Specs

class UseThisInstead(Specs):
    hostname = simple_file("/etc/hostname", context=OtherContext)

results = run(Specs.hostname, context=OtherContext)
print(results[Specs.hostname])
print(results.get(Specs.hosts))
TextFileProvider("'/etc/hostname'")
None

Notice that Specs.hosts didn’t run! That’s because we haven’t loaded the module containing the default implementations, and we’ve only provided an implementation for Specs.hostname. Also, none of the defaults depend on OtherContext anyway.

What if you want to use the default datasources but only want to override a few of them, even for the same context?

Create a subclass that does exactly that:

[11]:
from pprint import pprint
from insights.specs import default  # load the default implementations

# Note that the default context is HostContext unless otherwise specified
# with a context= keyword argument.
class SpecialSpecs(Specs):
    hostname = simple_file("/etc/hostname")

results = run(Specs.hostname)

# show that the default didn't run
pprint(results[Specs.hostname])
pprint(results[SpecialSpecs.hostname])
pprint(results.get(default.DefaultSpecs.hostname, None))
print

results = run(Specs.hosts)

#show that the default ran
pprint(results[Specs.hosts])
pprint(results[default.DefaultSpecs.hosts])
TextFileProvider("'/etc/hostname'")
TextFileProvider("'/etc/hostname'")
None

TextFileProvider("'/etc/hosts'")
TextFileProvider("'/etc/hosts'")

If multiple datasources provide implementations for the same registry point and depend on the same context, then the last implementation to load is the one that is executed under that context.

Diagnostic Walkthrough

A simple use-case for troubleshooting is identifying the software installed on a system. In this example, we will check a system for the use of bash based on data from the rpm command. This “walkthrough” will avoid going into details. Instead, it will simply lay out how the use-case could be handled using insights-core. More detailed tutorials can be found in the docs.

We’ll assume we have insights-core already installed following the instructions on the README.rst. Next we need to import the necessary modules.

[1]:
import sys
sys.path.insert(0, "../..")
[2]:
from insights import rule, make_fail, make_pass, run
from insights.parsers.installed_rpms import InstalledRpms

The first import line has the most common components used when creating rules. The rule decorator marks a function that encodes logic to be applied by the framework and declares the required or optional components it needs to execute. @rule decorated functions use make_fail or make_pass to return results. The run function executes the framework, simplifying the use of insights-core in small, standalone scripts and from the python interpreter.

We also import the InstalledRpms parser. This is a class that structures the results of the rpm -qa command.

Next, we create our “rule” function.

[3]:
@rule(InstalledRpms)
def report(rpms):
    rpm = rpms.get_max("bash")
    if rpm:
        return make_pass("BASH_INSTALLED", version=rpm.nvr)
    return make_fail("BASH_INSTALLED")

Here, the report method will let us know if the bash package is installed, and if so, the latest version encountered. The name of the function isn’t important. The @rule decorator defines the report function as a “rule” component type and indicates that it depends on the InstalledRpms parser. This parser will be passed into the function as the first argument.

The rest of the report function is fairly easy to understand, noting that the get_max function returns the maximum version encountered of the package specified, or None if the package is not found.

Let’s try running this function using the run method.

[4]:
results = run(report)
results[report]
[4]:
{'pass_key': 'BASH_INSTALLED',
 'type': 'pass',
 'version': u'bash-4.4.23-1.fc28'}

The run command executed the framework, collecting rpm information from my system, parsing it using the InstalledRpms class, and then running the report function. It found that bash was installed.

The results are keyed by function (report in this case). Multiple functions can be executed, each with its own response.

The InstalledRpms class has structured the results of the rpm -qa command, parsing the rows from the command output. That is, each package NVR is separated into its own fields. One consequence of this is that the package name is matched exactly. When we look for bash, the parser doesn’t match, for example, bash-completion (also on my system). It also means the version information is understood, so we can do things like check a range of versions.

First, let’s define our range using the bash NVRs we care about. We’ll imagine there’s a particular bug that affects bash starting in 4.4.16-1 and is fixed in 4.4.22-1.

[5]:
from insights.parsers.installed_rpms import InstalledRpm

lower = InstalledRpm.from_package("bash-4.4.16-1.fc27")
upper = InstalledRpm.from_package("bash-4.4.22-1.fc27")

Now, we’ll modify the report function to check ranges.

[6]:
@rule(InstalledRpms)
def report(rpms):
    rpm = rpms.get_max("bash")
    if rpm and rpm >= lower and rpm < upper:
        return make_fail("BASH_AFFECTED", version=rpm.nvr)
    elif rpm:
        return make_pass("BASH_AFFECTED", version=rpm.nvr)
    else:
        return make_pass("NO_BASH")

Now we can run this as before.

[7]:
results = run(report)
results[report]
[7]:
{'pass_key': 'BASH_AFFECTED', 'type': 'pass', 'version': u'bash-4.4.23-1.fc28'}

A few notes about this example:

  • The code here could be packaged up in a script, along with other rules, to be easily reused.

  • The rule can be executed against a live host, sosreport, Red Hat Insights archive, or a directory formed from an expanded archive.

  • While we defined only a rule, we could also define other components, like the command to be run and a parser to structure the content. stand_alone.py is a simple example containing all three components.

The code above (and this notebook) can be executed if insights-core (and jupyter-notebook) is installed, so feel free to run and experiment with the example.

Red Hat Insights Core

Insights Core is a framework for collecting and processing data about systems. It allows users to write components that collect and transform sets of raw data into typed python objects, which can then be used in rules that encapsulate knowledge about them.

To accomplish this the framework uses an internal dependency engine. Components in the form of class or function definitions declare dependencies on other components with decorators, and the resulting graphs can be executed once all components you care about have been loaded.

This is an introduction to the dependency system followed by a summary of the standard components Insights Core provides.

Components

To make a component, we first have to create a component type, which is a decorator we’ll use to declare it.

[1]:
import sys
sys.path.insert(0, "../..")
from insights.core import dr
[2]:
# Here's our component type with the clever name "component."

# Insights Core provides several types that we'll come to later.
class component(dr.ComponentType):
    pass
How do I use it?
[3]:
import random

# Make two components with no dependencies
@component()
def rand():
    return random.random()

@component()
def three():
    return 3

# Make a component that depends on the other two. Notice that we depend on two
# things, and there are two arguments to the function.
@component(rand, three)
def mul_things(x, y):
    return x * y
[4]:
# Now that we have a few components defined, let's run them.

from pprint import pprint

# If you call run with no arguments, all components of every type (with a few caveats
# I'll address later) are run, and their values or exceptions are collected in an
# object called a broker. The broker is like a fancy dictionary that keeps up with
# the state of an evaluation.
broker = dr.run()
pprint(broker.instances)
{<function three at 0x7f1004003488>: 3,
 <function mul_things at 0x7f1004003668>: 2.7284186441162825,
 <function rand at 0x7f10077ec758>: 0.9094728813720943}

Component Types

We can define components of different types by creating different decorators.

[5]:
class stage(dr.ComponentType):
    pass
[6]:
@stage(mul_things)
def spam(m):
    return int(m)
[7]:
broker = dr.run()
print "All Instances"
pprint(broker.instances)
print
print "Components"
pprint(broker.get_by_type(component))

print
print "Stages"
pprint(broker.get_by_type(stage))
All Instances
{<function three at 0x7f1004003488>: 3,
 <function mul_things at 0x7f1004003668>: 2.2051337009964818,
 <function spam at 0x7f10046eb7d0>: 2,
 <function rand at 0x7f10077ec758>: 0.7350445669988273}

Components
{<function three at 0x7f1004003488>: 3,
 <function mul_things at 0x7f1004003668>: 2.2051337009964818,
 <function rand at 0x7f10077ec758>: 0.7350445669988273}

Stages
{<function spam at 0x7f10046eb7d0>: 2}

Component Invocation

You can customize how components of a given type get called by overriding the invoke method of your ComponentType class. For example, if you want your components to receive the broker itself instead of individual arguments, you can do the following.

[8]:
class thing(dr.ComponentType):
    def invoke(self, broker):
        return self.component(broker)

@thing(rand, three)
def stuff(broker):
    r = broker[rand]
    t = broker[three]
    return r + t
[9]:
broker = dr.run()
print broker[stuff]
3.81538716938

Notice that the broker can be used like a dictionary to get the value of components that have already executed, without looking directly at the broker.instances attribute.

Exception Handling

When a component raises an exception, the exception is recorded in a dictionary whose key is the component and whose value is a list of exceptions. The traceback related to each exception is recorded in a dictionary of exceptions to tracebacks. We record exceptions in a list because some components may generate more than one value. We’ll come to that later.

[10]:
@stage()
def boom():
    raise Exception("Boom!")

broker = dr.run()
e = broker.exceptions[boom][0]
t = broker.tracebacks[e]
pprint(e)
print
print t
Exception('Boom!',)

Traceback (most recent call last):
  File "../../insights/core/dr.py", line 952, in run
    result = DELEGATES[component].process(broker)
  File "../../insights/core/dr.py", line 673, in process
    return self.invoke(broker)
  File "../../insights/core/dr.py", line 653, in invoke
    return self.component(*args)
  File "<ipython-input-10-bc534c6da647>", line 3, in boom
    raise Exception("Boom!")
Exception: Boom!

No handlers could be found for logger "insights.core.dr"

Missing Dependencies

A component with any missing required dependencies will not be called. Missing dependencies are recorded in the broker in a dictionary whose keys are components and whose values are tuples with two values. The first is a list of all missing required dependencies. The second is a list of all “at least one” dependency lists in which no member was found.

[11]:
@stage("where's my stuff at?")
def missing_stuff(s):
    return s

broker = dr.run()
print broker.missing_requirements[missing_stuff]
(["where's my stuff at?"], [])
[12]:
@stage("a", "b", [rand, "d"], ["e", "f"])
def missing_more_stuff(a, b, c, d, e, f):
    return a + b + c + d + e + f

broker = dr.run()
print broker.missing_requirements[missing_more_stuff]
(['a', 'b'], [['e', 'f']])

Notice that the first elements in the dependency list after @stage are simply “a” and “b”, but the next two elements are themselves lists. This means that at least one element of each list must be present. The first “any” list has [rand, “d”], and rand is available, so it resolves. However, neither “e” nor “f” are available, so the resolution fails. Our missing dependencies list includes the first two standalone elements as well as the second “any” list.

SkipComponent

Components that raise dr.SkipComponent won’t have any values or exceptions recorded and will be treated as missing dependencies for components that depend on them.

Optional Dependencies

There’s an “optional” keyword that takes a list of components that should be run before the current one. If they throw exceptions or don’t run for some other reason, the current component is executed anyway, and None is passed for each of them.

[13]:
@stage(rand, optional=['test'])
def is_greater_than_ten(r, t):
    return (int(r*10.0) < 5.0, t)

broker = dr.run()
print broker[is_greater_than_ten]
(True, None)

Automatic Dependencies

The definition of a component type may include requires and optional attributes. Their specifications are the same as the requires and optional portions of the component decorators. Any component decorated with a component type that has requires or optional in the class definition will automatically depend on the specified components, and any additional dependencies on the component itself will just be appended.

This functionality should almost never be used because it makes it impossible to tell that the component has implied dependencies.

[14]:
class mything(dr.ComponentType):
    requires = [rand]

@mything()
def dothings(r):
    return 4 * r

broker = dr.run(broker=broker)

pprint(broker[dothings])
pprint(dr.get_dependencies(dothings))
0.2325843178706255
set([<function rand at 0x7f10077ec758>])

Metadata

Component types and components can define metadata in their definitions. If a component’s type defines metadata, that metadata is inherited by the component, although the component may override it.

[15]:
class anotherthing(dr.ComponentType):
    metadata={"a": 3}

@anotherthing(metadata={"b": 4, "c": 5})
def four():
    return 4

dr.get_metadata(four)
[15]:
{'a': 3, 'b': 4, 'c': 5}

Component Groups

So far we haven’t said how we might group components together outside of defining different component types. But sometimes we might want to specify certain components, even of different component types, to belong together and to only be executed when explicitly asked to do so.

All of our components so far have implicitly belonged to the default group. However, component types and even individual components can be assigned to specific groups, which will run only when specified.

[16]:
class grouped(dr.ComponentType):
    group = "grouped"

@grouped()
def five():
    return 5

b = dr.Broker()
dr.run(dr.COMPONENTS["grouped"], broker=b)
pprint(b.instances)
{<function five at 0x7f100402c398>: 5}

If a group isn’t specified in the type definition or in the component decorator, the default group is assumed. Likewise, the default group is assumed when calling run if one isn’t provided.

It’s also possible to override the group of an individual component by using the group keyword in its decorator.

run_incremental

Since hundreds or even thousands of dependencies can be defined, it’s sometimes useful to separate them into graphs that don’t share any components and execute those graphs one at a time. In addition to the run function, the dr module provides a run_incremental function that does exactly that. You can give it a starting broker (or none at all), and it will yield a new broker for each distinct graph among all the dependencies.

run_all

The run_all function is similar to run_incremental since it breaks a graph up into independently executable subgraphs before running them. However, it returns a list of the brokers instead of yielding one at a time. It also has a pool keyword argument that accepts a concurrent.futures.ThreadPoolExecutor, which it will use to run the independent subgraphs in parallel. This can provide a significant performance boost in some situations.

Inspecting Components

The dr module provides several functions for inspecting components. You can get their aliases, dependencies, dependents, groups, type, even their entire dependency trees.

[17]:
from insights.core import dr

@stage()
def six():
    return 6

@stage(six)
def times_two(x):
    return x * 2

# If the component's full name was foo.bar.baz.six, this would print "baz"
print "\nModule (times_two):", dr.get_base_module_name(times_two)

print "\nComponent Type (times_two):", dr.get_component_type(times_two)

print "\nDependencies (times_two): "
pprint(dr.get_dependencies(times_two))

print "\nDependency Graph (stuff): "
pprint(dr.get_dependency_graph(stuff))

print "\nDependents (rand): "
pprint(dr.get_dependents(rand))

print "\nGroup (six):", dr.get_group(six)

print "\nMetadata (four): ",
pprint(dr.get_metadata(four))

# prints the full module name of the component
print "\nModule Name (times_two):", dr.get_module_name(times_two)

# prints the module name joined to the component name by a "."
print "\nName (times_two):", dr.get_name(times_two)

print "\nSimple Name (times_two):", dr.get_simple_name(times_two)

Module (times_two): __main__

Component Type (times_two): <class '__main__.stage'>

Dependencies (times_two):
set([<function six at 0x7f100402caa0>])

Dependency Graph (stuff):
{<function three at 0x7f1004003488>: set([]),
 <function stuff at 0x7f1004003500>: set([<function three at 0x7f1004003488>,
                                          <function rand at 0x7f10077ec758>]),
 <function rand at 0x7f10077ec758>: set([])}

Dependents (rand):
set([<function stuff at 0x7f1004003500>,
     <function mul_things at 0x7f1004003668>,
     <function missing_more_stuff at 0x7f100402c140>,
     <function is_greater_than_ten at 0x7f100402c410>,
     <function dothings at 0x7f100402c500>])

Group (six): 0

Metadata (four): {'a': 3, 'b': 4, 'c': 5}

Module Name (times_two): __main__

Name (times_two): __main__.times_two

Simple Name (times_two): times_two

Loading Components

If you have components defined in a package and the root of that path is in sys.path, you can load the package and all its subpackages and modules by calling dr.load_components. This way you don’t have to load every component module individually.

# recursively load all packages and modules in path.to.package
dr.load_components("path.to.package")

# or load a single module
dr.load_components("path.to.package.module")

Now that you know the basics of Insights Core dependency resolution, let’s move on to the rest of Core that builds on it.

Standard Component Types

The standard component types provided by Insights Core are datasource, parser, combiner, rule, condition, and incident. They’re defined in insights.core.plugins.

Some have specialized interfaces and executors that adapt the dependency specification parts described above to what developers using previous versions of Insights Core have come to expect.

For more information on parser, combiner, and rule development, please see our component developer tutorials.

Datasource

A datasource used to be called a spec. Components of this type collect data and make it available to other components. Since we have several hundred predefined datasources that fall into just a handful of categories, we’ve streamlined the process of creating them.

Datasources are defined either with the @datasource decorator or with helper functions from insights.core.spec_factory.

The spec_factory module has a handful of functions for defining common datasource types:

  • simple_file

  • glob_file

  • simple_command

  • listdir

  • foreach_execute

  • foreach_collect

  • first_file

  • first_of

All datasources defined with these helper functions depend on an ExecutionContext of some kind. Contexts let you activate different datasources for different environments. Most of them provide a root path for file collection and may perform some environment-specific setup for commands, even modifying the command strings if needed.

For now, we’ll use a HostContext. This tells datasources to collect files starting at the root of the file system and to execute commands exactly as they are defined. Other contexts are in insights.core.context.

All file collection datasources depend on any context that provides a path to use as root unless a particular context is specified. In other words, some datasources will activate for multiple contexts unless told otherwise.

simple_file

simple_file reads a file from the file system and makes it available as a TextFileProvider. A TextFileProvider instance contains the path to the file and its content as a list of lines.

[18]:
from insights.core import dr
from insights.core.context import HostContext
from insights.core.spec_factory import (simple_file,
                                        glob_file,
                                        simple_command,
                                        listdir,
                                        foreach_execute,
                                        foreach_collect,
                                        first_file,
                                        first_of)

release = simple_file("/etc/redhat-release")
hostname = simple_file("/etc/hostname")

ctx = HostContext()
broker = dr.Broker()
broker[HostContext] = ctx

broker = dr.run(broker=broker)
print broker[release].path, broker[release].content
print broker[hostname].path, broker[hostname].content
/etc/redhat-release ['Fedora release 28 (Twenty Eight)']
/etc/hostname ['alonzo']
glob_file

glob_file accepts glob patterns and evaluates at runtime to a list of TextFileProvider instances, one for each match. You can pass glob_file a single pattern or a list (or set) of patterns. It also accepts an ignore keyword, which should be a regular expression string matching paths to ignore. The glob and ignore patterns can be used together to match lots of files and then throw out the ones you don’t want.

[19]:
host_stuff = glob_file("/etc/host*", ignore="(allow|deny)")
broker = dr.run(broker=broker)
print broker[host_stuff]
[TextFileProvider("'/etc/host.conf'"), TextFileProvider("'/etc/hostname'"), TextFileProvider("'/etc/hosts'")]
simple_command

simple_command allows you to get the results of a command that takes no arguments or for which you know all of the arguments up front.

It and other command datasources return a CommandOutputProvider instance, which has the command string, any arguments interpolated into it (more later), the return code if you requested it via the keep_rc=True keyword, and the command output as a list of lines.

simple_command also accepts a timeout keyword, which is the maximum number of seconds the system should attempt to execute the command before a CalledProcessError is raised for the component.

A default timeout for all commands can be set on the initial ExecutionContext instance with the timeout keyword argument.

If a timeout isn’t specified in the ExecutionContext or on the command itself, none is used.

[20]:
uptime = simple_command("/usr/bin/uptime")
broker = dr.run(broker=broker)
print (broker[uptime].cmd, broker[uptime].args, broker[uptime].rc, broker[uptime].content)
('/usr/bin/uptime', None, None, [u' 13:49:11 up 30 days, 20:38,  1 user,  load average: 0.98, 0.67, 0.68'])
listdir

listdir lets you get the contents of a directory.

[21]:
interfaces = listdir("/sys/class/net")
broker = dr.run(broker=broker)
pprint(broker[interfaces])
['docker0',
 'enp0s31f6',
 'lo',
 'vboxnet0',
 'vboxnet1',
 'virbr0',
 'virbr0-nic',
 'wlp3s0']
foreach_execute

foreach_execute allows you to use output from one component as input to a datasource command string. For example, using the output of the interfaces datasource above, we can get ethtool information about all of the ethernet devices.

The timeout description provided in the simple_command section applies here to each separate invocation.

[22]:
ethtool = foreach_execute(interfaces, "ethtool %s")
broker = dr.run(broker=broker)
pprint(broker[ethtool])
[CommandOutputProvider("'ethtool docker0'"),
 CommandOutputProvider("'ethtool enp0s31f6'"),
 CommandOutputProvider("'ethtool lo'"),
 CommandOutputProvider("'ethtool vboxnet0'"),
 CommandOutputProvider("'ethtool vboxnet1'"),
 CommandOutputProvider("'ethtool virbr0'"),
 CommandOutputProvider("'ethtool virbr0-nic'"),
 CommandOutputProvider("'ethtool wlp3s0'")]

Notice each element in the list returned by interfaces is a single string. The system interpolates each element into the ethtool command string and evaluates each result. This produces a list of objects, one for each input element, instead of a single object. If the list created by interfaces contained tuples with n elements, then our command string would have had n substitution parameters.

foreach_collect

foreach_collect works similarly to foreach_execute, but instead of running commands with interpolated arguments, it collects files at paths with interpolated arguments. Also, because it is a file collection, it does not have execution-related keyword arguments such as timeout.

first_file

first_file takes a list of paths and returns a TextFileProvider for the first one it finds. This is useful if you’re looking for a single file that might be in different locations.

first_of

first_of is a way to express that you want to use any datasource from a list of datasources you’ve already defined. This is helpful if the way you collect data differs in different contexts, but the output is the same.

For example, the way you collect installed rpms directly from a machine differs from how you would collect them from a docker image. Ultimately, downstream components don’t care: they just want rpm data.

You could do the following. Notice that host_rpms and docker_installed_rpms implement different ways of getting rpm data that depend on different contexts, but the final installed_rpms datasource just references whichever one ran.

[23]:
from insights.specs.default import format_rpm
from insights.core.context import DockerImageContext
from insights.core.plugins import datasource
from insights.core.spec_factory import CommandOutputProvider

rpm_format = format_rpm()
cmd = "/usr/bin/rpm -qa --qf '%s'" % rpm_format

host_rpms = simple_command(cmd, context=HostContext)

@datasource(DockerImageContext)
def docker_installed_rpms(ctx):
    root = ctx.root
    cmd = "/usr/bin/rpm -qa --root %s --qf '%s'" % (root, rpm_format)
    result = ctx.shell_out(cmd)
    return CommandOutputProvider(cmd, ctx, content=result)

installed_rpms = first_of([host_rpms, docker_installed_rpms])

broker = dr.run(broker=broker)
pprint(broker[installed_rpms])
CommandOutputProvider("'/usr/bin/rpm -qa --qf \'\\{"name":"%{NAME}","epoch":"%{EPOCH}","version":"%{VERSION}","release":"%{RELEASE}","arch":"%{ARCH}","installtime":"%{INSTALLTIME:date}","buildtime":"%{BUILDTIME}","vendor":"%{VENDOR}","buildhost":"%{BUILDHOST}","sigpgp":"%{SIGPGP:pgpsig}"\\}\n\''")
What datasources does Insights Core provide?

To see a list of datasources we already collect, have a look in insights.specs.

Parsers

Parsers are the next major component type Insights Core provides. A Parser depends on a single datasource and is responsible for converting its raw content into a structured object.

Let’s build a simple parser.

[24]:
from insights.core import Parser
from insights.core.plugins import parser

@parser(hostname)
class HostnameParser(Parser):
    def parse_content(self, content):
        self.host, _, self.domain = content[0].partition(".")

broker = dr.run(broker=broker)
print("Host:", broker[HostnameParser].host)
Host: alonzo

Notice that the parser decorator accepts only one argument, the datasource the component needs. Also notice that our parser has a sensible default constructor that accepts a datasource and passes its content into a parse_content function.

Our hostname parser is pretty simple, but it’s easy to see how parsing things like rpm data or configuration files could get complicated.

Speaking of rpms, hopefully it’s also easy to see that an rpm parser could depend on our installed_rpms definition in the previous section and parse the content regardless of where the content originated.

What about parser dependencies that produce lists of components?

Not only do parsers have a special decorator, they also have a special executor. If the datasource is a list, the executor will attempt to construct a parser object with each element of the list, and the value of the parser in the broker will be the list of parser objects. It’s important to keep this in mind when developing components that depend on parsers.

This is also why exceptions raised by components are stored as lists by component instead of single values.
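The executor’s list handling can be sketched like this (MiniProvider, MiniParser, and run_parser are hypothetical names; the real machinery lives in insights.core):

```python
# Sketch of the parser executor's list handling (illustrative names, not
# the insights-core internals): a list-valued datasource yields one
# parser instance per element.
class MiniProvider:
    def __init__(self, content):
        self.content = content

class MiniParser:
    def __init__(self, provider):
        self.first_line = provider.content[0]

def run_parser(parser_cls, datasource_value):
    # a list of providers yields a list of parser instances
    if isinstance(datasource_value, list):
        return [parser_cls(p) for p in datasource_value]
    return parser_cls(datasource_value)

single = run_parser(MiniParser, MiniProvider(["alonzo"]))
many = run_parser(MiniParser, [MiniProvider(["a"]), MiniProvider(["b"])])
print(single.first_line)             # alonzo
print([p.first_line for p in many])  # ['a', 'b']
```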

Here’s a simple parser that depends on the ethtool datasource.

[25]:
@parser(ethtool)
class Ethtool(Parser):
    def parse_content(self, content):
        self.link_detected = None
        self.device = None
        for line in content:
            if "Settings for" in line:
                self.device = line.split(" ")[-1].strip(":")
            if "Link detected" in line:
                self.link_detected = line.split(":")[-1].strip()

broker = dr.run(broker=broker)
for eth in broker[Ethtool]:
    print("Device:", eth.device)
    print("Link? :", eth.link_detected, "\n")
Device: docker0
Link? : no

Device: enp0s31f6
Link? : no

Device: lo
Link? : yes

Device: vboxnet0
Link? : no

Device: vboxnet1
Link? : no

Device: virbr0
Link? : no

Device: virbr0-nic
Link? : no

Device: wlp3s0
Link? : yes

We provide curated parsers for all of our datasources. They’re in insights.parsers.

Combiners

Combiners depend on two or more other components. They typically are used to standardize interfaces or to provide a higher-level view of some set of components.

As an example of standardizing interfaces, chkconfig and service commands can be used to retrieve similar data about service status, but the command you run to check that status depends on your operating system version. A datasource would be defined for each command along with a parser to interpret its output. However, a downstream component may just care about a service’s status, not about how a particular program exposes it. A combiner can depend on both chkconfig and service parsers (like this, so only one of them is required: @combiner([[chkconfig, service]])) and provide a unified interface to the data.

As an example of a higher level view of several related components, imagine a combiner that depends on various ethtool and other network information gathering parsers. It can compile all of that information behind one view, exposing a range of information about devices, interfaces, iptables, etc. that might otherwise be scattered across a system.

We provide a few common combiners. They’re in insights.combiners.

Here’s an example combiner that tries a few different ways to determine the Red Hat release information. Notice that its dependency declarations and interface are just like we’ve discussed before. If this were a class, the __init__ function would be declared like def __init__(self, rh_release, un).

from collections import namedtuple
from insights.core.plugins import combiner
from insights.parsers.redhat_release import RedhatRelease as rht_release
from insights.parsers.uname import Uname

Release = namedtuple("Release", ["major", "minor"])

@combiner([rht_release, Uname])
@combiner([rht_release, Uname])
def redhat_release(rh_release, un):
    if un and un.release_tuple[0] != -1:
        return Release(*un.release_tuple)

    if rh_release:
        return Release(rh_release.major, rh_release.minor)

    raise Exception("Unable to determine release.")
Rules

Rules depend on parsers and/or combiners and encapsulate particular policies about their state. For example, a rule might detect whether a defective rpm is installed. It might also inspect the lsof parser to determine if a process is using a file from that defective rpm. It could also check network information to see if the process is a server and whether it’s bound to an internal or external IP address. Rules can check for anything you can surface in a parser or a combiner.

Rules use the make_fail, make_pass, or make_info helpers to create their return values. They take one required parameter, a key identifying the particular state the rule wants to highlight, and any number of keyword arguments that provide context for that state.

[26]:
from insights.core.plugins import rule, make_fail, make_pass

ERROR_KEY = "IS_LOCALHOST"

@rule(HostnameParser)
def report(hn):
    return make_pass(ERROR_KEY) if "localhost" in hn.host else make_fail(ERROR_KEY)


brok = dr.Broker()
brok[HostContext] = HostContext()

brok = dr.run(broker=brok)
pprint(brok.get(report))

{'error_key': 'IS_LOCALHOST', 'type': 'rule'}
Conditions and Incidents

Conditions and incidents are optional components that can be used by rules to encapsulate particular pieces of logic.

Conditions are questions with answers that can be interpreted as True or False. For example, a condition might be “Does the kdump configuration contain a ‘net’ target type?” or “Is the operating system Red Hat Enterprise Linux 7?”

Incidents, on the other hand, typically are specific types of warning or error messages from log type files.

Why would you use conditions or incidents instead of just writing the logic directly into the rule? Future versions of Insights may allow automated analysis of rules and their conditions and incidents. You will be able to tell which conditions, incidents, and rule firings across all rules correspond with each other and how strongly. This feature will become more powerful as conditions and incidents are written independently of explicit rules.
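As a sketch of the idea, here are the two example conditions above factored into boolean-valued helpers that a rule composes (plain functions with hypothetical names, not the insights-core condition/incident API):

```python
# Hypothetical sketch: conditions are boolean-valued helpers that a rule
# composes, rather than logic written inline in the rule body.
def is_rhel7(release_tuple):
    """Condition: is the operating system Red Hat Enterprise Linux 7?"""
    return release_tuple[0] == 7

def kdump_has_net_target(kdump_lines):
    """Condition: does the kdump configuration contain a 'net' target type?"""
    return any(line.startswith("net ") for line in kdump_lines)

def report(release_tuple, kdump_lines):
    """Rule: fires only when both conditions hold."""
    if is_rhel7(release_tuple) and kdump_has_net_target(kdump_lines):
        return {"type": "rule", "error_key": "KDUMP_NET_TARGET_ON_RHEL7"}
    return None

print(report((7, 4), ["net 10.0.0.1:/var/crash"]))
```

Because each condition stands alone, its answer could be recorded and correlated across many rules instead of being buried in one rule body.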

Observers

Insights Core allows you to attach functions to component types, and they’ll be called any time a component of that type is encountered. You can attach observer functions globally or to a particular broker.

Observers are called whether a component succeeds or not. They take the component and the broker right after the component is evaluated and so are able to ask the broker about values, exceptions, missing requirements, etc.

[27]:
def observer(c, broker):
    if c not in broker:
        return

    value = broker[c]
    pprint(value)

broker.add_observer(observer, component_type=parser)
broker = dr.run(broker=broker)
<__main__.HostnameParser object at 0x7f0ff4cc9ed0>
<insights.parsers.mount.Mount object at 0x7f0ff5267290>
<insights.parsers.yum.YumRepoList object at 0x7f0ff51df4d0>
[<__main__.Ethtool object at 0x7f0ff4c82d50>,
 <__main__.Ethtool object at 0x7f0ff56ac810>,
 <__main__.Ethtool object at 0x7f0ff4c82e50>,
 <__main__.Ethtool object at 0x7f0ff4c82fd0>,
 <__main__.Ethtool object at 0x7f0ff4c601d0>,
 <__main__.Ethtool object at 0x7f0ff4c60310>,
 <__main__.Ethtool object at 0x7f0ff4c60350>,
 <__main__.Ethtool object at 0x7f0ff4c60510>]
<insights.parsers.installed_rpms.InstalledRpms object at 0x7f0ff51f6190>

Filtering of Data in Insights Parsers and Rules

In this tutorial we will investigate filters in insights-core, what they are, how they affect your components and how you can use them in your code. Documentation on filters can be found in the insights-core documentation.

The primary purposes of filters are:

  1. to prevent the collection of sensitive information while enabling the collection of necessary information for analysis, and;

  2. to reduce the amount of information collected.

Filters are typically added in rule modules, since the purpose of a rule is to analyze particular information and identify a problem, potential problem, or fact about the system. A filter may also be added in a parser module if it is required to enable parsing of the data. We will discuss this further when we look at the example. Filters added by rules and parsers are applied when the data is collected from a system. They are combined so that, if filters are added by multiple rules and parsers, each rule will receive all information that was collected by all filters for a given source. An example will help demonstrate this.

Suppose you write some rules that need information from /var/log/messages. This file could be very large and contain potentially sensitive information, so it is not desirable to collect the entire file. Let’s say rule_a needs messages that indicate my_special_process has failed to start, and another rule, rule_b, needs messages that indicate that my_other_process had the errors MY_OTHER_PROCESS: process locked or MY_OTHER_PROCESS: memory exceeded. Then the two rules could add the following filters to ensure that just the information they need is collected:

rule_a:

add_filter(Specs.messages, 'my_special_process')

rule_b:

add_filter(Specs.messages, ['MY_OTHER_PROCESS: process locked',
                            'MY_OTHER_PROCESS: memory exceeded'])

The effect of this would be that when /var/log/messages is collected, the filters would be applied and only the lines containing the strings 'my_special_process', 'MY_OTHER_PROCESS: process locked', or 'MY_OTHER_PROCESS: memory exceeded' would be collected. This significantly reduces the size of the data and the chance that sensitive information in /var/log/messages might be collected.
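The combining behavior can be sketched as follows (a simplified model with hypothetical names, not the insights-core filter machinery): filters for a datasource accumulate into one set, and a line is kept if it matches any registered filter.

```python
# Simplified model of filter combination (hypothetical names): all
# filters registered for a datasource are pooled, and a collected line
# is kept if it contains ANY of the registered strings.
FILTERS = set()

def add_filter(patterns):
    FILTERS.update([patterns] if isinstance(patterns, str) else patterns)

add_filter('my_special_process')                     # added by rule_a
add_filter(['MY_OTHER_PROCESS: process locked',
            'MY_OTHER_PROCESS: memory exceeded'])    # added by rule_b

def collect(lines):
    return [line for line in lines if any(f in line for f in FILTERS)]

messages = [
    "Jan 1 10:00:01 host my_special_process: failed to start",
    "Jan 1 10:00:02 host MY_OTHER_PROCESS: memory exceeded",
    "Jan 1 10:00:03 host sshd: session opened for user root",
]
print(collect(messages))   # the sshd line is never collected
```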

While there are significant benefits to filtering, you must be aware when a datasource is being filtered, or your rules could fail to identify a condition that may be present on a system. For instance, suppose a rule rule_c also needs information from /var/log/messages about process_xyz. If rule_c runs with other rules like rule_a or rule_b, it would never see lines containing "process_xyz" appearing in /var/log/messages unless it adds a new filter. When any rule or parser adds a filter to a datasource, that data will be filtered for all components, not just the component adding the filter. Because of this it is important to understand when a datasource is being filtered, so that your rule will function properly and include its own filters if needed.

Exploring Filters

Unfiltered Data

Suppose we want to write a rule that will evaluate the contents of the configuration file death_star.ini to determine if there are any vulnerabilities. Since this is a new data source that is not currently collected by insights-core we’ll need to add three elements to collect, parse and evaluate the information.

[1]:
""" Some imports used by all of the code in this tutorial """
from __future__ import print_function  # __future__ imports must come first
import os
import sys
sys.path.insert(0, "../..")
from insights import run
from insights.specs import SpecSet
from insights.core import IniConfigFile
from insights.core.plugins import parser, rule, make_fail
from insights.core.spec_factory import simple_file

First we’ll need to add a specification to collect the configuration file. Note that for purposes of this tutorial we are collecting from a directory where this notebook is located. Normally the file path would be an absolute path on your system or in an archive.

[2]:
class Specs(SpecSet):
    """
    Define a new spec to collect the file we need.
    """
    death_star_config = simple_file(os.path.join(os.getcwd(), 'death_star.ini'), filterable=True)

Next we’ll need to add a parser to parse the file being collected by the spec. Since this file is in INI format and insights-core provides the IniConfigFile parser, we can just use that to parse the file. See the parser documentation to find out what methods that parser provides.

[3]:
@parser(Specs.death_star_config)
class DeathStarCfg(IniConfigFile):
    """
    Define a new parser to parse the spec. Since the spec is a standard INI format we
    can use the existing IniConfigFile parser that is provided by insights-core.

    See documentation here:
    https://insights-core.readthedocs.io/en/latest/api_index.html#insights.core.IniConfigFile
    """
    pass

Finally we can write the rule that will examine the contents of the parsed configuration file to determine if there are any vulnerabilities. In this INI file we can find the vulnerabilities by searching the item keys for any that contain the string vulnerability. If any vulnerabilities are found, the rule should return a response that documents the vulnerabilities found and tags them with the key DS_IS_VULNERABLE. If no vulnerabilities are found, the rule should just drop out, effectively returning None.

[4]:
@rule(DeathStarCfg)
def ds_vulnerable(ds_cfg):
    """
    Define a new rule to look for vulnerable conditions that may be
    included in the INI file.  If found report them.
    """
    vulnerabilities = []
    for section in ds_cfg.sections():
        print("Section: {}".format(section))
        for item_key in ds_cfg.items(section):
            print("    {}={}".format(item_key, ds_cfg.get(section, item_key)))
            if 'vulnerability' in item_key:
                vulnerabilities.append((item_key, ds_cfg.get(section, item_key)))

    if vulnerabilities:
        return make_fail('DS_IS_VULNERABLE', vulnerabilities=vulnerabilities)

Before we run the rule, let’s look at the contents of the configuration file. It is in the format of a typical INI file and contains some interesting information. In particular we see that it does contain a key that should match the string we are looking for in the rule, “major_vulnerability=ray-shielded particle exhaust vent”. So we expect the rule to return results.

[5]:
!cat death_star.ini
[global]
logging=debug
log=/var/logs/sample.log

# Keep this info secret
[secret_stuff]
username=dvader
password=luke_is_my_son

[facts]
major_vulnerability=ray-shielded particle exhaust vent

[settings]
music=The Imperial March
color=black

Let’s run our rule and find out. To run the rule we’ll use the insights.run() function, passing our rule object as the argument (note this is not a string but the actual object). The results returned will be an insights.core.dr.Broker object that contains all sorts of information about the execution of the rule. You can explore more details of the broker in the Insights Core Tutorial notebook.

The print statements in our rule provide output as it loops through the configuration file.

[6]:
results = run(ds_vulnerable)
Section: global
    logging=debug
    log=/var/logs/sample.log
Section: secret_stuff
    username=dvader
    password=luke_is_my_son
Section: facts
    major_vulnerability=ray-shielded particle exhaust vent
Section: settings
    color=black
    music=The Imperial March

Now we are ready to look at the results. The results are stored in results[ds_vulnerable], where the rule object ds_vulnerable is the key into the dictionary of objects that your rule depended upon to execute, such as the parser DeathStarCfg and the spec Specs.death_star_config. You can see this by looking at those objects in results.

[7]:
type(results[Specs.death_star_config])
[7]:
insights.core.spec_factory.TextFileProvider
[8]:
type(results[DeathStarCfg])
[8]:
__main__.DeathStarCfg
[9]:
type(results[ds_vulnerable])
[9]:
insights.core.plugins.make_fail

Now let’s look at the rule results to see if they match what we expected.

[10]:
results[ds_vulnerable]
[10]:
{'error_key': 'DS_IS_VULNERABLE',
 'type': 'rule',
 'vulnerabilities': [(u'major_vulnerability',
   u'ray-shielded particle exhaust vent')]}

Success! It worked as we expected, finding the vulnerability. Now let’s look at how filtering can affect the rule results.

Filtering Data

When we looked at the contents of the file you may have noticed some other interesting information such as this:

# Keep this info secret
[secret_stuff]
username=dvader
password=luke_is_my_son

As a parser writer, if you know that a file could contain sensitive information, you may choose to filter it in the parser module to avoid collecting it. Usernames, passwords, hostnames, security keys, and other sensitive information should not be collected. In this case the username and password are in the configuration file, so we should add a filter to this parser to prevent them from being collected.

How do we add a filter and avoid breaking the parser? Each parser is unique, so the parser writer must determine whether a filter is necessary and how to add one that will allow the parser to function with a minimal set of data. For instance, a YAML or XML parser might have a difficult time parsing a filtered YAML or XML file.

For our example, we are using an INI file parser. INI files are structured as sections, identified by a section name in square brackets like [section name], followed by items like name or name=value. One possible way to filter an INI file is to add the filter "[", which will collect all lines with sections but no items. This can be successfully parsed by the INI parser, so that is how we’ll filter out the sensitive information in our configuration file. We’ll rewrite the parser, adding add_filter(Specs.death_star_config, '[') to filter out all lines except those containing the string '['.
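As a standalone check of that claim, here’s a sketch using Python 3’s configparser as a stand-in for the insights-core IniConfigFile parser: after keeping only the lines containing '[', the section names still parse, but the sensitive items are gone.

```python
import configparser

# Keep only lines containing '[' (the section headers), mimicking the
# filter on the datasource, then confirm the result still parses.
# configparser stands in for the insights-core IniConfigFile parser.
raw = """\
[global]
logging=debug

[secret_stuff]
username=dvader
password=luke_is_my_son
"""

filtered = "\n".join(line for line in raw.splitlines() if "[" in line)

cfg = configparser.ConfigParser()
cfg.read_string(filtered)
print(cfg.sections())             # section names survive
print(dict(cfg["secret_stuff"]))  # the sensitive items do not
```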

[11]:
from insights.core.filters import add_filter

add_filter(Specs.death_star_config, '[')

@parser(Specs.death_star_config)
class DeathStarCfg(IniConfigFile):
    """
    Define a new parser to parse the spec. Since the spec is a standard INI format we
    can use the existing IniConfigFile parser that is provided by insights-core.

    See documentation here:
    https://insights-core.readthedocs.io/en/latest/api_index.html#insights.core.IniConfigFile
    """
    pass

Now let’s run the rule again and see what happens. Do you expect the same results we got before?

[12]:
results = run(ds_vulnerable)
results.get(ds_vulnerable, "No results")        # Use .get method of dict so we can provide default other than None
Section: global
Section: secret_stuff
Section: facts
Section: settings
[12]:
'No results'

Is that what you expected? Notice the output from the print statements in the rule: only the section names are printed. That is the result of adding the filter; only lines containing '[' (the sections) are collected and provided to the parser. This means that the lines we were looking for in the rule are no longer there, so it appears our rule didn’t find any vulnerabilities. Next we’ll look at how to fix our rule to work with the filtered data.

Adding Filters to Rules

We can add filters to a rule just like we added a filter to the parser, using the add_filter() method. The add_filter method requires a spec and a string or list/set of strings. In this case our rule is looking for the string 'vulnerability' so we just need to add that to the filter.

Alternatively, filters can be added by specifying a parser or combiner in the add_filter() method instead of a spec. In that scenario, the dependency tree will be traversed to locate underlying datasources that are filterable (that is, whose filterable parameter is True), and the specified filters will be added to those datasources. In our example, we could filter the underlying Specs.death_star_config datasource with the statement add_filter(DeathStarCfg, 'vulnerability'). This is especially useful when you are working with a combiner that consolidates data from multiple parsers, which in turn depend on multiple datasources. Adding a filter to the combiner allows consistent filtering of data across all applicable datasources.

[13]:
add_filter(Specs.death_star_config, 'vulnerability')

@rule(DeathStarCfg)
def ds_vulnerable(ds_cfg):
    """
    Define a new rule to look for vulnerable conditions that may be
    included in the INI file.  If found report them.
    """
    vulnerabilities = []
    for section in ds_cfg.sections():
        print("Section: {}".format(section))
        for item_key in ds_cfg.items(section):
            print("    {}={}".format(item_key, ds_cfg.get(section, item_key)))
            if 'vulnerability' in item_key:
                vulnerabilities.append((item_key, ds_cfg.get(section, item_key)))

    if vulnerabilities:
        return make_fail('DS_IS_VULNERABLE', vulnerabilities=vulnerabilities)

Now let’s run the rule again and see what happens.

[14]:
results = run(ds_vulnerable)
results.get(ds_vulnerable, "No results")        # Use .get method of dict so we can provide default other than None
Section: global
Section: secret_stuff
Section: facts
    major_vulnerability=ray-shielded particle exhaust vent
Section: settings
[14]:
{'error_key': 'DS_IS_VULNERABLE',
 'type': 'rule',
 'vulnerabilities': [(u'major_vulnerability',
   u'ray-shielded particle exhaust vent')]}

Now look at the output from the print statements in the rule: the item that was missing is now included. By adding the string required by our rule to the spec filters, we have successfully included the data needed by our rule to detect the problem. Also, by adding the filter in the parser module, we have eliminated the sensitive information from the input.

Determining if a Spec is Filtered

When you are developing your rule, you may want to check whether the spec you are using is filtered. You can do this by looking at the spec in insights/specs/__init__.py. Each spec is defined there as a RegistryPoint() type. If the spec is filterable it will have the parameter filterable=True; for example, the following indicates that the messages log (/var/log/messages) will be filtered:

messages = RegistryPoint(filterable=True)

If you need to use a parser that relies on a filtered spec, you must add your own filter to ensure that your rule receives the data necessary to evaluate its conditions. If you forget to add a filter and your rule has integration tests, pytest will raise an exception like the following, warning you that the add_filter call is missing:

telemetry/rules/tests/integration.py:7:
 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

component = <function report at 0x7fa843094e60>, input_data = <InputData {name:test4-00000}>, expected = None

    def run_test(component, input_data, expected=None):
        if filters.ENABLED:
            mod = component.__module__
            sup_mod = '.'.join(mod.split('.')[:-1])
            rps = _get_registry_points(component)
            filterable = set(d for d in rps if dr.get_delegate(d).filterable)
            missing_filters = filterable - ADDED_FILTERS.get(mod, set()) - ADDED_FILTERS.get(sup_mod, set())
            if missing_filters:
                names = [dr.get_name(m) for m in missing_filters]
                msg = "%s must add filters to %s"
>               raise Exception(msg % (mod, ", ".join(names)))
E               Exception: telemetry.rules.plugins.kernel.overcommit must add filters to insights.specs.Specs.messages

../../insights/insights-core/insights/tests/__init__.py:114: Exception

If you see this exception when you run tests, it means you need to add an add_filter call to your rule module.

Turning Off Filtering Globally

There are times when you may want or need to turn off filtering in order to perform testing, or to fully analyze some aspect of a system and diagnose problems. Also, if you are running locally on a system, you might want to collect all data unfiltered. You can do this by setting the environment variable INSIGHTS_FILTERS_ENABLED=False prior to running insights-core. This won’t work inside this notebook unless you follow the directions below.

[15]:
"""
This code will disable all filtering if it is run as the first cell when the
notebook is opened.  After the notebook has been started, click on the Kernel
menu and then the Restart item, then run this cell first before all others.
To re-enable filtering, restart the kernel and skip this cell.
"""
import os
os.environ['INSIGHTS_FILTERS_ENABLED'] = 'False'
[16]:
results = run(ds_vulnerable)
results.get(ds_vulnerable, "No results")        # Use .get method of dict so we can provide default other than None
Section: global
Section: secret_stuff
Section: facts
    major_vulnerability=ray-shielded particle exhaust vent
Section: settings
[16]:
{'error_key': 'DS_IS_VULNERABLE',
 'type': 'rule',
 'vulnerabilities': [(u'major_vulnerability',
   u'ray-shielded particle exhaust vent')]}

Debugging Components

If you are writing component code, you may sometimes see no results even though you expected them, with no errors displayed. That is because insights-core catches exceptions and saves them. To see the exceptions, you can use the following method to display the results of a run along with any errors that occurred.

[17]:
def show_results(results, component):
    """
    This function will show the results from run() where:

        results = run(component)

    run will catch all exceptions so if there are any this
    function will print them out with a stack trace, making
    it easier to develop component code.
    """
    if component in results:
        print(results[component])
    else:
        print("No results for: {}".format(component))

    if results.exceptions:
        for comp in results.exceptions:
            print("Component Exception: {}".format(comp))
            for exp in results.exceptions[comp]:
                print(results.tracebacks[exp])

Here’s an example of this function in use:

[18]:
@rule(DeathStarCfg)
def bad_rule(cfg):
    # Force an error here
    infinity = 1 / 0
[19]:
results = run(bad_rule)
No handlers could be found for logger "insights.core.dr"
[20]:
show_results(results, bad_rule)
No results for: <function bad_rule at 0x7f8c02e99d50>
Component Exception: <function bad_rule at 0x7f8c02e99d50>
Traceback (most recent call last):
  File "../../insights/core/dr.py", line 962, in run
    result = DELEGATES[component].process(broker)
  File "../../insights/core/plugins.py", line 303, in process
    r = self.invoke(broker)
  File "../../insights/core/plugins.py", line 64, in invoke
    return super(PluginType, self).invoke(broker)
  File "../../insights/core/dr.py", line 661, in invoke
    return self.component(*args)
  File "<ipython-input-18-0450035609f8>", line 4, in bad_rule
    infinity = 1 / 0
ZeroDivisionError: integer division or modulo by zero

[ ]:

MAN Pages

CONFIG(5)

NAME

config - configuration file for insights-core tools

SYNOPSIS

filename.yaml

DESCRIPTION

The insights-core tools allow a configuration options file to be specified that provides more detailed control over the execution of insights-core by each tool. Any filename may be used, but the format of the file must be YAML.

EXAMPLES

A complete example of a configuration file in YAML format:

# disable everything by default
# defaults to false if not specified.
default_component_enabled: false

# packages and modules to load
packages:
    - insights.specs.default
    - another.plugins.module

# configuration of loaded components. names are prefixes, so any component with
# a fully qualified name that starts with a key will get the associated
# configuration applied. Can specify timeout, which will apply to command
# datasources. Can specify metadata, which must be a dictionary and will be
# merged with the components' default metadata.
configs:
    - name: another.plugins.module
      enabled: true

    - name: insights.specs.Specs
      enabled: true

    - name: insights.specs.default.DefaultSpecs
      enabled: false

    - name: insights.parsers.hostname
      enabled: true

    - name: insights.parsers.facter
      enabled: true

    - name: insights.parsers.systemid
      enabled: true

    - name: insights.combiners.hostname
      enabled: true

# needed because some specs aren't given names before they're used in DefaultSpecs
    - name: insights.core.spec_factory
      enabled: true

INSIGHTS-CAT(1)

NAME

insights-cat - execute insights datasources on a system or archives

SYNOPSIS

insights-cat [OPTIONS] SPEC [ARCHIVE]

DESCRIPTION

The insights-cat command provides a tool to investigate information as it is collected by insights-core. When the insights client runs it uses the datasources (often referred to as a SPEC) to collect information from a system or an archive.

The SPECs for system collection are located in insights.specs.default. Insights-cat executes the SPEC to collect from the system or, if ARCHIVE is provided, from the archive. Archive datasources are documented in insights.specs.insights_archive, insights.specs.sos_report and insights.specs.jdr_archives.

OPTIONS

-c CONFIG --config CONFIG

Configure components.

-D --debug

Show debug level information.

-h --help

Show the command line help and exit.

--no-header

Don’t print command or path headers.

-p PLUGINS --plugins PLUGINS

Comma-separated list without spaces of package(s) or module(s) containing plugins.

-q --quiet

Only show commands or paths.

EXAMPLES

insights-cat redhat_release

Outputs the information collected by the SPEC insights.specs.default.DefaultSpecs.redhat_release, including the header describing the SPEC type and value.

insights-cat --no-header redhat_release

Outputs the information collected by the SPEC insights.specs.default.DefaultSpecs.redhat_release with no header. This is appropriate when you want to collect the data in the form as seen by a parser.

insights-cat -c configfile.yaml redhat_release

Outputs the information collected by the SPEC using the configuration information provided in configfile.yaml. See CONFIG(5) for more information on the specifics of the configuration file options and format.

insights-cat -D redhat_release

The -D option will produce a trace of the operations performed by insights-core as the SPEC is executed. The SPEC data will be output following all of the debugging output.

insights-cat -D -p examples.rules.stand_alone examples.rules.stand_alone.Specs.hosts

The -p option allows inclusion of additional modules by insights-core. By default the insights-cat command loads only the insights-core modules. In this example the file examples/rules/stand_alone.py includes a spec Specs.hosts. This command executes the hosts spec from the examples module rather than the insights-core hosts spec.

The -D option will show each module as it is loaded and the actual spec used for the command.

insights-cat -D -p module1,module2,module3 spec_name

Multiple modules can be loaded with the -p option by separating them with commas.

insights-cat -q installed_rpms

The -q switch inhibits output of the command or file contents, showing only the spec type and the command to be executed or file to be collected. Use this switch when you are interested in the details of the spec and don’t care about the data.

INSIGHTS-INFO(1)

NAME

insights-info - interrogate components to get info (source, docs, dependencies, etc.)

SYNOPSIS

insights-info [OPTIONS] SPEC [ARCHIVE]

DESCRIPTION

The insights-info command provides a tool to interrogate insights-core components and any components added using the command options. Insights-info can be used to display embedded documentation and source code for components, as well as a component’s dependency information.

OPTIONS

-c COMPONENTS --components COMPONENTS

Comma separated list of components that have already executed. Names without ‘.’ are assumed to be in insights.specs.Specs.

-d COMPONENT --pydoc COMPONENT

Show pydoc for the given object. E.g.: insights-info -d insights.rule.

-h --help

Show the command line help and exit.

-i COMPONENT --info COMPONENT

Comma separated list of components to get dependency info about.

-k --pkg-query

Expression to select rules by package.

-p PLUGINS --preload PLUGINS

Comma separated list of packages or modules to preload.

-s COMPONENT --source COMPONENT

Show source for the given component. E.g.: insights-info -s insights.parsers.redhat_release

-S NAME --specs NAME

Show specs for the given name. E.g.: insights-info -S uname

-t TYPES --types TYPES

Filter results based on component type; e.g. ‘rule,parser’. Names without ‘.’ are assumed to be in insights.core.plugins.
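The rule above says that unqualified -t/--types names are assumed to live in insights.core.plugins. As a sketch of that normalization only (not insights-core's actual implementation), a comma-separated TYPES value would qualify like this:

```python
# Sketch: normalize unqualified -t/--types names per the documented rule
# that names without '.' are assumed to be in insights.core.plugins.
# This is illustrative; insights-core's own code may differ.
def qualify(name, default_pkg="insights.core.plugins"):
    return name if "." in name else f"{default_pkg}.{name}"

types = [qualify(n) for n in "rule,parser".split(",")]
print(types)
# ['insights.core.plugins.rule', 'insights.core.plugins.parser']
```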

--tags EXPRESSION

An expression for selecting which loaded rules to run based on their tags.

-v --verbose

Print component dependencies.

EXAMPLES

insights-info foo bar baz

Search for all datasources that might handle foo, bar, or baz files or commands along with all components that could be activated if they were present and valid.

insights-info -i insights.specs.Specs.hosts

Display dependency information about the hosts datasource.

insights-info -d insights.parsers.hosts.Hosts

Display the pydoc information about the Hosts parser.

INSIGHTS-INSPECT(1)

NAME

insights-inspect - execute an insights component and inspect it in an iPython session

SYNOPSIS

insights-inspect [OPTIONS] COMPONENT [ARCHIVE]

DESCRIPTION

The insights-inspect command executes a component in insights and then loads it into an iPython session where it can be inspected and manipulated. The COMPONENT can be anything in the dependency tree, including a datasource, parser, combiner, or rule.

insights-inspect executes the COMPONENT and collects data from the system or, if ARCHIVE is provided, from the archive. Archive datasources are documented in insights.specs.insights_archive, insights.specs.sos_report, and insights.specs.jdr_archives.

OPTIONS

-c CONFIG --config CONFIG

Configure components.

-D --debug

Show debug level information.

-h --help

Show the command line help and exit.

EXAMPLES

insights-inspect insights.specs.Specs.redhat_release

Executes insights-core and opens an iPython session with a datasource object populated for insights.specs.Specs.redhat_release and all objects that the datasource depends upon. The example session in iPython would look like this:

IPython Console Usage Info:

Enter 'redhat_release.' and tab to get a list of properties
Example:
In [1]: redhat_release.<property_name>
Out[1]: <property value>

To exit iPython enter 'exit' and hit enter or use 'CTL D'

Python 3.6.8 (default, Jan 27 2019, 09:00:23)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.3.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: redhat_release
Out[1]: TextFileProvider("'/etc/redhat-release'")

In [2]: redhat_release.content
Out[2]: ['Fedora release 29 (Twenty Nine)']

insights-inspect insights.parsers.hostname.Hostname

Executes insights-core and opens an iPython session with a parser object populated for insights.parsers.hostname.Hostname and all objects that the parser depends upon. The example session in iPython would look like this:

IPython Console Usage Info:

Enter 'Hostname.' and tab to get a list of properties
Example:
In [1]: Hostname.<property_name>
Out[1]: <property value>

To exit iPython enter 'exit' and hit enter or use 'CTL D'

Python 3.6.8 (default, Jan 27 2019, 09:00:23)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.3.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: Hostname
Out[1]: <insights.parsers.hostname.Hostname at 0x7f64e81fef60>

In [2]: Hostname.fqdn
Out[2]: 'myhostname.mydomainname.com'

In [3]: Hostname.domain
Out[3]: 'mydomainname.com'

insights-inspect insights.plugins.always_runs.report

Executes insights-core and opens an iPython session with a rule object populated for insights.plugins.always_runs.report() and all objects that the rule depends upon. The example session in iPython would look like this:

IPython Console Usage Info:

Enter 'report.' and tab to get a list of properties
Example:
In [1]: report.<property_name>
Out[1]: <property value>

To exit iPython enter 'exit' and hit enter or use 'CTL D'

Python 3.6.8 (default, Jan 27 2019, 09:00:23)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.3.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: report
Out[1]: {'kernel': 'this is junk', 'type': 'pass', 'pass_key': 'ALWAYS_FIRES'}

insights-inspect -c configfile.yaml insights.specs.Specs.redhat_release

Inspects the information collected by the COMPONENT using the configuration information provided in configfile.yaml. See CONFIG(5) for more information on the specifics of the configuration file options and format.

insights-inspect -D insights.specs.Specs.redhat_release

The -D option will produce a trace of the operations performed by insights-core as the COMPONENT is executed. The COMPONENT data will be output following all of the debugging output.

INSIGHTS-RUN(1)

NAME

insights-run - execute insights-core with a set of components on a system or archives

SYNOPSIS

insights-run [OPTIONS] [ARCHIVE]

DESCRIPTION

The insights-run command provides a tool to execute a set of components including datasources (SPECs), parsers, combiners and rules on a host system, or on one or more archive files.

OPTIONS

-b “spec=input_file[,spec=input_file,…]” --bare “spec=input_file[,spec=input_file,…]”

Specify that a particular input file should be used for a spec. This allows you to supply specific files as input to a run. For example, to use your own messages.log file as input instead of the messages.log file in an archive:

insights-run -b "messages=$HOME/data/messages.log" -p myinsights.myrules

The short name can be used for insights-core specs. If custom specs are used you must specify the full module path for the spec:

-b "myinsights.myspecs.specs.custom_messages=$HOME/data/messages.log"

When -b is used, [ARCHIVE] is ignored.
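The -b argument is a comma-separated list of spec=input_file pairs. As an illustration of the expected shape only (insights-core's internal parsing may differ, and the paths here are made up), the string decomposes like this:

```python
# Illustrative sketch: split a -b/--bare argument string into
# spec -> input-file pairs. The paths are hypothetical examples.
bare = "messages=/tmp/data/messages.log,hosts=/tmp/data/hosts"

# Split on commas, then on the first '=' only, so file paths
# containing '=' are preserved intact.
pairs = dict(item.split("=", 1) for item in bare.split(","))

print(pairs)
# {'messages': '/tmp/data/messages.log', 'hosts': '/tmp/data/hosts'}
```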

-c CONFIG --config CONFIG

Configure components.

--context CONTEXT

Execution Context. Defaults to HostContext if an archive isn’t passed. See Contexts for additional information.

-D --debug

Show debug level information.

-d --dropped

Show collected files that weren’t processed.

-F --fail-only

Show FAIL results only. Conflicts with ‘-m’ and ‘-f’; this option is dropped when used together with either of them.

-f FORMAT --format FORMAT

Output the results in an alternative format. The default format is ‘text’. Alternative formats are ‘_json’, ‘_yaml’, and ‘_markdown’.

-h --help

Show the command line help and exit.

-i INVENTORY --inventory INVENTORY

Ansible inventory file for cluster analysis. See INVENTORY(5) for more information about the options for format of the inventory file.

-k --pkg-query

Expression to select rules by package.

-m --missing

Show missing requirements.

-p PLUGINS --plugins PLUGINS

Comma-separated list (no spaces) of packages or modules containing plugins.

-s --syslog

Sends all log results to syslog. This is normally used when insights-core is run in other applications, or in a non-interactive process.

-t --tracebacks

Show stack traces when there are errors in components.

--tags EXPRESSION

An expression for selecting which loaded rules to run based on their tags.

-v --verbose

Verbose output.

EXAMPLES

insights-run -p examples.rules

Runs all of the rules implemented in the module examples.rules and its sub-modules by executing all required datasources against the local host system.

insights-run -p examples.rules insights-archive.tar.gz

Runs all of the rules implemented in the module examples.rules and its sub-modules by executing all required datasources against the insights archive.

insights-run -p examples.rules sosreport.tar.xz

Runs all of the rules implemented in the module examples.rules and its sub-modules by executing all required datasources against the sosreport.

INVENTORY(5)

NAME

inventory - inventory configuration file for insights-run execution

SYNOPSIS

filename.yaml

DESCRIPTION

The inventory file provides configuration of information for a collection of systems. The format of the file is Ansible Host File format.

EXAMPLES

A complete example of an inventory file in Ansible host file format:

# Insights core cluster rules topology file
# in Ansible host file format
# See: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#hosts-and-groups
#
# machine id's are listed under each [category] of systems
[master]
11111111


[infra]
2222222

[nodes]
3333333
4444444
