pysys.basetest module¶
Contains the BaseTest class that is subclassed by each individual testcase, provides most of the assertion methods, and itself subclasses pysys.process.user.ProcessUser.
For more information see the pysys.basetest.BaseTest API documentation.
- class pysys.basetest.BaseTest(descriptor, outsubdir, runner)[source]¶ Bases: pysys.process.user.ProcessUser
The base class for all PySys testcases.
BaseTest is the parent class of all PySys system testcases. The class provides utility functions for cross-platform process management and manipulation, test timing, and test validation. Any PySys testcase should inherit from the base test and provide an implementation of the abstract execute method defined in this class. Child classes can also override the setup, cleanup and validate methods of the class to provide custom setup and cleanup actions for a particular test, and to perform all validation steps in a single method should this prove logically simpler.
Execution of a PySys testcase is performed through an instance of the pysys.baserunner.BaseRunner class, or a subclass thereof. The base runner instantiates an instance of the testcase, and then calls the setup, execute, validate and cleanup methods of the instance. All processes started during the test execution are reference counted within the base test, and terminated within the cleanup method.
Validation of the testcase is through the assert* methods. Execution of many methods appends an outcome to the outcome data structure maintained by the ProcessUser base class, thus building up a record of the individual validation outcomes. Several potential outcomes are supported by the PySys framework (SKIPPED, BLOCKED, DUMPEDCORE, TIMEDOUT, FAILED, NOTVERIFIED, and PASSED) and the overall outcome of the testcase is determined using a precedence order of the individual outcomes.
Variables:
- mode (string) – The user defined mode the test is running within. Subclasses can use this in conditional checks to modify the test execution based upon the mode.
- input (string) – Full path to the input directory of the testcase. This is used both by the class and its subclasses to locate the default directory containing all input data to the testcase, as defined in the testcase descriptor.
- output (string) – Full path to the output sub-directory of the testcase. This is used both by the class and its subclasses to locate the default directory for output produced by the testcase. Note that this is the actual directory where all output is written, as modified from that defined in the testcase descriptor to accommodate the sub-directory used within this location to sandbox concurrent execution of the test, and/or to denote the run number.
- reference (string) – Full path to the reference directory of the testcase. This is used both by the class and its subclasses to locate the default directory containing all reference data to the testcase, as defined in the testcase descriptor.
- log (logging.Logger) – Reference to the logger instance of this class.
- project (Project) – Reference to the project details as set on the module load of the launching executable.
- descriptor (pysys.xml.descriptor.TestDescriptor) – Information about this testcase, with fields such as id, title, etc.
- testCycle (int) – The cycle in which this test is running. Numbering starts from 1 in a multi-cycle test run. The special value of 0 is used to indicate that this is not part of a multi-cycle run.
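For orientation, here is a minimal sketch of a testcase that subclasses BaseTest and overrides execute and validate. The output file name and its contents are illustrative placeholders rather than part of the API::

    from pysys.basetest import BaseTest

    class PySysTest(BaseTest):
        def execute(self):
            # Produce some output in the test output directory; a real test
            # would typically use self.startProcess to run the application
            # under test and capture its output here.
            with open(self.output + '/result.txt', 'w') as f:
                f.write('operation completed\n')

        def validate(self):
            # Each assert* call appends an outcome; the overall test result is
            # the worst individual outcome in precedence order.
            self.assertGrep('result.txt', expr='operation completed')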
- __init__(descriptor, outsubdir, runner)[source]¶ Create an instance of the BaseTest class.
Parameters: - descriptor – The descriptor for the test giving all test details
- outsubdir – The output subdirectory the test output will be written to
- runner – Reference to the runner responsible for executing the testcase
- addResource(resource)[source]¶ Add a resource which is owned by the test and is therefore cleaned up (deleted) when the test is cleaned up.
Deprecated - please use addCleanupFunction instead of this function.
- assertDiff(file1, file2, filedir1=None, filedir2=None, ignores=[], sort=False, replace=[], includes=[], encoding=None, abortOnError=False, assertMessage=None)[source]¶ Perform a validation assert on the comparison of two input text files.
This method performs a file comparison on two input files. The files are pre-processed prior to the comparison to either ignore particular lines, sort their constituent lines, replace matches to regular expressions in a line with an alternate value, or to only include particular lines. Should the files after pre-processing be equivalent, a PASSED outcome is added to the test outcome list, otherwise a FAILED outcome is added.
Although this method can perform transformation of the files directly, it is often easier to instead use copy to perform the transformation (e.g. stripping out timestamps, finding lines of interest etc) and then separately call assertDiff on the processed file. This makes it easier to generate a suitable reference file and to diagnose test failures.
Parameters:
- file1 – The basename of the first file used in the file comparison
- file2 – The basename of the second file used in the file comparison (often a reference file)
- filedir1 – The dirname of the first file (defaults to the testcase output subdirectory)
- filedir2 – The dirname of the second file (defaults to the testcase reference directory)
- ignores – A list of regular expressions used to denote lines in the files which should be ignored
- sort – Boolean flag to indicate if the lines in the files should be sorted prior to the comparison
- replace – List of tuples of the form ('regexpr', 'replacement'). For each regular expression in the list, any occurrences in the files are replaced with the replacement value prior to the comparison being carried out. This is often useful to replace timestamps in logfiles etc.
- includes – A list of regular expressions used to denote lines in the files which should be used in the comparison. Only lines which match an expression in the list are used for the comparison
- encoding – The encoding to use to open the file. The default value is None which indicates that the decision will be delegated to the getDefaultFileEncoding() method.
- abortOnError – Set to True to make the test immediately abort if the assertion fails.
- assertMessage – Overrides the string used to describe this assertion in log messages and the outcome reason.
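As a sketch of typical usage (the file names and the timestamp pattern are illustrative, not part of the API), a test might normalize timestamps in its output before comparing it against a reference file::

    def validate(self):
        # Compare the output against the reference copy, ignoring blank lines
        # and replacing timestamps so only meaningful content is compared.
        self.assertDiff('run.log', 'ref_run.log',
            ignores=['^$'],
            replace=[(r'\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}', '<timestamp>')])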
- assertEval(evalstring, abortOnError=False, **formatparams)[source]¶ Perform a validation based on substituting values into a .format() string with named {} placeholders and then evaluating it with eval.
Example use:
self.assertEval('os.path.getsize({filename}) > {origFileSize}', filename=self.output+'/file.txt', origFileSize=1000)
See also getExprFromFile, which is often used to extract a piece of data from a log file which can then be checked using this method.
Parameters:
- evalstring – a string that will be formatted using .format(…) with the specified parameters, and result in a failure outcome if not true. Parameters should be specified using {name} syntax, and quoting is not required as string values are automatically escaped using repr, e.g. 'os.path.getsize({filename}) > {origFileSize}'. Do not use an f-string instead of explicitly passing formatparams, as with an f-string this method will not know the names of the substituted parameters, which makes the intention of the assertion harder to understand from looking at the test output.
- formatparams – Named parameters for the format string, which can be of any type. Use descriptive names for the parameters to produce an assertion message that makes it really clear what is being checked. String parameters will be automatically passed through repr() before being formatted, so there is no need to perform additional quoting or escaping of strings.
- abortOnError – Set to True to make the test immediately abort if the assertion fails. Unless abortOnError=True this method only throws an exception if the format string is invalid; failure to execute the eval(…) results in a BLOCKED outcome but no exception.
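A sketch combining this with getExprFromFile (the log file name, expression and threshold are illustrative)::

    def validate(self):
        # Extract a numeric value from the log, then assert on it; descriptive
        # parameter names make the outcome message self-explanatory.
        messageCount = self.getExprFromFile('server.log', 'Processed ([0-9]+) messages')
        self.assertEval('int({messageCount}) >= {expectedMinimum}',
            messageCount=messageCount, expectedMinimum=10)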
- assertFalse(expr, abortOnError=False, assertMessage=None)[source]¶ Perform a validation assert on the supplied expression evaluating to false.
Consider using assertEval instead of this method, as it produces clearer assertion failure messages.
If the supplied expression evaluates to false, a PASSED outcome is added to the outcome list. Should the expression evaluate to true, a FAILED outcome is added.
Parameters:
- expr – The expression to check for the true | false value
- abortOnError – Set to True to make the test immediately abort if the assertion fails.
- assertMessage – Overrides the string used to describe this assertion in log messages and the outcome reason.
- assertGrep(file, filedir=None, expr='', contains=True, ignores=None, literal=False, encoding=None, abortOnError=False, assertMessage=None)[source]¶ Perform a validation assert on a regular expression occurring in a text file.
When the contains input argument is set to true, this method will add a PASSED outcome to the test outcome list if the supplied regular expression is seen in the file; otherwise a FAILED outcome is added. Should contains be set to false, a PASSED outcome will only be added should the regular expression not be seen in the file.
Parameters:
- file – The basename of the file used in the grep
- filedir – The dirname of the file (defaults to the testcase output subdirectory)
- expr – The regular expression to check for in the file (or a string literal if literal=True), for example ” ERROR .*”. For contains=False matches, you should end the expr with .* if you wish to include just the matching text in the outcome failure reason. If contains=False and expr does not end with a * then the entire matching line will be included in the outcome failure reason. For contains=True matches, the expr itself is used as the outcome failure reason.
- contains – Boolean flag to specify if the expression should or should not be seen in the file.
- ignores – Optional list of regular expressions that will be ignored when reading the file.
- literal – By default expr is treated as a regex, but set this to True to pass in a string literal instead.
- encoding – The encoding to use to open the file. The default value is None which indicates that the decision will be delegated to the getDefaultFileEncoding() method.
- abortOnError – Set to True to make the test immediately abort if the assertion fails.
- assertMessage – Overrides the string used to describe this assertion in log messages and the outcome reason.
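For example (the output file name and expressions are illustrative), a validate method commonly checks both for expected content and for the absence of errors::

    def validate(self):
        # Check the expected message appears, and that no errors were logged.
        self.assertGrep('myprocess.out', expr='Processing completed in [0-9]+ seconds')
        self.assertGrep('myprocess.out', expr=' (ERROR|FATAL) .*', contains=False)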
- assertLastGrep(file, filedir=None, expr='', contains=True, ignores=[], includes=[], encoding=None, abortOnError=False, assertMessage=None)[source]¶ Perform a validation assert on a regular expression occurring in the last line of a text file.
When the contains input argument is set to true, this method will add a PASSED outcome to the test outcome list if the supplied regular expression is seen in the file; otherwise a FAILED outcome is added. Should contains be set to false, a PASSED outcome will only be added should the regular expression not be seen in the file.
Parameters:
- file – The basename of the file used in the grep
- filedir – The dirname of the file (defaults to the testcase output subdirectory)
- expr – The regular expression to check for in the last line of the file
- contains – Boolean flag to denote if the expression should or should not be seen in the file
- ignores – A list of regular expressions used to denote lines in the file which should be ignored
- includes – A list of regular expressions used to denote lines in the file which should be used in the assertion.
- encoding – The encoding to use to open the file. The default value is None which indicates that the decision will be delegated to the getDefaultFileEncoding() method.
- abortOnError – Set to True to make the test immediately abort if the assertion fails.
- assertMessage – Overrides the string used to describe this assertion in log messages and the outcome reason.
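A brief sketch (the results file name and expressions are illustrative)::

    def validate(self):
        # Check the final meaningful line of the results file reports success,
        # skipping over blank lines and any trailing DEBUG output.
        self.assertLastGrep('results.txt', expr='Exit status: 0',
            ignores=['^$', ' DEBUG '])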
- assertLineCount(file, filedir=None, expr='', condition='>=1', ignores=None, encoding=None, abortOnError=False, assertMessage=None)[source]¶ Perform a validation assert on the number of lines in a text file matching a specific regular expression.
This method will add a PASSED outcome to the outcome list if the number of lines in the input file matching the specified regular expression evaluates to true when tested against the supplied condition.
Parameters:
- file – The basename of the file used in the line count
- filedir – The dirname of the file (defaults to the testcase output subdirectory)
- expr – The regular expression string used to match a line of the input file
- condition – The condition to be met for the number of lines matching the regular expression
- ignores – A list of regular expressions that will cause lines to be excluded from the count
- encoding – The encoding to use to open the file. The default value is None which indicates that the decision will be delegated to the getDefaultFileEncoding() method.
- abortOnError – Set to True to make the test immediately abort if the assertion fails.
- assertMessage – Overrides the string used to describe this assertion in log messages and the outcome reason.
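For example (the file name, expressions and counts are illustrative)::

    def validate(self):
        # Require exactly 5 completed requests and allow at most 2 warnings.
        self.assertLineCount('server.out', expr='Request completed', condition='==5')
        self.assertLineCount('server.out', expr=' WARN ', condition='<=2')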
- assertOrderedGrep(file, filedir=None, exprList=[], contains=True, encoding=None, abortOnError=False, assertMessage=None)[source]¶ Perform a validation assert on a list of regular expressions occurring in specified order in a text file.
When the contains input argument is set to true, this method will append a PASSED outcome to the test outcome list if the supplied regular expressions in the exprList are seen in the file in the order they appear in the list; otherwise a FAILED outcome is added. Should contains be set to false, a PASSED outcome will only be added should the regular expressions not be seen in the file in the order they appear in the list.
Parameters:
- file – The basename of the file used in the ordered grep
- filedir – The dirname of the file (defaults to the testcase output subdirectory)
- exprList – A list of regular expressions which should occur in the file in the order they appear in the list
- contains – Boolean flag to denote if the expressions should or should not be seen in the file in the order specified
- encoding – The encoding to use to open the file. The default value is None which indicates that the decision will be delegated to the getDefaultFileEncoding() method.
- abortOnError – Set to True to make the test immediately abort if the assertion fails.
- assertMessage – Overrides the string used to describe this assertion in log messages and the outcome reason.
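A sketch of checking lifecycle ordering (the log file name and messages are illustrative; unrelated lines in between are permitted)::

    def validate(self):
        # The expressions must match in this order, though not necessarily on
        # consecutive lines.
        self.assertOrderedGrep('app.log', exprList=[
            'Initializing',
            'Listening on port [0-9]+',
            'Shutdown complete',
        ])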
- assertPathExists(path, exists=True, abortOnError=False)[source]¶ Perform a validation that the specified file or directory path exists (or does not exist).
Parameters: - path – The path to be checked. This can be an absolute path or relative to the testcase output directory.
- exists – True if the path is asserted to exist, False if it should not.
- abortOnError – Set to True to make the test immediately abort if the assertion fails.
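For example (the paths are illustrative), relative paths are resolved against the test output directory::

    def validate(self):
        # Check the report was generated and that no core dump was produced.
        self.assertPathExists('reports/summary.xml')
        self.assertPathExists('core', exists=False)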
- assertThat(conditionstring, *args, **kwargs)[source]¶ [DEPRECATED] Perform a validation based on substituting values into an old-style % format string and then evaluating it with eval.
This method is deprecated in favour of assertEval, which produces more useful assertion failure messages and automatic quoting of strings.
The eval string should be specified as a format string, with zero or more %s-style arguments. This provides an easy way to check conditions that also produces clear outcome messages.
The safest way to pass arbitrary arguments of type string is to use the repr() function to add appropriate quotes and escaping.
e.g. self.assertThat('%d >= 5 or %s=="foobar"', myvalue, repr(mystringvalue))
Deprecated: Use assertEval instead.
Parameters:
- conditionstring – A string which will have any following args substituted into it and then be evaluated as a boolean python expression.
- args – Zero or more arguments to be substituted into the format string
- abortOnError – Set to True to make the test immediately abort if the assertion fails.
- assertMessage – Overrides the string used to describe this assertion in log messages and the outcome reason.
- assertTrue(expr, abortOnError=False, assertMessage=None)[source]¶ Perform a validation assert on the supplied expression evaluating to true.
Consider using assertEval instead of this method, as it produces clearer assertion failure messages.
If the supplied expression evaluates to true, a PASSED outcome is added to the outcome list. Should the expression evaluate to false, a FAILED outcome is added.
Parameters:
- expr – The expression, as a boolean, to check for the True | False value
- abortOnError – Set to True to make the test immediately abort if the assertion fails.
- assertMessage – Overrides the string used to describe this assertion in log messages and the outcome reason.
- cleanup()[source]¶ Cleanup method which performs cleanup actions after execution and validation of the test.
The cleanup method performs actions to stop all processes started in the background and not explicitly killed during the test execution. It also stops all process monitors running in separate threads, and any instances of the manual tester user interface.
Should a custom cleanup for a subclass be required, use addCleanupFunction instead of overriding this method.
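A minimal sketch of registering custom cleanup without overriding this method (the scratch directory is an illustrative resource)::

    import os, shutil

    def execute(self):
        scratch = os.path.join(self.output, 'scratch')
        os.makedirs(scratch)
        # Register cleanup where the resource is created; cleanup functions are
        # invoked during BaseTest.cleanup even if later steps fail.
        self.addCleanupFunction(lambda: shutil.rmtree(scratch, ignore_errors=True))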
- execute()[source]¶ Execute method which must be overridden to perform the test execution steps.
Raises: NotImplementedError – Raised should the method not be overridden
- getDefaultFileEncoding(file, **xargs)[source]¶ Specifies what encoding should be used to read or write the specified text file. The default implementation for BaseTest delegates to the runner, which in turn gets its defaults from the pysysproject.xml configuration.
See pysys.process.user.ProcessUser.getDefaultFileEncoding for more details.
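As a sketch, a subclass could override this to force a specific encoding for certain files (the .json pattern is illustrative; see the ProcessUser documentation for the exact contract)::

    class PySysTest(BaseTest):
        def getDefaultFileEncoding(self, file, **xargs):
            # Use UTF-8 for the JSON files this test reads and writes;
            # otherwise fall back to the runner/project defaults.
            if file.endswith('.json'): return 'utf-8'
            return super(PySysTest, self).getDefaultFileEncoding(file, **xargs)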
- pythonDocTest(pythonFile, pythonPath=None, output=None, environs=None, **kwargs)[source]¶ Execute the Python doctests that exist in the specified python file; adds a FAILED outcome if any do not pass.
Parameters: - pythonFile – the absolute path to a python file name.
- pythonPath – a list of directories to be added to the PYTHONPATH.
- output – the output file; if not specified, ‘%s-doctest.txt’ is used with the basename of the python file.
- kwargs – extra arguments are passed to startProcess/startPython.
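A brief sketch (the module name is illustrative)::

    def execute(self):
        # Run the doctests in a module from the test's Input directory; a
        # FAILED outcome is added if any doctest does not pass.
        self.pythonDocTest(self.input + '/mymodule.py',
            pythonPath=[self.input], output='mymodule-doctest.txt')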
- reportPerformanceResult(value, resultKey, unit, toleranceStdDevs=None, resultDetails=None)[source]¶ Reports a new performance result, with an associated unique key that identifies it for comparison purposes.
Where possible it is better to report the rate at which an operation can be performed (e.g. throughput) rather than the total time taken, since this allows the number of iterations to be increased.
Parameters: - value – The value to be reported. Usually this is a float or integer, but string is also permitted.
- resultKey – A unique string that fully identifies what was measured, which will be used to compare results from different test runs. For example "HTTP transport message sending throughput with 3 connections in SSL mode". The resultKey must be unique across all test cases and modes. It should be fully self-describing (without the need to look up extra information such as the associated testId). Do not include the test id or units in the resultKey string. It must be stable across different runs, so cannot contain process identifiers, date/times or other numbers that will vary. If possible resultKeys should be written so that related results will be together when all performance results are sorted by resultKey, which usually means putting general information near the start of the string and specifics (throughput/latency, sending/receiving) towards the end of the string. It should be as concise as possible (given the above).
- unit – Identifies the unit that the value is measured in, including whether bigger numbers are better or worse (used to determine improvement or regression). Must be an instance of pysys.utils.perfreporter.PerformanceUnit. In most cases, use pysys.utils.perfreporter.PerformanceUnit.SECONDS (e.g. for latency) or pysys.utils.perfreporter.PerformanceUnit.PER_SECOND (e.g. for throughput); the string literals 's' and '/s' can be used as a shorthand for those PerformanceUnit instances.
- toleranceStdDevs – (optional) A float that indicates how many standard deviations away from the mean a result needs to be to be considered a regression.
- resultDetails – (optional) A dictionary of detailed information about this specific result and/or test that should be recorded together with the result, for example detailed information about what mode or versions the test is measuring. Note this is separate from the global run details shared across all tests in this PySys execution, which can be customized by overriding pysys.utils.perfreporter.CSVPerformanceReporter.getRunDetails.
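A sketch of reporting a throughput result (the resultKey and the computed value are illustrative; '/s' is the shorthand for PerformanceUnit.PER_SECOND, where bigger numbers are better)::

    def validate(self):
        # In a real test this value would be computed from timed output.
        messagesPerSecond = 1234.5
        self.reportPerformanceResult(messagesPerSecond,
            'Message sending throughput with 3 connections in SSL mode', '/s',
            resultDetails={'sslEnabled': True})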
- setKeywordArgs(xargs)[source]¶ Set the xargs as data attributes of the test class.
Values in the xargs dictionary are set as data attributes using the builtin setattr method. Thus an xargs dictionary of the form {'foo': 'bar'} will result in a data attribute of the form self.foo with value bar. This is used so that subclasses can define default values of data attributes, which can be overridden on instantiation e.g. using the -X options to the runTest.py launch executable.
If an existing attribute is present on this test class (typically a static class variable) and it has a type of bool, int or float, then any -X options will be automatically converted from string to that type. This facilitates providing default values for parameters such as iteration count or timeouts as static class variables with the possibility of overriding on the command line, for example -Xiterations=123.
Parameters: xargs – A dictionary of the user defined extra arguments
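A sketch of the typical pattern (the attribute name is illustrative): declare a typed class-level default, then override it on the launcher command line, e.g. with -Xiterations=500::

    class PySysTest(BaseTest):
        # Because this default is an int, a -Xiterations=500 override is
        # automatically converted from string to int.
        iterations = 100

        def execute(self):
            for i in range(self.iterations):
                pass  # ... perform the operation under test on each iteration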
- setup()[source]¶ Setup method which may optionally be overridden to perform custom setup operations prior to test execution.
- startBackgroundThread(name, target, kwargsForTarget={})[source]¶ Start a new background thread that will invoke the specified target function.
The target function will be invoked with the specified keyword arguments and also the special keyword argument stopping, which is a Python threading.Event instance that can be used to detect when the thread has been requested to terminate. It is recommended to use this event instead of time.sleep to avoid waiting when the thread is meant to be finishing.
Example usage::
    class PySysTest(BaseTest):
        def dosomething(self, stopping, log, param1, pollingInterval):
            log.debug('Message from my thread')
            while not stopping.is_set():
                # ... do stuff here
                # sleep for pollingInterval, waking up if requested to stop
                # (hint: keep this wait small to stay responsive to Ctrl+C)
                if stopping.wait(pollingInterval): return

        def execute(self):
            t = self.startBackgroundThread('DoSomething1', self.dosomething,
                {'param1': True, 'pollingInterval': 1.0})
            # ...
            t.stop()  # requests the thread to stop but doesn't wait for it
            t.join()
Note that BaseTest is not thread-safe (apart from addOutcome, startProcess and the reading of fields like self.output that don't change), so if you need to use its fields or methods from background threads, be sure to add your own locking to the foreground and background threads in your test, including any custom cleanup functions.
The BaseTest will stop and join all running background threads at the beginning of cleanup. If a thread doesn't stop within the expected timeout period a constants.TIMEDOUT outcome will be appended. If a thread's target function raises an Exception then a constants.BLOCKED outcome will be appended during cleanup or when it is joined.
Parameters:
- name – A name for this thread that concisely describes its purpose. Should be unique within this test/owner instance. A prefix indicating the test/owner will be added to the provided name.
- target – The function or instance method that will be executed on the background thread. The function must accept a keyword argument named stopping in addition to whichever keyword arguments are specified in kwargsForTarget.
- kwargsForTarget – A dictionary of keyword arguments that will be passed to the target function.
Returns: A pysys.utils.threadutils.BackgroundThread instance wrapping the newly started thread.
Return type: pysys.utils.threadutils.BackgroundThread
- startManualTester(file, filedir=None, state=11, timeout=1800)[source]¶ Start the manual tester.
The manual tester user interface (UI) is used to describe a series of manual steps to be performed to execute and validate a test. Only a single instance of the UI can be running at any given time, and can be run either in the FOREGROUND (method will not return until the UI is closed or the timeout occurs) or in the BACKGROUND (method will return straight away so automated actions may be performed concurrently). Should the UI be terminated due to expiry of the timeout, a TIMEDOUT outcome will be added to the outcome list. The UI can be stopped via the stopManualTester method. An instance of the UI not explicitly stopped within a test will automatically be stopped via the cleanup method of the BaseTest.
Parameters:
- file – The name of the manual test xml input file (see pysys.xml.manual for details on the DTD)
- filedir – The directory containing the manual test xml input file (defaults to the output subdirectory)
- state – Start the manual tester either in the FOREGROUND or BACKGROUND (defaults to FOREGROUND)
- timeout – The timeout period after which to terminate a manual tester running in the FOREGROUND
- startProcessMonitor(process, interval=5, file=None, handlers=[], **pmargs)[source]¶ Start a background thread to monitor process statistics such as memory and CPU usage.
All process monitors are automatically stopped on completion of the test by BaseTest.cleanup, but you may also wish to explicitly stop your process monitors using stopProcessMonitor before you begin shutting down processes at the end of a test to avoid unwanted spikes and noise in the last few samples of the data.
You can specify a file and/or a list of handlers. If you use file, a default pysys.process.monitor.ProcessMonitorTextFileHandler instance is created to produce tab-delimited lines with default columns specified by pysys.process.monitor.ProcessMonitorTextFileHandler.DEFAULT_COLUMNS. If you wish to customize this for an individual test, create your own ProcessMonitorTextFileHandler instance and pass it to handlers instead. Additional default columns may be added in future releases.
Parameters:
- process – The process handle returned from the startProcess method.
- interval – The polling interval in seconds between collection of monitoring statistics.
- file – The name of a tab separated values (.tsv) file to write to, for example 'monitor-myprocess.tsv'. A default pysys.process.monitor.ProcessMonitorTextFileHandler instance is created if this parameter is specified, with default columns from pysys.process.monitor.ProcessMonitorTextFileHandler.DEFAULT_COLUMNS.
- handlers – A list of pysys.process.monitor.BaseProcessMonitorHandler instances (such as pysys.process.monitor.ProcessMonitorTextFileHandler), which will process monitoring data every polling interval. This can be used for recording results (for example in a file) or for dynamically analysing them and reporting problems.
- pmargs – Keyword arguments to allow advanced parameterization of the process monitor class, which will be passed to its constructor. It is an error to specify any parameters not supported by the process monitor class on each platform.
Returns: An object representing the process monitor (pysys.process.monitor.BaseProcessMonitor).
Return type: pysys.process.monitor.BaseProcessMonitor
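A sketch of typical usage (the process handle and .tsv file name are illustrative)::

    def execute(self):
        process = self.startProcess(...)  # start the process under test
        monitor = self.startProcessMonitor(process, interval=1.0,
            file='monitor-myprocess.tsv')
        # ... run the workload being measured ...
        # Stop monitoring before shutting the process down, to avoid noise in
        # the final samples.
        self.stopProcessMonitor(monitor)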
- stopProcessMonitor(monitor)[source]¶ Request a process monitor to stop.
Does not wait for it to finish stopping.
All process monitors are automatically stopped and joined during cleanup, however you may wish to explicitly stop your process monitors before you begin shutting down processes at the end of a test to avoid unwanted spikes and noise in the last few samples of the data.
Parameters: monitor – The process monitor handle returned from the startProcessMonitor method
- validate()[source]¶ Validate method which may optionally be overridden to group all validation steps.