Welcome¶
Specter is a Python testing framework inspired by RSpec and Jasmine. The library was created out of a desire to have a relatively flexible Python testing framework that adopted a more code-centric approach to BDD.
Specter is open-source and is available on GitHub. We love contributions!
Note
Questions? Join us on Freenode on the #specterframework channel
Documentation¶
Using Specter¶
Installation¶
You can download Specter from PyPI for easy installation. It is recommended that you use pip or easy_install to install the package:
pip install specter
You may consider using virtualenv or pyenv to create isolated Python environments.
Setup¶
By default, Specter looks within the current directory for a folder called “spec” which contains your test files.
Example Default Test Structure:
Project_Folder
└── spec
    ├── submodule
    │   ├── another.py
    │   └── __init__.py
    ├── example.py
    └── __init__.py
If you do not wish to use the default folder, you can specify an alternative using the "--search" command-line argument:
specter --search /path/to/folder
Runner¶
Specter allows for quick and easy execution of your tests by just calling ‘specter’ within your project folder:
specter
___
_/ @@\
~- ( \ O/__ Specter
~- \ \__) ~~~~~~~~~~
~- / \ Keeping the Bogeyman away from your code!
~- / _\
~~~~~~~~~
ExampleSpec
∟ this is a test spec
------------------------
------- Summary --------
Pass | 1
Skip | 0
Fail | 0
Error | 0
Incomplete | 0
Test Total | 1
- Expectations | 1
------------------------
Command-line Arguments¶
Specter is a spec-based testing library to help facilitate BDD in Python.
Argument | Description |
---|---|
-h, --help | Show console help |
--search PATH | Specifies the search path for spec files |
--no-art | Disables the ASCII art on the runner |
--coverage | Enables coverage.py integration. Configure using .coveragerc |
--select-module | Selects a module path to run. Ex: sample.TestClass |
--select-tests | Selects tests to run by name (comma-delimited list) |
--select-by-metadata | Selects tests to run by specifying a list of key=value pairs |
--xunit-results | Outputs xUnit XML results into a specified file |
--json-results | Saves Specter JSON results into a specified file |
--no-color | Disables ASCII color codes |
--ascii-only | Disables color and uses only ASCII characters (useful for CI systems) |
--parallel | Activates parallel testing mode |
--num-processes | Specifies the number of processes to use in parallel mode (default: 6) |
--show-all-expects | Displays all expectations for test cases |
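For example, a run that uses a custom search path and writes xUnit results for a CI job might look like this (the path and file name are only illustrative):
specter --search tests/spec --no-art --xunit-results results.xml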
Writing Specter Tests¶
Naming Rules¶
Most frameworks require you to start your test with a given prefix such as "test_". Specter does not impose any prefix rules on test functions. We believe it is better to give developers more flexibility in naming so that their test names better describe what they are actually testing. However, Specter does have a few rules that should be followed, illustrated in the sketch after this list.
- All helper functions should start with an underscore (_). Just as Python treats a single underscore as “protected”, so does Specter.
- “before_each”, “after_each”, “before_all”, and “after_all” are reserved for setup functions on your test suites (Specs).
- Currently, “serialize” and “execute” are also treated as reserved names.
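A minimal sketch of these rules in practice (the class, helper, and test names are made up for illustration):
from specter import Spec, expect

class NamingExample(Spec):
    """Hypothetical spec illustrating the naming rules above."""

    def _build_payload(self):
        # Underscore prefix: treated as a helper, never collected as a test.
        return {'key': 'value'}

    def before_each(self):
        # Reserved name: runs before every test in this Spec.
        self.payload = self._build_payload()

    def it_builds_the_expected_payload(self):
        # No "test_" prefix required; name the test after the behavior.
        expect(self.payload['key']).to.equal('value')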
Writing Tests¶
Writing a test in Specter is simple.
- Create a class which extends Spec
- Create a function in that class that calls expect or require once
Example:
from specter import Spec, expect

class SampleSpec(Spec):
    """Docstring describing the specification"""

    def it_can_create_an_object(self):
        """ Test docstring"""
        expect('something').to.equal('something')
Test Setup / Teardown¶
Example:
from specter import Spec, expect

class SampleSpec(Spec):
    """Docstring describing the specification"""

    # Called once before any tests or child Specs are called
    def before_all(self):
        pass

    # Called after all tests and child Specs have been called
    def after_all(self):
        pass

    # Called before each test
    def before_each(self):
        pass

    # Called after each test
    def after_each(self):
        pass

    def it_can_create_an_object(self):
        """ Test docstring"""
        expect('something').to.equal('something')
Nested Tests¶
Specter utilizes the concept of nested test suites. This allows you to provide a clearer picture of what you are testing within your test suites. Those who have used Jasmine or RSpec should be relatively familiar with this concept from their implementations of Spec.
Within Specter you can create a nested test description (suite) in the form of a class that inherits from the Spec class.
Example:
from specter import Spec, expect

class SampleSpec(Spec):

    class OtherFunctionalityOfSample(Spec):
        """ Docstring goes here """

        def it_should_do_something(self):
            """ Test Docstring """
            expect('trace').to.equal('trace')
Test Fixtures¶
In Specter, a test fixture is defined as a test base class that is not treated as a runnable test specification. This allows you to build reusable test suites through inheritance. To facilitate this, there is a decorator named “fixture” available in the spec module.
Example:
from specter import Spec, fixture, expect

@fixture
class ExampleTestFixture(Spec):

    def _random_helper_func(self):
        pass

    def sample_test(self):
        """This test will be on every Spec that inherits this fixture"""
        expect('something').to.equal('something')

class UsingFixture(ExampleTestFixture):

    def another_test(self):
        expect('this').not_to.equal('that')
Expected Output (using --show-all-expects):
UsingFixture
∟ sample test
✔ 'something' to equal 'something'
∟ another test
✔ 'this' not to equal 'that'
Test State and Inheritance¶
Each test spec executes its tests under a clean state that does not contain the attributes of the actual Spec class. This means users do not have to worry about conflicting with the Specter infrastructure. However, the drawback is that the instance of “self” within a test is not actually an instance of the type defined in your hardcoded tests. This makes calling super a little unconventional, as you can see in the example below.
from specter import Spec

class FirstSpec(Spec):

    def before_all(self):
        # Do something
        pass

class SecondSpec(FirstSpec):

    def before_all(self):
        # self is actually an instance of the state object
        # and not an instance of SecondSpec
        super(type(self), self).before_all()

        # Do something else
As you can see in the example, you can still inherit the attributes of your other spec classes. However, you just have to keep in mind that “self” is actually the state object and not the actual instance of the spec.
Assertions / Expectations¶
Assertions or expectations in Specter attempt to be as expressive as possible. This allows for cleaner and more expressive tests, which can help with overall code awareness and effectiveness. It is important to note that an expectation does not fast-fail the test; the test continues executing even if the expectation fails.
- Expectations follow this flow:
  expect [target object] [to or not_to] [comparison] [expected object]
- If you were expecting a status_code object to equal 200, you would write:
  expect(request.status_code).to.equal(200)
Available Comparisons¶
- equal(expected_object)
- almost_equal(expected_number, places)
- be_greater_than(expected_object)
- be_less_than(expected_object)
- be_none()
- be_true()
- be_false()
- be_a(expected_object_type)
- be_an_instance_of(expected_object_type)
- be_in(expected_object)
- contain(expected_object)
- raise_a(expected_exception_type)
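A brief sketch using a few of the comparisons above (the values are arbitrary and purely illustrative):
from specter import Spec, expect

class ComparisonExamples(Spec):

    def it_demonstrates_a_few_comparisons(self):
        expect(4).to.be_greater_than(2)
        expect(None).to.be_none()
        expect('a').to.be_in(['a', 'b', 'c'])
        expect([1, 2, 3]).to.contain(2)
        # almost_equal(expected_number, places)
        expect(3.14159).to.almost_equal(3.1416, 4)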
Asserting a raised exception¶
expect(example_func, ['args_here']).to.raise_a(Exception)
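For instance, a small sketch following that call pattern (the validate function is defined purely for illustration):
from specter import Spec, expect

class RaiseExamples(Spec):

    def it_raises_on_negative_input(self):
        def validate(value):
            # Illustrative function that raises for bad input
            if value < 0:
                raise ValueError('value must not be negative')

        # Pass the callable and its argument list to expect()
        expect(validate, [-1]).to.raise_a(ValueError)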
Fast-fail expectations¶
In some cases, you need to stop the execution of a test immediately upon the failure of an expectation. In Specter, we call these requirements. While they follow the same flow as expectations, the name for this action is “require”.
Let's say you are writing a test that checks for valid content within a request body. You could do something like:
expect(request.status_code).to.equal(200)
require(request.content).not_to.be_none()
# ... continue processing content
Utilizing this concept can give you better visibility into an issue when a test fails. For example, if the request status code in the example above were 202 but the rest of the test passed, you could instantly see that the problem is with the response code and not the body of the message. This can save you quite a bit of time, especially if you are testing web APIs.
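As a sketch, a complete test using require might look like the following. The fake response class stands in for whatever client call you would actually make, and the example assumes require is importable from specter alongside expect:
from specter import Spec, expect, require

# Hypothetical stand-in for a real HTTP client response
class _FakeResponse(object):
    status_code = 200
    content = 'expected_value'

class ResponseValidation(Spec):

    def it_returns_valid_content(self):
        response = _FakeResponse()

        expect(response.status_code).to.equal(200)
        # Stop immediately if there is no body to inspect
        require(response.content).not_to.be_none()

        # Safe to inspect the body now that the requirement has passed
        expect(response.content).to.contain('expected_value')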
Data-Driven Tests¶
Often you find that you need to run numerous types of data through a given test case. Rather than duplicating your tests a large number of times, you can utilize the concept of Data-Driven Tests. This allows you to subject your test cases to a specified dataset.
Example:
from specter import DataSpec, expect

class ExampleData(DataSpec):
    DATASET = {
        'test': {'data_val': 'sample_text'},
        'second_test': {'data_val': 'sample_text2'}
    }

    def sample_data(self, data_val):
        expect(data_val).to.equal('sample_text')
This dataset will produce a Spec with two tests, “sample_data_test” and “sample_data_second_test”, each receiving its corresponding value through the data_val parameter.
This would produce a console output similar to (using --show-all-expects):
Example Data
∟ sample data test
✔ "sample_text" to equal "sample_text"
∟ sample data second test
✘ "sample_text2" to equal "sample_text"
Metadata in Data-Driven¶
There are two different methods of adding metadata to your data-driven tests. The first method is to assign metadata to the entire set of data-driven tests.
from specter import DataSpec, expect, metadata

class ExampleData(DataSpec):
    DATASET = {
        'test': {'data_val': 'sample_text'},
        'second_test': {'data_val': 'sample_text'}
    }

    @metadata(test='smoke')
    def sample_data(self, data_val):
        expect(data_val).to.equal('sample_text')
This will assign the metadata attributes to all tests that are generated from the decorated instance method. The second way of assigning metadata is by creating a more complex dataset item. A complex dataset item contains two keys: args and meta.
from specter import DataSpec, expect

class ExampleData(DataSpec):
    DATASET = {
        'test': {'data_val': 'sample_text'},
        'second_test': {'args': {'data_val': 'sample_text'}, 'meta': {'network': 'yes'}}
    }

    def sample_data(self, data_val):
        expect(data_val).to.equal('sample_text')
By doing this, only the ‘second_test’ will contain metadata. It is important to remember that you can use this format in conjunction with standard metadata tags as mentioned above.
Skipping Tests¶
Specter provides a few different ways of skipping tests.
- specter.skip(reason)
  The skip decorator allows for you to always bypass a test.
  Parameters: reason – Expects a string
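  For example (a minimal sketch; the reason string and test name are arbitrary):
  from specter import Spec, skip

  class SkipExamples(Spec):

      @skip('Feature flag is not enabled in this environment')
      def it_checks_an_unreleased_feature(self):
          pass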
- specter.skip_if(condition, reason=None)
  The skip_if decorator allows for you to bypass a test on conditions.
  Parameters:
  - condition – Expects a boolean
  - reason – Expects a string
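  A sketch of a conditional skip (the platform check is just an example condition):
  import sys

  from specter import Spec, skip_if

  class ConditionalSkipExamples(Spec):

      @skip_if(sys.platform.startswith('win'), 'Not supported on Windows')
      def it_checks_posix_only_behavior(self):
          pass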
- specter.incomplete()
  The incomplete decorator behaves much like a normal skip; however, tests that are marked as incomplete get tracked under a different metric. This allows for you to create a skeleton around all of your features and specifications, and track what tests have been written and what tests are left outstanding.

  # Example of using the incomplete decorator
  @incomplete
  def it_should_do_something(self):
      pass
Adding Metadata to Tests¶
Specter allows you to tag tests with metadata. The primary purpose of this is to carry miscellaneous information along with your tests. At some point in the future, Specter will be able to output this information for consumption and processing. Currently, however, metadata can be used to select which tests you want to run.
- specter.metadata(**key_value_pairs)
  The metadata decorator allows for you to tag specific tests with key/value data for run-time processing or reporting. The common use case is to use metadata to tag a test as a positive or negative test type.

  # Example of using the metadata decorator
  @metadata(type='negative')
  def it_shouldnt_do_something(self):
      pass
Parallel Testing in Specter¶
For those who need their tests run in parallel processes, Specter provides a parallel mode. Parallel mode distributes all tests across multiple Python processes. This is especially useful if you have a vast quantity of tests or many tests that take a long time to run.
Usage¶
Activating parallel mode is simple:
specter --parallel
Note
Keep in mind that you can tune how many processes are spawned through the --num-processes argument.
Differences using the parallel runner¶
Reporting¶
One of the key differences you'll notice is that the reporting is entirely different. Due to the nature of parallel testing, the normal verbose/pretty Specter output is not really feasible. As a result, we currently provide a simple “dot” reporter, with failed output in the pretty format. This allows us to maintain a decent level of performance for users with very large numbers of tests. In the future, we plan on having specialty reporters for the parallel runner that provide more information.
Note
The xUnit reporter is available in parallel mode as well.
State¶
Due to the nature of parallelism, sharing live state between tests through the class instance is very costly and quite impractical. As a result, Specter does not sync state between tests during test execution. However, each Spec provides before_all() and after_all() functions, which are called before and after test execution, so that state can be set up and carried into the tests.
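As a sketch, state needed by the tests can be prepared in before_all so each test reads from it rather than depending on state written by other tests (the attribute name and value here are illustrative):
from specter import Spec, expect

class ParallelFriendlySpec(Spec):

    def before_all(self):
        # Prepared once for this Spec before its tests run
        self.base_url = 'http://localhost:8080'

    def it_knows_its_base_url(self):
        expect(self.base_url).to.contain('localhost')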
Reporting¶
Types of reporting¶
What is a reporter? A Specter reporter is a module that interacts with test and spec events to produce human or machine-readable output. Currently, Specter supports four types of reporters out of the box:
- Console - Serial BDD-style output.
- Dots - Parallel output that displays a dot per test.
- xUnit - Produces xUnit compatible XML for CI tools.
- Specter - Produces JSON in the Specter format for storage and CI tools.
Console¶
This is the default reporter that is used when tests are run serially. It gives the user a familiar BDD-style output.
Example Output:
SSH Client Verification
∟ can create an instance
∟ can connect
∟ sets key policy on client
∟ can close
∟ can execute a command
∟ can connect with args
∟ unassigned client auto creates a paramiko client
------------------------
------- Summary --------
Pass | 7
Skip | 0
Fail | 0
Error | 0
Incomplete | 0
Test Total | 7
- Expectations | 14
------------------------
Dots¶
This is the default reporter that is used when tests are run in parallel mode. Due to the nature of parallel execution, it is impossible to build a BDD-style report in real-time. As a result, Specter defaults to a simple dot-based report. A single dot represents a passed test; whereas a failed test is marked with an “x”.
Example Output:
.......xx.x
11 Test(s) Executed!
Specter Format¶
The Specter format is designed to allow you to easily store and retrieve information about your tests. Since the format is just an expected structure around JSON, it is especially useful when combined with a document store such as MongoDB.
Root:¶
Attribute | Note |
---|---|
format | Just a helper for parsers. It will be the value “specter” |
version | Aid for parsers to know what version of the format we are running. |
specs | A list of spec objects that contain all of the test information |
Example:
{
"format": "specter",
"version": "0.1.0",
"specs": [...]
}
Spec:¶
*attributes in italics are optional
Attribute | Note |
---|---|
id | The UUID generated during the test-run for the Spec |
name | This is considered the “human-readable” name |
class_path | The fully qualified class path for the Spec |
doc | Docstring associated with the Spec |
cases | List of test case objects attached to the Spec |
specs | List of child spec objects attached to the Spec |
Example:
{
"id": "35e9900f-9325-4e78-928d-593044c1a4f0",
"name": "Key Based",
"class_path": "spec.rift.clients.ssh.SSHCredentials.KeyBased",
"doc": null,
"cases": [...],
"specs": [...]
}
Case:¶
*attributes in italics are optional
Attribute | Note |
---|---|
id | The UUID generated during the test-run for the Case |
name | This is considered the “human-readable” name |
raw_name | The actual test name |
start | The exact time when the test started (expressed in seconds since the epoch) |
end | The exact time when the test ended (expressed in seconds since the epoch) |
skipped | Boolean to indicate if the test was skipped for some reason |
metadata | Dictionary containing the metadata that was attached to the test case |
expects | List of expectation / requirement objects executed on the test |
success | Is true when all expects successfully pass without errors or failures |
skip_reason | String specifying the reason for why the test was skipped |
doc | Docstring associated with the test case |
error | Contains the error traceback associated with a test |
execute_kwargs | During Data-Driven tests, this contains the kwargs used during execution |
Example:
{
"id": "1aa40954-d207-4264-a80f-e05605f57bd3",
"name": "can generate a paramiko key",
"raw_name": "can_generate_a_paramiko_key",
"start": 1410748788.813371,
"end": 1410748788.81438,
"success": true,
"skipped": false,
"metadata": {},
"expects": [...]
}
Expect:¶
Attribute | Note |
---|---|
assertion | The stringified version of the test assertion |
required | Indicates if the expectation was a requirement, i.e. require(...) |
success | Indicates the pass/fail status of the expectation |
Example:
{
"assertion": "sample to equal [1]",
"required": false,
"success": true
}
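As a sketch of consuming this format, the snippet below walks a results file produced with --json-results (the file name is an assumption) and tallies passing cases using only the documented keys:
import json

# Load a Specter results file written via: specter --json-results results.json
with open('results.json') as results_file:
    report = json.load(results_file)

def iter_cases(specs):
    """Yield every case object from the nested spec structure."""
    for spec in specs:
        for case in spec.get('cases', []):
            yield case
        # Child specs are nested under the 'specs' key
        for case in iter_cases(spec.get('specs', [])):
            yield case

cases = list(iter_cases(report['specs']))
passed = sum(1 for case in cases if case.get('success'))
print('{0}/{1} cases passed'.format(passed, len(cases)))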
Release Notes¶
Release: 0.6.0¶
Features and bug fixes¶
- Replacing Astor with ast-decompiler
- Adding support for Python 3.5 and 3.6 - gh-#92
Release: 0.5.2¶
Special thanks to Alexander Shchapov and alin23 for their contributions to this release!
Features and bug fixes¶
- Sorting test cases before execution to ensure consistent ordering
- Fixing bug where a failed expect message can sometimes report on the wrong expect.
Release: 0.5.1¶
Special thanks to Mark Church for his contribution to this release!
Features and bug fixes¶
- Switching to bumpversion
- Fixing xunit reporting output issue - gh-#83
Release: 0.5.0¶
Special thanks to Alexander Shchapov for his contribution to this release!
Features and bug fixes¶
- Adding support for the almost_equal() expectation.
- Fixing issue where expect messages weren’t getting captured across lines - gh-#47
- Fixing issue with --select-tests where it didn't properly select the correct tests - gh-#69
Release: 0.4.2¶
Features and bug fixes¶
- Fixing regression in raises_a expectation - gh-#67
- Adding support for the --ascii-only cli argument - gh-#68
Release: 0.4.0¶
Features and bug fixes¶
- Adding support to execute individual tests - gh-#65
- Fixing issues with using a negative raises_a conditional - gh-#63
Release: 0.3.0¶
Special thanks to Paul Glass and Stas Sușcov for their contributions to this release!
Features and bug fixes¶
- Switched to Pike for module loading and searching
- Added support for teardown hooks - gh-#61
- Fixed showing exceptions from old-style classes - gh-#57
- Support dataset values to contain lists of dicts - gh-#56
Release: 0.2.1¶
Features and bug fixes¶
- Fixed --num-processes error - gh-#52
- Added extra error handling around except() - gh-#51
- Added Python 3.4 job to CI
Release: 0.2.0¶
Features and bug fixes¶
- Switching to only showing error/failed expectations by default. The old style of showing all expectations is still available through the --show-all-expects cli argument - gh-#46
- Adding support for the Specter report JSON format - gh-#12
- Fixing the summary report colors to reflect the actual test results. - gh-#44
- Added the ability for reporters to add their own cli arguments
- Breaking reporter contract by switching from subscribe_to_describe(self, describe) to subscribe_to_spec(self, spec). This is due to the slow removal of the “describe” terminology in Specter.
Release: 0.1.14¶
Features and bug fixes¶
- Fixed Coverage.py integration - gh-#36 gh-#40
- Fixed coverage reporting in parallel mode - gh-#40
- Fixed duplicated traceback information on errors - gh-#42
- Fixed difficult to trace error messages with expected parameters - gh-#41
- Added support for execution of specter through Coverage (i.e. coverage run -m specter)
Release: 0.1.13¶
Features and bug fixes¶
- Added clean test state per suite - gh-#37 gh-#13
- Added basic parallel testing - gh-#3
- Fixed xUnit test class path
- Fixed standard reporter to not be red all the time - gh-#28
- Fixed be_in() assertion - gh-#34
- Fixed metadata decorator not re-raising assertions - gh-#35
Release: 0.1.12¶
Features and bug fixes¶
- Fixing packaging issue where it wasn’t including the specter.reporting package.
Release: 0.1.11¶
Special thanks to John Wood for his contributions to this release!
Features and bug fixes¶
- Fixed Jenkins unicode error - gh-#27
- Refactored reporting system to be plugin centric - gh-#21
- Added no-color mode for CI systems - gh-#19
- Added xUnit output reporter - gh-#10
- Added duplication filter on data-driven dataset items - gh-#6
- Added console output of parameters on a failed data-driven test - gh-#2
- Added error line indicator on tracebacks
- Added checks and x’s as pass/fail indicators
Maintainer Notes¶
These notes are directed towards helping with the maintenance of the Specter project.
Releasing a new version of Specter¶
- Start with a fresh local branch.
  git checkout -b prep_for_release
- Update and commit release notes in docs/release_notes/index.rst.
- Execute bumpversion.
  # Available parts: major, minor, patch
  bumpversion <part>
- Push up the branch and tags.
  git push origin prep_for_release --tags
- Create a PR.
- Wait for CI to pass and the PR to merge.
- Remove old packages.
  rm -r dist
- Build the sdist and wheel.
  python setup.py sdist
  python setup.py bdist_wheel
- Upload to PyPI.
  twine upload dist/*
Tested Python Versions¶
- 2.7.x
- 3.3.x
- 3.4.x
- 3.5.x
- 3.6.x
- PyPy2 5.6.0