Contributing¶
This page provides a guide for developers wishing to contribute to Sphinx-Needs.
Bugs, Features and PRs¶
For bug reports and well-described technical feature requests, please use our issue tracker:
https://github.com/useblocks/sphinx-needs/issues
For feature ideas and questions, please use our discussion board:
https://github.com/useblocks/sphinx-needs/discussions
If you have already created a PR, feel free to submit it. Our CI workflow will run the tests and code-style checks, and a maintainer will review the changes before we can merge them. Your PR should conform to the following rules:
A meaningful description or link that describes the change
The changed code (for sure :) )
Test cases for the change (important!)
Updated documentation, if behavior changes or new options/directives are introduced
An update of docs/changelog.rst
If this is your first PR, feel free to add your name to the AUTHORS file.
Installing Dependencies¶
To develop Sphinx-Needs, install it with its development extras into an existing Python environment using pip:
pip install sphinx-needs[test,benchmark,docs]
or using Poetry to install the dependencies into an isolated environment:
# Install project dependencies
poetry install --all-extras
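If you are working from a clone of the repository, you may prefer an editable install for the pip variant (an assumption about your workflow, not a documented requirement):
pip install -e .[test,benchmark,docs]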
To run the formatting and linting suite, pre-commit is used:
# Install the pre-commit hooks
pre-commit install
Build docs¶
To build the Sphinx-Needs documentation stored under /docs, run:
# Build HTML pages
make docs-html
or
# Build PDF pages
make docs-pdf
Both targets always perform a clean build (they call make clean before the build).
If you want to avoid this, run the related Sphinx commands directly under /docs (e.g. make docs).
Check links in docs¶
To check if all used links in the documentation are still valid, run:
make docs-linkcheck
Running Tests¶
You can either run the tests directly using pytest in an existing environment:
pytest tests/
Or you can use the provided Makefile:
make test
Note that some tests use syrupy to perform snapshot testing. These snapshots can be updated by running:
pytest tests/ --snapshot-update
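For illustration, a syrupy-based test simply compares a value against the snapshot fixture; the test below is a hypothetical sketch, not a test from the actual suite:
# Hypothetical sketch of a syrupy snapshot test (not from the real test suite)
def test_example_output(snapshot):
    result = {"id": "REQ_1", "title": "Example need"}
    # syrupy stores this value when run with --snapshot-update
    # and compares against the stored snapshot on normal runs
    assert result == snapshot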
Hint
Please make sure the dependencies of the official documentation are also installed:
pip install -r docs/requirements.txt
Running JS Test Cases with PyTest¶
Set up Cypress Locally
Install Node.js on your computer and ensure it can be accessed from the command line.
Install Cypress using the npm package manager by running npm install cypress. See the Cypress documentation for more information on how to install Cypress.
Verify that Cypress is installed correctly and is executable by running npx cypress verify. See the Cypress documentation for more information about the Cypress command line.
If everything was successful, you can use Cypress.
Enable Cypress Tests in Python Test Files
Under the js_test folder, you can save your Cypress JS test files (file names should end with *.cy.js). For each Cypress JS test file, you will need to write the Cypress JS test cases in the file. You can read more in the Cypress docs. You can also check the tests/js_test/sn-collapse-button.cy.js file as a reference.
In your Python test files, you must mark every JS-related test case with the jstest marker and include the spec_pattern key-value pair as part of the test_app fixture parameter. You also need to pass the test_server fixture to your test function so that it can use the automated HTTP test server. For example, your test case could look like this:
# tests/test_sn_collapse_button.py
import pytest


@pytest.mark.jstest
@pytest.mark.parametrize(
    "test_app",
    [
        {
            "buildername": "html",
            "srcdir": "doc_test/variant_doc",
            "tags": ["tag_a"],
            "spec_pattern": "js_test/js-test-sn-collapse-button.cy.js",
        }
    ],
    indirect=True,
)
def test_collapse_button_in_docs(test_app, test_server):
    ...
Note
The spec_pattern key is required so that Cypress can locate your test files or folder. See the Cypress documentation for more information on how to set the spec_pattern.
After you have set the spec_pattern key-value pair as part of the test_app fixture parameter, you can call app.test_js() in your Python test case to run a JS test for the spec_pattern you provided. For example, you can use app.test_js() like below:
# tests/test_sn_collapse_button.py
import pytest


@pytest.mark.jstest
@pytest.mark.parametrize(
    "test_app",
    [
        {
            "buildername": "html",
            "srcdir": "doc_test/variant_doc",
            "tags": ["tag_a"],
            "spec_pattern": "js_test/js-test-sn-collapse-button.cy.js",
        }
    ],
    indirect=True,
)
def test_collapse_button_in_docs(test_app, test_server):
    """Check if the Sphinx-Needs collapse button works in the provided documentation source."""
    app = test_app
    app.build()

    # Call `app.test_js()` to run the JS test for a particular spec_pattern
    js_test_result = app.test_js()

    # Check the return code and stdout
    assert js_test_result["returncode"] == 0
    assert "All specs passed!" in js_test_result["stdout"].decode("utf-8")
Note
app.test_js() returns a dictionary object containing the returncode, stdout, and stderr. Example:
return {
    "returncode": 0,
    "stdout": "Test passed string",
    "stderr": "Errors encountered",
}
You can run the make test-js command to check all JS test cases.
Note
The http_server process invoked by the make test-js command may not terminate properly in some instances.
Please check your system's monitoring app and end the process manually if it is not terminated automatically.
Linting & Formatting¶
Sphinx-Needs uses pre-commit to run formatting and checking of source code. This can be run directly using:
pre-commit run --all-files
or via the provided Makefile:
make lint
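During development it can be faster to check only the files you touched; pre-commit supports this via its --files option (the path below is just a placeholder):
pre-commit run --files sphinx_needs/needs.py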
Benchmarks¶
Sphinx-Needs' own documentation is used to create a benchmark for each PR. If the runtime is more than 10% longer than for previous runs, the benchmark test fails.
Benchmark test cases are available under tests/benchmarks and can be executed locally via make benchmark.
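The time benchmarks appear to use the pytest-benchmark fixture (suggested by the --benchmark-json option in the noxfile further below); a minimal sketch of such a test, with a placeholder workload instead of the real documentation build, could look like this:
# Hypothetical sketch of a pytest-benchmark based time benchmark;
# the workload below is a placeholder, not the real benchmark from tests/benchmarks.
def test_placeholder_time(benchmark):
    def workload():
        return sum(range(10_000))

    # benchmark() calls the workload repeatedly and records timing statistics
    result = benchmark(workload)
    assert result == sum(range(10_000))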
The results for each PR/commit get added to a chart, which is available under http://useblocks.com/sphinx-needs/bench/index.html.
The benchmark data is stored on the benchmarks branch, which is also used as the source for GitHub Pages.
Running Test Matrix¶
This project provides a test matrix for running the tests across a range of Python and Sphinx versions. This is used primarily for continuous integration.
Nox is used as the test runner, and running the matrix tests requires it as an additional system-wide dependency. You will also need multiple Python versions available; you can manage these using Pyenv.
You can run the test matrix by using the nox command:
nox
or using the provided Makefile:
make test-matrix
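Nox creates one session per Python/Sphinx combination. You can list the available sessions with nox --list and run a single one with nox --session; the exact session names depend on the versions defined in the noxfile below, e.g.:
nox --session "tests-3.11(sphinx='7.0')"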
For a full list of available options, refer to the Nox documentation and the local noxfile.
Our noxfile.py
import nox
from nox_poetry import session

# The versions here must be in sync with the github-workflows,
# or at least support all versions from there.
# This list can contain more versions than are used by the github workflows,
# to support custom local tests.
PYTHON_VERSIONS = ["3.8", "3.9", "3.10", "3.11"]
SPHINX_VERSIONS = ["5.0", "6.0", "7.0"]


@session(python=PYTHON_VERSIONS)
@nox.parametrize("sphinx", SPHINX_VERSIONS)
def tests(session, sphinx):
    session.install(".[test]")
    session.run("pip", "install", f"sphinx~={sphinx}", silent=True)
    session.run("echo", "TEST FINAL PACKAGE LIST")
    session.run("pip", "freeze")
    posargs = session.posargs or ["tests"]
    session.run("pytest", "--ignore", "tests/benchmarks", *posargs, external=True)


@session(python=PYTHON_VERSIONS)
def tests_no_mpl(session):
    session.install(".[test]")
    session.run("pip", "uninstall", "-y", "matplotlib", "numpy", silent=True)
    session.run("echo", "TEST FINAL PACKAGE LIST")
    session.run("pip", "freeze")
    session.run("pytest", "tests/no_mpl_tests.py", *session.posargs, external=True)


@session(python="3.10")
def benchmark_time(session):
    session.install(".[test,benchmark,docs]")
    session.run(
        "pytest",
        "tests/benchmarks",
        "-k",
        "_time",
        "--benchmark-json",
        "output.json",
        *session.posargs,
        external=True,
    )


@session(python="3.10")
def benchmark_memory(session):
    session.install(".[test,benchmark,docs]")
    session.run(
        "pytest",
        "tests/benchmarks",
        "-k",
        "_memory",
        "--benchmark-json",
        "output.json",
        *session.posargs,
        external=True,
    )
    session.run("memray", "flamegraph", "-o", "mem_out.html", "mem_out.bin")


@session(python="3.8")
def pre_commit(session):
    session.run_always("poetry", "install", external=True)
    session.install("pre-commit")
    session.run("pre-commit", "run", "--all-files", *session.posargs, external=True)


@session(python="3.11")
def linkcheck(session):
    session.install(".[docs]")
    with session.chdir("docs"):
        session.run(
            "sphinx-build",
            "-b",
            "linkcheck",
            ".",
            "_build/linkcheck",
            *session.posargs,
            external=True,
        )


@session(python="3.11")
def docs(session):
    session.install(".[docs,theme-im]")
    with session.chdir("docs"):
        session.run(
            "sphinx-build",
            ".",
            "_build",
            *session.posargs,
            external=True,
            env={"DOCS_THEME": "sphinx_immaterial"},
        )
Running Commands¶
See the Poetry documentation for a list of commands.
To run custom commands inside the isolated environment, prefix them with poetry run (i.e. poetry run <command>), for example poetry run make docs-html.
List make targets¶
Sphinx-Needs uses make to invoke most development-related actions.
Use make list to get a list of available targets:
benchmark-memory
benchmark-time
docs-html
docs-html-fast
docs-linkcheck
docs-pdf
format
lint
needs
test
test-js
test-matrix
test-short
Publishing a new release¶
A release pipeline is installed for the CI. It is triggered automatically when a tag is created and pushed.
The tag must follow the format [0-9].[0-9]+.[0-9] (e.g. 1.2.3), otherwise the release jobs won't trigger.
The release jobs build the source and wheel distributions and try to upload them to test.pypi.org and pypi.org.
Structure of the extension’s logic¶
The following is an outline of the build events which this extension adds to the Sphinx build process (a simplified sketch of how such callbacks are registered follows the list):
After configuration has been initialised (config-inited event):
- Register additional directives, directive options and warnings (load_config)
- Check configuration consistency (check_configuration)

Before reading changed documents (env-before-read-docs event):
- Initialise BuildEnvironment variables (prepare_env)
- Register services (prepare_env)
- Register functions (prepare_env)
- Initialise default extra options (prepare_env)
- Initialise extra link types (prepare_env)
- Ensure default configurations are set (prepare_env)
- Start process timing, if enabled (prepare_env)
- Load external needs (load_external_needs)

For all removed and changed documents (env-purge-doc event):
- Remove all cached need items that originate from the document (purge_needs)

For changed documents (doctree-read event, priority 880 of transforms):
- Determine and add data on parent sections and needs (analyse_need_locations)
- Remove Need nodes marked as hidden (analyse_need_locations)

When building in parallel mode (env-merge-info event), merge BuildEnvironment data (merge_data).

After all documents have been read and transformed (env-updated event; NOTE: these steps are skipped for the needs builder):
- Copy vendored JS libraries (with CSS) to the build folder (install_lib_static_files)
- Generate the permalink file (install_permalink_file)
- Copy vendored CSS files to the build folder (install_styles_static_files)
Note: the BuildEnvironment is cached at this point, but only if any documents were updated.

For all changed documents, or their dependants (doctree-resolved event):
- Replace all Needextract nodes with a list of the collected Need nodes (process_creator)
- Remove all Need nodes, if needs_include_needs is True (process_need_nodes)
- Call dynamic functions, set as values on the need data items, and replace them with their return values (process_need_nodes -> resolve_dynamic_values)
- Replace needs data variant values (process_need_nodes -> resolve_variants_options)
- Check for dead links (process_need_nodes -> check_links)
- Generate back links (process_need_nodes -> create_back_links)
- Process constraints, for each Need node (process_need_nodes -> process_constraints)
- Perform all modifications on need data items, due to Needextend nodes (process_need_nodes -> extend_needs_data)
- Format each Need node to give the desired visual output (process_need_nodes -> print_need_nodes)
- Process all other need-specific nodes, replacing them with the desired visual output (process_creator)

At the end of the build (build-finished event):
- Call all user-defined need data checks, a.k.a. needs_warnings (process_warnings)
- Write needs.json to the output folder, if needs_build_json = True (build_needs_json)
- Write a needs.json per ID to the output folder, if needs_build_json_per_id = True (build_needs_id_json)
- Write all UML files to the output folder, if needs_build_needumls = True (build_needumls_pumls)
- Print process timing, if needs_debug_measurement = True (process_timing)
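These callbacks are registered via the standard Sphinx extension mechanism, app.connect(). The sketch below is a simplified, self-contained illustration using the callback names from the list above; it is not the extension's actual setup code, which registers far more callbacks, directives and configuration values.
# Simplified sketch only; sphinx_needs' real setup() wires many more
# callbacks and registrations than shown here.
from sphinx.application import Sphinx


def load_config(app, config):
    ...  # register additional directives, options and warnings


def prepare_env(app, env, docnames):
    ...  # initialise BuildEnvironment data, services and functions


def purge_needs(app, env, docname):
    ...  # drop cached need items originating from the document


def process_need_nodes(app, doctree, fromdocname):
    ...  # resolve, check and render Need nodes


def process_warnings(app, exception):
    ...  # run user-defined needs_warnings checks


def setup(app: Sphinx):
    app.connect("config-inited", load_config)
    app.connect("env-before-read-docs", prepare_env)
    app.connect("env-purge-doc", purge_needs)
    app.connect("doctree-resolved", process_need_nodes)
    app.connect("build-finished", process_warnings)
    return {"parallel_read_safe": True, "parallel_write_safe": True}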
Maintainers¶
Daniel Woste <daniel@useblocks.com>
Contributors¶
Marco Heinemann <marco@useblocks.com>
Trevor Lovett <trevor.lovett@gmail.com>
Magnus Lööf <magnus.loof@gmail.com>
Harri Kaimio
Anders Thuné
Daniel Eades <danieleades@hotmail.com>
Philip Partsch <philip.partsch@googlemail.com>
David Le Nir <david.lenir.e@thalesdigital.io>
Baran Barış Yıldızlı <arisbbyil@gmail.com>
Roberto Rötting <roberto.roetting@gmail.com>
Nirmal Sasidharan <nirmal.sasidharan@de.bosch.com>
Jacob Allen <jacob.allen@etas.com>
Jörg Kreuzberger <j.kreuzberger@procitec.de>
Duodu Randy <duodurandy19@gmail.com>
Christian Wappler <chri.wapp@gmail.com>
Chris Sewell <chrisj_sewell@hotmail.com>
Simon Leiner <simon@leiner.me>