GDAL algorithm tests are now split into parts: general tests, raster
algorithms and vector algorithms. This makes testing faster and
easier, as there is no need to run unrelated tests.
There are two motivations for this:
- The existing test suite was getting massive and took ages to run, which
was a pain when developing. Smaller batches allow just a subset of tests
to be run, which is much faster.
- There's a random segfault on test exit which occurs on Travis. Rather
than disabling these absolutely critical tests altogether, I'm using
the split as a method of bisecting exactly which algorithm is causing it.
Fix layer ownership issues when an algorithm is run as a sub-step
of a main algorithm
Using code like:
buffered_layer = processing.run(..., context, feedback)['OUTPUT']
...
return {'OUTPUT': buffered_layer}
can cause issues if done as a sub-step of a larger processing algorithm. This
is because processing.run transfers ownership of the generated layer to the
caller (Python). When the parent algorithm returns, Processing attempts to
move ownership of the layer from the context to the caller, but the context
no longer owns that layer, resulting in a crash.
(This is by design, because processing.run has been optimised for the
most common use case, which is one-off execution of algorithms as part
of a script, not as part of another processing algorithm. Accordingly,
it returns layers and ownership to the caller, making things
easier for callers, as they do not then have to resolve the layer reference
from the context object and handle ownership themselves.)
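For that common standalone case the default behaviour is convenient. A
minimal sketch of such a one-off run (the algorithm id, parameters and
input path below are illustrative only, not taken from this commit):

import processing

# One-off execution from a script: ownership of the generated layer is
# transferred to the caller, so the result can be used directly.
result = processing.run('native:buffer',
                        {'INPUT': 'path/to/input.shp',  # illustrative path
                         'DISTANCE': 10.0,
                         'OUTPUT': 'memory:'})
buffered = result['OUTPUT']  # a layer object the caller now owns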
This commit adds a new "is_child_algorithm" argument to processing.run.
For algorithms which are executed as sub-steps of a larger algorithm,
is_child_algorithm should be set to True to avoid any ownership issues
with layers. E.g.
buffered_layer = processing.run(..., context, feedback, is_child_algorithm=True)['OUTPUT']
...
return {'OUTPUT': buffered_layer}
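Put together, a minimal sketch of a child call inside a custom
algorithm's processAlgorithm() (the class name, algorithm id and
parameters below are hypothetical, for illustration only):

import processing
from qgis.core import QgsProcessingAlgorithm

class BufferStep(QgsProcessingAlgorithm):
    # name(), displayName(), createInstance(), initAlgorithm() etc
    # omitted for brevity

    def processAlgorithm(self, parameters, context, feedback):
        # is_child_algorithm=True leaves ownership of the buffered
        # layer with the context, so no ownership transfer happens here
        buffered_layer = processing.run(
            'native:buffer',
            {'INPUT': parameters['INPUT'],
             'DISTANCE': 10.0,
             'OUTPUT': 'memory:'},
            context=context,
            feedback=feedback,
            is_child_algorithm=True)['OUTPUT']
        return {'OUTPUT': buffered_layer}

With is_child_algorithm=True the returned 'OUTPUT' value is a layer
reference which can be resolved from the context, rather than a layer
object, which is what makes it safe to hand back to the parent
algorithm.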
Allows processing models to be stored inside QGIS project files,
so that opening the project makes that model available.
Some models are so intrinsically linked to the logic inside
a particular project that they have no meaning (or are totally
broken) outside of that project (e.g. models which rely
on the presence of particular map layers, relations, etc)
This change allows these models to be stored inside that project,
avoiding cluttering up the "global" model provider with models
which make no sense outside the project, and making it easier to
distribute a single project with these models included.
Models are stored inside projects by clicking the new "embed
in project" button in the modeler dialog toolbar. Models can be
removed from a project from the model's right click menu in the
toolbox.
- Broke tests into per-class testcases
- Each method tries to test only one aspect of behavior
- Use unittest assertions for better error output (see the sketch below)
- Removed non-existent serialize functionality from tests
- Test BooleanParameter
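A sketch of the shape these tests now take (assuming a ParameterBoolean
class in processing/core/parameters.py; names and behaviour here are
illustrative, not the exact committed code):

import unittest

from processing.core.parameters import ParameterBoolean

class TestParameterBoolean(unittest.TestCase):
    """One testcase per parameter class; each method checks a single
    aspect of behavior."""

    def test_setValue(self):
        parameter = ParameterBoolean('b', 'description')
        self.assertTrue(parameter.setValue(True))
        self.assertTrue(parameter.value)

    def test_setValue_none_uses_default(self):
        # unittest assertions produce a readable message on failure
        parameter = ParameterBoolean('b', 'description', default=False)
        self.assertTrue(parameter.setValue(None))
        self.assertFalse(parameter.value)

if __name__ == '__main__':
    unittest.main()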
Conflicts:
python/plugins/processing/core/parameters.py