Add option for algorithms to require all input layers to be in the same CRS
The default behaviour is to assume that algorithms are well behaved
and can handle multi-CRS inputs, but algorithms now have the option
to flag that they do not allow this and require the input CRS check.
Those algorithms should document that they require all inputs to have
a matching CRS - the Processing 3.0 default remains to assume that
algorithms can handle multi-CRS inputs.
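
A minimal sketch of how an algorithm could opt in to the strict check,
assuming the option is exposed through QgsProcessingAlgorithm::flags()
under a name along the lines of FlagRequiresMatchingCrs (check the
Flags documentation for the exact name):

    class MyAlgorithm : public QgsProcessingAlgorithm
    {
      public:
        QgsProcessingAlgorithm::Flags flags() const override
        {
          // Opting in: Processing will refuse to run the algorithm if
          // the input layers are not all in the same CRS.
          return QgsProcessingAlgorithm::flags() | FlagRequiresMatchingCrs;
        }

        // name(), displayName(), initAlgorithm(), processAlgorithm()
        // etc. omitted for brevity.
    };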
Speed up memory provider feature addition
Previously, the memory provider would automatically recalculate
the extent of the layer after new features are added by
looping through the entire set of existing features and calculating
the bounding boxes. This is very wasteful, as many code paths
add features one-by-one, so with every new feature added to
the provider every existing feature is iterated over. This caused
memory layers to slow to a crawl after many features were added.
This commit improves the logic so that IF an existing layer
extent is known, then it's updated on the fly as each individual
feature is added. Instead of looping through all features, we
just expand the existing known extent with the added feature's
bounds. If the extent isn't known, we just invalidate it
when adding/deleting/modifying features, and defer the actual
extent calculation until it's next requested.
Makes memory layers many orders of magnitude faster when adding
lots of features (e.g. when memory providers are used as temporary
outputs in Processing).
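
A hedged sketch of the extent bookkeeping described above. Member
names like mExtent (a QgsRectangle) and mFeatures (a map of feature
id to QgsFeature) are illustrative placeholders, not the real
provider internals:

    void addFeature( const QgsFeature &feature )
    {
      mFeatures.insert( feature.id(), feature );

      // If we already know the layer extent, grow it by the new
      // feature's bounding box instead of rescanning every feature.
      if ( !mExtent.isNull() && feature.hasGeometry() )
        mExtent.combineExtentWith( feature.geometry().boundingBox() );

      // Otherwise leave the extent null (unknown); it will be
      // recalculated lazily the next time extent() is requested.
    }

    void deleteFeature( QgsFeatureId id )
    {
      mFeatures.remove( id );
      // Removing (or modifying) a feature can shrink the extent, so
      // just invalidate it and defer the full recalculation.
      mExtent = QgsRectangle();
    }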
Geometries are passed by const reference and returned by value.
This makes the API easier to use and reduces the risk of ownership
problems.
The overhead is minimal due to implicit sharing.
Fix https://github.com/qgis/qgis3.0_api/issues/68
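
For illustration, a hypothetical method following this convention
(the class and method names are made up, not an actual QGIS API):

    class FeatureProcessor
    {
      public:
        // Input taken by const reference: no copy at the call site and
        // no accidental transfer of ownership. Result returned by value:
        // QgsGeometry is implicitly shared, so the "copy" only bumps a
        // reference count rather than duplicating the geometry data.
        QgsGeometry buffered( const QgsGeometry &input, double distance ) const
        {
          return input.buffer( distance, 8 );
        }
    };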
This adds a lot of flexibility to algorithms, as it makes output
sinks truly optional. For instance, the various "Extract by..."
algorithms could add a new optional sink for features which
'fail' the extraction criteria. This effectively allows these
algorithms to become feature 'routers', directing features onto
other parts of a model depending on whether they pass or fail
the test.
But in this situation we don't always care about these failing
features, and we don't want to force them to always be fetched
from the provider. By making the outputs truly optional,
the algorithm can tweak its logic to either fetch all features
and send them to the correct output, or only fetch
matching features from the provider in the first place (a big
speed boost).
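
A hedged sketch of what this could look like inside an algorithm's
processAlgorithm(), using an extent-based test purely for
illustration. The OUTPUT/FAILED parameter names and the criterion
are assumptions, and parameters, context, source and extent are
taken from the surrounding algorithm code:

    QString matchingDest, failingDest;
    std::unique_ptr< QgsFeatureSink > matchingSink( parameterAsSink( parameters, QStringLiteral( "OUTPUT" ),
        context, matchingDest, source->fields(), source->wkbType(), source->sourceCrs() ) );
    // Optional sink - null if the user left this output unset.
    std::unique_ptr< QgsFeatureSink > failingSink( parameterAsSink( parameters, QStringLiteral( "FAILED" ),
        context, failingDest, source->fields(), source->wkbType(), source->sourceCrs() ) );

    QgsFeatureRequest request;
    if ( !failingSink )
    {
      // Nobody wants the failing features, so only fetch candidate
      // matches from the provider in the first place (the speed boost).
      request.setFilterRect( extent );
    }

    QgsFeature f;
    QgsFeatureIterator it = source->getFeatures( request );
    while ( it.nextFeature( f ) )
    {
      if ( f.hasGeometry() && f.geometry().intersects( extent ) )
        matchingSink->addFeature( f, QgsFeatureSink::FastInsert );
      else if ( failingSink )
        failingSink->addFeature( f, QgsFeatureSink::FastInsert );
    }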