Adds a native k-means clustering algorithm.
Based on a port of PostGIS' ST_ClusterKMeans function, this
new algorithm adds a cluster ID field to a set of input
features, identifying each feature's cluster based on the
k-means clustering approach. If non-point geometries are used
as input, the clustering is based on the centroids of the
input geometries.
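As a rough usage sketch, the algorithm could be run from the QGIS
Python console like this (the 'native:kmeansclustering' id and the
parameter names here are assumptions, not taken from this change):

    import processing

    # sketch only: the algorithm id and parameter names are assumptions
    result = processing.run(
        "native:kmeansclustering",
        {
            "INPUT": "/path/to/points.shp",
            "CLUSTERS": 5,                # number of clusters (k)
            "FIELD_NAME": "CLUSTER_ID",   # name of the new cluster id field
            "OUTPUT": "memory:clustered",
        },
    )
    clustered = result["OUTPUT"]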
Allows the full range of formatting options exposed through the
text renderer for scale bar text - e.g. buffers, shadows,
background shapes, letter spacing, etc.
Say goodbye to unreadable scale bar text!
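As an illustrative sketch only, a buffered text format might be
applied to a layout scale bar from PyQGIS along these lines (the
setTextFormat() call on the scale bar item is an assumption here):

    from qgis.core import (
        QgsLayoutItemScaleBar,
        QgsPrintLayout,
        QgsProject,
        QgsTextBufferSettings,
        QgsTextFormat,
    )

    layout = QgsPrintLayout(QgsProject.instance())
    layout.initializeDefaults()

    scalebar = QgsLayoutItemScaleBar(layout)
    layout.addLayoutItem(scalebar)

    # build a text format with a buffer (halo) around the scale bar labels
    fmt = QgsTextFormat()
    buffer = QgsTextBufferSettings()
    buffer.setEnabled(True)
    buffer.setSize(1)  # millimetres
    fmt.setBuffer(buffer)

    # setTextFormat() on the scale bar item is an assumption here
    scalebar.setTextFormat(fmt)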
Finally starting a suite of unit tests for overlay algorithms:
- overlay1 - layers that cover various basic overlay situations
- overlay2 - layers where one input has self-intersecting polygons
- overlay3 - layers where intersections return different geometry types
Aside from the performance benefits, the Python version of this
algorithm occasionally fails on Travis with odd errors. Hopefully
porting it to C++ will fix these, or at least give useful
debug information in the event of a failure.
Also add support for curved input geometries.
This algorithm swaps the X and Y coordinate values in input
geometries. It can be used to repair geometries which have
accidentally had their latitude and longitude values reversed.
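A minimal sketch of running it from the Python console (the
'native:swapxy' id and the parameter names are assumptions):

    import processing

    # sketch only: the algorithm id and parameter names are assumptions
    result = processing.run(
        "native:swapxy",
        {
            "INPUT": "/path/to/reversed_coords.shp",
            "OUTPUT": "memory:swapped",
        },
    )
    repaired = result["OUTPUT"]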
This implements a new "import geotagged photos" algorithm
for processing. It allows selection of a folder, which is
scanned for geotagged JPG files, and creates a PointZ layer
from the results, with attributes for photo path, altitude,
direction and timestamp.
The scan can optionally be recursive, and a table of photos
which could not be read or which were missing geotags can
also be created.
The algorithm automatically sets the output table to
use an external resource widget to display the linked
photos in the attribute form.
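A rough sketch of invoking it from the Python console (the
'native:importphotos' id and the parameter names are assumptions):

    import processing

    # sketch only: the algorithm id and parameter names are assumptions
    result = processing.run(
        "native:importphotos",
        {
            "FOLDER": "/path/to/photos",        # folder scanned for geotagged jpgs
            "RECURSIVE": True,                  # also scan subfolders
            "OUTPUT": "memory:photos",          # PointZ layer of photo locations
            "INVALID": "memory:invalid_photos", # photos which could not be read
        },
    )
    photos_layer = result["OUTPUT"]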
[ALGCHANGE]
Adds two new algorithms which expose QgsGeometry's methods
for segmentizing curved geometries.
"Segmentize by maximum distance":
The segmentization is performed by specifying the maximum
allowed offset distance between the original curve and the
segmentized representation.
"Segmentize by maximum angle":
The segmentization is performed by specifying the maximum
allowed radius angle between vertices on the straightened
geometry (e.g. the angle of the arc created from the
original arc center to consecutive output vertices on the
linearized geometry).
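As a rough sketch, the two algorithms might be called like this from
the Python console (the algorithm ids, parameter names and tolerance
values are assumptions):

    import processing

    # sketch only: the algorithm ids and parameter names are assumptions
    by_distance = processing.run(
        "native:segmentizebymaxdistance",
        {
            "INPUT": "/path/to/curves.gpkg",
            "DISTANCE": 0.5,  # maximum offset between the curve and its segments
            "OUTPUT": "memory:segmentized_distance",
        },
    )

    by_angle = processing.run(
        "native:segmentizebymaxangle",
        {
            "INPUT": "/path/to/curves.gpkg",
            "ANGLE": 5.0,     # maximum angle between consecutive output vertices
            "OUTPUT": "memory:segmentized_angle",
        },
    )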
Removes duplicate nodes from the geometry, wherever removing the
nodes does not result in a degenerate geometry.
By default, z values are not considered when detecting duplicate
nodes. E.g. two nodes with the same x and y coordinates but
different z values will still be considered duplicates, and one
will be removed. If useZValues is true, then the z values are
also tested, and nodes with the same x and y but different z
values will be maintained.
Note that duplicate nodes are not tested between different
parts of a multipart geometry. E.g. a multipoint geometry
with overlapping points will not be changed by this method.
The function will return true if nodes were removed, or false
if no duplicate nodes were found.
Includes unit tests and a processing algorithm which exposes
this functionality.
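A minimal PyQGIS sketch of the new method (the output shown in the
comments is indicative):

    from qgis.core import QgsGeometry

    # a line with a repeated middle vertex
    geom = QgsGeometry.fromWkt("LineString (0 0, 1 1, 1 1, 2 2)")

    # removes duplicate nodes in place and reports whether anything
    # changed; useZValues defaults to False
    changed = geom.removeDuplicateNodes()
    print(changed)       # True
    print(geom.asWkt())  # LineString (0 0, 1 1, 2 2)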
Like the main Join Attributes by Location algorithm, this algorithm
takes two layers and combines their attributes based on a spatial
criterion.
However, this algorithm calculates summaries of the attributes of
all matching features, e.g. the mean/min/max/etc.
The list of fields to summarize, and the summaries to
calculate for them, can be selected.
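A rough sketch of a call from the Python console (the algorithm id,
parameter names and the numeric predicate/summary codes are
assumptions):

    import processing

    # sketch only: the algorithm id, parameter names and the numeric
    # predicate/summary codes are assumptions
    result = processing.run(
        "qgis:joinbylocationsummary",
        {
            "INPUT": "/path/to/regions.gpkg",
            "JOIN": "/path/to/points.gpkg",
            "PREDICATE": [0],          # e.g. intersects
            "JOIN_FIELDS": ["value"],  # fields to summarize
            "SUMMARIES": [5, 6, 7],    # e.g. min/max/mean codes
            "OUTPUT": "memory:joined",
        },
    )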
Improvements:
- transparent reprojection to match hub/spoke CRS
- keep all attributes from matched hub/spoke features
- don't break after matching one hub point to spoke - instead
join ALL hub/spoke points with matching id values
This reverts commit e3d79a1fe940b5d813b5f79c51b43393d085bb16, reversing
changes made to 3f7f95ee262ea3646d61600c21faed0866bc70b0.
Reverting again, as Travis started failing after merging the PR (with
all tests passing) into the master branch
This ports the old (pre 2.0!) topocolor plugin to processing. It's
based on my beta 2.x fork (never publicly released), which implemented
a bunch of improvements to the algorithm, allowing for a minimal number
of required colors and also balanced counts of features assigned to
each individual color.
** Pretty sure this plugin was highlighted in Victor's presentation
about plugins-which-shouldn't-be-plugins-and-should-be-processing-algs
instead. It's a prime example of a plugin where the amount of code
required for gui+setup exceeded the actual "guts" of the plugin by
a huge factor, and which is much more useful when it can be
integrated into a larger processing model.
If you have a layer with an unknown CRS, this algorithm gives a list
of possible candidate CRSes which the layer could be in.
It allows users to set the area (and corresponding CRS) which they know
the layer should be located near. The algorithm then tests every CRS
in the database to see which candidate CRSes would place the layer
at that preset area.
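A simplified sketch of the approach (not the actual algorithm code;
enumerating the CRS database via validSrsIds() is an assumption here):

    from qgis.core import (
        QgsCoordinateReferenceSystem,
        QgsCoordinateTransform,
        QgsCoordinateTransformContext,
        QgsCsException,
        QgsPointXY,
        QgsRectangle,
    )

    def candidate_crses(layer_point, target_area, target_crs):
        """Return CRSes which would place layer_point inside target_area
        (a QgsRectangle expressed in target_crs). Simplified sketch only."""
        context = QgsCoordinateTransformContext()
        candidates = []
        # validSrsIds() is assumed as the way to enumerate the CRS database
        for srs_id in QgsCoordinateReferenceSystem.validSrsIds():
            crs = QgsCoordinateReferenceSystem.fromSrsId(srs_id)
            if not crs.isValid():
                continue
            try:
                transform = QgsCoordinateTransform(crs, target_crs, context)
                transformed = transform.transform(layer_point)
            except QgsCsException:
                continue
            if target_area.contains(transformed):
                candidates.append(crs)
        return candidates

    # e.g. a coordinate from the unknown layer, expected to fall in an
    # area given in EPSG:4326
    point = QgsPointXY(320000, 5810000)
    area = QgsRectangle(144.0, -38.5, 145.5, -37.0)
    matches = candidate_crses(point, area,
                              QgsCoordinateReferenceSystem("EPSG:4326"))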
It's much faster than it sounds!! (just a couple of seconds)
Sponsored by SMEC/Surbana Jurong