Measures
Measures are functions that take a single reference to an element of a neuron, or a set of such references, and compute some output value: the measure. We distinguish between single and set measures, depending on whether they take one or several references as input. Single element references (e.g. a single Node) are plain C++ references, whereas sets are `std::vector` of `std::reference_wrapper`.
All classes, functions, etc. related to measures are defined in the `neurostr::measure` namespace. You can include their headers individually or pull them all in by adding the header file `measure.h`.
Measures are intended to be combined with Selectors and Aggregators to create complex and meaningful measures with little coding effort. You can implement your own measures and, as long as they have the adequate signature, use them like any other prebuilt measure.
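As a minimal sketch of what "the adequate signature" means: a single-element measure is any callable that takes a const reference to the element and returns a value, while a set measure takes a vector of reference wrappers. The `radius()` accessor below is assumed purely for illustration and may differ from the actual Node interface:

```cpp
#include <vector>
#include <functional>
// Include the NeuroSTR core headers that define neurostr::Node (path may vary).

// Single-element measure: const reference in, value out.
// Node::radius() is assumed here for illustration only.
auto node_diameter = [](const neurostr::Node& n) -> float {
  return 2.0f * n.radius();
};

// Set measure: vector of reference wrappers in, value out
// (here, simply the number of nodes in the set).
auto set_size = [](const std::vector<std::reference_wrapper<neurostr::Node>>& nodes) {
  return nodes.size();
};
```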
Prebuilt measures
NeuroSTR includes a large library of prebuilt measures, some new, others already presented in the scientific literature or implemented in existing neuroanatomy tools.
Prebuilt measures are organized in the same way as Selectors, by their input element type: all measure functions that take either a single node or a node set as input fall into the Node category. You can find more details about each measure by clicking on its name.
You might notice that there are very few Neurite and Neuron measures, which may seem odd, but it is on purpose. Since we can use Selectors and Aggregators along with measures to create new measures, we focus on defining "low level" measures that can be used to build "high level" measures. Check the Create a measure section to see a simple example of this.
Node Measures
- X,Y,Z component
- Radius, Diameter
- Centrifugal order
- Distance to parent
- Compartment volume
- Compartment surface
- Compartment section area
- Local Hillman taper rate
- Local Burker taper rate
- Distance to root
- Distance to soma
- Path length to root
- Number of descendants
- Non-aligned minimum box volume
- Vector to parent
- Local bifurcation angle
- Local elongation angle
- Extreme angle
- Local orientation
- In terminal branch
- Distance to closest segment
Branch Measures
- Hillman taper rate
- Burker taper rate
- Tortuosity
- Node count
- Branch index
- Centrifugal order
- Child diameter ratio
- Parent-Child diameter ratio
- Partition asymmetry
- Rall power fit
- Pk
- Hillman threshold
- Local bifurcation angle
- Remote bifurcation angle
- Local tilt angle
- Remote tilt angle
- Local plane vector
- Remote plane vector
- Local torque angle
- Remote torque angle
- Length
- Intersects
Neurite Measures
Neuron Measures
Generic Measures
L-measure Measures
- Soma surface
- Number of stems
- Number of bifurcations
- Number of branches
- Number of terminal tips
- Width, Height and Depth
- Diameter
- Diameter power
- Compartment length
- Branch length
- Compartment surface
- Branch surface
- Compartment section area
- Compartment volume
- Branch volume
- Distance to root
- Path length to root
- Branch centrifugal order
- Node terminal degree
- Branch terminal degree
- Taper 1: Burker taper rate
- Taper 2: Hillman taper rate
- Contraction
- Fragmentation
- Partition asymmetry
- Rall's power
- Pk fitted value
- Pk classic and squared
- Local bifurcation amplitude
- Remote bifurcation amplitude
- Local bifurcation tilt
- Remote bifurcation tilt
- Local bifurcation torque
- Remote bifurcation torque
- Terminal bifurcation diameter
- Hillman threshold
- Fractal dimension
Aggregators
Aggregators are simply functions that compute an aggregate value (e.g. the mean) from a set of measures. They are an easy way to get summary values for certain "low level" measures at "high level" elements (Neuron and Neurite). For example, we might want to compute the average branch length in a neurite or neuron; that can be done easily with aggregator functions. Of course, aggregators are simple wrappers over well-known standard library functions, but they allow us to detect coding errors at compile time instead of causing runtime errors.
These are the aggregator functions and factory functions included in the library:
Sum
What it does: Adds up all the values in the given set, starting from the given zero value.
Parameters: zero - Starting value
Function signature: (const detail::iterator_type<U>& b, const detail::iterator_type<U>& e) -> T
Factory function signature: sum_aggr_factory(T zero)
Average and Standard deviation
What it does: Computes the average and optionally the standard deviation of the given set of values.
Parameters: zero - Zero value
Mean only
Factory function signature: avg_aggr_factory(T zero)
Function signature: (const detail::iterator_type<U>& b, const detail::iterator_type<U>& e) -> T
With standard deviation
Factory function signature: mean_sd_factory(T zero)
Function signature: (const detail::iterator_type<U>& b, const detail::iterator_type<U>& e) -> std::array<T,2>
Maximum and Minimum
What it does: Returns the maximum/minimum value in the given set
Maximum
Function signature: max = [](const detail::iterator_type<U>& b, const detail::iterator_type<U>& e) -> T
Minimum
Function signature: min = [](const detail::iterator_type<U>& b, const detail::iterator_type<U>& e) -> T
Median
What it does: Computes the median of the given set
Function signature: median = [](const detail::iterator_type<U>& b, const detail::iterator_type<U>& e) -> T
Range
What it does: Computes the difference between the maximum and minimum values in the set
Function signature: range_length = [](const detail::iterator_type<U>& b, const detail::iterator_type<U>& e) -> T
Summary
What it does: Computes sum, min, max, median, mean and standard deviation for the given set of values. It returns them in an aggr_pack structure that contains those fields.
Parameters: zero - Zero value
Function signature: (const detail::iterator_type<U>& b, const detail::iterator_type<U>& e) -> aggr_pack<U,T>
Factory function signature: all_aggr_factory(T zero)
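As a brief, hedged sketch of how these factories are typically used: aggregators are created through their factory functions and then passed to operations such as measureEachAggregate (described in the next section). The template arguments below mirror the `avg_aggr_factory<float,float>(0)` call used later in this document; for the other factories they are an assumption, and namespace qualifiers are omitted as in the rest of the document:

```cpp
// Build a few aggregators over float values (the <U,T> arguments follow the
// avg_aggr_factory<float,float>(0) pattern shown later; adjust to your types).
auto total   = sum_aggr_factory<float, float>(0);   // sum of all values
auto average = avg_aggr_factory<float, float>(0);   // mean value
auto summary = all_aggr_factory<float, float>(0);   // sum, min, max, median, mean, sd

// These are then combined with a measure, e.g.:
//   measureEachAggregate(branch_length, total);
```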
Operations
Measure each
What it does: Transforms a measure that takes a single element into one that takes an element set. As a consequence, the resulting measure outputs a vector of values.
Restrictions: The given measure f must take a single element as input
Function signature: measureEach(const Fn& f)
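As a sketch, using the `branch_length` measure that appears later in this document (namespace qualifiers omitted):

```cpp
// branch_length measures a single Branch; measureEach lifts it to a measure
// over a set of branches, returning one length per branch in a vector.
auto branch_lengths = measureEach(branch_length);
```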
Measure each and aggregate
What it does: Transforms a measure that takes a single element into one that takes an element set, and then applies an aggregator function over the resulting set of values.
Restrictions:
- The given measure f must take a single element as input
- The aggregation function aggr must be compatible with the measure output type
Function signature: measureEachAggregate(const Fn& f, const Aggr& aggr)
Selector composition
What it does: Applies the given measure to the selector output, creating a new measure with a different input (namely, the selector's input) but the same output.
Restrictions:
- The given measure's input signature must match the selector's output
Function signature: selectorMeasureCompose(const S& selector, const M& measure)
Measure tuple
What it does: Applies several measures to the same element(s) and returns the result in a tuple. This saves computation time if the element selection procedure is time consuming.
Restrictions:
- The given measures must have the same input signature
Function signature: createMeasureTuple(const Measures&... measures)
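As a sketch, assuming a second single-branch measure exists (here called `branch_tortuosity` purely for illustration; the actual identifier in the library may differ):

```cpp
// Both measures take a single Branch, so they can be evaluated together;
// the resulting measure returns one tuple of values per branch.
// branch_tortuosity is a placeholder name used only for this sketch.
auto length_and_tortuosity = createMeasureTuple(branch_length, branch_tortuosity);
```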
Create a measure
This section will show you how to define new measure functions using already existing selectors and measures with little code. Although the selector/measure structure is designed to ease the process of creating new measures, it may seem a bit... hard to use at first. We will introduce the concept with a very simple measure: the average branch length in a neuron.
In addition to the examples shown below, you can find more examples of selector/measure usage in the source code of the feature extractor utilities and in the L-measure definitions.
Neuron average branch length
First of all, we should define the measure we want to create. Specifically, we need to specify its input, its output, and what it measures. In this example, our measure is pretty simple:
| Input | Output | Description |
| --- | --- | --- |
| Single neuron | Float number | Average branch length in the neuron |
Then, we should find the selectors and measures that will build up our new measure.
- To measure the Branch length, the `branch_length` measure is the obvious choice.
- Since we want to find the average value for every branch in the neuron, we need to select all branches in a neuron; for that we have the `neuron_branch_selector`.
Let's stop here to plan our next step. We want to combine the `branch_length` measure with the `neuron_branch_selector` in some specific way to create the measure. Since the selector already has the input that we want (a single neuron), we need to transform the measure so that it takes a set of branches and outputs the average length, so it can be combined with the selector. In plain words: we want to measure the length of every Branch in a set and average the results. If you go back to the measure operations section, you will find the `measureEachAggregate` function, which does exactly what we need:
- The f argument is the `branch_length` measure.
- The aggr argument is the aggregate function `avg_aggr_factory` for floats, that is, `avg_aggr_factory<float,float>(0)`.
Then, our intermediate measure will look like this:
```cpp
measureEachAggregate(branch_length, avg_aggr_factory<float,float>(0));
```
Now we can use the `selectorMeasureCompose` function to join the neuron branch selector with our intermediate measure. This outputs the measure that we defined at the beginning:
```cpp
selectorMeasureCompose(
    neuron_branch_selector,
    measureEachAggregate(branch_length, avg_aggr_factory<float,float>(0)));
```
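The object returned by `selectorMeasureCompose` is itself a measure that takes a single neuron. As a minimal sketch, assuming a `neurostr::Neuron` object named `neuron` has already been loaded elsewhere, it can be stored and evaluated directly:

```cpp
// Store the composed measure and apply it to a neuron.
// `neuron` is assumed to be a neurostr::Neuron loaded elsewhere.
auto avg_branch_length = selectorMeasureCompose(
    neuron_branch_selector,
    measureEachAggregate(branch_length, avg_aggr_factory<float,float>(0)));

float value = avg_branch_length(neuron);
```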