Well-designed Anaplan models for FP&A are simple, logical and fast. To get the best return, we must spend our 'sparsity dollars' wisely. Here's why!
We use what we call modelling cohorts to organise and structure our clients' data effectively, and we advocate a concept we refer to as sequential consolidation to move and prepare that data through their models from record to report. These ideas help us spend our sparsity dollars wisely.
We find these design concepts optimise the model infrastructure for simplicity, a keystone principle that keeps our clients' models fit for purpose.
But what about model performance and speed?
Good models operate efficiently, consuming resources where they get optimal return.
To optimise speed, we must understand model sparsity!
Sparsity is unused space: intersections of dimensions that have no relevance to your modelling. Excess sparsity consumes model resources and can cause your model to grow very large.
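To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical dimension sizes rather than figures from any real model) of how cell count and density behave: the total size of a module is simply the product of its dimension sizes, so density collapses quickly when most intersections are never used.

```python
# Illustrative only: hypothetical dimension sizes for a product/customer/month module.
from math import prod

dimension_sizes = {"Products": 1_000, "Customers": 200, "Months": 36}

total_cells = prod(dimension_sizes.values())   # every possible intersection
used_cells = 50 * 200 * 36                     # assume each customer only trades 50 of the 1,000 products
density = used_cells / total_cells

print(f"Total cells: {total_cells:,}")         # 7,200,000
print(f"Used cells:  {used_cells:,}")          # 360,000
print(f"Density:     {density:.1%}")           # 5.0% -> 95% of the module is sparse
```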
Is this always a bad thing?
Anaplan is a multidimensional modelling platform. It operates at optimal efficiency across multidimensional structures.
However, the standard response to excess sparsity is to create combination lists, which collapse these structures and rebuild them as small, flat and highly dense tables.
While this will reduce sparsity, model performance will be severely impacted!
Sparse multidimensional structures are faster to access, navigate and retrieve data from. Calculations run in parallel because each dimension is accessed simultaneously. In contrast, flat, dense data is queried in sequence, with calculations held in memory and dependent on downstream processing.
We must balance managing excess sparsity against maintaining good model performance!
However, we have found that the most successful model designs prioritise model performance and aim to retain sparse structures over more complicated combination lists.
Not only are flat structures slow, they are also challenging to maintain, requiring additional modelling along with model actions and processes to manage.
As new 'in use' combinations of live dimensions are detected, model actions must be run to add them to the ever-growing combination lists.
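As a rough conceptual sketch of that maintenance cycle (plain Python standing in for a numbered combination list and the import action that refreshes it, both hypothetical), each data load has to detect and append the newly used pairs:

```python
# Conceptual sketch only: a flat combination list maintained alongside the model.
# In Anaplan this would be a numbered list refreshed by import actions, not Python.

combination_list = {("P001", "C017"), ("P002", "C003")}   # existing Product|Customer combinations

new_transactions = [("P001", "C017"), ("P105", "C044"), ("P002", "C091")]

# Detect combinations that are now 'in use' but missing from the combination list.
missing = {combo for combo in new_transactions if combo not in combination_list}

# Each load cycle, an action must run to append them - the list only ever grows.
combination_list |= missing

print(f"Added {len(missing)} combinations; list now holds {len(combination_list)} members.")
```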
Instead, we have found subsets, time ranges and limited-use native versions to be more effective in managing sparsity. This approach allows us to maintain multidimensional live modelling without having to run multiple model actions: new list items are incorporated model-wide, and users do not have to spend excess time on model maintenance tasks.
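Again purely as an illustration, with hypothetical figures, a subset trims the cell count of a detailed module while keeping it fully multidimensional, so no combination list or refresh action is needed:

```python
# Illustrative only: hypothetical figures showing how a subset trims cell count
# while the module stays fully multidimensional.

products_total = 1_000        # full Products list
products_active = 150         # members flagged into an 'Active Products' subset
customers, months = 200, 36

full_module_cells = products_total * customers * months      # 7,200,000 cells
subset_module_cells = products_active * customers * months   # 1,080,000 cells

saving = 1 - subset_module_cells / full_module_cells
print(f"Cells with full list: {full_module_cells:,}")
print(f"Cells with subset:    {subset_module_cells:,}")
print(f"Reduction:            {saving:.0%}")                 # 85% smaller, no combination list needed
```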
Modelling cohorts and sequential consolidation are concepts we designed in response to the sparsity challenge. Modelling cohorts ensure we use only the relevant dimensions in a given process, while sequential consolidation ensures that irrelevant dimensionality is removed before data is mapped into end-user reporting.