This is another solution that gives a result closer to the one requested, but with considerably more code than Gordon's.
Introduction
I agree with Gordon that there is no sensible way to achieve what you want directly with crossfilter. crossfilter is row-oriented, and you want to create multiple rows based on columns. So the only way is to take some kind of "fake" step. And a "fake" step implicitly means that the result will not be updated when the original data source changes. I see no way around this, since crossfilter hides its internals well enough (e.g. filterListeners, dataListeners and removeDataListeners).
However, dc is implemented in such a way that, by default, all charts are redrawn after various events (because they all live in the same global chart group). Because of this, "fake" objects, if implemented correctly, can also be recalculated based on the updated data.
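As a minimal illustration of the mechanism this relies on (not code from this solution): filtering any chart triggers a redraw of its whole chart group, and the same redraw can be requested explicitly:

```js
// All charts created without an explicit chart group share the default
// group. Filtering any one of them makes dc redraw the whole group, which
// is the hook that lets correctly built "fake" objects refresh too.
dc.redrawAll();   // the group-wide redraw dc performs after a filter change
```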
So my code contains two implementations for min / max:
- fast(er), but unsafe if you do any additional filtering
- slow(er), but safe if you want to do additional filtering

Note that if you use the fast but unsafe implementation and then perform additional filtering, you will get exceptions, and other things may break as well.
Code
All the code is available at https://jsfiddle.net/4kcu2ut1/1/. Let me split it into logical blocks and go over them one by one.
First come the helper methods and objects. Each Op object essentially contains the methods that need to be passed to reduce, plus an optional getOutput for when the accumulator holds more data than just the final result (e.g. for avgOp or the "safe" min/max operations).
```js
var minOpFast = {
    add: function (acc, el) { return Math.min(acc, el); },
    remove: function (acc, el) { throw new Error("Not supported"); },
    initial: function () { return Number.MAX_VALUE; }
};

var maxOpFast = {
    add: function (acc, el) { return Math.max(acc, el); },
    remove: function (acc, el) { throw new Error("Not supported"); },
    // -Number.MAX_VALUE, not Number.MIN_VALUE: the latter is the smallest
    // *positive* number and would break on negative inputs.
    initial: function () { return -Number.MAX_VALUE; }
};

// Lower-bound binary search on a sorted array: returns the index of
// target, or the position where it would have to be inserted.
var binarySearch = function (arr, target) {
    var lo = 0;
    var hi = arr.length;
    while (lo < hi) {
        var mid = (lo + hi) >>> 1;
        if (arr[mid] < target) {
            lo = mid + 1;
        } else {
            hi = mid;
        }
    }
    return lo;
};
```
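The "safe" counterparts live in the fiddle; as an illustration of the idea (a sketch, not the fiddle's exact code), a filter-safe min can keep a sorted array of all current values, use binarySearch for insertion and removal, and read the result off the front via getOutput:

```js
// Sketch of a filter-safe min op: the accumulator is a sorted array of
// all current values, so remove can be supported (name is illustrative).
var minOpSafe = {
    add: function (acc, el) {
        acc.splice(binarySearch(acc, el), 0, el); // insert, keeping order
        return acc;
    },
    remove: function (acc, el) {
        acc.splice(binarySearch(acc, el), 1);     // drop one occurrence
        return acc;
    },
    initial: function () { return []; },
    getOutput: function (acc) { return acc[0]; }  // smallest current value
};
```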
Then we prepare the initial data and specify the transformation we want. aggregates is the list of operations from the previous step, each additionally decorated with a key for storing its intermediate data in the composite accumulator (the key must be unique) and a label to display in the output. srcKeys contains the list of property names (all of which must hold values of the same kind) that will be processed by each operation from the aggregates list; a sketch of both definitions follows the data-preparation code below.
```js
var myCSV = [
    {"shift": "1",  "date": "01/01/2016/08/00/00", "car": "178", "truck": "255", "bike": "317", "moto": "237"},
    {"shift": "2",  "date": "01/01/2016/17/00/00", "car": "125", "truck": "189", "bike": "445", "moto": "273"},
    {"shift": "3",  "date": "02/01/2016/08/00/00", "car": "140", "truck": "219", "bike": "328", "moto": "412"},
    {"shift": "4",  "date": "02/01/2016/17/00/00", "car": "222", "truck": "290", "bike": "432", "moto": "378"},
    {"shift": "5",  "date": "03/01/2016/08/00/00", "car": "200", "truck": "250", "bike": "420", "moto": "319"},
    {"shift": "6",  "date": "03/01/2016/17/00/00", "car": "230", "truck": "220", "bike": "310", "moto": "413"},
    {"shift": "7",  "date": "04/01/2016/08/00/00", "car": "155", "truck": "177", "bike": "377", "moto": "180"},
    {"shift": "8",  "date": "04/01/2016/17/00/00", "car": "179", "truck": "203", "bike": "405", "moto": "222"},
    {"shift": "9",  "date": "05/01/2016/08/00/00", "car": "208", "truck": "185", "bike": "360", "moto": "195"},
    {"shift": "10", "date": "05/01/2016/17/00/00", "car": "150", "truck": "290", "bike": "315", "moto": "280"},
    {"shift": "11", "date": "06/01/2016/08/00/00", "car": "200", "truck": "220", "bike": "350", "moto": "205"},
    {"shift": "12", "date": "06/01/2016/17/00/00", "car": "230", "truck": "170", "bike": "390", "moto": "400"}
];

var dateFormat = d3.time.format("%d/%m/%Y/%H/%M/%S");

// parse dates and coerce the numeric columns
myCSV.forEach(function (d) {
    d.date = dateFormat.parse(d.date);
    d.car = +d.car;
    d.bike = +d.bike;
    d.moto = +d.moto;
    d.truck = +d.truck;
    d.shift = +d.shift;
});
```
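The aggregates and srcKeys referenced above are used later but not shown in this excerpt; based on the description, they would look roughly like this (a hypothetical reconstruction, the exact definitions are in the fiddle):

```js
// Each entry pairs an op with a unique accumulator key and a display
// label. Swap in the "safe" variants if additional filtering is needed.
var aggregates = [
    {key: 'min', label: 'Min', op: minOpFast},
    {key: 'max', label: 'Max', op: maxOpFast}
];

// The numeric source columns every aggregate is computed over.
var srcKeys = ['car', 'truck', 'bike', 'moto'];
```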
And now for the magic. buildTransposedAggregatesDimension is where all the hard work happens. Essentially, it takes two steps:
1. Use groupAll to get the aggregated data for every combination in the cross product of all operations and all keys.
2. Split the grouped mega object into an array that can serve as the data source for another crossfilter.
Step 2 is where my "fake" lives. It seems to me that it is less "fake" than the one in Gordon's solution, since it does not rely on any internal details of crossfilter or dc (see the method at the bottom of Gordon's solution).
The split in step 2 is also where the data actually gets transposed to match your requirements. Obviously, the code can easily be modified not to do this and to produce results shaped the same way as in Gordon's solution.
Please also note that it is important that this additional step performs no extra calculations and only converts the already-computed values into the appropriate format. That is essential for updates after filtering to work, because this way a table tied to the result of buildTransposedAggregatesDimension is still effectively bound to the crossfilter over the original data.
```js
var buildTransposedAggregatesDimension = function (facts, keysList, aggsList) {
    // ... (full body in the fiddle linked above)
};
```
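The body is too long to reproduce in full here, but its shape follows directly from the two steps above. The following is a sketch of how such a function could be written, not the fiddle's verbatim code: a single groupAll whose accumulator has one slot per (aggregate, key) pair, followed by a reshaping into one row per aggregate whose getOutput lazily reads the current grouped value, so no new computation happens in the fake step:

```js
var buildTransposedAggregatesDimensionSketch = function (facts, keysList, aggsList) {
    // Step 1: aggregate every (op, key) combination in one groupAll.
    var grouped = facts.groupAll().reduce(
        function (acc, el) {            // on add
            aggsList.forEach(function (agg) {
                keysList.forEach(function (key) {
                    acc[agg.key][key] = agg.op.add(acc[agg.key][key], el[key]);
                });
            });
            return acc;
        },
        function (acc, el) {            // on remove (throws for the fast ops)
            aggsList.forEach(function (agg) {
                keysList.forEach(function (key) {
                    acc[agg.key][key] = agg.op.remove(acc[agg.key][key], el[key]);
                });
            });
            return acc;
        },
        function () {                   // initial composite accumulator
            var acc = {};
            aggsList.forEach(function (agg) {
                acc[agg.key] = {};
                keysList.forEach(function (key) {
                    acc[agg.key][key] = agg.op.initial();
                });
            });
            return acc;
        }
    );

    // Step 2: transpose into one row per aggregate. getOutput reads
    // grouped.value() at render time, so the rows stay effectively bound
    // to the original crossfilter without recomputing anything here.
    var rows = aggsList.map(function (agg) {
        return {
            label: agg.label,
            getOutput: function (key) {
                var value = grouped.value()[agg.key][key];
                return agg.op.getOutput ? agg.op.getOutput(value) : value;
            }
        };
    });
    return crossfilter(rows).dimension(function (d) { return d.label; });
};
```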
A small helper method, buildColumns, creates a column for each source key in srcKeys, plus an extra column for the operation label:
```js
var buildColumns = function (srcKeys) {
    var columns = [];
    columns.push({
        label: "Aggregate",
        format: function (el) { return el.label; }
    });
    srcKeys.forEach(function (key) {
        columns.push({
            label: key,
            format: function (el) { return el.getOutput(key); }
        });
    });
    return columns;
};
```
So now let's put everything together and create a table.
```js
var facts = crossfilter(myCSV);
var aggregatedDimension = buildTransposedAggregatesDimension(facts, srcKeys, aggregates);

dataTable = dc.dataTable('#dataTable');
```
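The rest of the table configuration lives in the fiddle; a plausible wiring of the pieces shown above (the group callback here is illustrative) would be:

```js
dataTable
    .dimension(aggregatedDimension)              // the transposed "fake" dimension
    .group(function () { return 'Aggregates'; }) // single section header
    .columns(buildColumns(srcKeys));             // label column + one per key

dc.renderAll();
```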
Finally, there is one more piece of code, shamelessly stolen from Gordon, that adds a line chart for additional filtering.