Upgrade the model of sending events to multiple Aggregators to allow compressed vs. uncompressed events
Here are some of my notes on the initial multi-aggregator implementation:
The initial implementation is quite practical: every EventBuilder simply sends a copy of each event to each Aggregator. The downsides are that every event crosses the network twice and that the EventBuilder holds an extra copy of the data. However, once events are received by a suitably configured OnMon Aggregator, they are dropped on the floor if the online monitoring algorithms are busy processing a previous event, so prescaling is handled automatically.
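The drop-if-busy prescaling described above can be sketched as a non-blocking hand-off to a one-slot buffer. This is an illustrative model only, not the actual EventBuilder/Aggregator code; the class and method names are invented for the example.

```python
import queue

class OnMonReceiver:
    """Toy model of an OnMon Aggregator (hypothetical, not the real code):
    events arriving while the monitoring algorithms are busy are dropped,
    so prescaling happens automatically."""

    def __init__(self):
        # A one-slot queue stands in for "busy processing a previous event".
        self.pending = queue.Queue(maxsize=1)
        self.dropped = 0

    def receive(self, event):
        try:
            # Hand the event off only if the monitor is idle (slot empty).
            self.pending.put_nowait(event)
            return True
        except queue.Full:
            # Monitor still busy: drop the event on the floor.
            self.dropped += 1
            return False

r = OnMonReceiver()
# Three events arrive before the monitor consumes anything:
accepted = [r.receive(e) for e in range(3)]  # only the first is kept
```

The effective prescale factor then falls out of how fast the monitoring algorithms drain the slot relative to the event rate.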
In response to this, Alessandro asked whether compressed events could be sent to the disk-writing Aggregator and uncompressed events to the online-monitoring Aggregator. This is not possible in the initial implementation.
However, this is a desirable feature, so we should think about how to achieve this. It is possible (likely?) that it will require a totally different model of sending events from the EventBuilders to the Aggregators.