by Andreas Heider
1 September, 2016 - 7 minute read

As outlined in our Introduction to Technology at Winton, we rely heavily on software to make our investments. Technology helps us ingest huge amounts of data, analyse it to make investment decisions and act on those decisions.

With our technology being so central to how we operate, we need to be able to rely on its output being correct. At the scale we operate, we cannot manually check every value being generated, nor do we have the manpower to resolve inconsistencies by hand.

Therefore we must design our systems such that they guarantee correctness, even in the face of external failures. As we are managing other people’s money, correctness and data consistency are crucial.

We need to build reliable systems that we can trust.

The first fallacy of distributed computing

The days of being able to run our strategies in a spreadsheet on a single computer have long passed. Nowadays our trading systems run around the clock, in multiple data centres distributed around the world.

But when distributing our systems, one needs to deal gracefully with failure. As we add more pieces to our systems, inevitably some of them will fail. As the famous list of fallacies of distributed computing says: The network is not reliable.

And it is not just the network: Failures can come from surprising places, such as sudden slowness in other systems. For example, slow disk writes can have the same effect as a network failure.

Trying to prevent these failures from happening is futile. To produce a reliable system, we need to incorporate the possibility of failure into every part of our system and prepare it to handle outages.

To demonstrate, imagine a simple system consisting of two applications A and B that need to keep track of the same value; for example, our position (the number of shares held) in a certain stock. Over the day this position changes as we buy and sell shares.

Every time we trade, A sends B a message containing how many shares we bought or sold. When system B receives that message, it adjusts its view of the position accordingly.

[Interactive demo: systems A and B, each starting with a position of 10]

We want this system to be consistent: Both A and B should always see the same position, apart from the short period of time it takes for the notification message to reach B. And in an ideal environment this simple system works nicely.

But in reality things go wrong and messages can get lost. This can result in data inconsistencies, with numbers not matching up.

Click on the lightning bolt while a message is in transit to trigger a failure and observe what happens to the state of A and B afterwards.
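
To make this concrete, here is a minimal sketch of the fire-and-forget scheme in Python. The lossy channel and its 10% drop rate are invented for the simulation; real failures come from the network, disks and everything in between.

```python
import random

def unreliable_send(handler, message, loss_rate=0.1):
    """A toy lossy channel: delivers the message or silently drops it."""
    if random.random() > loss_rate:
        handler(message)

class Position:
    """B's view of the position, updated by deltas received from A."""
    def __init__(self, position):
        self.position = position

    def on_trade(self, delta):
        self.position += delta

a_position = 10
b = Position(10)

# A buys 5 shares and notifies B: fire and forget, at most once.
a_position += 5
unreliable_send(b.on_trade, 5)

# If the message was dropped, A and B now silently disagree.
print(a_position, b.position)
```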

Processing messages at least, at most or exactly once

The system above processes messages at most once. Every message sent by A is processed by B zero or one times, and if a message is lost the state of both applications diverges.

To deal with messages getting lost we can introduce acknowledgements.

Once B has successfully processed a message it sends an acknowledgement back to A. If A does not receive this acknowledgement it assumes the message got lost and sends it again.

[Interactive demo: the same system, now with acknowledgement messages from B back to A]

This is much better! If a message is lost in transit from A to B the error is detected, A sends it again and the system corrects itself.

But observe what happens if you interrupt the ack message. B has already processed the message and updated its state, but A thinks the message got lost and sends it again. B therefore receives the same message again, processes it again and ends up with too large a position.

This is at least once processing. Every message sent by A is processed by B one or more times, and if a re-sent message is processed twice the state of both applications diverges.
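
In code, at least once processing amounts to re-sending until an acknowledgement arrives. This sketch reuses the toy lossy channel from above; run it a few times and you will occasionally see the duplicated delta.

```python
import random

def transmitted(loss_rate=0.1):
    """True if a single transmission gets through the toy lossy network."""
    return random.random() > loss_rate

class Receiver:
    """B: applies every delivery, with no memory of what it has already seen."""
    def __init__(self, position):
        self.position = position

    def on_trade(self, delta):
        self.position += delta  # applied once per delivery, not once per message

def send_at_least_once(receiver, delta):
    """A: keep re-sending until an acknowledgement makes it back."""
    while True:
        if transmitted():          # the message may be lost on the way out...
            receiver.on_trade(delta)
            if transmitted():      # ...and the ack may be lost on the way back
                return
        # No ack seen: A assumes loss and re-sends. If only the ack was lost,
        # B has already applied the delta and will now apply it a second time.

b = Receiver(10)
send_at_least_once(b, 5)
print(b.position)  # usually 15, but 20, 25, ... whenever an ack was lost
```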

Ideally we would like everything to happen exactly once, where every message sent by A is processed by B once and only once. But when communicating over an unreliable link this is not generally possible. Even if A tried to send an ack back, confirming it had received B’s ack, that ack could get lost as well. This is known as the Two Generals’ Problem.

If you are a developer you might wonder at this point: “Is this not already solved by TCP? Can’t I just use that for my communication and get on with my day?”. And indeed, TCP is a solid solution for one instance of this problem. But the same principle applies to all levels of your communication, not just to one network connection. For example, B might actually be a database that A writes to by sending queries like UPDATE positions SET position=position+5. This update statement can get lost or applied twice, resulting in inconsistent state.

Two ways of achieving exactly once processing

So we live in a cruel world where messages can get lost or delivered multiple times. How can we still build reliable and consistent systems?

There are many valid approaches; two popular ones are described below. They both deal with the problem in a similar way, building upon at least once processing. We cannot avoid messages being duplicated, but we can design our processing logic such that even if messages are received multiple times they are only applied once.

Idempotency

One way is to reformulate the problem such that messages can be processed multiple times without affecting the end result. In our example we can do this by sending our entire current position instead of how many new shares we bought:

[Interactive demo: A now sends its full position to B rather than the change]

Try to get this system into an inconsistent state. You will see that you cannot. Unlike the previous systems it always heals itself and goes back to a consistent state.

Where applicable, it is a great idea to build systems based on idempotency, as it usually leads to simple and reliable systems. However, this is not always easy in practice. By reformulating our problem we have actually lost some information, and if our system needed to know about individual trades we would have to build it differently. Expensive processing can also be an issue, as B will fully reprocess every duplicated message.
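
As a sketch, continuing the toy Python example above (and not any particular production system), an idempotent receiver simply overwrites its state with the reported value, so duplicate deliveries are harmless:

```python
class IdempotentReceiver:
    """B stores whatever absolute position A last reported."""
    def __init__(self, position):
        self.position = position

    def on_position(self, position):
        # Overwriting with the same value twice leaves the state unchanged,
        # which is exactly what makes this handler idempotent.
        self.position = position

b = IdempotentReceiver(10)
b.on_position(15)
b.on_position(15)  # a duplicate delivery has no effect
assert b.position == 15
```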

Deduplication

Another option is to include a unique id with each message and deduplicate based on that in the receiver.

[Interactive demo: A attaches a unique id to each message; B remembers which ids it has already processed]

A includes a unique id with each update it sends to B. B keeps track of which message ids it has already processed and ignores duplicates.

This is a great option that, when dealing with existing software, often requires fewer changes than going for idempotency. It also avoids potentially expensive duplicate processing, because B only needs to check the id and can skip messages it has already processed. However, to implement this pattern we need transactional storage in B, in which both the new state and the message id can be stored in one atomic operation. If they are stored independently, there is a risk of processing messages multiple times.
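
To illustrate the atomicity requirement, here is a sketch using Python’s built-in sqlite3 module; the table layout, stock name and message ids are invented for the example. The important part is that the dedup check and the position update happen inside a single transaction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE positions (stock TEXT PRIMARY KEY, position INTEGER)")
conn.execute("CREATE TABLE processed (message_id TEXT PRIMARY KEY)")
conn.execute("INSERT INTO positions VALUES ('ACME', 10)")
conn.commit()

def on_trade(message_id, delta):
    # One transaction covers both the dedup record and the state change,
    # so a crash between the two cannot leave them out of step.
    with conn:
        seen = conn.execute(
            "SELECT 1 FROM processed WHERE message_id = ?", (message_id,)
        ).fetchone()
        if seen:
            return  # duplicate delivery: already applied, ignore it
        conn.execute("INSERT INTO processed VALUES (?)", (message_id,))
        conn.execute(
            "UPDATE positions SET position = position + ? WHERE stock = 'ACME'",
            (delta,),
        )

on_trade("msg-1", 5)
on_trade("msg-1", 5)  # a re-delivered message is detected and skipped
print(conn.execute("SELECT position FROM positions").fetchone()[0])  # 15
```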

Conclusion

Delivery guarantees are essential when designing systems that handle important data. Your network is not reliable, and neither is your database if not used carefully.

Exactly once is not generally possible. But we can design our systems such that they are reliable even under failure conditions. This is usually done by building upon an underlying at least once layer. At Winton we like Kafka for at least once messaging.
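
As an illustration of the consumer side (assuming the confluent-kafka Python client; the broker address, topic and process() handler are placeholders), at least once behaviour comes from committing offsets only after a message has been fully processed:

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "positions",
    "enable.auto.commit": False,  # commit manually, after processing
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["trades"])  # placeholder topic

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    process(msg.value())          # your handler: make it idempotent or deduplicate
    consumer.commit(message=msg)  # a crash before this line means re-delivery, not loss
```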

Delivery guarantees matter. It is important to understand which ones your system actually provides.

Finally, it is often a good idea to avoid building a distributed system and just keep all your data in a single place, completely avoiding all these issues.