Part 2 of this series is available to read here
Writing this, having just come out of the Spring One Platform conference, I really want to race ahead and explain the BASE principle and how it facilitates microservices. However, I promised to explain why ACID, while not dead, should, much like relational databases, be relegated to the use cases where it's actually necessary.
Most technological advancements are a matter of what works. One invents or develops what is needed to get the job done. Maybe this means identifying optimizations, but you have to postpone them to get the job done. This is what has happened in IT, and it's why we've put up with classically blocking and contentious technologies like relational databases, which were great when memory and processing power were expensive. We had to wait and make sure that every step of a process had been completed and saved before moving on to the next step, because if anything failed, redoing that work would have been prohibitively costly. This introduced some serious limitations in software design.
We should all be familiar with servlets and similar web applications: the standard request/reply cycle that forces clients to wait for a response. Now let's expand that picture with infrastructure. Add a database behind the application server. Now we have a client that has to wait for the application server to reply, and an application server that has to wait for the database to reply. All of this ties up resources with waiting. Yes, processor architecture gives us context switching, which allows the computer to switch to another task while waiting, but this is costly and slow. It's also just a poor use of resources. And what recourse does one have when a stateful process needs to be suspended so another stateful process can execute? Before we jump to conclusions, I'm talking about the processor state that must be saved and reloaded as part of context switching. These limitations lead us to the ACID principle.
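To make that chain of waiting concrete, here is a minimal sketch in plain Java. The names (`handleRequest`, `queryDatabase`) are illustrative, not a real servlet or JDBC API; the `Thread.sleep` simply stands in for network and disk latency:

```java
// A sketch of the blocking request/reply chain: client waits on the
// application server, which in turn waits on the database.
class BlockingChain {
    // Simulates the database call. The application server's thread is
    // parked here doing nothing for the entire round trip.
    static String queryDatabase(String sql) throws InterruptedException {
        Thread.sleep(50); // stand-in for network + disk latency
        return "row-for:" + sql;
    }

    // Simulates the servlet. The client's connection is held open until
    // this method returns, so its latency includes the database's latency.
    static String handleRequest(String id) throws InterruptedException {
        return "response:" + queryDatabase("SELECT * FROM t WHERE id=" + id);
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        String response = handleRequest("42");
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // The chain is fully serialized: total latency >= database latency,
        // and a thread was tied up the whole time.
        System.out.println(response + " in ~" + elapsedMs + "ms");
    }
}
```

Every request occupies a thread for the full duration of the slowest dependency, which is exactly the resource waste described above.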
We used the ACID principle to solve these complications by adding staging and confirmation in front of and behind the execution. We make the destination wait for our commands as we issue them inside a transaction. We also make the client wait while we wait for an acknowledgement from the database that our transaction was committed. All the while, we context-switch from that thread to other threads so we can process other requests, each with its own waiting client. This is terribly inefficient.
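A toy illustration of that staging-and-confirmation pattern, stripped of any real database: commands are buffered inside a transaction and become visible only once `commit` returns. This is not JDBC, just a sketch of the waiting it imposes:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy transaction: writes are staged, and nothing is durable or visible
// until the commit is acknowledged. The caller blocks through all of it.
class ToyTransaction {
    private final Map<String, String> store;                 // the "database"
    private final List<String[]> staged = new ArrayList<>(); // commands held until commit

    ToyTransaction(Map<String, String> store) { this.store = store; }

    // The destination just records the command; the caller keeps waiting.
    void put(String key, String value) { staged.add(new String[]{key, value}); }

    // Only here does the staged work apply; this return is the acknowledgement
    // the client has been blocked waiting for.
    void commit() {
        for (String[] cmd : staged) store.put(cmd[0], cmd[1]);
        staged.clear();
    }

    public static void main(String[] args) {
        Map<String, String> db = new HashMap<>();
        ToyTransaction tx = new ToyTransaction(db);
        tx.put("account-7", "balance=100");
        // Staged but not committed: the database has not changed yet.
        System.out.println("before commit: " + db.containsKey("account-7"));
        tx.commit();
        System.out.println("after commit: " + db.containsKey("account-7"));
    }
}
```

The point of the sketch is the shape of the protocol, not the storage: every party in the chain is idle until the acknowledgement arrives.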
Now, we did introduce optimizations that alleviate some of these concerns. For example, most messaging technologies make use of listeners. I won't spend time here on how different languages handle this, but they all basically boil down to using a single thread to listen for new messages. When a message is received, that listener thread can either process the message itself or, if we want to get fancy, hand it to a thread pool so the message is processed on another thread while the listener goes back to listening for more messages. A bit more efficient. Less context switching. Even with fewer threads in use, though, it was still inefficient.
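The listener-plus-pool arrangement can be sketched with standard `java.util.concurrent` types; the queue here stands in for whatever messaging technology is delivering the messages, and the `"STOP"` sentinel is just a way to end the demo:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// One thread blocks on the queue; each message is handed to a worker pool
// so the listener can immediately go back to listening.
class ListenerSketch {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        ExecutorService workers = Executors.newFixedThreadPool(4); // processing pool

        Thread listener = new Thread(() -> {
            try {
                while (true) {
                    String msg = queue.take();      // the single listening thread
                    if (msg.equals("STOP")) break;  // demo-only shutdown sentinel
                    workers.submit(() ->            // hand off, then listen again
                            System.out.println("processed " + msg));
                }
            } catch (InterruptedException ignored) {
            }
        });
        listener.start();

        queue.put("order-1");
        queue.put("order-2");
        queue.put("STOP");

        listener.join();
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The listener thread itself does almost no work, but it still spends its life blocked on `take()`, which is the residual inefficiency the paragraph above points at.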
As should be apparent, limitations in our processing capabilities led to limited implementations, which in turn led to a limiting principle that has been handed down as law to new developers. This constrained the software architectures developers could design, at least for those who didn't think further about how to solve these problems. In my next article, I'll continue this point and explain the BASE principle and how it facilitates modern software architecture.