The fun thing about writing this series is that it’s all stuff I’ve “known” for 25 years. Now, having to explain it, I get to understand why. Understanding why is my biggest driver & here’s a big chance. I took a step too far when I introduced bi-temporality in
I can get the "How can we handle the disconnect between reality & the system?" part.
But one of the hardest parts is figuring out: when do we really have to?
I guess the hint is here: "It’s impossible to make this consistent or scalable. We want to distribute the benefits of disability insurance as widely as possible.", right?
What can help a business figure out whether it already needs "scalability, repeatability, & efficiency of automated processing"? Maybe the use case is so rare, the market so unreachable, or the knowledge so insufficient that it wouldn't be profitable. Maybe in the eXploration phase we'd want to process things manually and learn from that... or maybe that can quickly become a bottleneck.
I still can't figure out the clues or questions to ask ourselves to find the sweet spot between early and late design 🤔
... but maybe this will come up later and I just have to be patient 😉
https://www.datomic.com/ is built on this principle, as far as I can tell.
Curious, do you see this as a contradiction (or contextualisation) to YAGNI, i.e. don't YAGNI if it's expensive to revert our decision? Or does it not contradict because "you ARE gonna need it"?
How are you going to answer the question “why did we send this (wrong) bill in the last cycle?” if you have only the time stamp of the move?
It seems to me that you need a second time stamp: the time stamp of when the address correction was entered into the DB. In other words, we need to know both the date of the move AND the date when we learned about the move.
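To make the two time stamps concrete, here's a minimal sketch of a bi-temporal address record. All the names (`AddressRecord`, `valid_from`, `recorded_at`, the addresses and dates) are hypothetical, purely for illustration — the point is that a query takes *both* a "when in reality" and a "what did we know by then" argument:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class AddressRecord:
    address: str
    valid_from: date    # when the customer actually lived there (reality)
    recorded_at: date   # when this fact entered the database (the system)

history = [
    AddressRecord("12 Old Street", valid_from=date(2020, 1, 1), recorded_at=date(2020, 1, 1)),
    # Customer moved on June 1st but only told us on July 10th -- after billing ran.
    AddressRecord("34 New Avenue", valid_from=date(2023, 6, 1), recorded_at=date(2023, 7, 10)),
]

def address_as_known(on: date, as_of: date) -> Optional[str]:
    """The address valid on `on`, using only facts recorded by `as_of`."""
    known = [r for r in history if r.recorded_at <= as_of and r.valid_from <= on]
    return max(known, key=lambda r: r.valid_from).address if known else None

# Why did the July 1st bill go to the old address? Because as of July 1st
# the system didn't yet know about the June move:
print(address_as_known(on=date(2023, 7, 1), as_of=date(2023, 7, 1)))   # 12 Old Street
print(address_as_known(on=date(2023, 7, 1), as_of=date(2023, 7, 15)))  # 34 New Avenue
```

With only the move date (`valid_from`), the first query would be unanswerable — you'd "know" the new address retroactively and couldn't explain the old bill.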
Overall, it seems that if you want explainability of the past events while responding to changes, you’ll end up with a https://en.wikipedia.org/wiki/Persistent_data_structure for business.
One example of this that engineers are very familiar with is version control: while new features (and bugs) are being added to a new release all the time, you can explain why the release from 2 weeks ago did something funny.
So, this principle is implying that you want version control for your business data… and you want your software to be able to load different versions of your business data to explain what happened at a certain point in history.
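The "version control for business data" idea can be sketched as a toy append-only store: every write produces a new snapshot, nothing is overwritten, and any past version can be reloaded. `VersionedStore` and its methods are made-up names for illustration, not anything from the post:

```python
class VersionedStore:
    """Append-only key/value store: each put creates a new immutable version."""

    def __init__(self):
        self._versions = [{}]  # version 0 is the empty state

    def put(self, key, value) -> int:
        # Copy-on-write: snapshot the latest state with one key changed.
        new = {**self._versions[-1], key: value}
        self._versions.append(new)
        return len(self._versions) - 1  # the new version number

    def get(self, key, version=None):
        # Read from the latest version, or from any point in history.
        v = self._versions[-1 if version is None else version]
        return v.get(key)

store = VersionedStore()
v1 = store.put("release", "1.0")
v2 = store.put("release", "1.1")
print(store.get("release"))              # 1.1 -- current state
print(store.get("release", version=v1))  # 1.0 -- explain the release from 2 weeks ago
```

Real persistent data structures share structure between versions instead of copying, but the interface — "read as of any version" — is the same idea.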
You mentioned the "never discard data" principle -- I'm wondering if you have any book recommendations that would let me dive deeper into business architecture principles? Would very much appreciate a nudge in the right direction!
At first sight seems related to https://martinfowler.com/eaaDev/EventSourcing.html - your thoughts on that?
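For readers who haven't seen the pattern: in event sourcing (as Fowler describes it) you persist the events themselves, and any state is derived by replaying them — which is why it pairs naturally with "never discard data". A minimal sketch, with made-up event names:

```python
# Events are the source of truth; nothing is ever overwritten.
events = [
    {"type": "moved", "address": "12 Old Street"},
    {"type": "moved", "address": "34 New Avenue"},
]

def replay(events, until=None):
    """Rebuild state by folding over the events, optionally stopping early."""
    state = {}
    for e in events[:until]:
        if e["type"] == "moved":
            state["address"] = e["address"]
    return state

print(replay(events))           # {'address': '34 New Avenue'} -- current state
print(replay(events, until=1))  # {'address': '12 Old Street'} -- state after event 1
```

Replaying a prefix of the log gives you the system as it stood at any earlier point, much like checking out an old commit.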
Another thought: the issue I have with the trade-off being considered here is that it neglects the cost of developing the UI, which typically tends to be more expensive than the model or data storage. Assuming we are currently at the exploration stage of 3X, this cost could potentially delay our exploration. Would you bite the bullet and construct the UI to reflect the more complex model, or would you prefer to create a simple adaptor to match the simple UI with the complex model?