21 Comments

I began reading your post and realized "hey -- I was just scribbling about that myself," and finally remembered what the context was.

The combination of immutable data (with an inherent timestamp) and the idea of an "effective date" is pretty damned powerful ... and much more representative of reality than the good ol' "just keep a snapshot, bytes are EXPENSIVE" days of dozens and dozens of megabytes in a refrigerator-sized behemoth!!!!

Kent Beck (author):

We adapted to the tradeoffs back then. Then the tradeoffs changed, but thinking changes slower.


So, so true.

Now I'll have to go watch old Rich Hickey videos again. And *those* started like a decade and a half ago.

Cheers!!

Aug 4, 2023 · edited Aug 4, 2023 · Liked by Kent Beck

Rich's talks are always re-watch worthy! As it happens, I had the fortune to meet Rich and ask him about bi-temporality at the Clojure Conj in April; he said something like "most people don't care about it, but those that do *really* care" :)

Oct 17, 2023 · edited Oct 17, 2023 · Liked by Kent Beck

Hi Kent, thanks for this article; I love the principles you take us through to build up the argument that some complexity can be necessary. Like a rocket, where every kilo of fuel is a tradeoff against the weight it adds, we're faced with adding just enough complexity to solve the problem and no more. And the messy real-life complexity you describe here definitely resonates!

Reading the article, the words "event sourcing" kept ringing in my ears, and I suspect you avoided those words because you're purposefully describing the problem from a business point of view? The solution you propose fits into that same business context too, which makes the article very clear and understandable. But I wonder if you'd agree that event sourcing is one implementation that can achieve Eventual Business Consistency?

I say this because an event would model the backdateable date and also inherently carry a timestamp showing when it was received, so it naturally expresses the two timelines. And in an event-sourced system, the data that needs to be reprocessed is also available as events, which increases the likelihood of having all the necessary data for reprocessing, compared to only having access to the current state of normalized data.
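To make that concrete, here's a minimal sketch of the idea (hypothetical names, Python just for illustration, not code from the article): each event carries both an effective timestamp and a recorded timestamp, a backdated correction is simply one more event, and any past belief can be reconstructed by replay:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AddressChanged:
    customer_id: str
    new_address: str
    effective: date   # when the move happened in the real world
    recorded: date    # when we learned about it (the posted date)

def address_as_of(events, effective: date, recorded: date):
    """Answer: what address did we believe was in effect on `effective`,
    given only what we knew by `recorded`?"""
    known = [e for e in events if e.recorded <= recorded]
    in_effect = [e for e in known if e.effective <= effective]
    # Latest effective change wins; ties broken by recording order.
    in_effect.sort(key=lambda e: (e.effective, e.recorded))
    return in_effect[-1].new_address if in_effect else None

events = [
    AddressChanged("c1", "12 Oak St", effective=date(2023, 5, 1), recorded=date(2023, 5, 1)),
    # Backdated correction: the customer actually moved in June,
    # but only told us in August.
    AddressChanged("c1", "7 Elm Ave", effective=date(2023, 6, 15), recorded=date(2023, 8, 2)),
]

# What we believed in July (before the correction arrived):
assert address_as_of(events, date(2023, 7, 1), date(2023, 7, 1)) == "12 Oak St"
# What we now know was true in July:
assert address_as_of(events, date(2023, 7, 1), date(2023, 8, 2)) == "7 Elm Ave"
```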

More generally, I've been drawn to event sourcing architectures precisely because they reflect a fundamental reality of the world where messy, error-prone, back-filled corrections are necessary to keep everything in sync. Reality is dominated by events more than state, if I can get away with such philosophizing, which is how I feel it connects so well to what you describe.

I'd love to hear your thoughts on this, but above all else thank you for sharing the article.

Kent Beck (author):

Event sourcing is kind of equivalent to accounts & transactions. I come at it from the accounts/transactions perspective. Both approaches separate stages of processing. Rather than one function calling another, exactly right now, they do some work & leave breadcrumbs for later work to be done.
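A toy sketch of what I mean by breadcrumbs (hypothetical names, nothing from a real system): posting a transaction records the fact right now; derived answers like a balance are computed later by folding over the records:

```python
from datetime import datetime

ledger = []  # the "breadcrumbs": an append-only list of transactions

def post(account: str, amount: int) -> None:
    # Do the immediate work: record the fact. Derived work happens later.
    ledger.append({"account": account, "amount": amount,
                   "posted": datetime.now()})

def balance(account: str) -> int:
    # Later work: fold over the breadcrumbs on demand.
    return sum(t["amount"] for t in ledger if t["account"] == account)

post("alice", 100)
post("alice", -30)
assert balance("alice") == 70
```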


The appendix about the analogy reminds me how much I love the way Pat Helland explains why "convergence" could be a better name (if "eventual consistency" weren't already everywhere) in "Don't Get Stuck in the "Con" Game" (https://pathelland.substack.com/p/dont-get-stuck-in-the-con-game?nthPub=281).

This makes me think: could "timeline convergence" be a better name than bi-temporality? I like the idea that the _timeline_ converges (not the timestamps), as the convergence covers the state of the complete timeline (including past timestamps, e.g. in the error-correction "you got my address change wrong" case).

Kent Beck (author):

Try it out. "We're implementing this with converging timelines" versus "We're implementing this so the business is eventually consistent". They're both okay for me.

Aug 8, 2023 · Liked by Kent Beck

Happy to see this article, Kent. I believe there are a lot of benefits here, the most notable (customer-facing) one being the one you illustrate: prove to the customer that you're smart enough to be aware of (in your example) changes of physical address. Although digital systems emphasize this less (standardizing addresses is critical to keep charging recurring subscriptions), other aspects of customer history can benefit from this too.

Also, from a data-auditability perspective, providing temporality can be invaluable for improving data hygiene. I certainly hope these considerations become more widely known and used.

I work for a DB company that offers temporality out of the box, where you can query what a data record looked like at a specific time in the past, without sacrificing performance/low latency, distributed writes, or transaction integrity. The point being: there are already database products out there that offer this robustness. Thanks again for the great article. - Luis @ fauna.com
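To sketch the general idea in plain Python (generic, hypothetical names, deliberately not Fauna's actual API): rather than updating in place, every version of a record keeps a system-time range, and an as-of query returns whichever version was current at the requested instant:

```python
from datetime import datetime, timezone

INF = datetime.max.replace(tzinfo=timezone.utc)

# Every version is kept with a [tx_from, tx_to) system-time range.
versions = [
    {"id": "c1", "address": "12 Oak St",
     "tx_from": datetime(2023, 5, 1, tzinfo=timezone.utc),
     "tx_to":   datetime(2023, 8, 2, tzinfo=timezone.utc)},
    {"id": "c1", "address": "7 Elm Ave",
     "tx_from": datetime(2023, 8, 2, tzinfo=timezone.utc),
     "tx_to":   INF},
]

def as_of(record_id: str, at: datetime):
    """Return the version of the record that was current at `at`."""
    for v in versions:
        if v["id"] == record_id and v["tx_from"] <= at < v["tx_to"]:
            return v
    return None

assert as_of("c1", datetime(2023, 6, 1, tzinfo=timezone.utc))["address"] == "12 Oak St"
```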

Aug 8, 2023 · Liked by Kent Beck

Kent, thanks for writing this up.

Even for people who already understand the concept, it's hugely valuable, since we can leverage your reflections and refinements.

I like the terms effective and posted dates; this is clearer than I’ve seen it expressed before.

Aug 5, 2023 · Liked by Kent Beck

Interesting take. Thanks Kent.

Aug 5, 2023 · Liked by Kent Beck

Accounting systems have actually used this concept of bi-temporal data for a while, but without such a succinct name and explanation for it, which has made it difficult for non-accountants to grasp. Kudos for labeling and visualizing it so clearly!

As there is an increasing amount of real-time reporting out of financial accounting systems, we'd need to extend this model to tri-temporal: 1) the time the event happened "in the real world", 2) the financial reporting period (as these need to be locked periodically), and 3) the time of recording in the database.
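A minimal sketch of what such a tri-temporal entry might look like (hypothetical names and rules, purely illustrative): a fact that arrives after its reporting period is locked keeps its real-world event date but gets booked to the currently open period:

```python
from dataclasses import dataclass, replace
from datetime import date

@dataclass(frozen=True)
class JournalEntry:
    amount: int
    event_date: date   # 1) when it happened in the real world
    period: str        # 2) the reporting period it is booked to
    recorded: date     # 3) when it landed in the database

CLOSED_PERIODS = {"2023-06"}

def book(entry: JournalEntry, open_period: str) -> JournalEntry:
    """Late facts for a locked period keep their real-world event date
    but are booked to the currently open reporting period."""
    if entry.period in CLOSED_PERIODS:
        return replace(entry, period=open_period)
    return entry

late = JournalEntry(amount=100, event_date=date(2023, 6, 28),
                    period="2023-06", recorded=date(2023, 7, 5))
booked = book(late, open_period="2023-07")
assert booked.period == "2023-07" and booked.event_date == date(2023, 6, 28)
```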


Hey Kent, great article which explains things for geeks. Thank you! The fun thing is that in your "The Analogy" part you describe the same thing I heard today in a Software Engineering Daily podcast episode titled "CAP Theorem 23 Years Later with Eric Brewer". So what your example describes is the CAP theorem: you can get either consistency or availability.

Link to podcast episode: https://softwareengineeringdaily.com/2023/07/25/cap-theorem/

Kent Beck (author):

Correct. For my intended audience I didn’t think bringing up the CAP Theorem by name would help.


Excluding partition tolerance from the CAP theorem, it's pretty close to your example.

Aug 4, 2023 · edited Aug 4, 2023

> Part of the reason it hasn’t taken off is because of the additional complexity it imposes on programmers

The trouble is that SQL and the existing crop of databases make working with time like this far harder than it ought to be. I work on https://xtdb.com where we are building a database engine in which all data is bi-temporal by default, and crucially, without imposing a tonne of schema & query language boilerplate on developers who are building applications that (currently!) only care about 'now'. I think this bi-temporal-by-default approach may be the only way the concept can succeed.

For anyone curious to hear more, this recent presentation I gave on "UPDATE Considered Harmful" may be of interest: https://www.youtube.com/watch?v=JxMz-tyicgo ...and shameless plug (sorry Kent!): I'm also giving a webinar about bi-temporality specifically next week: https://attendee.gotowebinar.com/register/2960607012900067930?source=xtdb-discuss


I’m thinking about how event sourcing could be a possible solution as well. When you have a new address in the DB you can replay some of the events against the new address... but that may (and will) trigger new events that try to update state that was already exposed and used for reports, etc. It’s not good :)

Aug 4, 2023Liked by Kent Beck

You can't change the past. You can change your view of the past.

Some state was exposed in a report, and someone took action based on that. We later discover that that state wasn't actually correct and therefore someone took incorrect actions. Just as we took a corrective action with respect to the state that's exposed in the report, the best we can do is take whatever corrective/compensating action we can take with respect to the actions that someone who saw the report took.

Since there's no strict time frame beyond which new facts can come to light which change our view of the past, there's not really any escaping this.

From an implementation perspective, in practice one probably needs polytemporality (something like a vector clock?).
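For the curious, a minimal vector-clock sketch (the standard algorithm, with hypothetical labels): each source of facts keeps its own counter, so two views of the past can be detected as concurrent rather than ordered:

```python
def merge(a: dict, b: dict) -> dict:
    """Combine two vector clocks: elementwise maximum."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def happened_before(a: dict, b: dict) -> bool:
    """True iff clock a strictly precedes clock b."""
    keys = a.keys() | b.keys()
    return (all(a.get(k, 0) <= b.get(k, 0) for k in keys)
            and any(a.get(k, 0) < b.get(k, 0) for k in keys))

# Two departments learn corrections independently: neither view precedes
# the other, so the facts are concurrent and need reconciling.
reporting = {"reporting": 2}
billing = {"billing": 1}
assert not happened_before(reporting, billing)
assert not happened_before(billing, reporting)
assert merge(reporting, billing) == {"reporting": 2, "billing": 1}
```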

Kent Beck (author):

In the applications I've seen, two dimensions of time is the sweet spot. There may be cases for more, but now we're deep in tradeoff territory.


Hmm, I agree it's a solution for certain cases, but wouldn't it be easier here to say "a change applies as you communicate it, and if you realise too late to inform us about the move then it's your problem"?

Kent Beck (author):

The customer will experience that as poor customer service. Also, sometimes it's genuinely the business' fault. Finally, yes, this is what most systems do. There's a better way.
