23 Comments

I get the "How can we handle the disconnect between reality & the system?" part.

But one of the hardest parts is figuring out: when do we really have to?

I guess the hint is here: "It’s impossible to make this consistent or scalable. We want to distribute the benefits of disability insurance as widely as possible.", right?

What can help a business figure out whether they already need "scalability, repeatability, & efficiency of automated processing"? Maybe the use case is so rare, the market so unreachable, or the knowledge so insufficient that it wouldn't be profitable. Maybe in the eXploration phase we want to process things manually and learn from that... or maybe that quickly becomes a bottleneck.

I still can't figure out the clues or questions to ask ourselves to find the sweet spot between early and late design 🤔

... but maybe this will come up later and I just have to be patient 😉


Great question! I don't have a solid answer. Yet.

1. Understand that it's possible. Large companies have muddled through so long that at first they can't imagine a world without muddling.

2. Choose the tradeoff of moving some investment forward.

Pretty soon we will talk about the role of accounts & transactions. I tried to introduce them at Gusto & failed, even though the benefits would have been enormous.

Re: sweet spot on timing--that's why they pay us the big bucks.


Oh! Accounts & transactions! Looking forward to that!

😆 good one on the sweet spot timing being the value we provide!

That said, most teams are designing either so early or so late that it is relatively easy to nudge them a little closer to the middle 😅


Just asking "when?" creates huge value.


Just one answer I can think of: it comes with domain knowledge/subject-matter expertise gained through experience. Listen to those who have experience in the domain, or gain that experience over time and change the system to accommodate it (paying off the technical debt) as you learn more.

In @Kent Beck’s story, he was working with folks who already had enough domain experience to know they needed it up front. I’ve worked with HR folks who knew we needed a benefit year and an “as of” date in reports for a benefits admin system we were building. They had the domain experience. We listened to them.
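A hypothetical sliver of what that requirement boils down to in code (field names invented): every report query carries an explicit benefit year and as-of date rather than implicitly meaning "now".

```python
from datetime import date

def enrollments_report(enrollments: list[dict], benefit_year: int, as_of: date) -> list[dict]:
    """Rows for a benefit year, as they stood on the as-of date."""
    return [e for e in enrollments
            if e["benefit_year"] == benefit_year and e["recorded_at"] <= as_of]

rows = [
    {"employee": "Ana", "plan": "PPO", "benefit_year": 2023, "recorded_at": date(2022, 11, 15)},
    {"employee": "Ana", "plan": "HMO", "benefit_year": 2023, "recorded_at": date(2023, 3, 2)},
]
# Re-running the January report later still shows only what was recorded by then.
print(enrollments_report(rows, 2023, date(2023, 1, 1)))  # only the PPO row
```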


https://www.datomic.com/ is built on this principle, as far as I can tell.


Curious, do you see this as a contradiction (or contextualisation) to YAGNI, i.e. don't YAGNI if it's expensive to revert our decision? Or does it not contradict because "you ARE gonna need it"?


Exactly right. YAGNI is a meditation on our (engineers') tendency to make design decisions too soon. Some people take it as an excuse not to design until too late. The design decisions in this business architecture series are intended as an antidote--more complex than you might think you need right now, but worth it.


How are you going to answer the question of “why did we send this (wrong) bill in the last cycle?”, if you have only the time stamp of the move?

It seems to me that you need a second time stamp, the time stamp of when the address correction was entered into the DB. In other words, we need to know both the date of the move AND the date when we learned about the move.

Overall, it seems that if you want explainability of past events while responding to changes, you’ll end up with a https://en.wikipedia.org/wiki/Persistent_data_structure for business.

One example of this that engineers are very familiar with is version control: while new features (and bugs) are being added to a new release all the time, you can explain why the release from 2 weeks ago did something funny.

So, this principle implies that you want version control for your business data… and you want your software to be able to load different versions of your business data to explain what happened at a certain point in history.
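To make the two-timestamp idea concrete, here is a minimal sketch (field names are made up): `effective_date` is when the move happened, `recorded_at` is when we learned about it, and an "as of" query replays what we believed at billing time.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class AddressRecord:
    address: str
    effective_date: date   # when the move actually happened
    recorded_at: date      # when we entered it into the DB

def address_as_known_on(history: list[AddressRecord],
                        billing_date: date,
                        knowledge_date: date) -> Optional[str]:
    """The address we *believed* was in effect on billing_date,
    given only what had been recorded by knowledge_date."""
    known = [r for r in history
             if r.recorded_at <= knowledge_date
             and r.effective_date <= billing_date]
    return max(known, key=lambda r: r.effective_date).address if known else None

history = [
    AddressRecord("12 Oak St", date(2022, 1, 1), date(2022, 1, 1)),
    # Customer moved June 1st, but we only heard about it July 10th.
    AddressRecord("99 Elm Ave", date(2022, 6, 1), date(2022, 7, 10)),
]

# Why did the July 1st bill go to the old address? Because on July 1st
# we hadn't yet recorded the move:
print(address_as_known_on(history, date(2022, 7, 1), date(2022, 7, 1)))   # 12 Oak St
print(address_as_known_on(history, date(2022, 7, 1), date(2022, 7, 15)))  # 99 Elm Ave
```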


I agree that the version control metaphor is strong. However, half the audience I'm trying to reach has no understanding of version control. That's why I steered clear.

My reason for discussing business architecture is to get agreement between business sponsors & engineers on the need for the "extra" up front work necessary to support long-term operations.


Ah, very interesting! I totally missed that angle/second audience. Also, I can see how writing about this aligns with your mission of helping geeks feel safe: your focus and persistence are inspiring!

How do you envision these writings being used? Do you see engineers sharing them with their business sponsors directly? Do you think the current level of details (including Python code) would be understood by a business person at an insurance company? Or do you expect engineers to learn how to articulate the need for upfront investments themselves in the language that business sponsors understand?

In some sense, the example is about an often-missed requirement for retrospective debugging at the business level. It is also the type of requirement that is harder to implement later. If I were arguing with my business partner, I would focus on the business use case and on the cost of delaying implementing support. At any business-first (as opposed to technology-first) company, the final say would be up to the business side, but I would make sure that the costs and benefits of different decisions are clear. (That said, I have never worked at an insurance company, so take the above with a lot of salt :-) )


I’m encouraging geeks to speak with empathy for business folks. That’s also why NPV & options are in TF?. The sample code is so geeks can be confident in their understanding of the concepts, also a prerequisite for empathy.


The nice thing about thinking of this as version control for business data is that the API to version control can be fairly narrow and (most of) the complexity can be concentrated behind that API.
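As a rough sketch (all names invented), that narrow API could be as small as "put" and "get as of", with the append-only history hidden behind it:

```python
from datetime import datetime
from typing import Any, Optional

class VersionedStore:
    """Narrow facade: callers put values and read them 'as of' a moment;
    the append-only history is an internal detail."""

    def __init__(self) -> None:
        self._history: dict[str, list[tuple[datetime, Any]]] = {}

    def put(self, key: str, value: Any, at: Optional[datetime] = None) -> None:
        self._history.setdefault(key, []).append((at or datetime.now(), value))

    def get_as_of(self, key: str, at: datetime) -> Any:
        versions = [(t, v) for t, v in self._history.get(key, []) if t <= at]
        if not versions:
            raise KeyError(f"{key} had no value as of {at}")
        return max(versions, key=lambda tv: tv[0])[1]
```

Everything clever (storage layout, compaction, indexing by time) can live behind those two methods without callers ever seeing it.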


You mentioned the "never discard data" principle -- I'm wondering if you have any book recommendations that would allow me to dive deeper into the business architecture principles? Would very much appreciate a nudge in the right direction!


Fowler's Analysis Patterns contains some similar material. Otherwise, you're watching the book being written!


At first sight this seems related to https://martinfowler.com/eaaDev/EventSourcing.html - your thoughts on that?


Yes, it is great that you bring this to the table. When I was reading Kent's post, I was thinking about Event Sourcing. There is a post https://medium.com/swlh/event-sourcing-as-a-ddd-pattern-fea6de35fcca that explores the synergy between solid Domain Modeling and the use of Event Sourcing and CQRS (Command Query Responsibility Segregation).


That’s the reciprocal of what I’m talking about. You can derive either from the other. I’ll get to how/why to record deltas in a later element of the series.


I’m assuming you’re suggesting to use deltas instead of event sourcing.

You can’t always derive deltas because deltas lose intent (this changed but _why_? Did my bank account balance change because I made a withdrawal or because I had a late fee?) and even assuming they are the same, if you use deltas, the cost of deriving events from deltas is much higher than that of deriving deltas from events.

Furthermore, any entity-based model you use is going to be optimized for something, and if that something changes, suddenly your model is optimized for the wrong thing. If you event source, creating a new write validation model is as trivial as writing a new projection off of your events.

So even assuming they’re isomorphic (in the mathematical sense), they still have different practical implications which tend to favor events over diffs for most types of information.
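A toy illustration of that asymmetry (an entirely made-up example): the delta is trivially derived from either event, but a withdrawal and a late fee produce the same delta, so the intent can't be recovered from the delta alone.

```python
from dataclasses import dataclass

@dataclass
class Withdrawal:
    amount: int

@dataclass
class LateFeeCharged:
    amount: int

def delta(event) -> int:
    """Events -> deltas is easy: both reduce the balance by `amount`."""
    return -event.amount

# Deltas -> events is not: both events collapse to the same number.
assert delta(Withdrawal(25)) == delta(LateFeeCharged(25)) == -25

def balance(events, opening: int = 0) -> int:
    """A projection: fold the event stream into current state."""
    return opening + sum(delta(e) for e in events)

print(balance([Withdrawal(25), LateFeeCharged(25)], opening=100))  # 50
```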


I'm not suggesting anything. Just noting that the two models are duals of each other (within the limitations you noted).


Another thought: the issue I have with the trade-off being considered here is that it neglects the cost of developing the UI, which typically tends to be more expensive than the model or data storage. Assuming we are currently at the exploration stage of 3X, this cost could potentially delay our exploration. Would you bite the bullet and construct the UI to reflect the more complex model, or would you prefer to create a simple adaptor to match the simple UI with the complex model?
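For the second option, the adaptor could be as thin as projecting the complex model down to the single value the simple UI needs today; something like this sketch (names are hypothetical):

```python
from datetime import date

class CurrentAddressView:
    """Thin adaptor: the simple UI only asks 'what is the address now?',
    while the model underneath keeps the full dated history."""

    def __init__(self, history: list[tuple[date, str]]):
        self._history = history  # (effective_date, address) pairs

    def current_address(self, today: date) -> str:
        in_effect = [(d, a) for d, a in self._history if d <= today]
        return max(in_effect, key=lambda pair: pair[0])[1]

view = CurrentAddressView([(date(2022, 1, 1), "12 Oak St"),
                           (date(2022, 6, 1), "99 Elm Ave")])
print(view.current_address(date(2022, 7, 1)))  # 99 Elm Ave
```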


Wisen, I'd stick with a simple UI first, as ugly as it gets, then iterate, especially in the exploration stage. What do you think?

You don't have to enable that ugly UI for everyone (thanks to feature toggles), but you can gain knowledge from it.

My typical example is a native datepicker vs a custom datepicker... your custom datepicker might be better... but says who? Let's ship, iterate, and observe...

Will that end up being more expensive than shipping the right version from day 1? Maybe, but who said it's the right version?

Also, as this might alter the UX, switching to another UI later and teaching it to users might end up more expensive than the implementation cost.

What do you think?


And you will often need an adapter anyway, so let's make it simple first to fit our needs.

One of my favorite examples is Stripe's API "last_payment_error" https://stripe.com/docs/api/payment_intents/object#payment_intent_object-last_payment_error

We can guess that they have more information than the "last error", but the API only exposes the last error, as this is more than enough for most of their API consumers.

This reduces the surface of the API (fewer things to learn, fewer things to maintain, etc.) until it no longer fits, and when that day comes (if it ever does), they can extend the API.

But they probably store all errors.
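How Stripe actually implements it is anyone's guess, but the shape of the idea is easy to sketch in a few lines: keep every error internally, expose only the most recent one at the surface.

```python
from typing import Optional

class PaymentIntent:
    """Hypothetical sketch: record every error, but the public surface
    only exposes the most recent one, mirroring `last_payment_error`."""

    def __init__(self) -> None:
        self._errors: list[dict] = []   # full history, kept internally

    def record_error(self, code: str, message: str) -> None:
        self._errors.append({"code": code, "message": message})

    @property
    def last_payment_error(self) -> Optional[dict]:
        return self._errors[-1] if self._errors else None

intent = PaymentIntent()
intent.record_error("card_declined", "Your card was declined.")
intent.record_error("insufficient_funds", "Insufficient funds.")
print(intent.last_payment_error)  # only the latest error is exposed
```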
