By the way, there is yet another way to represent time delays, from discrete control system theory: a z^(-k) block, where k is the number of sampling periods by which the signal is delayed.
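For anyone who hasn't run into the notation, a z^(-k) block is just a k-sample FIFO delay. A minimal sketch in Python (the signal values and the choice of k are made up for illustration):

```python
from collections import deque

def delay(signal, k):
    """Apply a z^(-k) block: delay the signal by k sampling periods.

    The buffer is pre-filled with zeros, so the first k outputs are 0,
    matching the usual zero-initial-condition assumption.
    """
    buf = deque([0.0] * k)
    out = []
    for sample in signal:
        buf.append(sample)
        out.append(buf.popleft())
    return out

print(delay([1, 2, 3, 4, 5], k=2))  # [0.0, 0.0, 1, 2, 3]
```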
Was your use of two parallel lines to represent delay the invention you were referring to? If so, that's the standard representation of delay in causal loop diagrams, which is the diagramming technique I thought you were using. See https://systemsandus.com/2012/08/15/learn-to-read-clds/ If not, where did you get your diagramming conventions from?
I got mine from Weinberg. Glad I invented/remembered the right notation.
This reminds me that I'd love for you to write about the phenomenon of people either intentionally or unintentionally creating a modest variant of something else and the variants continuing to exist long after it becomes evident that the world would be a better place if the variants were collapsed. As it relates to this instance, I'm not referring to your use of diagonal lines as a variant of Diagram Effects, but of both Diagram Effects and Causal Loop Diagrams continuing to exist as variants of each other. FWIW, I haven't investigated which came first.
I rely on social mechanisms to eliminate unhelpful duplication. It's not perfect but it's also low effort, which I appreciate.
Thanks for the quick reply. Found this reference which says what Weinberg was using was called "Diagram of Effects" and was related to Causal Loop Diagrams (CLDs). https://amr-noaman.blogspot.com/2014/04/diagrams-of-effects.html
'reactionary waterfallism' is such a wonderful phrase!
"Maybe include analytics, though, to see how often it is read & how often readers find it helpful. If those analytics show that the documentation turns out not to be valuable, consider not writing it in the future."
I strongly disagree. Documentation is the kind of feature that is used by few, but for the few who use it, it's indispensable. You're not going to be able to capture that value in analytics.
In addition, heavy use of documentation might well be a smell--indicating unintuitive aspects of the code or product that require further explanation. This would be a better use case for analytics: that is, to make sure that users/coders are not excessively reliant on the documentation due to infelicities of design or implementation.
Sometimes the delay in the feedback loop contributes to stability, see: https://en.m.wikipedia.org/wiki/Positive_feedback#Hysteresis
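A minimal sketch of that idea, using a hypothetical bang-bang thermostat: the hysteresis band means the controller holds its previous state near the setpoint instead of chattering on and off around a single threshold.

```python
def thermostat_with_hysteresis(temp, heater_on, setpoint=20.0, band=1.0):
    """Bang-bang controller with a hysteresis band.

    The heater switches on below (setpoint - band) and off above
    (setpoint + band); inside the band it keeps its previous state,
    which prevents rapid oscillation around a single threshold.
    """
    if temp < setpoint - band:
        return True   # too cold: turn heater on
    if temp > setpoint + band:
        return False  # too warm: turn heater off
    return heater_on  # inside the band: hold state

# Tiny simulation: heating raises the temperature, ambient loss lowers it.
temp, heater_on = 15.0, False
for _ in range(20):
    heater_on = thermostat_with_hysteresis(temp, heater_on)
    temp += 0.8 if heater_on else -0.5
    print(f"temp={temp:4.1f} heater={'on' if heater_on else 'off'}")
```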
> Instead, because there is a delay in the link between changing the thermostat & changing the feeling of warmth, you end up with oscillation.
Just FYI, delay isn't necessary for oscillations. The general condition for a system of ODEs
dx/dt = a x + b y
dy/dt = c x + d y
to be oscillatory is that the matrix A = [a, b; c, d] have a pair of complex eigenvalues. Here all the feedback is instantaneous.
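To make that concrete, here is a quick check with NumPy (the coefficients are an arbitrary illustrative choice, a damped-spring-like system):

```python
import numpy as np

# dx/dt = y, dy/dt = -x - 0.1 y
A = np.array([[0.0, 1.0],
              [-1.0, -0.1]])

eigs = np.linalg.eigvals(A)
print(eigs)  # a complex-conjugate pair -> the system oscillates

# Simulate with forward Euler to see the oscillation; no delay anywhere.
state = np.array([1.0, 0.0])
dt = 0.01
xs = []
for _ in range(2000):
    state = state + dt * (A @ state)
    xs.append(state[0])
print(min(xs), max(xs))  # x swings between negative and positive values
```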
I've enjoyed the OP and the thoughtful discussion of documentation in the comments. I think less about "to document or not to document" and more about "what documentation is worthwhile documentation?"
I agree with the OP that documentation on "how" things work does decay over time and is better served by expressive, detailed tests and code.
That said, architectural decision records (ADRs) capture more of the "why" behind things, and I would argue they maintain their value as time goes on. It reminds me of Chesterton's fence; I feel much more liberty and power to refactor/remove things when I know "why" the code is there, not just "how" the code works.
All of which is to say, documentation has its place; the trick is figuring out where that place is.
While recognizing all the concerns you mention about documentation, I have to object, at least somewhat, to the suggestion that "reading out-of-date documentation is at least a waste of time." I have found repeatedly that software systems, at least at key interfaces, do *not* in fact change that rapidly in their "gestalt" approach to the problem or in their conceptual design. Is there continuous change? Yes, of course.
But I found that I would regularly get value from critically and cautiously reading older docs, because I almost always found they gave me valuable insights into the *ideas* in the developers' minds at the time the software was being written: what was important, what the key concepts were, and so on. In short, clues to Naur's concept of _the theory of the program_. And the structures and concepts from these clues would quite often still be found in the existing source code, even though it had changed substantially.
In part, I've always had a somewhat "historical" view of understanding a program: I often like to find out how the software has evolved over time, and what the earlier ideas were. To this end, for example, I place a significant value on the presence of a tidy version control history (although a messy one will do if necessary), in order to see that evolution over time.
None of this is to say that good team communication, tidy code, and a collaborative environment aren't important; of course they are, and I'm happy to feel that I've mostly been able to work on teams that cared about those things.
Over the years I've seen so many examples of this. My analogy is that of a swinging pendulum that needs to find the right equilibrium for the organisation, group, or team, and it normally takes several swings to settle, unless there is a big enough change to destabilise the system and set the pendulum off again.