Kent, you solved this one last century in your Best Smalltalk Pattern Practices book. Apologies if I mangled the title. One function per intention. It puts the spotlight on the real question: the granularity of intentions. Or, in L. Peter Deutsch's language, the granularity of the invariants in that part of the program.
I still stand by that advice. People seem to want a number, so giving them a distribution instead seems like a step in the right direction.
https://www.amazon.com/Smalltalk-Best-Practice-Patterns-Kent/dp/013476904X
Posting the link here because anyone who reads Kent should read that. I haven't worked in Smalltalk in 20+ years (sad but true), but the meat of that book still applies - it played a big part in shaping who I am as a programmer. IMO, it's just as valuable as the GoF's Design Patterns, Fowler's Refactoring, or any other programming book.
Thanks!
Lots of great advice in that book. I had students and employees read it.
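For anyone who hasn't read the book, here is a rough sketch of the "one function per intention" idea (Beck's Composed Method pattern), in Python rather than Smalltalk; the order-processing example and every name in it are invented purely for illustration:

```python
# After applying "one function per intention" (Composed Method), the
# top-level function reads as a sequence of intentions, each with its
# own small, named helper.

def process_order(order):
    validate(order)
    total = price(order)
    notify_customer(order, total)
    return total

def validate(order):
    # One intention: reject orders we cannot fulfil.
    if not order["items"]:
        raise ValueError("order has no items")

def price(order):
    # One intention: compute what the order costs.
    return sum(item["unit_price"] * item["quantity"] for item in order["items"])

def notify_customer(order, total):
    # One intention: tell the customer what happened (stubbed here).
    print(f"Order for {order['customer']} totals {total}")

# Example: a tiny order with one line item.
print(process_order({"customer": "Ada", "items": [{"unit_price": 2.5, "quantity": 4}]}))
```

The point is that the granularity question moves from "how many lines?" to "does each function express exactly one intention?"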
I wonder how function length compares to the "popularity" of the function at runtime. In my experience, those 74-line monsters are doing a lot of work and often sit on the critical path. In fact, it's often the complexity itself that defies refactoring into better, more succinct steps.
It'd be fun to plot complexity vs run-time cost & see if you get a correlation.
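A minimal sketch of that experiment in Python, assuming you have already exported per-function complexity and per-function run-time cost to two CSV files; the file and column names here are hypothetical:

```python
import pandas as pd
from scipy.stats import spearmanr
import matplotlib.pyplot as plt

# Hypothetical inputs: complexity.csv has columns (function, cyclomatic),
# profile.csv has columns (function, cumulative_seconds).
complexity = pd.read_csv("complexity.csv")
profile = pd.read_csv("profile.csv")

merged = complexity.merge(profile, on="function")

# Rank correlation is more forgiving than Pearson for heavy-tailed data
# like run-time cost and complexity.
rho, p = spearmanr(merged["cyclomatic"], merged["cumulative_seconds"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

plt.scatter(merged["cyclomatic"], merged["cumulative_seconds"], alpha=0.5)
plt.xscale("log")
plt.yscale("log")
plt.xlabel("cyclomatic complexity")
plt.ylabel("run-time cost (s)")
plt.show()
```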
At least in the OO languages I know (Java and C#), getters, setters, and constructors (short, if you're "doing it right") dominate the short-function population. I wonder if/how excluding those from the analysis would change the results.
Maybe it doesn't matter...hmm...
Try it! The cool thing about scale-free networks is they don't seem to care.
Which just made me realize that running this analysis on a code base that uses something like Lombok might skew the results.
Try it!
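One quick way to try it: filter the extracted function records with a crude name-and-length heuristic before fitting the distribution. A Python sketch, with made-up data and an admittedly rough notion of what counts as an accessor or trivial constructor:

```python
import re

# Hypothetical input: (function_name, line_count) pairs that a separate
# extraction step has already produced for the code base.
functions = [
    ("getName", 3), ("setName", 3), ("isActive", 3),
    ("Customer", 5), ("processOrder", 74), ("renderReport", 41),
]

# Crude heuristic: drop short accessors and trivial constructors.
ACCESSOR = re.compile(r"^(get|set|is)[A-Z]")

def is_boilerplate(name, lines, max_lines=5):
    looks_like_accessor = bool(ACCESSOR.match(name))
    looks_like_ctor = name[:1].isupper()  # Java/C# convention: ctor shares the class name
    return lines <= max_lines and (looks_like_accessor or looks_like_ctor)

kept = [(n, l) for n, l in functions if not is_boilerplate(n, l)]
print(kept)  # -> [('processOrder', 74), ('renderReport', 41)]
```

Note that a source-level scan would not see Lombok-generated accessors at all, which is exactly the kind of skew worth checking for.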
Posts like this make me think I really ought to learn statistics one of these days :)
If long functions attract the most changes, programmers' experience of the codebase will be dominated by their experience with long functions. So what should we do to keep long functions habitable? It seems to me that "long" doesn't necessarily mean "mind-bogglingly complex," though the two correlate in my experience.
Or am I missing the point? Maybe increasing alpha is the best we can do?
I believe that increasing alpha is indeed the best we can do. My hypothesis is that anything we do to distort the distribution will squeeze the complexity in some random, more-damaging direction.
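For anyone who wants to put a number on alpha for their own code base, here is a small Python sketch using the standard maximum-likelihood estimator for a discrete power law (Clauset, Shalizi & Newman, 2009); the sample of function lengths and the choice of xmin are made up:

```python
import math

def estimate_alpha(lengths, xmin=3):
    """MLE for the exponent of a discrete power law:
    alpha ~= 1 + n / sum(ln(x / (xmin - 0.5))) over the tail x >= xmin."""
    tail = [x for x in lengths if x >= xmin]
    n = len(tail)
    return 1 + n / sum(math.log(x / (xmin - 0.5)) for x in tail)

# Hypothetical sample of function lengths (lines per function).
lengths = [3, 3, 4, 4, 5, 5, 6, 7, 8, 10, 12, 15, 21, 34, 74]
print(f"alpha ~= {estimate_alpha(lengths):.2f}")
```

A larger alpha means the tail of long functions thins out faster; in practice you would also want to estimate xmin rather than fixing it, which libraries such as the `powerlaw` package can do.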
Uncle Bob commented in episode 3 of Clean Coders (https://cleancoders.com/episode/clean-code-episode-3) that there is a direct correlation between how well tested the code is and the length of the functions.
Do you have a summary, or a pointer you'd recommend, on the documented effects of TDD (such as Braithwaite's study)?