15 Comments
David Ungar:

Kent, you solved this one last century in your Best Smalltalk Pattern Practices book. Apologies if I mangled the title. One function per intention. It puts the spotlight on the real question: the granularity of intentions. Or, in L. Peter Deutsch's language, the granularity of the invariants in that part of the program.

Kent Beck:

I still stand by that advice. People seem to want a number, so giving them a distribution instead seems like a step in the right direction.

Eric Rizzo:

https://www.amazon.com/Smalltalk-Best-Practice-Patterns-Kent/dp/013476904X

Posting the link here because anyone who reads Kent should read that. I haven't worked in Smalltalk in 20+ years (sad but true) but the meat of that book still applies - it played a big part in shaping who I am as a programmer. IMO, it's just as valuable as GoF Patterns, Fowler's Refactoring, or any other programming book.

David Ungar:

Thanks!

David Ungar:

Lots of great advice in that book. I had students and employees read it.

Wyatt Barnett:

I wonder how function length compares to "popularity" of the function at runtime. My experience is that those 74-line monsters are doing a lot of work and are often crucial to the critical path. In fact, the complexity is often exactly what defies refactoring into better, more succinct steps.

Kent Beck:

It'd be fun to plot complexity vs run-time cost & see if you get a correlation.
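That experiment can be sketched in a few lines of Python. The per-function lengths and call counts below are invented for illustration; in a real run they would come from a static analyzer and a profiler such as cProfile.

```python
# Toy experiment: correlate function length (in lines) with run-time
# "popularity" (call counts). The data below is hypothetical; in practice
# you would collect lengths from a parser and call counts from a profiler.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, stdlib only."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# (length in lines, calls per run) -- made-up sample points
functions = [(3, 120), (5, 90), (8, 300), (12, 450), (20, 800), (74, 2100)]
lengths = [f[0] for f in functions]
calls = [f[1] for f in functions]

r = pearson(lengths, calls)
print(f"correlation between length and call count: {r:.2f}")
```

With real data, a rank correlation (e.g. Spearman) would be more robust to the heavy-tailed lengths than Pearson, since a single giant function can dominate the fit.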

Eric Rizzo:

At least in OO languages (Java and C# are my experience), getters, setters, and (short if you're "doing it right") constructors dominate the short-function populace. I wonder if/how excluding those from the analysis would change the results.

Maybe it doesn't matter...hmm...

Kent Beck:

Try it! The cool thing about scale-free networks is they don't seem to care.

Eric Rizzo:

Which just made me realize that running this analysis on a code base that uses something like Lombok might skew the results.

Kent Beck:

Try it!

Ben Christel:

Posts like this make me think I really ought to learn statistics one of these days :)

If long functions attract the most changes, programmers' experience of the codebase will be dominated by their experience with long functions. So what should we do to keep long functions habitable? It seems to me that "long" doesn't necessarily mean "mind-bogglingly complex," though the two correlate in my experience.

Or am I missing the point? Maybe increasing alpha is the best we can do?

Kent Beck:

I believe that increasing alpha is indeed the best we can do. My hypothesis is that anything we do to distort the distribution will squeeze the complexity in some random, more-damaging direction.
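For concreteness, alpha here is the exponent of the power-law tail of the function-length distribution, and it can be estimated from a codebase's lengths with the standard continuous maximum-likelihood formula. This is a sketch: the `xmin` cutoff and the synthetic sample are assumptions standing in for real parsed data.

```python
import math
import random

def estimate_alpha(lengths, xmin):
    """Continuous power-law MLE:
    alpha = 1 + n / sum(ln(x / xmin)) over the samples x >= xmin."""
    tail = [x for x in lengths if x >= xmin]
    return 1 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic check: draw from a power law with a known exponent via
# inverse-transform sampling, then recover the exponent.
random.seed(0)
true_alpha, xmin = 2.5, 3.0
sample = [xmin * (1 - random.random()) ** (-1 / (true_alpha - 1))
          for _ in range(10_000)]

print(f"estimated alpha: {estimate_alpha(sample, xmin):.2f}")
```

Run on two codebases, the one with the larger estimate has proportionally fewer of the long-function monsters discussed above.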

Michael Charland:

Uncle Bob commented in episode 3 of Clean Coders (https://cleancoders.com/episode/clean-code-episode-3) that there is a direct correlation between how well tested code is and the length of its functions.

Josselin Perrus:

Would you have a summary, with pointers, that you'd recommend reading on the documented effects of TDD (such as Braithwaite's study)?
