I LOVE this feature of modern IDEs, and use it daily (though not often for TDD). However, I have to harken back to the early days when I first encountered it - with Smalltalk. The VisualWorks IDE was so far ahead of its time (with this and many other powerful features), I had a hard time describing it to my grad school classmates and then later, coworkers. It truly was a massive paradigm shift (yes, I feel dirty for using that phrase).
I never used VisualAge Smalltalk, but did cut my Java teeth on the equally great VisualAge for Java, which, again, was so far beyond its competitors that it was hard to describe (ironically, VisualAge for Java, including its internal Java compiler, was written in Smalltalk) - show-and-tell was the only way to make someone understand.
To this day I'm still a hard-core Eclipse/STS user (VisualAge for Java was the predecessor to Eclipse), and find myself doing in-situ coding every day. Every now and then I still run into colleagues who use Eclipse but don't know how to take advantage of in-situ coding, as you describe it.
Time to record some videos?
I'm a Clojure guy, and REPL-driven development is the number one feature for me. It is a very addictive experience to communicate with the runtime in real time - shortening the feedback loop to milliseconds. The functional and dynamic nature of Clojure makes code reloading predictable and manageable. Debugger coding is just a pale shadow of REPL-driven development. Here is a screencast I always recommend as a demonstration - https://www.parens-of-the-dead.com/
There are special tricks for doing both TDD and RDD at the same time, multiplying the power of both. And yeah, it is very natural to do it in a pair!
Thanks so much for sharing!
When I teach OO development these days, we do start with test-first. See the red squiggles? Click on them. Continue clicking on red squiggles until they are gone. Run the test. I call it Speculative Coding. Write what you want it to do, then make it happen. It's a natural outgrowth of TDD, which is the start of the speculation. It's a tough sell to attendees for some reason. They may write the test, but then dive into trying to create a solution that doesn't start with the red squiggles in the test!
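A tiny, made-up example of what the exercise looks like (JUnit 5 assumed; the Account class and its methods are invented and don't exist when the test is written, so every reference is a red squiggle until the quick-fixes generate the stubs):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class AccountTest {
    @Test
    void depositIncreasesBalance() {
        // Account, deposit() and balance() don't exist yet: each red squiggle
        // offers a quick-fix that creates the class or generates a method stub.
        Account account = new Account();
        account.deposit(100);
        assertEquals(100, account.balance());
    }
}
```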
There’s a masochistic heroism tendency in programming. Don’t know exactly how to encourage folks away from it, except through repeated examples.
I wonder what exactly you mean by
"... and you will quickly be able to code in the debugger with all the details of the test fixture visible."
and
"They can try little snippets of code using real data, accelerating the feedback loop for coding. Once the code is satisfactory, they can continue execution (F8) and immediately see if the code matched the test."
One interpretation would be that - when the JVM stops on the exception being thrown and I can see fields, parameters and the call stack - I could attempt to write an implementation of the method that passes the test, hit F8 and then see the test succeed. But I'm not sure that is easily achievable on the JVM. There are some solutions that improve hot-swapping, but when I tested them I usually found them not very reliable.
Or do you mean that you can, for example, evaluate some expressions, change the values of parameters, try out an implementation and then re-run the test?
Thanks for this technique, I will try it for sure. IntelliJ IDEA allows you to do the same, using the same method (modify the template for a new method, set up an exception breakpoint).
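For anyone else setting this up: the generated stub would look roughly like the sketch below (the method name, exception type and message are just my assumption of how one would configure the template), so an exception breakpoint on UnsupportedOperationException stops right inside the fresh method, with the test's arguments and fixture one stack frame up.

```java
// Hypothetical stub, assuming the new-method template is changed to throw:
// the exception breakpoint on UnsupportedOperationException suspends here.
int discounted(int price) {
    throw new UnsupportedOperationException("not implemented yet");
}
```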
Edit and continue. https://tidyfirst.substack.com/p/coding-in-the-debugger
I haven't used Eclipse in ages, so I downloaded it and now I understand. While debugging, when you change the method and hit Ctrl+S, Eclipse recompiles the code and then stops at the first line of the method (by dropping the stack frame and re-running up to the first line, I suspect). When you hit F8 / Resume you get the exception again and you can inspect the method parameters and variables again. Neat.
Obviously you can hit various JVM limitations on code hot-swapping, but as long as you are only changing the method body it should be fine. You can tidy your code once you get the test passing.
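To make that loop concrete, here is a minimal sketch of what I tried (class and method names are made up; JUnit 5 and an exception breakpoint on UnsupportedOperationException assumed):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {
    @Test
    void appliesTenPercentDiscount() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90, calculator.discounted(100));
    }
}

class PriceCalculator {
    int discounted(int price) {
        // 1. Run the test under the debugger; the exception breakpoint
        //    suspends here with `price` visible in the variables view.
        // 2. Replace this line with a real body, e.g. `return price * 90 / 100;`
        // 3. Ctrl+S recompiles and hot-swaps the method body; the frame is dropped.
        // 4. F8 resumes, and the test tells you whether you got it right.
        throw new UnsupportedOperationException();
    }
}
```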
I use IntelliJ IDEA - to some degree it is achievable there as well, although (much) less user-friendly. An idea for a plugin, I suppose.
I don't know why this isn't the expected behavior for all IDEs, except that it is hard to implement.
Quite an old article :-) But it still makes a lot of sense! Especially as unit tests tend to go deep these days.
If I understand correctly, when the debugger hits a breakpoint, developers can see the real-time values of parameters and variables, providing immediate context. This clarity enables them to modify the code on the spot, and after a recompilation, they can continue execution. The outcome of these changes is then validated by the test's assertions.
I still have a lingering question: While the pre-condition becomes crystal clear upon hitting the breakpoint, how can I ascertain the expected output within this specific code scope? I recognize that test assertions will ultimately offer feedback, but their relationship to the code might be as indirect as the initial pre-condition—otherwise, why would it be so challenging to recall?
I can't recall that ever being an issue, but I see what you're saying. Generally by the time I hit the breakpoint I know what I want to return. It's one stack frame away if I forget. And if I get it wrong, the test will gently remind me.
I see. I can imagine how it works. I guess discipline is very important here (as much as in any other flavour of TDD): if the solution doesn't make the test happy, one should try to REPLACE it with a different solution rather than ADD one.
I remember doing this nearly 25 years ago when I learned that Visual Age for Java made this possible. I liked the feeling of it, although I admit I haven't felt the urge to do it since. I wonder whether it became one of those habit-building exercises for me that I just never really wanted again. Even so, it more than did its job of helping me cultivate a focus on one fixture at a time.
Now I wonder what would happen if I tried it again....
Only one way to find out for sure. Would also make an interesting video/stream.
Hmm. I've never been super comfortable, when using the debugger, that Everything is Fine; part of it probably comes from having "grown up" in programming environments that didn't provide strong debugger support. But I think part of it also comes from a feeling that I won't always have the debugger when I want it; for example, stack traces from the field won't let me go back in time and step through the code. So I tend to want to first see if I can beef up the error messages to expose all the information I need to fix the test, and then resort to the debugger only as a fallback.
Having good debug information is helpful, but that doesn't address the "have to have a mental model of all the available data" problem.
The scenario is perfectly natural in Smalltalk. It's a pity that the resources to do it aren't standard in all environments. Not sure why we shackle ourselves this way.
> Early dynamic programming environments like LISP
As a modern-day Common Lisper, this is not just supported, but is exactly how we build things. The CL condition system goes hand in hand with this, and is a critical part of the workflow that Java misses. (E.g., how do you "recover" from an error? Being able to define recovery points is a huge part of giving me confidence that I can invest time and energy in a run with some custom data that probably took me time to construct.)
(I even built an interactive test-running tool for Emacs to better support this workflow: https://github.com/tdrhq/slite. If a test fails, it first shows up in red; press 'r' and it re-fails in the debugger with the full stack trace and all the data leading up to the failure/error.)
I have implemented similar functionality in Eclipse. Fewest keystrokes from “test broken” to “here’s where to fix it”. One refinement—unexpected exceptions stop where the exception is thrown, assertion failures stop at the beginning of the test.
I wonder if it's possible to do REPL-driven development with Visual Studio Code in any of the languages?
Try it!
Did you ever work with the OS2 Debugger? Hated the OS, but that integrated debugger was fantastic and helped me track down nasty EnvyDeveloper defects.
I didn’t. Sounds great!