Friday Facts #366 - The only way to go fast, is to go well!
Re: Friday Facts #366 - The only way to go fast, is to go well!
[Moderated by Koub : Off topic]
- Deadlock989
- Smart Inserter
- Posts: 2529
- Joined: Fri Nov 06, 2015 7:41 pm
Re: Friday Facts #366 - The only way to go fast, is to go well!
This is kind of what made me quit the software industry. I've used these practices in my hobby programming projects for years and knew how well they worked. Then, when working at a company, I was forced to abandon them. Whenever I mentioned them, I got accused of lacking experience or of not understanding how IT businesses work, or whatever. It really eats away at you. I started to question myself, which eventually led to burnout. I tried a few other companies, but ran into the same issues.
Ironically, I had participated in an internal programming competition at that company, which I singlehandedly won. A colleague from a different team mentioned that she wondered how I could write the code for it so fast. I couldn't explain it back then. I wonder whether she'd understand if I would explain it now.
Re: Friday Facts #366 - The only way to go fast, is to go well!
kovarex wrote: For this, I implemented a simple test dependency system. The tests are executed and listed in a way that when you get to debug and check a test, you know that all of its dependencies are already working correctly. I tried to search if others use the dependencies as well, and how they do it, and I surprisingly didn't find anything.

I'm having a hard time understanding the concept of "test dependency" - is it an actual "hard" dependency inside the code (like a list of tests that must pass before the test is evaluated, some sort of inheritance/interface), a docstring with a list of tests, a library that manages it for you, or what? If you could provide an example, it would be great.
Other than that - it was an insightful post, thanks!
Leading Hebrew translator of Factorio.
Re: Friday Facts #366 - The only way to go fast, is to go well!
Dev-iL wrote: ↑Fri Jun 25, 2021 7:28 am
I'm having a hard time understanding the concept of "test dependency" - is it an actual "hard" dependency inside the code (like a list of tests that must pass before the test is evaluated, some sort of inheritance/interface), a docstring with a list of tests, a library that manages it for you, or what? If you could provide an example it would be great

I can understand this, because the idea of having test dependencies was also not obvious to me. If any test is failing, it does not matter which tests ran before it. Test dependencies are a helper for kovarex's workflow: when tests start to fail during refactoring, he does not want complex tests thrown at him while much simpler tests are also failing, since the simpler ones give much better insight into what broke. Trying to debug why 2 electric poles belong to different electric networks is a lot harder; knowing that a much simpler test failed - say, "an electric pole is not buildable" - speeds up understanding what became broken, by not forcing you to look into the slightly more complex tests first.
In that case I prefer to go the green path when refactoring - if at any point in time tests start to fail, I am likely to revert my local changes to the last known good state, unless I am 100% sure my changes introduced only acceptable behavioral changes and the tests were fragile.
Re: Friday Facts #366 - The only way to go fast, is to go well!
This discussion reminded me of this old discussion:
viewtopic.php?p=13750#p13750
That post in conjunction with this current FFF gives deep insights in how Factorio was/is developed.
bormand wrote: ↑Mon Jun 21, 2021 11:23 am
As far as I can see from @kovarex article, they still have a nice separation and layering. They just don't want to go for "true" unit testing due to overhead, probably. In Java or TS indirections are basically free (well, you just can't get rid of them), so you get attachment points for your mocks for free. In C++ "true" unit testing is always a trade-off between performance and readability, detaching upper layers from lower layers to swap them for the mocks isn't free.

Interesting, I didn't know that. Possibly that is the reason why the tests are as they are? I cannot know, and until proven wrong I assume that C++ allows compiling twice: once with indirections for the tests and once for the game/integration tests.
But for me that’s not the point, I try to point it out a bit better with an example:
Lately a colleague wrote unit tests for a module he had written. I reviewed them, and what he did was this: he made a single test method with two nested loops around it, to iterate through all possible states that test could cover. The result was only about 10 lines of code, which in truth performed, I think, 20 tests or so. But it was quite hard to understand what it tested. So, besides the two parameters for the test, he introduced a third parameter for a comment describing what the test tests.
I said this isn't how tests are written. Good tests are a lot of code with little logic, and can be read easily. Indeed, good tests are written much more for the reader than for the test itself; the test should come out green, of course, but can I understand what it tests? Of course, test iterations are OK for testing a very small aspect of a unit. But in that case it makes no sense to abbreviate. Write the cases out. Make it more readable; the effort is not as high as you think.
The discussion ended with a compromise: we reduced the test iteration to one loop instead of two nested ones and made several such tests.
So, what has that to do with this? I'm not 100% sure, but to me kovarex sounded like my colleague: why should I write mocks for the return values of the search? It's already there, I just need to call it, and if I do it like that, I also test the search function.
Writing good tests is sometimes hard, boring, repetitive, stupid work. And you produce tons of stupid code. But that code keeps your ass out of the shit when you make that simple stupid error, and spares you a day of work searching for the bug. And it helps not only others to understand what the code does, but also yourself: you use the code. Countless times I wrote tests (yes, I know I should write the tests first, but you know, bad habits), and then found myself rewriting the interface because it was too clumsy.
boskid wrote: ↑Fri Jun 25, 2021 9:05 am
Trying to debug why 2 electric poles belong to different electric networks is a lot harder, but knowing that a lot easier tests failed: let it be electric pole is not buildable - speeds the process of understanding what became broken by not forcing into looking into slightly more complex tests.

That points again to this: that sounds like integration tests, not unit tests. And if there were proper unit tests, there should be no real need for a test dependency, because that is the dependency: first the unit tests, then the integration tests. Surely it would be nice, but the underlying problem that the test dependency masks is that there seems to be only one big integration test.
Again, I cannot know the real reasons for doing it like this. It would be simple to say "this is bad, don't do it", but in reality things are much more complex. On the other hand, I can draw conclusions from the information I have.
Cool suggestion: Eatable MOUSE-pointers.
Have you used the Advanced Search today?
Need help, question? FAQ - Wiki - Forum help
I still like small signatures...
Re: Friday Facts #366 - The only way to go fast, is to go well!
bormand wrote: ↑Mon Jun 21, 2021 11:23 am
As far as I can see from @kovarex article, they still have a nice separation and layering. They just don't want to go for "true" unit testing due to overhead, probably. In Java or TS indirections are basically free (well, you just can't get rid of them), so you get attachment points for your mocks for free. In C++ "true" unit testing is always a trade-off between performance and readability, detaching upper layers from lower layers to swap them for the mocks isn't free.

https://blog.knatten.org/2012/04/13/zer ... t-virtual/
It's not as bad as you make it out to be. I do this several times a day in my work.
- BlueTemplar
- Smart Inserter
- Posts: 2421
- Joined: Fri Jun 08, 2018 2:16 pm
- Contact:
Re: Friday Facts #366 - The only way to go fast, is to go well!
Thanks!

NotRexButCaesar wrote: ↑Fri Jun 18, 2021 10:32 pm
Here is the wikipedia article: https://en.wikipedia.org/wiki/Functiona ... rogramming.

BlueTemplar wrote: ↑Sat Jun 19, 2021 11:00 am
But they *already* seem to be using asynchronous signals and events, so why does Luxalpa think that they aren't using FRP? ("Too many" classes, "not enough" functions?) And is C++ even the right language to do FRP in?

TOGoS wrote: ↑Thu Jun 24, 2021 4:20 pm
The game generally, yes, sort of, but not in the UI code. UI code, especially, as kovarex noted, forms with lots of interactive elements (and I have personal experience with the map preview generator, so I know exactly what he's talking about), tends to become exponentially (or maybe just quadratically) more complicated with each new element you add. The reason is that when you add a new interactive element, you need to add logic for what happens when the user changes the value (which might change the state of all the /other/ elements on the form!), and also logic to update it for every other change that might happen.
I don't have experience with FRP, but my understanding of the concept is that it simplifies things by having each interactive element, rather than procedurally updating the rest of the form, just update the backing data model; the state of the entire form is then regenerated *by one piece of code* based on that data model. So instead of hundreds of ad-hoc update-Y-and-Z-because-X-changed functions, you have one update-the-data-model-because-X-changed function for each input X (whether that's an interactive element on the form, or something external that might change the data), and one rebuild-the-form-from-the-data function.
It sounds like using lambdas instead of methods for all those helped a bit. I suspect that FRP would be a more fundamental restructuring and probably make things even simpler.
I think React has some built-in features to make that business of rebuilding things efficient. Not sure how easy that would be to adapt to C++.
We have the exact same problem with e.g. the user profile update GUI at the treadmill company where I work now. I hope to use an FRP approach to refactor some of it sometime and then hopefully I'll actually know what I'm talking about.
(I mostly have experience with PyQt.)
I'll look into it the next time I have to make a somewhat complex GUI!
BobDiggity (mod-scenario-pack)
Re: Friday Facts #366 - The only way to go fast, is to go well!
ssilk wrote: ↑Sat Jun 26, 2021 6:02 am
[...] Interesting, I didn't know that. Possibly that is the reason why the tests are as they are? I cannot know and until proven wrong I assume that c++ allows compiling twice: once with indirections for the tests and once for the game/integration tests.

I tried to come up with a simple illustration. However, because it is simple, the compiler can see through the abstractions in the alternative solutions and remove most of their runtime cost. But since I've written it, I might as well link it anyway:
https://godbolt.org/z/feesqzz6c - three versions of basically the same code:
#1 "Normal" C++, what you'd do unless your design forces one of the other options.
#2 The Java way, explicitly extracting the interface and using inheritance/polymorphism
#3 Templates, or what happens implicitly in dynamically typed languages, like JS (Edit: This is what ptx0's link suggests)
Note that the mock version of the test in #1 doesn't compile.
Also note that in this simple illustration the compiler can see through all of it, reducing the extra cost of #2 to a single cheap instruction (putting a constant on the stack); in the general case you'd expect a memory load every time a virtual function is called. That the illustration is simple is further shown by the mock versions of the tests in #2 and #3 being optimized away completely.
If you choose option #1 above, which is the most sensible one unless design forces something else, you don't have the option of sneaking in mocks; thus your unit tests automatically become integration tests. Option #2 has a runtime cost and would slow down the game. Option #3 has the potential to become unreadable, in addition to increasing compile time when tests are included (in C++, templates are recompiled for each distinct instantiation) and breaking IDE features if they can't see the instantiation of the template, which makes code navigation harder when you can't resolve the destination of a function call in your head. Also, templates will happily take any class for their parameters. If the parameter's interface doesn't line up with the expectations of the using class, cue lengthy compile errors (C++20's concepts can help with this, and double as documentation of the expected interface). If the interface does line up with the expectations but it is still the wrong class (concepts won't help here), you have errors your unit tests won't catch (since you supply the correct mocks there). Your integration tests should, however, fail.

ssilk wrote: ↑Sat Jun 26, 2021 6:02 am
But for me that's not the point, I try to point it out a bit better with an example: [...] That points again to this: that's sounds like integration tests, not unit tests. And if there are proper unit tests, there should be no real need for a test dependency. Because that is the dependency: first the unit tests, then integration tests. [...] Again, I cannot know the real reasons why doing it like so. It would be simple to say "this is bad, don't do it". But things are in reality much more complex. On the other hand I can conclude from the information I have.
Regarding test randomization, one way of satisfying both camps could be to run the tests in random order but display the results in dependency (or declaration) order. Whether any test frameworks are capable of this, I don't know. (Automated tests at my workplace are quite lacking, sadly.)
Re: Friday Facts #366 - The only way to go fast, is to go well!
@kovarex hope you read this:
TDD Gurus:
I would also recommend you check out Kent Beck (the father of TDD) and Martin Fowler, in particular his book Refactoring, as there you can find predefined refactoring techniques that will help a lot. Just reading your post about the logic of manual building being in one big method, I thought that applying a few of the techniques in that book would let you split it in no time and create the proper abstractions to reuse the logic in all the different scenarios you have.
Ports and Adapters (Hexagonal):
I would also recommend you check out the Ports and Adapters architecture, AKA Hexagonal Architecture. After that you can look at the Onion or Clean Architecture from Uncle Bob too, but Ports and Adapters in particular is, I believe, key for a game. The whole idea is that you reach a point where you just write logic (game logic, business logic, journey logic, whatever) completely isolated from anything, easily testable and perfect for the TDD cycle, using just the language (maybe some math lib, but that would be provided through a port). Then you connect it through an adapter to the upper layers; whenever something needs to be connected or reused, you write another adapter, no problem.
Courses:
I would also recommend, when "normality" comes back and you can all be together in the office or a common space, hiring a guru on these topics to give you good foundations. It will speed up a lot how much knowledge you can gain. Here in Barcelona, where I live, there is a small company focused on this called Codesai; I learned tons with these guys. Maybe you can reach them, or find something similar there in Czechia.

Why courses? I have always been self-taught, but TDD in particular is so counter-intuitive at first that it helps a lot to have someone who can guide you and flatten the learning curve a bit. It is very easy to make decisions at the beginning that make tons of sense, only to realize 6 to 12 months later that you are in a big mess; what ends up happening is that you, the team, or your bosses force a return to normalcy because "this TDD mumbo-jumbo doesn't work, takes too much time, costs too much", etc. And worse, you will suffer the consequences too.
Test Dependencies (Classists vs Mockists):
As regards test dependencies and testing in isolation, Kent Beck and Martin Fowler explain it beautifully, but this is a contentious matter in the community between mockists and classicists. A mockist will tell you that you should test your class or unit in isolation and that all its collaborators must be mocks; classicists (Kent Beck again, the father of TDD) will tell you otherwise. Kent and Martin define a "unit" as a behaviour, versus the mockist view of a unit as a class or function (depending on which is your first-class citizen).

My experience is that the classicist approach in the long run makes you write fewer tests, spend less time maintaining them, and have fewer headaches in 6 months trying to understand why this class interacts with that one. It will let you change interfaces more easily and alert you quickly to an interface issue.
It will tell you quite fast when you broke something, BUT it is a double-edged sword: if your tests are not behavioural, and each test is not testing one single thing (you can have several assertions in one test, but all must be testing one single aspect or behaviour), you will end up making a change in one place and having thousands of tests fail for no apparent reason, and your test names might not help you understand why.
Careful with Coverage, Mutation tests:
Something else I recommend looking into is pit tests, or mutation tests. The idea behind them is to test the quality of your tests. Coverage is nice, but it is also a double-edged sword: pursuing 100% coverage can make you write really worthless tests that call a method or hit a condition just for the sake of coverage, and those tests have no behavioural value whatsoever. You then end up maintaining those tests, and they often prevent proper refactoring, because you are not sure whether they have value or not ("if there is a test, it must be there for something...").

Coming back to mutation: mutation tests run your test suite several times, mutating your CODE and checking whether your tests detect those mutations. If a mutation survives, it means your tests are not actually testing that bit as expected. You may well have coverage, even several times over, on those lines; that doesn't mean the logic is properly tested (a test without assertions will cover the lines, but won't test them).
Especially in the early stages of TDD I would strongly recommend this, along with finding a way to run those mutations focused on the newly created tests, on that commit or branch (if you happen to work in feature branches).
Running them as a nightly or full regression doesn't really make sense for what I imagine is a quite huge suite in your case, as each mutation needs the entire regression to run; imagine that on 500,000 lines of code.
A sensible mutator will not mutate every single line; the optimizations they apply are rather beyond my understanding. But they are a really neat tool to have around. I always use them in a more locally focused way, in a pre-commit check or a CI build checking just the additions in my branch.
HTH
Re: Friday Facts #366 - The only way to go fast, is to go well!
I don't see how this is an actual problem, so long as one uses:
https://news.ycombinator.com/item?id=17332776
ccache works at the translation unit level, caching object files. Zapcc caches (among other things) template instantiations between compilations.
If you are in a codebase that has some very complex header files with lots of (potentially nested) template instantiations, and then a large number of executables that all instantiate these templates, Zapcc can make a huge difference in compilation times. CCache doesn't really buy you anything in this scenario.
-
- Burner Inserter
- Posts: 15
- Joined: Thu Mar 01, 2018 7:45 am
- Contact:
Re: Friday Facts #366 - The only way to go fast, is to go well!
TOGoS wrote: ↑Thu Jun 24, 2021 4:20 pm
I don't have experience with FRP, but my understanding of the concept is that it simplifies things by having each interactive element, rather than having to procedurally update the rest of the form, just update the backing data model, and then the state of the entire form is regenerated *by one piece of code* based on that data model. So instead of hundreds of ad-hoc update-Y-and-Z-because-X-changed functions, you just have one update-data-model-because-X-changed function for each input X (whether that's an interactive element on the form, or something external that might change the data), and a rebuild-the-form-from-the-data function.

I'm actually using a C++ GUI library that can do exactly that! It makes things very simple.
specifically it's RmlUi using the Model-View-Controller features
Mods:
Thaumaturgic Machinations Research Fix
Circuit Pinger
Hazard Lights
Hazard Lights Selection Tool
Hazard Lights Auto Lights
Re: Friday Facts #366 - The only way to go fast, is to go well!
Of course. In fact, a fun game to play is "break it / fix it." Player one writes a test that fails. Player two makes the simplest possible change to the production code to make the new test pass. The purpose of making the simplest change is that it's quicker to do but often exposes more edge or corner cases for player one to exploit.
Re: Friday Facts #366 - The only way to go fast, is to go well!
Hi Kovarex,
I found that TDD completely changed how I approach coding. In a way, it made me paranoid. Making any change without tests in place feels like walking a tightrope without a net. If I write code without a test already in place, I feel like I have low confidence that what I wrote will work. I also have a tendency to feel unfocused if there's no test in place. What am I trying to accomplish with this code I'm writing, if it's not to make a test pass?
I also find it's frustrating that developers who haven't done hard core TDD don't understand and may even laugh at such thinking.
--John
Re: Friday Facts #366 - The only way to go fast, is to go well!
Hi Kovarex,
Consider the value of Red Green (https://en.wikipedia.org/wiki/The_Red_Green_Show)
This is an inside joke on my team that we use to refer to the problem that tests usually don't have tests. What happens when tests have bugs? First of all, they're not testing the production code effectively or correctly. Secondly, how do you find and fix bugs in tests? Bugs in tests can hide real bugs, give you false confidence or worse.
So one easy thing we try to always do is to make sure our tests go from Red to Green. First write a test. Run it and make sure that it fails (i.e. it's Red). Now work on the production code until the test passes (i.e. it's gone from Red to Green) and all other tests pass. I know it's only a one time indicator that the test works, but at least your changes to the production code changed the test result. Conversely, if the test result doesn't change, then you know something is fishy.
--John
Re: Friday Facts #366 - The only way to go fast, is to go well!
This only works if your expectations are correct. You could be testing the wrong thing, or creating side effects through other untested, less-tested, or improperly tested code paths.
Re: Friday Facts #366 - The only way to go fast, is to go well!
For what little it's worth, test dependencies and conditional (non-)execution of tests with failed dependencies are a standard feature of TestNG. Sure, that doesn't help you in the C++ world, but the concept is well explored. There's also a bunch of stuff for test grouping, etc.
Re: Friday Facts #366 - The only way to go fast, is to go well!
I think the great thing about Test Driven Development is that by deciding how a system will be tested before it's implemented, you are forced to form a much clearer picture of how the feature will work and interact with the rest of your system before writing the code. My programming experience mainly boils down to a hobby (I'm currently having a go at replicating Factorio in Java for learning purposes), but in so many cases I will get an idea for a "great" tool that will make development so much easier, only to find that it is useless in practice or detrimental to the design of the system.
Another philosophy I like is "write code to describe the comments, not the other way round". In this approach, when designing complex systems, you write out what each segment (system/class/function) will do before you write any code, starting from the general specifications of the program and filling in the gaps until you have comments that describe actual code. Once all of the comments are done, simply write the code implementing the commented instructions, and you will find that the chance of design errors (which can often lead to having to redesign the system) is almost entirely removed. Not to mention the implementation is really straightforward, you now have fully commented code, and you could even rewrite the whole program in another language if needed.
example:
Code: Select all
Game: This is a game where we build buildings on a grid to make things (dependencies discovered: grid world, buildings, crafting, transport, player)
Grid world: This is a world that our player can move around and place buildings in
Adding buildings:
...
Player Interaction:
...
World display:
...
Building: These craft items automatically and do things for the player
...
Item Transport:
...
Player:
...
This way you can often see dependencies and requirements long before implementing.
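The comment-first outline above translates naturally into code: write the comments for one function first, then fill each one in. This sketch is hypothetical (the grid representation and `place_building` are invented here, not taken from any real project):

```python
# Comment-first sketch: the comments below were written before any code,
# then each was filled in with an implementation.

def place_building(grid, building, x, y):
    """Place a building on the grid if the target cells are free."""
    # 1. Check that every cell the building would occupy is empty.
    for dx in range(building["width"]):
        for dy in range(building["height"]):
            if (x + dx, y + dy) in grid:
                return False  # occupied: placement rejected
    # 2. Mark all occupied cells as belonging to this building.
    for dx in range(building["width"]):
        for dy in range(building["height"]):
            grid[(x + dx, y + dy)] = building["name"]
    return True

grid = {}  # maps (x, y) -> building name
furnace = {"name": "furnace", "width": 2, "height": 2}
placed_first = place_building(grid, furnace, 0, 0)    # True
placed_overlap = place_building(grid, furnace, 1, 1)  # False: overlaps
```

Notice how the numbered comments existed as the design before a single line of logic did; the implementation is then just filling in each step.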
- Filter Inserter
- Posts: 292
- Joined: Mon Dec 07, 2015 10:45 pm
Re: Friday Facts #366 - The only way to go fast, is to go well!
One of the problems I have encountered, after having been a software engineer for about 25 years now, is that features and fixes are billable hours, while writing tests is not. 95% of the time I have to 'sneakily' add tests during development. Everything is a priority, everything has to be done yesterday. It's one rush job after another.