Friday Facts #366 - The only way to go fast, is to go well!
Re: Friday Facts #366 - The only way to go fast, is to go well!
I must say I'm honestly surprised that the dev team doesn't use Functional Reactive Programming for their UI code (and for things like the building code), as these complex state interactions and asynchronous actions are exactly the kind of use case this paradigm is for. I'm pretty sure that using it for your GUI code would make working with it a general piece of cake, and at least for UI code, it is provable that Reactive Programming is the most performant possible approach as well.
Re: Friday Facts #366 - The only way to go fast, is to go well!
I didn't particularly enjoy this one, but at least I know that you guys are still alive (despite COVID-19) and working on my favorite game.
Re: Friday Facts #366 - The only way to go fast, is to go well!
With a general-purpose compiler? I doubt it.
With a specialized compiler designed to fold those reactive patterns into ordinary loops? Probably...
Could you please give some links that support this claim?
- BlueTemplar
Re: Friday Facts #366 - The only way to go fast, is to go well!
So, what is FRP?
BobDiggity (mod-scenario-pack)
- NotRexButCaesar
Re: Friday Facts #366 - The only way to go fast, is to go well!
Here is the wikipedia article: https://en.wikipedia.org/wiki/Functiona ... rogramming.
Ⅲ—Crevez, chiens, si vous n'êtes pas contents!
Re: Friday Facts #366 - The only way to go fast, is to go well!
Great read! Love the bit about the shareholders not wanting us to waste time refactoring... that's exactly what's wrong at work: we're always rushing on to the next feature, never paying down the tech debt, and it's dissatisfying to see the same error pop up again and again, which we have to fix manually in the database again and again.
I have a (perhaps naive) question about GUI... it seems like every game ever made invents a GUI system from scratch that all do the same things... text box, button, check box, image/video/interactive tutorial pane, toolbar, hit points indicator, etc. To me this feels like millions of developer hours wasted because "that other person's wheel isn't quite round enough". What's the thought process behind building a GUI vs bringing in (and contributing to) an open source library?
PS: Uncle Bob is amazing! I've listened to a few podcasts of his, thanks for the youtube link - that will be good watching!
Re: Friday Facts #366 - The only way to go fast, is to go well!
Yeah... but I take his "ultra-factoring" concept with a grain of salt. Why?
To begin with, common languages like C++ or Java have a lot of visual noise. Sure, those three-line functions may read "like prose" in something like Haskell, but in C++ they just stomp the signal-to-noise ratio into the floor, making the code harder to read. I see a lot of curly braces, parentheses, argument types, spaces, commas... where is my code?
Secondly, it's extremely hard to factor complex functions correctly. And if you've read Robert's book, you know what I'm talking about. I had far more WTF moments reading his examples after refactoring: a bunch of three-line functions that toggle random mutable state here and there. WTF?! And when an algorithm is torn apart into three-line pieces, it's very hard to understand what invariants should hold for that mutable state.
The third thing is performance. Abstractions are cool, but they aren't free. Even in release mode they easily confuse the compiler, and in debug mode you pay the full price, so you can go drink tea if you want to debug anything larger than a unit test. I agree that we shouldn't need to debug clean code often. But what do I do when I have to? I use a lot of third-party libraries, after all.
So, I don't believe in the "extract methods until you can't anymore" mantra. Finding a good balance is still an art.
Re: Friday Facts #366 - The only way to go fast, is to go well!
I think that's true in most walks of life, but especially in software. 3000 lines of code in one file vs. 1000 files with 3 lines each... you need to find some happy medium. Developers often get swept up by an ideal and try applying it to every situation, even where it doesn't make sense. Ever tried using a NoSQL database for relational data because "oh, you have to use NoSQL, that's the way we do things round here"? Always try to pick the right tool for the job.
- freeafrica
Re: Friday Facts #366 - The only way to go fast, is to go well!
kovarex wrote: For this, I implemented a simple test dependency system. The tests are executed and listed in a way that when you get to debug and check a test, you know that all of its dependencies are already working correctly. I tried to search whether others use dependencies like this as well, and how they do it, and I surprisingly didn't find anything.
It's an interesting concept and I think I'll make use of it; however, one idea/piece of advice comes to my mind.
In a lot of cases, randomizing test execution order can shed light on bugs. With a fixed execution order, a previous test could have caused a side effect that makes an otherwise rightly failing test case pass, thus hiding a (testing/production) bug. Having to keep a randomized test run green also raises the robustness of your test suite.
My advice would be to keep this dependency-based execution order for development purposes, but also have an execution environment with randomized order. That environment should also output its test-execution order, so you can reproduce any issue it raises.
- BlueTemplar
Re: Friday Facts #366 - The only way to go fast, is to go well!
NotRexButCaesar wrote: ↑Fri Jun 18, 2021 10:32 pm Here is the wikipedia article: https://en.wikipedia.org/wiki/Functiona ... rogramming.
But they *already* seem to be using asynchronous signals and events, so why does Luxalpa think that they aren't using FRP? ("Too many" classes, "not enough" functions?)
And is C++ even the right language to do FRP in?
- BlueTemplar
Re: Friday Facts #366 - The only way to go fast, is to go well!
seltha wrote: ↑Sat Jun 19, 2021 12:52 am Great read! Love the bit about the shareholders not wanting us to waste time refactoring... that's exactly what's wrong at work, we're always rushing on to the next feature, never paying down the tech debt and it's dissatisfying seeing the same error popping up again and again which we have to manually fix in the database again and again.
I have a (perhaps naive) question about GUI... it seems like every game ever made invents a GUI system from scratch that all do the same things... text box, button, check box, image/video/interactive tutorial pane, toolbar, hit points indicator, etc. To me this feels like millions of developer hours wasted because "that other person's wheel isn't quite round enough". What's the thought process behind building a GUI vs bringing in (and contributing to) an open source library?
PS: Uncle Bob is amazing! I've listened to a few podcasts of his, thanks for the youtube link - that will be good watching!
AFAIK, before Wube made their own graphics engine, Factorio used an open-source library, Allegro, but being pretty old, it ended up limiting Factorio:
https://www.factorio.com/blog/post/fff-230
P.S.: Also, it's not "every game ever made"; isn't this one of the things that Unity helps with (with a big performance tradeoff)?
Re: Friday Facts #366 - The only way to go fast, is to go well!
This really confused me until I remembered operator precedence:
At first glance this looks like you're taking the address of the this pointer
Code: Select all
&this->resetPresetButton
Re: Friday Facts #366 - The only way to go fast, is to go well!
eradicator wrote: ↑Fri Jun 18, 2021 2:16 pm The dependencies aren't explicitly written down anywhere, they're just implicit in the order that tests are run.
Muche wrote: ↑Fri Jun 18, 2021 3:03 pm I've read that an advantage of randomized test order is that it could reveal hidden inter-test dependencies (e.g. a later test doesn't set up the test environment properly and "relies" on an earlier test to do it).
Most good unit test suites set up a new environment for each test.
Re: Friday Facts #366 - The only way to go fast, is to go well!
You should split LuaEntity.html up by entity type.
Re: Friday Facts #366 - The only way to go fast, is to go well!
Ah, another new disciple. Excellent :mrburns:
I suggest you check out Martin Fowler's books on refactoring (my key takeaway: if your tests fail unexpectedly when you refactor, your steps are too big) and Kent Beck's book on TDD (he demonstrates how to write a unit test framework in a TDD way, and plenty more; it's a great read). I'd also suggest looking into the practice of acceptance test driven development (ATDD), where you write end-to-end tests first and then break them down into more and more specific tests (e.g. by cutting out the GUI, cutting out all event handlers... basically you implement each component on its own, with the e2e tests making sure that once you're done, the feature works).
If I weren't so comfortable in my current job, I'd definitely consider applying for a job at Wube.
Re: Friday Facts #366 - The only way to go fast, is to go well!
"If the tests don't have any special structure, the situation when 100 tests all fail at the same time is very unfortunate, all you are left with is to try to pick some test semi-randomly, and start debugging it."
Assuming one is not doing a code redesign, having a large number of tests fail at once when making changes signals possible design flaws in the production code. It can signal that code with the same reason(s) to change is either scattered "all over" or sitting in a large class that is doing too much. Either one can result in a large number of tests failing. If that happens when I make a small change (and I try to keep each change as small as possible), I start looking for how I can move code around so that code with the same reason to change is decomposed in a cleaner way.
Re: Friday Facts #366 - The only way to go fast, is to go well!
"When you do small changes the yes/no indication of tests is enough, but it isn't always an option, especially when you refactor some internal structure, in which case you kind of expect to break a lot of stuff, and you need to have a way to fix them step by step once it can compile again."
*nods* This can happen during redesign, if the redesign isn't broken down into small enough refactoring steps. Someone else has already recommended Martin Fowler's Refactoring book, I second that recommendation: https://martinfowler.com/books/refactoring.html
One reason to isolate dependencies:
It is desired that unit tests be _fast_ and run frequently. My idealized pattern for TDD is:
- Run all the tests for the project to ensure I'm starting in a good place.
- Write a small specific new test for a class that requires the change.
- Run all the tests for the class being modified, implement a small change that passes the tests.
- Repeat until the series of tiny changes hits my threshold for a commit.
- Run ALL tests in the project to ensure that I haven't broken something somewhere else (which I won't if I succeed with my refactoring skills learned from Fowler, above).
- Commit the changes.
- Repeat this pattern until I've implemented the feature.
Ideally, when a subsystem has been isolated in the other tests, there's at least a small number of integration tests that ensure that the other subsystems interact with the slow subsystem correctly. Typically, such tests are grouped together and run less frequently -- perhaps before putting a new feature or bug fix into a test environment.
There are other reasons as well for isolating certain kinds of systems from most of our tests. Generally, it can lead to better decoupling between classes, classes with clearer intent, etc.
"Unit" is a deliberately vague term. It is super common for OO programmers to treat it as synonymous with "class", or for non-OO programmers to treat it as synonymous with "file", etc. That is, there's a tendency to target classes/files with unit tests. However, that isn't necessarily the best approach. If a series of classes comprises a subsystem, those classes will be tightly coupled with each other. It can be merited for the "unit under test" to be that subsystem.
A way to think about unit tests is that they should test through the "public interface" of a given subsystem. In this context, I'm using "public" to mean "public to the rest of the codebase". I've found the ideal is to test through the methods that the rest of the code will be using to access the unit-under-test's functionality.
This is also a valuable way to think of refactoring -- "how do I change the behavior of this unit WITHOUT changing its public interface?".
Last edited by Feaelin on Sat Jun 19, 2021 3:22 pm, edited 2 times in total.
Re: Friday Facts #366 - The only way to go fast, is to go well!
eradicator wrote: ↑Fri Jun 18, 2021 2:16 pm The dependencies aren't explicitly written down anywhere, they're just implicit in the order that tests are run.
Muche wrote: ↑Fri Jun 18, 2021 3:03 pm I've read that an advantage of randomized test order is that it could reveal hidden inter-test dependencies (e.g. a later test doesn't set the test environment properly and "relies" on an earlier test to do it).
eradicator wrote: ↑Fri Jun 18, 2021 3:12 pm Well, I guess you could split it into blocks? Ones that depend on each other and are tested in order, and ones that don't depend on each other and are tested shuffled? My amateurish self-made test environment doesn't really support randomization.
In addition to revealing that a given test depends on the setup of another test, random test order execution can also reveal implementation code that's retaining state when it shouldn't be. Tests that execute in a fixed order and have hidden interdependence will mask unintended stateful behavior.
If you're finding yourself wanting to "manage" which test suites are being run together, that may signal that the implementation code isn't decomposed optimally. I'd look for code that is related but is spread out and code that is together that isn't actually closely related. Moving related code together and separating out unrelated code will (over time) reduce the number of times that making changes results in many tests failing.
Re: Friday Facts #366 - The only way to go fast, is to go well!
Management should be setting timelines based on the engineer recommendations. Part of our responsibilities as engineers is to include _all_ of the effort in our estimation. The tests should be part of the engineer's estimate.
Also, once one has become practiced with TDD, it is often faster than programming without it. It reduces what I call "flailing", where one endlessly fiddles with the implementation to get it to work. It also shrinks the execution time of the "make a change, does it work?" cycle, because with the tests in place you already know that the change didn't break the other functionality, provided, of course, that the other tests cover that functionality.
We perceive TDD as slower, but it is very often faster, even at the micro level. It is definitely faster at the product level, due to the reduction in bugs going into the released version of the project.
While programming is about computers, management is about people.
Actually, programming is about people too. It is:
- People (engineers) trying to communicate with a machine (easy)
- Engineers trying to communicate with engineers in the future by writing understandable code (hard)
- Engineers trying to teach machines to communicate with the users of the software (hard)
And you need more disciplined programmers, because TDD is tedious:
- you're in fact doing double the work in one go (code & test) instead of the usual phases
- instead of "yay, I did it right", after a while it feels more like "well, I did it... again"
It is only tedious when you're new to it. Once one gets into the rhythm of it, it can actually be more gratifying than regular programming. With TDD, when the tests are targeted well, you get a hit of dopamine with each "green" test. Once you hit flow with TDD, it's intensely joyful. Like a lot of things, one won't get that feeling when first starting out.
That's why you need to find a solution to break the workload down and also give people a feeling of success. Maybe intermix 'productive' phases using TDD and 'explorative' phases to try things out. The resulting mockups could be used as blueprints for later test cases.
There's value in what we call a "spike" where I work. A "spike" is an explorative / R&D type effort to see whether something is even feasible. For a spike, someone will simply throw code at a problem until they've gotten a feel for the problem space and how to approach it. At that point, we either toss the code or start a fresh branch and implement a solution using TDD.
Something I've seen over and over again is that while the spike results in a working solution, invariably, when I redo it using TDD, I discover scenarios I didn't consider during the spike. This is because TDD creates a clean separation of thought about the possible scenarios. Often, you will identify cases that you wouldn't have otherwise.
- NotRexButCaesar
Re: Friday Facts #366 - The only way to go fast, is to go well!
Please edit the original post to add more content instead of creating multiple posts in a row.