@Deadly-Bagel
I have no response to that beyond what I have already said. I can't reply to "you haven't seen the code, so how do you know your technique works for them". All I am saying is that unless they process things in a purely random order, and rely on that uncontrollable random order, it should be straightforward to transform ONE aspect of the game to use multiple threads without adversely affecting the others. Unless you have some concrete evidence, or a specific "how do you handle this case" question, this "should"/"could" game is not helping the discussion.
hoho wrote:Problem with that is that Factorio multiplayer is built around only sending the server the actions of a player and the server simulates all those on it's own side. That means at each tick, the game has to complete with computing all those "areas" before it can begin with calculations for the next tick. In other words, the game can't begin with calculating the next tick before everything is complete on the current one.
The multiplayer aspect should not have any part in the update of an inserter. The rewrite is a client-server scenario, so a player's input is sent to the server and the server processes it essentially as if it were a local player's action. The processed action is then sent back to the clients, who update their game state appropriately. In fact, I would not touch the input processing at all. Why would you? The game spends less than 0.0000000000000001% of its time processing user input.
The suggestion is that all the actions in a frame get processed entirely before the next frame begins. It's just a question of how many threads are used to update all the inserters. At the moment every inserter is processed on a single thread. I am offering a technique that removes the multithreading headaches caused by "inserter A must be processed before inserter B".
hoho wrote:BenSeidel wrote:hoho wrote:@OP, you didn't address the need of perfect determinism in your response to criticism of your original post.
Hmm... because I don't see that there is any indeterminism. Your question about the two inserters removing things from the same chest was addressed in my first post, so I'm not sure what you are asking.
It did talk a little bit about it but I was specifically interested in the scenario I described:
fewer items in the chest than inserters try to pick out. To make things more fun, make it a provider chest and have bots try to take things from it as well.
Or, another alternative, instead of chest have a transport belt with one item running on it but several inserters trying to take it off. Also add those inserters and belt to wire network that connects to machines in half the factory.
I'd be interested to hear the logic you'd propose for grouping up such vaguely connected things to be dealt with by a single thread instead of breaking them up over several threads.
So, to address the case where there are fewer items in the chest than the two inserters are trying to pick up (btw, this is essentially the same as a pipe trying to fill the orders of two connected chemical factories). I'll use the most fine-grained solution, as it gives the most detail; there are other levels of "unit of work", such as the "merged" context I have talked about.
So, let's take it from the point where the two inserter arms are swinging around to the chest. I will say that both arms arrive on exactly the same frame, and that each will try to pick up 5 iron plates when there are only 3 in the chest.
So, in frame 1, each inserter is processed by its own thread. On each independent thread they:
Read the current inserter arm location, calculate the new one, and write it to the "output" buffered variable.
Calculate whether the next frame would put them over the chest (if you want it instantly), or alternatively check whether their current position is already over the chest.
Either way, if the check succeeds, "attach" the inserter to a list of inserters that are eligible to pick up an item from the chest (in the next frame, not this one). This can be done in many ways, including a lockable list, as collisions should be extremely rare. If a lockable list is chosen, then the current timestamp or some other mechanism must be used to ensure that any subsequent execution always gives the same result. You could double-buffer the list, or the list values, or use many other solutions that would work; a sketch of one option follows the frame-1 walkthrough below. If this is the part that is unclear, let me know and I can go into more detail on the list. In fact, this list may only need to be modified during the user-input processing phase, since (up to now) that is the only way an inserter's input and output locations can be swapped. In this way the chest should always know all the inserters that could possibly be picking up from it, so every entity with a link to the chest can do whatever calculation it requires for its next state.
We'll say that the check succeeded and the inserters were added, ready for frame 2.
In yet another independent thread, the chest checks whether any inserters are trying to remove an item. Seeing none, it does nothing. It should see none regardless of whether the inserters have run their code yet. If they have, and you are using a synchronised list (not a double-buffered one), then the chest should still be able to tell that the inserters are there for the next frame and not the current one.
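To make the "requesting list" concrete, here is a rough C++ sketch of what I have in mind (all names are made up for illustration, not actual Factorio code): inserters append their request during frame N under a small lock, and everything filed in frame N only becomes readable in frame N+1, when the list is frozen.

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

// Hypothetical sketch: a chest keeps two request lists, one being filled
// during the current frame and one (frozen) that was filled last frame.
struct PickupRequest {
    std::uint64_t entityId;   // used later as the deterministic tie-breaker
    int amountWanted;
};

struct Chest {
    int itemCount = 3;

    // Requests filed *this* frame; nobody reads these until the next frame.
    std::vector<PickupRequest> pendingRequests;
    std::mutex pendingMutex;  // collisions should be rare, so the lock is cheap

    // Requests filed *last* frame; read-only for the whole current frame.
    std::vector<PickupRequest> activeRequests;

    // Called by an inserter's (or robot's) thread during frame N.
    void requestPickup(std::uint64_t entityId, int amount) {
        std::lock_guard<std::mutex> lock(pendingMutex);
        pendingRequests.push_back({entityId, amount});
    }

    // Called exactly once, single-threaded, at the frame boundary (the "clock tick").
    void flip() {
        activeRequests = std::move(pendingRequests);
        pendingRequests.clear();
    }
};
```

Because activeRequests is never written during a frame, any thread can read it at any time in that frame without synchronisation; only the append to pendingRequests needs the lock (or a lock-free list, or per-thread buffers merged at the flip).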
The clock ticks and we start processing frame 2. What was previously written to the output buffers is now "flipped" to become the entities' current values.
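The "flip" itself can be as simple as a two-slot wrapper around each piece of mutable state. Again, just a sketch with made-up names, not the real engine types:

```cpp
// Minimal double-buffered ("clock-controlled") value.
// get() always returns the value from the previous frame; set() writes the
// value for the next frame. flip() runs once per tick, single-threaded.
template <typename T>
class Buffered {
public:
    explicit Buffered(T initial) : slots_{initial, initial} {}

    const T& get() const { return slots_[current_]; }    // frame-N value
    void set(const T& v)  { slots_[1 - current_] = v; }  // frame-N+1 value

    void flip() {
        current_ = 1 - current_;
        // Carry the value forward so entities that didn't write this frame
        // keep their old state instead of exposing a two-frame-old value.
        slots_[1 - current_] = slots_[current_];
    }

private:
    T slots_[2];
    int current_ = 0;
};

// Example use inside an inserter's per-frame update:
//   double angle = armAngle.get();           // last frame's angle
//   armAngle.set(angle + rotationPerTick);   // becomes visible next frame
```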
The inserters, both ready to pick up an item, check (again on separate threads) who gets to take items from the chest. They can do this because the "requesting list" is clock-safe (discussed above). Each inserter figures out how many items it can take; in this case one is able to collect the 3 plates while the other sits and does nothing. The decision of which inserter gets to pick up first can be based on some ID, such as an entity ID, or possibly some physical rule like "top left goes first". It could also be a FIFO system based on how long an inserter has been waiting (just note its arrival frame in the list), or any other DETERMINISTIC method. You CANNOT do "I processed it first, so I get the items".
So, as all three entities (the chest and the two inserters) have access to the same clock-controlled information, and as they all run the same code to determine who takes what, they can all update their own state based on the result, independently of the ordering or timing of the other threads.
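The point is that the "who takes what" decision is a pure function of frame-2 state, so every thread that runs it gets the same answer. A hypothetical sketch, using the lowest entity ID as the tie-breaker (and reusing the PickupRequest shape from the earlier sketch):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct PickupRequest {
    std::uint64_t entityId;
    int amountWanted;
};

// Pure, deterministic allocation: it depends only on last frame's chest count
// and last frame's request list, never on which thread happens to run first.
// Returns how many items the given entity may take this frame.
// (Takes the request list by value so it can sort a private copy.)
int allocatedAmount(std::uint64_t forEntity,
                    int chestCount,
                    std::vector<PickupRequest> requests) {
    // A fixed, data-dependent order: lowest entity ID first. Any deterministic
    // rule works here (position, arrival frame, etc.), just never "who ran first".
    std::sort(requests.begin(), requests.end(),
              [](const PickupRequest& a, const PickupRequest& b) {
                  return a.entityId < b.entityId;
              });

    int remaining = chestCount;
    for (const PickupRequest& r : requests) {
        int granted = std::min(r.amountWanted, remaining);
        remaining -= granted;
        if (r.entityId == forEntity)
            return granted;
    }
    return 0;  // the entity was not in the request list
}
```

With 3 plates in the chest and two requests for 5, the lower-ID inserter is granted 3 and the other 0, no matter which thread finishes first. The chest runs the same function for every requester (summing the grants) to compute its own next item count, so the chest and the inserters agree without ever communicating during the frame.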
When it comes round to frames 3, 4, etc., the same code is run; at some point the timeout for the first inserter arm is reached, the inserter removes itself from the "requesting list", and it starts its swinging code.
If you wish to include robots, then on the frame before a robot arrives, attach it to the list of requesting entities in the same manner as the inserter arm. If at any point the chest drops to zero, the robot's update phase can check the item count in the chest and cancel the order if need be, or do whatever logic you wish to do with robots.
I hope I have addressed your question. If I have not, let me know what I have missed and I will attempt further explanation.
hoho wrote:Or, another alternative, instead of chest have a transport belt with one item running on it but several inserters trying to take it off. Also add those inserters and belt to wire network that connects to machines in half the factory.
The circuit network is especially easy, as you can already see the "the value becomes available in the next frame" behaviour with an addition combinator wired to itself. As the output of a combinator has no bearing on its behaviour during the current frame, only on the input values of the calculation done on the next frame, you should be able to see reasonably easily that you could process each network independently of any other processes in the factory. Values read from the belt are the contents of the belt during that frame, not the next, so you can just read the state of the belt and add it to the network signal for the next frame. In fact, you can read the state of any entity during that frame without worrying about whether that entity has been updated yet. The double buffer ensures it.
The inserters being enabled and disabled is essentially the same: read the current value of the circuit network (the value for that frame) and decide whether the inserter is able to pick up an item. As all threads see exactly the same "current value", they will all come to the same conclusion.
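As a sketch of the self-feeding combinator (reusing the Buffered wrapper from the sketch above; names are again hypothetical): the network value everyone reads during frame N was computed at the end of frame N-1, so feeding a combinator's output back into its own input just adds one frame of latency, exactly as it does in the game today.

```cpp
// Hypothetical double-buffered circuit network signal.
struct CircuitNetwork {
    Buffered<int> ironSignal{0};  // the value everyone reads this frame
};

// An addition combinator wired back onto the same network: it reads the
// frame-N value and writes the frame-N+1 value. Any number of entities may
// call ironSignal.get() concurrently; only the network's own update writes.
void updateSelfFeedingCombinator(CircuitNetwork& net, int beltContentsThisFrame) {
    int current = net.ironSignal.get();                    // frame-N value
    net.ironSignal.set(current + beltContentsThisFrame);   // visible in frame N+1
}

// An inserter enabled by "iron > 10" reads the same frame-N value on its own
// thread, so every thread reaches the same enable/disable decision this frame.
bool inserterEnabled(const CircuitNetwork& net) {
    return net.ironSignal.get() > 10;
}
```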
greep wrote:FYI, you really should read that link smarty posted, it's pretty obvious they've considered everything you've said:
I did, both when it came out and when it was linked.
I don't dispute that what you quoted is the feeling they have towards multithreaded code. It's the feeling the entire industry has. What amazes me is that you missed the part where they say:
The main problem when dealing with parallelism is access to shared resources, eg. when two different threads write to the same piece of memory. This is either solved by locks, which is not really an option on the entity level, or by strategy that prevents the threads to write to the same piece of memory.
So here I am trying to put forward a strategy (one that, as far as I know, is not well known) that does exactly what they are looking for: preventing threads from writing to the same memory location. It's also a strategy that should not need significant changes to their update code. I think they may even find that the compiler will type-check the places that need to be updated to be clock-controlled. And by significant I mean "a major change in logic", not "I have to add .get() to all these errors?".
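To illustrate what I mean by the compiler finding those places (a sketch built on the hypothetical Buffered wrapper from earlier, not their actual code): if the raw field is only reachable through the buffered wrapper, every old plain read or write stops compiling, and the error list is exactly the list of places that need to become clock-controlled.

```cpp
// Before: a plain field, freely read and written from anywhere.
//   struct Inserter { double armAngle; };

// After: the field is wrapped, so every old `ins.armAngle` use fails to
// compile until it is rewritten as .get() / .set(), giving a
// compiler-generated checklist of the code that has to change.
struct Inserter {
    Buffered<double> armAngle{0.0};
};

void updateInserter(Inserter& ins, double rotationPerTick) {
    // double a = ins.armAngle;              // old code: now a compile error
    double a = ins.armAngle.get();           // read last frame's value
    ins.armAngle.set(a + rotationPerTick);   // write next frame's value
}
```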