Neural net entities

Post your ideas and suggestions for how to improve the game.

Moderator: ickputzdirwech

benwillard
Burner Inserter
Posts: 17
Joined: Sat Jun 22, 2019 11:30 am

Neural net entities

Post by benwillard »

TL;DR
Neural logic: elements of neural networks as new entities.
What?
I suggest a component that would read two sets of real values, using the first set as multiplicators (weights) for the second, then summing the products into an output.
The next level of these components would be layers of neurons, the smaller the better, and so on.
Why?
That would make Factorio a builder of self-learning factories by giving it easy, game-friendly deep-learning abilities.
See more here: https://skymind.ai/wiki/neural-network
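In rough Python terms, a minimal sketch of what such a component would compute (the numbers are made up, just to show the arithmetic):

def neuron_output(values, multiplicators):
    # Multiply each value by its multiplicator and sum the products.
    return sum(v * m for v, m in zip(values, multiplicators))

# Example: three incoming signals and their multiplicators.
print(neuron_output([1.0, 2.0, 3.0], [0.5, -1.0, 0.25]))  # -0.75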
Roxor128
Fast Inserter
Posts: 168
Joined: Sun Oct 02, 2016 9:48 am

Re: Neural net entities

Post by Roxor128 »

I think you could actually do this with the existing circuit network.

Implementing a single neuron would be simple enough: a bank of arithmetic combinators multiplying inputs by weights fed in via a common line, all outputting on a common signal, with their outputs fed into a decider combinator that takes its threshold from another line.
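To make that concrete, here is a rough sketch (in Python, with made-up signal names and integer values like the circuit network uses) of what such a bank would compute:

# Each arithmetic combinator multiplies one input signal by its weight and
# outputs on a common signal; wiring the outputs together sums them.
inputs  = {"iron-plate": 40, "copper-plate": 12, "coal": 7}
weights = {"iron-plate": 3,  "copper-plate": -2, "coal": 5}

weighted_sum = sum(inputs[s] * weights[s] for s in inputs)   # 120 - 24 + 35 = 131

# The decider combinator fires when the sum exceeds the threshold from another line.
threshold = 100
output = 1 if weighted_sum > threshold else 0
print(weighted_sum, output)   # 131 1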

How you'd modify the weights to make it learn, I have no idea. I never managed to get my head around that aspect of neural networks.

I think Factorio's circuit network might be Turing Complete, though. It's got conditional branching out of the box, and you can build memory cells out of arithmetic combinators, though it is clunky.
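For what it's worth, the usual memory-cell trick can be sketched tick by tick like this (a simplification with made-up values): an arithmetic combinator computing "signal + 0", with its output wired back to its own input, keeps re-emitting whatever pulse was last injected.

stored = 0                    # what the feedback loop currently carries
pulses = [5, 0, 0, 0]         # an external pulse of 5 on the first tick, then nothing

for tick, pulse in enumerate(pulses):
    wire_input = stored + pulse     # feedback value plus any external pulse on the wire
    stored = wire_input + 0         # the combinator computes "input + 0" every tick
    print(tick, stored)             # holds 5 after the pulse has passed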
benwillard
Burner Inserter
Posts: 17
Joined: Sat Jun 22, 2019 11:30 am

Re: Neural net entities

Post by benwillard »

Roxor128 wrote: Sun Jun 23, 2019 10:14 am I think you could actually do this with the existing circuit network.

I think Factorio's circuit network might be Turing Complete, though. It's got conditional branching out of the box, and you can build memory cells out of arithmetic combinators, though it is clunky.
I agree. The point is to have smaller "neurons". Using the present entities would take a lot of time, space, and thinking. Having 1x1 neurons and 1 x n layers that already work with reals would be a fast way. They could also be used as non-learning processing units.
Roxor128
Fast Inserter
Posts: 168
Joined: Sun Oct 02, 2016 9:48 am

Re: Neural net entities

Post by Roxor128 »

benwillard wrote: Sun Jun 23, 2019 1:50 pm Having 1x1 neurons and 1 x n layers that already work with reals would be a fast way.
Unfortunately, you're out of luck there. As far as I know, Factorio's internal simulation is integer-based (and the circuit network definitely is). I recall the devs beating their heads against the wall at one point over the fluid system causing trouble for players because it used floating-point numbers and players kept running afoul of rounding issues. I think they've since changed it to an integer system, just to get its behaviour consistent.

Then again, it might not be necessary. If your neurons are essentially multiply, sum, and branch, then I can't see why you couldn't do an integer-only implementation. Use 8-bit values for input and weight, get a 16-bit result from the multiply, maybe get a 20-bit result from the sum (assuming 16 inputs) and use a 20-bit value for the threshold to compare with. The output value would just be 1 or 0 (firing or not-firing), wouldn't it? Back down to 8 bits again.
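The bit-width reasoning checks out; a quick integer-only sketch in Python (with random made-up values):

import random

random.seed(0)
inputs  = [random.randrange(256) for _ in range(16)]   # 16 unsigned 8-bit inputs
weights = [random.randrange(256) for _ in range(16)]   # 16 unsigned 8-bit weights

products = [i * w for i, w in zip(inputs, weights)]     # each fits in 16 bits (max 255*255 = 65025)
total = sum(products)                                   # 16 of them fit in 20 bits (max 1040400 < 2**20)

threshold = 500000                                      # a 20-bit threshold value
fired = 1 if total >= threshold else 0                  # the output collapses back to a single bit
print(total, fired)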

I poked around the mod portal and found that it is indeed possible to make custom combinators, so it should be possible to make a mod for it. I don't, however, think it's possible to get it down to 1*1. The only combinator that size is the constant combinator, and I suspect it's because it only has an output. If you need inputs as well, I suspect 1*2 would be the minimum size for a combinator with distinct input and output sides. Unfortunately, I don't understand the training side of neural networks well enough to implement it.
benwillard
Burner Inserter
Posts: 17
Joined: Sat Jun 22, 2019 11:30 am

Re: Neural net entities

Post by benwillard »

Roxor128 wrote: Mon Jun 24, 2019 10:54 am
it might not be necessary
[...]
If your neurons are essentially multiply, sum, and branch, then I can't see why you couldn't do an integer-only implementation. Use 8-bit values for input and weight, get a 16-bit result from the multiply, maybe get a 20-bit result from the sum (assuming 16 inputs) and use a 20-bit value for the threshold to compare with. The output value would just be 1 or 0 (firing or not-firing), wouldn't it? Back down to 8 bits again.
You're right: 8 bits is enough (or maybe 16, with one, two or three used for a metasignal, leaving 13 or 14 bits available), plus a NaN value, because that gives enough distinct values for a big solution space. And no, the result is not only "firing or not": the output of a neuron may also be a 16-bit integer, usable as a real. Rounding is not a real problem in this application. The value could also be shown on a digital display entity (or even a steampunk analogue one :) ). The possible rounding problems would be part of the game.
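One way to read "16-bit integers usable as reals" is fixed point; a minimal sketch, assuming 8 fractional bits and one code reserved for NaN (both assumptions, not part of the original suggestion):

SCALE = 256          # 8 fractional bits, so steps of 1/256
NAN_CODE = -32768    # reserve the most negative 16-bit value as NaN

def encode(x):
    # Real -> 16-bit fixed point, rounding to the nearest step.
    if x != x:                       # NaN check
        return NAN_CODE
    return max(-32767, min(32767, round(x * SCALE)))

def decode(n):
    return float("nan") if n == NAN_CODE else n / SCALE

print(encode(3.14159), decode(encode(3.14159)))   # 804 3.140625 (the rounding is part of the game)
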
Roxor128 wrote: I suspect 1*2 would be the minimum size for a combinator with distinct input and output sides. Unfortunately, I don't understand the training side of neural networks well enough to implement it.
No, it could be a special electric pole with several wires on it.

The learning part of this kind of circuitry isn't easy to set up, but it isn't mandatory for little systems: you can search for the values manually, as they normally make sense, like setting a percentage of whatever to be a limit for whatever else.
mmmPI
Smart Inserter
Posts: 3801
Joined: Mon Jun 20, 2016 6:10 pm

Re: Neural net entities

Post by mmmPI »

Roxor128 wrote: Sun Jun 23, 2019 10:14 am How you'd modify the weights to make it learn, I have no idea. I never managed to get my head around that aspect of neural networks.
Maybe you can have a set of values that represent the travel times of trains hauling, say, iron ore. Your neural network could be oriented to call a train from the location that is usually the fastest to answer the call, with the aim of adapting to traffic congestion.

You can see it as many experiments: each arriving train gives the system one new piece of information, a single travel time, and modifies a weight, which would be the average travel time from that spot.

Thus the system would learn over time to avoid certain areas and use others more often. For example, if congestion suddenly appears on a portion of the rail network without you noticing at first, the system will start recording bad travel times for trains from that area and eventually adapt by no longer calling them, reducing the congestion.
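A rough sketch of that update step in Python (station names and times are hypothetical):

# Each arriving train folds one observed travel time into a running average for its
# origin, and the dispatcher calls a train from the origin with the lowest average.
avg_time = {"iron-north": 90.0, "iron-south": 120.0}   # current "weights", in seconds
count    = {"iron-north": 10,   "iron-south": 10}

def record_trip(station, travel_time):
    count[station] += 1
    avg_time[station] += (travel_time - avg_time[station]) / count[station]

def pick_station():
    return min(avg_time, key=avg_time.get)

for _ in range(5):
    record_trip("iron-north", 200.0)   # repeated slow trips while the north route is congested
print(pick_station(), avg_time)        # the average climbs past 120, so calls shift to iron-south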

A more complex system would require more combinators to be built to store more information, because the average travel time from one spot is not very precise. You would want to make some lanes A, B, C, D, E and so on; several layers of neurons would be needed to keep track of the different open/closed possibilities: A is on while B, C, D, E are off; A and B are on while the rest are off; etc.

If you model each possibility at the same time, the information each train gives is then, for example, "the travel time from one spot when the train comes from C while A and B are closed and D and E are open".

The value that changes when a train arrives is still a travel time that goes into some sort of "average travel time per situation" table, but you give yourself the opportunity for a more precise understanding: by multiplying the number of nodes you can apply the "weight" to a more specific situation.

The system would then know that when you use copper and iron at the same time, you need to call trains from patches that are far away from each other so that the trains don't cross, because crossing would give longer travel times and the system could detect that; but the system wouldn't mind using some of the iron that is near the copper if you require iron and coal at the same time.

Even more would be to keep track of possibilities like "A is on while B, C, D, E are off, and have been for at least 3 minutes", or "A and B are on, the rest are off, but E was closed only 10 seconds ago" (so returning trains might still cause issues). If you make different nodes for those situations, then the weight is still an average travel time, but it carries more value for the system.
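The situation-indexed table could look like this (a sketch with hypothetical lanes; the update rule is the same running average as before, just keyed more precisely):

from collections import defaultdict

avg = defaultdict(lambda: [0.0, 0])    # (origin, lanes currently open) -> [average, count]

def record(origin, lanes_open, travel_time):
    entry = avg[(origin, lanes_open)]
    entry[1] += 1
    entry[0] += (travel_time - entry[0]) / entry[1]

# A train coming from C while A and B are closed and D and E are open:
record("C", ("D", "E"), 75.0)
record("C", ("D", "E"), 95.0)
print(avg[("C", ("D", "E"))])          # [85.0, 2]: the weight for that specific situation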

That was one way I had in mind for applying neural entities. I am not sure it is correct, but it should be possible to implement that sort of logic with existing combinators because, as far as I know, circuit networks are Turing complete.

But it could totally be made easier if you had a neuron entity that is a single combinator, compared to something like what is shown here: viewtopic.php?t=28641, which seems to act like one, if I understood what I saw at the time.

I understand the training part as feeding information to your system that will be organised and reused later. The neurons are in the brain, but you need at least one eye of some sort.

The more precisely you wish to index this information, the more experiments you need to run before the data starts making sense, and the more neurons you need to model the extra configurations. The bigger the brain, especially if you add an ear to the one eye.
benwillard
Burner Inserter
Posts: 17
Joined: Sat Jun 22, 2019 11:30 am

Re: Neural net entities

Post by benwillard »

mmmPI wrote: Mon Jun 24, 2019 1:24 pm [...]
Excellent. You can correct the values by hand in the beginning, and automate later. Factorio spirit. Of course, you can do that in binary mode with the existing entities (thanks to Turing magic), but it would be like sending an SMS in binary.
mmmPI
Smart Inserter
Posts: 3801
Joined: Mon Jun 20, 2016 6:10 pm

Re: Neural net entities

Post by mmmPI »

benwillard wrote: Mon Jun 24, 2019 12:47 pm
Roxor128 wrote: Mon Jun 24, 2019 10:54 am I suspect 1*2 would be the minimum size for a combinator with distinct input and output sides. Unfortunately, I don't understand the training side of neural networks well enough to implement it.
No, it could be a special electric pole with several wires on it.
I was thinking about that again. I think the point here is "distinct input and output sides": an electric pole sums up all signals and connects the wires. If you have a 1x1 combinator, you can only have 1 input and 1 output if you use a colour code for green and red, or 2 outputs, or 2 inputs; if you try another configuration, you will not be able to know what you are doing when you place the wires.

I don't understand what the expected behaviour of a neuron is: how would the described entity function in practice, how many connections with how many wires, what logic would it apply to which values standing for what, and how do you set them up so that it could help materialise that theory?

There are many theoretical descriptions of what a neuron could be, and depending on how you set them up together they are different, as far as I understand (which is not far enough, I guess), because the whole logic I described before could, I think, be implemented without neurons or even binary.

The logic I describe includes, I think, something like what is called backpropagation, but as a workaround, by adding a new value to an average. It is not what a neural network is supposed to do.

Also, I misused the word "model", which I later discovered means something specific in that context.
benwillard
Burner Inserter
Posts: 17
Joined: Sat Jun 22, 2019 11:30 am

Re: Neural net entities

Post by benwillard »

Two rows of values: one comes from outside to provide the "values", another the "multiplicators". The output multiplies each value by its multiplicator and sums the results. Simple. Multiplicators can be locked or not. Backpropagation is when the multiplicators are modified by signals coming back from "downstream" neurons.
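A loose sketch of that in Python, using a simple delta-rule style update as a stand-in for full backpropagation (the update rule and the numbers are assumptions, not part of the proposal):

values         = [1.0, 0.5, -2.0]
multiplicators = [0.2, 0.4, 0.1]
locked         = [False, True, False]    # locked multiplicators are never modified

def forward(values, multiplicators):
    return sum(v * m for v, m in zip(values, multiplicators))

def backprop_step(values, multiplicators, downstream_error, learning_rate=0.01):
    # Nudge each unlocked multiplicator against the error signal coming from downstream.
    for i, v in enumerate(values):
        if not locked[i]:
            multiplicators[i] -= learning_rate * downstream_error * v

out = forward(values, multiplicators)                # 0.2 + 0.2 - 0.2 = 0.2
backprop_step(values, multiplicators, out - 1.0)     # say the downstream target was 1.0
print(out, multiplicators)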