Big table and one calculation per tick
Posted: Thu Aug 01, 2019 3:29 pm
by darkfrei
Hi all!
How do I add a new element to a big table,
how do I delete one element from it,
and how do I work on one element from this table on this tick and on the NEXT element on the next tick?
After deleting an element with table[index] = nil I cannot just use index = index + 1, because table[index+1] can also be nil,
possibly the next thousands of elements too, or the deleted one may have been the last element, so I have to start again from index = 1.
Re: Big table and one calculation per tick
Posted: Thu Aug 01, 2019 9:58 pm
by DaveMcW
Code: Select all
function on_tick(event)
  local index = global.big_table_index
  global.big_table_index = next(global.big_table, global.big_table_index)
  update_big_table(index)
end

function update_big_table(index)
  if index == nil then return end -- next() returns one nil index after each cycle
  local data = global.big_table[index]
  if we_need_to_delete_it then
    global.big_table[index] = nil
  else
    -- update data
  end
end
This works because we get the next index BEFORE accessing the previous index. The previous index is now at the end of the cycle and we can do whatever we want to it inside the update function.
For a complete example see
https://github.com/DaveMcW/burner-fuel- ... ontrol.lua
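A minimal sketch of how such a handler is typically wired up in control.lua; the registration calls are standard Factorio API, and the global fields are just the ones assumed in the snippet above:
Code: Select all
script.on_init(function()
  global.big_table = {}        -- keyed entries to be processed one per tick
  global.big_table_index = nil -- nil means next() starts from the beginning
end)
script.on_event(defines.events.on_tick, on_tick)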
Re: Big table and one calculation per tick
Posted: Thu Aug 01, 2019 11:00 pm
by slippycheeze
DaveMcW wrote: Thu Aug 01, 2019 9:58 pm
This works because we get the next index BEFORE accessing the previous index. The previous index is now at the end of the cycle and we can do whatever we want to it inside the update function.
As long as you don't add a new key, next() iteration will work fine. Adding a key during iteration is "undefined" behaviour: in vanilla Lua 5.2 it might trigger a rehash, and then some keys get skipped. Factorio has a modified table implementation, so the exact behaviour is unknown, but I suspect it is immune to that for tables < 1000 keys in size.
Regardless, you should assume that inserting a new key may result in skipping items in this pass through the table. If that matters, reset your iteration to the beginning when you add a new key.
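A minimal sketch of that reset, assuming the global.big_table / global.big_table_index layout from the snippet above; add_entry is just an illustrative helper:
Code: Select all
function add_entry(key, data)
  global.big_table[key] = data
  -- restart the next()-based scan so the new key cannot cause entries
  -- to be skipped in the current pass
  global.big_table_index = nil
end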
Re: Big table and one calculation per tick
Posted: Fri Aug 02, 2019 1:42 pm
by eradicator
@OP's question is too vague and doesn't state what problem he's trying to solve. Performance? Code maintainability? ...?
No particular problem? -> table.remove()
Order unimportant? -> table[index_to_delete] = table[#table] (see the sketch below)
Table really large (i.e. 100k+ entries)? -> depends on the use case.
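A minimal sketch of the "order unimportant" variant; unordered_remove is just an illustrative name:
Code: Select all
local function unordered_remove(list, index_to_delete)
  local n = #list
  list[index_to_delete] = list[n] -- copy the last element into the gap
  list[n] = nil                   -- drop the last slot; the array stays dense
end
This runs in constant time and keeps the array dense, at the cost of changing the order of the remaining elements.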
Re: Big table and one calculation per tick
Posted: Fri Aug 02, 2019 7:11 pm
by darkfrei
eradicator wrote: Fri Aug 02, 2019 1:42 pm
@OP's question is too vague and doesn't state what problem he's trying to solve. Performance? Code maintainability? ...?
Nothing concrete; it can be every solar panel, every chunk, every electric-energy-interface or every burner inserter.
With all of them we have a really big number of elements, so there must be some optimization for megabases: you cannot calculate 3600 of them every minute, but it's enough to calculate just one element every tick.
For example, Klonan's BurnerLeech:
Code: Select all
function leech()
  if not fuel_list then
    -- skipped
  end
  if global.burner_index == nil then
    global.burner_index = 1
  end
  if global.burner == nil then return end
  if global.burner[global.burner_index] == nil then return end
  check_burner(global.burner[global.burner_index])
  if global.burner_index >= #global.burner then
    global.burner_index = 1
  else
    global.burner_index = global.burner_index + 1
  end
end
On big tables the table.remove() operation really needs too much CPU time.
I've used the same method in my RITEG, but in that case the number of elements is not too big.
Now I need to keep in memory all resource entities near the edges of ore patches.
Re: Big table and one calculation per tick
Posted: Fri Aug 02, 2019 11:40 pm
by eradicator
If you're just iterating over a table of entities then the order is irrelevant and you can use the "move last element to deleted position" trick like I said above.
Also I think your fear of table.remove() is based on some sort of screwed-up test case. In a list-of-entities scenario the removal only happens on deconstruction, which is in itself a very rare event. And you're forgetting that by using a numbered list in the first place you also have to iterate the whole table to even *find* the entity you want to remove, which has a similar cost.
But if you're worried about that, you're clearly optimizing for performance, and not for "nothing concrete".
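To make the cost argument concrete, a hedged sketch with illustrative names only: before you can remove a deconstructed entity from a numbered list you first have to find its index, and that linear search is already the expensive part.
Code: Select all
local function find_index(list, entity)
  for i = 1, #list do -- full O(n) scan, comparable to the cost of table.remove()
    if list[i] == entity then return i end
  end
end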
Re: Big table and one calculation per tick
Posted: Sat Aug 03, 2019 10:18 am
by darkfrei
eradicator wrote: Fri Aug 02, 2019 11:40 pm
Also I think your fear of table.remove() is based on some sort of screwed-up test case. In a list-of-entities scenario the removal only happens on deconstruction, which is in itself a very rare event.
Thanks for suggesting to move the last element, it's a very nice solution.
I have the same "one calculation per tick" also here:
Growing Ores, but here each element is an edge of an ore patch. Deleting an element happens very often: when the ore patch grows, the script adds new edges and checks whether an old ore edge is not an edge anymore.
Re: Big table and one calculation per tick
Posted: Wed Aug 07, 2019 1:22 am
by slippycheeze
darkfrei wrote: Sat Aug 03, 2019 10:18 am
eradicator wrote: Fri Aug 02, 2019 11:40 pm
Also I think your fear of table.remove() is based on some sort of screwed-up test case. In a list-of-entities scenario the removal only happens on deconstruction, which is in itself a very rare event.
Thanks for suggesting to move the last element, it's a very nice solution.
I have the same "one calculation per tick" also here:
Growing Ores, but here each element is an edge of an ore patch. Deleting an element happens very often: when the ore patch grows, the script adds new edges and checks whether an old ore edge is not an edge anymore.
Swapping the last position into the newly freed position also has the benefit that it keeps your array dense. In Lua, sparse arrays are roughly the devil and should be treated as such. They do all sorts of fun things, like breaking `#` in exciting ways, which makes it very easy for someone to do the wrong thing with them and skip everything past the first gap.
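A tiny illustration of that `#` problem: once the array has a gap, `#` is only guaranteed to return *a* border, not the last used index.
Code: Select all
local t = {"a", "b", "c", "d"}
t[2] = nil  -- poke a hole into the array part
print(#t)   -- may print 4 or 1; any border of the sparse array is a valid answer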
Re: Big table and one calculation per tick
Posted: Thu Aug 08, 2019 8:53 pm
by darkfrei
Right now I use this code:
Code: Select all
function on_tick()
  local rocks = global.rocks or {}
  if #rocks == 0 then return end
  local id = global.next_id or 1
  if id > #rocks then id = 1 end
  local rock = rocks[id]
  local is_valid = process_rock(rock) -- false if the rock must be deleted
  if is_valid then
    global.next_id = id + 1
  else
    if id < #rocks then -- the element will be overwritten with the last one
      rocks[id] = rocks[#rocks]
    end
    table.remove(rocks, #rocks) -- the last one must be deleted
  end
end
Are there any possible problems here?
Re: Big table and one calculation per tick
Posted: Fri Aug 09, 2019 11:36 pm
by slippycheeze
Your `process_rock()` function *must* check the entity is valid. Otherwise, aside from being a bit ugly, that should be fine in Factorio up to 1000-ish entities. Beyond that, I am not 1000 percent sure you will be desync-safe. You *should* be, because the array can be as large as 2^30 minimum, and probably larger, but I don't know enough about Factorio to be certain.
You should probably consider just using `next()`, but if you prefer this strategy, it'll be fine. There isn't anything wrong with your code.
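A hedged sketch of that validity check; the process_rock name and the boolean return convention come from darkfrei's snippet above, the body is an assumption:
Code: Select all
function process_rock(rock)
  if not (rock and rock.valid) then
    return false -- invalid entity, tell on_tick to drop it from the table
  end
  -- ... per-rock work goes here ...
  return true
end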
Re: Big table and one calculation per tick
Posted: Mon Aug 12, 2019 3:42 pm
by eradicator
slippycheeze wrote: Fri Aug 09, 2019 11:36 pm
Your `process_rock()` function *must* check the entity is valid. Otherwise, aside from being a bit ugly, that should be fine in Factorio up to 1000-ish entities. Beyond that, I am not 1000 percent sure you will be desync-safe. You *should* be, because the array can be as large as 2^30 minimum, and probably larger, but I don't know enough about Factorio to be certain.
You should probably consider just using `next()`, but if you prefer this strategy, it'll be fine. There isn't anything wrong with your code.
What kind of desync is dependent on array size, and how does "1000" relate to this?
Re: Big table and one calculation per tick
Posted: Mon Aug 12, 2019 8:10 pm
by Choumiko
eradicator wrote: Mon Aug 12, 2019 3:42 pm
What kind of desync is dependent on array size, and how does "1000" relate to this?
https://lua-api.factorio.com/latest/Libraries.html says pairs() guarantees the first 1024 numbered keys to be iterated in order. It doesn't say anything about the 1025th onwards. Maybe that?
But reading everything, pairs() uses next() internally, so there shouldn't be any difference between them, I guess.
Re: Big table and one calculation per tick
Posted: Mon Aug 12, 2019 8:17 pm
by darkfrei
There's no pairs() function.
Re: Big table and one calculation per tick
Posted: Mon Aug 12, 2019 8:38 pm
by slippycheeze
Choumiko wrote: Mon Aug 12, 2019 8:10 pm
eradicator wrote: Mon Aug 12, 2019 3:42 pm
What kind of desync is dependent on array size, and how does "1000" relate to this?
https://lua-api.factorio.com/latest/Libraries.html says pairs() guarantees the first 1024 numbered keys to be iterated in order. It doesn't say anything about the 1025th onwards. Maybe that?
Correct.
Choumiko wrote: Mon Aug 12, 2019 8:10 pm
But reading everything, pairs() uses next() internally, so there shouldn't be any difference between them, I guess.
In practical terms this is true. You can see the implementation in vanilla Lua here, which will tell you that there is some non-trivial magic under the hood, but that it is approximately a structured call to next() when the metatable has no __pairs override.
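For a plain table with no __pairs metamethod, that boils down to roughly this (a sketch, not the actual C source):
Code: Select all
local function plain_pairs(t)
  -- pairs(t) effectively returns the next function, the table, and nil
  return next, t, nil
end

for k, v in plain_pairs({a = 1, b = 2}) do
  print(k, v) -- iteration order over hash keys is unspecified
end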
darkfrei wrote: Mon Aug 12, 2019 8:17 pm
There's no pairs() function.
pairs() is a primitive function implemented in C as part of the Lua base library. It is, however, absolutely and certainly a function.
Re: Big table and one calculation per tick
Posted: Tue Aug 13, 2019 1:22 am
by eradicator
slippycheeze wrote: Mon Aug 12, 2019 8:38 pm
Choumiko wrote: Mon Aug 12, 2019 8:10 pm
eradicator wrote: Mon Aug 12, 2019 3:42 pm
What kind of desync is dependent on array size, and how does "1000" relate to this?
https://lua-api.factorio.com/latest/Libraries.html says pairs() guarantees the first 1024 numbered keys to be iterated in order. It doesn't say anything about the 1025th onwards. Maybe that?
Correct.
darkfrei wrote: Mon Aug 12, 2019 8:17 pm
There's no pairs() function.
pairs() is a primitive function implemented in C as part of the Lua base library. It is, however, absolutely and certainly a function.
The code bit in question does not contain any calls to next() or pairs(). And even if it did, pairs() always iterates the *whole* table at once and cannot be distributed over several ticks, so how is this relevant here? The order of calculations within the same tick is also irrelevant as long as any order leads to the same result.