Big table and one calculation per tick
Hi all!
How do I add a new element to a big table,
how do I delete one element from a big table,
and how do I work on this tick with one element from the table and with the NEXT one on the next tick?
After deleting with table[index] = nil I cannot just use index = index + 1, because table[index + 1] can also be nil,
probably the next thousands of elements too, or it can be the last element so I need to start again from index = 1.
Re: Big table and one calculation per tick
Code: Select all
function on_tick(event)
  local index = global.big_table_index
  global.big_table_index = next(global.big_table, global.big_table_index)
  update_big_table(index)
end

function update_big_table(index)
  if index == nil then return end -- next() returns one nil index after each cycle
  local data = global.big_table[index]
  if (we_need_to_delete_it) then
    global.big_table[index] = nil
  else
    -- update data
  end
end
For a complete example see https://github.com/DaveMcW/burner-fuel- ... ontrol.lua
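For completeness, here is a minimal sketch (not from DaveMcW's mod; the event wiring and the unit_number key are assumptions) of how the handler above could be registered and how entries might be added to global.big_table:
Code: Select all
-- Sketch only: assumes the tracked entities have a unit_number to use as the key.
script.on_init(function()
  global.big_table = {}
end)

script.on_event(defines.events.on_tick, on_tick)

script.on_event(defines.events.on_built_entity, function(event)
  local entity = event.created_entity
  if entity and entity.valid then
    global.big_table[entity.unit_number] = {entity = entity}
  end
end)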
- slippycheeze
Re: Big table and one calculation per tick
DaveMcW wrote: Thu Aug 01, 2019 9:58 pm
This works because we get the next index BEFORE accessing the previous index. The previous index is now at the end of the cycle and we can do whatever we want to it inside the update function.
As long as you don't add a new key, next() iteration will work fine. Adding a key is "undefined" behaviour that, in vanilla Lua 5.2, might rehash some keys so that you skip them. Factorio has modified its Lua, so the exact difference is unknown, but I suspect tables under 1000 keys are immune to that.
Regardless, you should assume that inserting a new key may result in skipping items in this pass through the table. If that matters, reset your iteration to the beginning when you add a new key.
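A minimal sketch of that reset, assuming the global.big_table / global.big_table_index names from DaveMcW's example (the add_entry helper itself is hypothetical):
Code: Select all
-- Hypothetical helper: clearing the stored iteration key makes the next
-- on_tick call restart the next() cycle from the beginning of the table.
local function add_entry(key, data)
  global.big_table[key] = data
  global.big_table_index = nil -- nothing gets skipped after a possible rehash
end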
- eradicator
Re: Big table and one calculation per tick
@OP's question is too vague and doesn't state what problem he's trying to solve. Performance? Code maintainability? ...?
No particular problem? -> table.remove()
Order unimportant? -> table[index_to_delete] = table[#table] (see the sketch below)
Table really large (i.e. 100k+ entries)? -> depends on the use case.
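A minimal sketch of the "move the last element into the hole" idea (the helper name is made up; eradicator's one-liner omits clearing the last slot, which the sketch adds):
Code: Select all
-- Swap-remove: O(1) deletion from an array when order does not matter.
local function swap_remove(list, index_to_delete)
  local last = #list
  list[index_to_delete] = list[last] -- overwrite the hole with the last element
  list[last] = nil                   -- shrink the array; order is not preserved
end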
Re: Big table and one calculation per tick
eradicator wrote: Fri Aug 02, 2019 1:42 pm
@OP's question is too vague and doesn't state what problem he's trying to solve. Performance? Code maintainability? ...?
Nothing concrete; it can be every solar panel, every chunk, every electric-energy-interface or every burner inserter.
With all of them we can end up with a really big number of elements, so this is an optimization for megabases: you cannot recalculate 3600 of them every minute, but it is enough to calculate just one element per tick.
For example, Klonan's BurnerLeech:
Code: Select all
function leech()
  if not fuel_list then
    -- skipped
  end
  if global.burner_index == nil then
    global.burner_index = 1
  end
  if global.burner == nil then return end
  if global.burner[global.burner_index] == nil then return end
  check_burner(global.burner[global.burner_index])
  if global.burner_index >= #global.burner then
    global.burner_index = 1
  else
    global.burner_index = global.burner_index + 1
  end
end
I've used the same method in my RITEG, but in that case the number of elements is not too big.
Now I need to keep in memory all of the resource entities near the edges of ore patches.
- eradicator
Re: Big table and one calculation per tick
If you're just iterating over a table of entities then the order is irrelevant and you can use the "move the last element to the deleted position" approach like I said above.
Also, I think your fear of table.remove() is based on some sort of screwed-up test case. In a list-of-entities scenario the removal only happens on deconstruction, which is in itself a very rare event. And you're forgetting that by using a numbered list in the first place you also have to iterate the whole table to even *find* the entity you want to remove, which is of similar cost.
But if you're worried about that, you're clearly optimizing for performance, and not "nothing concrete".
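A minimal sketch of what that find-then-remove looks like for a plain array of entities (function and variable names are illustrative, not from any particular mod):
Code: Select all
-- The linear search dominates the cost: finding the entry is O(n) no matter
-- whether table.remove() or swap-remove is used afterwards.
local function remove_entity(list, entity)
  for i = 1, #list do
    if list[i] == entity then
      list[i] = list[#list] -- swap-remove, as suggested above
      list[#list] = nil
      return true
    end
  end
  return false -- the entity was not tracked
end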
Re: Big table and one calculation per tick
eradicator wrote: Fri Aug 02, 2019 11:40 pm
Also, I think your fear of table.remove() is based on some sort of screwed-up test case. In a list-of-entities scenario the removal only happens on deconstruction, which is in itself a very rare event.
Thanks for suggesting to move the last element, it's a very nice solution.
I have the same one-calculation-per-tick approach here as well: Growing Ores, but there the element is an edge of an ore patch. Deleting elements happens very often: when the ore patch grows, the script adds new edges and checks whether an old ore edge is no longer an edge.
- slippycheeze
Re: Big table and one calculation per tick
darkfrei wrote: Sat Aug 03, 2019 10:18 am
    eradicator wrote: Fri Aug 02, 2019 11:40 pm
    Also, I think your fear of table.remove() is based on some sort of screwed-up test case. In a list-of-entities scenario the removal only happens on deconstruction, which is in itself a very rare event.
Thanks for suggesting to move the last element, it's a very nice solution.
I have the same one-calculation-per-tick approach here as well: Growing Ores, but there the element is an edge of an ore patch. Deleting elements happens very often: when the ore patch grows, the script adds new edges and checks whether an old ore edge is no longer an edge.
Swapping the last position into the newly freed position also has the benefit that it keeps your array dense. In Lua, sparse arrays are roughly the devil and should be treated as such. They do all sorts of fun things like breaking `#` in exciting ways, which makes it very easy for someone to do the wrong thing with them by skipping everything past the first gap.
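A tiny illustration of the `#` problem mentioned above (this is plain Lua 5.2 behaviour as defined by the reference manual, nothing Factorio-specific):
Code: Select all
-- With a hole in the sequence, # may return any "border", so code that
-- iterates 1..#t can silently stop at the first gap.
local t = {"a", "b", "c", "d"}
t[3] = nil   -- create a hole instead of keeping the array dense
print(#t)    -- may print 4 or 2; both are valid results for a sparse table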
Re: Big table and one calculation per tick
Right now I use this code.
Are there any possible problems with it?
Code: Select all
function on_tick()
  local rocks = global.rocks or {}
  if #rocks == 0 then return end
  local id = global.next_id or 1
  if id > #rocks then id = 1 end
  local rock = rocks[id]
  local is_valid = process_rock(rock) -- false if the rock must be deleted
  if is_valid then
    global.next_id = id + 1
  else
    if id < #rocks then
      rocks[id] = rocks[#rocks] -- overwrite the deleted element with the last one
    end
    table.remove(rocks, #rocks) -- the last element must be deleted
  end
end
- slippycheeze
Re: Big table and one calculation per tick
Your `process_rock()` function *must* check that the entity is valid. Other than that, aside from being a bit ugly, it should be fine in Factorio up to 1000-ish entities. Beyond that, I am not 1000 percent sure you will be desync-safe. You *should* be, because the array part can hold at least 2^30 entries, and probably more, but I don't know enough about Factorio to be certain.
You should probably consider just using `next()`, but if you prefer this strategy, it'll be fine. There isn't anything wrong with your code.
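A minimal sketch of that validity check, assuming rocks[id] stores a LuaEntity (the rest of the process_rock body is invented):
Code: Select all
-- An entity reference can become invalid (mined, destroyed, ...) between the
-- tick it was stored and the tick it is processed, so check .valid first.
function process_rock(rock)
  if not (rock and rock.valid) then
    return false -- tells on_tick() to drop this entry from global.rocks
  end
  -- ... the actual per-rock work goes here ...
  return true
end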
- eradicator
Re: Big table and one calculation per tick
slippycheeze wrote: Fri Aug 09, 2019 11:36 pm
Your `process_rock()` function *must* check that the entity is valid. Other than that, aside from being a bit ugly, it should be fine in Factorio up to 1000-ish entities. Beyond that, I am not 1000 percent sure you will be desync-safe. You *should* be, because the array part can hold at least 2^30 entries, and probably more, but I don't know enough about Factorio to be certain.
You should probably consider just using `next()`, but if you prefer this strategy, it'll be fine. There isn't anything wrong with your code.
What kind of desync is dependent on array size, and how does "1000" relate to this?
Re: Big table and one calculation per tick
eradicator wrote: Mon Aug 12, 2019 3:42 pm
What kind of desync is dependent on array size, and how does "1000" relate to this?
https://lua-api.factorio.com/latest/Libraries.html says pairs() guarantees the first 1024 numbered keys to be iterated in order. It doesn't say anything about the 1025th onwards. Maybe that?
But reading everything, pairs() uses next() internally, so there shouldn't be any difference between them, I guess.
Re: Big table and one calculation per tick
There's no pairs() function.
- slippycheeze
Re: Big table and one calculation per tick
Choumiko wrote: Mon Aug 12, 2019 8:10 pm
    eradicator wrote: Mon Aug 12, 2019 3:42 pm
    What kind of desync is dependent on array size, and how does "1000" relate to this?
https://lua-api.factorio.com/latest/Libraries.html says pairs() guarantees the first 1024 numbered keys to be iterated in order. It doesn't say anything about the 1025th onwards. Maybe that?
Correct.
Choumiko wrote: Mon Aug 12, 2019 8:10 pm
But reading everything, pairs() uses next() internally, so there shouldn't be any difference between them, I guess.
In practical terms this is true. You can see the implementation in vanilla Lua here, which will tell you that there is some non-trivial magic under the hood, but that it is approximately a structured call to next() when the metatable has no override.
pairs() is a primitive function implemented in C as part of the Lua base library. It is, however, absolutely and certainly a function.
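A quick illustration of that equivalence in plain Lua 5.2 (no Factorio specifics), assuming no __pairs metamethod is set on the table:
Code: Select all
local t = {a = 1, b = 2, c = 3}

-- pairs(t) with no __pairs metamethod returns next, t, nil ...
for k, v in pairs(t) do print(k, v) end

-- ... so it visits exactly the same keys as iterating with next() directly.
for k, v in next, t do print(k, v) end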
- eradicator
Re: Big table and one calculation per tick
slippycheeze wrote: Mon Aug 12, 2019 8:38 pm
    Choumiko wrote: Mon Aug 12, 2019 8:10 pm
        eradicator wrote: Mon Aug 12, 2019 3:42 pm
        What kind of desync is dependent on array size, and how does "1000" relate to this?
    https://lua-api.factorio.com/latest/Libraries.html says pairs() guarantees the first 1024 numbered keys to be iterated in order. It doesn't say anything about the 1025th onwards. Maybe that?
    Correct.
    pairs() is a primitive function implemented in C as part of the Lua base library. It is, however, absolutely and certainly a function.
The code bit in question does not contain any calls to next() or pairs(). And even if it did, pairs() always iterates the *whole* table at once and cannot be distributed over several ticks, so how is this relevant here? The order of calculations within the same tick is also irrelevant as long as any order leads to the same result.