I already conceded that:

mrvn wrote:
First drop the sqrt(). You can compare d^2 instead of d to get the same order.

JohnyDL wrote:
Here's a basic reason why prioritising bots based on distance really won't work (without even needing to look at the code):
For every target you need to work out the distance to it, so every tick you'd compute ((playerX-targetX)^2+(playerY-targetY)^2)^0.5 for every blueprint in the world. That calculation doesn't take one processor cycle: it takes at least 3 for the additions alone, and the multiplications can take dozens of cycles depending on how they're done. That might not sound like a lot, but if you have 1,000 BPs you're adding 10k+ operations to what your computer already has to do each tick, and that's without the extra fetches and loads needed to feed the calculation.
JohnyDL wrote:
Manhattan distance would work, and doing the Pythagorean sum without the square root would also still work as a metric (for comparing distances it's just a scale thing), but Manhattan distance wouldn't look right: you'd get a diamond of action rather than a circle.
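For anyone following along, the sqrt-free comparison we both agree on can be sketched in a few lines of Python (the names here are mine, not the game's):

```python
import math

def dist2(p, q):
    # Squared Euclidean distance: no sqrt, but the same ordering,
    # because sqrt is monotonic on non-negative numbers.
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

player = (0.0, 0.0)
ghosts = [(3.0, 4.0), (1.0, 1.0), (6.0, 0.0)]

by_squared = sorted(ghosts, key=lambda g: dist2(player, g))
by_true = sorted(ghosts, key=lambda g: math.dist(player, g))
assert by_squared == by_true  # identical order, one sqrt per ghost saved
```

Same order out of both sorts, so any nearest-first scheme can work entirely in squared distances.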
The only way to know what's in range is to run the surface check on the area closest to the player. I talked about that here:

mrvn wrote:
Next add some brains and memory. You do not compute the distance of every ghost every tick. And once computed, the distance doesn't change unless the player moves. Even if the player moves, you only have to recompute the distance of ghosts within range, which you can do by iterating over chunks.
JohnyDL wrote:
That's not how surface search works. You can't even speed up the search by looking chunk-wise at 32x32 tiles, because it will ask all 1024 tiles in turn.

SockPuppet wrote:
- Compute ranges in blocks of 6x6 instead of 1x1, so that a bot first travels to the 6x6 area, and inside that area to its destination. You have 25 6x6 blocks (30x30 = 25x6x6) for a mk1 roboport, and then 36 tiles per block. That's 61 computations total, instead of 900, at the cost of a suboptimal route for your bots. Blame the underdeveloped bot AI if you want an excuse. They will still perform better at deconstructing by clearing whole areas faster.
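SockPuppet's two-level idea, as I read it, would look something like this (a Python sketch with made-up coordinates and a made-up function name, not anything from the game's actual API):

```python
def nearest_tile_two_level(player, origin, area=30, block=6):
    # Coarse pass: pick the nearest of the 25 block centres
    # (30x30 = 5x5 blocks of 6x6), then a fine pass over that
    # block's 36 tiles. Total: 25 + 36 = 61 distance checks vs 900.
    checks = 0
    best_block, best_d2 = None, None
    for bx in range(area // block):
        for by in range(area // block):
            cx = origin[0] + bx * block + block / 2
            cy = origin[1] + by * block + block / 2
            d2 = (player[0] - cx) ** 2 + (player[1] - cy) ** 2
            checks += 1
            if best_d2 is None or d2 < best_d2:
                best_block, best_d2 = (bx, by), d2
    bx, by = best_block
    best_tile, best_d2 = None, None
    for tx in range(block):
        for ty in range(block):
            x = origin[0] + bx * block + tx
            y = origin[1] + by * block + ty
            d2 = (player[0] - x) ** 2 + (player[1] - y) ** 2
            checks += 1
            if best_d2 is None or d2 < best_d2:
                best_tile, best_d2 = (x, y), d2
    return best_tile, checks
```

Note this can return a slightly suboptimal tile (the nearest tile overall may sit in a non-nearest block), which is exactly the trade-off SockPuppet describes.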
Yup, and I'm complaining about 900 tile checks for a 30x30, which the devs are saying won't work, so something like 33600 checks might be a real problem.

SockPuppet wrote:
- Worst case max range is a 183x183 area (mk2 armor, 1 portable reactor, rest mk2 personal roboports), which spans more than one screen when zoomed out. The travel time of the bots takes longer than computing the max distance. Also, the bots tend to run out of power before returning from such a roundtrip.
Because of optimisation, and because of not adding ideas that at best add 1.6% extra processing and at worst, with this way of only looking at the bits in the personal roboport area, more than double the runtime. 33600 checks per player, each of which is multiple instructions, is already well into the hundreds of thousands of operations. And the way the devs describe it, the game would have to load from RAM 33600 times per tick, which soon adds up; RAM is measured in MHz, after all, not GHz. Multiply 33600 by 60 ticks and that's around 2 million probably non-sequential RAM accesses per second, per player, and that's assuming none of it is paged out to disk. Nope, not possible. I mean seriously not possible: 4 players with these roboports would kill a high-end machine with quad-channel memory. And remember, that's only checking the surface locations and their properties, assuming each check can be done in exactly one RAM cycle; it doesn't even include having to load the program itself out of RAM and process that too.

SockPuppet wrote:
- If you still want to play the computational power card: once you reach megabases with thousands of bots, a few hundred trains, and have explored the map quite a bit, you experience drops in UPS. Anything less works fine. And this is still on 0.15! We can currently create a massive factory with dozens of players, and still have it playable for all players.
So your solution adds more RAM requirements to the game and duplicates data, leaving the game to manage keeping that data synced in memory. And if I'm reading correctly, you're also saying the heap could be kept locally and calculated at a different time to the server-side copy, which would cause desyncs between what my heap is planning and what the server has, because my computer and the server run at different speeds: mine can calculate for me but doesn't think other players are important, while the server is delayed reprocessing because it's dealing with other players.

mrvn wrote:
You also don't compare the distance of every ghost every tick. The best data structure for this is a heap with the nearest ghost at the top. Removing a ghost takes O(log n) time and leaves the next nearest ghost at the top of the heap.

JohnyDL wrote:
Then you have to sort the list based on this metric. Even using a quicksort, a 1000-item array takes 2000 comparisons minimum, plus 1000s of swaps and reorganisations of the data structure; even being generous and saying this takes only 20k cycles, that's still a lot.
Combined with a little overhead, this reordering of items from near to far probably adds between 100k and 1m clock cycles for just 1000 BP objects (I have single blueprints with more items than that). A reasonable CPU is 3.6GHz and has to do 60 ticks per second, so Factorio only has 60m clock cycles to play with per tick, and even then the operating system doesn't give it all of them. So this suggestion is basically asking for relatively small blueprints to cut an extra 1.6% off Factorio's available time (which would add noticeable UPS effects, given the devs were super happy about saving 2% in an FFF a few weeks ago) just to make personal robots a little more lifelike, when lifelike robots would have orders of magnitude more processing power, since each one could have its own 3.6GHz processor and they'd work in parallel, not sequentially.
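The tick-budget arithmetic behind those percentages checks out (a quick sanity check in Python, using the 1m-cycle upper estimate from above):

```python
cpu_hz = 3_600_000_000                     # 3.6 GHz CPU
ticks_per_second = 60                      # Factorio's fixed update rate
cycles_per_tick = cpu_hz // ticks_per_second   # 60 million cycles per tick

worst_case_cycles = 1_000_000              # upper estimate for reordering 1000 BPs
share = worst_case_cycles / cycles_per_tick    # fraction of one tick consumed
# share * 100 is roughly 1.67%, i.e. the ~1.6% figure quoted above
```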
mrvn wrote:
If recomputing the distance (and rebuilding the heap) is too much work for one tick, then don't do it. Store the distance and player position for each ghost. Then, when processing the heap, each time you compare distances first check whether the stored distance is up to date; if not, recompute it. That way you recompute at most O(log n) distances per tick. So with 65536 ghosts within personal roboport range that would be at most 16 distance computations and comparisons per tick, 20 for a million ghosts, 30 for a billion ghosts.
So instead of your 20k cycles and 1.6% of tick time for 1000 items, I'm asking for 200 cycles and 0.016% of tick time, to make personal roboports + walking around 2-10 times faster depending on range.
Personally I don't think there's enough time to build the heap in the first place during ANY tick. And claiming O(log n) based on pre-organised data, when building that organised data in the first place requires the O(n^2) surface search (n = diameter of the search area in tiles), is simplistic.
You're ignoring circumstances like placing blueprints: a blueprint placed anywhere would require a complete rebuild, because the heap doesn't know whether each new item is in or out of the area without either calculating the distances to all the new items, O(n) time (n = number of orders), or rescanning the local area, O(n^2) time (n = diameter of the search area in tiles). Bots building things, or players erasing blueprints, should also force a re-heap for similar reasons, or you might end up with bots being issued rescinded or even already-completed orders. And that's assuming the player doesn't move: if the player does move, the heap needs to scan and add the edges of the range, on the order of O(4n) (n = diameter of the search area in tiles), every tick anyway.

I understand you've basically built a queue and you're taking from the top and dismissing, but it doesn't help if the player system competes with the roboport network and duplicates jobs rather than working with it. The heap also has no way to handle an order it doesn't have the item for. It could be dismissed, or sent to the bottom of the heap, but that assumes the item won't be added to the player's inventory the next tick, or the tick after: so either the item gets stuck at the top of the heap, being recalculated every tick for nothing if it's never delivered, or the player-centric model breaks when the order moved to the bottom becomes available.
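That per-move edge rescan would look roughly like this (a naive Python sketch using set difference for clarity; a real implementation would enumerate only the leading strip rather than build both full sets):

```python
def newly_covered_tiles(old_center, new_center, radius):
    # Tiles inside the square around new_center but outside the square
    # around old_center. For a one-tile step this is a single strip of
    # 2*radius + 1 tiles, i.e. O(diameter) new tiles to scan per move.
    def square(c):
        return {(x, y)
                for x in range(c[0] - radius, c[0] + radius + 1)
                for y in range(c[1] - radius, c[1] + radius + 1)}
    return square(new_center) - square(old_center)
```

So even the best case for a moving player is one fresh strip of tiles per step, each of which still costs a surface lookup.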
And any solutions you come up with to address those issues add to the compute time, and at this point remember that's mostly RAM access time rather than cycles.