- performance bottleneck: reduces the UPS rate by at least 60%. I tested a 1.5 MB map on a PCIe 4.0 NVMe drive at 64x tick rate: 30 in-game minutes take 35 s with autosave disabled and 57 s with it enabled. That is a penalty of more than 60% caused by the I/O bottleneck; on larger maps and slower SSDs the impact is likely even more noticeable.
- hardware wear: at 64x speed, the 5-minute interval shrinks to roughly 4.7 s of wall-clock time. This likely results in multiple GB of redundant data being written per minute, causing unnecessary SSD wear.
- the safety net disappears: at these speeds, all three autosave slots are overwritten every ~14 seconds. That is often faster than a user can react to errors and failures, effectively invalidating the safety the feature is meant to provide.
1. Keep the autosave functionality tied to game ticks, but dynamically scale the interval by the current tick rate modifier. This preserves deterministic, tick-based autosave behaviour while approximating constant wall-clock intervals.
2. Alternatively, trigger the autosave based on system time instead of game ticks.
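To illustrate suggestion 1, here is a minimal sketch of interval scaling. All names and constants are hypothetical and purely illustrative; they are not Factorio's actual internals, only a model of the idea that the tick interval should grow in proportion to the speed modifier.

```python
# Hypothetical sketch: scale the autosave interval (measured in ticks)
# by the current game-speed modifier, so autosaves land at a roughly
# constant wall-clock interval regardless of simulation speed.

BASE_TICKS_PER_SECOND = 60          # assumed base simulation rate
AUTOSAVE_INTERVAL_SECONDS = 5 * 60  # the configured 5-minute interval

def autosave_interval_ticks(speed_modifier: float) -> int:
    """Ticks between autosaves so the wall-clock spacing stays ~5 minutes."""
    base_interval = AUTOSAVE_INTERVAL_SECONDS * BASE_TICKS_PER_SECOND
    # At 64x speed, 64x as many ticks elapse per wall-clock second,
    # so the tick interval must grow by the same factor.
    return int(base_interval * speed_modifier)

def should_autosave(current_tick: int, speed_modifier: float) -> bool:
    """Deterministic check: depends only on tick count and modifier."""
    return current_tick % autosave_interval_ticks(speed_modifier) == 0
```

Because the decision depends only on the tick counter and the speed modifier, this keeps autosaves deterministic, unlike the wall-clock variant in suggestion 2.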
This could also prevent painful data losses, like the one I experienced yesterday:
When stress-testing my new spaceship design, the defense failed and the platform was destroyed.
By the time I realized the platform was gone, all three autosaves had already been overwritten by the rapid 14-second cycle, losing 4 hours of design work.
Intuitively, autosaves should provide a safety net and obviate the need to secure progress manually;
I was quite sad to find out that this protection disappears at upscaled game speeds.


