ergzay wrote: Tue Nov 05, 2024 2:05 am
Also I did not say or imply "it's 2D game, so it doesn't have right to require GPU". I said nothing like that. I was not thinking that either.
ergzay wrote: Thu Oct 24, 2024 1:02 pm
The GPU and CPU requirements of Factorio have always been quite modest in modern hardware terms. I'm hoping that they didn't completely throw that away in the expansion. I'm worried about the other planets now requiring a ton of shaders when they didn't before.
You are correct; BlueTemplar said "It would be especially a shame if a 2D game like Factorio couldn't be run on low end hardware", and later in the thread you agreed with a different post of his, which in my mind grouped you together as "people who have an expectation that a 2D game must run on low-end HW." I am sorry about that, my mistake.
From my point of view, Factorio 1.0 was a game that was promised to IndieGoGo backers in 2013 and sold in early access on Steam from 2016. It was fair to deliver the "finished" version of the game such that it could still run on the computers people had when they first supported or purchased it, and complaints about 2.0 running worse than 1.1 did are valid. But Space Age is a 2024 game and a new purchase; if we wanted to stuff it with expensive shaders so that you'd need a high-end gaming computer to even launch it, I think that would be a perfectly acceptable thing for us to do. If it doesn't run well on someone's computer, they can just refund it, and that's fine.
And lastly, when you write things like "I wish more playtesting had been done before release", I read "You didn't do a good enough job." No, I stand by my work; this is not a bug (though I now acknowledge it is an issue that deserves to be improved), and another year of playtesting wouldn't have changed a thing about this.
ergzay wrote: Tue Nov 05, 2024 2:05 am
If anything, by taking advantage of newer hardware and software techniques and by cleaning out a lot of old cruft with the new version by deprecating backward compatibility, the software when doing the same old thing should be able to do it even faster in the new version rather than slower.
I don't feel like we are allowed to do that. On PC, we are still building against the nearly 20-year-old instruction set of the Core 2 Duo. We don't use the 13-year-old AVX instructions, nor even the 18-year-old SSE4 instructions. We limit ourselves to the hardware capabilities of DirectX 10 and OpenGL 3.3, which are also roughly 15-year-old APIs. And even that was a problem for some people when, 6 years ago, we replaced Allegro 5 (which used OpenGL 1.2) with our own graphics backend using OpenGL 3.3 - similar to how somebody used every opportunity to complain about us dropping 32-bit builds years after we did it.
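To make that baseline concrete, here is a minimal sketch of what "building against a Core 2-era instruction set" typically looks like; it assumes a GCC/Clang-style toolchain and is my illustration, not the actual Factorio build setup:

```cpp
// Sketch only - assumed GCC/Clang toolchain, not the actual Factorio build scripts.
// Pinning a Core 2-era baseline means compiling with something like:
//     g++ -O2 -march=core2 game.cpp
// so the compiler never emits AVX/SSE4 implicitly. A project that did want the
// newer instructions would have to detect and dispatch to them explicitly:
#include <cstdio>

int main() {
    // __builtin_cpu_supports (GCC/Clang builtin) checks CPUID at runtime.
    if (__builtin_cpu_supports("avx")) {
        std::printf("CPU has AVX, but a baseline build won't use it unless explicitly dispatched.\n");
    } else {
        std::printf("No AVX - the kind of CPU the baseline still has to support.\n");
    }
    return 0;
}
```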
Internally, we are currently discussing when we can drop compatibility with MacOS X 10.10, so that we can update the C++ compiler, so that we can use some C++20 standard library classes and functions.
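Just to illustrate the kind of thing I mean (an arbitrary example, not a statement of which classes we actually want), here is a trivial use of two C++20 standard library additions that an old compiler and SDK combination can't provide:

```cpp
// Illustration only - an arbitrary pair of C++20 standard library features
// (std::span and std::ranges), not a statement of what Factorio actually needs.
#include <cstdio>
#include <initializer_list>
#include <ranges>
#include <span>
#include <vector>

int main() {
    std::vector<int> ups_samples{60, 59, 58, 61, 60};
    std::span<const int> view(ups_samples);  // non-owning view over the vector

    // C++20 ranges: filter without writing a manual loop.
    for (int ups : view | std::views::filter([](int v) { return v < 60; })) {
        std::printf("slow update: %d UPS\n", ups);
    }
    return 0;
}
```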
And your exchange with BlueTemplar in a thread created by a person with a 12-year-old office laptop strengthens my feeling that deprecating backwards compatibility and taking advantage of newer hardware is absolutely NOT allowed.
ergzay wrote: Tue Nov 05, 2024 2:05 am
Though I disagree with calling the requirements "unreasonably low". M1 is a very powerful chip and was made only a couple years ago. At the time it was released it blew away all but the most power hungry AMD and Intel processors. It should be noted that the M1 is not the "Minimum" system requirements. It's the Steam "Recommended" system requirements.
The M1 is a powerful chip. At late-game stages of our playtesting I used my MacBook Air to play, because none of my other computers were fast enough to keep up with the server - including the computer I have in the office, which was able to render the scene you reported in this thread at 120 FPS. The M1 is a very powerful CPU and possibly a decent GPU. The GPU may be amazing computationally, but it probably has the same memory bandwidth as the CPU (I am assuming; I have not read the technical specifications), which is an order of magnitude less than what dedicated GPUs have. Combine this with a high-resolution display and it's inadequate.
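To put some rough numbers on that (a back-of-envelope sketch; the resolution, target formats, and overdraw factor below are assumptions, not measurements):

```cpp
// Back-of-envelope only - assumed numbers, not profiling data. Shared-memory
// chips like the M1 are commonly quoted at roughly 68 GB/s of memory bandwidth,
// while dedicated GPUs typically have several hundred GB/s.
#include <cstdio>

int main() {
    const double width = 2560, height = 1600;  // assumed native panel of an M1 MacBook Air
    const double target_bytes = 3 * 4;         // game view plus lightmaps, assumed ~3 RGBA8 targets
    const double overdraw = 6.0;               // layered sprites per pixel - a pure guess
    const double blend_rw = 2.0;               // alpha blending reads and writes each target
    const double fps = 60.0;

    const double gbps = width * height * target_bytes * overdraw * blend_rw * fps / 1e9;
    std::printf("Rough render-target traffic: %.0f GB/s (texture fetches and simulation not counted)\n", gbps);
    return 0;
}
```

With these guesses the render targets alone eat a large chunk of the shared bandwidth before sprite texture fetches, the simulation, and the display itself get their share; a dedicated GPU with its own memory barely notices the same load.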
Normally, my MacBook is set to 1650x1050 with "Render in native resolution" disabled, and I did not understand how resolutions on Mac work. Now that you have helped me understand it better, I can see why disabling "Render in native resolution" when you have your MacBook configured to 1280x800 is something you'd like to avoid, and that's why I am keeping this report alive. I do agree the M1 is a powerful chip, and it's possible we'll find a good way to use the power of the CPU to reduce the workload put on the GPU (which is something we always try to balance, but what makes it complicated is that it needs to be balanced differently for different systems - essentially M1 Macs being near one extreme of the spectrum and the Nintendo Switch being on the opposite one).
For PC, I have tried to define the recommended system requirements for graphics as a HW configuration on which you'll be able to play the game smoothly with high sprite quality and high texture compression quality at 1080p. From my experience of playing on the MacBook Air at 1650x1050, it seemed like HW I could recommend.
ergzay wrote: Tue Nov 05, 2024 2:05 am
I'll add my general perspective that gives context to my statement. As a low-level (of the stack) software engineer myself, it's become a pet peeve of mine that all software just seems to run worse and worse over time even though, from the user's perspective, nothing has changed. So this has become a thing of mine: I never want to make software run worse when it is doing the same thing it did before. If features are added, they should only tax the system more when those features are used. In this case nothing from the expansion or new version is being used, nor are any new effects at play, yet the performance is significantly worse. That's the context for my statement and why it frustrated me, because it hit a pet peeve of mine.
Thanks for your perspective. I am sorry for unleashing some of my anger and frustration from past exchanges with disappointed fans and from years of Space Age development. It was uncool of me. I don't mean professionally (I don't care much about that), I mean as person to person.
But 2.0 doesn't do the same thing it did before, and it is perceivable in vanilla too. Pre-1.1, the game had just one type of light: large spotlights, usually with very wide falloff. These are rendered to a lightmap at 1/4 the size of the game view. In 1.1 we added a new type of light - we call them detail lights (and the old type we now call gradient lights); these are sprites used for LEDs and various glows on parts of entities. They are rendered to a second lightmap at full scale.
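As a toy sketch of that 1.1-era arrangement (my own description - the names and numbers are illustrative, and I'm reading "1/4 the size" as a quarter of the pixel count, i.e. half resolution per axis):

```cpp
// Toy sketch of the 1.1-era setup described above, not actual engine code.
// Gradient lights render to a reduced-resolution lightmap in their own pass;
// detail lights render to a second, full-resolution lightmap in another pass.
#include <cstdio>
#include <initializer_list>

struct RenderTarget {
    int width;
    int height;
    const char* contents;
};

int main() {
    const int view_w = 1920, view_h = 1080;  // example game view size only

    // "1/4 the size" read as a quarter of the pixels (half per dimension) - my assumption.
    RenderTarget gradient_lightmap{view_w / 2, view_h / 2, "gradient lights (spotlights, wide falloff)"};
    RenderTarget detail_lightmap{view_w, view_h, "detail lights (LEDs, glows)"};

    for (const RenderTarget& rt : {gradient_lightmap, detail_lightmap}) {
        std::printf("%-45s %dx%d (%d px)\n", rt.contents, rt.width, rt.height, rt.width * rt.height);
    }
    return 0;
}
```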
In 1.1 there is a flaw with these lights: they render separately from sprites, so if a detail light is behind another sprite, it "shines" through the sprite. This has been reported as a bug several times (115286, 68280, 45614 are the reports I could find quickly, but there were many more; I remember one in particular with a tank and a heat pipe). We discarded such bug reports because, at the time, we thought the performance cost of the fix was not worth it.
But it was one of the first things we changed for 2.0. Now game sprites are rendered into multiple render targets in a single pixel shader invocation - the game view and the lightmap. Detail light sprites are just regular sprites with a flag which the pixel shader interprets: it either blends the sprite into the game view normally and into the lightmap as a light occluder, or into the lightmap as an additive light. This is pretty cheap even on lower-end dedicated GPUs, but it makes every game view pixel about 50% more expensive to render on integrated GPUs with much lower memory bandwidth. And light sprites being just regular sprites with a flag is a core engine change for which it is not practical to offer an option to switch between the old and the new behavior. Whether the cost is worth it is debatable. But the game is not doing the same thing 1.1 did, and as far as I know it does the new thing in a very optimized way (though clearly not an optimal one for M1 + high resolution).
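For readers who want the idea spelled out, here is a CPU-side toy of it (this is not the real shader; the blend formulas and the way an occluder darkens the lightmap are my simplifications):

```cpp
// CPU-side toy, not the actual pixel shader. One "shader invocation" writes two
// outputs - the game view and the lightmap - and a per-sprite flag decides
// whether the sprite is drawn normally (occluding light behind it) or added
// into the lightmap as a detail light. Blend formulas are simplified.
#include <cstdio>

struct Pixel { float r, g, b, a; };

struct Outputs {
    Pixel game_view;  // render target 0
    Pixel lightmap;   // render target 1
};

enum class SpriteKind { Normal, DetailLight };

Outputs shade_pixel(Pixel sprite, SpriteKind kind, Pixel view_dst, Pixel light_dst) {
    Outputs out{view_dst, light_dst};
    float a = sprite.a;
    if (kind == SpriteKind::Normal) {
        // Ordinary alpha blend into the game view...
        out.game_view = {sprite.r * a + view_dst.r * (1.0f - a),
                         sprite.g * a + view_dst.g * (1.0f - a),
                         sprite.b * a + view_dst.b * (1.0f - a), 1.0f};
        // ...and the same coverage darkens (occludes) any detail light already
        // in the lightmap under this sprite, so lights no longer shine through.
        out.lightmap = {light_dst.r * (1.0f - a), light_dst.g * (1.0f - a),
                        light_dst.b * (1.0f - a), light_dst.a};
    } else {
        // Detail light: game view untouched, light added into the lightmap.
        out.lightmap = {light_dst.r + sprite.r, light_dst.g + sprite.g,
                        light_dst.b + sprite.b, light_dst.a};
    }
    return out;
}

int main() {
    Pixel led{0.0f, 0.8f, 0.2f, 1.0f};   // a green detail-light pixel
    Pixel roof{0.3f, 0.3f, 0.3f, 1.0f};  // an opaque sprite drawn on top of it
    Pixel view{0, 0, 0, 1}, light{0, 0, 0, 0};

    Outputs o = shade_pixel(led, SpriteKind::DetailLight, view, light);
    o = shade_pixel(roof, SpriteKind::Normal, o.game_view, o.lightmap);
    // Prints 0.00 0.00 0.00 - the light no longer shines through the sprite.
    std::printf("light left under the opaque sprite: %.2f %.2f %.2f\n",
                o.lightmap.r, o.lightmap.g, o.lightmap.b);
    return 0;
}
```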