DerivePi wrote:Then I have to say that you shouldn't be playing multiplayer. You can detect when something is changed, but you can't detect the purpose the player had in mind when changing it. Of course, as a last option you can beg the admin, if present, to kick that player.
...yes, which is why I'm not proposing a system that relies on detecting people's purposes (or indeed suggesting any mechanism for deciding whom to ban at all).
torne wrote: I don't think that you can define a set of actual systems that dictate what constitutes "reportable" behaviour.
Of course you can. There is definitely a framework of rules that can (and should) be crafted that emphasizes "fun" over unfairness. I personally don't care for the global report system. I just want them removed, after due process, from the game I'm currently playing. Just give me a button that says "Do you want JackA removed from the game for griefing?" If a majority click yes, the griefer is gone. Back to fun. Due process would be the occurrence of a specific offense as detected by the system, the lodging of a complaint, and then a vote.
I'm also not saying it's impossible to come up with any rules for a server, or have any system for voting on bans, or anything. I'm responding to the idea that you can have the *game* detect these situations in a sensible way. Humans can come up with any kinds of rules and systems they want, and there's lots of possible useful ideas there, but as you said yourself it's pretty hard for the game to judge intent.
vtx wrote:You can see the account verification as READ ONLY data on the factorio server. This would mean allowing game servers to WRITE data to the factorio server. It would be easy to "hack" that data directly, so some people could affect your reputation without you ever connecting to their server.
I don't think you understood what I'm trying to explain here, because I didn't really flesh it out that much. This isn't about allowing anyone to write to data on the factorio server (or the factorio server having anything to do with this process whatsoever other than login attestation) - this is about enabling someone to run a service that individual game servers can use to report and judge people's reputations, while preventing servers from submitting reports about players who have never interacted with that server.
The account verification system currently provides a mechanism to check that players are who they say they are and that they own the game - nobody can go on a server that has verification enabled and claim they are "torne" except for me. This works by the server talking to both my client and the verification server and using a bit of cryptography: simplified, my client sends some data that identifies me, the server I'm connecting to forwards that data to the verification server, and the verification server tells the server whether it's "legit" or not.
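Roughly, that simplified flow could be sketched like this. This is a toy model, not Wube's actual protocol - the shared-secret scheme, the class names, and the password handling are all illustrative assumptions; the point is just that the game server only forwards opaque data and learns nothing but "legit or not":

```python
import hmac
import hashlib
import secrets

def derive_key(password: str) -> bytes:
    # Assumption: client and verification server can both derive the same
    # key from the account password (a real system would use a proper KDF).
    return hashlib.sha256(password.encode()).digest()

class VerificationServer:
    """Stand-in for the central verification server."""
    def __init__(self):
        self.keys = {}  # username -> derived key

    def register(self, username: str, password: str):
        self.keys[username] = derive_key(password)

    def check(self, username: str, nonce: bytes, proof: bytes) -> bool:
        # Recompute the expected proof and compare in constant time.
        expected = hmac.new(self.keys[username], nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)

def client_prove(username: str, password: str, nonce: bytes) -> bytes:
    # The "data that identifies me": an HMAC over a fresh nonce.
    # The game server just forwards this verbatim; it can't read or forge it.
    return hmac.new(derive_key(password), nonce, hashlib.sha256).digest()

# Game server's side: issue a nonce, forward the client's proof for checking.
vs = VerificationServer()
vs.register("torne", "hunter2")
nonce = secrets.token_bytes(16)
proof = client_prove("torne", "hunter2", nonce)
print(vs.check("torne", nonce, proof))  # True: login is "legit"
```

Note that in this model the game server is left with nothing it can show to a third party afterwards - only the verification server can check the proof, which is exactly the limitation the next part is about.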
What I'm suggesting is that it would be possible to do that same process in a way that leaves the server I'm connecting to with a piece of data that basically says "torne connected to this server at this date/time", which would be possible for *anyone* to verify as being true, and which nobody can forge unless they know my account password. If the login process worked like this (which it probably doesn't right now), then someone could come along and use that as the basis to invent a reputation service. Servers could use this proof to submit reports about that user (good or bad, whatever form you wanted to have, entirely up to whoever makes the service), and while obviously the server might say "they're a griefer" when they're not, what it couldn't do is make a report about you if you'd never been on that server. So, a bunch of evil people could ban you from their server for no reason, and report that to a service, but they can't submit 500 separate reports about you unless you connect to 500 different evil servers.
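The "anyone can verify, nobody can forge" property described above is what digital signatures give you. Here is a sketch using Ed25519 from the third-party `cryptography` package - the key pair stands in for something derived from the account password, and the server name and timestamp format are made-up examples:

```python
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Assumption: the client holds a private key tied to the account, and the
# matching public key is published alongside the username.
client_key = Ed25519PrivateKey.generate()
client_pub = client_key.public_key()

# On login, the client signs a statement naming the server and the time.
statement = f"torne connected to server.example at {int(time.time())}".encode()
attestation = client_key.sign(statement)

# *Anyone* holding the public key can later verify the attestation is real...
try:
    client_pub.verify(attestation, statement)
    print("attestation verified")
except InvalidSignature:
    print("forged")

# ...but a server cannot fabricate a statement about a user who never
# connected: the signature only checks out against the exact signed bytes.
try:
    client_pub.verify(attestation, b"torne connected to evil.example at 0")
    print("forgery accepted")
except InvalidSignature:
    print("forgery rejected")
```

The game server keeps `(statement, attestation)` as its proof-of-connection, and a reputation service can demand that pair before accepting any report about that user.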
This doesn't involve Wube actually getting involved in the process of saying who is a griefer or not, and there could be more than one different service tracking stuff - it would be entirely up to server operators which services they interact with, and what conditions they use to determine who to allow onto their server, or who to report. The only part that actually has to be implemented in the game itself is the cryptographic mechanism to prove someone was really connected to the server in the first place, without which it is indeed easy to just submit a huge number of completely made up reports about people.
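As a sketch of what "entirely up to server operators" might look like in practice: an operator could consult whichever services they trust and combine the answers however they like. The service interface, names, and threshold below are all invented for illustration:

```python
class FakeReputationService:
    """Toy stand-in for one independently-run reputation service."""
    def __init__(self, reports):
        self.reports = reports  # username -> count of verified bad reports

    def bad_report_count(self, username: str) -> int:
        return self.reports.get(username, 0)

def allow_player(username, services, max_bad_reports=3):
    # Operator policy (one of many possible): deny if any trusted service
    # has accumulated too many verified bad reports about this player.
    for service in services:
        if service.bad_report_count(username) > max_bad_reports:
            return False
    return True

services = [FakeReputationService({"JackA": 5}), FakeReputationService({})]
print(allow_player("torne", services))  # True
print(allow_player("JackA", services))  # False
```

The key point is that none of this policy lives in the game itself - only the connection-proof mechanism does.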
I've experimented with building distributed reputation systems before, and there are a lot of different ways to handle the reputation aspect, but what you always need as a starting point is a secure system of *identity* for proving who is who.