A series of escalating liberties

We might naively design the objective function of an open objective system under the expectation that participants will simply tell us, honestly, how intense their preferences are (e.g., how much they care about being positioned near specific people, or how much stronger their desire for a local pizza shop is than the desires of others).

That won't work. Given that option, we should expect a participant to exaggerate the strength of all of their preferences to exert maximum influence over the planning calculations. The result is a system that at best can't measure the strength of preferences, and at worst becomes a game of counting to infinity really fast.

Usually this can be addressed by normalizing participants' preference expressions: take the sum of all of a participant's expressed preferences, divide each preference by that sum, and optimize for the satisfaction of these normalized preferences. This is actually equivalent to giving the participant a finite quantity that they can choose to spend asking for some accommodations while leaving other features of the plan open for others to decide. (And this is arguably a more intuitive UX for it. Once they've allocated all of their quantity, any further allocation should just draw a bit equally away from all of their other allocations.)
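The sum-normalization described above can be sketched in a few lines. This is an illustrative toy, not any real system's code; the preference names are invented.

```python
# Sketch of sum-normalization: scale a participant's expressed
# preference strengths so they sum to 1. The preference names here
# are invented for illustration.

def normalize(preferences):
    """Divide each expressed strength by the sum of all of them."""
    total = sum(preferences.values())
    return {item: strength / total for item, strength in preferences.items()}

honest = normalize({"near_friends": 2.0, "pizza_shop": 1.0, "quiet_street": 1.0})
exaggerated = normalize({"near_friends": 200.0, "pizza_shop": 100.0, "quiet_street": 100.0})

# Scaling every entry up by the same factor changes nothing after
# normalization, so uniform exaggeration buys no extra influence.
assert honest == exaggerated
```

The point the assertion makes is the one in the text: once expressions are normalized, the only thing a participant can do is shift weight between their own preferences, not inflate their total influence.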

And then, if you allowed participants to save up this quantity between voting rounds, they'd be able to express varying overall strength of preference over time. For instance, consider a person who's currently more focused on the virtual world than the physical one. This person won't care what neighborhood they lodge in that year; their position just isn't very important to them. It's useful to let them communicate that to the system, to step back and let others have their way, and we should reward this politeness with additional quantity in later rounds.
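One simple way to realize this (an assumption on my part, since the text doesn't fix the mechanism) is a rollover balance: each round mints a fixed allowance, and whatever a participant doesn't spend carries forward.

```python
# Hedged sketch of a rollover budget: a fixed allowance is minted each
# round, and unspent credit carries into the next round. The specific
# rule is assumed, not taken from the text.

def next_balance(balance, allowance, spent):
    """Credit entering the next round: what was left over plus the new mint."""
    assert 0 <= spent <= balance + allowance
    return balance + allowance - spent

# A participant who sits out a round enters the next one with more weight.
quiet_year = next_balance(balance=0.0, allowance=1.0, spent=0.0)
active_year = next_balance(balance=quiet_year, allowance=1.0, spent=1.5)
```

After the quiet year the participant holds 1.0 saved credit, so in the active year they can spend 1.5, more than a single round's mint, which is exactly the "step back now, have your way later" trade the paragraph describes.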

Finally, if you allow participants to send their quantity to other participants, they can trade prominence during planning for things that they want more than that prominence. At that point, after each of these reasonable additions of freedom, this quantity becomes a currency: periodically minted, distributed to participants, and given value by its utility for a specific kind of planning.

But this currency is not Money. It is a money, but it is not USD, and it is not Bitcoin. It corresponds to a concrete good that is being grown over time: new homes and communities. I don't endorse global finite standard currencies, but quantifying debt makes sense, and it would probably make sense here.

A complication: How much to bid?

If we accept that votes constitute a finite credit, we now have to be more guileful about how to bid. Say we have an opob that divides three fruits between different participants according to how much they bid. Alice likes apples, Bob likes bananas, and both like figs. If Alice knows that Bob doesn't want any portion of the apple, then Alice can bid a very small amount for it and still receive the entire thing, and likewise with Bob and bananas.
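To make the strategic incentive concrete, here's a toy allocator under one assumed rule (the text doesn't specify one): each fruit is split among bidders in proportion to their bids on it.

```python
# Toy proportional-bid allocator. The division rule (shares proportional
# to bids) is an assumption for illustration, not taken from the text.

def allocate(bids):
    """bids: {participant: {item: amount}} -> {item: {participant: share}}."""
    items = {item for b in bids.values() for item in b}
    shares = {}
    for item in items:
        total = sum(b.get(item, 0.0) for b in bids.values())
        if total > 0:
            shares[item] = {
                p: b[item] / total
                for p, b in bids.items()
                if b.get(item, 0.0) > 0
            }
    return shares

# Bob bids nothing on the apple, so a token bid wins Alice the whole
# apple and frees the rest of her budget for the contested fig.
result = allocate({
    "alice": {"apple": 0.01, "fig": 0.99},
    "bob": {"banana": 0.01, "fig": 0.99},
})
```

Here `result["apple"]` is `{"alice": 1.0}` even though Alice committed almost nothing to it; how well you do depends on knowing what the other bidders want, which is the complication the heading names.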

TODO: Alice likes apples, and both Bob and Alice like figs. By default, we'd expect Alice to get the apple and a large part of the fig, while Bob only gets the remaining part of the fig. Is that fair? Should an ideal system instead give Bob the entire fig? I guess not. We can't measure the intensity of people's happiness relative to each other; it has no basis in reality. We can only measure negotiation power. Say that instead of apples, Alice's special fancy is making contact with a sort of special bioelectric field phenomenon she calls "mungst". No one else even recognizes the existence of mungst, let alone wants it, but Alice wants it, and sometimes receives it. Does this mean that Alice should receive less fig?