Open objective systems raise complications around privacy that I should address.
If we want to be open, transparent, and accountable about how we develop plans for meeting people’s preferences, that will require some degree of openness about what those preferences are. Some preferences are somewhat embarrassing: petty grudges, crushes, kinks, phobias. Many of these preferences will have to be revealed to planners.
Sometimes this can be solved by limiting planning to a few trusted people. This somewhat undermines the point of an opob, but not completely: you can sometimes trust a closed, opaque group as long as you know there is openness and accountability internally. You might not be able to hold them accountable, but if they can hold each other accountable, certain problems become much less likely.
You could also limit competition to algorithmic planners that run on isolated computers and have their memories erased once they’ve produced their proposals, which is something you can’t do with human planners. (From what I can tell, making this kind of process leak-resistant and open/censorship-resistant is technically interesting but actually possible.)
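The isolated-planner idea can be sketched as a toy information-flow model. Everything here is hypothetical illustration, not a real system: a real deployment would need OS- or hardware-level isolation, not a Python function. The point is only the boundary discipline: raw preferences are visible inside the planner’s scope, and nothing but the final proposal crosses out.

```python
def propose(preferences):
    """Toy isolated planner (hypothetical): aggregates preference
    weights across people and returns only the winning option.

    The raw, possibly sensitive, per-person preferences exist only
    inside this call; the caller sees nothing but the proposal.
    """
    scores = {}
    for _person, prefs in preferences.items():
        for option, weight in prefs.items():
            scores[option] = scores.get(option, 0) + weight
    # Only the aggregate result is disclosed; the per-person data
    # is discarded when the call returns.
    return max(scores, key=scores.get)
```

A scope boundary is of course much weaker than an erased, air-gapped machine, but it shows the shape of the guarantee you would be asking the stronger mechanism to enforce.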
Sometimes it can’t be solved in either of those ways: when the opob requires a human hand, and when the opob is serving a plurality of conflicting interests. In these cases we should remind ourselves that a world where different people can be honest and straightforward about their preferences is both desirable and necessary, for how can we agree to approach our ideal world if we can’t even admit to each other what it would be?