Ben Garfinkel, a research fellow at Oxford University, writes about the difference between social privacy (what your intimates and acquaintances know about you) and institutional privacy (what governments and corporations know about you):
How about the net effect of these two trends? Have the past couple hundred years of change, overall, constituted decline or progress?
[...]
My personal guess is that, for most people in most places, the past couple hundred years of changes in individual privacy have mainly constituted progress. I think that most people would not sacrifice their social privacy for the sake of greater institutional privacy. I think this is especially true in countries like the US, where there are both high levels of development and comparatively strong constraints on institutional behavior. I think that if we focus on just the past thirty years, which have seen the rise of the internet, the situation is somewhat more ambiguous. But I’m at least tentatively inclined to think that most people have experienced an overall gain.

— “The Case for Privacy Optimism”
And overall he concludes that he is optimistic about privacy trends, particularly because of artificial intelligence:
The existence of MPC [Multi-Party Computation] protocols implies that, in principle, training an AI system does not require collecting or in any way accessing the data used to train it. Likewise, in principle, applying a trained AI system to an input does not require access to this input or even to the system’s output.
The implication, then, is this: Insofar as an institution can automate the tasks that its members perform by training AI systems to perform them instead, and insofar as the institution can carry out the relevant computations using MPC, then in the limit the institution does not need to collect any information about the people it serves.
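To make the MPC idea concrete, here is a minimal sketch of additive secret sharing, one of the simplest building blocks of multi-party computation. It is an illustration, not Garfinkel's example: each private value is split into random shares distributed across several parties, no single party ever sees a raw input, yet the parties' partial sums combine into the true total. (The party count and modulus are arbitrary choices for the sketch.)

```python
import secrets

MODULUS = 2**61 - 1  # a large prime; all arithmetic is done modulo this


def share(value, n_parties):
    """Split a private value into n additive shares that sum to it mod p."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    last = (value - sum(shares)) % MODULUS
    return shares + [last]


def secure_sum(private_values, n_parties=3):
    """Sum private inputs without any party seeing a raw value.

    Each input is split into shares, one per party; each party only
    accumulates the shares it receives. Combining the parties' partial
    sums reconstructs the total, but each partial sum alone reveals
    nothing about any individual input.
    """
    partial = [0] * n_parties
    for v in private_values:
        for i, s in enumerate(share(v, n_parties)):
            partial[i] = (partial[i] + s) % MODULUS
    return sum(partial) % MODULUS


# Example: three users' private values are aggregated without exposure.
print(secure_sum([12, 7, 30]))  # 49
```

Real MPC protocols support far richer computations than a sum, including the model training and inference Garfinkel describes, but the privacy property is the same: the institution computes over data it never directly holds.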
This view, which of course assumes quite a bit of technology, is plausible and consistent with the views of a number of other researchers, who see AI as a potential tool for managing human bias and limiting privacy intrusions.
I also tend to believe the glass is half full. That’s my own bias.