Fair Processing Versus Autonomy

The core fair information practice principles came directly from "Privacy and Freedom," published in 1967. "Privacy and Freedom" was written prior to the 1970 paper that first described relational databases. When "Privacy and Freedom" was published, autonomy and fair processing could be seen as one and the same. Data was collected from individuals for discrete purposes, and individuals provided their consent for those purposes. Fair processing beyond consent was covered by consumer protection legislation with privacy implications; the best example is the 1970 Fair Credit Reporting Act. A consumer economy required a system of personal data in which all credit-active people needed to participate. We have a legacy of this fair processing legislation, including the Driver's Privacy Protection Act and the Video Privacy Protection Act.

Our current observational world, which drives the analytics that make IoT and AI work, has accelerated the need for fair processing guidance beyond special circumstances to the norm. Can one expect individuals to govern the data-driven world with their informed consent, the means for autonomy? The recent International Conference of Data Protection and Privacy Commissioners placed the emphasis on ethics as the driver for fair processing. Increasingly, the concept of permission where consent isn't effective is pushed through legal provisions such as legitimate interests and legitimate processing. Recent guidance from the Hong Kong, China Privacy Commissioner for Personal Data sets guardrails for processing beyond common understanding.

The IDEAL bill attempts to fill the gap between where consent is effective and fair processing beyond common understanding in its section 4 discussion of consistent uses. The legislation would link consistency to the original specified uses. Instead, should it be linked to processing within the context of the processing to the interests of individuals and individuals as a group? I am concerned that the IDEAL bill would lead to notice inflation to create the means for consistency, rather than a clear means to determine that a use is within the context that would establish that processing is fair.

5 comments

  1. Michelle Richardson
    I agree with Marty. Purpose limitations are the clearest and most effective way to make consent meaningful, and we at CDT would go so far as to say that when it comes to some types of sensitive data, a ban on secondary uses – regardless of individual consent – is appropriate. We are reaching a point where the internet is ubiquitous, opaque, complex, and unavoidable enough that the only way to protect users is to set a floor of appropriate behavior.

    I know we all stretch our analogies when discussing the internet, but it is fair to point out other scenarios where we have decided that individual negotiation is simply not possible. For example, when I walk into a public building, I do not personally need to know, understand, and make a decision about whether there are enough sprinklers or fire extinguishers. When I walk into a drug store, the onus is not on me to negotiate the safety and effectiveness of each drug with each pharmaceutical company. The point is not that some people may be able to navigate the world this way; overwhelmingly, most of us cannot, and a bargain needs to be struck on our collective behalf. It will certainly be complicated, but we need to figure out what baseline behavior we expect in our digital world.

    • Annie Anton
      As the technologist / engineer amongst the experts here, my remarks focus primarily on providing the perspective of someone who would need to be able to implement / codify the law in software. To this end, should the idea of banning secondary uses of information become law, it would make it much easier for engineers to design and implement software that could enforce such a secondary use ban at run time. Otherwise, nothing in our enforcement regime will change, as we will continue to leave enforcement up to lawyers to codify in legal contracts and data use agreements, which may or may not actually reflect how the relevant software operates. Having said that, from the machine learning (ML), artificial intelligence (AI), and data science point of view, such a ban could severely cripple a great deal of valuable research; it is often by analyzing data collected for one purpose to identify patterns beyond that original purpose that the fields of ML, AI, and data science make significant advances. I would expect push back from the ML, AI, and data science communities.
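      To make "enforce such a secondary use ban at run time" concrete, here is a minimal, hypothetical sketch of what such a check could look like in code. The names (Purpose, DataRecord, access) are illustrative assumptions, not drawn from any bill text or existing system: each record carries the purposes specified at collection, and any code path that declares a purpose outside that set is refused.

      # Minimal, hypothetical sketch of purpose-based access control enforced at run time.
      # All names here are illustrative; this is not how any particular system works.

      from dataclasses import dataclass, field
      from enum import Enum, auto


      class Purpose(Enum):
          BILLING = auto()
          SERVICE_DELIVERY = auto()
          SECURITY = auto()
          ANALYTICS = auto()  # a typical "secondary" use


      @dataclass
      class DataRecord:
          subject_id: str
          payload: dict
          # Purposes the data was originally collected for.
          permitted_purposes: frozenset = field(default_factory=frozenset)


      class SecondaryUseError(PermissionError):
          """Raised when code declares a purpose the record was not collected for."""


      def access(record: DataRecord, declared_purpose: Purpose) -> dict:
          """Release the payload only if the declared purpose was specified at collection."""
          if declared_purpose not in record.permitted_purposes:
              raise SecondaryUseError(
                  f"{declared_purpose.name} is a secondary use for subject {record.subject_id}"
              )
          return record.payload


      if __name__ == "__main__":
          record = DataRecord(
              subject_id="user-123",
              payload={"email": "user@example.com"},
              permitted_purposes=frozenset({Purpose.BILLING, Purpose.SERVICE_DELIVERY}),
          )

          print(access(record, Purpose.BILLING))  # allowed: an original collection purpose
          try:
              access(record, Purpose.ANALYTICS)   # blocked: a secondary use
          except SecondaryUseError as err:
              print("blocked:", err)

      The point of the sketch is that a bright-line ban is something software can check mechanically; a contextual or "consistent use" standard would instead require human judgment that is hard to codify at run time.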

  2. Paula Bruening
    I agree with Marty – consistent use as the basis for secondary use will lead to a proliferation of notices that likely won't result in informed choices. His context proposal makes sense. However, while "over-notification" is counterproductive and to be avoided, it doesn't argue for less transparency – I think the notice provisions in the Intel bill reflect the need to keep individuals informed without burdening them with complex notices that don't help them safely navigate the data ecosystem.

  3. Omer Tene
    The problem with any proposal to replace purpose limitation with a policymaker-defined notion of context is that it becomes paternalistic and overrides consent. Where it is real, that is informed and voluntary, consent SHOULD legitimize data use. Of course I know it's difficult to reach that "real consent" standard; but doing away with it undermines the whole framing of this as a privacy protection law. How is context determined absent regard for individual preferences/choices? Who will decide? What does Marty's "processing within the context of the processing to the interests of individuals and individuals as a group" even mean?
    And to Michelle’s comment, I disagree that “a ban on secondary uses [of sensitive data] – regardless of individual consent – is appropriate”. If medical researchers can use health data — with appropriate safeguards – to cure a lethal disease, there’s a strong societal interest to do so even without consent. And certainly if patients — say, of a disease caused by a rare genetic mutation — AGREE to this use, as you assume, who are we to deny them this opportunity? And if we do deny them, we better find another reason to do so than protecting their privacy, which they agreed to trade off.

    • Michelle Richardson
      Hi Omer. I expect that any bill will have exceptions for certain behavior like cybersecurity efforts, traditional business practices like billing or system maintenance, and similar uses that are just fundamentally consistent with offering a service. Whether and how to include de-identified data as an exception, or by implication through a linkability standard, seems to be in the mix too. My understanding is that the situation you describe – research on a rare genetic mutation – happens through intentional participation in a study (and therefore not secondary at all), through very specifically defined de-identification practices, or upon peer review under HIPAA. Do we want unregulated entities to be able to do similar medical research without similar controls?

      But that gets to whether we center a bill around theoreticals and edge cases or the everyday, widespread data processing practices that we know go on as a regular course of business. I would rather base a bill on the latter and write smart exceptions.