Topic

12/2 Changes Made to Section 4


One comment on “12/2 Changes Made to Section 4”

  1. David Hoffman
    We have noticed two typos in Section 4 of the bill that are creating confusion on how the use limitation section works. The two changes we are making to clarify our intent are:

    1. In Section 4c we had a reference to “notices required by Section 4(b).” This should instead be “notices required by Section 4(f)”.
    2. In Section 4d(1)(C) we had a reference to “C. any uses that satisfy the language of consistent uses under 4(d)(c)”. This should instead be “C. any uses that satisfy the language of consistent uses under 4(d)(3)”.

    We are also cleaning up some formatting issues, but at this time those are the only substantive changes.

Topic

De-Identification

I suggest that this draft legislation incorporate an appropriate standard for de-identification, based on risk factors related to the potential for re-identification.  The HIPAA model represents at this point the most well-defined and sophisticated regulatory approach that I am aware of (there obviously are others, most of which also incorporate a risk-based approach). The…

Topic

Preemption of State Law

Preemption Part 1: Background on Federal Preemption of Stricter State Laws. For the discussion of this bill, I am planning to focus on one very tricky legal issue – preemption. My focus is not on whether preemption in general is a good idea. The basic trade-off widely discussed currently is to get privacy protections nationally…


5 comments

  1. Peter Swire
    Preemption Part 2. This post discusses points raised by the text of Section 10 of Intel’s draft bill, on preemption. The attempt in this post is to spot issues that have not been widely discussed to date. A main theme – preemption is a technically complex topic, and a lot of careful legal work is needed to avoid unintended consequences.

    1. Section 10(a) sets forth the general preemption provision – “any civil provisions of the law of any State” that “are primarily focused on the reduction of privacy risk.”

    Comment: There will be debates about what counts as “primarily focused on the reduction of privacy risk.” Some sort of vagueness will likely be necessary for any preemption provision, because of the wide scope of laws that might address the handling of personal information in an information economy such as ours. This is one area where legislative history, including Congressional findings, may be of use in clarifying the meaning of the text.

    2. Section 10(b) says that the new law will not be “construed to limit the enforcement of any State consumer protection law by the attorney general of the State.”

    Comment: As written, the text would apparently preempt general consumer protection claims brought by individuals or class actions. All 50 states have “little FTC Acts,” which prohibit unfair and deceptive practices. In quite a few states, there are at least some circumstances where individuals can bring claims under the little FTC Acts.

    Two interpretations seem possible here. First, the apparent intent of the provision is to prevent “privacy” claims by individuals under the little FTC Acts. Second, the provision could also be read to preempt ALL claims by individuals under state consumer protection laws, even if they were not privacy-related. If only the first is intended, then language should be added that clarifies that non-privacy consumer protection laws would remain in effect for individual enforcement.

    3. Section 10(c) sets forth a fairly short list of state laws that would remain in effect despite the broad preemption language. For instance, general common law or statutory claims under tort, contracts, and trespass would continue, as well as state laws aimed at preventing fraud. Also, 10(c)(4) has a useful provision stating that contracts about privacy are enforceable under state law.

    Maintaining the background common law (and statutes) of tort and contract is a sensible idea. One can imagine the following interpretive problem, however. Suppose that a state passes a statute that says: “Under State tort law, it is a tort if there is a privacy invasion.” Or, “Under State contract law, violation of privacy is breach of contract.” At that moment, the preemption provision in Section 10(a) conflicts with the retention of state law under Section 10(c). Some more work is needed here to clarify the interconnection of tort and contract law with the laws in 10(a).

    It won’t work to try to define laws as “common law tort and contract” protections. Modern tort and contract law incorporate a huge number of statutes in addition to common-law case development.

    4. Section 10(c)(3) says that medical privacy provisions, with respect to entities covered by HIPAA, are not preempted. Most states have at least some additional medical privacy laws, so this provision would save those long-standing laws from preemption. Note, however, that state medical privacy laws apparently would be preempted with respect to organizations that are not HIPAA covered entities. These might include, for instance, HIV-discrimination laws, or privacy laws governing substance abuse clinics that are outside of HIPAA. When HIPAA went into effect in 2003, there was a great deal of work done on the intersection with state laws. For any federal privacy law to move forward, I would suggest careful attention to this range of state privacy laws, by HHS and others.

    5. There is a somewhat glaring omission of how the draft bill intersects with state laws that implement previous federal privacy laws. As stated in my prior post, that list includes at least these: the Electronic Communications Privacy Act (ECPA); the Right to Financial Privacy Act; the Cable Communications Privacy Act; the Video Privacy Protection Act; the Employee Polygraph Protection Act; the Telephone Consumer Protection Act; the Driver’s Privacy Protection Act; and the Telemarketing Consumer Protection and Fraud Prevention Act (Do Not Call).

    To take one prominent example, the draft bill appears to preempt the state laws that require two-party consent for wiretaps. ECPA itself only requires one-party consent, but a number of states have long required consent from both parties before the audio taping is permitted. I am not taking a position on how to proceed with these previous privacy regimes, but experts in each regime should be engaged in deliberations on the text.

    6. Similarly, GLBA sets a floor for financial privacy protections, but states are allowed to be stricter (except where Fair Credit Reporting Act preemption applies). Many times, GLBA and HIPAA are considered somewhat equivalent, as federal laws that cover huge sectors (financial services and health care). The draft bill permits stricter medical privacy laws at the state level, but not stricter financial privacy laws.

    7. Social Security number laws, and other lesser-known existing state laws. Many states have specific laws limiting how companies can use Social Security numbers. It appears that those laws would be preempted, unless they count as “anti-fraud” laws. More generally, before preempting, Congress should hold hearings to learn the range of state laws that currently primarily address the reduction of privacy risk. At least where the states have sensible laws already in place, we should be thoughtful before repealing those laws. For years, the late Robert Ellis Smith published an annual update of state privacy laws that were in effect.

    8. Grandfathering of state laws. Given the somewhat dizzying number of existing state laws that would be preempted, an alternative approach would be to “grandfather” some or all existing state privacy laws. This sort of grandfathering provision is extremely common in federal legislation, including when the Fair Credit Reporting Act was amended to include preemption.

    This sort of general grandfathering approach would face opposition from the business community. After all, one impetus for federal legislation has been to preempt the California Consumer Privacy Act. One possible approach for drafting is to have a general grandfathering provision, but negotiate a specific list of state laws that would be preempted, such as the CCPA.

    9. Data breach and cybersecurity laws. Are state data breach laws “primarily focused on the reduction of privacy risk”? Maybe so. If so, the draft bill preempts all the state data breach laws, without providing any federal framework for data breaches. I don’t think preempting data breach laws was the intent, so that needs to get fixed.

    Similarly, I can see a pretty good argument that state cybersecurity laws are “primarily focused on the reduction of privacy risk.” Multiple states have built extensive legal regimes about businesses using encryption and following other cybersecurity practices. The bill should, at a minimum, be clear whether it is preempting these state cybersecurity laws.

    10. As this post has tried to illustrate, there is an enormous amount of existing federal and state privacy law outside of the current FTC enforcement regime. In my experience, many of the people involved in general privacy legislation have implicitly assumed that the FTC more or less “occupies the field” for privacy protection. My comments here invite FTC-focused privacy experts to consider the huge amount of existing U.S. privacy law that has little or nothing to do with the FTC. Years of hard work and enforcement of those existing laws should not be thrown away by a casual preemption process.

    11. Why defining the scope of preemption is difficult. Some statutes are relatively narrow in scope. CAN-SPAM, for instance, preempts only laws that “expressly regulate the use of electronic mail to send commercial messages.” Compare that with the incredibly broad scope of a general privacy law – all commercial use of personal data, in our complex economy, in this information age. Section 10(a) may be about as good a general principle as can be found. But there should be a lot more subsections in Section 10 to address the other issues discussed here.

    12. Preemption is technically complex, as well as politically controversial. Perhaps the most important lesson is that preemption is a technically complex subject. Subject-matter expertise is needed to intersect with all of the different regimes discussed in this post. Careful drafting is essential, preferably after developing a detailed legislative record.

    13. An anecdote, to close. During the drafting of the HIPAA medical privacy rule, we had to consider the intersection with the federal education privacy law, FERPA. (Think school nurses and college medical clinics.) Can you imagine how many states have at least some law governing the privacy of school children? To address the medical-related issues, I remember meeting with a lawyer whose practice focused on representing school boards. In the first meeting, she mentioned at least a dozen issues that I had never considered. I urge study of existing federal and state laws before disrupting many things outside of the focus of companies that are thinking mostly about FTC enforcement.

  2. Omer Tene
    Great comment, Peter – so invaluable. It really fleshes out why I believe this will end up being perhaps the most contentious and debated provision of the law. The comparison to CAN-SPAM in item 11 is instructive. The difference between the relatively narrow scope of “commercial email” and the incredibly vague boundaries of laws “primarily focused on the reduction of privacy risk” couldn’t be starker. There is so much that could be packed into that term.
    Perhaps the most practical – and sure to be controversial – example is your item 9, state breach notification laws. Of course, there is good reason to harmonize 50 distinct state laws addressing the same issue. But by no means is it clear that this should be wrapped into a federal privacy bill. And if it is, of course the bill would have to address the issue head on.
    Thanks again for the insights Peter.

  3. David Hoffman
    Peter – I agree with Omer that this is both going to be one of the most challenging areas for drafting, and also one of the most critical to get right. Your post is a fantastic contribution to the discussion, and this is an area that needs much more analysis. Let me make a narrow point and then a broader comment about the goals of what we tried to achieve.

    State Data Breach Notification Laws – Our drafting goal was consistent with what Omer notes: we do not want to preempt these laws. While I do believe a federal data breach notification law would be much better than the non-harmonized patchwork we have right now, privacy legislation is difficult enough to draft without also including this task. We had thought we had effectively carved this out of the preemption language with the inclusion of “data breach notification” in Section 10(c)(1). However, several people have said they do not believe we have exempted those laws from preemption. Do you have a recommendation on how to make this clearer in our next draft?

    Goals – Our overall objective with the preemption language was to create a uniform national standard for information privacy that would allow organizations to implement compliance programs while minimizing the need to pay large law firm lawyers to analyze fifty different (and potentially at times conflicting) sets of requirements. My experience is that such a patchwork is often bad for both individuals (unclear what rights they have) and organizations (the aforementioned challenge of designing a compliance program and the increased cost of legal fees). We also want to create an even playing field across industry sectors, as we increasingly see organizations operating in multiple areas and wanting to take advantage of the value of combining data across their diverse operations. Having the FTC oversee a common set of standards across all industry sectors seems like the best way to create that even playing field.

    To accomplish these objectives we wanted to preempt state laws aimed at the overall issue of information privacy (CCPA) while not preempting laws that, while they may have privacy implications, are primarily not aimed at the same topic (trespass). We also were not intending that this law would be the only law governing information privacy; Congress may consider some areas of information use so risky that it wants an additional law (e.g. HIPAA, GINA). Once we made that decision, we also felt it important to leave in place the state laws that act as additional protections for those federal statutes. Perhaps that approach is not feasible. Your example of GLBA is excellent. I am hard pressed to understand why it would be a good idea to have a separate system for the financial sector in addition to this bill. Do you think it would be more advisable to attempt to also cover the privacy portions of other federal laws (HIPAA, GINA, COPPA, GLBA, VPPA, etc.) and explicitly also preempt them here with a grandfathering clause?

  4. Tim Sparapani
    Peter, David, Omer and other friends – I see two key, unaddressed preemption questions that deserve extra thought from Intel and other thought leaders. They are, in brief: (1) How to preempt without eviscerating private contractual requirements that reference other privacy laws (that might be preempted) and that function to create important privacy protections between businesses for consumers?; and
    (2) How to preempt without destroying several decades of important court decisions that substantially advance personal privacy and protect the public?

    To address the first question again: How can we have the sort of preemption that leads us to advancing privacy without accidentally jettisoning private contracts between parties (IMHO one of the most important forcing functions driving improved privacy practices from one company through its vendors, partners and customers)? I worry that preemption, which I generally favor as long as the floor of preemption is a high one that advances personal privacy substantially, will lead to accidental removal of either federal sectoral or state laws upon which private contracts requiring privacy standards are based. The clearest example of this might be state student privacy laws/regulations, COPPA, or HIPAA, which are referenced in a myriad of contracts leading to at least minimum privacy protections for the public.

    Addressing the second question, we must realize that numerous, essential court cases have incrementally advanced personal privacy. Often the basis for those important judicial decisions is the existence of either federal sectoral privacy laws or state privacy laws. Eliminate those laws through preemption and we accidentally produce a result that causes a retreat from important, judge-made law advancing personal privacy.

    I don’t have the answers, yet, but I know these questions need to come into focus and deserve our collective best efforts to answer them in a pro-privacy way.

  5. Kirk Nahra
    One additional possibility to think about on preemption. I base this on my experience primarily working with health care privacy laws over the past 20 years, where many of the state laws are confusing, narrow, very detailed, and often ignored and unenforced. I have used the analogy for a HIPAA preemption analysis that says that HIPAA is written in English and the state laws are written in French, and they just don’t talk to each other. Most of these laws were passed long ago for separate specific purposes, typically before HIPAA existed. My suggestion – which I have made in the health care space and now encourage consideration of here on a broader level – would be to have preemption apply to essentially wipe out existing laws on the books today (subject obviously to Peter’s comments and other comments about what this would actually apply to), but then permit states to pass “tougher” laws in the future. This would be a compromise approach, borrowed from FCRA. It would wipe off the books older and perhaps unnecessary laws. It would permit a state to look at the new federal law and say “we want to improve privacy protection in this specific area as compared to the federal law.” Presumably (although this cannot be guaranteed – see the Texas health care privacy law that was passed after HIPAA) this approach would remove laws that are no longer needed or useful, but would allow newer laws that could be easily compared to the federal law in terms of where the differences are (if the states were thoughtful about what they were actually trying to make tougher). This would not be complete preemption (which, all things equal, I might prefer), but would result in a more limited and focused set of additional laws that would require compliance.

Topic

Fair Processing Versus Autonomy

The core fair information practice principles came directly from “Privacy and Freedom” published in 1967.  “Privacy and Freedom” was written prior to the publication of the paper in 1970 that described relational databases.  When “Privacy and Freedom” was published, autonomy and fair processing could be seen as one and the same.  Data was collected from…


5 comments

  1. Michelle Richardson
    I agree with Marty. Purpose limitations are the clearest and most effective way to make consent meaningful, and we at CDT would go so far as to say that when it comes to some types of sensitive data, a ban on secondary uses – regardless of individual consent – is appropriate. We are reaching a point where the internet is ubiquitous, opaque, complex, and unavoidable enough that the only way to protect users is to set a floor of appropriate behavior.

    I know we all stretch our analogies when discussing the internet, but it is fair to point out other scenarios where we have decided that individual negotiation is simply not possible. For example, when I walk into a public building, I do not personally need to know, understand, and make a decision about whether there are enough sprinklers or fire extinguishers. When I walk into a drug store, the onus is not on me to negotiate the safety and effectiveness of each drug with each pharmaceutical company. The point is not that some people may be able to navigate the world this way, but overwhelmingly most of us cannot and a bargain needs to be struck on our collective behalf. It will certainly be complicated, but we need to figure out what baseline behavior we expect in our digital world.

    • Annie Anton
      As the technologist / engineer amongst the experts here, my remarks focus primarily on providing the perspective of someone who would need to be able to implement / codify the law in software. To this end, should the idea of banning secondary uses of information become law, it would make it much easier for engineers to design and implement software that could enforce such a secondary use ban at run time. Otherwise, nothing in our enforcement regime will change, as we will continue to leave enforcement up to lawyers to codify in legal contracts and data use agreements, which may or may not actually reflect how the relevant software actually operates. Having said that, from the ML (machine learning), artificial intelligence (AI), and data science point of view, such a ban could severely cripple lots of great research; it is often by analyzing data collected for one purpose to identify patterns (not the original purpose) that the fields of ML, AI, and data science make significant advances. I would expect push back from the ML, AI, and data science communities.
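
A minimal sketch, in Python, of the run-time enforcement Annie describes: records carry the purposes declared at collection, and access for any undeclared (secondary) purpose is refused. All names and types here are invented for illustration, not drawn from the bill:

```python
from dataclasses import dataclass, field


class SecondaryUseError(PermissionError):
    """Raised when data is accessed for a purpose it was not collected for."""


@dataclass(frozen=True)
class TaggedRecord:
    subject_id: str
    payload: dict
    collected_for: frozenset = field(default_factory=frozenset)  # declared purposes

    def access(self, purpose: str) -> dict:
        # The run-time guard: deny any purpose not declared at collection,
        # instead of leaving enforcement to contracts and data use agreements.
        if purpose not in self.collected_for:
            raise SecondaryUseError(
                f"purpose {purpose!r} is a secondary use; "
                f"declared purposes: {sorted(self.collected_for)}"
            )
        return self.payload


record = TaggedRecord("user-17", {"zip": "30332"}, frozenset({"billing"}))
print(record.access("billing"))        # permitted: declared at collection time
try:
    record.access("ad_targeting")      # secondary use: blocked mechanically
except SecondaryUseError as err:
    print("blocked:", err)
```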

  2. Paula Bruening
    I agree with Marty – consistent use as the basis for secondary use will lead to a proliferation of notices that likely won’t result in informed choices. His context proposal makes sense. However, while “over-notification” is counterproductive and to be avoided, it doesn’t argue for less transparency – I think the notice provisions in the Intel bill reflect the need to keep individuals informed without burdening them with complex notices that don’t help them safely navigate the data ecosystem.

  3. Omer Tene
    The problem with any proposal to replace purpose limitation with a policymaker-defined notion of context is that it becomes paternalistic and overrides consent. Where it is real, that is, informed and voluntary, consent SHOULD legitimize data use. Of course I know it’s difficult to reach that “real consent” standard; but doing away with it undermines the whole framing of this as a privacy protection law. How is context determined absent regard for individual preferences/choices? Who will decide? What does Marty’s “processing within the context of the processing to the interests of individuals and individuals as a group” even mean?
    And to Michelle’s comment, I disagree that “a ban on secondary uses [of sensitive data] – regardless of individual consent – is appropriate”. If medical researchers can use health data — with appropriate safeguards – to cure a lethal disease, there’s a strong societal interest to do so even without consent. And certainly if patients — say, of a disease caused by a rare genetic mutation — AGREE to this use, as you assume, who are we to deny them this opportunity? And if we do deny them, we better find another reason to do so than protecting their privacy, which they agreed to trade off.

    • Michelle Richardson
      Hi Omer. I expect that any bill will have exceptions for certain behavior like cybersecurity efforts, traditional business practices like billing or system maintenance, and similar uses that are just fundamentally consistent with offering a service. Whether and how to include de-identified data as an exception, or by implication through a linkability standard, seems to be in the mix too. My understanding is that the situation you describe – research on a rare genetic mutation – happens through intentional participation in a study (and therefore not secondary at all), through very specifically defined de-identification practices, or upon peer review under HIPAA. Do we want unregulated entities to be able to do similar medical research without similar controls?

      But that gets to whether we center a bill around theoreticals and edge cases or the everyday, widespread data processing practices that we know go on as a regular course of business. I would rather base a bill on the latter and write smart exceptions.

Topic

How to make a privacy statute that will stand the test of time

Brevity is the soul of wit, and of durable legislation. I admire the core principles in Intel’s proposed bill but suggest that it could be made much simpler, placing confidence in the expanded enforcement powers of the FTC, State AGs and the courts, to implement core privacy principles in a wise manner. Congress, guided by…


6 comments

  1. Tim Sparapani
    Here’s the TLDR summation of my response to Danny – Bright lines with rules for interpretation of those brief rules are the most workable solution for protecting consumers and benefiting innovators, but companies need to be forced to undertake the process of analysis, and so Intel’s emphasis on setting out process and requiring accountability is, of necessity, spelled out at length. In short, we need bright lines and we need legislatively-mandated process. Intel’s draft, wisely, does both.

    I’m of two minds with respect to Danny’s thoughtful comment. We all want and need a law that will stand the test of time. Numerous privacy laws (ECPA, COPPA to name just two) have had their viability undercut by a focus on regulating existing technologies. Others, such as HIPAA, are less protective than they should be because they were overly reliant on weighty process or a too-keen focus on the current type of business relationships and market construction at the time of their enactment. Legislators are facing a once-in-a-generation chance to regulate wisely. My definition of wise regulation — to further illustrate my recommendations below — is enactment of a workable system that gives us all the benefits of technological advancement with a means for mitigating or eliminating the consequences to both individuals and society from any advancement.

    So, how best to draft a law that withstands the test of time and achieves these twin results of being simultaneously pro-innovator and pro-consumer? Whether the law is brief or verbose, we should attempt to enact a law that anticipates as many of the emerging challenges as possible and drafts policy to help guide the future.

    As I’ve learned time and time again advising startups, innovators want bright line rules. Short, sweet, direct. I’ve heard repeatedly from innovators some version of the refrain “Just tell me what it is I need to do or don’t do and I’ll code my software, build my systems and design my User Interface accordingly.” So, yes to bright lines. That will help us get innovators bought in to this legislation and give them rules they can readily remember and properly implement to advance compliance. Most startups have a bias against weighty rules — at first — until they realize that a lack of clarity leaves them feeling exposed. That can, quite surprisingly, sometimes slow their pace of innovation. If the answers are spelled out in the law or regulation they can adapt to anything.

    Consumers, surprisingly, often function best with bumper sticker short bright lines. Think of these as privacy slogans that they can remember and wield to advance their data protection. A good example is that every American believes they have a right to free speech. The intricacies of the First Amendment and the limitations on that speech — hint, there are many — are lost, but the bumper sticker protection gives a workable system that can generally ground both companies and consumers.

    These short, bright line rules are especially helpful in guiding emerging technologies without burdening innovation or blocking new technologies. Short, straightforward rules partner well with avoiding dystopian outcomes from misuses of new and emerging technologies involving consumer data.

    So the answer for the length of legislation should be short and sweet, right? Not so fast.

    While the world’s largest companies and those who are already serving customers in the EU are now accustomed to engaging in analysis of their data collection, storage, use and sharing, they didn’t get there overnight. And, they’ve learned a lot about what works and what doesn’t. Unfortunately, it’s my experience that the expertise needed to undertake this systemic privacy analysis is not often available to startups. More importantly, it’s not often found at companies that aren’t used to thinking of themselves as consumer data companies. Most manufacturers don’t understand how they’ve also become companies chock full of consumer data, for example. Therefore, both startups and old-line companies (aka those that don’t think of themselves as “tech companies”) need wise privacy process spelled out for them. If these companies don’t have a virtual roadmap dictating what it takes to analyze privacy risks to their customers, they may do a cursory analysis at best. Or, they might miss important steps. Or, they might not be able to bind their executives to the work it takes to implement proper privacy protective processes throughout the business. Worse still, they might fail to bind their vendors or subcontractors, or let data sneak out the back either accidentally or via sale to a third party that won’t protect that data appropriately or could misuse it.

    Here’s the genius of Intel’s draft. It gives us bright line rules for emerging technologies and then spells out privacy analysis and process for entities.

    • Danny Weitzner
      I agree 100% with Tim as to the process that large data-holding companies need to go through in order to be good stewards of personal data. However, I do not believe it is the role of either Congress or the FTC (in a rulemaking process) to prescribe such internal procedures. The extensive accountability requirements described in section 4(h) are great guidance for at least certain organizations, but they are not necessary to ensure that a company refrains from abusing personal data, nor are they sufficient to ensure that such abuse doesn’t happen. So I consider it overkill to expend enforcement resources on monitoring all of these steps when they do not necessarily produce the right result. Simply put, companies should be held to specific substantive privacy standards — the ones in section 4 are pretty good. If they follow these rules, that’s great. If not, they should be punished, regardless of whether or not they had a good accountability process in place. So, right off the bat, I would remove section 4(h). The standards there are great advice about how to work to be responsible, but this statute ought not to be in the business of giving management advice. It should state rights and responsibilities clearly and see that they are enforced.

      • David Hoffman
        Danny, I am interested to know what portions of Section 4h you believe would be overkill. In my experience they are high-level concepts that all organizations that handle personal data should follow. I still think we do need a carve-out from the bill for entities that are small, do not use data in a sensitive way, and do not manage the personal data of large numbers of individuals. That being said, the requirements in 4h are so flexible that they could apply to just about anyone without much added cost. For example, privacy training is available for free on the internet. Also, for a small organization it is not difficult to combine the responsibilities of a privacy official with those of the employee who is in charge of information security. Net, I am wondering which of those requirements you believe fall in the category of “extensive”.

  2. Marty Abrams
    Process requirements for individuals lead to less fairness, while actionable process requirements for organizations actually free them to innovate for everyone while still protecting individuals. The lesson from the recent Conference of Data Protection and Privacy Commissioners is that, with all the requirements in the GDPR, fairness is the de facto standard for whether processing is in bounds or out. “By design,” whether the word that comes first is privacy, data protection, comprehensive, or ethics, is the process that weighs consequences, for all interests, whether those consequences are good or bad. Congress can’t and shouldn’t define the fine print of what is in or out. Yes, Congress can say that secret processing, or processing to accommodate fraud, are out. Congress may also define the bounds of the public commons versus the private space. But using legislative text to define the future is a fool’s errand. As for startups, they can do processing by design. I have worked with organizations that have.

    The role of regulators gets more difficult if there are not bright lines. The reluctance to use unfairness when overseeing fair processing is an indication of the difficulty of enforcing against subjective standards. But we need to find a way to make that possible. There are positive lessons. The Information Commissioners in British Columbia and Alberta have effectively overseen accountability in their respective provinces. That oversight has been based on guidance published in 2012. The Spanish passed legislation in 2011 that established a sliding scale for fines based on comprehensive programs. We need to vet the regulator role more fully in the next few months.

    • David Hoffman
      Marty, this is an excellent point. I have heard others say that we really should just empower the FTC to use their Section 5 “unfairness” authority to require companies to practice ethical data processing. I doubt that will work, based on the political issues we have seen with the FTC attempting to expand their use of unfairness without having more of a guide from Congress on what is unfair. I do start from the presumption that the best way to protect privacy is to require companies to put in place the minimum people, policies and processes to demonstrate they are behaving responsibly. That is what we attempted to capture in Section 4h. In my experience, having organizations go through the exercise of determining how to put the items included in 4h in place is the single best way to make sure that individual privacy is protected (instead of just providing for enforcement actions after privacy has already been lost). The requirements listed in 4h strike me as the minimum that organizations need to do to demonstrate they are behaving responsibly. Some social media and public discussion comments seem to think the requirements will create too much bureaucracy and paperwork. I am interested in which of the requirements people think will do that. Intel’s view is that privacy is a fundamental human right, and therefore it is not too much to ask that organizations demonstrate they will protect that human right by behaving responsibly.

  3. Pam Dixon
    The powerful Fair Credit Reporting Act and the Violence Against Women Act are just two pieces of US privacy legislation that serve as cases in point that lasting privacy legislation requires both high-level concepts and particulars. Brevity may be the soul of wit, but it is clarity that is the soul of legislation that achieves both durability and quality. The FCRA and VAWA have proven effective and important in the US to provide both protection and guidance.

    Without the particulars and procedural guidance for those entities that are implementing, companies and other entities covered by a given legislation are left to a no-man’s land of ambiguity and definitional volatility which typically results in legal uncertainty.

    Marty’s points about process requirements are well-articulated, and persuasive.

Topic

White box analytics

A greatly misunderstood area of data privacy is the “data” portion of the term. Many people assume that I am personally against data use, simply because I work on privacy issues. But remember: data is not just a point of risk in privacy, it is also a crucial basis of making decisions informed by a…


4 comments

  1. Annie Anton
    I also like the fact that the proposed bill prevents “the use or application of outputs from machine learning, algorithms, predictive analytics or similar analysis that would violate any state or federal law or regulation to wrongly discriminate against individuals or facilitate such discrimination, or deny any individual the exercise of any Constitutionally-protected right or privilege.” Having said that, for years I’ve advocated for technology-neutral laws because technology evolves so quickly. When specific technologies are codified in law, the law can be rendered obsolete in a few years and/or lead to entrenchment in outdated technologies. White-box analytics are thought of as ways to improve credibility and trustworthiness. They accomplish these two objectives via transparency. Transparent algorithms allow us to answer “why?” questions so that we can refine and improve our analytical approaches. Thus, I would prefer to see the existing language concerning machine learning focused on the objectives that we want to see achieved, specifically “transparency”.
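
A toy illustration of the white-box objective Annie argues for: a fully transparent scorer whose per-feature contributions answer the “why?” question. The weights and feature names are invented for this sketch:

```python
# Invented weights over invented features; illustration only.
WEIGHTS = {"income": 0.4, "payment_history": 0.5, "tenure": 0.1}

def score_with_explanation(features: dict) -> tuple:
    # Every feature's contribution is visible, so a decision can be audited
    # for wrongful discrimination instead of hiding inside a black box.
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"income": 0.7, "payment_history": 0.9, "tenure": 0.3}
)
print(f"score = {score:.2f}")
for name, part in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {part:+.2f}")  # the answer to "why?"
```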

  2. Pam Dixon
    Well-said, Annie. I agree completely. Transparency is the key, and it is essential to convey the ideas driving white-box analytics while retaining technology-neutral language. Otherwise, as we have all painfully learned, aging language regarding technologies (as is readily found in ECPA) invariably creates meaningful gaps in intended protections.

  3. Omer Tene
    Specific wording comments: In 4(d)(4), I think the obligation “shall only be done after the covered entity conducts an assessment….” should be made ongoing. That is, organizations should be required to reassess this periodically, not just at the point of deployment (a rough sketch of such a recurring check appears after this exchange). This is particularly important given the “black box” characteristic – and potentially covert discrimination – of machine learning. To paraphrase Justice Stewart, you only know it when you see it.
    Per 4(d)(4)(C), an organization may proceed with automated decision making only if it “Concludes that, after all reasonable steps are taken to mitigate privacy risk, the automated processing does not cause, or is not likely to cause, substantial privacy risk.” I’m not sure this is true. In some cases, we want the organization to proceed even in the presence of substantial privacy risk. It depends what other interests are at stake. Think obviously TSA body scanners, DHS cyber defense. But also commercial uses, KYC by banks, etc. I think there needs to be an additional balancing provision/step.

    • David Hoffman

      An excellent recommendation. I completely agree and it is on the list of necessary changes for the next draft. Please keep them coming!
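
A rough sketch, in Python, of the recurring reassessment Omer suggests for 4(d)(4). The 90-day cadence, the risk threshold, and the risk-scoring callable are all invented placeholders, not anything from the bill text:

```python
from datetime import datetime, timedelta

REASSESS_EVERY = timedelta(days=90)   # assumed review cadence
RISK_THRESHOLD = 0.3                  # assumed "substantial privacy risk" cutoff

class AutomatedDecisionSystem:
    def __init__(self, assess_risk):
        self.assess_risk = assess_risk    # callable returning residual risk in [0, 1]
        self.last_assessed = None
        self.residual_risk = None

    def may_operate(self, now: datetime) -> bool:
        # Re-run the assessment on a schedule, so drift in the model or its
        # data (covert discrimination included) is caught after deployment,
        # not only at it.
        if self.last_assessed is None or now - self.last_assessed >= REASSESS_EVERY:
            self.residual_risk = self.assess_risk()
            self.last_assessed = now
        return self.residual_risk < RISK_THRESHOLD

system = AutomatedDecisionSystem(assess_risk=lambda: 0.1)
print(system.may_operate(datetime.now()))   # True while residual risk stays low
```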

Topic

Who is covered by the law?

I am confused/concerned about some of the coverage issues.  I gather the general idea is to have broad coverage.  There are a small number of places where current FTC exemptions (e.g., non-profits) are over-ridden.  How would the insurance industry and financial services be covered, if at all?  Today they are largely outside of FTC coverage,…


16 comments

  1. Michelle Richardson
    I am still processing this definition but one issue immediately jumps out at me: the exception for entities that have fewer than 15 employees or utilize the personal data of fewer than 5,000 individuals. I assume this is motivated by concern for small businesses and we certainly need to make sure an overarching privacy framework works for them. But two changes are important here.

    First, an exception for small businesses in the context of a privacy bill should hinge solely on the amount of data an entity handles. It’s easy to imagine a tech startup that has only a few employees yet collects or uses data on millions of people. On the flip side, there must be large corporations that have minimal interaction with consumer data – B2Bs, professional services, or manufacturers, for example. To the extent that an exception is based on number of employees, it is at once over- and under-inclusive. In 2018, privacy risk just doesn’t correlate with employee numbers. (As an aside, other proposals include exceptions based on revenue and they are equally mismatched for the same reasons.)

    Second, small data processors may warrant different obligations or face different penalties, but they should not be outside of a privacy regime altogether. We are still thinking about where to draw those lines at CDT and it is especially hard to draw them in a framework like this one.

    I will flag that Congress needs to be on the lookout for a sleight of hand here. The small business conversation often starts with concern about compliance costs for dry cleaners and delis but quickly morphs into a debate about product development and competition issues. We certainly aren’t getting through the privacy debates of 2019 without talking about these latter two topics, but that’s what we should do: talk about them and deal with them thoughtfully.

    • David Hoffman
      Michelle, I agree this is something we need to improve in the draft, and am interested in what people think on how we can do that. We received similar thoughtful feedback on social media and I am inclined to agree that our current approach both includes entities that we do not want to include (the local butcher shop that has just that many employees, and has enough historical data on meat purchases to satisfy the data subject number), and may not include entities that should be covered (a company with only a few employees, which processes data on only a few thousand individuals, but the data is incredibly sensitive and impactful). Our approach currently relies heavily on risk assessment as a mechanism to determine the “consistent uses” and as a foundation of the accountability requirements. Could this be another area for an evaluation of risk, which would allow for more appreciation of context? For example, we could capture everyone, but then carve out entities that the FTC provides guidance do not create any “significant privacy risk”. That may require either definition of the term “significant privacy risk” or allowing the FTC latitude to define it in guidance and rules. I worry that such an approach will not provide enough clarity for organizations that need to understand whether they are covered under the bill or not. Another possibility would be to expand on the bill’s notion of “sensitive data uses” and use that as a carve-out to the carve-out. What I mean by that is that we could keep the numbers approach we have, but then list certain uses – such as determining medical, racial, ethnic, sexual, religious or biometric qualities of the individual – and then say that even a small entity that is using data in those ways would fall under the bill. We specifically stayed away from defining “sensitive data” as increasingly we see that any data, when combined with advanced analytics, may be used in sensitive ways (my grocery shopping history may say many things about my medical stats and religious beliefs). Thoughts?

      • Peter Swire
        In my view, a risk-based approach to who is covered is a bad idea. Who is in/out is perhaps the most important provision in the law – it tells an organization whether it has a ton of obligations, or none at all. We saw this with “covered entity” under HIPAA. The California Consumer Privacy Act shows a way to address Michelle’s concerns – an entity is covered if it meets any of three threshold criteria. That’s a little strict, but likely the clearest way to address the issue.

        • Anne Klinefelter
          I agree with Peter that having multiple qualifying criteria, like the California law provides, would cover these concerns. But, I am not sure that risk-based carve-outs are going to provide clarity for any of the stakeholders, and it seems that clarity and predictability are some of the goals of this initiative.

          • Annie Anton

            As the non-lawyer here, I also agree that risk-based carve-outs are not predictable or clear. Instead, multiple qualifying criteria offer a much more consistent and enforceable approach.

      • Michelle Richardson
        I’d avoid risk assessment here – for both determining who is covered and subsequent responsibilities. It’s too complicated for the limited data processing we are trying to encompass. Substantively, here’s what CDT is calling for: (1) individual rights to access, correction, deletion and portability, (2) transparency and security, (3) clearer prohibitions on discrimination, and (4) a ban on secondary uses of sensitive data. I believe small processors will be able to do most if not all of these if the requirements are clear.

        But in the format of this specific bill, it could look like a limited data processor safe harbor. How about a provision that requires entities that hold data on fewer than 5,000 subjects *and* do not collect, use, or share sensitive data to do the following: provide access and deletion rights, implement reasonable security, post a public privacy policy, and abstain from sharing information with third parties? These small data processors can choose to opt in to the larger system if they want to play in the data processing game. (A rough sketch of this safe harbor appears at the end of this thread.)

    • Jules Polonetsky
      I wonder if GDPR, which does exclude companies with under 250 employees from some of GDPR but then includes them if they do certain activities based on risk and scale, has some good logic. But I take note of Peter Swire’s point about the uncertainty the risk approach creates about who is in or out, given some subjectivity. It may be useful to work off the GDPR exclusion, but revise it in a way that provides more certainty. Clearly any entities with large amounts of risky data must be subject to the most important protections, no matter employee numbers. But I have in mind the early days of Wikipedia, or GEDmatch today, or similar businesses – maybe some are excluded as a NFP, but many low-profit hobbyist sites need to be considered. The idea that a small part-time blogger would get captured because of ads or analytics code on their site is perhaps a good use case to consider (the big ad tech partner certainly should be captured), but a definition that would result in more pop-up cookie consent banners for every blog of any popularity should give us pause. Learn from the European experience – for the good lessons and for where the US should be interoperable, but provide more flexibility when risk is low, if that low risk can be defined very clearly.

    • Danny Weitzner
      It strikes me as very hard to get certainty on scope by using either number of employees or number of data subjects. Instagram famously had 13 employees when it was acquired by Facebook for $1 billion. Should they have been left uncovered by this bill — probably not. The original commercial transfer of data to Cambridge Analytica was done by an entity with fewer than 15 employees, too. At the other end of the spectrum, what about people who have >5,000 followers on a social media platform or in a chat group? I have more than 5,000 contacts stored locally on my smartphone. These examples show, I believe, that there is no way to choose bright-line coverage rules without missing big privacy risk or covering those whose behavior is below the level at which we would want to create compliance obligations such as having an accountability program.

      I’d observe that many laws rely on prosecutorial discretion or simple incentives regarding prioritization of enforcement resources as a means of avoiding unreasonable burdens on smaller entities. I believe guidance to enforcement authorities, rather than bright-line rules, is the better way to seek a balance of protection of important rights vs regulatory burdens.
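
A hedged sketch combining two ideas from this thread: Michelle’s proposed safe harbor and bright-line threshold criteria of the kind Peter points to in the California law. The thresholds and field names come from the discussion above, not from any bill text:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    employees: int
    data_subjects: int
    handles_sensitive_data: bool

def qualifies_for_safe_harbor(e: Entity) -> bool:
    # Hinges on the data held, not head count: a 13-employee startup with
    # data on millions of people would NOT qualify.
    return e.data_subjects < 5_000 and not e.handles_sensitive_data

def is_covered(e: Entity) -> bool:
    return not qualifies_for_safe_harbor(e)

deli = Entity(employees=12, data_subjects=800, handles_sensitive_data=False)
startup = Entity(employees=13, data_subjects=2_000_000, handles_sensitive_data=True)
print(is_covered(deli), is_covered(startup))   # False True
```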

  2. Marc Groman
    As I read the draft bill, the bill covers entities currently subject to FTC jurisdiction plus common carriers and non-profits. I fully support that approach in a federal privacy bill and I don’t think that approach is all that controversial given the nature of data collection and business practices today. If it is controversial, happy to engage in that discussion. Other entities outside the scope of FTC jurisdiction, including the business of insurance, do not appear to be covered by the draft federal bill. That appears to be left to the states, which is consistent with the historic approach to insurance and some other business practices.

    The more difficult issue raised by Kirk is not about the entities that are covered by the bill, but about how this draft bill or any proposed federal privacy law will interact with current federal privacy laws such as HIPAA. I’m not even addressing the even more complex issue of state preemption yet. Unfortunately, policymakers are not working from a clean slate and any new federal privacy law must contemplate the full range of existing federal laws that touch on the collection, use, and other processing of personally identifiable information. As I understand the draft, the FTC is required to submit a report to Congress to address this issue and make recommendations. How any proposed federal privacy law will interact with related requirements in current federal laws such as HIPAA, GLBA, FCRA, COPPA, VPPA, FERPA, CAN-SPAM, Cable Act, etc. is very complex. I have yet to see a proposal that threads the needle in a way that makes sense. I would argue that it’s not good for business or consumers (and not good for competition) if we have very different standards across sectors absent some compelling reason. GLBA, for example, often comes up in this context but in fact GLBA is not a privacy law. GLBA has minimal requirements regarding notice to consumers and some limited choice with respect to a narrow subset of data covered by the statute. Thus, it would not be logical or reasonable to exclude entities from a federal privacy law simply because they currently are subject to those minimal requirements set forth in GLBA. On the other hand, entities covered by GLBA should not be subject to inconsistent or overlapping standards. Thus, to the extent GLBA’s notice and opt out regime remains in place (and perhaps it shouldn’t), GLBA entities should not be subject to similar requirements in a new federal privacy law. But those entities should be required to comply with additional requirements, if any, that a new federal privacy law puts in place. Any other result strikes me as absurd, potentially leaving financial institutions and consumers’ sensitive financial data subject to the lowest privacy standards. Of course, this all depends on the requirements of a particular bill and this is all hypothetical.

    • Peter Swire
      Marc is correct to mention the numerous federal privacy statutes that would need to be addressed in an eventual bill. It’s not just GLBA and HIPAA, which the current draft bill tries to address. Each interaction has its own complexity, and those engaged in the process will need to take great care with each interaction.

      When we were writing the HIPAA Privacy Rule in 1999 and 2000, for instance, the intersection just with FERPA took a great deal of work. Huge numbers of organizations, such as school districts in that case, have relied on the other regulatory frameworks. There may be serious compliance challenges and unintended consequences unless there is intensive engagement and drafting with stakeholders affected by each of the other federal bills.

      I am adding to my preemption discussion tonight as well. Intersections with state laws are complex. To take one example, would the federal bill preempt the numerous state laws that set limits on what private entities may or may not do with Social Security numbers? Each of those interactions needs to be considered by stakeholders and experts in the process before a final bill can be drafted.

  3. Dan Capiro
    Thanks for a comprehensive and thoughtful proposal. It’s a good starting point and more substantive than other proposals to date. However, parts of it strike me as fighting the last war by being heavy on process/compliance and light on valuing data/privacy as a strategic risk for senior management. Not clear what is gained by codifying the FIPPs or giving the FTC rule-making authority.

    • David Hoffman
      Dan, very interesting. What specific sections strike you as too heavy on process? If we do not start with the FIPPs or give the FTC rule-making authority, then what would you propose as an alternative? Do you think we should just stay with the status quo?

  4. Paula Bruening
    With respect to what entities are covered by this bill, I agree with Michelle’s observation that even small businesses can collect, store and process vast amounts of data. Basing coverage on the number of individuals an entity employs ignores the power technology places in the hands of these companies. I agree that what entities are covered should be based on the amount of data they manage and process. Further, I am concerned that completely excluding some small companies would introduce weaknesses in protections, given the interconnected nature of systems and businesses, and the fact that these companies can use vendors that provide powerful processing capabilities. I suggest that streamlined requirements for companies with smaller data holdings would help raise awareness across parts of industry that may still not be fully aware of privacy risks and best practices, while avoiding overly burdensome obligations that don’t necessarily promote privacy.

  5. Paula Bruening
    I agree with Intel’s approach to structure this bill using the FIPPs and FTC rule-making as a starting point. In doing so, it aligns U.S. law with international law and frameworks but still allows for flexibility to reflect U.S. thinking about data governance and includes risk assessment and mitigation. I am still considering the specifics of the bill’s FIPPs-based requirements, but I would note that overall the bill does not suggest a purely procedural application of the FIPPs. Rather, it recognizes discussions over the last 5-10 years that have considered interpretations of the FIPPs in ways that serve emerging technologies and data use without compromising the ability to innovate.

  6. Pam Dixon
    I would like to return to Kirk Nahra’s concerns about HIPAA: “I also am concerned about the approach towards HIPAA-regulated entities. As I read it, they are covered by this new proposal, would remain covered by HIPAA, and would get no benefits from preemption. This seems to be the worst of all worlds specifically for that industry (arguably the US industry with the overall toughest regulation today).”

    Kirk’s concern has real merit. HIPAA is an extremely difficult statute to work with because HIPAA applies to the commercial sector *and* to the government sector. HIPAA also applies to some educational institutions. The draft bill does not apply to government. If a new law with different standards applies to only a portion of the entities regulated by HIPAA, this brings a nightmare of compliance and could create meaningful problems for information exchange.

    So— working this through to some of the end points, I have some questions to pose. What happens when government and commercial health care providers exchange health care records about a mutual patient? Commercial sector health care providers would have new and different requirements. What effect would differing standards have on the compliance and liability of a commercial entity in the scenario where a commercial entity shares records with a gov’t entity, which would have different and potentially lower standards? Will patients be less able to switch between gov’t providers and commercial providers because commercial providers incur liability from sharing with entities with lower standards? What happens to those entities that are formal Business Associates under HIPAA and that serve both government and commercial entities?

    I doubt the intent of the bill was to create chaos in the health care system. But carving HIPAA-covered entities in half creates meaningful disruption that will likely have far-reaching unintended consequences. I worry most about impacts on patients.

    How this would work with the FERPA-HIPAA intersection in the education environment is truly unfathomable, as that intersection is already fraught with complexity.

  7. Omer Tene
    Re: the “does size matter” issue, it’s interesting to look at the Israeli data security regulations from 2017. They implement a modular, risk-based approach, applying different obligations to organizations categorized as “basic” risk (the residual category), “intermediate” (i.e., processing various categories of sensitive data), or “high” (intermediate with more than 100,000 data subjects or more than 100 authorized employees). There is also a “sub-basic” category to address Jules and Danny’s concern about essentially household use. See here: https://iapp.org/news/a/the-new-israeli-data-security-regulations-a-tutorial/
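
    For readers who want to see the shape of that tiering logic, here is a minimal sketch in Python. It illustrates the categories only as summarized above; the sensitive-data categories, function name, and exact threshold handling are simplifications for the example, not a rendering of the actual regulations.

      # Illustrative sketch of the modular risk tiers described above.
      # The tier names follow the comment; everything else is invented.
      SENSITIVE_CATEGORIES = {"medical", "biometric", "financial", "criminal"}

      def classify_risk_tier(household_use: bool,
                             data_categories: set,
                             num_data_subjects: int,
                             num_authorized_employees: int) -> str:
          if household_use:
              return "sub-basic"      # essentially personal/household use
          sensitive = bool(data_categories & SENSITIVE_CATEGORIES)
          if sensitive and (num_data_subjects > 100_000
                            or num_authorized_employees > 100):
              return "high"           # intermediate plus scale thresholds
          if sensitive:
              return "intermediate"   # processing sensitive data categories
          return "basic"              # residual category

      print(classify_risk_tier(False, {"medical"}, 250_000, 40))  # -> "high"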

Topic

Governance frameworks

I will be commenting in a number of brief posts. This is comment 1, relative to the Findings, which are important to set the context for the bill language. In looking at developing a federal privacy framework, it is crucial to consider that we are in a transitional period, where we are moving through the… Read more »

9 comments

  1. Marc Groman
    Pam, I assume that the term “knowledge governance” is distinct from “data governance”? If yes, can you please articulate the difference? Is this beyond the requirement that a company must maintain a comprehensive, accurate, detailed, and current inventory of all of the data held by the company, as well as a complete and deep understanding of data flows, data use, and other data processing within the enterprise?

    • Pam Dixon
      Marc, yes, and thank you for your question. Data governance is a baseline; knowledge governance goes beyond data governance and is inclusive of it. I use the term knowledge governance because, as data evolves, it is not just raw data we need to think about anymore: data, when subject to analysis, can become greater than the sum of its parts and create new information and, in some cases, knowledge. Knowledge governance allows us to fill this gap.

  2. Peter Swire
    Pam’s thoughtful post addresses the findings part of the draft bill. A couple observations on the current, short findings: (1) Those of us who have been working with GDPR know the importance of the “recitals” that come before the GDPR text. Those recitals are far more detailed than the current findings in this bill. (Brevity here makes sense given the early stages of the process.) For consideration of this draft bill, and other draft bills, I suggest the community should be considering the findings/recitals in more depth, in addition to the statutory text.

    (2) The findings currently have a very brief mention in #2 of the sorts of points that organizations such as the Chamber of Commerce currently highlight – the benefits of an information economy, innovation, etc. Going forward, if a goal is to bring more business organizations into the process, there may be ways to make those points while still highlighting and emphasizing the protection of privacy.

    (3) One contested area is the scope of First Amendment restrictions on what a U.S. government can do in the privacy area. I found the brief provision in Section 11 on this topic to be good based on my initial read.

    • Pam Dixon
      Peter, agreed on the importance of findings — GDPR has taught us that the findings are every bit as important as the bill text. It would be beneficial for this bill’s findings to have more reach and depth. I would like to see the findings take on the governance issues that might not make it into the bill text.

  3. Danny Weitzner
    I’m a big fan of legislative findings to explain broad Congressional intent — especially to address the possible First Amendment challenge raised by Peter. However, I think it’s actually a really bad idea for Congress to write hundreds of recitals. It risks creating a sea of complex and possibly contradictory directions that are more likely to confuse the enforcement process. I would like to see Congress write rules as clearly as possible and then rely on the enforcement agency (the FTC) to provide guidance and explanation as needed. I’ve written about my general view on the enforcement role of the FTC in relation to the statute in another Topic (“How to make a privacy statute that will stand the test of time”). Competition law provides useful guidance here. Both the FTC and DOJ have well-established mechanisms for clarifying their interpretations of the statutes they enforce (the FTC Act, Sherman Act, and Clayton Act), both with respect to specific transactions (DOJ Business Review Letters and FTC Advisory Opinions) and through broader guidelines regarding areas of antitrust enforcement. Such mechanisms are much more likely to provide useful guidance over time than congressional findings. I don’t actually think that the FTC needs legislative authorization to do this with respect to a future privacy statute, but it wouldn’t hurt for Congress to encourage this as good practice.

    • Anne Klinefelter
      If findings are to be included, I agree that the First Amendment needs a nod so that the burden imposed is explained or justified. While Section 2(b) gets at that point — calming consumers so that organizations can use data to innovate — I think a direct finding about privacy harms should be included. Perhaps 2(b) could be expanded or a new 2(c) could be added to say something like:

      Use of personal data by organizations can also produce adverse outcomes for individuals including discrimination and loss of liberty and can produce societal harms including avoidance of social and commercial systems and weakening of democratic engagement.

    • Peter Swire
      Danny, I may disagree with you here somewhat. Guidance is all well and good, and the FTC gives guidance now that only sometimes gets followed. Without actual rule-making power to clarify things, guidance alone will lead to weak protection of privacy rights over time.

      Assuming that the FTC would get rule making power, as the draft bill provides, then legislative findings may actually be useful, as they are in Europe.

      • Danny Weitzner
        I see your concern, Peter. But let’s distinguish two choices: (1) findings vs. rule making, and (2) findings vs. rules-developed-by-enforcement. I did not mean to comment directly on (1), though I have strong feelings about it. I was really just speaking against the assumption that Congress can guide either rule making or enforcement activity through findings. I consider these to be half-measures that are as likely to confuse as to guide. If the issue matters, then it should be written as an enforceable provision or included in the scope of rule making (even though I’m not always a fan of that).

Topic

The Safe Harbor Certification

How the safe harbor and the criminal provision work.

4 comments

  1. David Hoffman
    I have received some feedback over social media that people are concerned the criminal law provision in the bill will mean that corporate officers could go to jail if there is a relatively innocent violation of the terms of the bill. That is not what we intended and not what I think the draft does. The goal was to allow companies the opportunity to have a safe harbor from civil penalties, but also to make certain that safe harbor could only be used by companies who implement a robust privacy program. We chose the following language for the standard for criminal liability: “knew that the statements required by the certification are not true. Reckless disregard of whether a statement is true, or a conscious effort to avoid learning the truth, can be construed as acting knowingly under this statute.” We just want to capture situations where a corporate officer does not do the review, knows the content in the review is not accurate, or consciously ignores issues presented in the review. We were going for something very close to the False Statements Act, which already governs similar certifications to the former EU-US Safe Harbor Agreement and the current Privacy Shield. What do people think about the language we used? Did we get the intent requirement right?

  2. Peter Swire
    A few thoughts on the criminal statute: (1) Criminal intent in environmental law. I used to teach environmental law. There have been ongoing battles there about what a “knowing” violation means. Professor Richard Lazarus gives the history here: http://nrs.harvard.edu/urn-3:HUL.InstRepos:13548461. Lazarus does a good job of showing the complexity of the issues.

    (2) Section 7 of the draft bill imposes criminal penalties if an accountability report is filed, “knowing that the periodic report accompanying the statement does not comport with all the requirements set forth in this Act.” This could be read as the opposite of risk-based. Imagine a manager who has invested heavily in privacy protection, has done a gap analysis, and knows they have made big progress but have covered only 98 of the 100 elements. If the manager files the report knowing that 2 of the elements are violated, then that appears to be a criminal act under the statute.

    (3) One way to address this is to exclude de minimis violations. Change the language to “does not substantially comport” with “the requirements” of the Act. Or, make something like a “material” violation criminal, but not anything less than that. The problem with “material,” however, is that it is a term of art in securities law, concerning effects on the stock price, so it is not clear where the line would fall in the privacy setting between a “material” violation and a “non-material” one.
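
    To make the contrast in (2) and (3) concrete, here is a minimal sketch in Python. The element counts, the 95% threshold, and the function name are invented for illustration and are not drawn from the draft bill.

      # Strict "all the requirements" vs. a hypothetical "substantially
      # comport" (de minimis) standard for a knowing certification.
      def filing_is_criminal(elements_met: int,
                             elements_total: int,
                             strict: bool = True,
                             substantial_threshold: float = 0.95) -> bool:
          if strict:
              # Any knowingly unmet element triggers liability.
              return elements_met < elements_total
          # Only a shortfall below the threshold triggers liability.
          return elements_met / elements_total < substantial_threshold

      # Swire's hypothetical: 98 of 100 elements covered.
      print(filing_is_criminal(98, 100, strict=True))   # True: criminal
      print(filing_is_criminal(98, 100, strict=False))  # False: de minimis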

    (4) Another model here is Sarbanes-Oxley, where corporate officers have to certify the accuracy of the financial reports under Section 302, subject to criminal penalties under Section 906. Interestingly, the language is pretty close to what is in Intel’s draft bill. Notably, it includes compliance with “all the requirements,” as in the draft bill. On the plus side for privacy, people take Sarbox certifications very seriously. The concerns, however, have been about whether the compliance cost is too high – the certifications flow down from top management to lower-level managers, to avoid the huge penalties.

    (5) In conclusion, I am not sure of the best way to draft this. As written, however, I suspect there will be loud objections from the business community that this is re-creating Sarbanes-Oxley, with the in-practice requirement of all of those sub-certifications. It may be worth researching what proposals have been made to make compliance there more workable while still keeping the structure of accountability.

    • Danny Weitzner
      David, I’d be interested in hearing more about your rationale for the Safe Harbor provision. It appears to be based on the view that having a robust privacy program entitles the covered entity to one free pass against all but equitable relief. That’s a big bet on the value of such programs. As I wrote in reply to Tim Sparapani on the overall structure of the statute, I do consider accountability programs to be valuable management tools, but I’d like to hear more about the thinking behind this view.

      • David Hoffman
        Danny – What we attempted to achieve with Section 7(a)(3) was to have the safe harbor apply only to civil penalties and not to equitable remedies. The idea is based on a fear that organizations might have about the lack of predictability of potential FTC enforcement (I do not have that fear, but I know many in the business community do), while still making certain that if an individual is harmed, they can recover. This still strikes me as the right approach to protect individuals while providing a carrot for organizations to have a corporate officer certify the accountability program. Is there a better way to do this?

Topic

Preventing New and Emerging Consumer Harms

Innovations based on consumer data often are an enormous benefit for individuals and for society writ large. Yet, with new innovations and/or more intensive usage of consumer data there can be new threats to consumers that become possible. Ideally, our country’s policies and laws concerning personal data should optimize to increase the pace and magnitude… Read more »

5 comments

  1. Peter Swire
    Tim Sparapani raised great points about the challenge of “future-proofing” a bill, knowing that new issues will arise over time. As one example, algorithmic transparency and discrimination due to algorithms have become hot topics in privacy debates. Before roughly the Podesta report in 2014, those topics had not been clearly identified. Those kinds of possible harms were not on the map.

    Roughly speaking, there are three ways that the law typically handles this common challenge – how to protect against future harms:

    (1) Use broad terms, such as “reasonable care” in torts or “unauthorized use” for computer hacking statutes;

    (2) Provide rule making authority, which the Intel draft bill does under 5 USC 553; or

    (3) Wait for the legislature to pass a new law, to address the new harm.

    Industry typically prefers the third choice – wait for Congress, with no binding requirements until then. Many observers, however, see how hard it is for Congress these days to update the law. Almost everyone agrees the Electronic Communications Privacy Act of 1986 needs an update (tech has changed just a bit since 1986), but passage stalls year after year. If we want privacy protection actually to succeed for individuals, then I think the third choice is hard to defend.

    That leaves us the first two choices, somewhat simplified: (i) use “reasonable care” in handling individuals’ data; or (ii) give an agency (the FTC) rule making authority.

    I have long believed that the FTC, in these situations, should be given rule making authority. To industry, I say that yes, there is a risk of an overly strict agency. On the other hand, how else does the system react to change?

    To address industry’s concerns, once again there are multiple texts in environmental law that cabin the federal agency’s discretion in various ways. The principal one is to require the agency to do a cost/benefit analysis to justify the regulation, hopefully with “costs” and “benefits” defined in a thoughtful way that includes non-statistical factors. Cass Sunstein is the guru on this topic, and led revisions to the federal cost/benefit process that resulted in Executive Order 13563. Vox published this interview with Sunstein on the topic, at https://www.vox.com/future-perfect/2018/10/22/18001014/cass-sunstein-cost-benefit-analysis-technocracy-liberalism.

    To see how this approach can work in practice, HIPAA provides a hopeful example. The HIPAA Privacy Rule was promulgated after going through a thorough cost-benefit analysis. The rule has occasionally been updated since the first version of the “final rule” was issued in 2000. More to the point, the Office for Civil Rights at HHS has issued many FAQs over time, which provide the sort of guidance that the Intel draft bill supports in Section 8. Congress very occasionally has made updates, notably in 2009, but the FAQ process has done a pretty good job of guiding industry when new problems arise, backed up by the rulemaking power if something big happens.

  2. Marty Abrams
    I agree with Tim and Peter. I would also place great effort (as the Intel bill does) on the accountability chain. Organizations that facilitate other organizations’ use of data, where the first organization is the steward, have an obligation to ensure that conditions that come with the data travel with the data. Elizabeth Denham, currently the UK ICO, pioneered those concepts when she was the Canadian Assistant Commissioner for the private sector in 2009. The guidance she wrote on accountability when moving data is well in line with the Intel bill.
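
    That “conditions travel with the data” obligation can be pictured as a sticky-policy pattern. The following is a toy sketch in Python under that reading; the class name, fields, and permitted-use strings are invented for illustration and are not drawn from the bill or from the ICO guidance Marty cites.

      # Toy sticky-policy sketch: use conditions set by the original data
      # steward travel with the record to every downstream recipient.
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class GovernedRecord:
          payload: dict
          permitted_uses: frozenset  # conditions fixed by the steward

          def transfer(self) -> "GovernedRecord":
              # Downstream recipients receive the same conditions, unchanged.
              return GovernedRecord(self.payload, self.permitted_uses)

          def use_for(self, purpose: str) -> dict:
              if purpose not in self.permitted_uses:
                  raise PermissionError(f"use not permitted: {purpose}")
              return self.payload

      record = GovernedRecord({"email": "a@example.com"},
                              frozenset({"billing", "fraud-prevention"}))
      downstream = record.transfer()
      downstream.use_for("billing")      # allowed
      # downstream.use_for("marketing")  # would raise PermissionError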

  3. Pam Dixon
    I like the idea of FAQs that allow for updates and interpretive guidance. HHS has been responsible and timely about responding to rapid technological changes. Something like the HHS FAQs could readily be put in place to allow for practical guidance to implementers.

    Marty’s point about ICO Elizabeth Denham’s work regarding guidance is well-taken; whatever form it takes, iterative and ongoing practical guidance will be essential.

    • Chris Wolf
      FAQs by the enforcement authority can serve as a kind of “regulation-lite,” adding meaning to existing statutory/regulatory language. On the utility of FAQs to assist in statutory interpretation, I note that in 2015, the FTC published “Complying with COPPA: Frequently Asked Questions,” and indicated “These revised FAQs from the FTC can help keep your company COPPA compliant.”

      • Tim Sparapani
        Chris Wolf’s reminder about the COPPA FAQs is spot on. These have been tremendously helpful to me and to my clients who are trying to interpret COPPA. This sort of “regulation lite” (I like that term) is one lightweight means of keeping any statute from becoming stale (or worse) shortly after enactment.