Please Add Your Voice

While the “A Conversation Between Experts” page is intended to highlight a public discourse between widely known Privacy Experts, this page serves as a place for you to add your voice. Please feel free to discuss your views and ideas, comment on the topics being discussed by the experts, provide suggestions for the draft legislation, or otherwise respectfully contribute to this important conversation. Thank you for your interest and for sharing your voice.

 

25 comments

  1. Bob Gourley
    Dear all, thank you very much for taking the initiative to make positive change through drafting legislation. I do have a consideration. First some background: After over 20 years of close-in cybersecurity work I have seen both victory and defeat in data protection, including victory and defeat in protecting privacy. Some of the greatest violations of privacy have come from criminal groups operating outside the rule of law, including groups that operate with the sanction of corrupt nations. There have also been numerous examples of hostile nations abusing the privacy of citizens of open nations by breaching IT systems.

    There are many actions that can be taken to help mitigate the risk of privacy violations by criminal groups operating overseas and by hostile nations that may sanction them or operate data collection systems themselves. But one that has never been applied is regulation through export control.

    The idea relevant to this draft legislation: Since hostile nations use the private information of U.S. citizens to further their objectives, we should take more steps to prevent its loss, including making personal information on U.S. entities controlled under the Export Administration Regulations (EAR).

    Recommendation: This legislation should make it illegal for any U.S. company to sell, transfer, or share data on U.S. persons with any foreign entity owned by or operating in a country designated under the Export Administration Regulations (EAR) as potentially hostile to U.S. persons’ privacy rights.

    • David Hoffman
      Bob, thanks for contributing. I must admit that I am not an expert on the Export Administration Regulations, and I may need to ask others for help to understand how this would work. Does the U.S. government currently keep a list of these countries, and is it the same list as those countries with which U.S. companies are already prohibited from doing business? If it would be a new list, do you have ideas on what standard should be used to develop it, and who in the government should oversee it?

    • Grzegorz

      The funny thing is that when it comes to privacy and data protection, the US is also a “hostile” and “corrupt” nation.

  2. Bill Woodcock
    There are some problematic issues around terms of art and the technical mechanisms of interaction between individuals’ proxies and the servers they access that make terminology difficult. A distinction has to be made between (1) “collection” meaning the retention of data for the purpose of having/selling/abusing the data, (2) retention of data for the purpose of facilitating technical operations (logfiles, which are still subject to abuse or breach, but don’t exist _for the purpose_ of abuse), and (3) ephemeral stateful retention of data for the purpose of replying to the query or completing the transaction which generated the data.

    (3) is unavoidable, but should be clearly and explicitly called out as _not_ collection. A server can’t reply to a query without retaining the IP address of the origin of the query in memory for the duration of the transaction. But that IP address need not be retained in memory nor exported to a log (2) once the response has been sent. And it need not be exported to a database (1) or analyzed or correlated with other things.

    I think it’s also important to keep near the forefront of everyone’s attention that history teaches us that any retention nearly unavoidably leads to abuse or breach. The _intent_ of the person retaining the data is of no consequence whatsoever. The retention is the problem, regardless of intent.
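
    To make the distinction above concrete, here is a minimal sketch (in Python; the handler name and port are illustrative assumptions, not anything from the draft) of category (3): the server holds the requester's IP address in memory only for the life of the transaction and deliberately suppresses the access log that would otherwise turn it into category (2).

      # Sketch: the client IP exists only for the duration of the transaction
      # (category 3); nothing is exported to a logfile (2) or a database (1).
      from http.server import BaseHTTPRequestHandler, HTTPServer

      class EphemeralHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              # self.client_address is needed to send the reply at all, but it
              # lives only in memory for this request.
              body = b"hello\n"
              self.send_response(200)
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)

          def log_message(self, fmt, *args):
              # Suppress the default access log, which would otherwise retain
              # the client IP long after the response has been sent.
              pass

      if __name__ == "__main__":
          HTTPServer(("", 8080), EphemeralHandler).serve_forever()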

    • David Hoffman
      Bill, these are great comments, and I completely agree. We attempted to get at this issue by carving out entities that solely serve as information intermediaries, but clearly that does not go far enough. We need to include a provision that carves out from collection these types of technical uses of data that merely serve to allow for establishing and effectuating the communication. We will want to do that narrowly, as I don’t think we would want to exempt situations where entities originally obtain that information, but then use the data for a secondary purpose (use of the unique identifier for profiling purposes).

      On the data retention issue, I am conflicted. I normally agree that, due to the risk of data breach, it is always better to get rid of data as quickly as you can. However, as we move into an artificial intelligence environment, I worry that if we delete the data used to train algorithms, we will then have great difficulty explaining how an algorithm came to a particular decision that impacted an individual. I am interested in your thoughts on how to optimize for both limiting retention and establishing explainability.

  3. Lynne Taylor
    While I’m not a data privacy expert, I am an education research expert, as well as a dedicated American taxpayer. There are huge concerns anytime you mention data privacy. Especially in education, no matter the age of the student or the choice of where they are learning, data is being constantly harvested, to the point it’s been called ‘student data rape’.
    Not once in this proposed Bill were there clear enough parameters to halt the over 1,400 data points being harvested every day. Many of these violate not only the U.S. Constitution but the Civil Rights of every single American. Not to mention the overreach by ANY federal agent, agency, or program which, by US Federal law, is prohibited from becoming involved in education, including related services and programs.
    Not once in this proposed Bill was there mention of a repeal of the Executive Order which gutted FERPA to allow for the massive student data rape.
    HIPAA was mentioned, but only once, and when it was, it was at the State level. What about the federal level? What about all the provisions in the Every Student Succeeds Act and the recent FY2019 appropriations for the U.S. Dept. of HHS to increase HIPAA data usage?
    Not once in this proposed Bill was there any mention of the use of cyber apps on phones which constantly data mine us to death. Nor was there any mention of the SIBs (Social Impact Bonds) which are creating individual dossiers on Americans, again, especially via education.
    Also, no mention of parental consent in the case where the ‘individual’ is a minor.
    Why ‘individual’ and not ‘citizen’? Is this a back door to encompass the immigration nightmare?
    In Section 6, only ‘commerce’ is mentioned; why not all the other ways in which data is collected (most of the time without knowledge OR consent)? In education, the data is harvested as an excuse to career-track students to better a federal or State economic status, not academics. That, in itself, is ‘commerce’. It’s also profit over people. Have We the People become nothing more than mere dollar signs to our government?
    Why is this not treason?
    Only ‘covered entities’ and ‘third parties’ are assumed to be the only data handlers in this Bill. What about all the other parties, like the contractors and subcontractors (sometimes at least 15 layers deep) or those operating under grants using personal private data? Are We the People to assume the federal government, via this bill, will STOP all but the ‘covered entities’ and ‘third parties’?
    Why is a federal bill ordering individuals around? In Section 2, the need for the federal government to facilitate a citizen’s (the Bill uses ‘individual’) control over how their personal private data is used is a top-down manipulation. People are savvy enough to know they do not need government interference in the decision not to have their personal private data collected, used, or shared! The added insult? The government allowing ME to participate in where MY data flows! This one provision in this Bill is also a red flag to every American.
    Lastly, in Section 9, expanding the federal government by appointing 250 lawyers and 250 technical and administrative staff is government bloat. We the People do not want, nor need, an EXPANSION of federal government for ANY reason.

  4. Daniel Sepulveda
    David Hoffman does great work. This bill is like a greatest hits of all the ideas that have come before it. It’s good. I endorse Julius’ comment on thinking more carefully about the definition of personal data and trying to encourage pseudonymous storage and use of data. If the argument is that all data is now personal data because of data analytics, and therefore all types of data collection, storage, and use should be treated the same, then companies will collect all data, including true PII. If I have a general criticism, it is only that it may be too wedded to the ideas that have come before it, so it is still very process-heavy and prescriptive. It may be too late, but we should think about revisiting the underlying premise of process-based solutions for privacy and think more about menus of data protection practices that firms can apply depending on the interaction with the consumer, the sensitivity of data, its volume, and the use in question. I have some other nits, like I think it’s strange to prohibit practices in a section of this law that are already illegal. On the positive side, the bill is balanced, acknowledges and invests in upgrading FTC authority and tools, and takes the challenge seriously. I would be interested in an analysis of how this is stronger or weaker than Cali or GDPR and what is meant by stronger or weaker. For example, does Cali give the State AG more regulatory power than this proposal would give the FTC? But again, overall, a solid contribution and probably something close to where we end up if we get a law.

    • David Hoffman
      Daniel, fantastic to have you commenting, as you wrote one of the best privacy laws ever drafted when working for Senator Kerry. I do think there are things we can do in the definition of personal data to encourage pseudonymous storage. With advances in homomorphic encryption, this likely becomes even more important, as we can increasingly allow algorithms to access multiple databases without having to aggregate that information and thereby reduce the likelihood it will relate to identifiable individuals (I say this knowing that some experts on homomorphic encryption will disagree with me, and that may be another good discussion we could have on this site). I have always thought that definitions of personal data should encourage organizations to take steps to store data in ways that decrease its ability to identify individuals, or for the organization to use legal mechanisms (a binding commitment not to combine two databases) to decrease the “reasonable likelihood” that the data will relate to an identifiable individual. I do think we can add some language that will do that.

      I am intrigued by your “menus” approach. Can you say more about how that would work? I will admit that, having seen how privacy is actually delivered inside organizations, I am a fan of process requirements and accountability. In my opinion, it is the disciplined and resourced following of risk assessment processes that best protects privacy, as long as it is backed by robust, harmonized and predictable enforcement. Can you tell me more about what you mean by the menus approach?

      Also, please give us all of your nits. This really is a discussion draft and we want to make the next version the best it can be. Thanks for joining the discussion.

  5. William Rankin
    Thank you for addressing the need for federal privacy regulation in a “participatory democracy” manner. Is it your intention, when looking at sections 4(d)(1)(C) and 4(d)(3), to give a covered entity the ability to process personal data for reasons other than consent (section 4(d)(1)(A)) or as required by law or regulation (section 4(d)(1)(B)), such as legitimate interest or the performance of a contract? Section 4(d)(3) appears to be similar to a legitimate interest test and a data protection impact assessment. It would be helpful if the regulation stated legitimate interest and the performance of a contract as valid reasons for processing personal data under section 4(d)(1).

    You can debate the merit of legitimate interest and how it might be used as a loophole for companies to process your data for reasons that might be interpreted as being in addition to the original purpose for which the data was collected. However, legitimate interest is a valid basis for processing if the company considers the purpose, necessity, and the individual’s rights relevant to the legitimate interest being pursued.

    In addition, among the items in section 4(f)(3), “Complete Notice,” the “Permitted Processing” under which the covered entity is processing the personal data should be required information, especially if the final draft of this regulation explicitly states legitimate interest in section 4(d)(1). If legitimate interest is used by a covered entity, the legitimate interest pursued should also be required.

    In section 4(h)(2), can the “data privacy leader” fulfill their tasks on the basis of a service contract? An organization, regardless of its size, should not be required to hire or appoint a member of its staff to fulfill this role if it does not wish to.

    • David Hoffman
      William, I have received similar questions on social media about the lack of a “legitimate interests” legal basis for the processing of data. Let me give you my thoughts quickly here, and I will also then open a thread on the experts page, because it is a topic worthy of analysis.

      I agree that “legitimate interests” is an incredibly important portion of GDPR, as it is one of the few ways to have a lawful basis for processing in situations where consent is impossible or impracticable. The Art. 29 WP has done good work to interpret legitimate interests to create a balancing test that evaluates risks created by the processing.

      We tried to solve the same problem in a different way. We decided to do away with the two-step approach of (1) requiring a lawful basis, and then (2) analyzing whether the use of the data is allowed. Instead, our approach is largely a “use and accountability” mechanism that puts most of the focus on the “consistent uses” section. That is where we include the risk assessment that is similar to the legitimate interests test. I do think we improved on that test by requiring a more holistic understanding of the benefits and risks to the individual and society.

  6. Lucia C Savage
    I read the comments of Marc with great interest. I think it is important to look more carefully at the difference between GLBA, which I agree is not a privacy law, and FERPA or HIPAA, which are in fact privacy regulations, and to make a clearer standard for what compliance with those laws means in light of this proposal. It will disserve consumers and innovation if compliance with actual and well-established privacy law or regulation could still yield enforcement activity under this proposed additional FTC standard.

  7. Paul Gibbon
    – where related intent, rights, and laws have been recognized or recorded as appropriate, the right to privacy remains time-relative in the domains applicable. Thus, where statutory rights and laws can be recognized, any real-time understanding and facilitation in relation to both it and that should be relevant and considered where applicable, in, to, or for, application or use, and prior to the existence of digital mediums and future uses of them. Therefore statutory rights and laws related to protocol that can demonstrate or utilize this are to the fore.

  8. Dan Caprio
    David, I’d like to see a little more carrot when it comes to reporting to senior management. For instance, framing strategic privacy risk across the enterprise rather than simple compliance reporting. Stress the need for a risk-based approach for senior execs that sits on top of compliance.

  9. Andrei Blanaru

    It looks like it would bury small companies in paperwork and exonerate big companies of liability. Not good!

    • David Hoffman
      Which portions do you think would create more paperwork than a company should already need to produce? We attempted to restrict the requirements to just those high-level items that any company should have to complete to demonstrate it is behaving responsibly. There is always a risk that the FTC could create additional documentation requirements as part of the guidance or rulemaking, and maybe we should include some language directing the FTC to expressly consider any burdens that documentation requirements would create. Thoughts?

  10. Joseph Jerome
    Intel deserves a round of kudos for putting something out there, encouraging (and responding to) public feedback, and promising a future revision that incorporates suggestions. The advocacy community could learn something from this approach.

    That said, I wanted to raise a few issues I’d like to see more discussion about. There’s a lot in here that’s ultimately qualified. That may be inevitable in any privacy legislation, and presumably, with its broad new grant of rulemaking authority, the FTC will be able to flesh some of this out. For example, to highlight another good element, privacy groups will appreciate a continued commitment to purpose specification in Sec. 4(c), but this is cabined by language that notice of this need not be provided if “impossible or impracticable.” I worry companies can argue this will often be the case, though if I’m being more charitable, Sec. 4(c)(1) is focused more on timing and punts this information to a disclosure somewhere. (It may be a drafting error, but Sec. 4(c)(1) on purpose specification is tied to “notices required by Section 4(b),” which deals with data integrity.)

    Channeling my inner privacy advocate, I worry about putting so much on corporate accountability to get the job done. The focus of the rest of my comments is on this, as well as risk assessments and sensitive data.

    I worry that the bill’s overarching reliance on accountability affords big companies a tremendous amount of discretion and doesn’t provide clear rules for smaller entities. I don’t dismiss internal accountability mechanisms out of hand, but additional process does not necessarily protect people. For as long as I’ve been fortunate to work on privacy issues, I’ve heard companies champion the need to move away from notice-and-choice to use limitations, but there aren’t really any use limitations here.

    Instead, there are consistent uses, which involve a multifactor test that defers to the judgment of companies. (I would be curious to learn more about whether the prohibition against violating state or federal laws or regulations is simply a restatement, or whether we could envision FTC regulations under the draft building on this.) Going back to my time at the Future of Privacy Forum, there was some hope that companies would do a better job articulating both the risks and benefits of data processing. Unfortunately, from the outside, an accountability approach seems to facilitate the ability of companies to minimize or dismiss privacy risks and concoct benefits out of thin air. (Many of the perceived benefits of data processing are often as hypothetical as the risks that privacy advocates are accused of emphasizing.)

    This draft does a great job of highlighting the many varieties of privacy risk, but experience has shown that many of these will be minimized by companies. I’d like to see more discussion in the revision as to what constitutes “substantial” or “foreseeable” privacy risk. Perhaps that is also a question for FTC rulemaking, but substantial and foreseeable are qualifiers that avoid a lot of the risks this draft does a good job identifying.

    The draft also seems to suggest that geolocation/health/biometric/genetic/sexual life data are especially sensitive, with an explicit notice requirement, but there’s not much other instruction as to how accountability mechanisms will account for this sensitivity. I’d like to see more controls around sensitive information, particularly in light of how cavalier companies have been with some of this data. In any event, the current draft seems to acknowledge some data is especially sensitive — or likely to set off individuals’ creepy radar — without doing anything else. CDT’s approach has been to suggest some data practices involving some information should be taken off the table.

    Wrapping up, I applaud the detailed discussion of “automated processing.” The complete absence of the word “algorithm” from the NTIA’s recent framework was disappointing, and dealing with these issues head-on is absolutely necessary for a forward-looking privacy law. I would be curious to know how a system can be determined to be “reasonably free from bias.” Bias issues are inherent in automated decision making, and smarter folks than I will tell you we have not really identified what constitutes “unfair” bias.

    Finally, I also give you props for attempting to scope preemption language. I’d be curious to know what we think is captured by “private contracts based on state law.” One thing that seems to be missing here — for better or worse? — is student privacy laws, which seem to be based on contracting requirements.

    Again, thanks for putting this out. There’s a lot in here, and I wouldn’t be surprised if I’ve incorrectly unpacked what this is trying to do!

  11. Jason Cronk
    I posted this separately on my blog, privacymaverick.com (including hyperlinks), hence the formalism of the language. Basing their legislative proposal on the Fair Information Practice Principles (FIPPs), Intel looks to the past, not the future, of privacy. The FIPPs were developed by the OECD in the 1970s to help harmonize international regulation on the protection of personal data. Though they have evolved and morphed, those basic principles have served as the basis for privacy frameworks, regulations and legislation world-wide. Intel’s proposal borrows heavily from the FIPPs principles: collection limitation, purpose specification, data quality, security, transparency, participation and accountability. But the FIPPs’ age is showing. In crafting a new law for the United States, we need to address the privacy issues of the next 50 years, not the last.

    When I started working several years ago for NCR Corporation I was a bit miffed at my title of “Data Privacy Manager.” Why must I be relegated to data privacy? There is much more to privacy than data, and often controls around data are merely a proxy for combatting underlying privacy issues. If the true goal is to protect “privacy” (not data), then shouldn’t I be addressing those privacy issues directly? The EU’s General Data Protection Regulation similarly evidences this tension between goals and mechanism. What the regulators and enactors sought to rein in with the GDPR was abusive practices by organizations that affected people’s fundamental human rights, but they constrained themselves to the language of “data protection” as the means to do this, leading to often contorted results. The recitals to the regulation mention “rights and freedoms” no less than 35 times. Article 1 Paragraph 2 even states, “This Regulation protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data.” Clearly the goal is not to protect data for its own sake; the goal is to protect people.

    Now, many people whose careers revolve around data-focused privacy issues may question why data protection fails at the task. Privacy concerns existed way before “data” and our information economy. Amassing data just exacerbated the power imbalances that are often the root cause of privacy invasions. For those still unpersuaded that “data protection” is indeed insufficient, I provide four quick examples where a data-driven regulatory regime fails to address known privacy issues. They come from either end of Prof. Dan Solove’s taxonomy of privacy.

    Surveillance – Though Solove classifies surveillance and interrogation under the category of Information Collection, the concern around surveillance isn’t about the information collected. The issue with surveillance is that it invites behavioral changes and causes anxiety in the subject being watched. It’s not the use of the information collected that’s concerning, though that may give rise to separate privacy issues, but rather the act and method of collection. Just the awareness and non-consent of surveillance (the unwanted perception of observation in Ryan Calo’s model) triggers the violation. No information need be collected. Consider store surveillance by security personnel where no “data” is stored. Inappropriate surveillance, such as targeting ethnic minorities, causes consequences (unease, anxiety, fear, avoiding certain normal actions that might merely invite suspicion) in the surveilled population.

    Interrogation – Far from the stereotypical suspect in a darkened room with one light glaring, interrogation is about any contextually inappropriate questioning or probing for personal information. Take my favorite example of a hiring manager interviewing a female candidate and asking if she was pregnant. Inappropriate given the context of a job interview; that’s interrogation. It’s not about the answer (the “data”) or the use of the answer. The candidate needn’t answer to feel “violated” by the mere asking of the question, raising consequences of anxiety, trepidation, embarrassment or more. Again, we find the act and method of questioning is the invasion, irrespective of any data.

    Intrusion – When Pokémon Go came out, fears about what information was collected about players abounded, but one privacy issue hardly on anyone’s radar was the game’s use of churches as places for individuals to train their characters. It turned out some of the churches on the list had been converted to people’s residences, thus inviting players to intrude upon those residents’ tranquility and peaceful enjoyment of their homes. I defy any privacy professional to say that asking any developer about the personal data they are processing, even under the most liberal definition of personal data, would have uncovered this privacy invasion.

    Decisional Interference – Interfering with private decisions strikes at the heart of personal autonomy. The classic examples are laws that affect family decisions, such as China’s one-child policy or contraception in the United States. But there are many ways to interfere with individuals’ decisions. Take the recent example of Cambridge Analytica. Yes, the researcher who collected the initial information shared people’s information with Cambridge Analytica, and that was bad. Yes, Cambridge Analytica developed psychographic profiles, and that was problematic. But what really got the press, the politicians and others so upset was Cambridge Analytica’s manipulation of individuals. It was their attempt, successful or otherwise, to alter people’s perceptions and manipulate their decision to vote and for whom.

    None of the above examples of privacy issues is properly covered by a FIPPs-based data protection regime without enormous contortion. They deal with interactions between persons and organizations, or amongst persons, not personal data. Some may claim that, while true, any of these invasions, at scale, must involve data, not one-off security guards. I invite readers to do a little Gedanken experiment. Imagine a web interface with a series of questions, each reliant on the previous answers. Are you a vegetarian? No? What is your favorite meat: chicken, fish or beef? Etc. I may not store your answers (no “data” collection), but ultimately the questioning leads you to one specific page where I offer you a product or service based on your specific selections, perhaps with discriminatory pricing based on the selection. Here user interface design essentially captures and profiles users, but without that pesky data collection that would invite scrutiny from the privacy office. Some companies may be advanced enough in their thinking to catch this, but in my years of practice most privacy assessments begin with “what personal data are you collecting?”
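
    As a rough, hypothetical sketch of this Gedanken experiment (the questions, branch names and prices below are invented purely for illustration): the branching interface routes the user to a differentially priced offer while never writing an answer anywhere, so an assessment that starts from “what personal data are you collecting?” would find nothing.

      # Hypothetical sketch: answers steer the flow but are never stored.
      QUESTIONS = {
          "start": ("Are you a vegetarian?", {"yes": "veg", "no": "meat"}),
          "meat": ("Favorite meat: chicken, fish, or beef?",
                   {"chicken": "chicken", "fish": "fish", "beef": "beef"}),
      }
      PRICES = {"veg": 12.00, "chicken": 9.50, "fish": 14.00, "beef": 11.00}

      def run_flow():
          node = "start"
          while node in QUESTIONS:
              prompt, branches = QUESTIONS[node]
              answer = input(prompt + " ").strip().lower()
              node = branches.get(answer, node)  # used for routing only, never saved
          # The user lands on a branch-specific price (discriminatory pricing)
          # even though no answer was ever retained anywhere.
          print(f"Your price today: ${PRICES[node]:.2f}")

      if __name__ == "__main__":
          run_flow()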

    Now, I’ll admit I haven’t spent the time to develop a regulatory proposal, but I’d at least suggest looking at Woody Hartzog’s Privacy’s Blueprint for one possible path to follow. Hartzog’s notions of obscurity, trust and autonomy as guiding privacy goals encapsulate more than a data-centric world. But Hartzog doesn’t just leave these goals sitting out there with no way to accomplish them. He presents two controls that would help: signaling and increasing transaction costs. Hartzog’s proposal for signaling is that, in determining the relationship between individuals and organizations and the potential for unfairness and asymmetries (in information and power), judges should look not to the legalese of the privacy notice, terms and conditions or contracts, but to the entirety of the interaction and interfaces. This would do more to determine whether a reasonable user fully understood the context of their interactions.

    Hartzog’s other control, transaction costs, goes to making it more expensive for organizations to commit privacy violations. One prominent example of legislation that increases transaction costs is the US TCPA, which bans robocalls. Robocalling technology significantly decreases the cost of calling thousands or millions of households. The TCPA doesn’t ban solicitation, but it significantly increases the costs to solicitors by requiring a paid human caller to make the call. In this way, it reduces the incidence of intrusion. Similarly, the GDPR’s ban on automated decision making increases transaction costs by requiring human intervention. This significantly reduces the scale and speed at which a company can commit privacy violations and the size of the population affected. Many would counter, as many commenters on any legislative proposal do, that they are concerned about the effect on innovation and small companies. True, increasing transaction costs in the way the TCPA does will increase costs for small firms. That is, after all, the purpose of increasing transaction costs, but the counter-argument is: do you want a two-person firm in a garage somewhere adversely affecting the privacy of millions of individuals? Would you want a small firm without any engineers thinking about safety building a bridge over which thousands of commuters travel daily? One could argue the same for Facebook; they’ve made it so efficient to connect billions of individuals that they simply don’t have the resources to deal with the scale of the problems they’ve created.

    The one area where I agree with the Intel proposal is FTC enforcement. As our de facto privacy enforcer it already has institutional knowledge to build on, but its enforcement needs real teeth, not limp-wristed consent decrees. When companies analyze compliance risk, if the impact of non-compliance is comparable to the cost of compliance, they are incentivized to reduce the likelihood of getting caught, not to actually get into compliance with the regulation. The fine (impact) multiplied by the likelihood of getting fined must exceed the cost of compliance to drive compliance. This is what, at least in theory, the GDPR’s 4% seeks to accomplish. Criminal sanctions on individual actors, if enforced, may have similar results.
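
    A worked example of that arithmetic, with hypothetical numbers chosen purely for illustration: a rational firm complies only when the expected penalty (the fine times the probability of being fined) exceeds the cost of compliance.

      # Hypothetical figures to illustrate fine * P(fined) vs. cost of compliance.
      cost_of_compliance = 5_000_000   # assumed annual cost of a real privacy program
      fine = 20_000_000                # assumed penalty if a violation is caught
      p_fined = 0.10                   # assumed likelihood of detection and enforcement

      expected_penalty = fine * p_fined  # 2,000,000, well below 5,000,000
      print("Comply" if expected_penalty > cost_of_compliance
            else "Cheaper to invest in not getting caught")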

    There are other problems with the FIPPs. They mandate controls without grounding in the ultimate effectiveness of those controls. I can easily technically comply with the FIPPs without meaningfully improving privacy. Mandating transparency (openness in the Intel proposal) without the judicial ability to consider the entirety of the user experience and expectation only yields lengthy privacy notices. Even shortened notices provide less information about what’s going on than users’ reliance on their interactions with the company.

    In high school, I participated in a mock constitution exercise where we were supposed to develop a new constitution for a new society. Unfortunately, we failed and lost the competition. Our new constitution was merely the US Constitution with a few extra amendments. As others have said, we don’t need GDPR-light; we need something unique to the US. I don’t claim Hartzog’s model is the total solution, but rather than looking to the FIPPs, Intel and others proposing legislation should be looking forward for solutions for the future, not the past.

  12. CraigV

    YES!!! I’m ecstatic that Intel is making this such a priority! It is at the heart of a better future.

  13. Doug Egan
    I have only two comments at this time: 1. The supervisory authority for this legislation should be the U.S. Securities and Exchange Commission (SEC), not the Federal Trade Commission (FTC).

    2. This legislation should be more consistent with the General Data Protection Regulation (GDPR), not the Fair Information Practice Principles.

    • David Hoffman
      Doug, can you tell me more about why you think the SEC is the right supervisory authority? I worry that they do not have enough history with consumer protection. In contrast, the FTC has decades of experience enforcing Section 5 of the FTC Act and already has a division of experienced privacy lawyers.

  14. Michele
    I want to see the legislation include that the consumer controls their own data and that it cannot be shared without the consumer’s consent. Also, like the GDPR, if the consumer chooses not to have their data shared, the company must notify the consumer if the data is shared, and with whom, and provide a means to have it all deleted. I do not trust that a company will only “collect data that is relevant to the purposes for which they are to be processed” and will have adequate security safeguards (as can be seen by many data breaches), or will give the consumer access to everything they collect or delete it within a timely manner. The fine for not following the law should be at least 25% of their profits or in the millions, whichever is higher. I found Googling my own name quite the eye opener; much of the information was incorrect and I was shown to be related to people I did not know (an outcome of identity fraud, thanks to some breaches), plus I found some of my relatives, which I have asked to be removed. It would come down, then 3 months later I would do the same search and the same information was up on the same sites.

  15. David Hoffman
    Michele – Excellent points. I also agree that an individual’s consent is important. We attempted to capture that with the way we encourage organizations to seek it, so that the use is automatically captured in the Permitted Use category. At the same time, there will be many situations where consent will not be possible. For example, people often post information to their social media pages about other people they engage with. Would we say that the social media website would have to get consent from the person who is being posted about before having the post appear on the website? If we did that, wouldn’t that unduly restrict free speech? There are many things I like about GDPR (especially how it replaced a patchwork of member state implementations of the Data Protection Directive), but I also think it is critical that we create a law that is unique to the U.S.’s ethos of innovation and entrepreneurship, while still protecting individuals.

  16. Justin
    First, I want to say that, like others, I really appreciate Intel putting this detailed proposal together and creating a forum for discussion. Not enough companies are willing to do this sort of thing so publicly, and Intel deserves praise for it.

    That said . . . I have major concerns about this draft bill, and to be perfectly honest, I would vigorously oppose it if it were introduced in something resembling its current form.

    One of my biggest concerns is that there isn’t any notion of reasonable data minimization or tailored collection in the bill, nor are there any provisions for choice or control (whether opt-in or opt-out). As a result, the bill doesn’t do anything at all to limit (or let consumers limit) data collection in the first place. So long as data collection is related to or “consistent with” (?!) something mentioned somewhere in a privacy policy, it’s all good. In some cases, a company may be required to provide explicit notice upfront (though only after a subjective assessment of whether the collection entails a “significant privacy risk”), but even then, the consumer is just presented with a take-it-or-leave-it offer — there’s nothing to address or limit secondary collection/use/retention/sale that isn’t directly related to the functioning of the product.

    Instead, the only real substantive right for consumers is the ability to retroactively request deidentification of data, but only if (1) that data is publicized or sold to third parties AND (2) the company determines the availability of the data poses a significant privacy risk AND (3) the company determines that that risk is “disproportionate” to any (undefined) public benefit of the availability of the data. Otherwise, your only right is “reasonable” access and a right to correct. I’m sorry, but those substantive protections are far too flimsy to provide sufficient privacy protection.

    In lieu of substantive rights and limitations, the bill focuses too much on accountability and privacy compliance programs. Without robust substantive protections, however, those process requirements don’t do consumers much good. Rather they merely impose costs without a lot of corresponding consumer benefit: yes, you have a program in place to monitor compliance, but you don’t have to comply with much! While there may be some benefit to mandating a degree of circumspection and planning, process requirements are no substitute for actual rules and responsibilities.

    The enforcement provisions also have significant weaknesses, though there is some ambiguity, so perhaps I am just misunderstanding. It appears that merely certifying the existence of an accountability program would insulate a company from civil penalties for prohibited behaviors (which are few anyway!) unless the FTC provides the target a written warning of repeated violations. The FTC is already criticized today for its inability to obtain penalties from first-time offenders; this recreates that prohibition in a different form. Bringing a criminal case for false certification will be extremely difficult and resource-intensive for DOJ, and it would (and indeed should) only happen in the most extreme of cases. Outside of extreme circumstances, commercial privacy law is best left to civil, not criminal, law. I’m a little confused in that the safe harbor doesn’t apply to (6)(b)(4), which treats all violations as a violation of a Section 5 trade regulation (meaning they trigger separate penalty liability under Section 5), but I don’t believe that is the intent of the bill. (I will not get into the lack of private enforcement rights, but that’s also a major weakness.)

    FWIW, there are certainly some things I like about the bill. I very much like the bifurcated privacy policy in 3(f)(2) — a simple, consumer-focused part that explains how people can access/correct/obscure data, and then a very detailed one that obligates companies to provide “a complete description of the covered entity’s collection and processing of personal data.” That’s a significant advance for transparency, and the two-stage policy structure is probably the best treatment of privacy policy requirements I’ve seen (though I am worried that a lot of processing can be laundered through “consistent practices” without public transparency and accountability). While I disagree with the risk-based approach for privacy, the risk-based approach makes more sense for a *security* regime, so I like the intent of the affirmative security protections in 3(e) — though the use of “ensure” in 3(e)(1) could arguably be read to impose a strict liability regime, which I doubt is your intent. And I certainly approve of the grant to the FTC of rulemaking authority as well as substantially greater staff (though I might route some of the resources to OTECH or a Bureau of Technology rather than just providing for “project management, technical, and administrative support positions” to DPIP in 9(b)).

    I also won’t get into the bill’s sweeping preemption provisions, but given the weak substance of the bill, it makes the preemption provisions even more difficult to swallow (in that it would preempt the stronger (in many ways) CCPA and other state protections).

    Anyway, take this feedback in the spirit of an open and frank dialogue about how to write privacy law. It’s a very difficult endeavor, so again I applaud Intel’s public effort. And apologies in advance for anything I have mischaracterized or misunderstood — please feel free to correct anything you think I may have gotten wrong.

  17. Mike P
    Will this proposed bill end up covering much of the intent of the GDPR in the EU or the CCPA in California, set to take effect in 2020? I did not see anything about the right to erasure in it.

