Topic

The Safe Harbor

Comment from Joe Jerome over social media.


2 comments

  1. David Hoffman
    One of the comments that has come in over social media (thank you, Joe Jerome) is that it seems like the safe harbor allows companies to “buy their way out of penalties with a privacy program”. The intent was not to allow companies to “buy their way out”, but instead to reflect the reality that privacy issues can happen no matter how much a company works to demonstrate responsible data handling practices. We wanted to recognize that fact, provide a carrot for companies to take the extra step of having a corporate officer conduct a review of the program, and still allow the FTC to take away the safe harbor status if the company is a repeat offender. My first-hand experience at Intel is that it makes a big difference to have corporate officers agree to sign a certification, and it is far from just a paper exercise. I borrowed the language in the bill from Sarbanes-Oxley. Intel holds that privacy is a fundamental right, so it seems appropriate to use some of the same standards we use for responsible handling of financial reports. Thoughts?

  2. Peter Swire
    Before I saw this item I had already commented on the similarity to Sarbanes-Oxley in the thread on Safe Harbor Certification. I agree with David Hoffman’s response to Joe Jerome – Sarbanes-Oxley provides an example of a strict regulatory regime, and certifications have played a central part in making it a strict regime.

Topic

Defining Personal Information

Along with many others, I have sought to make the case that there are many stages of personal or non-personal information, and that laws should recognize different obligations for different types of data.  A definition that is very broad risks setting infeasible or unwise requirements for data that is low risk and high utility, while… Read more »


2 comments

  1. Peter Swire
    Invitation to Jules Polonetsky, Omer Tene, or John Verdi of Future of Privacy Forum – David Hoffman’s language seems to overlap with efforts FPF has been making on this topic. Could you chime in on any additional points from those FPF efforts?

  2. Omer Tene
    Peter, not sure if your comment here is pre or post Jules’ comment, which outlines FPF efforts in this respect. Intel’s definition is in line with Danny’s preference for a briefly stated, principle-based statute that leaves room for agency/judicial interpretation. It also reflects an EU-style definition that has gained much traction around the world. (I’d just tweak the term “location data” by adding the word “precise” before it.)
    In a way, it also tracks FPF efforts since it distinguishes between direct identifiers (“a name, an identification number, [precise] location data, an online identifier”) and a collection of indirect identifiers (“physical, physiological, genetic, biometric, mental, economic, cultural or social identity”), which could be used for identification.
    At FPF, we have found that policymakers, regulators, businesses and advocates continue to struggle with this high-level definition, which inevitably leads to “all or nothing” debates about the futility or beauty of de-id.
    We therefore try to slice it thinner, to unveil the complete spectrum of identifiability (or de-id), and to calibrate different obligations to various intermediate states. Perhaps Danny will say this is a task better left for the courts. But our experience is that organizations are yearning for guidance on this most fundamental – and controversial – piece of the framework.
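
A minimal sketch of what “calibrating different obligations to various intermediate states” of identifiability could look like in practice. The tiers, obligation names, and mapping below are illustrative assumptions for discussion, not terms drawn from the Intel bill or from FPF’s work:

```python
# Hypothetical identifiability tiers and the duties that might attach to each.
# All tier names and obligations are assumed for illustration only.
from enum import Enum, auto

class Tier(Enum):
    EXPLICITLY_PERSONAL = auto()  # direct identifiers present (name, ID number, precise location)
    PSEUDONYMOUS = auto()         # indirect identifiers only; re-identification key held separately
    DE_IDENTIFIED = auto()        # identifiers removed or transformed; residual risk controlled
    ANONYMOUS = auto()            # no reasonable means of re-identification

# Obligations thin out as data moves along the spectrum (illustrative mapping).
OBLIGATIONS = {
    Tier.EXPLICITLY_PERSONAL: {"consent_or_risk_analysis", "access_and_correction",
                               "security", "breach_notification"},
    Tier.PSEUDONYMOUS:        {"security", "breach_notification",
                               "no_reidentification_commitment"},
    Tier.DE_IDENTIFIED:       {"no_reidentification_commitment",
                               "contractual_controls_on_recipients"},
    Tier.ANONYMOUS:           set(),  # outside the statute's scope
}

def obligations_for(tier: Tier) -> set:
    """Look up the duties that attach at a given point on the identifiability spectrum."""
    return OBLIGATIONS[tier]

print(sorted(obligations_for(Tier.PSEUDONYMOUS)))
```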

Topic

Privacy Risks

Thinking about privacy risks with respect to the broad range of adverse consequences to individuals as well as society as expressed in this proposed bill opens the door for thoughtful ethical design in systems. The specified privacy risks are thoughtful and generous to individuals and/or consumers. In an era of SmartTVs, smart refrigerators, smart doorbells… Read more »


5 comments

  1. David Hoffman
    This is a critically important topic and comment. I have two thoughts: 1. The beauty of privacy by design, when done right, is that it creates a process for a diverse group of stakeholders in a company (engineers, lawyers, public policy experts, business people) to ask questions about the impact of the technology. While engineers may not be able to code for “societal harm”, I do think there is value in having them be part of a discussion internally about what potential societal harm might be created by what they are designing.

    2. There also should be a role for both the FTC and NTIA at Commerce. I have always thought legislation should be flexible and written at a high level to apply to all situations. After that, there is a role for FTC regulations, FTC guidance, NTIA best practices, and NIST standards. This creates a layered approach to privacy regulation, with differing amounts of flexibility for changes to be made over time.

  2. Paula Bruening
    I appreciate the points raised in this comment, particularly that addressing certain kinds of intangible, subjective privacy risks raises serious challenges, especially for engineers whose orientation to problem-solving may not lend itself to this kind of analysis. However, as David points out, assessing risks as part of privacy-by-design should be a process that is undertaken by diverse company personnel. In my work with SMEs, I find that it provides an opportunity for employees to gain better insight into how data is used, the risks processing raises, and what considerations go into responsible decisions about data use – all of which benefits both the company (from a risk management standpoint) and the consumer (from the perspective of privacy). Articulating and understanding the nature of privacy risk is an issue that policymakers have long struggled with. Laying down markers in legislation that drive that discussion would be helpful; however, I would note that SMEs in particular need practical guidance – the role of the agencies in providing that will be critical.

  3. Annie Anton
    I agree that having engineers participate in the discussion about what potential harm might come from the systems they design is critical to the ethical design of systems. The Association for Computing Machinery (ACM) updated its Code of Ethics in July of this year. Unfortunately, a recent study by Dr. Emerson Murphy-Hill’s research team at North Carolina State University shows that the code of ethics does not appear to affect the decisions made by software developers. The study was presented at the 2018 ACM Symposium on the Foundations of Software Engineering. Participants (105 U.S. software developers with at least 5 years of experience and 63 software engineering graduate students) were asked to read 11 scenarios encompassing different real-life ethical challenges and indicate how they would respond to each. Reading the ACM Code of Ethics before responding to the scenarios yielded no significant difference in the results.

    The bleak findings of this recent study suggest that it’s not just a matter of engaging engineers in discussions “about what potential societal harm might be created by what they are designing.” It’s much more fundamental than that. It is truly imperative that we find ways to impress the importance of ethical design on software engineers. As an engineer, and not a lawyer, I wonder whether federal privacy legislation could help with that.

  4. Marty Abrams
    Four years of working with stakeholders on ethical assessments in Canada, Europe and Hong Kong has provided some sense of how one might build ethics in. It first begins with organizational values that link to societal values. These cascade down to principles and then internal codes. The codes define the “by design” process. Where data uses go beyond common understanding, there is an assessment process that is independent of the development process. Lastly, both development and assessment are subject to auditable controls. The process can be scalable.

  5. Pam Dixon
    Data ethics has a last-mile problem. That is, there is a meaningful gap between ideas enshrined in ethical principles and the day-to-day of running a business. “Do no harm” is a revered principle in human subject research; it took the Common Rule to implement this and other principles in practical language for covered entities in the US. Similarly, data ethics requires practical governance to put ethical principles like accountability, transparency and fairness into practice on a day-to-day basis.

    We do not need to reinvent the wheel here; much work was done by Elinor Ostrom on improving the institutional design of entities tasked with implementing protection frameworks for common-pool resources. In her case, Ostrom worked in environmental systems where the common-pool resources were water, a bay, etc. In the discussion here, data is the common-pool resource. Ostrom articulated 8 general principles gleaned from a lifetime of work and study:

    -Rules are devised and managed by resource users.
    -Compliance with rules is easy to monitor.
    -Rules are enforceable.
    -Sanctions are graduated.
    -Adjudication is available at low cost.
    -Monitors and other officials are accountable to users.
    -Institutions to regulate a given common-pool resource may need to be devised at multiple levels.
    -Procedures exist for revising rules.

    (The Commons in the New Millennium, Elinor Ostrom et al.)

    These governance principles bear consideration as practical steps that can be adapted to take ethical ideas that last mile.

Topic

Thinking and learning with data

Privacy is not just about personal protection, it is about the appropriate use of data.  Americans’ ability to think and learn with data has been a difference maker for innovation, and that innovation has been beneficial beyond our borders.  New U.S. legislation needs to protect people in what has become an observational world.  However, it… Read more »


4 comments

  1. David Hoffman
    Marty – I completely agree with you. The work you have done over the past 15 years on accountability has provided a path forward. How do you feel about the level of detail we have in the accountability sections of the bill? We wanted to provide enough information to make clear the obligations to create a robust privacy program, but also provide enough flexibility for different industry sectors or sizes of organizations (small and medium sized companies).

  2. Marty Abrams
    Accountability kicks in both when we are thinking and learning with data, and when we are acting with data. The issues when we are thinking with data relate to data security and the integrity of the research one is conducting. Typically the risk to others is minimal if data is used in a secure fashion. In work that I did with Paula Bruening and Meg Leta Jones in 2013, we demonstrated that thinking with data is very much like research. The knowledge created counterbalances the risk that one is using data where others have interests or maybe even rights. This is very similar to the balancing process related to legitimate interests in Europe. When acting with data, all the traditional privacy principles kick in. I believe that accountability is really connected to the section on consistent uses, and I look there, not just in the explicit accountability section, for accountability in effect. European law says that research is always a compatible purpose. Is that the same with consistency? I see the consistency process as facilitating thinking and learning with data. I would like to hear from others.

  3. David Hoffman
    That is exactly what we were going for. I didn’t want to create a separate “lawful basis” test like GDPR, but instead believe that if the scope of the “personal data” and “privacy risk” definitions is done right, you can build the right risk analysis into the Consistent Uses section. It should have the effect of encouraging companies to get meaningful consent (because you then do not have to complete the risk analysis), but also provides a mechanism for the growing number of situations where consent is not practicable or possible.
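
A hypothetical sketch of that decision flow, reading the Consistent Uses idea as “meaningful consent short-circuits the analysis; otherwise the use must pass a privacy-risk assessment.” The names, numeric risk scale, and threshold below are assumptions made up for illustration, not figures from the bill:

```python
from dataclasses import dataclass

@dataclass
class ProposedUse:
    description: str
    has_meaningful_consent: bool
    privacy_risk_score: float  # assumed scale: 0.0 (negligible) to 1.0 (severe)

RISK_THRESHOLD = 0.5  # illustrative cut-off, not taken from the bill

def use_is_permitted(use: ProposedUse) -> bool:
    """Consent first; failing that, the use must clear the risk analysis."""
    if use.has_meaningful_consent:
        return True  # no risk analysis required
    return use.privacy_risk_score < RISK_THRESHOLD

# A use without consent but with low assessed privacy risk is still permitted:
print(use_is_permitted(ProposedUse("aggregate telemetry research", False, 0.2)))  # True
```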

  4. Pam Dixon
    Marty, you have been doing wonderful work on accountability. It’s an important conversation. One of the important elements I hear you discussing in various fora is that data is not just data – it now creates new data and new knowledge. I have appreciated your work in this area. I just posted some language ideas regarding knowledge creation in the topic White Box Analytics…feedback welcomed!

Topic

Welcome!

Welcome to a new form of participatory democracy.


One comment on “Welcome!”

  1. David Hoffman
    Intel has advocated for more than 15 years for Congress to pass comprehensive U.S. privacy legislation. We do not believe there has to be a trade-off between privacy and innovation. Effective privacy regulation is critical to allow technologies like artificial intelligence to help solve the world’s greatest challenges. The combination of advances in computing power, memory and analytics creates the possibility that technology can make tremendous strides in precision medicine, disease detection, driving assistance, increased productivity, workplace safety, education and more. At Intel we are developing many of these technologies and are focused on integrating artificial intelligence capability across the global digital infrastructure. At the same time, we recognize the need for a legal structure to prevent harmful uses of the technology and to preserve personal privacy so that all individuals embrace new, data-driven technologies. At Intel we know that privacy is a fundamental human right, and robust privacy protection is critical to allow individuals to trust technology and participate in society.

    We have created our draft proposed law and want to use this website to facilitate discussion. Too often, conversations about the language in U.S. legislation take place solely behind closed doors. We want everyone to participate in the process. We also want to know what the best privacy experts have to say about our proposal. We have asked these experts to post to a section of our website. We are either compensating the experts for their time or, in the case of those from non-profits and advocacy entities, providing some general financial support for their organizations. We have asked the experts to give us critical feedback, especially on the portions of the bill with which they disagree. We have promised that we will not edit the content of their posts. I will also participate in that discussion, and will bring in comments from our public discussion part of the website.

    After approximately two weeks, we will pause the discussions and will update the proposed bill based on the feedback. We look forward to the dialogue and creating the best bill possible to optimize for both innovation and privacy.