Privacy Risks

Framing privacy risk in terms of the broad range of adverse consequences to individuals and to society, as this proposed bill does, opens the door to thoughtful, ethical system design. The specified privacy risks are thoughtful and generous to individuals and consumers. In an era of smart TVs, smart refrigerators, smart doorbells, and alarm systems, the inclusion of thoughtful risk mitigation to avoid adverse consequences that affect an individual’s private life is welcome. Information technologies are increasingly being placed in homes, where individuals have traditionally had a basic expectation of privacy.

From a privacy engineering perspective, asking engineers to reason about and design systems that avoid causing “psychological harm” or “significant inconvenience or loss of time” is important. But how can an engineer reason about all the possible ways in which a given technology may cause “psychological harm” to such a diverse population of users? I can imagine some engineers cringing at the sight of such “moralistic” requirements. Codifying agreed-upon privacy risks in law, however, would make explicit the need for engineers to think ethically about the goals and requirements that a system should achieve.

Intel’s leadership in drafting a bill that supports innovation while also seeking privacy protections, in an effort to start a dialogue, should be commended.

5 comments

  1. David Hoffman
    This is a critically important topic and comment. I have two thoughts: 1. The beauty of privacy by design, when done right, is that it creates a process for a diverse group of stakeholders in a company (engineers, lawyers, public policy experts, business people) to ask questions about the impact of the technology. While engineers may not be able to code for “societal harm,” I do think there is value in having them be part of an internal discussion about what potential societal harm might be created by what they are designing.

    2. There also should be a role for both the FTC and NTIA at Commerce. I have always thought legislation should be flexible and written at a high level to apply to all situations. After that, there is a role for FTC regulations, FTC guidance, NTIA best practices, and NIST standards. The result is a layered approach to privacy regulation, with differing amounts of flexibility for changes to be made over time.

  2. Paula Bruening
    I appreciate the points raised in this comment, particularly that addressing certain kinds of intangible, subjective privacy risks poses serious challenges, especially for engineers whose orientation to problem-solving may not lend itself to this kind of analysis. However, as David points out, assessing risks as part of privacy by design should be a process undertaken by diverse company personnel. In my work with SMEs, I find that it provides an opportunity for employees to gain better insight into how data is used, the risks that processing raises, and what considerations go into responsible decisions about data use – all of which benefits both the company (from a risk-management standpoint) and the consumer (from the perspective of privacy). Articulating and understanding the nature of privacy risk is an issue that policymakers have long struggled with. Laying down markers in legislation that drive that discussion would be helpful; however, I would note that SMEs in particular need practical guidance, and the role of the agencies in providing it will be critical.

  3. Annie Anton
    I agree that having engineers participate in the discussion about what potential harm might come from the systems they design is critical to the ethical design of systems. The Association for Computing Machinery (ACM) updated its Code of Ethics in July of this year. Unfortunately, a recent study by Dr. Emerson Murphy-Hill’s research team at North Carolina State University shows that the code of ethics does not appear to affect the decisions made by software developers. The study was presented at the 2018 ACM Symposium on the Foundations of Software Engineering. Participants (105 U.S. software developers with at least 5 years of experience and 63 software engineering graduate students) were asked to read 11 scenarios encompassing different real-life ethical challenges and indicate how they would respond to each. Reading the ACM Code of Ethics before responding to the scenarios yielded no significant difference in the results.

    The bleak findings of this recent study suggest that it’s not just a matter of engaging engineers in discussions “about what potential societal harm might be created by what they are designing.” The problem is much more fundamental than that. It is truly imperative that we find ways to impress upon software engineers the importance of ethical design. As an engineer, and not a lawyer, I wonder whether federal privacy legislation could help with that.

  4. Marty Abrams
    Four years of working with stakeholders on ethical assessments in Canada, Europe, and Hong Kong has provided some sense of how one might build ethics in. The process begins with organizational values that link to societal values. These cascade down to principles and then to internal codes. The codes define the “by design” process. Where data uses go beyond common understanding, there is an assessment process that is independent of the development process. Lastly, both development and assessment are subject to auditable controls. The process can be scalable.

  5. Pam Dixon
    Data ethics has a last-mile problem. That is, there is a meaningful gap between the ideas enshrined in ethical principles and the day-to-day of running a business. “Do no harm” is a revered principle in human subject research; it took the Common Rule to implement this and other principles in practical language for covered entities in the US. Similarly, data ethics requires practical governance to put ethical principles like accountability, transparency, and fairness into practice on a day-to-day basis.

    We do not need to reinvent the wheel here; much work was done by Elinor Ostrom on improving the institutional design of entities tasked with implementing protection frameworks for common-pool resources. Ostrom worked in environmental systems where the common-pool resources were water, a bay, and so on. In the discussion here, data is the common-pool resource. Ostrom articulated eight general principles gleaned from a lifetime of work and study:

    - Rules are devised and managed by resource users.
    - Compliance with rules is easy to monitor.
    - Rules are enforceable.
    - Sanctions are graduated.
    - Adjudication is available at low cost.
    - Monitors and other officials are accountable to users.
    - Institutions to regulate a given common-pool resource may need to be devised at multiple levels.
    - Procedures exist for revising rules.

    (The Commons in the New Millennium, Elinor Ostrom et al.)

    These governance principles bear consideration as practical steps that can be adapted to take ethical ideas that last mile.