
An Architecture for Privacy in a Networked Health Information Environment

Published online by Cambridge University Press: 01 October 2008



Special Section: The Newest Frontier: Ethical Landscapes in Electronic Healthcare

Copyright © Cambridge University Press 2008

As we move toward the creation of a networked health information environment, the potential for privacy intrusions increases, with potentially devastating impact on the quality of and access to healthcare. This paper describes the risks we face and proposes a framework to minimize those risks. In particular, it proposes nine principles to protect privacy in an information age.

In presenting these principles, we begin from the premise that privacy protections should not be included in health information technology (health IT) systems as an afterthought, but as a core design—or architectural—foundation. Hence the principles we present are described as “architectural principles”: while ensuring that the tremendous potential benefits of technology are realized, they put privacy at the very heart of health IT.

We first discuss the importance of privacy. We employ conceptual and survey evidence to explain what we risk losing if we lose privacy. Then we consider the concept of privacy further and pay particular attention to the way notions of privacy have evolved over time, suggesting that privacy today (in a digital age) may require different protections than it did in the past. Next we evaluate some of those differences; we present six risks that are specific to—or at least aggravated by—the digital age. Finally, we present nine architectural principles to protect privacy. Although these principles are designed specifically to promote privacy in health systems and applications, they can also be applied more generally to other realms where IT and electronic data are extensively deployed.

What Is at Stake?

Individual Liberty and Autonomy: An International Approach

In many countries and treaties, privacy is considered a fundamental right, equivalent to other basic individual liberties such as freedom of speech and thought. Both the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights, for example, recognize the right to privacy. In these treaties, privacy is recognized as a form of autonomy—a way to ensure protection from “arbitrary interference”Footnote 1 by the state or other entities. In addition, several broad, international principles exist that have been adopted (and adapted) by a variety of countries. For example, as we shall see below, the Organization for Economic Co-operation and Development (OECD) led the way in defining several principles for privacy protection, and the European Union subsequently adopted these principles in its 1995 Directive on Protection of Personal Data, as did individual member countries. Interestingly, this directive differs significantly from the U.S. approach in that it takes a broad, omnibus approach to privacy protection rather than the sector-specific (and often state-specific) approaches adopted in the United States.

Understood in this broad way, as a fundamental human right, a violation of privacy can be considered a serious violation of an individual's basic rights—equivalent, perhaps, to imprisonment without trial or the denial of free expression. Naser and AlpertFootnote 2 point out that this violation is particularly serious in a medical context, where patients are often already somewhat helpless and in a position of dependence.Footnote 3 As they write: “When patients . . . disclose intimate secrets about themselves they also become more vulnerable. Patients who are ill already have a diminished sense of autonomy” (p. 22).Footnote 4 In such instances, robbing individuals of their privacy is tantamount to a serious violation of their individual liberty.

Privacy Protective Behavior in a Medical Context

In addition to violating individual rights, the loss of privacy in a medical context has additional negative consequences, some of which can be understood as collective harms. Social scientists have frequently established that surveillance—not just in the medical field, but across fields—can have a “chilling effect” on individual behavior.Footnote 5 In the medical field, this chilling effect can lead to what experts call “privacy protective behavior” (p. 49).Footnote 6 Such behavior includes hiding evidence of preexisting conditions from doctors or insurance companies, paying out of pocket for treatment, or simply avoiding treatment altogether.

Goldman, in a paper on the importance of medical privacy, lists four negative consequences of such privacy protective behavior:

(1) The patient may receive poor-quality care, risking undetected and untreated conditions.

(2) The doctor's abilities to diagnose and treat accurately are jeopardized by a lack of complete and reliable information from the patient.

(3) A doctor may skew diagnosis or treatment codes on claim forms, keep separate records for internal uses only, or send on incomplete information for claims processing to encourage a patient to communicate more fully.

(4) The integrity of the data flowing out of the doctor's office may be undermined. The information the patient provides, as well as the resulting diagnosis and treatment, may be incomplete, inaccurate, and not fully representative of the patient's care or health status. (p. 49)Footnote 7

Survey Evidence

These negative consequences are not mere hypotheticals—a large number of surveys over the years have consistently shown that the public really is concerned about breaches in confidentiality and that privacy protective behavior is a very real phenomenon. For example, as reported by Goldman and Hudson,Footnote 8 a 2000 survey of Internet users found that 75% of respondents were worried that health sites shared information without consent and that a full 17% would not even seek health information on the Web due to privacy concerns. Another poll, also conducted in 2000, found that 61% of Americans felt that “too many people have access to their medical records.”Footnote 9

The surveys also show that such concerns frequently lead to privacy protective behavior. For example, in a survey conducted by the California HealthCare Foundation, more than one out of six adults said they had done something “out of the ordinary” to hide private medical information.Footnote 10 In another survey conducted by Harris in 1993, 11% of respondents said they sometimes chose not to file an insurance claim, and 7% said they sometimes neglected to seek care in order to avoid damaging their “job prospects or other life opportunities” (p. 50).Footnote 11

Such behaviors, as noted above, do not just cause potential damage to an individual patient's health. They also impose a collective burden, leading to greater costs and public health problems that an already overstretched health system can ill afford.

Understanding Health Privacy: Definitions and Underlying Concepts

Understanding the concept of privacy is essential to helping us design better policies, practices, and technologies to protect consumer and citizen privacy. The trouble, as one observer points out, is that “privacy is a notoriously vague, ambiguous, and controversial term that embraces a confusing knot of problems, tensions, rights, and duties” (pp. 11–2).Footnote 12 In attempting to define privacy, one expert resorts to a version of Justice Potter Stewart’s famous definition of pornography: “You know it when you lose it” (p. 101).Footnote 13

One of the earliest definitions of privacy was published in 1890, in a Harvard Law Review article by Samuel Warren and Louis Brandeis. In that article, entitled “The Right to Privacy,” Warren and Brandeis argued that privacy could be defined as “the right to be let alone” (p. 193).Footnote 14 They were writing about the modern press, and particularly the instantaneous photograph, which they felt invaded “the sacred precincts of private and domestic life” (p. 194).Footnote 15

More than a hundred years later, we continue to grapple with difficult privacy problems raised by technology. The now-classic definition of privacy in the information age was supplied by Alan Westin, who in his 1967 book, Privacy and Freedom, argued that “[p]rivacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others” (p. 7).Footnote 16 Westin’s definition of privacy is probably the most prevalent and widely accepted today. It is sometimes referred to as “informational privacy.”Footnote 17 In 1971, the Harvard professor Arthur Miller predicted that all individuals would eventually be the subjects of a “womb-to-tomb dossier” (p. 138).Footnote 18 Westin himself argued that, in the information era, every individual is accompanied by a “data shadow” that can reveal even the most intimate and apparently mundane details about his or her life.Footnote 19

Such ideas of privacy permeate a number of fields. In recent years, they have become increasingly important in the field of health, where privacy issues have emerged as a major concern. In a wide-ranging discussion of the literature on health and privacy, Sheri Alpert identifies at least two distinct concerns. First, she finds a recurring concern in the literature over the potential “harm that can befall patients if their medical information is disclosed either in ways that exceed their expectations or if information reaches the hands of people who should not have access to it” (p. 304).Footnote 20 She cites a number of authors expressing concern over such potential misuse and argues that the primary purpose for a patient's medical data is—and should be—“the clinical diagnosis, treatment, and care of that patient” (p. 305).Footnote 21

Alpert also identifies a second, and somewhat contradictory, theme: It emphasizes the tremendous potential benefits that can be accrued through medical data. Briefly, it is anticipated that the use of medical data, particularly when enabled by electronic health records, has the potential to transform the way patients receive care, and to introduce a far greater degree of efficiency and effectiveness in our nation's medical care system.Footnote 22

One of the central challenges confronting privacy advocates is to find a balance between these two themes. Just as it is essential to protect the confidentiality of information, so it is essential for our privacy and information laws to maximize the potential benefits that medical data can offer. The solution to achieving this balance lies in well-defined principles that protect information while at the same time permitting it to be shared in a meaningful and productive way. Such principles are outlined further below.

Health Privacy in a Digital Networked Environment: What Is Different?

Although a digital and networked environment offers much potential and many new opportunities for stronger privacy protections, it also poses several new challenges. If we are to develop effective solutions, it is essential to better understand these challenges.

New Environment, New Challenges

Commercial misuses of data. Perhaps the most serious—and probably the most pervasive—privacy violations in the information age stem from the potential for commercial misuse of data. In recent years, an extensive data market has developed, driven largely by data aggregators, or “data brokers,” who repackage and sell information without the knowledge or consent of the original information owner.Footnote 23 Commercial misuses of data can have several serious consequences for individuals, leading, for example, to a denial of insurance coverage or credit or to invasive unsolicited marketing programs.

Government misuses of data. In addition to commercial misuses of data, the state has also on occasion abused personal data. In 1998, police in Virginia, investigating a car theft from a parking garage near a drug treatment center, collected 200 medical records as part of their investigation; they later acknowledged their actions as an unnecessary violation of patient privacy. State welfare agencies and the Immigration and Naturalization Service have also used welfare and immigrant health records in administering their respective programs.Footnote 24

One particularly serious emerging category of risk stems from the increasing capability of governments to engage in surveillance activities. A recent prominent report argues that individual pieces of information on travel and other practices that are currently being collected could lead to an international surveillance framework that “dwarfs any previous system and makes Orwell's book Nineteen Eighty-Four look quaint.”Footnote 25 The report also points out that much of this information is collected in the name of national security. Balancing legitimate national security needs with strong privacy protections is likely to be a central challenge in coming years.

Criminal misuses of data. Both commercial and government uses of data have legitimate purposes; generally, misuses and privacy violations represent the exception rather than the norm. But digital data is also susceptible to criminal misuse, which can result in serious violations of privacy, considerable financial expense, and, as we shall see below, even physical injury and death.

Identity theft represents a particularly serious problem. In 2003, the Federal Trade Commission (FTC) estimated that 10 million Americans (nearly 5% of the adult population) were victims of some form of identity theft.Footnote 26 According to the Federal Bureau of Investigation (FBI), the Internet Crime Complaint Center (IC3), a joint project between the FBI and the National White Collar Crime Center, received more than 100,000 complaints regarding identity theft in the period between its opening in 2000 and 2005. It estimated the costs of identity theft at nearly $40 billion annually (not including credit card fraud).Footnote 27

Security breaches. Security breaches represent a growing category of risk.Footnote 28 Although not unique to the information age, digital forms of storage do possess particular vulnerabilities, including the relatively greater ease of remotely hacking a network, the ease of replication, and the sheer volume of data, which makes it harder to keep track of information. These and other factors make it far easier to steal or criminally acquire digital data. Indeed, a number of recent examples suggest that criminals are well aware of network vulnerabilities and that criminal acquisition of data is a growing risk.

Data quality issues. A digital environment also introduces potential data quality concerns—an issue closely related to current privacy concerns. For example, the wrongful inclusion of citizens on national no-fly lists or other terror databases can expose those individuals to wrongful intrusion by a range of law enforcement agencies. Likewise, incorrect medical data can lead to the wrongful denial of insurance coverage to affected individuals. Although such risks remain relatively rare, they highlight the need not only to build strong privacy protections into network architecture, but also to develop remedies and means of appeal against data quality issues.

Harmful social consequences. Privacy violations can also impose very real social costs on individuals, making it difficult for them to live meaningful lives within their communities. One notable example occurred in 1998, when a San Diego pharmacist revealed a man's HIV-positive condition to his ex-wife. The man, who was locked in a custody battle with the woman in question, ultimately settled the case rather than face the stigma of his condition being made public.Footnote 29

The need to control such social consequences becomes apparent when we consider that societies use such “shaming” techniques as regular tools for law enforcement procedures. Consider, for example, the widespread use of so-called Megan's Laws to maintain public sex offender registries. The use of such legitimate (and legal) shaming techniques makes it essential to draw up strict rules to differentiate between acceptable disclosures of personal information in the public domain and unacceptable disclosures.

Defining a Comprehensive Privacy Architecture: Establishing Trust in the Network

This section presents nine architectural principles that are designed to address privacy risks in a structural and systematic manner. To an extent, these principles are derived from existing laws, statutes, and Fair Information Practices.Footnote 30 In many cases, however, we have updated or modified these principles in order to strengthen their architectural nature. In addition, we have tried to ensure that the nine principles do not rely solely on law or technology, but that they employ a variety of levers—including technology, policy, and social and cultural forces—to ensure privacy protection.

Nine Architectural Principles for a Networked Environment

1) Openness

Perhaps the most important mechanism for privacy protection in the information age, this first principle stipulates that there should be a broad and universal practice of transparency in the way data are handled. Citizens should be able to establish what information exists about them in the data market and in government databases. They should also be able to track how that information is used and by whom, and they should be able to control how that information is disseminated. Individual choice is critical; control of information rests with citizens, not with data aggregators or data users.

It is also essential that citizens be aware of how they can exert such control. Having strict laws to ensure transparency and openness serves little purpose if citizens do not know how they can find out where information about them exists and how they can control who has access to that information. Ideally, patients should be able to give their informed consent to any use of their information.Footnote 31 Outreach and education regarding privacy are therefore critical, as is the role of civil society and consumer groups in facilitating such efforts. One possible policy option is to require all data collectors and aggregators to register with a government agency (probably the FTC), and for that agency to maintain a secureFootnote 32 “one-stop” web site where citizens can view their data shadow.
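To make this principle concrete in code: the sketch below shows one way a “one-stop” disclosure registry might record every release of a citizen's data and let that citizen query it. It is a minimal, purely illustrative Python sketch; the names (`DisclosureRegistry`, `Disclosure`) are our own hypothetical constructs, not part of any existing system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Disclosure:
    """A single release of one citizen's data to one recipient."""
    citizen_id: str
    recipient: str       # who received the data
    purpose: str         # stated reason for the release
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


class DisclosureRegistry:
    """Hypothetical 'one-stop' registry: every data release is logged,
    and citizens can list everything shared about them."""

    def __init__(self):
        self._log: list[Disclosure] = []

    def record(self, citizen_id: str, recipient: str, purpose: str) -> None:
        self._log.append(Disclosure(citizen_id, recipient, purpose))

    def disclosures_for(self, citizen_id: str) -> list[Disclosure]:
        """Let a citizen see exactly who received their data, and why."""
        return [d for d in self._log if d.citizen_id == citizen_id]


# Example: a citizen inspects their own "data shadow."
registry = DisclosureRegistry()
registry.record("patient-17", "Acme Insurance", "claims processing")
for d in registry.disclosures_for("patient-17"):
    print(d.recipient, d.purpose, d.timestamp.isoformat())
```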

2) Purpose specification and minimization

Data should never be collected without citizens knowing that it is being collected. Furthermore, citizens should always be aware of why that information is being collected and how it will be used. This will allow citizens to give their informed consent to any act of data collection.

The principle of purpose specification also has an important extension: Data must be used only for the originally stated reason (or, in rare cases, for other purposes with specific legal sanction: see the discussion below regarding “Use limitation”). Currently, a number of privacy violations occur when data are collected for one legitimate purpose (with citizen consent) and then resold and reused in another context, for a very different purpose. For example, clinical data may be collected to treat a patient, but may later find their way into the hands of insurers or credit agencies that could use the information to deny coverage to citizens. A strict minimization requirement can prevent such unauthorized reuses of data.
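The enforcement side of purpose specification can be expressed as a simple invariant: every record carries the purpose stated (and consented to) at collection, and any request for a different purpose is refused. The following sketch is illustrative only and assumes a trivial in-memory store; `PurposeBoundStore` and `PurposeMismatch` are hypothetical names.

```python
class PurposeMismatch(Exception):
    """Raised when data is requested for a purpose other than the one
    stated (and consented to) at collection time."""


class PurposeBoundStore:
    """Each record is stored together with its originally stated purpose;
    reads for any other purpose are rejected outright."""

    def __init__(self):
        self._records: dict[str, tuple[dict, str]] = {}

    def collect(self, record_id: str, data: dict, stated_purpose: str) -> None:
        self._records[record_id] = (data, stated_purpose)

    def read(self, record_id: str, requested_purpose: str) -> dict:
        data, stated = self._records[record_id]
        if requested_purpose != stated:
            raise PurposeMismatch(
                f"collected for {stated!r}, requested for {requested_purpose!r}")
        return data


store = PurposeBoundStore()
store.collect("rec-1", {"hba1c": 6.9}, stated_purpose="clinical treatment")
store.read("rec-1", "clinical treatment")   # permitted
# store.read("rec-1", "marketing")          # would raise PurposeMismatch
```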

3) Collection limitation

The collection of personal information should be done by lawful and fair means and with the knowledge and consent of citizens. There should be well-drafted and explicit permissions to ensure that data collectors state their purpose in ways that are clear and easily understood by the population for whom they are intended, without misleading language.

Collection limitation can be seen as an extension of “purpose specification” (Principle 2, above). However, it goes beyond the requirement that data collectors specify why they are collecting information and suggests a blanket application of Principle 1 (“openness”) to all aspects and forms of data collection. For example, the principle of collection limitation requires that information only be gathered in a legal manner and in a manner that is apparent to citizens and patients. This last requirement is particularly important in a networked environment, because technology is often opaque and unclear to average users. Many users, for example, have little idea of the wealth of information that exists on their computers in the form of cookies. They may similarly not be aware of the potential abuses that occur when they submit personal information to a medical or other web site. Thus, in addition to declaring their purpose clearly (Principle 2), data collectors should also be required to declare the very fact that they are collecting information.

4) Use limitation

As stated above, a minimization requirement would strictly limit whether data collected for one purpose could be reused in another context. Generally, we believe that such reuse should not be permissible without the explicit consent of citizens.

However, certain legal exceptions may apply, particularly in the case of national security or law enforcement. Such cases should be the exception rather than the norm and should be controlled by strict laws and sanctions. In addition, when information is reused, it is far preferable that the information in question be nonidentifiable—that is, it may consist of aggregated or demographic data, but to the greatest extent possible should not include information that could identify an individual. This allows data to be reused without representing a gross violation of an individual's privacy.
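Where reuse is legally sanctioned, the principle asks that data first be rendered nonidentifiable. A minimal illustration of the aggregation approach just described: strip direct identifiers, release only group counts, and suppress groups too small to conceal an individual. The field names and suppression threshold below are arbitrary assumptions for this sketch, not a de-identification standard.

```python
from collections import Counter

# Hypothetical identified records; 'name' and 'ssn' are direct identifiers.
records = [
    {"name": "A. Jones", "ssn": "...", "zip3": "021", "diagnosis": "asthma"},
    {"name": "B. Smith", "ssn": "...", "zip3": "021", "diagnosis": "asthma"},
    {"name": "C. Wu",    "ssn": "...", "zip3": "946", "diagnosis": "diabetes"},
]

MIN_GROUP = 2  # arbitrary suppression threshold for this sketch


def aggregate_for_reuse(rows):
    """Drop identifiers and release only (zip3, diagnosis) counts,
    suppressing any group small enough to point to an individual."""
    counts = Counter((r["zip3"], r["diagnosis"]) for r in rows)
    return {k: n for k, n in counts.items() if n >= MIN_GROUP}


print(aggregate_for_reuse(records))
# {('021', 'asthma'): 2} -- the singleton ('946', 'diabetes') is suppressed
```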

5) Individual participation and control

An important principle of privacy protection is that an individual has a vital stake in, and thus needs to be a participant in, determining how his or her information is used. Privacy protections should be designed with this principle in mind: Citizens should be seen as key participants in processes of information collection and dissemination and not as mere subjects or passive spectators. At all stages in the information chain, they should be able to inspect and query their information, and they should be able to determine who uses that information. In addition, as we shall explore further below, they should have clear avenues to correct information.

Such control can be facilitated through the principles of transparency and the various limitations we have outlined above. In addition, whenever possible, personal information should be collected directly from the individual in question rather than from a third party. This enhances patient control over personal information. Finally, control means that citizens should have meaningful opt-out clauses when they do not want their information to be reused or when they want to “reclaim” their information. Currently, many opt-out procedures administered by web sites and others are hopelessly cumbersome, making it nearly impossible for citizens to exert real control. In addition, opt-out provisions can be diluted when they represent all-or-nothing choices, forcing citizens to choose, for example, between privacy and efficient service.Footnote 33 For such reasons, “opt-in” is often regarded as providing more control to the patient: It allows patients explicitly to determine when, by whom, and for what purpose their information is used. In the event patients do not understand the conditions under which their information is being used, they can choose to request more information or refuse permission.

It is also important to note that greater individual control may complicate existing methods of determining and allocating liability for privacy violations and medical errors. For example, practitioners may be blamed for errors stemming from an individual's refusal to release medical information. Similarly, an individual could accidentally “leak” his or her own data through a “phishing” attack or other online breach. Overall, greater use of electronic medical records (EMRs) and greater patient control will certainly raise new and unforeseen liability issues. To the extent possible, these need to be addressed beforehand, in a systematic manner, as part of any Fair Information Practice Principles.
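The contrast between all-or-nothing opt-out and granular opt-in can also be made concrete. In the sketch below—again purely illustrative, with invented category names—consent is recorded per citizen and per use, defaults to “no,” and is revocable at any time.

```python
class ConsentLedger:
    """Granular opt-in: every (citizen, use) pair defaults to 'no consent'
    until the citizen explicitly grants it, and grants are revocable."""

    def __init__(self):
        self._grants: set[tuple[str, str]] = set()

    def grant(self, citizen_id: str, use: str) -> None:
        self._grants.add((citizen_id, use))

    def revoke(self, citizen_id: str, use: str) -> None:
        self._grants.discard((citizen_id, use))   # citizens "reclaim" their data

    def permits(self, citizen_id: str, use: str) -> bool:
        return (citizen_id, use) in self._grants  # opt-in: absence means "no"


ledger = ConsentLedger()
ledger.grant("patient-17", "share_with_specialist")
assert ledger.permits("patient-17", "share_with_specialist")
assert not ledger.permits("patient-17", "share_with_insurer")  # never granted
```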

6) Data integrity and quality

We have seen that data corruption is a key—and new—source of privacy violation in the information age. It follows, then, that mechanisms need to be developed to address this risk and to establish accountability among those who maintain records. Such mechanisms can include technical tools for quality control as well as regular backups and redundancy in systems and databases. In addition, citizens should have clear avenues to view all information that has been collected on them and to ensure that that information is accurate, complete, and timely. These tools could include laws drafted along the lines of the Fair Credit Reporting Act, which permits citizens to correct mistakes in their credit reports.

Citizens should also be able to ensure that information is being used for the originally stated purpose—that is, they should be able to correct errors in context as well as content. This requires that citizens be able to view not only what information exists on them, but also how it is being used. A discrepancy in either can be viewed as a form of data corruption, requiring clearly articulated and publicized avenues for redress.
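A correction right along the lines of the Fair Credit Reporting Act implies, technically, that records are versioned rather than overwritten, so that both the error and its correction remain inspectable. The following sketch assumes a deliberately trivial versioned store; all names are hypothetical.

```python
from datetime import datetime, timezone


class VersionedRecord:
    """Corrections append a new version instead of overwriting, so a
    citizen (or auditor) can see what was wrong and when it was fixed."""

    def __init__(self, initial: dict):
        self._versions = [(datetime.now(timezone.utc), initial, "initial entry")]

    def correct(self, updated: dict, reason: str) -> None:
        self._versions.append((datetime.now(timezone.utc), updated, reason))

    @property
    def current(self) -> dict:
        return self._versions[-1][1]

    def history(self):
        return list(self._versions)  # full redress trail


rec = VersionedRecord({"blood_type": "AB+"})       # data-entry error
rec.correct({"blood_type": "A+"}, reason="corrected at patient's request")
print(rec.current)           # {'blood_type': 'A+'}
print(len(rec.history()))    # 2 -- both versions retained
```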

7) Security safeguards and controls

Security breaches, discussed above, represent another potential source of privacy violation, and so security safeguards constitute another important principle for privacy protection. Given the increasing frequency of hacking and other forms of cyber-crime, it is imperative that reasonable safeguards be built against loss, unauthorized access, destruction, use, modification, or disclosure of personal information. In addition, all data collectors and disseminators should be mandated to disclose any security breach immediately, through a direct communication to the consumers or citizens affected (i.e., not just by releasing the news to the media). Such laws, similar to California's information security breach law (Civil Code § 1798.29), will allow individuals to protect themselves through after-the-fact remedies.

Security represents an important example of how protections can be built into the design of technology. By implementing the right technologies and by consulting security experts at the outset, key precautions can be taken at the design stage; this will increase the robustness of network security. For example, networks can be designed and built with enhanced identity management tools to ensure that access to information is limited to those with a specific need and authorization to see it. In addition, data scrubbing, hashing techniques, real-time auditing mechanisms, and a range of other technical tools can be deployed to ensure security. The key is to supplement legal protections with technical protections; that is the only way to ensure true data privacy.
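Two of the technical tools just mentioned—identity management that limits access to those with a specific need, and real-time auditing—can be combined in a few lines. This is a deliberately simplified sketch; real identity management is far more involved, and the authorization table and names here are invented for illustration.

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical authorization table: which role may see which record class.
AUTHORIZED = {("treating_physician", "clinical_record")}

audit_log = []  # in practice: append-only, tamper-evident storage


def pseudonym(patient_id: str) -> str:
    """Hash the identifier so logs themselves don't leak identities.
    (A real system would use a keyed hash, e.g., HMAC, not bare SHA-256.)"""
    return hashlib.sha256(patient_id.encode()).hexdigest()[:12]


def access(role: str, record_class: str, patient_id: str) -> bool:
    """Grant access only on specific need; log every attempt either way."""
    allowed = (role, record_class) in AUTHORIZED
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "who": role,
        "what": record_class,
        "patient": pseudonym(patient_id),
        "allowed": allowed,
    })
    return allowed


access("treating_physician", "clinical_record", "patient-17")  # True, logged
access("billing_clerk", "clinical_record", "patient-17")       # False, logged
```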

8) Accountability and oversight

It is also essential that mechanisms be built to ensure that responsibility for privacy violations can be identified and that remedial action can be taken. Boards of directors and senior management must be held accountable for any violations; it is their responsibility to ensure that steps are taken to initiate, review, or modify their organization's risk management strategy as it relates to the handling of patients’ information.

Several specific steps can be taken to enhance accountability and oversight. Organizations could be mandated to create the post of Chief Privacy Officer (CPO), who would fulfill the same duties with regard to privacy as CFOs and CTOs do with regard to finance and technology, respectively. In addition, organizations should hold regular employee training programs as well as privacy audits to monitor organizational compliance. As described above, these audits can be facilitated by technical tools that ensure clear audit trails and reveal patterns of use (and potential abuse).
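Audit trails support oversight only if someone examines them. A privacy audit can be as simple as flagging users whose access pattern is anomalous—for example, those who touched more distinct patients than any legitimate need would explain. The threshold below is an invented placeholder for this sketch, not a recommendation, and the log format simply mirrors the access-log sketch under Principle 7.

```python
# Entries as produced by an access log like the sketch under Principle 7.
log = [
    {"who": "dr_a", "patient": "p1"}, {"who": "dr_a", "patient": "p2"},
    {"who": "clerk_b", "patient": "p1"}, {"who": "clerk_b", "patient": "p2"},
    {"who": "clerk_b", "patient": "p3"}, {"who": "clerk_b", "patient": "p4"},
]

DISTINCT_PATIENT_LIMIT = 3  # arbitrary audit threshold for this sketch


def flag_unusual_access(entries):
    """Flag users who touched more distinct patients than expected --
    a crude proxy for browsing beyond any legitimate need."""
    seen = {}
    for e in entries:
        seen.setdefault(e["who"], set()).add(e["patient"])
    return [who for who, pts in seen.items()
            if len(pts) > DISTINCT_PATIENT_LIMIT]


print(flag_unusual_access(log))  # ['clerk_b']
```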

9) Remedies

This principle is closely related to Principle 8, except that it probably entails greater participation by the state (in the form of legal sanctions). One of the key challenges in enforcing privacy rights is the difficulty (often impossibility) of clearly pinning blame or even of tracing the source of a privacy violation. Solove and HoofnagleFootnote 34 point out that approximately 50% of identity theft victims do not even know how their information was accessed. Similarly, it is likely to be extremely difficult for patients to monitor and identify violations involving the information contained in their EMRs. Without such information, it obviously becomes very difficult to seek remedies.

Some of the strategies described above (e.g., audit trails) can help pin blame more accurately. In addition, internal controls such as those described in Principle 8 are important for monitoring uses (and abuses) of information. Although such measures are not foolproof, they do help establish a data trail.

When it is possible to identify the source (or perpetrator) of a privacy violation, the next step is to ensure that clear legal remedies exist to address the situation. Minimum statutory punishments must be clearly articulated, as must damages for any violations.Footnote 35 Solove and Hoofnagle have also suggested that ways must be developed to avoid extensive class action litigation (e.g., by allowing state authorities to fine companies and disburse remedies to victims of privacy violations from a state-administered fund). Whatever the specific steps adopted, the important point is that enforcing sanctions and remedies is as important as establishing the protections themselves.

Conclusion

The preceding discussion has made clear the complexity of the topic at hand. Protecting medical privacy and confidentiality in a networked era involves a wide range of issues and requires the cooperation and involvement of a similarly wide range of actors. Practitioners and patients are, of course, critical to the effective deployment of EMRs or, indeed, any other successful use of technology in healthcare. But the involvement of public health authorities, insurance companies, data marketers, civil society organizations, and a variety of other entities is also essential. In addition, governments and other actors at different jurisdictional levels—municipal, county, state, national, and international—will also have to be involved.

Each of these actors brings different perspectives to the table. These differences can be productive, representing a wealth of knowledge and experience. But they can also be problematic. The variety of experiences is accompanied by a variety of agendas, and—put more charitably—a variety of priorities. Harmonizing and doing justice to all these priorities is one of the key tasks confronting advocates of medical privacy.

Success at this task, essentially a balancing act, will require more than the somewhat piecemeal approach to privacy that currently exists. This underscores the need for a systematic and architectural solution. The foundations of this solution are the nine principles described above. Considered and applied together, these principles add up to an integrated and comprehensive approach to privacy that can help overcome the current fragmentation. It is critical that the nine principles be considered as part of one package—elevating certain principles over others will simply weaken the overall architectural solution this paper has proposed.

Of course, the principles remain just that—principles—and their precise manifestation will vary from state to state and from country to country. Yet, although they are broad enough to apply across organizations, stakeholders, and jurisdictions, they are also specific and tangible enough to have real significance and practical effect. The key is to apply them in a thorough and comprehensive manner before creating any new information network—not as an afterthought, and not as an after-the-fact band-aid solution.

References

1 United Nations. Universal Declaration of Human Rights, Article 12; available at http://www.nps.gov/elro/teacher-vk/documents/udhr.htm.

2 Naser C, Alpert S. Protecting the privacy of medical records: An ethical analysis (White Paper). Lexington, MA: National Coalition for Patient Rights; 1999.

3 The EU Directive mentioned above similarly treats medical violations of privacy as particularly egregious cases.

4 See note 2, Naser, Alpert 1999.

5 Alpert SA. Protecting medical privacy: Challenges in the age of genetic information. Journal of Social Issues 2003;59(2):301–22; Goffman E. Behavior in Public Places: Notes on the Social Organization of Gatherings. New York: Free Press; 1966; Westin A. Privacy and Freedom. New York: Atheneum; 1967.

6 Goldman J. Protecting privacy to improve health care. Health Affairs 1998;17:47–60.

7 See note 6, Goldman 1998.

8 Goldman J, Hudson Z. Virtually exposed: Privacy and e-health. Health Affairs 2000;19:140–8.

9 These and more survey results can be found at the Electronic Privacy Information Center (EPIC), 27 April 2007; available at http://www.epic.org/privacy/survey/ (accessed 27 May 2008).

10 See note 5, Alpert 2003.

11 See note 6, Goldman 1998.

12 Bennett CJ. Regulating Privacy: Data Protection and Public Policy in Europe and the United States. Ithaca, NY: Cornell University Press; 1992.

13 Goldman J. Privacy and individual empowerment in the interactive age. In: Bennett C, Grant R, eds. Visions of Privacy: Policy Choices for a Digital Age. Toronto: University of Toronto Press; 1999.

14 Brandeis LD, Warren SD. The right to privacy. Harvard Law Review 1890;4:193–7.

15 See note 14, Brandeis, Warren 1890.

16 See note 5, Westin 1967.

17 The U.S. National Information Infrastructure Task Force defines the term as follows: “Information privacy is an individual's claim to control the terms under which personal information—information identifiable to an individual—is acquired, disclosed, and used.” Available at http://www.iitf.nist.gov/ipc/ipc-pubs/niiprivprin_final.html.

18 Miller A. The Assault on Privacy: Computers, Data Banks, and Dossiers. Ann Arbor: University of Michigan Press; 1971.

19 See note 5, Westin 1967.

20 See note 5, Alpert 2003.

21 See note 5, Alpert 2003.

22 See note 5, Alpert 2003.

23 Of course, the illicit use of data is not particular to the networked environment. What has changed, however, is the scope of potential violations: As the network expands and as the amount of data increases, so does the possibility of confidentiality violations. In addition, a networked environment facilitates the illicit acquisition (e.g., through theft) and dissemination of data. This is in large part due to the digitization of information, which makes data easier to store and to steal without the original owner even noticing.

24 Health Privacy Working Group. Best Principles for Health Policy; 1999; available at http://www.healthprivacy.org/usr_doc/33807.pdf.

25 The Register; available at http://www.theregister.co.uk/2005/04/21/icam_surveillance_report/ (accessed 27 May 2008).

26 United States Senate Committee on the Judiciary, 13 Apr 2005; available at http://judiciary.senate.gov/testimony.cfm?id=1437&wit_id=4161 (accessed 27 May 2008).

27 United States Senate Committee on the Judiciary, 13 Apr 2005; available at http://judiciary.senate.gov/testimony.cfm?id=1437&wit_id=4162 (accessed 27 May 2008).

28 For a listing of recent security breaches and data violations, see Privacy Rights Clearinghouse, 20 April 2005; available at http://www.privacyrights.org/ar/ChronDataBreaches.htm (accessed 27 May 2008).

29 See note 24, Health Privacy Working Group 1999.

30 In particular, we have reviewed laws in three jurisdictions: the United States, including the 1973 Fair Information Practices and the 1974 Privacy Act; the OECD, including the 1980 Guidelines on the Protection of Privacy and Transborder Flows of Personal Data; and Canada, including the 1995 Canadian Standards Association Model Code for the Protection of Personal Information. More information about these and other existing Fair Information Practices can be found at the web site of the Privacy Rights Clearinghouse, a non-profit consumer group located in California; updated Feb 2004; available at http://www.privacyrights.org/ar/fairinfo.htm (accessed 27 May 2008).

31 Any provisions for informed consent need to be drafted in such a way that ensures the sharing of information is not unduly cumbersome on data users. It is probably unrealistic to assume that patients can or should give their assent to each and every use of their medical data.

32 Valid concerns have been raised, however, that such a centralization may create additional security vulnerabilities.

33 It is important to recognize that the flexibility of opt-out provisions is sometimes limited by what is technologically feasible. It goes without saying that any steps or provisions taken to protect confidentiality need to take account of what is possible with existing technology. At the same time, however, technical limitations should never be used to justify breaches of confidentiality or privacy.

34 Solove D, Hoofnagle C. A model regime of privacy protection. Public Law Research Paper No. 132. Washington, DC: George Washington University Law School; 2005; available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=681902.

35 It is also worth noting that some observers have suggested that penalties for abuses should be strengthened in order to act as a deterrent against future abuses.