There is no doubt that AI systems, and the large-scale processing of personal data that often accompanies their development and use, have put a strain on individuals’ fundamental rights and freedoms. Against that background, this chapter aims to walk the reader through a selection of key concerns arising from the application of the GDPR to the training and use of such systems. First, it clarifies the position and role of the GDPR within the broader European data protection regulatory framework. Next, it delineates its scope of application by delving into the pivotal notions of “personal data,” “controller,” and “processor.” Lastly, it highlights some friction points between the characteristics inherent to most AI systems and the general principles outlined in Article 5 GDPR, including lawfulness, transparency, purpose limitation, data minimization, and accountability.
This essay examines how nineteenth-century American literature paved the way for the modern exposure of private life in such disparate venues as the gossip column, social media, and reality television. In particular, this essay examines the sketch form, a popular nineteenth-century prose genre that has often been characterized as a minor form in comparison to the novel. In examining the history of the sketch form, this essay shows how the sketch conveyed reservations about the interiority and exposures central to the novel form. As practiced by Washington Irving, the earliest popularizer of this genre, the sketch advocated respectful discretion, the avoidance of private matters, and social stasis, the latter of which positioned the sketch in opposition to the social mobility characteristic of the novel. Irving presented the sketch as the genre of literary discretion, but a later practitioner, Nathaniel Parker Willis, used the sketch to divulge confidences and violate social decorum. Willis adapted the sketch to become a precursor of the gossip column and to mirror the novel form in exposing private life.
Chapter 4 reviews the ISO 18000-63 protocol, covering data encoding and modulation as well as aspects of transponder memory structure, security, and privacy, and presents real examples of reader–transponder transactions.
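To make the data-encoding topic concrete, here is a minimal sketch of FM0 baseband encoding, which ISO 18000-63 specifies for the tag-to-reader link. The function name and the half-bit-level representation are illustrative assumptions, not details taken from the chapter.

```python
def fm0_encode(bits, start_level=1):
    """Minimal sketch of FM0 baseband encoding (ISO 18000-63 tag-to-reader link).

    FM0 inverts the baseband level at every bit boundary; a data-0 also
    inverts the level in the middle of the bit, while a data-1 holds its
    level for the full bit period. Each bit is emitted as two half-bit
    levels (0 or 1).
    """
    level = start_level
    half_bits = []
    for bit in bits:
        level ^= 1               # inversion at every bit boundary
        half_bits.append(level)  # first half of the bit period
        if bit == 0:
            level ^= 1           # extra mid-bit inversion encodes a 0
        half_bits.append(level)  # second half of the bit period
    return half_bits

# Example: encode the bit sequence 1 0 1 1 0
print(fm0_encode([1, 0, 1, 1, 0]))
```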
Competition law is experiencing a transformation from a niche economic tool into a Swiss Army knife of broader industrial and social policy. Relatedly, there is a narrative that sees an expansive role for competition law in broad areas such as sustainability, privacy, and workers and labour rights, and a counternarrative that wants to deny it that role. There is rich scholarship in this area, but little empirical backing. In this article, we present the results of comprehensive empirical research into whether new goals and objectives such as sustainability, privacy, and workers and labour rights are indeed endorsed in EU competition law and practice. We do so through an investigation of the totality of Court of Justice rulings, Commission decisions, Advocate General opinions, and public statements of the Commission. Our findings inject data into the debate and help dispel misconceptions that may arise from focusing too narrowly on cherry-picked high-profile decisions while overlooking the rest of the EU’s institutional practice.
We find that sustainability is partially recognised as a goal, whereas privacy and labour rights are not. We also show that all three goals are more recent than the classic goals, that EU institutions have not engaged much with the areas of sustainability, privacy, and workers and labour rights, and that the Commission’s rhetoric is seemingly out of step with its decisional practice. We also identify trends that may portend change, and we contextualize our analysis through the lens of the history and nature of the EU’s integration and economic constitution.
This article explores Vietnam’s distinctive approach to data privacy regulation and its implications for established understandings of privacy law. While global data privacy regulations are premised on individual freedom and the integrity of information flows, the recent Vietnamese Decree 13/2023/NĐ-CP on Personal Data Protection (hereinafter the PDPD) prioritises state oversight and centralised control over information flows to safeguard collective interests and cyberspace security. This new regulatory logic places data privacy under the regulation of government agencies and moves the privacy law arena even further away from the already distant judicial power. This prompts an exploration of the nuances underlying the ways regulators and regulated communities understand data privacy regulation. The article draws on social constructionist accounts of regulation and discourse analysis to explore the epistemic interaction between regulators and those subject to regulation during the PDPD’s drafting period. The process is shaped by the dynamics among actors within a complex semantic network established by the state’s policy initiatives, where tacit assumptions and normative beliefs direct the way actors in various communities favour one type of thinking about data privacy regulation over another. The findings suggest that reforms to privacy laws may not result in “more privacy” for individuals and that divergences in global privacy regulation may not be easily explained by drawing merely on cultural and institutional variances.
Algorithmic human resource management (AHRM), the automation or augmentation of human resources-related decision-making with the use of artificial intelligence (AI)-enabled algorithms, can increase recruitment efficiency but also lead to discriminatory results and systematic disadvantages for marginalized groups in society. In this paper, we address the issue of equal treatment of workers and their fundamental rights when dealing with these AI recruitment systems. We analyse how and to what extent algorithmic biases can manifest and investigate how they affect workers’ fundamental rights, specifically (1) the right to equality, equity, and non-discrimination; (2) the right to privacy; and, finally, (3) the right to work. We recommend crucial ethical safeguards to support these fundamental rights and advance forms of responsible AI governance in HR-related decisions and activities.
Generative artificial intelligence (AI) has catapulted into the legal debate through popular applications such as ChatGPT, Bard, Dall-E and others. While the predominant focus has hitherto centred on issues of copyright infringement and regulatory strategies, particularly in the context of the AI Act, a critical but often overlooked issue lies in the friction between generative AI and data protection laws. The rise of these technologies highlights an unresolved tension between safeguarding fundamental data protection rights and the vast, almost universal, scale of data processing required for machine learning. Large language models, which are trained on scrapes of nearly the whole Internet, rely on and may even generate personal data falling under the GDPR. This tension manifests across multiple dimensions, encompassing data subjects’ rights, the foundational principles of data protection and the fundamental categories of data protection. Drawing on ongoing investigations by data protection authorities in Europe, this paper undertakes a comprehensive analysis of the intricate interplay between generative AI and data protection within the European legal framework.
Adolescents’ ability to access health care depends on sharing accurate information about concerns, needs, and conditions. Parents and other adults serve as both resources and gatekeepers in adolescents’ ability to access and manage care. Understanding information sharing between adolescents and parents, adolescents and providers, and parents and providers is thus critical. This chapter distinguishes between adolescents’ routine and self-disclosure of information. The former refers to sharing information required for the partner to perform their role. The latter refers to voluntarily sharing more information than required. Because the roles of parent and provider are distinct relative to the adolescent, disclosure decisions can conflict. These differences are discussed in the context of communication privacy management theory and the literature on legitimacy of authority. A framework for understanding information sharing processes is developed that considers stage of care, type of care, stigma/privacy associated with the condition, and the age of the adolescent.
Design is integral to every part of our justice system: from the built spaces like courtrooms and the clerk’s office counter, to the paper and digital forms litigants submit, to the rules of procedure themselves. This chapter argues that good architectural design can make our justice system more just by enhancing participants’ sense of fairness and dignity, and more efficient by contributing to mindsets from which it is easier for parties to resolve conflict. We begin by discussing historical inspirations and manifestations of courtroom and courthouse design. We then look at some of the key stakeholders who make design decisions before highlighting a few modern efforts to use design to bring dignity to patrons of courthouses. We illustrate our position by referencing a 2018 collaboration among a graduate architecture studio course at Wentworth Institute of Technology, Northeastern University School of Law’s NuLawLab, and Massachusetts Housing Court in which architecture and law students and faculty tackled spatial interventions in Boston Housing Court.
In our digitalized modern society where cyber-physical systems and internet-of-things (IoT) devices are increasingly commonplace, it is paramount that we are able to assure the cybersecurity of the systems that we rely on. As a fundamental policy, we join the advocates of multilayered cybersecurity measures, where resilience is built into IoT systems by relying on multiple defensive techniques. While existing legislation such as the General Data Protection Regulation (GDPR) also takes this stance, the technical implementation of these measures is left open. This invites research into the landscape of multilayered defensive measures, and within this problem space, we focus on two defensive measures: obfuscation and diversification. In this study, through a literature review, we situate these measures within the broader IoT cybersecurity landscape and show how they operate with other security measures built on the network and within IoT devices themselves. Our findings highlight that obfuscation and diversification show promise in contributing to a cost-effective robust cybersecurity ecosystem in today’s diverse cyber threat landscape.
Much of the internet of today is dominated by big tech companies such as Google, Facebook, and Amazon, which use it to amass profits. The chapter looks at three ways in which their pursuit of profit arbitrarily interferes with people’s access to and use of the internet. (1) Big Tech corporations operate a version of the internet that forces users to yield personal data in exchange for ‘free’ services. The chapter explains why this routine harvesting of personal data is morally problematic, as it forces internet users to choose between two elements of minimally decent lives: their privacy and their access to the internet. (2) Social media platforms are among the dominant online services today. They have enhanced opportunities for exercising human rights, but their business practices also limit people in the enjoyment of these rights. The chapter suggests several ways of improving the situation. (3) Some businesses lobby for ending net neutrality. The chapter explains why net neutrality is crucial for keeping internet access free from arbitrary interferences, and argues for a version of net neutrality that allows some unequal treatment of data that does not diminish human rights.
Non-fungible tokens (NFTs) introduce unique concerns related to the privacy of personal data. To create an NFT, users upload data to publicly accessible and searchable databases. This data can encompass information essential for the creation, transfer, and storage of the NFT, as well as personal details pertaining to the creator. Additionally, users might inadvertently engage with technology crafted to gather personal data. Traditional paradigms of privacy have not evolved in tandem with advancements in NFT and blockchain technology. To pinpoint where current privacy paradigms falter, this chapter opens with an introduction to NFTs, elucidating their foundational technical mechanisms and processes. Subsequently, the chapter juxtaposes current and historical privacy frameworks with NFTs, underscoring how these models may be either overly expansive or excessively restrictive for this emerging technology. This chapter suggests that Helen Nissenbaum’s concept of “contextual integrity” might offer the requisite flexibility to cater to the distinct attributes of NFTs. In conclusion, while there is a pronounced societal drive to safeguard citizen data and privacy, the overarching aim remains the enhancement of the collective good. In balancing these objectives, governments should be afforded the latitude to weigh society’s privacy interests against its imperative for transparency.
The world is witnessing an increase in cross-border data transfers and breaches orchestrated by State and non-State actors. Cross-border data transfers may lead to friction among States over whether to localize or globalize data and over how to provide regulatory frameworks. “Data warfare” or information-war operations are often not covered under conventional rules; instead, they are categorized as acts of espionage and subject to domestic regulations. Such operations are used to achieve a variety of objectives, including stealing sensitive information, spreading propaganda, and causing economic damage. Notable instances of the theft of sensitive information include the recent Bangladesh government website breach, which exposed 50 million records, and the Unique Identification Authority of India (UIDAI) website hack.
Regulating the “data war” under the existing principles of international law may be unsuccessful in creating robust international legal frameworks to address the associated challenges. These developments further accentuate the global divide between data-rich regions in the Global North, with strong data protection mechanisms (such as the GDPR and the California Privacy Rights Act), and regions in the Global South, where comprehensive data protection laws and regulatory regimes are lacking. This disparity underscores the urgent need for global cooperation to build substantive international regulatory mechanisms.
This article examines the complexities surrounding data warfare, delving into the concept of a “data war” and highlighting the pressing need to establish a robust global legal framework for data protection. It also acknowledges the growing influence of advanced technologies such as data computing and mining, and the ongoing threats they pose to the fundamental rights of individuals whose personal data is exposed. The authors address the deficiencies in international legal provisions and advocate for a global regulatory approach to data protection as a critical means of safeguarding personal freedoms and countering the escalating threats in the digital age.
This article examines what the state of the law regarding the tortious protection of the privacy of corporations tells us about the concept of a legal person. Given that non-human persons are capable of having an interest in at least their informational privacy, logic would seem to dictate that they should be recognised as having such a right protecting their personality. In reality, the law is most hesitant to concede the right to privacy to non-natural persons (the same being true of reputation). This suggests that, for the dominant strand of the law at least, despite the rhetoric, legal persons do not really have rights of personality; in other words, that they are not really persons.
Digital traces that people leave behind can be useful evidence in criminal courts. However, in many jurisdictions, the legal provisions setting the rules for the use of evidence in criminal courts were formulated long before these digital technologies existed, and there seems to be an increasing discrepancy between legal frameworks and actual practices. This chapter investigates this disconnect by analyzing the relevant legal frameworks in the EU for processing data in criminal courts, and comparing and contrasting these with actual court practices. The relevant legal frameworks are criminal and data protection law. Data protection law is mostly harmonized throughout the EU, but since criminal law is mostly national law, this chapter focuses on criminal law in the Netherlands. We conclude that existing legal frameworks do not appear to obstruct the collection of data for evidence, but that regulation on collection in criminal law and regulation on processing and analysis in data protection law are not integrated. We also characterize as remarkable the almost complete absence of regulation of automated data analysis – in contrast with the many rules for data collection.
Dean John Wade, who replaced the great torts scholar William Prosser on the Restatement (Second) of Torts, put the finishing touches on the defamation sections in 1977.1 Apple Computer had been founded a year before, and Microsoft two, but relatively few people owned computers yet. The twenty-four-hour news cycle was not yet a thing, and most Americans still trusted the press.2
The laws of defamation and privacy are at once similar and dissimilar. Falsity is the hallmark of defamation – the sharing of untrue information that tends to harm the subject’s standing in their community. Truth is the hallmark of privacy – the disclosure of facts about an individual who would prefer those facts to be private. Publication of true information cannot be defamatory; spreading of false information cannot violate an individual’s privacy. Scholars of either field could surely add epicycles to that characterization – but it does useful work as a starting point of comparison.
Coordinated campaigns of falsehoods are poisoning public discourse.1 Amidst a torrent of social-media conspiracy theories and lies – on topics as central to the nation’s wellbeing as elections and public health – scholars and jurists are turning their attention to the causes of this disinformation crisis and the potential solutions to it.
It’s accually obsene what you can find out about a person on the internet.1
To some, this typo-ridden remark might sound banal. We know that our data drifts around online, with digital flotsam and jetsam washing up sporadically on different websites across the internet. Surveillance has been so normalized that, these days, many people aren’t distressed when their information appears in a Google search, even if they sometimes fret about their privacy in other settings.