Technology and Regulation <p><strong>Technology and Regulation</strong> (TechReg) is a new interdisciplinary journal of law, technology and society. TechReg provides an <strong>open-access</strong> platform for disseminating original research on the <strong>legal and regulatory challenges</strong> posed by <strong>existing and emerging technologies</strong>.</p> <p>The Editor-in-Chief is Professor Ronald Leenes of the Tilburg Law School. Our <a href=""><strong>Editorial Board Committee</strong></a> comprises a distinguished panel of international experts in law, regulation, technology and society across different disciplines and domains.</p> <p>TechReg aspires to become the leading outlet for scholarly research on technology and regulation topics, and has been conceived to be as accessible as possible for both authors and readers.</p> Keeping up with cryptocurrencies <section class="item abstract"> <p>Invented in 2008 with Bitcoin, cryptocurrencies represent a radical technological innovation in finance and banking; one which threatened to disrupt the existing regulatory regimes governing those sectors. This article examines, from a reputation management perspective, how regulatory agencies framed their response. Through a content analysis, we compare communications from financial conduct regulators in the UK, US, and Australia. Despite the risks, challenges, and uncertainties involved in cryptocurrency supervision, we find that regulators treat the technology as an opportunity to bolster their reputation in the immediate wake of the Global Financial Crisis. Regulators frame their response to cryptocurrencies in ways which reinforce the agency’s ingenuity and societal importance.
We discuss how framing differs between agencies, illustrating how historical, political, and legal differences between regulators can shape their responses to radical innovations.</p> </section> Lauren Fahy, Scott Douglas, Judith van Erp Copyright (c) 2021 Lauren Fahy, Scott Douglas, Judith van Erp Wed, 31 Mar 2021 00:00:00 +0200 Not Hardcoding but Softcoding Data Protection <div class="main_entry"> <section class="item abstract"> <p>The delegation of decisions to machines has revived the debate on whether and how technology should and can embed fundamental legal values within its design. While these debates have predominantly been occurring within the philosophical and legal communities, the computer science community has been eager to provide tools to overcome some of the challenges that arise from ‘hardwiring’ law into code. What has emerged are different approaches to code that adapts to legal parameters. Within this article, we discuss the translational, system-related, and moral issues raised by implementing legal principles in software. While our findings focus on data protection law, they apply to the interlinking of code and law across legal domains.
These issues point towards the need to rethink our current approach to design-oriented regulation and to prefer ‘soft’ implementations, where decision parameters are decoupled from program code and can be inspected and modified by users, over ‘hard’ approaches, where decisions are taken by opaque pieces of program code.</p> </section> </div> Aurelia Tamò-Larrieux, Simon Mayer, Zaïra Zihlmann Copyright (c) 2021 Aurelia Tamò-Larrieux, Simon Mayer, Zaïra Zihlmann Thu, 06 May 2021 00:00:00 +0200 On the legal responsibility of artificially intelligent agents <p>This paper tackles three misconceptions regarding discussions of the legal responsibility of artificially intelligent entities: these are that they</p> <p>(a) <em>cannot</em> be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act.</p> <p>(b) <em>should not</em> be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act.</p> <p>(c) <em>should not</em> be held legally responsible for their actions, because to do so would allow other (human or corporate) agents to ‘hide’ behind the AI and escape responsibility that way, while they are the ones who should be held responsible.</p> <p>(a) is a misconception not only because (positive) law is a social construct, but also because there is no such thing as ‘real’ agency. The latter is also the reason why (b) is misconceived. The arguments against misconceptions (a) and (b) imply that legal responsibility can be constructed in different ways, including those that hold <em>both</em> artificially intelligent and other (human or corporate) agents responsible (misconception (c)). Accordingly, this paper concludes that there is more flexibility in the construction of responsibility of artificially intelligent entities than is at times assumed.
This offers more freedom to law- and policymakers, but also requires openness, creativity, and a clear normative vision of the aims they want to achieve.</p> Antonia Waltermann Copyright (c) 2021 Antonia Waltermann Mon, 12 Jul 2021 00:00:00 +0200 Reviving Purpose Limitation and Data Minimisation in Personalisation, Profiling and Decision-Making Systems <section class="item abstract"> <p>This paper determines whether the two core data protection principles of data minimisation and purpose limitation can be meaningfully implemented in data-driven systems. While contemporary data processing practices appear to stand at odds with these principles, we demonstrate that systems could technically use much less data than they currently do. This observation is a starting point for our detailed techno-legal analysis uncovering obstacles that stand in the way of meaningful implementation and compliance as well as exemplifying unexpected trade-offs which emerge where data protection law is applied in practice. Our analysis seeks to inform debates about the impact of data protection on the development of artificial intelligence in the European Union, offering practical action points for data controllers, regulators, and researchers.</p> </section> Michele Finck, Asia J. Biega Copyright (c) 2021 Michele Finck, Asia J. Biega Wed, 18 Aug 2021 00:00:00 +0200 The right of access to personal data: A genealogy <p>In this paper, I analyze several traditions of data protection to uncover the theoretical justification they provide for the right of access to personal data. Contrary to what is argued in most recent literature, I do not find support for the claim that the right follows from the German tradition of “informational self-determination” or Westin’s idea of “privacy as control”.
Instead, there are two other, lesser-known theories of data protection which do offer a direct justification for the right of access. First, American scholars Westin and Baker developed the “due process” view, according to which access helps to expose error and bias in decision-making, thereby contributing to correct decisions and allowing the people affected to be involved in the decisions that concern them. Second, in what I call the “power reversal” view of access, Italian legal scholar Rodotà argues that, in particular when seen from a collective point of view, the right enables social control over the processing of personal data and serves as a counterbalance to centers of power by subjecting them to democratic accountability.</p> René Mahieu Copyright (c) 2021 René Mahieu Fri, 20 Aug 2021 00:00:00 +0200