This seminar builds on our 2019 CHI paper "'I make up a silly name': Understanding Children's Perception of Privacy Risks Online": https://arxiv.org/pdf/1901.10245.pdf. The paper discussed how children (aged 6-10) could identify and articulate certain privacy risks well, such as information oversharing or revealing real identities online; however, they were less aware of other risks, such as online tracking or game promotions. Our findings offer promising directions for supporting children's awareness of cyber risks and their ability to protect themselves online. The talk will also discuss how our research has progressed since 2019 and how it relates to the latest developments in children's data protection regulation in the UK.
Dr Jun Zhao is a Senior Researcher in the Department of Computer Science at Oxford University. Her research focuses on investigating the impact of algorithm-based decision-making on our everyday lives, especially for families and young children. For this, she takes a human-centric approach, focusing on understanding real users' needs in order to design technologies that can make a real impact. Currently, she is leading the KOALA project and the ReEnTrust project. She works closely with schools, children, families, and technologists for children to understand the technological, societal and regulatory challenges we are facing, and to inform national and international policymakers, technology designers and families. She is also part of the 100 Brilliant Women in AI and Ethics global initiative, which promotes diversity and equality in this critical research area.
Note the changed time.
The most important computational problem on lattices is the Shortest Vector Problem (SVP). In this talk, we present new algorithms that improve the state of the art for provable classical and quantum algorithms for SVP. We present the following results. ∙ A new algorithm for SVP that provides a smooth tradeoff between time complexity and memory requirement. This tradeoff, which ranges roughly from enumeration to sieving, is a consequence of a new time-memory tradeoff for Discrete Gaussian sampling above the smoothing parameter. ∙ A quantum algorithm that runs in time 2^{0.9535n+o(n)} and requires 2^{0.5n+o(n)} classical memory and poly(n) qubits. This improves over the previously fastest classical (which is also the fastest quantum) algorithm due to [ADRSD15], which has time and space complexity 2^{n+o(n)}. ∙ A classical algorithm for SVP that runs in time 2^{1.741n+o(n)} and space 2^{0.5n+o(n)}. This improves over an algorithm of [CCL18] that has the same space complexity. The time complexities of our classical and quantum algorithms are obtained using a known upper bound on a quantity related to the kissing number of a lattice, which is 2^{0.402n}. In practice this quantity is much smaller and is often 2^{o(n)} for most lattices. In that case, our classical algorithm runs in time 2^{1.292n} and our quantum algorithm runs in time 2^{0.750n}.
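For readers unfamiliar with the problem, SVP asks for a shortest nonzero vector in the lattice of all integer combinations of given basis vectors. The following is a naive enumeration sketch for intuition only; the `shortest_vector` helper and its parameters are illustrative inventions, and the talk's algorithms are far more sophisticated and scale to high dimension:

```python
from itertools import product

def shortest_vector(basis, bound=5):
    """Naive SVP solver: enumerate all small integer coefficient
    vectors and keep the shortest nonzero lattice vector found.
    Illustrative only; runtime is (2*bound + 1)^n."""
    n = len(basis)
    dim = len(basis[0])
    best, best_norm2 = None, float("inf")
    for coeffs in product(range(-bound, bound + 1), repeat=n):
        if not any(coeffs):
            continue  # SVP asks for a shortest *nonzero* vector
        v = tuple(sum(c * b[i] for c, b in zip(coeffs, basis))
                  for i in range(dim))
        norm2 = sum(x * x for x in v)
        if norm2 < best_norm2:
            best, best_norm2 = v, norm2
    return best, best_norm2

# The basis (3, 1), (2, 1) has determinant 1, so it generates all of
# Z^2, and the shortest nonzero vectors have squared norm 1.
v, norm2 = shortest_vector([(3, 1), (2, 1)])
print(v, norm2)
```

The exponential coefficient enumeration is exactly why the tradeoffs between enumeration-like and sieving-like algorithms in the talk matter.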
Yixin Shen is a postdoctoral research assistant in the Information Security Group under the supervision of Martin R. Albrecht. She received her PhD from Université de Paris under the supervision of Frédéric Magniez. Her research focuses on classical and quantum algorithms for lattice-based cryptanalysis.
Note the changed time.
Transgender people are marginalized, facing specific privacy concerns and a high risk of online and offline harassment, discrimination, and violence. They are also known to use technology more than other groups for critical purposes such as finding accepting friends and community (which may be absent in their real lives) and gathering information on topics such as health and sexuality. In this talk, I'll discuss our recent research studying American transgender people's computer security and privacy experiences. While our questions were broadly construed, participants frequently returned to themes of activism and prosocial behavior, such as protest organization, political speech, and role-modeling transgender identities, so we focused our analysis on these themes. I'll discuss models of risk that participants described as influencing many of their security and privacy decisions, and the ways in which these risk perceptions may heavily influence transgender people's defensive behaviors and self-efficacy, jeopardizing their ability to defend themselves or gain technology's benefits. I'll then discuss NSF-funded follow-on research currently underway that aims to quantify the prevalence of these trends at a population level, discover which of them also apply to other marginalized groups, and design new technology that can support the needs of transgender people and other groups in light of these findings.
Note the changed time.
Ada Lerner (pronouns: she/her or they/them) is a computer scientist who focuses their research on the area of Inclusive Security and Privacy, which they define as the study of the security and privacy needs of groups which are marginalized (such as queer and trans folks) or which are critical to the functioning of our free society (such as lawyers, journalists, and activists). Their work incorporates web and network measurements, qualitative methods, and quantitative methods with interdisciplinary perspectives from psychology, feminist and queer theory, and the law. They live in Boston and have a dog named Matrix, who knows the command "transpose".
The talk proposes to conceive of cybersecurity through the lens of care, a notion taken from feminist Science and Technology Studies. Caring for cybersecurity emphasizes the invisible, morally charged, and experimental practices of doing cybersecurity. Cybersecurity as a "super-wicked problem" does not reside in isolated factors or normative frameworks, but requires tacit work and uneasy decisions. It calls for an analysis of the concrete practices of cybersecurity rather than evaluation or judgment of them. I propose that cybersecurity research deals with practices of care and long-term commitment rather than fixing and moving on, and that this must be better reflected in research and policy.
The talk mobilizes findings from a 13-month ethnographic study in two German critical infrastructure companies. Both companies are in the midst of creating new data infrastructures accommodating data science and big data. Cybersecurity was unsettled in the process and became a core concern - which is rarely the case. This aided me as an ethnographer in observing cybersecurity in "action", but more than that, controversies drew me in and called for anthropology-informed ways of dealing with conflicts between developers and security officers, security officers and management, or developers and other developers. I argue that the notion of care was helpful in these conflicts as well, as it emphasized mutual understanding and compromise rather than following the rule book.
I take inspiration from feminist and post-Actor Network Theory approaches that emphasize fluidity and multiplicity rather than compliance and conformity. I find these approaches intriguing for cybersecurity research because they offer more nuanced understandings of conflicting accountabilities, tensions and non-normativity as much as commitment and care.
Laura Kocksch studied cultural anthropology, sociology, and political science. During her undergraduate years she developed an interest in studying social media technologies as re-configuring forms of locality and presence in political controversies. In the following years, she began studying technologies less as tools or mediators and more as themselves social and political. From this grew her fascination with cybersecurity as a mode of governing technologies and humans alike. In the interdisciplinary PhD program SecHuman – Security for Humans in Cyberspace, her frustration grew with "factors research" in cybersecurity that reduces human, technological or organizational action to quasi-mathematical factors in a "system". From her background in Anthropology and Science and Technology Studies, she found approaches that focus on practices, interrelations and hybridity more convincing than separating the world into isolated areas and their "factors". Laura is currently finishing her doctoral thesis, "Fragile Relations - On Cybersecurity Practices in German Critical Infrastructures", at the Ruhr University in Bochum. She is a founding member of the Ruhr University Science and Technology Studies lab, where she explores participatory methodologies for the study of cybersecurity and environmental controversies.
This seminar reflects on our 2009 CHI paper "Ethnography Considered Harmful": http://www.cs.nott.ac.uk/~pszaxc/work/CHI09.pdf. The paper reviewed the current status of ethnography in systems design and focused particularly on new approaches to and understandings of ethnography that emerged as the computer moved out of the workplace. These approaches sought to implement a different kind of ethnographic study. In doing so they reconfigured the relationship ethnography has to systems design, replacing detailed empirical studies of situated action with studies that provide cultural interpretations of action and critiques of the design process itself. We hold these new approaches to and understandings of ethnography in design up to scrutiny, with the purpose of enabling designers to appreciate the differences between new and existing approaches to ethnography in systems design and the practical implications this might have for design. The paper was further elaborated in the book "Deconstructing Ethnography: Towards a Social Methodology for Interactive and Ubiquitous Systems Design": https://www.springer.com/gp/book/9783319219530
Andy Crabtree is Professor of Computer Science at the University of Nottingham. A sociologist by background and training, he has worked in an interdisciplinary context sensitising IT research and systems design to the social character of computing across a broad range of sectors for over 25 years. He was the first ethnographer to be awarded a Senior Fellowship by the EPSRC, focused on privacy and accountability in the Internet of Things. He has published over 150 peer-reviewed scientific works and three textbooks on Design Ethnography, and is a member of the EPSRC Strategic Advisory Network and the Strategic Priorities Fund Evaluation Advisory Group.
We will review recent work on Quantum Machine Learning and discuss the prospects and challenges of applying this exciting new computing paradigm to machine learning applications.
Iordanis Kerenidis (CNRS and QC Ware) received his Ph.D. from the Computer Science Department at the University of California, Berkeley, in 2004. After a two-year postdoctoral position at the Massachusetts Institute of Technology, he joined the Centre National de la Recherche Scientifique in Paris as a permanent researcher. He has been the coordinator of a number of EU-funded projects including an ERC Grant, and he is the founder and director of the Paris Centre for Quantum Computing. His research is focused on quantum algorithms for machine learning and optimization, including work on recommendation systems, classification and clustering. He is currently working as the Head of Algorithms Int. at QC Ware Corp.
What is deniability? Although the question might sound trivial, it has sparked a series of debates in the privacy and security community, ranging from legal to technical perspectives. In the context of secure communications and channels, this question is notoriously difficult to approach and analyze. To answer it, one needs to look at the broader picture in which deniability applies. In this talk, we will look at how a notion of deniability can be attained by making more explicit the definitions given in the work of Canetti et al., Unger, and Walfish.
Given these prior notions, can deniability be applied in the context of secure communication and channels? In this talk, we will try to clarify what deniability means in terms of communication, and specify how it can be implemented (and what it needs) in the real world.
Sofía Celi is a cryptography researcher and implementer at Cloudflare. She spends her time implementing in C and Go, and thinking about OTR, post-quantum algorithms, anonymous credentials and TLS.
Type-two constructions abound in cryptography: adversaries for encryption and authentication schemes, if active, are modeled as algorithms having access to oracles, i.e. as second-order algorithms. But how about making cryptographic schemes themselves higher-order? This paper gives an answer to this question, by first describing why higher-order cryptography is interesting as an object of study, then showing how the concept of probabilistic polynomial time algorithm can be generalized so as to encompass algorithms of order strictly higher than two, and finally proving some positive and negative results about the existence of higher-order cryptographic primitives, namely authentication schemes and pseudorandom functions.
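As a toy illustration of the "type-two" view described above (not taken from the paper itself), an active adversary can be modeled as a second-order algorithm: a function that receives an oracle, i.e. another function, as its argument. All names below (`one_time_pad_oracle`, `distinguisher`) are hypothetical:

```python
import secrets

def one_time_pad_oracle(key: bytes):
    """Returns an encryption oracle closing over a fixed key (toy scheme)."""
    def encrypt(message: bytes) -> bytes:
        return bytes(m ^ k for m, k in zip(message, key))
    return encrypt

def distinguisher(oracle) -> bool:
    """A second-order algorithm: its input is the oracle itself.
    This toy adversary checks whether the oracle is deterministic,
    which a semantically secure (randomized) scheme must not be."""
    c1 = oracle(b"attack at dawn")
    c2 = oracle(b"attack at dawn")
    return c1 == c2  # True means the oracle leaks message equality

key = secrets.token_bytes(14)
print(distinguisher(one_time_pad_oracle(key)))  # deterministic, so True
```

The paper's question is what happens when the schemes themselves, not only the adversaries, are allowed to be higher-order in this sense.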
Note the changed time.
See the attached URL for a bio. The talk will be given jointly with Ugo Dal Lago (University of Bologna & INRIA, see http://www.cs.unibo.it/~dallago/ ).
Fine-grained cryptography is concerned with adversaries that are only moderately more powerful than the honest parties. We will survey recent results in this relatively underdeveloped area of study and examine whether the time is ripe for further advances in it.
Alon Rosen is a full professor at the School of Computer Science at the Herzliya Interdisciplinary Center.
His areas of expertise are in theoretical computer science and cryptography. He has made contributions to the foundational and practical study of zero-knowledge protocols, as well as fast lattice-based cryptography, most notably in the context of collision resistant hashing and pseudo-random functions. In this context he co-introduced the ring-SIS problem and related SWIFFT hash function, as well as the Learning with Rounding problem. More recently he has been focusing on the study of the cryptographic hardness of finding a Nash equilibrium and on fine-grained cryptography.
Alon earned his PhD from the Weizmann Institute of Science (Israel) in 2003, and was a Postdoctoral Fellow at MIT (USA) from 2003 to 2005 and at Harvard University (USA) from 2005 to 2007. He has been a faculty member at IDC since 2007.
The notion of zero knowledge proofs underlies many of the mechanisms for obtaining verifiable, privacy-preserving delegation of computation. Loosely speaking, zero knowledge proofs are interactive proof systems that reveal nothing other than the validity of the assertion being proven.
With the rise of quantum information and the growing evidence that small-to-medium scale quantum computers may be possible in the near future, we have ample reasons to understand what possibilities quantum computing offers for privacy-preserving delegation of computation, as well as what security threats it poses.
In this talk, I will present the first construction of zero-knowledge proofs that are sound against quantum-entangled adversaries. The talk will be self-contained, and no preliminary knowledge in quantum computing is necessary.
Based on joint work with Alessandro Chiesa, Michael Forbes, and Nicholas Spooner (JACM 2021), as well as ongoing work.
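For readers who have not seen zero-knowledge proofs before, a classical toy example (not the construction in this talk) is the Schnorr identification protocol: a prover convinces a verifier that it knows a discrete logarithm w with h = g^w mod p, without revealing w. The parameters below are illustrative and far too weak for real use:

```python
import secrets

p = 2**127 - 1      # a Mersenne prime (toy parameters, not secure practice)
g = 3               # hypothetical group generator, for illustration only

w = secrets.randbelow(p - 1)    # prover's secret: the discrete log of h
h = pow(g, w, p)                # public statement: h = g^w mod p

# One round of the Schnorr protocol:
r = secrets.randbelow(p - 1)    # prover's random commitment exponent
a = pow(g, r, p)                # commitment sent to the verifier
c = secrets.randbelow(p - 1)    # verifier's random challenge
z = (r + c * w) % (p - 1)       # prover's response

# The verifier accepts iff g^z = a * h^c mod p; it learns that the
# prover knows w, but the transcript can be simulated without w.
assert pow(g, z, p) == (a * pow(h, c, p)) % p
print("accepted")
```

The talk's contribution concerns soundness of such proof systems against quantum adversaries, which this classical sketch does not address.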
Tom Gur is an associate professor in the Department of Computer Science at the University of Warwick, and a UKRI Future Leaders Fellow. He received his Ph.D. in 2017 from the Weizmann Institute of Science, under the supervision of Oded Goldreich, and spent two years at UC Berkeley before joining the University of Warwick. He was awarded the Shimon Even Prize in Theoretical Computer Science. His research interests are primarily in the foundations of computer science and combinatorics. Specific interests include sublinear-time algorithms, complexity theory, coding theory, cryptography, quantum computing, and more.
Many modern processors expose privileged software interfaces for dynamically modifying their frequency and voltage. These interfaces were introduced to cope with the ever-growing power consumption of modern computers. In this talk, we show how these privileged interfaces can be exploited to undermine the system's security. We present the Plundervolt attack, demonstrating how we can corrupt the integrity of Intel SGX computations, and we investigate whether Intel's mitigations have worked.
Kit is currently pursuing a PhD in Cyber Security at The University of Birmingham. Her research interests include embedded hardware- and software-based fault injection. Kit is also researching the reverse engineering of hardware faults through software emulation. She currently leads the University's Ethical Hacking Club, AFNOM, which encourages students to learn offensive security in a friendly, informal environment.
Cryptography underpins a multitude of critical security- and privacy-enhancing technologies. Recent advances in modern cryptography promise to revolutionize finance, cloud computing and data analytics. But cryptography does not affect everyone in the same way. In this talk, I will discuss how cryptography benefits some and not others and how cryptography research supports the powerful but not the disenfranchised.
Note the changed time.
Seny Kamara is an Associate Professor of Computer Science at Brown University and Chief Scientist at Aroki Systems. Before joining Brown, he was a researcher at Microsoft Research.
His research is in cryptography and is driven by real-world problems from privacy, security, and surveillance. He has worked extensively on the design and cryptanalysis of encrypted search algorithms, which are efficient algorithms to search on end-to-end encrypted data. He maintains interests in various aspects of theory and systems, including applied and theoretical cryptography, data structures and algorithms, databases, networking, game theory, and technology policy.
This talk builds and expands on the findings of the 2017 USENIX Security paper "When the Weakest Link is Strong: Secure Collaboration in the Case of the Panama Papers" to explore when and how security practices can be, and are, successfully applied and adopted by groups at risk. The paper's findings suggest that the sociocultural context in which security measures are introduced has an enormous impact on their effectiveness - in this case study, transforming the users from the "weakest link" into the strongest.
Note the changed time.
Susan McGregor is an Associate Research Scholar at Columbia University’s Data Science Institute, where she also co-chairs its Center for Data, Media & Society. McGregor’s research is centered on security and privacy issues affecting journalists and media organizations. Her current projects include NSF-funded work to provide readers with stronger guarantees about digital media by integrating cryptographic signatures into digital publishing workflows, an effort to develop novel classifiers for detecting abusive and harassing speech targeting journalists on Twitter, and using artificial intelligence and computer vision to help journalists recognize unfamiliar political graphics when reporting in the field. She is a member of the World Economic Forum's Global Future Council on Media, Entertainment & Sport, and is the author of two forthcoming books: Information Security Essentials: A Guide for Reporters, Editors and Newsroom Leaders is due out from Columbia University Press in early 2021; Practical Python Data Wrangling and Data Quality will be published by O’Reilly Media in summer 2021.
We are increasingly surrounded by simple (and not so simple) devices with computational and communication capabilities, which assist us in everyday tasks and together comprise the idea of an Internet of Things. To perform their duties these devices are often required to set up ad-hoc connections to interact, often with another device or system with which no prior trust relationship exists. Establishing a secure connection between two devices in such an unstructured environment presents some interesting research problems. Unfortunately, not all of these problems can be solved with conventional cryptographic mechanisms alone, and we need to look at alternative ways to reinforce existing security mechanisms by incorporating the physical context of a device into security protocols. Distance-bounding protocols allow a verifier to both authenticate a prover and evaluate whether the latter is located in their vicinity. These protocols are of particular interest in contactless systems, e.g., electronic payment or access control systems, which are vulnerable to distance-based fraud. This talk briefly introduces the use of physical context in security mechanisms before providing an overview of distance-bounding protocols.
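The core idea of distance bounding can be sketched as a timed challenge-response phase: each round-trip time upper-bounds the prover's distance, since no signal travels faster than light. The following single-process simulation is a toy illustration only (in a real protocol such as Hancke-Kuhn the prover derives its response registers from a shared key and nonces; here they are simply shared state, and `rapid_bit_exchange` is a hypothetical name):

```python
import secrets
import time

SPEED_OF_LIGHT = 3e8  # metres per second

def rapid_bit_exchange(rounds):
    """Verifier side of a toy distance-bounding timed phase."""
    reg0 = secrets.randbits(rounds)  # two one-bit-per-round registers,
    reg1 = secrets.randbits(rounds)  # modeled as shared state (toy)

    def prover(challenge_bit, i):
        # The prover must answer instantly: the i-th bit of the
        # register selected by the challenge bit.
        reg = reg1 if challenge_bit else reg0
        return (reg >> i) & 1

    max_rtt = 0.0
    for i in range(rounds):
        c = secrets.randbits(1)
        start = time.perf_counter()
        response = prover(c, i)          # stand-in for the radio round trip
        rtt = time.perf_counter() - start
        max_rtt = max(max_rtt, rtt)
        expected = ((reg1 if c else reg0) >> i) & 1
        if response != expected:
            return None                  # authentication failure
    # The round-trip time upper-bounds twice the prover's distance.
    return max_rtt * SPEED_OF_LIGHT / 2

bound = rapid_bit_exchange(32)
print(f"prover is within roughly {bound:.2f} metres")
```

Distance frauds in the abstract correspond to an attacker trying to answer earlier, or relay answers, in exactly this timed phase.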
Gerhard Hancke received B.Eng and M.Eng degrees in Computer Engineering from the University of Pretoria, South Africa, in 2002 and 2003. He received a PhD in Computer Science from the University of Cambridge, United Kingdom, in 2009 and an LLB from the University of South Africa, South Africa, in 2014. He joined City University of Hong Kong as faculty in 2013, where he is currently an Associate Professor. Prior to this, he worked as a researcher with the Smart Card and IoT Security Centre and as a teaching fellow with the Department of Information Security at Royal Holloway, University of London (RHUL). His research interests are system security and reliable communication and distributed sensing for the industrial Internet-of-Things. In 2019 he was awarded the J. David Irwin Early Career Award for "research and educational contributions and impact on secure and reliable technology for the Industrial Internet-of-Things" by IEEE IES. He is currently also an Associate Editor of the IEEE Transactions on Industrial Informatics, the IEEE Open Journal of the Industrial Electronics Society, Elsevier Ad Hoc Networks and IET Smart Cities.
Algebraic structure lies at the heart of much of Cryptomania as we know it. An interesting question is the following: instead of building (Cryptomania) primitives from concrete assumptions, can we build them from simple Minicrypt primitives endowed with additional algebraic structure? In this work, we affirmatively answer this question by adding algebraic structure to the following Minicrypt primitives: one-way functions, weak unpredictable functions and weak pseudorandom functions. The algebraic structure that we consider is group homomorphism over the input/output spaces of these primitives. We show that these structured primitives can be used to construct several Cryptomania primitives in a generic manner.
Our results make it substantially easier to show the feasibility of building many cryptosystems from novel assumptions in the future. In particular, we show how to realize any CDH/DDH-based protocol with certain properties in a generic manner from input-homomorphic weak unpredictable/pseudorandom functions, and hence, from any concrete assumption that implies the existence of these structured primitives.
Our results also allow us to categorize many cryptographic protocols based on which structured Minicrypt primitive implies them. In particular, endowing Minicrypt primitives with increasingly richer algebraic structure allows us to gradually build a wider class of cryptoprimitives. This seemingly provides a hierarchical classification of many Cryptomania primitives based on the "amount" of structure inherently necessary for realizing them.
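As a concrete toy instance of the kind of algebraic structure discussed above (illustrative, not an example from the paper), modular exponentiation is a conjectured one-way function that is also a group homomorphism from the additive group of exponents to the multiplicative group of the field; the parameters below are far too weak for real use:

```python
# Toy parameters; a real instantiation would use a carefully chosen group.
p = 2**127 - 1        # a Mersenne prime
g = 3                 # hypothetical generator choice, for illustration

def f(x: int) -> int:
    """Modular exponentiation: conjectured one-way (discrete log
    assumption), and homomorphic: f(x + y) = f(x) * f(y) mod p."""
    return pow(g, x, p)

x, y = 123456789, 987654321
assert f(x + y) == (f(x) * f(y)) % p
print("homomorphic one-way function: structure check passed")
```

It is exactly this extra homomorphic structure, layered on top of a plain Minicrypt primitive, that the abstract leverages to build Cryptomania primitives generically.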
Note the changed time.
Sikhar Patranabis has been a postdoc at ETH Zurich in the Applied Cryptography group headed by Prof. Kenny Paterson since November 2019. Prior to that, he received his PhD from IIT Kharagpur, India. His research interests span all aspects of cryptography, with a special focus on cryptographic complexity, database encryption, and secure implementations of cryptographic algorithms.
Cities are experimenting with new kinds of digitally augmented street furniture that recombine urban forms, like benches, pay phones and advertising billboards, with digital technologies such as free wi-fi, sensors and digital screens (Wessels & Humphry et al., 2020). The offer of free digital services is a key selling point for local governments confronting urban digital inequalities, ageing public utilities and state withdrawal from public infrastructure investments. This talk reports on findings from two studies: the first on LinkNYC, a city-wide implementation of smart kiosks in New York City; the second on the design, use and governance of InLinkUK smart kiosks in Glasgow and Strawberry Energy smart benches in London, conducted as part of an international collaboration with the University of Glasgow. These studies found that precariously connected media users such as the street homeless, students and gig workers rely on these services to maintain access, but are exposed to new kinds of security and safety risks at the point of connection. These asymmetrical connections include: 'insecure connections', because of the lack of support for wireless encryption in low-end Android devices; 'reduced physical safety', as kiosk and bench services are accessed in open public spaces; and 'greater exposure to commercial data exploitation' and 'to police and legal enforcement', since even with privacy protections in place there is potential for data and footage to be shared with third parties, including enforcement agencies. Despite users making strategic trade-offs in their engagement with these urban objects, these are not enough to overcome the asymmetries encoded in their design.
Note the changed time.
Justine Humphry is a Lecturer in Digital Cultures in the Department of Media and Communications at the University of Sydney. She researches the cultures and politics of digital media and emerging technologies with a focus on the social consequences of mobile, smart and data-driven technologies. Her research addresses the materialisation of smart cities and the datafication of urban life with a focus on the mediation of home and urban space through smart street furniture, smart voice assistants and robotics.
In this talk I will present a quantum reduction from the problem of sampling short vectors in a code to the problem of decoding its dual. Codes are usually endowed with the Hamming metric, but as I will show, the reduction works for a large class of metrics (including the rank and Lee metrics). Furthermore, in many cases we are able to show that, given a quantum algorithm solving the decoding problem at distance t, we can find short codewords of weight f(t) for some explicit function f. Surprisingly, for codes equipped with the Hamming metric the function f is related to the first linear programming bound of McEliece-Rodemich-Rumsey-Welch. This result is rather intriguing and does not seem to have a simple interpretation for now.
The main technical tool used in the proof is the discrete Fourier transform. The proof follows the same ideas as Regev's quantum reduction between the Closest Vector Problem and sampling short lattice vectors. More precisely, the proof I will present applies to codes the same techniques as Stehlé, Steinfeld, Tanaka and Xagawa's re-interpretation of Regev's proof.
This is joint work with Maxime Remaud and Jean-Pierre Tillich.
Thomas Debris-Alazard is a research scientist (chargé de recherche) at Inria in the Grace project-team. He was previously a postdoctoral research assistant in the Information Security Group under the supervision of Martin R. Albrecht. He received his PhD from Inria under the supervision of Jean-Pierre Tillich.
In this talk, I will revisit a paper Arun Kundnani, Joris van Hoboken and I started writing in 2014 and published in 2016. In the backdrop of our writing sessions in New York were the Black Lives Matter protests that started in Ferguson and spread nationwide, and human rights advocates disputing surveillance programs targeting Muslim communities in New York and New Jersey. While counter-surveillance was at the heart of all these developments, they flourished in communities and spoke to constituencies that were mostly distinct from another group that some of us were circling in: privacy advocates, progressive security engineers, and policy makers who, following Edward Snowden's revelations of US and UK surveillance programs, had been seeking to win majority support for countering surveillance. The paper studies this discrepancy by taking a closer look at the activities, discourse and solutions proposed by the latter group. It describes the ways in which advocates of privacy framed the problem as the replacement of targeted surveillance with mass surveillance programs, and identified the solutions as predominantly technical, involving the use of encryption – or 'crypto' – as a defense mechanism. The paper further illustrated that raising the specter of an Orwellian system of mass surveillance, shifting the discussion to the technical domain, and couching that shift in economic terms undermined a political reading that would attend to the racial, gendered, classed, and colonial aspects of the US and UK surveillance programs. We asked then: how can this specific discursive framing of counter-surveillance be re-politicized and broadened to enable a wider societal debate informed by the experiences of those subjected to targeted surveillance and associated state violence?
During the talk, I hope we can revisit this question anew, given how in 2020 COVID-19 has come to normalize surveillance in the name of public health, replacing the "war on terror" with the "war on the virus", and we are seeing the rise of a fresh wave of global protests around Black Lives Matter.
Seda is currently an Associate Professor in the Department of Multi-Actor Systems at TU Delft at the Faculty of Technology Policy and Management, and an affiliate at the COSIC Group at the Department of Electrical Engineering (ESAT), KU Leuven. She is also a member of the Institute for Technology in the Public Interest and the arts initiative Constant. Her work focuses on privacy enhancing and protective optimization technologies (PETs and POTs), privacy engineering, as well as questions around software infrastructures, social justice and political economy as they intersect with computer science.
Recent high-profile attacks on the Internet of Things (IoT) have brought to the forefront the vulnerability of “smart” devices. This has resulted in IoT technologies and end devices being subjected to numerous security analyses. One source that has the potential to provide rich and definitive information about an IoT device is the IoT firmware itself. However, analysing IoT firmware is notoriously difficult, as peripheral firmware files are predominantly available as stripped binaries, without the debugging symbols that would simplify reverse engineering. In this talk, we will present an open-source tool, argXtract, that extracts configuration information from Supervisor Calls within a stripped ARM Cortex-M binary file. Through a combination of generic ARM assembly analysis and vendor-specific configurations, argXtract is able to generate call trace chains and statically “execute” a firmware file in order to retrieve and process arguments to Supervisor Calls. This enables automated bulk analysis of firmware files, to derive statistical security information. We will also present a real-world test case in which we configure argXtract to obtain Bluetooth Low Energy security configurations from Nordic Semiconductor firmware files, and execute it against a dataset of 246 firmware binaries. The results demonstrate that privacy and security vulnerabilities are prevalent in IoT.
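To make the core idea concrete, here is a toy sketch of tracing an argument back to a Supervisor Call. It works on a made-up textual disassembly listing rather than a raw stripped binary, and the register-tracking is drastically simplified; this is an illustration of the concept, not argXtract's actual implementation.

```python
import re

# Toy disassembly listing of a Cortex-M function (invented example, not a
# real firmware dump). argXtract operates on stripped binaries directly;
# this sketch only illustrates recovering an SVC argument.
DISASM = """
  8001a2: movs r0, #1
  8001a4: movs r1, #0
  8001a6: svc  #0x71
  8001a8: movs r0, #7
  8001aa: svc  #0x7c
"""

def svc_calls_with_r0(disasm):
    """Return (svc_number, last value loaded into r0) pairs.

    A crude stand-in for backward slicing: remember the most recent
    immediate move into r0 before each SVC instruction.
    """
    last_r0 = None
    calls = []
    for line in disasm.strip().splitlines():
        m = re.search(r"movs\s+r0,\s+#(\d+)", line)
        if m:
            last_r0 = int(m.group(1))
            continue
        m = re.search(r"svc\s+#0x([0-9a-f]+)", line)
        if m:
            calls.append((int(m.group(1), 16), last_r0))
    return calls

print(svc_calls_with_r0(DISASM))  # [(113, 1), (124, 7)]
```

In the real tool, the recovered arguments are then interpreted against vendor-specific SVC definitions (e.g. Nordic's SoftDevice API) to extract security configurations.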
Pallavi Sivakumaran is a final-year CDT student with the Information Security Group at Royal Holloway, University of London. Her research focuses on security and privacy concerns associated with Bluetooth Low Energy, which is a key enabling technology for the Internet-of-Things (IoT).
Recent research efforts on adversarial ML have begun to investigate problem-space attacks, focusing on the generation of real evasive objects in domains where, unlike images, there is no clear inverse mapping to the feature space (e.g., malware). However, the design, comparison, and real-world implications of problem-space attacks remain underexplored.
In this talk, I will present two major contributions from our recent IEEE S&P 2020 paper [1]. First, I will present our novel reformulation of adversarial ML evasion attacks in the problem-space (also known as realizable attacks). This requires us to consider and reason about additional constraints that feature-space attacks ignore, which sheds light on the relationship between feature-space and problem-space attacks. Second, building on our reformulation, I will present a novel problem-space attack for generating end-to-end evasive Android malware, showing that it is feasible to generate evasive malware at scale, while evading state-of-the-art defenses.
[1] Fabio Pierazzi, Feargus Pendlebury, Jacopo Cortellazzi, Lorenzo Cavallaro. “Intriguing Properties of Adversarial ML Attacks in the Problem Space”. IEEE Symp. Security & Privacy (Oakland), 2020.
Note the changed time.
Feargus is a PhD cybersecurity student in the Information Security Group at Royal Holloway, University of London and a Visiting Scholar at the Systems Security Research Lab at King’s College London. His research explores the limitations of machine learning when applied to security settings.
Feargus was recently a visiting student at The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, and has twice interned at Facebook, with the Abusive Accounts and Compromised Accounts teams respectively, where he developed novel techniques for detecting and measuring harmful behaviour on social media platforms.
He is also the author and maintainer of TESSERACT, a framework and Python library for performing sound ML-based evaluations without experimental bias, and a core author and maintainer of TRANSCEND, a framework for detecting concept drift using conformal evaluation.
The summer of 2020 saw the largest anti-racist protests in British history. In order to understand how deeply entrenched racism is to British policing, we must look beyond the British mainland and into the colonial policing which shaped racial governance. It is in this context that we can better explore how policing and racism are resisted in 21st century Britain. This postcolonial lens will help analyse spontaneous resistance and organised radical campaigns which envision a world in which state punishment and violence is replaced with solidarity, care and co-operation.
Adam Elliott-Cooper is a research associate in sociology at the University of Greenwich. He has previously worked as a researcher in the Department of Philosophy at UCL, as a teaching fellow in the Department of Sociology at the University of Warwick and as a research associate in the Department of Geography at King's College London.
Over the last several years, numerous journalists and news organizations have reported incidents in which their communications have been hacked, intercepted, or retrieved. In 2014, Google security experts found that 21 of the world’s 25 most popular media outlets were targets of state-sponsored hacking attempts, and many journalists have watched helplessly as hackers took control of their social media accounts, targeting confidential information in their internal servers. When journalists’ digital accounts are vulnerable to hacks or surveillance, news organizations, journalists, and their sources are at risk, and journalists’ ability to carry out their newsmaking function is reduced. Yet, some journalists do not believe that hacking and surveillance are significant threats, and they are not adopting information security measures to protect their data, themselves, or their sources. This research study includes 19 interviews with journalists, developers, and digital security trainers to shed light on journalists’ perceptions of information security technologies, including motivations to adopt and barriers to adoption. The findings show that motivations to adopt information security technologies hinge on the idea of protection: protection of self, story, and the journalist’s role—more so than the protection of the source, contrary to contemporary discourse about why journalists need to adopt such technologies.
Note the changed time.
Jennifer R. Henrichsen is a Ph.D. Candidate at the Annenberg School for Communication at the University of Pennsylvania. She has received fellowships from Columbia University, the Knight Foundation and First Look Media and has been a consultant twice to UNESCO. A Fulbright Research Scholar, Jennifer holds master’s degrees from the University of Pennsylvania and the University of Geneva. In 2011, she co-wrote a book, War on Words: Who Should Protect Journalists? (Praeger). She co-edited the book, Journalism After Snowden: The Future of the Free Press in the Surveillance State (Columbia University Press, 2017) and she is currently co-editing a book on national security and journalism for Oxford University Press.
How do technologies mediate security practices and how can we study them? In this talk I draw on my fieldwork in banks to show how banks rely on technologies to detect suspicious transactions that may be connected to money laundering or terrorism financing. Inspired by work at the intersection of (critical) security studies and science and technology studies, I foreground the role of digital technologies in the production of security expertise by compliance officers and intelligence analysts in banks. The research is based on multi-sited ethnography centred around ‘sites of experimentation’.
Esmé Bosma is a PhD candidate at the Department of Political Science of the University of Amsterdam and a member of project FOLLOW: Following the Money from Transaction to Trial, funded by the European Research Council (ERC) (www.projectfollow.org). For her research project she has conducted field research inside and around banks in Europe to analyse counter-terrorism financing practices by financial institutions. Her research lies at the intersection between (critical) security studies and science and technology studies. She holds a master’s degree in Political Science from the University of Amsterdam. She is co-editor of the book Secrecy and Methods in Security Research. A Guide to Qualitative Fieldwork (Routledge, 2019).
In this talk I'll give an overview of my research on the security and privacy experiences of at-risk users. The talk will center on two studies with different populations: women in South Asia [1] and survivors of intimate partner abuse [2].
[1] “‘They Don't Leave Us Alone Anywhere We Go’: Gender and Digital Abuse in South Asia.” Nithya Sambasivan, Amna Batool, Nova Ahmed, Tara Matthews, Kurt Thomas, Sane Gaytán, David Nemer, Elie Bursztein, Elizabeth Churchill, Sunny Consolvo. CHI 2019 (Best Paper)
[2] “Stories from survivors: Privacy & security practices when coping with intimate partner abuse.” Tara Matthews, Kathleen O'Leary, Anna Turner, Manya Sleeper, Jill Palzkill Woelfer, Martin Shelton, Cori Manthorne, Elizabeth F Churchill, Sunny Consolvo. CHI 2017 (Best Paper)
Note the changed time.
Tara Matthews is a consultant working on security and privacy user experience issues with tech companies. Previously, she was a Senior User Experience Researcher in Google's Security & Privacy Research & Design Group for nearly 4 years. She was also a manager and team lead. Prior to joining Google in June 2014, Tara was a Research Scientist at IBM Research - Almaden for nearly 7 years, studying and improving the design of workplace collaboration and social software. Tara earned her Ph.D. in Computer Science from the University of California, Berkeley in 2007. Her major was Human-Computer Interaction and her dissertation work informed the design and evaluation of glanceable (low attention) information visualizations.
In the 1990s the US government feared the emergence of encryption technologies that would prevent them from conducting legal intercept and signals intelligence.
To preserve their capabilities, whilst at the same time providing public key encryption to citizens, the government developed a key escrow technology, the Clipper Chip, which would allow warranted recovery of suspects' encryption keys. In parallel, the government used export regulations in an attempt to prevent strong encryption escaping their borders and reaching foreign adversaries.
Opposing government policies were the digital privacy activists, including the Cypherpunks, a group of borderline anarchist technologists. The digital privacy activists developed and disseminated encryption technologies such as PGP to undermine government policies. The privacy activists also challenged the government export regulations in the courts in an attempt to have them declared unconstitutional.
This seminar will explore the main events of the battle between the government and digital privacy activists during the 1990s.
Craig is currently studying for a PhD in History & Information Security at RHUL.
Craig's research explores why the US administrations of the 1990s chose to regulate cryptography, with this being a proxy for privacy in the digital age, and how digital privacy activists such as the Cypherpunks opposed government policies.
Before studying at RHUL, Craig held the post of Chief Technology Officer at DXC Security. Craig holds Master's degrees in Cyber Security, International Security, and Classical Music.
Craig's first book, 'CryptoWars: The Fight for Privacy in the Digital Age: A Political History of Digital Encryption' will be released by Taylor and Francis in December 2020.
In this work, we consider the computer security and privacy practices and needs of recently resettled refugees in the United States. We ask: How do refugees use and rely on technology as they settle in the US? What computer security and privacy practices do they have, and what barriers do they face that may put them at risk? And how are their computer security mental models and practices shaped by the advice they receive? We study these questions through in-depth qualitative interviews with case managers and teachers who work with refugees at a local NGO, as well as through focus groups with refugees themselves. We find that refugees must rely heavily on technology (e.g., email) as they attempt to establish their lives and find jobs; that they also rely heavily on their case managers and teachers for help with those technologies; and that these pressures can push security practices into the background or make common security “best practices” infeasible. At the same time, we identify fundamental challenges to computer security and privacy for refugees, including barriers due to limited technical expertise, language skills, and cultural knowledge — for example, we find that scams as a threat are a new concept for many of the refugees we studied, and that many common security practices (e.g., password creation techniques and security questions) rely on US cultural knowledge. From these and other findings, we distill recommendations for the computer security community to better serve the computer security and privacy needs and constraints of refugees, a potentially vulnerable population that has not been previously studied in this context.
Note the changed time.
Lucy is a PhD student in Computer Science & Engineering at the University of Washington. Her research focuses on the security and privacy-related needs and practices of understudied or underserved populations.
In this talk, I will revisit a qualitative research project examining how digital activists navigate risks posed to them in online environments. I examined how a group of activists across ten different non-Western countries adapted and responded to threats posed by two types of powerful actors: both the state, and technology companies that run the social media platforms on which many activists rely to conduct their advocacy. Through a series of interviews, I examined how resistance against censorship and surveillance manifested in everyday practices, not just the use of encryption and circumvention technologies, but also the choice to use commercial social media platforms to their advantage despite considerable ambivalence about the risks they pose. Much has changed in the digital landscape since I first conducted this work: in the discussion I plan to engage with how these findings prefigured larger concerns about misinformation and digital surveillance, and illustrate the importance of balancing locally contingent interpretations of risk against the larger geopolitical backdrop in which technology companies now play an important role.
Note the changed time.
Sarah Myers West is a postdoctoral researcher at the AI Now Institute, where her research engages with the culture, politics and practices of technology developers, and incorporates both historical and ethnographic methods. Her current projects explore themes of power and resistance in the history of AI. She holds a doctorate from the Annenberg School for Communication and Journalism at the University of Southern California, where her dissertation examined the cultural history and politics of encryption technologies from the 1960s to the present day. Her work is published in journals such as New Media & Society, the International Journal of Communication, and Policy & Internet.
In this talk we investigate the problem of automating the development of adaptive chosen ciphertext attacks on systems that contain vulnerable format oracles. Rather than simply automate the execution of known attacks, we consider a more challenging problem: to programmatically derive a novel attack strategy, given only a machine-readable description of the plaintext verification function and the malleability characteristics of the encryption scheme. We present a new set of algorithms that use SAT and SMT solvers to reason deeply over the design of the system, producing an automated attack strategy that can entirely decrypt protected messages.
Note the changed time.
Matthew D. Green is an Associate Professor at Johns Hopkins University. He works on topics in applied cryptography, including the design of privacy-preserving protocols and attacks on deployed cryptographic systems.
Since the 1960s we have been told that new computing technologies are ushering in a new era: the Computer Revolution and Knowledge Economy (1962), Global Village and One-Dimensional Man (1964), the Third (1975) and the Fourth (2015) Industrial Revolution(s). There was never a consensus on which kind of computational techniques were behind the change. Then in the 1990s a number of humanities academics discovered the Internet. They used their understanding of its technical architecture to define the relevant properties of computation that were behind our emerging network society. Canonical scholarship emerged (Castells 1996) alongside cyberlibertarian visions (Barlow 1996) that told much the same story: computer networks were not only naturally decentralized and liberating, they were welcome solvents on the old, centralized order. When cryptography burst into public (and humanist) consciousness, it could only make things even better by further empowering the individual.
In my talk I want to offer a different characterization of the relationship between computer networks, cryptography, and their consequences for society. To do so, I go back to one of the beginnings, to Paul Baran's Distributed Adaptive Message Block Network, as outlined in his canonical On Distributed Communications--drawing in particular on the formerly classified twelfth volume of this series. Rather than envision networks as naturally distributed and open, Baran can help us better characterize them as naturally closed, encrypted, and at odds with individual liberty. I will discuss other reasons for this claim, as well as its consequences--offering an outline of what networks mean when we build cryptography into their identity and function.
I am a historian of computing, and use historical analysis to improve outcomes for STEM and tech policy organizations and research projects. I specialize in the evolution of computer network protocols, architectures, security, and technical management. I work as an Assistant Professor at the Stevens Institute of Technology, in the Science, Technology, and Society Program. I have projects underway for Google, Lockheed Martin, the National Science Foundation, ICANN, and MIT Press. Previously I was a researcher with the UCLA Computer Science Department.
Drawing on the experiences of a novel collaborative project between sociologists and computer scientists, this talk identifies a set of challenges for fieldwork that are generated by this 'wild interdisciplinarity'. Public Access Wi-Fi Service was a project funded by an ‘in-the-wild’ research programme, involving the study of digital technologies within a marginalised community, with the goal of addressing digital exclusion. I argue that similar forms of research, in which social scientists are involved in the deployment of experimental technologies within real world settings, are becoming increasingly prevalent. The fieldwork for the project was highly problematic, with the result that few users of the system were successfully enrolled. I'll analyse why this was the case, identifying three sets of issues which emerge in the juxtaposition of interdisciplinary collaboration and wild setting. I conclude with a set of recommendations for projects involving technologists and social scientists.
Murray Goulden is Assistant Professor of Sociology at the University of Nottingham, and an alumnus of the Horizon Digital Research Institute. He has worked extensively on research applying novel digital technologies to real world settings. This includes Co-I on the EPSRC TIPS2 Internet of Things project ‘Defence Against the Dark Artefacts’, and earlier Researcher Co-I roles on two EPSRC-funded projects – ‘Public Access WiFi Service’ and ‘Creating the Energy for Change’. These projects span his interests in networking, digital data, and smart energy, their role in everyday life through the reconfiguring of associated social practices, and the implications for policy making and design. He is currently the recipient of a 3 year Nottingham Research Fellowship, focused on the implications of Internet of Things technologies for patterns of life within the home.
This talk will revisit joint work with Harry Halpin and Ksenia Ermoshina, conducted in the frame of the H2020 European project NEXTLEAP (2016-2018, nextleap.eu). Due to the increased and varied deployment of secure messaging protocols, differences between what developers “believe” are the needs of their users and their actual needs can have very tangible and potentially problematic consequences. Based on 90 interviews with both high and low-risk users, as well as with several developers, of popular secure messaging applications, we mapped the design choices made by developers to threat models of both high-risk and low-risk users. Our research revealed interesting and sometimes surprising results, among which: high-risk users often consider client device seizures to be more dangerous than compromised servers; key verification is important to high-risk users, but they often do not engage in cryptographic key verification, instead using other “out of band” means; high-risk users, unlike low-risk users, often need pseudonyms and are heavily concerned over metadata collection. Developers tend to value open standards, open-source, and decentralization, but high-risk users often find these aspects less urgent given their more pressing concerns; and while, for developers, avoiding trusted third parties is an important concern, several high-risk users are in fact happy to rely on trusted third parties ‘protected’ by specific geo-political situations. We conclude by suggesting that work still needs to be done for secure messaging protocols to be aligned with real user needs, including high-risk, and with real-world threat models.
Francesca Musiani (PhD, socio-economics of innovation, MINES ParisTech, 2012), is associate research professor at the French National Center for Scientific Research (CNRS) since 2014. She is Deputy Director of the Center for Internet and Society of CNRS, which she co-founded with Mélanie Dulong de Rosnay in 2019. She is also an associate researcher at the Center for the sociology of innovation (i3/MINES ParisTech) and a Global Fellow at the Internet Governance Lab, American University in Washington, DC. Since 2006, Francesca’s research work focuses on Internet governance, in an interdisciplinary perspective merging information and communication sciences, science and technology studies (STS) and international law. Her most recent research explores, or has explored, the development and use of encryption technologies in secure messaging (H2020 European project NEXTLEAP, 2016-2018), “digital resistances” to censorship and surveillance in the Russian Internet (ANR project ResisTIC, 2018-2021), and the governance of Web archives (ANR project Web90, 2014-2017 and CNRS Attentats-Recherche project ASAP, 2016). Francesca’s theoretical work explores STS approaches to Internet governance, with particular attention paid to socio-technical controversies and to governance “by architecture” and “by infrastructure”. Francesca is the author of several journal articles and books, including Nains sans géants. Architecture décentralisée et services Internet (Dwarfs Without Giants: Decentralized Architecture and Internet Services, Presses des Mines [2015], recipient of the French Privacy and Data Protection Commission’s Prix Informatique et Libertés 2013).
In just a few years, Fully Homomorphic Encryption (FHE) has gone from a theoretical “holy grail” of cryptography to a commercial product. This is in part due to the development of Machine Learning as a Service, and the fact that our society has evolved to be data-driven. As a consequence, secure computation has become more valuable and has seen some great advances. In this talk, we will discuss some of these improvements in FHE, as well as some of the latest implementation results. We will finish by discussing one of the main challenges in FHE: the analysis of the noise growth in an FHE ciphertext.
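The noise-growth problem can be illustrated with a deliberately simplified, scalar LWE-style scheme (real FHE schemes use vectors or polynomials; every parameter here is illustrative). Each ciphertext carries a small error term, homomorphic addition adds the errors together, and decryption stays correct only while the accumulated noise remains well below the modulus.

```python
import random

Q = 1 << 15                 # ciphertext modulus (toy size)
S = random.randrange(Q)     # toy scalar secret key

def enc(m, noise=3):
    """Encrypt a bit m as (a, a*S + 2*e + m mod Q) with small noise e."""
    e = random.randint(-noise, noise)
    a = random.randrange(Q)
    return (a, (a * S + 2 * e + m) % Q)

def dec(c):
    a, b = c
    v = (b - a * S) % Q
    if v > Q // 2:          # centre v into (-Q/2, Q/2]; v = 2*e + m
        v -= Q
    return v % 2            # correct only while |2*e + m| < Q/2

def add(c1, c2):            # homomorphic XOR: the noises add up
    return ((c1[0] + c2[0]) % Q, (c1[1] + c2[1]) % Q)

bits = [random.randint(0, 1) for _ in range(50)]
acc = enc(0)
for b in bits:
    acc = add(acc, enc(b))
assert dec(acc) == sum(bits) % 2   # noise is still far below Q/2 here
```

Fifty additions keep the error bounded by a few hundred, far under Q/2 = 16384, so decryption succeeds; with enough operations (and especially multiplications, not shown) the noise would swamp the message, which is why noise analysis and bootstrapping are central to FHE.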
I recently joined the ISG group at Royal Holloway as a postdoc researcher. Previously, I spent a year at Intel as a research scientist, working on Privacy-Preserving Machine Learning (PPML). Even before that, I was a PhD student at Bristol University, from where I obtained my PhD in 2018. I work on privacy-preserving machine learning, fully homomorphic encryption and more broadly, computing on encrypted data, lattice-based and post-quantum cryptography.
It is well known that older adults continue to lag behind younger adults in terms of their breadth of uptake of digital technologies, amount and quality of engagement in these tools and ability to critically engage with the online world. Can these differences be explained by older adults’ distrust of digital technologies? Is trust, therefore, a critical design consideration for appealing to older adults? In this talk I will argue that while distrust is not, in fact, determinative of non-use and therefore does not explain these differences in tech usage, it is nonetheless key for designers to understand older adult distrust in developing socially responsible technologies.
Bran is a lecturer in the Data Science Institute at Lancaster University. Her research explores the social impacts of computing, with a particular interest in trust, privacy, and ethics. Her recent work has explored these issues at both ends of the age spectrum, with projects such as IoT4Kids, looking at the privacy, security and ethical issues of enabling children to programme IoT devices; and Mobile Age, looking at developing mobile apps for older adults. Bran currently serves as a member of the ACM Europe Technology Policy Committee.
Traditionally, “provable security” was tied in the minds of cryptographers to public-key cryptography, asymptotic analyses, number-theoretic primitives, and proof-of-concept designs. In this talk I survey some of the work that I have done (much of it joint with Mihir Bellare) that has helped to erode these associations. I will use the story of practice-oriented provable security as the backdrop with which to make the case for what might be called a “social constructionist” view of our field. This view entails the claim that the body of work our community has produced is less the inevitable consequence of what we aim to study than the contingent consequence of sensibilities and assumptions within our disciplinary culture.
Note the changed time.
I'm a professor of Computer Science at the University of California, Davis, USA. My research has focused on obtaining provably-good solutions to practical protocol problems. I did my undergrad work at UCD and my Ph.D. at MIT. I came to UCD in 1994, but have spent some of those years on leaves and sabbaticals, most often in Thailand. In recent years I've been increasingly concerned about ethical and social problems connected to technology, and the majority of my teaching is now on that.
Distance bounding protocols constitute a special class of authentication protocol, in which participants must verify not only the identity of their partner, but also their physical location. They are important for systems such as contactless card payments or electronic doors, to avoid scenarios in which an attacker might relay messages over a longer distance than intended. This is typically achieved by using a time-sensitive challenge-response phase, where the verifying agent estimates distance by calculating the round trip time of their challenge messages. There are some difficulties in applying traditional security verification approaches to this family of protocols. Symbolic approaches, which aim to abstract away details (such as the nature of the cryptographic primitives used), must deal with the fact that many attack scenarios are intrinsically linked with the location and timing of messages.
In this talk, we present a model for analysing distance bounding protocols. The model of Basin et al., which uses a bespoke implementation in Isabelle/HOL, is adapted to remove speed-of-light calculations for message timings. Instead, a (provably) equivalent security claim is developed that focuses on the precise ordering of actions during a protocol execution. This approach enables an embedding into the Tamarin prover tool, allowing for rapid automated verification. Further, we discuss extensions to the model to analyse so-called "dishonest" agents -- who generally follow their specification but are willing to temporarily deviate in order to collaborate with the network adversary. Such agents are particularly relevant for modelling "terrorist fraud" attacks, where an adversary can be (illegally) granted a one-time key. Finally, the results of an extensive literature survey are presented, discussing common pitfalls in protocol design.
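The timing intuition behind distance bounding is simple enough to sketch: the verifier's challenge travels to the prover and back, so the round-trip time upper-bounds the distance by c·rtt/2. The numbers below are illustrative, not from any specific deployment.

```python
C = 299_792_458.0  # speed of light in m/s, an upper bound on signal speed

def distance_upper_bound(rtt_seconds, processing_delay=0.0):
    """Verifier's estimate: the prover is at most this many metres away.

    The challenge travels out and the response back, so the one-way
    distance is bounded by c * (rtt - processing_delay) / 2.
    """
    return C * max(rtt_seconds - processing_delay, 0.0) / 2

def accept(rtt_seconds, max_distance_m, processing_delay=0.0):
    return distance_upper_bound(rtt_seconds, processing_delay) <= max_distance_m

# e.g. a contactless terminal that requires the card within 1 m:
print(distance_upper_bound(6e-9))   # ~0.9 m for a 6 ns round trip
print(accept(6e-9, 1.0))            # True
print(accept(40e-9, 1.0))           # False: ~6 m away, suggests a relay
```

Note how tight the margins are: a relay adding tens of nanoseconds already corresponds to metres of extra distance, which is why symbolic models must reason carefully about the ordering and timing of the challenge-response phase.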
Zach is a PhD candidate at the University of Luxembourg in the field of computer security. His focus is on the development of formal models for security protocols, in order to define precise security requirements. Research interests include security for RFID and IoT devices, as well as multiparty protocols. His other interests include game development, swing dance, and locking himself inside to write his PhD thesis.
In this talk, I discuss the ethical challenges and dilemmas that arise as a result of state involvement in academic research on ‘terrorism’ and ‘extremism’. I suggest that researchers and research institutions need to be more attentive to the possibilities of co-option, compromise, conflict of interests and other ethical issues. I empirically examine the relationship between academic researchers and the security state. I highlight three key ways in which ethical and professional standards in social scientific research can be compromised: (1) Interference with the evidence base (through a lack of transparency on data and conflicts of interest); (2) Collaboration on research supporting deception by the state which undermines the ability of citizens to participate in democratic processes; and (3) Collaboration on research legitimating human rights abuses, and other coercive state practices. These issues are widespread, but neglected, across: literature on 'terrorism' and 'extremism'; literature on research ethics; and, in practical ethical safeguards and procedures within research institutions. In order to address these issues more effectively, I propose that any assessment of research ethics must consider the broader power relations that shape knowledge production as well as the societal impact of research. In focusing on the centrality of states – the most powerful actors in the field of ‘terrorism’ and ‘extremism’ – our approach moves beyond the rather narrow procedural approaches that currently predominate. I argue more attention to the power of the state in research ethics will not only help to make visible, and combat, ethically problematic issues, but will also help to protect the evidence base from contamination. I conclude by proposing a series of practical measures to address the problems highlighted.
Narzanin is a Lecturer in Criminology at the University of Exeter. Her research focuses on racism, social movements and counter-terrorism. She is currently working on a study researching the impact of counter-terrorism policy and practice on UK higher education. She is co-editor of the book What is Islamophobia? Racism, Social Movements and the State (Pluto Press, 2017) and author of Muslim Women, Social Movements and the ‘War on Terror’ (Palgrave Macmillan, 2015).
In this talk, I will briefly present the EasyCrypt interactive proof assistant, whose focus is on the formalization of game-based cryptographic security proofs, before discussing its application to the SHA-3 standard. In combination with the Jasmin language—an "assembly-in-the-head" language with formalized semantics and a certified compiler—our proof is used to produce a complete high-assurance standard, with machine-checked proofs, verified reference implementations, and a verified optimized implementation for a specific platform.
I will discuss some of the challenges encountered in formalizing the security proof, and the techniques afforded by the combined use of "interactive first" technologies such as Jasmin and EasyCrypt, which allow us to produce highly efficient, yet fully verified, implementations. Some future perspectives may also be discussed.
I am a Senior Lecturer in the Cryptography Group and Department of Computer Science at the University of Bristol (UK). My research revolves around proving cryptographic and side-channel security properties of concrete realizations and implementations of cryptographic primitives and protocols, sometimes in the presence of partial compromise. This involves tackling problems in modelling adversaries and systems, designing and applying proof methodologies and verification tools, and generally finding less tedious ways of verifying complex properties of large (but not vast) systems and code bases.
Hybrid Authenticated Key Exchange (AKE) protocols combine keying material from different sources (for instance, post-quantum and classical secure key exchange primitives) to build protocols that are resilient to catastrophic failures of the different components. In this talk, I will present the results of a recent work with Torben Hansen and Kenny Paterson: a new hybrid key exchange protocol called Muckle - a simple one-round-trip key exchange protocol that combines preshared keys, post-quantum and classical key encapsulation mechanisms, and quantum key distribution protocols. I will also discuss a general framework HAKE for the analysis of hybrid AKE protocols, and demonstrate the security of our approach with respect to a powerful attacker, capable of fine-grained compromise of different cryptographic components. HAKE is broad enough to allow us to capture forward secrecy, multi-stage key exchange security, and post-compromise security. I will present an implementation of our Muckle protocol, instantiating our generic construction with classical and post-quantum Diffie-Hellman-based algorithmic choices and discuss the results of benchmarking exercises against our implementation.
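The core idea of combining keying material from different sources can be sketched as chaining each component secret through a key-derivation step, so that the final session key remains unpredictable as long as any one component is uncompromised. The sketch below is a simplified illustration under stated assumptions, not Muckle's actual key schedule; the HMAC-based extraction and the placeholder secrets are stand-ins.

```python
import hashlib
import hmac

def kdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-style extract step: mix new keying material into the chain."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

# Hypothetical component secrets from the different sources
psk = b"preshared-key-material"
classical_ss = b"classical-kem-shared-secret"   # e.g. from an ECDH-based KEM
pq_ss = b"post-quantum-kem-shared-secret"       # e.g. from a lattice-based KEM

# Chain each secret into a running key: an adversary who is missing
# any single input cannot predict the final key.
chain = b"\x00" * 32
for secret in (psk, classical_ss, pq_ss):
    chain = kdf_extract(chain, secret)

session_key = chain
print(len(session_key))  # 32-byte session key
```

The order of chaining matters only for interoperability, not for the resilience argument: each extract step preserves the entropy contributed by earlier secrets.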
Ben Dowling is a postdoc at ETH Zurich, in the Applied Cryptography group headed by Prof. Kenny Paterson since July 2019, and was previously a postdoc in the Information Security Group at Royal Holloway, University of London from January 2017. His research interests focus primarily on the provable security of real-world cryptographic protocols, in particular, expanding the frameworks used in the analysis of security protocols to cover novel properties and dependencies not currently examined in the literature.
Mobile sensors have already proven to be helpful in different aspects of people’s everyday lives, such as fitness, gaming and navigation. However, illegitimate access to these sensors gives a malicious program an exploit path. While users are benefiting from richer and more personalized apps, the growing number of sensors introduces new security and privacy risks to end users and makes the task of sensor management more complex. In this talk, first, we discuss the issues around the security and privacy of mobile sensors. We investigate the available sensors on mainstream mobile devices and study the permission policies that Android, iOS and mobile web browsers offer for them. Second, we reflect on the results of two workshops that we organized on mobile sensor security. In these workshops, the participants were introduced to mobile sensors by working with sensor-enabled apps. We evaluated the risk levels perceived by the participants for these sensors after they understood the sensors' functionalities. The results showed that knowing sensors by working with sensor-enabled apps did not immediately improve the users’ inference of the actual risks of these sensors. However, other factors, such as prior general knowledge about these sensors and their risks, had a strong impact on the users’ perception. We also taught the participants about the ways that they could audit their apps and their permissions. Our findings showed that when mobile users were provided with reasonable choices and intuitive teaching, they could easily self-direct themselves to improve their security and privacy. Finally, we provide recommendations for educators, app developers, and mobile users to contribute toward awareness and education on this topic.
*** I have a PhD studentship for Sep 2020 on "Cyber Security in Farm and Companion Animal Technologies" (schools of computing and agriculture) at Newcastle University. If you are interested, come and talk to me after the presentation, or email me any time.
I am a Research Fellow in Cyber Security, School of Computing, Newcastle University (NU), UK. I have a PhD in Computing Science, MSc and BSc in Computer Engineering. I work on Sensor, Mobile, and IoT Security, Security Standardisation, and Usable Security and Privacy. I work with W3C as an invited expert on sensor specifications. I am particularly interested in real-world multi-disciplinary projects. I am an advocate for Equality, Diversity and Inclusion (EDI) (a member of EDI committee in the School of Computing, Newcastle University) and particularly support women in STEM.
This talk will explore the disruptive and transformative effects of digital technology on gendered security asymmetries in Greenland. Through extended ethnographic fieldwork conducted in Greenland and Denmark, research findings emerged through in-depth interviews, collaborative mappings and field observations with 51 participants. Employing a critical feminist lens, the paper identifies how Greenlandic women develop digital security practices to respond to Greenland's ecologically, politically and socially induced transformation processes. By connecting individual security concerns of Greenlandic women with the broader regional context, the findings highlight how digital technology has created transitory spaces in which collective security is cultivated, shaped and challenged. The contribution to security scholarship is therefore threefold: (1) identification and acknowledgement of gendered effects of increased usage of digital technology in remote and hard-to-reach communities, (2) a broader conceptualisation of digital security and (3) a recommendation for more contextualised, pluralistic digitalisation design.
This talk is based on: Wendt, Nicola, Rikke Bjerg Jensen and Lizzie Coles-Kemp. "Civic Empowerment through Digitalisation: the Case of Greenlandic Women." In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems - CHI'20, New York, 2020. ACM Press.
Nicola is a PhD candidate supervised across the Information Security Group (Dr Rikke Bjerg Jensen) and the Geography Department (Prof Klaus Dodds) at Royal Holloway and funded by the Leverhulme Trust. In her PhD she focuses on identity formation within an increasingly digitalised public sphere in Greenland and, through this, explores gendered notions of security. Ethnographic in nature and using community-based participatory research methods, Nicola’s research investigates the intersection of digital technology and social practices, looking at how experiences of technological transitions are negotiated against a backdrop of historic and contemporary inequalities. She received her BA in International Relations from the University of Groningen and her MA from the Universities of Uppsala and Strasbourg.
Academic research on machine learning-based malware classification appears to leave very little room for improvement, boasting F1 performance figures of up to 0.99. Is the problem solved? In this talk, we argue that there is an endemic issue of inflated results due to two pervasive sources of experimental bias: spatial bias, caused by distributions of training and testing data not representative of a real-world deployment, and temporal bias, caused by incorrect splits of training and testing sets (e.g., in cross-validation) leading to impossible configurations. To overcome this issue, we propose a set of space and time constraints for experiment design. Furthermore, we introduce a new metric that summarizes the performance of a classifier over time, i.e., its expected robustness in a real-world setting. Finally, we present an algorithm to tune the performance of a given classifier. We have implemented our solutions in TESSERACT, an open source evaluation framework that allows a fair comparison of malware classifiers in a realistic setting. We used TESSERACT to evaluate two well-known malware classifiers from the literature on a dataset of 129K applications, demonstrating the distortion of results due to experimental bias and showcasing significant improvements from tuning.
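The temporal constraint described above can be illustrated with a minimal sketch (not the TESSERACT implementation itself; the toy data and function names are hypothetical): a time-aware split requires every training sample to predate every test sample, so no "future" knowledge leaks into training.

```python
from datetime import date

# Toy labelled samples: (app_id, first_seen_date, label)
samples = [
    ("app1", date(2014, 1, 10), 0),
    ("app2", date(2014, 6, 2), 1),
    ("app3", date(2015, 3, 15), 0),
    ("app4", date(2015, 11, 30), 1),
    ("app5", date(2016, 4, 1), 0),
]

def temporal_split(samples, cutoff):
    """Enforce the temporal constraint: train strictly before the cutoff,
    test on or after it. A random split (e.g. plain cross-validation)
    would instead mix future samples into the training set."""
    train = [s for s in samples if s[1] < cutoff]
    test = [s for s in samples if s[1] >= cutoff]
    return train, test

train, test = temporal_split(samples, date(2015, 1, 1))
# Sanity check: nothing in the training set postdates the test set.
assert max(s[1] for s in train) < min(s[1] for s in test)
print([s[0] for s in train])  # ['app1', 'app2']
print([s[0] for s in test])   # ['app3', 'app4', 'app5']
```

A spatially unbiased experiment would additionally require the malware/goodware ratio in the test set to match realistic deployment proportions, which this sketch does not model.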
The main results of this talk are published in: - Feargus Pendlebury, Fabio Pierazzi, Roberto Jordaney, Johannes Kinder, Lorenzo Cavallaro. TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time. USENIX Security Symposium, 2019.
Fabio Pierazzi is currently a Lecturer (Assistant Professor) in Computer Science at King's College London, where he is also a member of the Cybersecurity (CYS) group. His research expertise is on statistical methods for malware analysis and intrusion detection, with a particular emphasis on settings in which attackers adapt quickly to new defenses (i.e., high non-stationarity). Before joining King’s College London as a Lecturer in Sep 2019, he obtained his Ph.D. in Computer Science in 2017 from University of Modena and Reggio Emilia, Italy, under the supervision of Prof. Michele Colajanni; he spent most of 2016 as a Visiting Researcher at the University of Maryland, College Park, USA, under the supervision of Prof. V.S. Subrahmanian; between Oct 2017 and Sep 2019, he has been a Post-Doctoral Researcher in the Systems Security Research Lab (S2Lab), first at Royal Holloway University of London and then at King’s College London, under the supervision of Prof. Johannes Kinder and Prof. Lorenzo Cavallaro. Home page: https://fabio.pierazzi.com
It is just over ten years since the first academic work on Automatic Exploit Generation (AEG). In this talk I will provide a brief history of the topic, and explain the current state of the art and open problems. I will then discuss our most recent work on greybox exploit generation against language interpreters. Language interpreters, such as those for Python, PHP, JavaScript etc., are typically large and complex applications that are difficult to analyse using whitebox methods, such as symbolic execution. In this work we have sought to create an entirely greybox pipeline for AEG. To do so we have broken down the exploit generation problem into several subproblems, constructed greybox solutions for each, and chained these solutions together to produce exploits. Our current implementation can produce exploits for the Python and PHP interpreters, and I will outline our ongoing efforts to extend this to JavaScript interpreters.
Sean Heelan is a co-founder/CTO of Optimyze and a PhD candidate at the University of Oxford. In the former role he develops products for increasing the efficiency of large-scale, cloud-based systems, and in the latter he is investigating automated approaches to exploit generation. Previously he ran Persistence Labs, a reverse engineering tooling company, and worked as a Senior Security Researcher at Immunity Inc. At Immunity he led a team under DARPA's Cyber Fast Track programme, investigating hybrid approaches to vulnerability detection using a mix of static and dynamic analyses.
Much attention in cyber security has turned to new technologies and new materialities of information. This attention overlooks the fact that much security attention in everyday life is oriented around more conventional objects of security, such as documents. In this talk, I discuss why scholars should take documents and other everyday materialities more seriously. I build my argument based on ethnographic fieldwork conducted in the South Korean corporate world between 2011 and 2017. First, I suggest that even as organizations are increasingly paperless, documents nevertheless persist as focal objects, serving as idealised informational containers. Second, I suggest that digital security is not distinct from older material forms, such as paper; in contrast, new digital infrastructures, such as cloud storage, are increasingly developed to protect older forms. Third, documents fit within social practices of protection beyond formal demands of information protection. I demonstrate how the Korean employees I researched with treated documents with extra protection beyond legal requirements. These arguments point to new ways of thinking about how 'everyday' dimensions of security and securitisation are mediated by specific material objects and practices.
Michael Prentice was trained as a linguistic and cultural anthropologist at the University of Michigan, Ann Arbor. His doctoral research focused on the role of genres of communication in modern workplaces, and how they come to articulate ideas of democracy, progress, and global management. He has carried out field research in the South Korean corporate world since 2011. His book manuscript looks at efforts to reform hierarchy in the Korean corporate world. At Manchester, he is a research fellow with the Digital Trust & Security initiative, focused on issues around workplace security. In particular, he is interested in addressing issues surrounding the effects of securitization on everyday work life.
Underground communities attract people interested in illicit activities and easy money-making methods. In this joint talk, we will discuss the role of these forums in two different activities: eWhoring and the use of malware for illicit cryptocurrency mining.
On the one hand, eWhoring is the term used by offenders to refer to an online fraud in which they pose as partners in cyber-sexual encounters. Using all sorts of social engineering skills, offenders aim at scamming their victims into paying for sexual material of a third party. We have analysed material and tutorials posted in underground forums to shed light on this previously unknown deviant activity.
On the other hand, illicit crypto-mining uses stolen resources to mine cryptocurrencies for free. This threat is now pervasive and growing rapidly. Our talk will cover how this ecosystem is evolving, how much harm it is causing, and how it can be stopped. Our measurement shows that criminals have illicitly mined about 4.4% of the Monero cryptocurrency (we estimate that this accounts for 58 million USD). We also observe that a comparatively small number of actors hold sway over this crime. Furthermore, we note that there is an increasing level of support offered by criminals in underground markets, allowing other criminals to run inexpensive malware-driven mining campaigns. This explains why this threat grew sharply in 2018.
Guillermo Suarez-Tangil is a Lecturer of Computer Science at King's College London (KCL). His research focuses on systems security and malware analysis and detection. In particular, his area of expertise lies in the study of smart malware, ranging from the detection of advanced obfuscated malware to automated analysis of targeted malware. Before joining KCL, he was a senior research associate at University College London (UCL), where he explored the use of program analysis to study malware. He has also been actively involved in other research directions aimed at detecting and preventing Mass-Marketing Fraud (MMF).
Prior to that, he held a post-doctoral position at Royal Holloway, University of London (RHUL), where he was part of the development team of CopperDroid, a tool to dynamically test malware that uses machine learning to model malicious behaviours. He also has solid expertise in building novel data learning algorithms for malware analysis. He obtained his PhD on smart malware analysis from Carlos III University of Madrid with distinction and received the Best National Student Academic Award---a competitive award given to the best thesis in the field of Engineering between 2014 and 2015, with about a 1% acceptance rate (about 100 Cum Laude theses were invited to compete for the single award).
Sergio Pastrana is a Visiting Professor at Universidad Carlos III de Madrid. He obtained his PhD in June 2014 from the same institution. His thesis analyzed the effectiveness of Intrusion Detection Systems and Networks in the presence of adversaries, as well as the problems arising from the use of classical Machine Learning and AI tools in adversarial environments. After completing his PhD, he spent two post-doctoral years working on a research project related to security in the Internet of Things (SPINY). His research focused on the design and evaluation of protocols and systems adapted to the IoT world, as well as attacks and defences designed for embedded devices.
From October 2016 to October 2018, he worked as a Research Associate (postdoctoral researcher) at the Cambridge Cybercrime Centre at the University of Cambridge. His research focused on the analysis of online communities centred on deviant and criminal topics. His first goal was to gather massive amounts of data from various forums where these communities interact. For that purpose, he developed a web crawler designed with ethical and technical issues at the forefront. The analysis of these data allows us to understand how new forms of cybercrime operate, and the dataset has been or is being used by at least 15 research institutions. His research has been published in prestigious international conferences such as WWW, IMC and RAID, as well as in high-impact international journals.
We put forward the notion of subvector commitments (SVC): an SVC allows one to open a committed vector at a set of positions, where the opening size is independent of the length of the committed vector and the number of positions to be opened. We propose two constructions under variants of the root assumption and the CDH assumption, respectively. We further generalize SVC to a notion called linear map commitments (LMC), which allows one to open a committed vector to its images under linear maps with a single short message, and propose a construction over pairing groups.
Equipped with these newly developed tools, we revisit the “CS proofs” paradigm [Micali, FOCS 1994], which turns any argument with a public-coin verifier into a non-interactive argument using the Fiat-Shamir transform in the random oracle model. We propose a compiler that turns any (linear, resp.) PCP into a non-interactive argument, using exclusively SVCs (LMCs, resp.). For approximately 80 bits of soundness, we highlight the following new implications:
There exists a succinct non-interactive argument of knowledge (SNARK) with public-coin setup with proofs of size 5360 bits, under the adaptive root assumption over class groups of imaginary quadratic orders against adversaries with runtime 2^{128}. At the time of writing, this is the shortest SNARK with public-coin setup.
There exists a non-interactive argument with private-coin setup, where proofs consist of 2 group elements and 3 field elements, in the generic bilinear group model.
Mr. Lai is a PhD candidate in the Friedrich-Alexander University Erlangen-Nuremberg advised by Prof. Dominique Schröder. He received his MPhil degree in Information Engineering in 2016, his BSc degree in Mathematics and BEng degree in Information Engineering in 2014, all from the Chinese University of Hong Kong. His recent research interests include succinct zero-knowledge proofs, privacy-preserving cryptocurrencies, searchable encryption, and password-based cryptography.
In 2018, clinics and hospitals were hit with numerous attacks leading to significant data breaches and interruptions in medical services. An attacker with access to medical records can do much more than hold the data for ransom or sell it on the black market.
In this talk, I will show how an attacker can use deep learning to add or remove evidence of medical conditions from volumetric (3D) medical scans using autonomous malware. An attacker may perform this act in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder. The attack is implemented using a 3D conditional GAN, and the exploitation framework (CT-GAN) is completely automated. Although the body is complex and 3D medical scans are very large, CT-GAN achieves realistic results and executes in milliseconds.
To evaluate the attack, we will focus on injecting and removing lung cancer in CT scans. We found that three expert radiologists and a state-of-the-art deep learning screening AI were highly susceptible to this attack. Moreover, I will show how this attack can be applied to other medical conditions such as brain tumors. To evaluate the threat, we will explore the attack surface of a modern radiology network and I will demonstrate one attack vector: a covert pen-test I performed on an active hospital to intercept and manipulate CT scans.
Finally, I will conclude by discussing the root causes of this threat, and countermeasures which can be implemented immediately to mitigate it.
Yisroel Mirsky is a postdoctoral fellow in the Institute for Information Security & Privacy at Georgia Tech (Georgia Institute of Technology). He received his PhD from Ben-Gurion University in 2018, where he is still affiliated as a security researcher. His main research interests include online anomaly detection, adversarial machine learning, isolated network security, and blockchain. Yisroel has published his research in some of the best cyber security conferences: USENIX, NDSS, Euro S&P, Black Hat, DEF CON, CSF, AISec, etc. His research has also been featured in many well-known media outlets (Popular Science, Scientific American, Wired, Wall Street Journal, Forbes, BBC…). One of Yisroel's recent publications exposed a vulnerability in the USA's 911 emergency services infrastructure. The research was shared with the US Department of Homeland Security and subsequently published in the Washington Post.
The advent of blockchain protocols brought to light a number of applications that could benefit from a large scale Byzantine resilient consensus system. At the same time a number of significant challenges were put forth in terms of scalability, energy efficiency, privacy, and the relevant threat model that such protocols may be proven secure for. In this talk I will give an overview of recent and ongoing research in the area of designing distributed ledgers based on blockchain protocols focusing on results such as the Ouroboros proof of stake blockchain protocols (Crypto'17, Eurocrypt'18, ACM-CCS'18, IEEE S&P'19) as well as other related constructions aiming to improve the interoperability and the incentive structure of distributed ledgers.
Aggelos Kiayias is chair in Cyber Security and Privacy and director of the Blockchain Technology Laboratory at the University of Edinburgh. He is also the Chief Scientist at blockchain technology company IOHK. His research interests are in computer security, information security, applied cryptography and foundations of cryptography with a particular emphasis in blockchain technologies and distributed systems, e-voting and secure multiparty protocols as well as privacy and identity management. His research has been funded by the Horizon 2020 programme (EU), the European Research Council (EU), the Engineering and Physical Sciences Research Council (UK), the Secretariat of Research and Technology (Greece), the National Science Foundation (USA), the Department of Homeland Security (USA), and the National Institute of Standards and Technology (USA). He has received an ERC Starting Grant, a Marie Curie fellowship, an NSF Career Award, and a Fulbright Fellowship. He holds a Ph.D. from the City University of New York and he is a graduate of the Mathematics department of the University of Athens. He has over 100 publications in journals and conference proceedings in the area. He has served as the program chair of the Cryptographers’ Track of the RSA conference in 2011 and the Financial Cryptography and Data Security conference in 2017, as well as the general chair of Eurocrypt 2013.
We introduce a formal quantitative notion of “bit security” for a general type of cryptographic games (capturing both decision and search problems), aimed at capturing the intuition that a cryptographic primitive with k-bit security is as hard to break as an ideal cryptographic function requiring a brute force attack on a k-bit key space. Our new definition matches the notion of bit security commonly used by cryptographers and cryptanalysts when studying search (e.g., key recovery) problems, where the use of the traditional definition is well established. However, it produces a quantitatively different metric in the case of decision (indistinguishability) problems, where the use of (a straightforward generalization of) the traditional definition is more problematic and leads to a number of paradoxical situations or mismatches between theoretical/provable security and practical/common sense intuition. Key to our new definition is to consider adversaries that may explicitly declare failure of the attack. We support and justify the new definition by proving a number of technical results, including tight reductions between several standard cryptographic problems, a new hybrid theorem that preserves bit security, and an application to the security analysis of indistinguishability primitives making use of (approximate) floating point numbers. This is the first result showing that (standard precision) 53-bit floating point numbers can be used to achieve 100-bit security in the context of cryptographic primitives with general indistinguishability-based security definitions. Previous results of this type applied only to search problems, or special types of decision problems.
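The brute-force baseline behind the k-bit intuition can be checked with a toy computation (an illustration of the ideal benchmark, not the paper's formal definition; the function and parameters are invented for this sketch): exhaustive search over a uniformly random k-bit key space succeeds after about 2^{k-1} guesses on average.

```python
import random

def expected_guesses(k_bits: int, trials: int = 2000, seed: int = 1) -> float:
    """Average number of guesses an exhaustive search needs
    to hit a uniformly random k-bit key."""
    rng = random.Random(seed)
    space = 2 ** k_bits
    total = 0
    for _ in range(trials):
        key = rng.randrange(space)
        # Scanning candidates 0, 1, ..., key takes key + 1 guesses.
        total += key + 1
    return total / trials

# For k = 10, the empirical average should be close to 2**9 = 512.
avg = expected_guesses(10)
print(round(avg))
```

This is exactly the regime where the traditional definition already works well; the paper's contribution is making the decision (indistinguishability) case match this search-problem intuition.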
This is joint work with Daniele Micciancio.
Michael studied computer science at TU Darmstadt and graduated with an MSc in 2012. He then started his PhD at UCSD under the supervision of Daniele Micciancio, with a focus on lattice algorithms, and graduated in 2017. Since then he has been a postdoc at IST Austria in the Cryptography group of Krzysztof Pietrzak.
The problem of making computing systems trustworthy is often framed in terms of ensuring that users can trust systems. In contrast, my research illustrates that trustworthy computing intrinsically relies upon social trust in the operation of systems, as much as in the use of systems. Drawing from cases including the Border Gateway Protocol, DNS, and the PGP key server pool, I will show how the trustworthiness of the Internet's infrastructural technologies relies upon interpersonal and institutional trust within the communities of the Internet's technical operations personnel. Through these cases, I will demonstrate how a sociotechnical perspective can aid in the analysis and development of trustworthy computing systems by foregrounding operational trust alongside user trust and technological design.
Ashwin J. Mathew is a lecturer in the Department of Digital Humanities at King's College, London. He is an ethnographer of Internet infrastructure, studying the technologies and technical communities involved in the operation of the global Internet. His research shows how the stability of global Internet infrastructure relies upon a social infrastructure of trust within the Internet's technical communities. In his work, he treats Internet infrastructure as culture, power, politics, and practice, just as much as technology.
He holds a Ph.D. from the UC Berkeley School of Information, and won the 2016 iConference Doctoral Dissertation Award for his research into network operator communities across North America and South Asia. His subsequent research into trust relationships and organisational problems in information security has been funded by the UC Berkeley Center for Long-Term Cybersecurity. Prior to his doctoral work, he spent a decade as a programmer and technical architect in companies such as Adobe Systems and Sun Microsystems.
Scholars argue that contemporary movements in the age of social media are leaderless and self-organised. However, the concept of connective leadership has been put forward to highlight the need for movements to have figures who connect entities together. This paper presents a qualitative study of grassroots human rights groups in risky contexts to address the question of how leadership is performed in information and communication technology-enabled activism. The paper reconceptualises connective leadership as decentred, emergent and collectively performed, and provides a broader and richer account of leaders’ roles, characteristics and challenges. These findings contribute to the critical literature on the role of ICTs in collective action.
Evronia Azer is an Assistant Professor at the Centre for Business in Society, Faculty of Business and Law, Coventry University. She has recently submitted her PhD thesis titled: “Information and Communication Technology (ICT)-Enabled Collective Action in Critical Context: A Study of Leadership, Visibility and Trust”, at Royal Holloway’s School of Business and Management. During her PhD, she received different awards for her research, including the Civil Society Scholar Award from Open Society Foundations in 2016. With a background in software engineering, Evronia is broadly interested in how technology can provide innovative and creative solutions for societies’ problems; ICT4D, and specifically interested in ICTs in collective action, and data privacy and surveillance.
Cryptographic operations are generally quite costly when performed purely in software. In order to improve the performance of a system, such operations can be offloaded to hardware accelerators. There are different techniques for hardware acceleration: hardware/software co-design, instruction set extensions for processors, hardware-only implementations, etc. In addition to hardware acceleration of cryptographic operations, the computational complexity of cryptography and cryptanalysis problems can also be decreased by dedicated hardware architectures, especially on reconfigurable hardware platforms. The talk will start with an overview of the hardware aspects of cryptography (and a bit of cryptanalysis). How and when do we use hardware acceleration in cryptography? What are the different design techniques? Following this, two new cryptographic hardware architectures, specifically designed to be very compact and to perform efficiently on reconfigurable platforms, will be presented. In the first design, the AES-GCM algorithm is implemented using mostly dedicated blocks (DSP and BRAM) of a Field Programmable Gate Array (FPGA); in the second design, the new Troika hash function is implemented almost entirely on the BRAM blocks of an FPGA for compactness.
Elif Bilge Kavun is a Lecturer in Cybersecurity at the Department of Computer Science, The University of Sheffield since January 2019, co-affiliated with the Security of Advanced Systems Research Group. Previously, she was a Digital Design Engineer for Crypto Cores at the Digital Security Solutions division, Infineon (Munich, Germany) and a research assistant at Horst Goertz Institute for IT Security, Ruhr University Bochum (Bochum, Germany). She completed a PhD in Embedded Security in 2015 at the Faculty of Electrical Engineering and Information Technology, Ruhr University Bochum (Bochum, Germany). Her research interests are in hardware security, design and implementation of cryptographic primitives, lightweight cryptography, secure processors, and side-channel attacks and countermeasures.
Feminist theorists of international relations (IR) have long argued that binaries of public/private reinforce the subsidiary status given to gendered insecurities, so that these security problems are ‘individualised’ and taken out of the public and political domain. This talk will outline the relevance of feminist critiques of security studies and argue that the emerging field of cybersecurity risks recreating these dynamics by omitting or dismissing gendered technologically-facilitated abuse such as ‘revenge porn’ and intimate partner violence (IPV). I will present a review of forty smart home security analysis papers to show the threat model of IPV is almost entirely absent in this literature. I conclude by outlining some suggestions for cybersecurity research and design, particularly my work on “abusability testing”, and reaffirming the importance of critical studies of information architecture.
Julia Slupska is a doctoral student at the Centre for Doctoral Training in Cybersecurity. Her research focuses on the ethical implications of conceptual models of cybersecurity. Currently, she is studying cybersecurity in the context of intimate partner violence and the use of simulations in political decision-making. Previously, she completed the MSc in Social Science of the Internet, focusing on the role of metaphors in international cybersecurity policy. Before joining the OII, Julia worked on an LSE Law project on comparative regional integration and coordinated a course on Economics in Foreign Policy for the Foreign and Commonwealth Office. She also works as a freelance photographer.
Vast amounts of information of all types are collected daily about people by governments, corporations, and individuals. The information is collected, for example, when users register for or use online applications, receive health-related services, use their mobile phones, query search engines, or perform common daily activities. As a result, there is an enormous quantity of privately owned records that describe individuals' finances, interests, activities, and demographics. These records often include sensitive data, and publishing them may violate the privacy of the users. The common approach to safeguarding user information, or data in general, is to limit access to the storage (usually a database) using an authentication and authorization protocol, so that only users with legitimate permissions can access the user data. However, even in these cases, some of the data is required to stay hidden, or accessible only to a specific subset of authorized users. Our talk focuses on possible malicious behavior by users with either partial or full access to queries over the data. We look at privacy attacks meant to gather hidden information and show methods that rely mainly on the underlying data structure, query types and behavior, and the data format of the database. We will show how to identify the potential weaknesses and attack vectors for various scenarios and data types, and offer defenses against them.
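One classic instance of the problem the talk addresses is a "differencing" attack, in which two individually legitimate aggregate queries combine to expose a single hidden record. The sketch below uses a made-up table and names purely for illustration; it is not drawn from the speaker's datasets or specific methods:

```python
# Hypothetical salary table; the endpoint below exposes only totals.
salaries = {"alice": 72000, "bob": 65000, "carol": 83000}

def sum_query(names):
    """Aggregate-only interface: authorized users see sums, not rows."""
    return sum(salaries[n] for n in names)

# Differencing attack: each query is permitted on its own, yet
# subtracting one result from the other recovers a hidden record.
total = sum_query(["alice", "bob", "carol"])
all_but_carol = sum_query(["alice", "bob"])
carols_salary = total - all_but_carol  # 83000, recovered exactly
```

Defenses against this family of attacks typically restrict overlapping query sets or add calibrated noise to aggregates, which is why the structure and behavior of queries (not just access control) matter for privacy.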
Joint CS/ISG seminar.
Michael Segal is a Professor of Communication Systems Engineering at Ben-Gurion University of the Negev, known for his work on ad-hoc and sensor networks. He has published over 160 scientific papers and serves as Editor-in-Chief of the Journal of Computer and System Sciences. He is a past head of the department (2005-2010) and has held visiting professorships at the Universities of Cambridge and Liverpool. Prof. Segal tackles fundamental optimization problems with applications in transportation, station placement, communication, facility location, graph theory, statistics, selection, geometric pattern matching, layout of VLSI circuits, and enumeration. His research has been funded by many academic and industrial organizations, including the Israeli Science Foundation, the US Army Research Office, Deutsche Telekom, IBM, France Telecom, Intel, the Israeli Innovation Agency, General Motors, and many others.
Many voter-verifiable, coercion-resistant schemes have been proposed, but even the most carefully designed voting systems necessarily leak information via the announced result. In corner cases, this may be problematic: for example, if all the votes go to one candidate, then all vote privacy evaporates. The mere possibility of candidates getting no or few votes can have implications for security in practice: if a coercer demands that a voter cast a vote for such an unpopular candidate, the voter may feel obliged to obey, even if she is confident that the voting system satisfies the standard coercion-resistance definitions. With complex ballots, there may also be a danger of "Italian"-style (aka "signature") attacks: the coercer demands that the voter cast a ballot with a very specific, identifying pattern of votes.
Here we propose an approach to tallying end-to-end verifiable schemes that avoids revealing all the votes but still achieves whatever level of confidence in the announced result is desired. A coerced voter can then claim that the required vote must be among those that remained shrouded. Our approach is based on the well-established notion of Risk-Limiting Audits (RLAs), here applied to the tally rather than to the audit. We show that this approach counters the coercion threats arising from extreme tallies and "Italian" attacks.
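The statistical intuition behind sampled tallying can be shown with a toy simulation. The numbers below are entirely hypothetical and this is not the paper's actual risk-limiting machinery, only an illustration of how revealing a random subset of ballots can still identify the winner with high confidence while the rest stay shrouded:

```python
import random

# Hypothetical two-candidate election: A beats B 55% to 45%.
random.seed(42)
votes = ["A"] * 5500 + ["B"] * 4500

# Decrypt and reveal only a random sample; the remaining 8000
# ballots stay shrouded, so a coerced voter can plausibly claim
# the demanded vote is among them.
sample = random.sample(votes, 2000)
share_a = sample.count("A") / len(sample)

# With a 10-point margin, a 2000-ballot sample misidentifies the
# winner only with negligible probability; risk-limiting audit
# theory makes this confidence level precise and tunable.
```

Real risk-limiting audits replace this informal argument with a rigorous bound on the probability of certifying a wrong outcome, and stop sampling once the desired risk limit is met.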
The approach can be applied to most end-to-end verifiable schemes, but for the purposes of illustration I will outline the Selene scheme, which provides a particularly transparent form of voter-verification. This also allows me to describe an extension of the idea to Risk-Limiting Verification (RLV), in which not all vote trackers are revealed, thereby enhancing the coercion-mitigation properties of Selene.
Peter Ryan has been full Professor of Applied Security at the University of Luxembourg since February 2009. Since joining the University of Luxembourg he has grown the APSIA (Applied Security and Information Assurance) group, which is now more than 25 strong. He has around 25 years of experience in cryptography, information assurance and formal verification. He pioneered the application of process calculi to the modelling and analysis of secure systems, in particular presenting the first process-algebraic characterization of non-interference taking account of non-determinism (CSFW 1990). While at the Defence Research Agency, he initiated and led the "Modelling and Analysis of Security Protocols" project, which pioneered the application of process algebra (CSP) and model-checking tools (FDR) to the analysis of security protocols.
He has published extensively on cryptography, cryptographic protocols, security policies, mathematical models of computer security and, most recently, voter-verifiable election systems. He is the creator of the (polling station) Prêt à Voter and, with V. Teague, the (internet) Pretty Good Democracy verifiable voting schemes. He was also co-designer of the vVote system, based on Prêt à Voter, which was used successfully in Victoria State in November 2015. Most recently he developed the voter-friendly E2E verifiable scheme Selene. With Feng Hao, he also developed the OpenVote boardroom voting scheme and the J-PAKE password-based authenticated key establishment protocol.
Prior to taking up the Chair in Luxembourg, he held a Chair at the University of Newcastle. Before that he worked at the Government Communications Headquarters (GCHQ), the Defence Research Agency (DRA) in Malvern, the Stanford Research Institute (SRI) in Cambridge, UK, and the Software Engineering Institute, CMU, Pittsburgh.
He was awarded a PhD in mathematical physics from the University of London in 1982. Peter Ryan sits on or has sat on the program committees of numerous prestigious security conferences, notably IEEE Security and Privacy, the IEEE Computer Security Foundations Workshop/Symposium (CSF), the European Symposium on Research in Computer Security (ESORICS), and the Workshop on Issues in Security (WITS). He is General Chair of ESORICS 2019. He was (co-)chair of WITS'04, ESORICS'04, the Frontiers of Electronic Elections (FEE) 2005 workshop, the Workshop on Trustworthy Elections (WOTE) 2007, VoteId 2009, and ESORICS 2015. In 2016 he founded the Verifiable Voting Workshops, held in association with Financial Crypto. From 1999 to 2007 he was President of the ESORICS Steering Committee, and in 2013 he was awarded the ESORICS Outstanding Service Award.
He is a Visiting Professor at Surrey University and the ENS Paris.