Seminar
FALL 2025 CALENDAR

Seminars are held at E2-506 every Tuesday, 15:30-16:30 PT.

Date | Speaker | Title and Abstract

11/19/2024 | Tim Alberdingk Thijm (Formal Verification Engineer @ Apple Inc.) | Kirigami, the verifiable art of network cutting
Networks continue to experience costly outages due to misconfigured control planes. Avoiding misconfigurations is challenging: control planes often use notoriously inscrutable distributed routing protocols. A potential solution is control plane verification; however, most control plane verification tools analyze the entire network at once and scale poorly as networks grow in size and complexity. To address these limitations, we explore modular control plane verification, where the user divides their network into fragments to verify independently and annotates each fragment with an interface. These interfaces summarize each fragment's routing announcements (a.k.a. routes). We prove that if each fragment guarantees its interface, assuming the interfaces of its neighbors, then properties of the fragments' routes also hold of the monolithic network's routes.

We present Kirigami, a formal model of network fragments where users specify interfaces as cuts dividing the network into fragments, annotating fragment boundaries with nodes' stable routes. We define a Satisfiability Modulo Theories (SMT)-based procedure for checking interfaces, implemented as an extension to the NV (monolithic) verification tool. Kirigami's SMT checks are up to five orders of magnitude faster than NV's for a range of industrial topologies with synthesized network policies. Unfortunately, Kirigami's interfaces are not change-resilient if network updates change fragments' stable routes. However, if we naïvely extend Kirigami's interfaces to use over-approximate sets of routes instead of exact routes, the resulting verification procedure allows unsound circular reasoning. To prevent circularity, we introduce a new model, Timepiece, where interfaces are defined using temporal invariants inspired by temporal logic. We implement a new SMT-based checking procedure, which scales to verify networks with thousands of nodes in minutes (compared to hours for a monolithic baseline) and allows users to write change-resilient interfaces.

11/12/2024 | Li Xue (PhD candidate at UCSC) | Intelligent and Dynamic Cross-layer Device fingerprinting
Advancements in AI and machine learning, coupled with the growing number of wireless devices, have led to the emergence of distributed services and applications. User applications and operations increasingly rely on advanced communication technologies, with 5G, 6G, and future networks emerging as critical infrastructure for secure, real-time connectivity. However, future networks present complex challenges, including ensuring personalized services, addressing security and privacy concerns, and maintaining quality of service. Traditional methods of device/flow identification are insufficient against sophisticated cyber threats, necessitating a new approach for secure next-generation networks. This research aims to enhance wireless network security by providing an adaptive method for device identification and cyberattack detection. It will demonstrate how to secure devices and services in dynamic network environments and deliver a low-cost solution for resource-constrained environments.

11/05/2024 | Pratyush Mishra (Assistant Professor at UPenn) | Hekaton - Horizontally-Scalable zkSNARKs via Proof Aggregation
Zero-knowledge Succinct Non-interactive ARguments of Knowledge (zkSNARKs) allow a prover to convince a verifier of the correct execution of a large computation in a private and easily verifiable manner. These properties make zkSNARKs a powerful tool for adding accountability, scalability, and privacy to numerous systems such as blockchains and verifiable key directories. Unfortunately, the time and space required by existing zkSNARKs scale poorly with the size of the computation, and so they cannot handle real-world instances of the foregoing applications. In this talk, I will describe Hekaton, a zkSNARK that can efficiently scale to large computations. Hekaton is constructed via a new "distribute-and-aggregate" framework. We implement Hekaton and evaluate its performance on a compute cluster. Our experiments show that Hekaton achieves strong horizontal scalability and is able to prove large computations of up to 2^35 gates in under an hour. We apply Hekaton to two applications of real-world interest: proofs of batched insertion for a verifiable key directory and proving correctness of RAM computations. In both cases, Hekaton is able to scale to handle realistic workloads with better efficiency than prior work.

10/29/2024 | Diego Ortiz and Luis Burbano (Ph.D. Students at UCSC) | Toward Attack-Resilient Autonomous Ground and Air Vehicles
Intelligent mechanisms implemented in autonomous vehicles allow them to operate with diminished human intervention. Driving assist and pre-collision alerts reduce traffic jams and accidents for ground vehicles. Meanwhile, autonomous controllers enable aerial vehicles such as drones to operate with minimal human supervision. However, verifying their correct functionality is difficult due to complex environmental interactions. This problem is exacerbated in adversarial conditions, where an attacker can manipulate the environment surrounding autonomous vehicles to exploit any vulnerabilities. In the first part of this talk, we present a scenario-based framework with a formal method to identify the impact of malicious drivers interacting with autonomous ground vehicles to identify vulnerabilities in these systems preemptively. To defend autonomous systems from these attackers, we need to develop recovery strategies that protect autonomous vehicles from harm due to an attacker's actions. Thus, in the second part, we explore using LLMs as common sense agents that create a recovery plan for the system after it recognizes a potential attack or anomaly.

10/15/2024 | Juan Lozano (Ph.D. Candidate at UCSC) | ICSNet - A Hybrid-Interaction Honeynet for Industrial Control Systems
Industrial Control Systems (ICS) manage several critical infrastructures such as the electrical grid and water treatment plants. ICS have been the target of cyberattacks designed to disrupt the operation of critical infrastructure, risking the safety of the system. Honeypots and honeynets are used to gather intelligence on novel threats against ICS and to help us prepare for future attacks. We introduce ICSNet, a hybrid-interaction honeynet that improves on the state of the art of ICS honeynets by developing a new modular architecture that integrates high-fidelity physical process simulations, more industrial protocols, and high-fidelity device fingerprints. We show that ICSNet can successfully represent different ICS environments while interacting with the industrial assets in the physical simulation, giving attackers a convincing view of an ICS.

10/08/2024 | Sebastián Castro (Ph.D. Candidate at UCSC) | Ghost in the SAM - Stealthy, Robust, and Privileged Persistence through Invisible Accounts
Persistence attacks allow adversaries to maintain access to compromised systems. Despite their wide use by most threat campaigns, they remain understudied in the academic literature. In this paper, we study the concepts and requirements for local invisible accounts and then show new OS-level attacks by implementing these ideas on Windows. In particular, we propose three general design objectives for successful persistence attack vectors. Then, we show how to implement these objectives by bypassing functionality provided by Windows to manage identities. To do this, we first reverse-engineer parts of Windows's authentication and authorization process and propose two attacks: RID Hijacking and Suborner. We show that these attacks affect all versions of Windows since XP and Server 2003, and in combination, they can create stealthy, robust, and privileged accounts.

SPRING 2024 CALENDAR

Seminars are held at E2-506 every Thursday, 13:30-14:30 PT.

Date | Speaker | Title and Abstract

06/06/2024 | Abhinav Aggarwal (AWS Marketing Technologies) | Reconstructing Test Labels from Noisy Loss Scores
Machine learning classifiers rely on loss functions for performance evaluation, often on a private (hidden) dataset. In a recent line of research, label inference was introduced as the problem of reconstructing the ground truth labels of this private dataset from just the (possibly perturbed) cross-entropy loss function values evaluated at chosen prediction vectors (without any other access to the hidden dataset). In this talk, we discuss the label inference problem statement, and formally study the necessary and sufficient conditions under which label inference is possible from any (noisy) loss function value. Using tools from analytic number theory, we show that for a broad class of commonly used loss functions, including general Bregman divergence-based losses and multiclass cross-entropy with common activation functions like sigmoid and softmax, it is possible to design label inference attacks that succeed even for arbitrary noise levels and using only a single query from the adversary. We formally study the computational complexity of label inference and show that while, in general, designing adversarial prediction vectors for these attacks is co-NP-hard, once we have these vectors, the attacks can also be carried out through a lightweight augmentation to any neural network model, making them look benign and hard to detect. Our observations provide a deeper understanding of the vulnerabilities inherent in modern machine learning and could be used for designing future trustworthy ML.
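To make the threat model concrete, here is a minimal, hypothetical sketch of single-query label inference in the noise-free binary cross-entropy setting. It is not the analytic-number-theory construction from the talk: the predictions p_i = 1/(1 + e^(2^i)) are chosen only so that each hidden label contributes a distinct power of two to the aggregate loss, which the adversary can then decode bit by bit.

```python
import math

def bce_sum(labels, preds):
    """Sum of binary cross-entropy losses that the evaluator would report."""
    return sum(-math.log(p) if y == 1 else -math.log(1.0 - p)
               for y, p in zip(labels, preds))

def adversarial_predictions(n):
    """Choose p_i = 1/(1 + e^(2^i)) so that example i's label-1 and label-0
    loss contributions differ by exactly 2^i."""
    return [1.0 / (1.0 + math.exp(2.0 ** i)) for i in range(n)]

def infer_labels(total_loss, preds):
    """Recover every hidden label from a single aggregate loss value."""
    baseline = sum(-math.log(1.0 - p) for p in preds)  # loss if all labels were 0
    encoded = round(total_loss - baseline)             # sum of 2^i over label-1 examples
    return [(encoded >> i) & 1 for i in range(len(preds))]

hidden_labels = [1, 0, 1, 1, 0, 0, 1, 0]
preds = adversarial_predictions(len(hidden_labels))
observed_loss = bce_sum(hidden_labels, preds)          # the only value the adversary sees
print(infer_labels(observed_loss, preds) == hidden_labels)  # True
```

The talk's results go further, handling noisy loss values, multiclass losses, and general Bregman divergences, none of which this toy attempts.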

05/30/2024 | Ioannis Liagouris (Assistant Professor at BU) | Building systems for secure multi-party computation
Research in the crypto community has shown that any function can be securely computed by a group of participants in a distributed fashion such that each party learns the function’s output and nothing more. This fascinating idea started as a theoretical curiosity in the 1980s but since then it has evolved into a powerful tool to realize use cases like the gender wage gap study in the Boston area. In this talk, I will present our recent work on systems for cryptographically secure multi-party computation (MPC). I will focus on Secrecy (NSDI'23) and TVA (Security'23), two novel frameworks that support relational and time series analytics under MPC. Secrecy and TVA allow multiple data holders to contribute their data towards a joint analysis in the cloud, while keeping the data siloed even from the cloud providers. At the same time, they enable cloud providers to offer their services to clients who would have otherwise refused to perform a computation altogether or insisted that it be done on private infrastructure. Our work takes a novel approach to optimizing MPC execution by co-designing multiple layers of the system stack and exposing the MPC costs to the execution engine. I will conclude with a broader vision about a new generation of systems for secure analytics with provable and configurable guarantees.

05/09/2024 | Aeva Black (Section Chief at CISA) | Insecure by Design - profit-driven insecurity in the history of open source
None.

05/01/2024 | Danny Yuxing Huang (Assistant Professor at NYU) | Smart Home Security and Privacy
Homes are getting smarter with IoT devices, but security/privacy issues still prevail. Users and non-users (“by-standers”) face various challenges, ranging from data loss, outdated firmware in the supply chain, to intimate partner surveillance. Oftentimes, these problems happen in private settings beyond the reach of many researchers. Furthermore, there are nuances that are difficult to investigate in the lab and/or through interviews/surveys alone.
We want to facilitate real-world studies of smart home security/privacy in situ. We want to help non-technical users and non-users navigate these security/privacy risks.
To this end, we are developing a global smart home testbed with actual smart home devices and networks. In the past four years, our prototype has 6K+ voluntary users worldwide who shared with us the network traffic of 60K+ Internet-connected devices. So far, we have built a small community of security/privacy, networking, and HCI researchers who are using our infrastructure for empirical research — both observational and interventional. We have also attracted journalists from the New York Times and Washington Post to use our data for investigative journalism in privacy and technology. Applications of our testbed could even go beyond security/privacy, e.g., to healthcare.
This event is shared with CSE Colloquium.

04/25/2024 | Alessandro Palumbo (Associate Professor at CentraleSupélec) | Feature Analysis of Threats in Microprocessors - Attack Detection & Mitigation Techniques
Software-exploitable Hardware Trojan Horses can be inserted into Microprocessors allowing attackers to run their own software or to gain unauthorized privileges. On the other hand, observing some features of the Microprocessor (apparently unrelated to its program run), a malicious user may gain information to steal secrets and private data. As a consequence, the devices that are built in safe foundries could also be attacked. Implementing Hardware Security Modules that look at the runtime Microprocessor behavior is a new approach to detecting whether attacks are running. Why do we need hardware modules to protect against attacks? Aren’t software solutions enough? It’s extremely challenging for software to protect from vulnerabilities close to hardware; Hardware Security Modules operate at the circuit level. Consequently, they are well-suited to detect and defend against low-level attacks.

04/18/2024 | Yu Zheng (Ph.D. candidate at CUHK) | Communication-Efficient MPC Protocols for Secure Machine Learning
Training often involves multiple data sources. For data privacy, secure multiparty computation (MPC) techniques can be applied, but MPC becomes a bottleneck since it is highly dependent on the bandwidth and latency of the underlying communication network, especially when updating the model through backward propagation. Moreover, nonlinear functions over a continuous domain and graph topology in the training process are more challenging to compute securely and efficiently. This talk presents a suite of communication-efficient secure protocols for these training operations, bringing the following contributions.
1. We exploit mathematical relations rarely explored in cryptography to simplify computation in conventional neural networks. We approximate Sigmoid by a Fourier series and build a secret-sharing-based protocol from trigonometric identities (a plaintext sketch of this approximation follows after this list). For Softmax, we approximate it as a whole by solving the initial value problem for ordinary differential equations via the Euler formula.
2. We build a suite of protocols for training graph neural networks operating on structural training data. Specifically, we divide the matrix into structural and numerical parts and propose a permutation protocol to transform the sparse matrix multiplication into a series of compact ones.
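As a rough illustration of the Fourier-series idea in item 1, the sketch below fits a truncated sine series to the sigmoid on a fixed interval in plain NumPy. The interval, the number of terms, and the numerical integration are illustrative choices, and everything here runs on plaintext; the talk's protocols evaluate such approximations over secret shares, which is not shown.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fourier_sigmoid(K=40, L=20.0, grid=40001):
    """Approximate sigmoid(x) as 1/2 + sum_{k=1..K} b_k sin(k*pi*x/L) on [-L, L]."""
    x = np.linspace(-L, L, grid)
    dx = x[1] - x[0]
    g = sigmoid(x) - 0.5                      # odd part of the sigmoid
    # Sine-series coefficients b_k = (1/L) * integral_{-L}^{L} g(x) sin(k*pi*x/L) dx
    b = np.array([np.sum(g * np.sin(k * np.pi * x / L)) * dx / L
                  for k in range(1, K + 1)])

    def approx(t):
        t = np.asarray(t, dtype=float)
        return 0.5 + np.sin(np.outer(t, np.arange(1, K + 1)) * np.pi / L) @ b

    return approx

approx = fourier_sigmoid()
t = np.linspace(-8.0, 8.0, 17)
print(np.max(np.abs(approx(t) - sigmoid(t))))   # maximum error of the truncated series on [-8, 8]
```

A sum of sines of the input is attractive here because sines of sums expand through trigonometric identities, which is the property the secret-sharing-based protocol mentioned in item 1 builds on.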

04/03/2024 | Amrita Roy Chowdhury (CRA/CCC CIFellow at UCSD) | Data Privacy in the Decentralized Era
Data is today generated on smart devices at the edge, shaping a decentralized data ecosystem comprising multiple data owners (clients) and a service provider (server). Clients interact with the server with their personal data for specific services, while the server performs analysis on the joint dataset. However, the sensitive nature of the involved data, coupled with inherent misalignment of incentives between clients and the server, breeds mutual distrust. Consequently, a key question arises: How to facilitate private data analytics within a decentralized data ecosystem, comprising multiple distrusting parties?
My research shows a way forward by designing systems that offer strong and provable privacy guarantees while preserving complete data functionality. I accomplish this by systematically exploring the synergy between cryptography and differential privacy, exposing their rich interconnections in both theory and practice. In this talk, I will focus on two systems, CryptE and EIFFeL, which enable privacy-preserving query analytics and machine learning, respectively.

04/02/2024 | Sasy Sajin (Ph.D. Candidate at UW) | Oblivious Algorithms for Privacy-Preserving Computations
People around the world use data-driven online services every day. However, data center attacks and data breaches have become a regular and rising phenomenon. How, then, can one reap the benefits of data-driven statistical insights without compromising the privacy of individuals' data?
In this talk, I will first present an overview of three disparate approaches towards privacy-preserving computations today, namely homomorphic cryptography, distributed trust, and secure hardware. These ostensibly unconnected approaches have one unifying element: oblivious algorithms. I will discuss the relevance and pervasiveness of oblivious algorithms in all the different models for privacy-preserving computations. Finally, I will highlight the performance and security challenges in deploying such privacy-preserving solutions, and present three of my works that mitigate these obstacles through the design of novel efficient oblivious algorithms.

WINTER 2024 CALENDAR

Seminars are held at E2-180 every Thursday, 13:30-14:30 PT.

Date | Speaker | Title and Abstract

03/18/2024 | Teodora Baluta (Ph.D. Candidate at NUS) | New Algorithmic Tools for Rigorous Machine Learning Security Analysis
Machine learning security is an emerging area with many open questions lacking systematic analysis. In this talk, I will present three new algorithmic tools to address this gap: (1) algebraic proofs, (2) causal reasoning, and (3) sound statistical verification. Algebraic proofs provide the first conceptual mechanism to resolve intellectual property disputes over training data. I show that stochastic gradient descent, the de-facto training procedure for modern neural networks, is a collision-resistant computation under precise definitions. These results open up connections to lattices, which are mathematical tools presently used in cryptography. I will also briefly mention my efforts to analyze causes of empirical privacy attacks and defenses using causal models and to devise statistical verification procedures with 'probably approximately correct' (PAC)-style soundness guarantees.

03/14/2024 | Ramakrishnan Sundara Raman (Ph.D. Candidate at U-M) | Global Investigation of Network Connection Tampering
As the Internet's user base and criticality of online services continue to expand daily, powerful adversaries like Internet censors are increasingly monitoring and restricting Internet traffic. These adversaries, powered by advanced network technology, perform large-scale connection tampering attacks seeking to prevent users from accessing specific online content, compromising Internet availability and integrity. In recent years, we have witnessed recurring censorship events affecting Internet users globally, with far-reaching social, financial, and psychological consequences, making them important to study. However, characterizing tampering attacks at the global scale is an extremely challenging problem, given intentionally opaque practices by adversaries, varying tampering mechanisms and policies across networks, evolving environments, sparse ground truth, and safety risks in collecting data.

03/12/2024 | Julia Len (Ph.D. Candidate at CU) | Designing Secure-by-Default Cryptography for Computer Systems
Designing cryptography that protects against all the threats seen in deployment can be surprisingly hard to do. This frequently translates into mitigations which offload important security decisions onto practitioners or even end users. The end result is subtle vulnerabilities in our most important cryptographic protocols. In this talk, I will present an overview of my work in two major areas on designing cryptography for real-world applications that targets security by default: (1) symmetric encryption and (2) key transparency for end-to-end encrypted systems. I will describe my approach of understanding real-world threats to provide robust, principled defenses with strong assurance against these threats in practice. My work includes introducing a new class of attacks exploiting symmetric encryption in applications, developing new theory to act as guidance in building better schemes, and designing practical cryptographic protocols. This work has seen impact through updates in popular encryption tools and IETF draft standards and through the development of protocols under consideration for deployment.

03/07/2024 | Leilani H. Gilpin (Assistant Professor at UCSC) | Explaining and Generating Errors for Safety-Critical Systems
Autonomous systems are prone to errors and failures without knowing why. In critical domains like driving, these autonomous counterparts must be able to recount their actions for safety, liability, and trust. An explanation (a model-dependent reason or justification for the decision of the autonomous agent being assessed) is a key component for post-mortem failure analysis, but also for pre-deployment verification. I will present a framework that uses a model and common sense knowledge to detect and explain unreasonable vehicle scenarios, even if it has not seen that error before. In the second part of the talk, I will motivate the use of explanations as a testing framework for autonomous systems. I will conclude by discussing new challenges at the intersection of XAI and autonomy towards autonomous vehicle systems that are explainable by design.

02/29/2024 | Themis Melissaris (Snowflake) | Elastic Cloud Services - Scaling Snowflake's Control Plane
This talk will cover the design and operation of Snowflake's Elastic Cloud Services (ECS) layer that manages cloud resources at global scale to meet the needs of the Snowflake Data Cloud. It provides the control plane to enable elasticity, availability, fault tolerance and efficient execution of customer workloads. ECS runs on multiple cloud service providers and provides capabilities such as cluster management, safe code rollout and rollback, management of pre-started pools of running VMs, horizontal and vertical autoscaling, throttling of incoming requests, VM placement, load-balancing across availability zones and cross-cloud and cross-region replication. We showcase the effect of these capabilities through empirical results on systems that execute millions of queries over petabytes of data on a daily basis.

02/22/2024 | Yuanchao Xu (Assistant Professor at UCSC) | Data Enclave - A Data-Centric Trusted Execution Environment
Trusted Execution Environments (TEEs) protect sensitive applications in the cloud with minimal trust in the cloud provider. Existing TEEs with integrity protection, however, lack support for data management primitives, making data sharing between enclaves either insecure or cumbersome. This paper proposes a new data abstraction for TEEs, the data enclave. As a data-centric abstraction, a data enclave is decoupled from an enclave's existence, is equipped with flexible secure permission controls, and is cryptographically isolated. It eliminates the hurdles for enclaves to cooperate efficiently and, at the same time, enables dynamic shrinking of the height of the integrity tree for performance. This paper presents this new abstraction, its properties, and the architectural support. Experiments on synthetic benchmarks and three real-world applications show that data enclaves can significantly improve the efficiency of enclaves and inter-enclave cooperation while enhancing security protection.

02/15/2024 | Ioannis Angelakopoulos (Ph.D. Candidate at BU) | FirmSolo - Enabling dynamic analysis of binary Linux-based IoT kernel modules
The Linux-based firmware running on Internet of Things (IoT) devices is complex and consists of user-level programs as well as kernel-level code. Both components have been shown to have serious security vulnerabilities, and the risk linked to kernel vulnerabilities is particularly high, as these can lead to full system compromise. However, previous work only focuses on the user space component of embedded firmware. In this talk, I present FirmSolo, a system designed to incorporate the kernel space into firmware analysis. FirmSolo configures and builds custom kernels that can load IoT binary kernel modules within an emulated environment and expose these kernel modules to dynamic analysis. I evaluated FirmSolo on a dataset of 1,470 firmware images containing 56,688 kernel modules, where it loaded 64% of the kernel modules. To demonstrate FirmSolo's utility in downstream analysis, I integrated it with two example dynamic analysis systems, the TriforceAFL kernel fuzzer and Firmadyne. The TriforceAFL experiments on a subset of 75 kernel modules revealed 19 previously unknown bugs in 11 proprietary kernel modules. With Firmadyne, I confirmed the presence of these bugs in 84 firmware images and also confirmed the previously known memory corruption vulnerability of the closed-source Kcodes NetUSB kernel module across 15 firmware images.

02/01/2024 | Priyanka Mondal (Ph.D. Candidate at UCSC) | I/O-Efficient Dynamic Searchable Encryption meets Forward & Backward Privacy
We focus on the problem of I/O-efficient Dynamic Searchable Encryption (DSE), i.e., schemes that perform well when executed with the dataset on disk. Towards this direction, for HDDs, schemes have been proposed with good locality (i.e., a low number of performed non-continuous memory reads) and read efficiency (the number of additional memory locations read per result item). Similarly, for SSDs, schemes with good page efficiency (reading as few pages as possible) have been proposed. However, the vast majority of these works are limited to the static case (i.e., no dataset modifications), and the only dynamic scheme fails to achieve forward and backward privacy, the de facto leakage standard in the literature. In fact, prior related works (Bost [CCS'16] and Minaud and Reichle [CRYPTO'22]) claim that I/O-efficiency and forward privacy are two irreconcilable notions. Contrary to that, in this work, we "reconcile" for the first time forward and backward privacy with I/O-efficiency for DSE, both for HDDs and SSDs. We propose two families of DSE constructions which also improve the state of the art (non-I/O-efficient) both asymptotically and experimentally. Indeed, some of our schemes improve the in-memory performance of prior works. At a technical level, we revisit and enhance the lazy de-amortization DSE construction by Demertzis et al. [NDSS'20], transforming it into an I/O-preserving one. Importantly, we introduce an oblivious-merge protocol that merges two equal-sized databases without revealing any information, effectively replacing the costly oblivious data structures with more lightweight computations.

FALL 2023 CALENDAR

Seminars are held at E2-180 every Thursday, 13:30-14:30 PT.

Date | Speaker | Title and Abstract

11/09/2023 | Kyle Fredrickson (Ph.D. Candidate at UCSC) | Deferred Broadcast - A New Family of Anonymity Systems and the Case for Asynchronicity
Existing anonymity systems suffer from three primary problems: they are vulnerable to statistical attacks in the long term, they impose global bandwidth restrictions, and they do not support multiple channels efficiently without message loss. While many systems have solved two of these problems, none have solved all three simultaneously. We propose a new family of anonymity systems that can solve these problems simultaneously while offering greater user flexibility.

10/26/2023 | Hampei Sasahara (Assistant Professor at Tokyo Institute of Technology) | Adversarial Decision-making in Dynamical Control Systems
While industrial control systems (ICSs) historically operated in isolation from the internet, recent technological development has driven a convergence between ICSs and internet-based environments, such as cloud computing, breaking that isolation. This shift exposes ICSs to the same attack vectors prevalent in cyberattacks. This talk showcases the speaker's recent studies on control system security. Our exploration begins by presenting a design method for model-based attack detectors that can maintain their detection capability even when subsystems are disconnected for attack containment. Subsequently, we delve into a fundamental principle of model-based defense techniques that rely on Bayesian inference. The final topic revolves around adversarial attacks taking the form of slight perturbations on data, which have been investigated mostly in image classification using neural networks. Numerical experiments demonstrate a similar vulnerability in data-driven control, and the results provide insights for enhancing robustness.

10/19/2023 | Alvaro Cardenas (Associate Professor at UCSC) | A Tale of Two Industroyers
In less than a decade, Ukraine has suffered from three cyber attacks attempting to cause electrical outages. On December 23, 2015, in the middle of freezing weather, Ukraine suffered the first blackout caused by cyber attacks. In this first incident, attackers gained remote access to the industrial networks of power companies, and a remote adversary operated the human-machine interface of operators, opening circuit breakers manually. A year later, on December 17, 2016, a fifth of Ukraine's capital Kyiv experienced another blackout. This time, the target was a transmission utility, and unlike the previous year, when remote human attackers opened the circuit breakers, the attack in 2016 was launched automatically by the first known example of industrial malware targeting the power grid: Industroyer. Finally, on April 8, 2022, in the first months of the Russian invasion of Ukraine, operators discovered another malware tailored to attack circuit breakers automatically. This new piece of malware was called Industroyer 2, and it represented yet another attempt to target Ukraine's power grid. In this talk we will summarize our work in analyzing the malware to understand how it targeted industrial networks, as well as consider what damage this type of malware may cause in the future.

SPRING 2023 CALENDAR

Seminars are held at E2-180 every Thursday, 12:00-13:30 PT.

Date | Speaker | Title and Abstract

06/08/2023 | Salvador Mendoza (Director of R&D at Metabase Q) | Payment Systems: The Art of Proper Penetration Testing
One of the most pressing issues in today's payment industry is the lack of rigorous security testing mechanisms for EMV, NFC, and tokenization protocols. Despite their widespread utilization, it is alarming that only a fraction of red teams possess a comprehensive understanding of how to effectively test the security of these protocols. We will talk about methodologies, procedures, and how to design tools to test different environments. Learning and developing more sophisticated testing strategies to secure daily transactions is crucial for the payment industry. The future of these technologies depends on our ability to adapt and strengthen these security measures.

06/01/2023 | Xiaoxue Zhang (Ph.D. Candidate at UCSC) | A Cross-chain Payment Channel Network
Blockchain interoperability and throughput scalability are two crucial problems that limit the wide adoption of blockchain applications. Payment channel networks (PCNs) provide a promising solution to the inherent scalability problem of blockchain technologies, allowing off-chain payments between senders and receivers via multi-hop payment paths. In this work, we present a cross-chain PCN, called XHub, that extends PCNs to support multi-hop paths across multiple blockchains and addresses both interoperability and throughput scalability. XHub achieves service availability, transaction atomicity, and auditability. Users who correctly follow the protocols will succeed in making payments or profit from providing the services. In addition, trustworthy information about hubs is managed in a decentralized manner and is available to all users. This work is an important step towards the big picture of a decentralized transaction system that connects a wide scope of users across different blockchains.

05/11/2023 | Mihai Christodorescu (Research Scientist at Google) | Robust Learning against Relational Adversaries
Test-time adversarial attacks have posed serious challenges to the robustness of machine-learning models, and in many settings the adversarial perturbation need not be bounded by small ℓp-norms. Motivated by attacks in program analysis and security tasks, we investigate relational adversaries, a broad class of attackers who create adversarial examples in a reflexive-transitive closure of a logical relation. We analyze the conditions for robustness against relational adversaries and investigate different levels of robustness-accuracy trade-off due to various patterns in a relation.
Inspired by the insights, we propose normalize-and-predict, a learning framework that leverages input normalization to achieve provable robustness. The framework solves the pain points of adversarial training against relational adversaries and can be combined with adversarial training for the benefits of both approaches. Guided by our theoretical findings, we apply our framework to source code authorship attribution and malware detection. Results of both tasks show our learning framework significantly improves the robustness of models against relational adversaries. In the process, it outperforms adversarial training, the most noteworthy defense mechanism, by a wide margin.
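As a toy sketch of the normalize-and-predict idea (not the framework's implementation), consider authorship attribution where the relation is "programs that differ only in identifier names and whitespace." If every input is first mapped to a canonical normal form, the classifier's output is invariant under the attacker's transformations by construction. The reserved-word list and the stand-in classifier below are illustrative assumptions.

```python
import re

# Toy normalize-and-predict: canonicalize inputs before classification so that
# any attacker transformation inside the relation (identifier renaming,
# whitespace edits) maps to the same normal form.

RESERVED = {"def", "return", "if", "else", "for", "while", "in", "import",
            "from", "class", "print", "range", "len"}   # illustrative, incomplete list

def normalize(source: str) -> str:
    """Collapse whitespace and rename identifiers to VAR0, VAR1, ... in order
    of first appearance, so related programs share one normal form."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", source)
    mapping, out = {}, []
    for tok in tokens:
        if re.match(r"[A-Za-z_]\w*$", tok) and tok not in RESERVED:
            mapping.setdefault(tok, f"VAR{len(mapping)}")
            tok = mapping[tok]
        out.append(tok)
    return " ".join(out)

def predict_author(source: str) -> str:
    """Stand-in for a trained classifier over normalized inputs; the point is
    only that related inputs collapse to the same prediction."""
    return f"author-{hash(normalize(source)) % 1000}"

a = "def add(x, y):\n    return x + y"
b = "def   add(alpha, beta):\n  return   alpha + beta"   # a renamed variant of a
print(normalize(a) == normalize(b), predict_author(a) == predict_author(b))  # True True
```

Because the robustness comes from the normalization itself, it holds for every member of an equivalence class rather than only for the perturbed examples seen during adversarial training.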

05/04/2023 | Peter Rindal (Research Scientist at Visa Research) | Secret-Shared Joins with Multiplicity from Aggregation Trees
We present novel protocols to compute SQL-like join operations on secret shared database tables with non-unique join keys. Previous approaches to the problem had the restriction that the join keys of both the input tables must be unique or had quadratic overhead. Our work lifts this restriction, allowing one or both of the secret shared input tables to have an unknown and unbounded number of repeating join keys while achieving efficient O(n log n) asymptotic communication/computation and O(log n) rounds of interaction, independent of the multiplicity of the keys.
We present two join protocols, ProtoUni and ProtoDup. The first, ProtoUni, is optimized for the case where one table has a unique primary key, while the second, ProtoDup, is for the more general setting where both tables contain duplicate keys. Both protocols require O(n log n) time and O(log n) rounds to join two tables of size n. Our framework for computing joins requires an efficient sorting protocol and generic secure computation for circuits. We concretely instantiate our protocols in the honest majority three-party setting.
Our join protocols are built using an efficient method to compute structured aggregations over a secret-shared input vector V in D^n. The parties additionally hold a secret-shared vector of control bits B in {0, 1}^n that partitions V into sub-vectors (which semantically relate to the join operations). A structured aggregation computes a secret-shared vector V' in D^n where every sub-vector (V_b, ..., V_e) (defined by the control bits) is aggregated as V_i' = V_b + ... + V_i for i in {b, ..., e}, according to some user-defined operator +. Critically, the b, e indices that partition the vector are secret. It is trivial to compute aggregations by sequentially processing the input vector and control bits, but this would require O(n) rounds and would be very slow due to network latency. We introduce Aggregation Trees as a general technique to compute aggregations in O(log n) rounds. For our purpose of computing joins, we instantiate + in {copy previous value, add}, but we believe that this technique is quite powerful and can find applications in other useful settings.
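For intuition, here is a plaintext sketch of such a structured (segmented) aggregation computed in ceil(log2 n) combining rounds, in the style of a segmented Hillis-Steele scan. The encoding B[i] = 1 for "position i starts a new sub-vector" is an assumption made for illustration; the actual protocol performs this access pattern over secret shares so that the control bits, and hence the segment boundaries, stay hidden.

```python
import math

def structured_aggregation(V, B, op=lambda a, b: a + b):
    """Return V' with V'[i] = V[b] op ... op V[i], where b is the start of i's
    sub-vector, using ceil(log2 n) rounds of pairwise combining."""
    n = len(V)
    v, f = list(V), list(B)
    for d in range(max(1, math.ceil(math.log2(n)))):
        off = 1 << d
        nv, nf = v[:], f[:]
        for i in range(off, n):
            if not f[i]:                    # no segment boundary at position i itself
                nv[i] = op(v[i - off], v[i])
            nf[i] = f[i] or f[i - off]      # remember any boundary seen in (i-off, i]
        v, f = nv, nf
    return v

V = [3, 1, 4, 1, 5, 9, 2, 6]
B = [1, 0, 0, 1, 0, 0, 1, 0]                 # sub-vectors [3,1,4], [1,5,9], [2,6]
print(structured_aggregation(V, B))          # [3, 4, 8, 1, 6, 15, 2, 8]
```

Swapping the addition for the {copy previous value, add} operator mentioned above gives the variant the join protocols rely on.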

04/27/2023 | Alin Tomescu (Research Scientist at Aptos Labs) | UTT: Fast, Accountable, Anonymous Payments without zkSNARKs
We present (accountable-and)-UnTraceable Transactions (UTT), a system for decentralized ecash with accountable privacy.
UTT is the first ecash system that obtains three critical properties: (1) it provides decentralized trust by implementing the ledger, bank, auditor, and registration authorities via threshold cryptography and Byzantine Fault Tolerant infrastructure; (2) it balances accountability and privacy by implementing anonymity budgets: users can anonymously send payments, but only up to a limited amount of currency per month. Past this point, transactions can either be made public or subjected to customizable auditing rules; (3) by carefully choosing cryptographic building blocks and co-designing the cryptography and decentralization, UTT is tailored for high throughput and low latency. With a combination of optimized cryptographic building blocks and vertical scaling (optimistic concurrency control), UTT can provide almost 1,000 payments with accountable privacy per second, with latencies of around 100 milliseconds and less. Through horizontal scaling (multiple shards), UTT can scale to tens of thousands of such transactions per second. With 60 shards we measure over 10,000 transactions with accountable privacy per second, with latencies around 500 milliseconds.
We formally define and prove the security of UTT using an MPC-style ideal functionality. Along the way, we define a new MPC framework that captures the security of reactive functionalities in a stand-alone setting, thus filling an important gap in the MPC literature. Our new framework is compatible with practical instantiations of cryptographic primitives and provides a trade-off between concrete efficiency and provable security that may be also useful for future work.

04/05/2023 | Kyle Fredrickson (Ph.D. Student at UCSC) | A Primer on Anonymous Communication
“With enough metadata you don’t really need content.” This quote from former NSA general counsel, Stuart Baker, succinctly summarizes the techniques that interested parties worldwide use when confronted with encrypted communications. Though end-to-end encryption hides message contents, it does nothing to protect communications metadata, e.g., sender, receiver, and timing, leaving a significant side channel for adversaries to exploit. The purpose of anonymous communications systems is to go beyond end-to-end encryption by protecting metadata as well as content. This talk will cover the current state of the art solutions as well as the challenges associated with them.

WINTER 2023 CALENDAR

Seminars are held at E2-180 every Wednesday at 1 PM PT.

Date | Speaker | Title and Abstract

03/22/2023 | Vassilis N. Ioannidis (Applied Scientist in Amazon Search AI) | Research and Development of GNNs with application to Fraud Detection
Fraud detection aims at preventing money or property from being obtained through false pretenses. Fraud detection has many applications at Amazon, where users and sellers collude to promote specific products. In this talk, we will review how one can formulate fraud detection as an ML task and solve it by using Graph Neural Networks (GNNs). GNNs have seen a lot of academic interest in recent years and have shown a lot of promise for many real-world applications, from fraud and abuse detection to recommendations. Yet, industry-wide adoption of GNN techniques for these problems has been lagging behind. As such, there is a strong need for tools and frameworks that help researchers develop GNNs for large-scale graph machine learning problems, and help machine learning practitioners deploy these models for production use cases.
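As a minimal sketch of this formulation (not the speaker's production setup), fraud detection on a transaction graph can be cast as node classification, where a graph-convolution layer mixes each account's features with those of its neighbors. The tiny graph, the features, and the untrained random weights below are illustrative assumptions; a real deployment would train the weights on labeled transaction graphs.

```python
import numpy as np

# Two-layer graph convolution (GCN-style) forward pass for node classification,
# written from scratch so the sketch stays self-contained.
rng = np.random.default_rng(0)

n = 5                                                        # five accounts
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]     # who transacts with whom
A = np.eye(n)                                                # adjacency with self-loops
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))                      # symmetric normalization

X = rng.normal(size=(n, 4))                  # per-account features (amounts, rates, ...)
W1 = rng.normal(size=(4, 8))                 # layer-1 weights (untrained)
W2 = rng.normal(size=(8, 2))                 # layer-2 weights -> {legitimate, fraud}

H = np.maximum(A_hat @ X @ W1, 0.0)          # message passing + ReLU
logits = A_hat @ H @ W2
z = logits - logits.max(axis=1, keepdims=True)
probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
print("fraud probability per account:", probs[:, 1].round(3))
```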

03/15/2023 | Priyanka Mondal (Ph.D. Candidate at UCSC) | Applying Consensus and Replication Securely with FLAQR
Availability is crucial to the security of distributed systems, but guaranteeing availability is hard, especially when participants in the system may act maliciously. Quorum replication protocols provide both integrity and availability: data and computation are replicated at multiple independent hosts, and a quorum of these hosts must agree on the output of all operations applied to the data. Unfortunately, these protocols have high overhead and can be difficult to calibrate for a specific application's needs. Ideally, developers could use high-level abstractions for consensus and replication to write fault-tolerant code that is secure by construction. This paper presents Flow-Limited Authorization for Quorum Replication (FLAQR), a core calculus for building distributed applications with heterogeneous quorum replication protocols while enforcing end-to-end information security. Our type system ensures that well-typed FLAQR programs cannot fail (experience an unrecoverable error) in ways that violate their type-level specifications. We present noninterference theorems that characterize FLAQR's confidentiality, integrity, and availability in the presence of consensus, replication, and failures, as well as a liveness theorem for the class of majority quorum protocols under a bounded number of faults.

03/08/2023 | Luis Salazar (Ph.D. Candidate at UCSC) | A Sandbox for Understanding Nation-State Malware Attacking the Power Grid
We discuss the threat of nation-state malware attacks on critical infrastructure, specifically focusing on the "Industroyer" malware and its impact on the power grid. This malware runs on a compromised Windows-based host and attacks industrial control devices via industrial control protocols. While there have been efforts in analyzing this malware, those efforts focus on the execution of the malware itself, not the effects on the target devices. To reveal the ultimate goal of the malware, we propose a sandbox concept to test and deceive these types of malware, aiding its dynamic analysis. With this sandbox, we create a simulated power grid environment to execute and study the malware sample. We also propose alternative uses for the sandbox, such as countermeasures development and honeypot schemes.

03/01/2023 | Amin Karbas (Ph.D. Student at UCSC) | Oblivious RAMs and Data Structures
An introduction to Oblivious RAMs (ORAMs) and Oblivious Data Structures (ODSs), followed by some current problems.

02/22/2023 | Diego Ortiz (Ph.D. Student at UCSC) | Semi-Automated Synthesis of Driving Rules
Autonomous vehicles must operate in a complex environment with various social norms and expectations. While most of the work on securing autonomous vehicles has focused on safety, we must also monitor for deviations from various societal "common sense" rules to identify attacks against autonomous systems. This work provides a first approach to encoding and understanding these common-sense driving behaviors by semi-automatically extracting rules from driving manuals. We encode our driving rules in a formal specification and make our rules available online for other researchers.
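To give a flavor of what an executable encoding of an extracted rule can look like, here is a hypothetical example checked over a simple vehicle trace. The rule text, the state fields, and the thresholds are assumptions for illustration, not the specification released by the authors.

```python
from dataclasses import dataclass

@dataclass
class State:
    t: float                   # time in seconds
    speed: float               # m/s
    dist_to_stop_sign: float   # meters to the next stop sign

def rule_full_stop_at_stop_sign(trace, stop_zone=2.0, eps=0.1):
    """'When approaching a stop sign, come to a complete stop before the limit
    line': whenever the vehicle is within stop_zone meters of a stop sign,
    at least one state in that interval must have speed below eps."""
    approaching = [s for s in trace if s.dist_to_stop_sign <= stop_zone]
    return (not approaching) or any(s.speed < eps for s in approaching)

trace = [State(0.0, 8.0, 20.0), State(1.0, 4.0, 6.0),
         State(2.0, 0.0, 1.5), State(3.0, 3.0, 0.0)]
print(rule_full_stop_at_stop_sign(trace))   # True: the vehicle stopped inside the zone
```

A monitor that evaluates such predicates online can flag trajectories that deviate from the encoded rules, which is the attack-detection use suggested above.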

Dustin Richmond (Assistant Professor at UCSC) | Pentimento: Data Remanence in Cloud FPGAs
Cloud FPGAs strike an alluring balance between computational efficiency, energy efficiency, and cost. It is the flexibility of the FPGA architecture that enables these benefits, but that very same flexibility that exposes new security vulnerabilities. We show that a remote attacker can recover “FPGA pentimenti” - long-removed secret data belonging to a prior user of a cloud FPGA. The sensitive data constituting an FPGA pentimento is an analog imprint from bias temperature instability (BTI) effects on the underlying transistors. We demonstrate how this slight degradation can be measured using a time-to-digital (TDC) converter when an adversary programs one into the target cloud FPGA. This technique allows an attacker to ascertain previously safe information on cloud FPGAs, even after it is no longer explicitly present. Notably, it can allow an attacker to (1) extract proprietary details from an encrypted FPGA design image available on the cloud marketplace and (2) recover information from a previous user of a cloud FPGA. We demonstrate the ability to extract design details and recover previous cloud FPGA user information on the cloud platform. Our experiments show that BTI degradation (burn-in) and recovery are measurable and constitute a security threat to commercial cloud FPGAs.