Publications

Publications by category in reverse chronological order, generated by jekyll-scholar.
2025
- The Road to Trust: Building Enclaves within Confidential VMs. Wenhao Wang, Linke Song, Benshan Mei, and 6 more authors. In Network and Distributed System Security Symposium, 2025.
Integrity is critical for maintaining system security, as it ensures that only genuine software is loaded onto a machine. Although confidential virtual machines (CVMs) run in isolated environments separate from the host, users still face challenges in maintaining control over the integrity of the code running within the trusted execution environments (TEEs). The presence of a sophisticated operating system (OS) raises the possibility of dynamically creating and executing arbitrary code, leaving user applications within TEEs vulnerable to interference or tampering if the guest OS is compromised. To address this issue, this paper introduces NestedSGX, a framework that leverages the virtual machine privilege level (VMPL), a recent hardware feature available on AMD SEV-SNP, to enable the creation of hardware enclaves within the guest VM. Similar to Intel SGX, NestedSGX treats the guest OS as untrusted for loading potentially malicious code, and ensures that only trusted and measured code executed within the enclave can be remotely attested. To seamlessly protect existing applications, NestedSGX aims for compatibility with Intel SGX by simulating SGX leaf functions. We have also ported the SGX SDK and the Occlum library OS to NestedSGX, enabling the use of existing SGX toolchains and applications in the system. Performance evaluations show that context switches in NestedSGX take about 32,000 – 34,000 cycles, approximately 1.33× – 1.54× those of Intel SGX. NestedSGX incurs minimal overhead in most real-world applications, with an average overhead below 2% for computation- and memory-intensive workloads and below 15.68% for I/O-intensive workloads.
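The guarantee the abstract centers on — only trusted, measured code may run and be attested inside the enclave — can be sketched as a toy load-time measurement check. The `measure`/`load_enclave` helpers and the digest-comparison protocol below are hypothetical illustrations of the general idea, not NestedSGX's actual mechanism:

```python
import hashlib

def measure(code: bytes) -> str:
    """Toy 'measurement': a SHA-256 digest of the code to be loaded."""
    return hashlib.sha256(code).hexdigest()

def load_enclave(code: bytes, expected_measurement: str):
    """Refuse to run code whose measurement differs from the value a
    remote verifier expects (a toy stand-in for attestation)."""
    if measure(code) != expected_measurement:
        raise PermissionError("measurement mismatch: refusing to load code")
    exec(code, {})  # in a real enclave, control transfers to the measured code

trusted = b"result = 2 + 2"
good = measure(trusted)
load_enclave(trusted, good)            # measurement matches: code runs
try:
    load_enclave(b"import os", good)   # tampered code is rejected
except PermissionError as e:
    print("rejected:", e)
```

The point of the sketch is that the guest OS may propose any code, but nothing unmeasured ever executes inside the trusted boundary.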
2024
- Cabin: Confining Untrusted Programs within Confidential VMs. Benshan Mei, Saisai Xia, Wenhao Wang, and 1 more author. In International Conference on Information and Communications Security, 2024.
Confidential computing safeguards sensitive computations from untrusted clouds, with Confidential Virtual Machines (CVMs) providing a secure environment for the guest OS. However, CVMs often ship with large and vulnerable operating system kernels, making them susceptible to attacks that exploit kernel weaknesses. Imprecise control over read/write access in the page table has allowed attackers to exploit vulnerabilities, and the lack of a security hierarchy leads to insufficient separation between untrusted applications and the guest OS, exposing the kernel to direct threats from untrusted programs. This study proposes Cabin, an isolated execution framework within the guest VM that utilizes the latest AMD SEV-SNP technology. Cabin confines untrusted processes to the user space of a lower virtual machine privilege level (VMPL) by introducing a proxy-kernel between the confined processes and the guest OS. Furthermore, we propose execution protection mechanisms based on fine-grained control of VMPL privileges for vulnerable programs and the proxy-kernel, minimizing the attack surface. We also introduce an asynchronous forwarding mechanism and anonymous memory management to reduce the performance impact. Evaluation results show that the Cabin framework incurs a modest overhead (5% on average) on Nbench and WolfSSL benchmarks.
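The asynchronous forwarding idea — a confined process queues requests for a proxy instead of trapping synchronously into the guest OS — can be sketched with a worker thread. Everything here (the queue protocol, request tuple layout, the `proxy_kernel` handler) is a hypothetical illustration of the mechanism, not Cabin's implementation:

```python
import queue
import threading

requests: "queue.Queue[tuple]" = queue.Queue()
results = {}

def proxy_kernel():
    """Toy proxy: drains queued 'syscalls' and forwards them on behalf of
    the confined process, decoupling it from the guest OS."""
    while True:
        req_id, op, args = requests.get()
        if op == "exit":
            break
        results[req_id] = f"{op}({', '.join(map(str, args))}) done"

worker = threading.Thread(target=proxy_kernel)
worker.start()

# Confined process: submit requests asynchronously, keep running,
# collect the results later instead of blocking on each call.
for i, op in enumerate(["read", "write"]):
    requests.put((i, op, (i,)))
requests.put((None, "exit", ()))
worker.join()
print(results)
```

The design point being illustrated: the submitter never blocks per request, so crossing the privilege boundary is batched and amortized.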
- SVSM-KMS: Safeguarding Keys for Cloud Services with Encrypted Virtualization. Benshan Mei, Wenhao Wang, and Dongdai Lin. In International Conference on Science of Cyber Security, 2024.
In recent years, numerous data breaches have resulted from the inadvertent or intentional disclosure of cryptographic keys. To address this issue, this paper proposes SVSM-KMS, which utilizes AMD’s latest Encrypted Virtualization technology (AMD SEV-SNP) to deliver an efficient and seamlessly integrated secure key management service. We realize a multilayered defense by integrating our mechanism within a privileged layer of a confidential virtual machine (CVM), thereby minimizing the trusted computing base (TCB) and preventing key leakage from compromised CVMs. Notably, we incorporate a zero-copy mechanism between the most privileged service module and the least privileged user applications, eliminating redundant data copies. To facilitate seamless integration, we propose a proxy server for existing cloud services. A prototype of SVSM-KMS has been developed on the latest AMD SEV-SNP hardware platform. Evaluation results indicate that the performance of the Encrypted Virtualization-empowered SVSM-KMS is on par with Hadoop KMS, highlighting its practicality.
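The zero-copy idea — the privileged service and the user application operating on one mapped buffer rather than copying key material across layers — can be sketched with named shared memory. The buffer name, 16-byte key layout, and XOR "wrapping" below are assumptions for illustration only, not the SVSM-KMS interface or its cryptography:

```python
from multiprocessing import shared_memory

# The 'service' allocates one named region; the 'application' later maps
# the same region by name, so transforming the key happens in place.
shm = shared_memory.SharedMemory(create=True, size=32, name="kms_demo")
try:
    view = shm.buf
    view[:16] = b"0123456789abcdef"   # application places plaintext key

    # "Privileged service": wrap the key in place (toy XOR stand-in).
    for i in range(16):
        view[i] ^= 0xAA

    # Application side attaches to the same region -- no copy occurred.
    app = shared_memory.SharedMemory(name="kms_demo")
    wrapped = bytes(app.buf[:16])
    app.close()
    print(wrapped.hex())
finally:
    shm.close()
    shm.unlink()
```

Both sides see one physical buffer, which is the property the paper exploits to eliminate redundant copies between privilege levels.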
2020
- Safe sample screening for regularized multi-task learning. Benshan Mei and Yitian Xu. Knowledge-Based Systems, 2020.
As a machine learning paradigm, multi-task learning (MTL) has attracted increasing attention in recent years. It can improve overall performance by exploiting the correlation among different tasks, and is especially helpful for small-sample learning problems. As a classic multi-task learner, regularized multi-task learning (RMTL) has inspired much subsequent multi-task learning research, and numerous studies have demonstrated its advantages over single-task learners such as the support vector machine. However, the training cost becomes considerable on large datasets. To tackle this problem, we propose safe screening rules for an improved regularized multi-task support vector machine (IRMTL). By statically detecting and removing inactive samples from multiple tasks simultaneously before solving the reduced optimization problem, both rules reduce training time significantly without degrading the performance of the proposed method. Experimental results on 13 benchmark datasets and an image dataset clearly demonstrate the effectiveness of the safe screening rules for IRMTL.
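The screening principle — samples that provably cannot be support vectors are discarded, and the reduced problem yields the same classifier — can be illustrated with a linear SVM toy. The safety slack `eps` below is an assumed heuristic stand-in for the paper's rigorously derived bounds, and the single-task hinge-loss model is a simplification of IRMTL:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.r_[np.full(50, -1.0), np.full(50, 1.0)]

def train_svm(X, y, lam=0.01, steps=2000, lr=0.1):
    """Full-batch subgradient descent on the L2-regularized hinge loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        active = y * (X @ w) < 1          # samples inside the margin
        g = lam * w
        if active.any():
            g = g - (y[active, None] * X[active]).sum(axis=0) / len(y)
        w -= lr * g
    return w

w_full = train_svm(X, y)

# Toy screening: samples far outside the margin cannot become support
# vectors, so they do not influence the solution and can be removed.
eps = 0.5
keep = y * (X @ w_full) < 1 + eps
w_reduced = train_svm(X[keep], y[keep])

print(f"kept {keep.sum()}/{len(y)} samples")
```

Training on the screened subset recovers essentially the same decision boundary, which is the "safe" part of safe screening; the paper's contribution is doing this statically, before the full problem is ever solved, across multiple tasks at once.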
- Multi-task ν-twin support vector machines. Benshan Mei and Yitian Xu. Neural Computing and Applications, 2020.
The twin support vector machine (TWSVM) has been shown to outperform the support vector machine (SVM) in most cases, since it solves two smaller quadratic programming problems, leading to high computational efficiency. Like many other machine learning algorithms, it was proposed for single-task learning. In many practical problems, however, a learning task is related to other tasks. Training those tasks independently may neglect the underlying information shared among all tasks, even though such information can improve overall performance. Inspired by multi-task learning theory, we propose two novel multi-task ν-TWSVMs. Both models inherit the merits of multi-task learning and the ν-TWSVM, while overcoming the shortcomings of other multi-task SVMs and multi-task TWSVMs. Experimental results on three benchmark datasets and two popular image datasets clearly demonstrate the effectiveness of our methods.
2019
- Multi-task least squares twin support vector machine for classification. Benshan Mei and Yitian Xu. Neurocomputing, 2019.
With the rapid development of machine learning, pattern recognition plays an important role in many areas. However, traditional pattern recognition mainly focuses on single-task learning (STL), while multi-task learning (MTL) has largely been ignored. Compared to STL, MTL can improve the performance of learning methods through the information shared among all tasks. Inspired by the recently proposed directed multi-task twin support vector machine (DMTSVM) and the least squares twin support vector machine (LSTWSVM), we put forward a novel multi-task least squares twin support vector machine (MTLS-TWSVM). Instead of the two dual quadratic programming problems (QPPs) solved in DMTSVM, our algorithm only needs to solve two smaller linear systems of equations, which leads to simple solutions and effectively accelerates the computation. Thus, our proposed model can be applied to large-scale datasets. In addition, it can deal with linearly inseparable samples by using the kernel trick. Experiments on three popular multi-task datasets show the effectiveness of our proposed method. Finally, we apply it to two popular image datasets, and the experimental results further demonstrate the validity of our proposed algorithm.
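The computational point — replacing dual QPPs with two small linear solves, in the spirit of the least squares twin SVM — can be sketched for the single-task building block. This is a toy LSTSVM on synthetic data, an assumed simplification, not the multi-task MTLS-TWSVM proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(-2, 0.6, (60, 2))   # class +1 samples
B = rng.normal(+2, 0.6, (60, 2))   # class -1 samples

def lstsvm_planes(A, B, c1=1.0, c2=1.0):
    """Fit the two non-parallel hyperplanes of a least squares twin SVM.
    Each plane comes from one small linear solve rather than a QPP."""
    H = np.hstack([A, np.ones((len(A), 1))])   # [A e]
    G = np.hstack([B, np.ones((len(B), 1))])   # [B e]
    z1 = -np.linalg.solve(G.T @ G + (1 / c1) * (H.T @ H), G.T @ np.ones(len(B)))
    z2 = np.linalg.solve(H.T @ H + (1 / c2) * (G.T @ G), H.T @ np.ones(len(A)))
    return z1, z2                               # z = [w; b] for each plane

def predict(X, z1, z2):
    """Assign each sample to the class whose hyperplane lies nearer."""
    Xe = np.hstack([X, np.ones((len(X), 1))])
    d1 = np.abs(Xe @ z1) / np.linalg.norm(z1[:-1])
    d2 = np.abs(Xe @ z2) / np.linalg.norm(z2[:-1])
    return np.where(d1 <= d2, 1, -1)

z1, z2 = lstsvm_planes(A, B)
acc = (np.r_[predict(A, z1, z2), -predict(B, z1, z2)] == 1).mean()
print(f"training accuracy: {acc:.2f}")
```

Each `np.linalg.solve` call factors a matrix whose size is the feature dimension plus one, independent of the number of samples in the dual, which is why this family of methods scales to larger datasets than QPP-based twin SVMs.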