
DensePure: Understanding Diffusion Models towards Adversarial Robustness

Xiao, Chaowei and Chen, Zhongzhu and Jin, Kun and Wang, Jiongxiao and Nie, Weili and Liu, Mingyan and Anandkumar, Anima and Li, Bo and Song, Dawn (2022) DensePure: Understanding Diffusion Models towards Adversarial Robustness. (Unpublished)

Full text is not posted in this repository. Consult Related URLs below.

Diffusion models have recently been employed to improve certified robustness through denoising. However, a theoretical understanding of why diffusion models improve certified robustness is still lacking, which prevents further improvement. In this study, we close this gap by analyzing the fundamental properties of diffusion models and establishing the conditions under which they can enhance certified robustness. This deeper understanding allows us to propose a new method, DensePure, designed to improve the certified robustness of a pretrained model (i.e., a classifier). Given an (adversarial) input, DensePure runs denoising via the reverse process of the diffusion model multiple times (with different random seeds) to obtain multiple reversed samples, which are then passed through the classifier; a majority vote over the inferred labels yields the final prediction. This design of using multiple denoising runs is informed by our theoretical analysis of the conditional distribution of the reversed sample. Specifically, when the data density of a clean sample is high, its conditional density under the reverse process of a diffusion model is also high; thus, sampling from this conditional distribution can purify the adversarial example and return the corresponding clean sample with high probability. By using the highest-density point in the conditional distribution as the reversed sample, we identify the robust region of a given instance under the diffusion model's reverse process. We show that this robust region is a union of multiple convex sets and is potentially much larger than the robust regions identified in previous works. In practice, DensePure approximates the label of the high-density region in the conditional distribution and can thus enhance certified robustness.
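The multiple-run denoising and majority-vote procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `denoise` stands in for one stochastic sample from a diffusion model's reverse process, and `classify` for any pretrained classifier; the toy stand-ins at the bottom are hypothetical and exist only to make the sketch runnable.

```python
import random
from collections import Counter

def densepure_predict(x_adv, denoise, classify, num_runs=10, seed=0):
    """Run the (stochastic) denoiser on x_adv num_runs times with
    different random seeds, classify each reversed sample, and return
    the majority-vote label as the final prediction."""
    labels = []
    for i in range(num_runs):
        rng = random.Random(seed + i)   # a fresh seed for each run
        x_rev = denoise(x_adv, rng)     # one sample from the reverse process
        labels.append(classify(x_rev))
    # Majority voting over the inferred labels gives the final prediction.
    return Counter(labels).most_common(1)[0][0]

# Toy stand-ins (hypothetical): the "denoiser" perturbs a scalar input,
# and the "classifier" thresholds it. With enough runs, the vote
# concentrates on the label of the high-density region around the input.
def toy_denoise(x, rng):
    return x + rng.gauss(0.0, 0.1)

def toy_classify(x):
    return 1 if x > 0.5 else 0

print(densepure_predict(0.6, toy_denoise, toy_classify, num_runs=25))
```

Because each run draws an independent reversed sample, the vote approximates the most likely label under the conditional distribution of the reverse process, which is the quantity the paper's analysis ties to certified robustness.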

Item Type: Report or Paper (Discussion Paper)
Related URLs:
Paper
ORCID:
Xiao, Chaowei: 0000-0002-7043-4926
Chen, Zhongzhu: 0000-0003-4998-4293
Jin, Kun: 0000-0002-5293-2745
Liu, Mingyan: 0000-0003-3295-9200
Anandkumar, Anima: 0000-0002-6974-6797
Li, Bo: 0000-0002-8019-8891
Song, Dawn: 0000-0001-9745-6802
Additional Information: Our work can positively impact society by improving the robustness and security of AI systems. We do not involve human subjects or release datasets; instead, we carefully follow the licenses of existing data and models when developing and evaluating our method. Reproducibility statement: all assumptions for the theoretical analysis are listed in B.1 and the complete proofs are included in B.2; the experimental settings and datasets are provided in Section 5; the pseudo-code for DensePure is in C.1 and the fast sampling procedures are provided in C.2.
Record Number: CaltechAUTHORS:20221221-004727985
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 118559
Deposited By: George Porter
Deposited On: 22 Dec 2022 18:32
Last Modified: 02 Jun 2023 01:29
