
2022

  1. InstructPix2Pix: Learning to Follow Image Editing Instructions
     Tim Brooks, Aleksander Holynski, and Alexei A. Efros
     Nov 2022

    Paper Abstract

We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models – a language model (GPT-3) and a text-to-image model (Stable Diffusion) – to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per-example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions.

Three Important Things

1. InstructPix2Pix

In this paper, the authors tackle instruction-based image editing: given an input image and an instruction, output an image with the requested edit applied, while leaving the style and content that the instruction did not ask to change untouched.

Some of their editing results are shown in the figures of the paper.

They take a supervised learning approach and use a latent diffusion model, which operates in the latent space of a pretrained variational autoencoder rather than in pixel space, making training faster. How they obtained the dataset is explained in the next section. Their training objective closely resembles that of latent diffusion: \(\mathcal{E}\) is the VAE encoder, and \(\epsilon_\theta\) is the network being trained, which learns to predict the noise added to the latent \(z_t\), conditioned on the source image conditioning \(c_I\) and the text edit instruction \(c_T\). The training objective is hence

\[L=\mathbb{E}_{\mathcal{E}(x), \mathcal{E}\left(c_I\right), c_T, \epsilon \sim \mathcal{N}(0,1), t}\left[\left\|\epsilon-\epsilon_\theta\left(z_t, t, \mathcal{E}\left(c_I\right), c_T\right)\right\|_2^2\right],\]

which is minimized when \(\epsilon_\theta\) accurately predicts the added noise.
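As a rough PyTorch sketch of this objective (hypothetical `unet`, `vae_encode`, `text_encode`, and `scheduler` names standing in for the pretrained components; not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def training_loss(unet, vae_encode, text_encode, scheduler, x, c_img, c_txt):
    """Loss for one batch; every callable here is a stand-in for the real component."""
    z0 = vae_encode(x)                      # latent of the edited (target) image, E(x)
    z_cond = vae_encode(c_img)              # latent of the source image conditioning, E(c_I)
    t = torch.randint(0, scheduler.num_timesteps, (z0.shape[0],), device=z0.device)
    eps = torch.randn_like(z0)              # epsilon ~ N(0, 1)
    z_t = scheduler.add_noise(z0, eps, t)   # forward-diffuse z0 to timestep t
    # Image conditioning enters via channel-wise concatenation (described just below);
    # the text instruction enters through the U-Net's cross-attention layers.
    eps_pred = unet(torch.cat([z_t, z_cond], dim=1), t, text_encode(c_txt))
    return F.mse_loss(eps_pred, eps)        # || eps - eps_theta(z_t, t, E(c_I), c_T) ||_2^2
```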

They bootstrapped training from a pretrained Stable Diffusion checkpoint. To incorporate the image conditioning \(c_I\), they added additional input channels to the first convolutional layer so that it can take in the encoded image conditioning \(\mathcal{E}(c_I)\) concatenated with the noisy latent. To take in the text editing instruction, they reused the text conditioning mechanism originally meant for captions.
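A minimal sketch of this channel-widening step (a hypothetical helper, not the released code; per the paper, the weights that act on the new channels are initialized to zero so the model initially behaves like the pretrained checkpoint):

```python
import torch
import torch.nn as nn

def widen_first_conv(old_conv: nn.Conv2d, extra_in_channels: int) -> nn.Conv2d:
    """Return a copy of `old_conv` that also accepts `extra_in_channels` extra inputs."""
    new_conv = nn.Conv2d(
        old_conv.in_channels + extra_in_channels,
        old_conv.out_channels,
        kernel_size=old_conv.kernel_size,
        stride=old_conv.stride,
        padding=old_conv.padding,
    )
    with torch.no_grad():
        new_conv.weight.zero_()                                       # new channels start at zero
        new_conv.weight[:, : old_conv.in_channels] = old_conv.weight  # copy pretrained weights
        new_conv.bias.copy_(old_conv.bias)
    return new_conv

# e.g. four extra channels for the encoded image conditioning E(c_I):
# unet.conv_in = widen_first_conv(unet.conv_in, extra_in_channels=4)
```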

2. Generating a Dataset

Because it is difficult to obtain a large-scale dataset containing input images, text editing instructions, and the corresponding edited images, the authors had to generate their own dataset.

To generate instructions, they fed GPT-3 a caption of the image, and asked it to generate a plausible editing instruction. They then asked it to determine what a reasonable output caption for the image after the edits might be, which results in an input-output caption pair.

To generate the paired images, they used Stable Diffusion to generate an image for each of the input and output captions. However, vanilla Stable Diffusion can produce two images with wildly different characteristics and styles, as the comparison in the paper shows. They therefore use the Prompt-to-Prompt technique, which encourages the two generated images to be similar by sharing cross-attention maps across the two generations.

The overall data generation pipeline is summarized in a figure in the paper, and sketched in code below:
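This is a high-level sketch only, with the two large models abstracted behind placeholder callables (not an actual API):

```python
from typing import Any, Callable, Tuple

def generate_example(
    input_caption: str,
    propose_edit: Callable[[str], Tuple[str, str]],        # GPT-3: caption -> (instruction, edited caption)
    generate_pair: Callable[[str, str], Tuple[Any, Any]],  # Stable Diffusion + Prompt-to-Prompt
) -> dict:
    """Produce one (input image, edit instruction, edited image) training example."""
    instruction, output_caption = propose_edit(input_caption)
    before_image, after_image = generate_pair(input_caption, output_caption)
    return {"input_image": before_image, "edit": instruction, "output_image": after_image}
```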

3. Classifier-free Guidance for Two Conditionings

Classifier-free diffusion guidance is a method of steering the diffusion process towards samples for which an implicit classifier assigns high likelihood to the conditioning.

In the usual setting there is a single conditioning \(c\). Here \(e_\theta(z_t, c)\) denotes the predicted noise conditioned on \(c\), and \(e_\theta(z_t, \varnothing)\) the predicted noise with no conditioning. Classifier-free guidance then replaces the noise prediction with \(\tilde{e}_\theta(z_t, c)\), which is extrapolated in the direction of the conditional prediction and away from the unconditional one:

\[\tilde{e}_\theta\left(z_t, c\right)=e_\theta\left(z_t, \varnothing\right)+s \cdot\left(e_\theta\left(z_t, c\right)-e_\theta\left(z_t, \varnothing\right)\right)\]

They extend this idea to the case of two conditionings, \(c_I\) and \(c_T\), with guidance scales \(s_I\) and \(s_T\) controlling how strongly the input image and the text instruction, respectively, influence the final image:

\[\begin{aligned} \tilde{e}_\theta\left(z_t, c_I, c_T\right)= & e_\theta\left(z_t, \varnothing, \varnothing\right) \\ & +s_I \cdot\left(e_\theta\left(z_t, c_I, \varnothing\right)-e_\theta\left(z_t, \varnothing, \varnothing\right)\right) \\ & +s_T \cdot\left(e_\theta\left(z_t, c_I, c_T\right)-e_\theta\left(z_t, c_I, \varnothing\right)\right) \end{aligned}\]
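A sketch of how the three noise predictions could be combined at each sampling step (hypothetical `eps_model` callable standing in for \(e_\theta\); `null_img` and `null_txt` denote the unconditional inputs \(\varnothing\)):

```python
def guided_noise(eps_model, z_t, t, c_img, c_txt, null_img, null_txt, s_img, s_txt):
    """Two-conditioning classifier-free guidance, mirroring the formula above."""
    e_uncond = eps_model(z_t, t, null_img, null_txt)  # e_theta(z_t, null, null)
    e_img = eps_model(z_t, t, c_img, null_txt)        # e_theta(z_t, c_I, null)
    e_full = eps_model(z_t, t, c_img, c_txt)          # e_theta(z_t, c_I, c_T)
    return e_uncond + s_img * (e_img - e_uncond) + s_txt * (e_full - e_img)
```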

Controlling these two knobs hence allows us to balance faithfulness to the original source image against faithfulness to the edit instruction, as the sweep over both guidance scales in the paper shows.

Most Glaring Deficiency

The biggest limitation is that the approach can only be as good as the Prompt-to-Prompt technique, since that is where its training dataset comes from; its failure modes therefore cannot be easily addressed within the method itself. In a sense, the main contribution of the paper is UX: it provides a new instruction-based interface for image editing, where the previous alternative was to run Prompt-to-Prompt in Stable Diffusion and come up with the captions yourself.

Conclusions for Future Work

For machine learning models to be more widely adopted, usability is key. For instance, mass adoption of LLMs only took off once OpenAI’s ChatGPT made them intuitive and easy to use.

This paper leverages existing techniques to provide a new instruction-based text interface for editing images, by creatively finding a way to condition on both text and image inputs. Such an approach could serve as inspiration for adapting other models to interfaces that feel more natural to users.