Neural Compression in PyTorch

Created: Nov 12, 2022
Category: Deep Learning
In this blog post, I’ll cover some of the basics of how to implement an entropy model in PyTorch. There are many useful resources, including the following libraries:
  • Tensorflow Compression
  • CompressAI

Overall Compression Pipeline

The basic pipeline for compression usually consists of three parts:
  1. Encoder: transforms the input data into a latent space,
  2. Decoder: converts the latents back into the original input space,
  3. Entropy Model: models the marginal probability of the latents.
In the simplest case, we assume the latents are uniformly distributed, and the overall compression model reduces to a standard auto-encoder. In many real-world scenarios, however, the latents are not uniformly distributed, and knowing their distribution can often improve compression. Moreover, that distribution can be hard to represent explicitly, so we would like a learnable entropy model that learns the distribution of the latents.
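
As a rough sketch of this pipeline, the snippet below wires an encoder, a decoder, and a plug-in entropy model into a single PyTorch module. The class name, layer choices, and the `entropy_model` argument are illustrative assumptions of this sketch, not code from either library above.

```python
import torch
from torch import nn

# Minimal sketch of the three-part pipeline (illustrative names, not library code).
# The entropy model is treated as a plug-in module that quantizes the latents and
# returns per-element likelihoods under its learned density.
class CompressionModel(nn.Module):
    def __init__(self, entropy_model: nn.Module, in_channels: int = 3, latent_channels: int = 64):
        super().__init__()
        # Encoder: input -> latents (downsamples by 4x in each spatial dimension)
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, latent_channels, 5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(latent_channels, latent_channels, 5, stride=2, padding=2),
        )
        # Decoder: latents -> reconstruction (mirrors the encoder)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, latent_channels, 5, stride=2, padding=2, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(latent_channels, in_channels, 5, stride=2, padding=2, output_padding=1),
        )
        self.entropy_model = entropy_model

    def forward(self, x):
        y = self.encoder(x)                          # latents
        y_hat, likelihoods = self.entropy_model(y)   # quantize + estimate probabilities
        x_hat = self.decoder(y_hat)                  # reconstruction
        return x_hat, likelihoods
```

During training, the rate term is typically estimated from the likelihoods as `-torch.log2(likelihoods).sum()` and combined with a distortion term such as the MSE between `x` and `x_hat`.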

Entropy Model

In this section, we focus primarily on how to implement an entropy model, which models the marginal probability of the latents.
Given a latent tensor, we’d like to be able to do the following, as sketched below:
  1. Compress the tensor into a bitstream
  2. Decompress a bitstream back into the tensor
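
Below is one possible interface for such an entropy model, written as a rough sketch rather than a reference implementation. The density here (an independent zero-mean Gaussian per channel with a learnable scale) is a deliberately simple stand-in, and `compress`/`decompress` are left as placeholders where an actual entropy coder (e.g. range or arithmetic coding) would go; all names are assumptions of this sketch.

```python
import torch
from torch import nn

# Interface sketch for an entropy model (illustrative, not a full range coder).
# forward() is used during training: it quantizes (via additive uniform noise as a
# differentiable proxy for rounding) and returns likelihoods under the learned density.
# compress()/decompress() are the inference-time entry points that would call an
# actual entropy coder using the same density.
class EntropyModel(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Toy density: independent zero-mean Gaussian per channel with learnable scale.
        self.log_scale = nn.Parameter(torch.zeros(channels))

    def _likelihood(self, y_hat):
        # Assumes NCHW latents; probability mass of the unit-width bin around each value.
        scale = self.log_scale.exp().view(1, -1, 1, 1)
        dist = torch.distributions.Normal(0.0, scale)
        return dist.cdf(y_hat + 0.5) - dist.cdf(y_hat - 0.5)

    def forward(self, y):
        if self.training:
            # Additive uniform noise as a differentiable stand-in for rounding.
            y_hat = y + torch.empty_like(y).uniform_(-0.5, 0.5)
        else:
            y_hat = torch.round(y)
        return y_hat, self._likelihood(y_hat)

    def compress(self, y):
        # Placeholder: a real implementation would entropy-code torch.round(y)
        # using the per-symbol CDFs implied by _likelihood().
        raise NotImplementedError

    def decompress(self, strings, shape):
        # Placeholder: inverse of compress().
        raise NotImplementedError
```

The additive-noise trick in `forward()` keeps training differentiable, while inference switches to hard rounding so that the symbols handed to the entropy coder are integers whose probabilities come from the same learned density.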