📣 Stay updated! Subscribe to our newsletter for the latest releases, tutorials, and tips directly from the Albumentations team.
Docs | Discord | Twitter | LinkedIn
This repository is no longer actively maintained. The last update was in June 2025, and no further bug fixes, features, or compatibility updates will be provided.
All development has moved to AlbumentationsX, the next-generation successor to Albumentations.
Note: AlbumentationsX uses dual licensing (AGPL-3.0 / Commercial). The AGPL license has strict copyleft requirements - see details below.
- ✅ Forever free for all uses including commercial
- ✅ No licensing fees or restrictions
- ❌ No bug fixes - Even critical bugs won't be addressed
- ❌ No new features - Missing out on performance improvements
- ❌ No support - Issues and questions go unanswered
- ❌ No compatibility updates - May break with new Python/PyTorch versions
Best for: Projects that work fine with the current version and don't need updates
- ✅ Drop-in replacement - Same API, just `pip install albumentationsx`
- ✅ Active development - Regular updates and new features
- ✅ Bug fixes - Issues are actively addressed
- ✅ Performance improvements - Faster execution
- ✅ Community support - Active Discord and issue tracking
⚠️ Dual licensed:
- AGPL-3.0: Free ONLY for projects licensed under AGPL-3.0 (not compatible with MIT, Apache, BSD, etc.)
- Commercial License: Required for proprietary use AND permissive open-source projects
Best for: Projects that need ongoing support, updates, and new features
⚠️ AGPL License Warning: The AGPL-3.0 license is NOT compatible with permissive licenses like MIT, Apache 2.0, or BSD. If your project uses any of these licenses, you CANNOT use the AGPL version of AlbumentationsX - you'll need a commercial license.
```bash
# Uninstall original
pip uninstall albumentations

# Install AlbumentationsX
pip install albumentationsx
```
That's it! Your existing code continues to work without any changes:
```python
import albumentations as A  # Same import!

transform = A.Compose([
    A.RandomCrop(width=256, height=256),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])
```
- 📦 AlbumentationsX Repository: https://github.com/albumentations-team/AlbumentationsX
- 💰 Commercial Licensing: https://albumentations.ai/pricing
- 💬 Discord Community: https://discord.gg/AKPrrDYNAt
Albumentations is a Python library for image augmentation. Image augmentation is used in deep learning and computer vision tasks to increase the quality of trained models. The purpose of image augmentation is to create new training samples from the existing data.
Here is an example of how you can apply some pixel-level augmentations from Albumentations to create new images from the original one:
- Complete Computer Vision Support: Works with all major CV tasks including classification, segmentation (semantic & instance), object detection, and pose estimation.
- Simple, Unified API: One consistent interface for all data types - RGB/grayscale/multispectral images, masks, bounding boxes, and keypoints.
- Rich Augmentation Library: 70+ high-quality augmentations to enhance your training data.
- Fast: Consistently benchmarked as the fastest augmentation library, with optimizations for production use.
- Deep Learning Integration: Works with PyTorch, TensorFlow, and other frameworks. Part of the PyTorch ecosystem.
- Created by Experts: Built by developers with deep experience in computer vision and machine learning competitions.
The core team behind Albumentations:
Vladimir I. Iglovikov | Kaggle Grandmaster
Mikhail Druzhinin | Kaggle Expert
Alexander Buslaev | Kaggle Master
Eugene Khvedchenya | Kaggle Grandmaster
Albumentations requires Python 3.9 or higher. To install the latest version from PyPI:
```bash
pip install -U albumentations
```
Other installation options are described in the documentation.
The full documentation is available at https://albumentations.ai/docs/.
```python
import albumentations as A
import cv2

# Declare an augmentation pipeline
transform = A.Compose([
    A.RandomCrop(width=256, height=256),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])

# Read an image with OpenCV and convert it to the RGB colorspace
image = cv2.imread("image.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Augment an image
transformed = transform(image=image)
transformed_image = transformed["image"]
```
Please start with the introduction articles about why image augmentation is important and how it helps to build better models.
If you want to use Albumentations for a specific task such as classification, segmentation, or object detection, refer to the set of articles with an in-depth description of that task. We also have a list of examples of applying Albumentations to different use cases.
Check the online demo of the library. With it, you can apply augmentations to different images and see the result. Also, we have a list of all available augmentations and their targets.
Pixel-level transforms will change just an input image and will leave any additional targets such as masks, bounding boxes, and keypoints unchanged. For volumetric data (volumes and 3D masks), these transforms are applied independently to each slice along the Z-axis (depth dimension), maintaining consistency across the volume. The list of pixel-level transforms:
- AdditiveNoise
- AdvancedBlur
- AutoContrast
- Blur
- CLAHE
- ChannelDropout
- ChannelShuffle
- ChromaticAberration
- ColorJitter
- Defocus
- Downscale
- Emboss
- Equalize
- FDA
- FancyPCA
- FromFloat
- GaussNoise
- GaussianBlur
- GlassBlur
- HEStain
- HistogramMatching
- HueSaturationValue
- ISONoise
- Illumination
- ImageCompression
- InvertImg
- MedianBlur
- MotionBlur
- MultiplicativeNoise
- Normalize
- PixelDistributionAdaptation
- PlanckianJitter
- PlasmaBrightnessContrast
- PlasmaShadow
- Posterize
- RGBShift
- RandomBrightnessContrast
- RandomFog
- RandomGamma
- RandomGravel
- RandomRain
- RandomShadow
- RandomSnow
- RandomSunFlare
- RandomToneCurve
- RingingOvershoot
- SaltAndPepper
- Sharpen
- ShotNoise
- Solarize
- Spatter
- Superpixels
- TextImage
- ToFloat
- ToGray
- ToRGB
- ToSepia
- UnsharpMask
- ZoomBlur
Spatial-level transforms will simultaneously change both an input image as well as additional targets such as masks, bounding boxes, and keypoints. For volumetric data (volumes and 3D masks), these transforms are applied independently to each slice along the Z-axis (depth dimension), maintaining consistency across the volume. The following table shows which additional targets are supported by each transform:
- Volume: 3D array of shape (D, H, W) or (D, H, W, C) where D is depth, H is height, W is width, and C is number of channels (optional)
- Mask3D: Binary or multi-class 3D mask of shape (D, H, W) where each slice represents segmentation for the corresponding volume slice
Transform | Image | Mask | BBoxes | Keypoints | Volume | Mask3D |
---|---|---|---|---|---|---|
Affine | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
AtLeastOneBBoxRandomCrop | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
BBoxSafeRandomCrop | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
CenterCrop | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
CoarseDropout | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
ConstrainedCoarseDropout | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Crop | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
CropAndPad | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
CropNonEmptyMaskIfExists | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
D4 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
ElasticTransform | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Erasing | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
FrequencyMasking | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
GridDistortion | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
GridDropout | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
GridElasticDeform | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
HorizontalFlip | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Lambda | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
LongestMaxSize | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
MaskDropout | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Morphological | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Mosaic | ✓ | ✓ | ✓ | ✓ | | |
NoOp | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
OpticalDistortion | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
OverlayElements | ✓ | ✓ | | | | |
Pad | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
PadIfNeeded | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Perspective | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
PiecewiseAffine | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
PixelDropout | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
RandomCrop | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
RandomCropFromBorders | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
RandomCropNearBBox | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
RandomGridShuffle | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
RandomResizedCrop | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
RandomRotate90 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
RandomScale | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
RandomSizedBBoxSafeCrop | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
RandomSizedCrop | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Resize | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Rotate | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
SafeRotate | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
ShiftScaleRotate | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
SmallestMaxSize | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
SquareSymmetry | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
ThinPlateSpline | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
TimeMasking | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
TimeReverse | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Transpose | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
VerticalFlip | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
XYMasking | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
3D transforms operate on volumetric data and can modify both the input volume and associated 3D mask.
Where:
- Volume: 3D array of shape (D, H, W) or (D, H, W, C) where D is depth, H is height, W is width, and C is number of channels (optional)
- Mask3D: Binary or multi-class 3D mask of shape (D, H, W) where each slice represents segmentation for the corresponding volume slice
Transform | Volume | Mask3D | Keypoints |
---|---|---|---|
CenterCrop3D | ✓ | ✓ | ✓ |
CoarseDropout3D | ✓ | ✓ | ✓ |
CubicSymmetry | ✓ | ✓ | ✓ |
Pad3D | ✓ | ✓ | ✓ |
PadIfNeeded3D | ✓ | ✓ | ✓ |
RandomCrop3D | ✓ | ✓ | ✓ |
- Platform: macOS-15.1-arm64-arm-64bit
- Processor: arm
- CPU Count: 16
- Python Version: 3.12.8
- Number of images: 2000
- Runs per transform: 5
- Max warmup iterations: 1000
- albumentations: 2.0.4
- augly: 1.0.0
- imgaug: 0.4.0
- kornia: 0.8.0
- torchvision: 0.20.1
Number shows how many uint8 images per second can be processed on one CPU thread. Larger is better. The Speedup column shows how many times faster Albumentations is compared to the fastest other library for each transform.
Transform | albumentations 2.0.4 | augly 1.0.0 | imgaug 0.4.0 | kornia 0.8.0 | torchvision 0.20.1 | Speedup (Alb/fastest other) |
---|---|---|---|---|---|---|
Affine | 1445 ± 9 | - | 1328 ± 16 | 248 ± 6 | 188 ± 2 | 1.09x |
AutoContrast | 1657 ± 13 | - | - | 541 ± 8 | 344 ± 1 | 3.06x |
Blur | 7657 ± 114 | 386 ± 4 | 5381 ± 125 | 265 ± 11 | - | 1.42x |
Brightness | 11985 ± 455 | 2108 ± 32 | 1076 ± 32 | 1127 ± 27 | 854 ± 13 | 5.68x |
CLAHE | 647 ± 4 | - | 555 ± 14 | 165 ± 3 | - | 1.17x |
CenterCrop128 | 119293 ± 2164 | - | - | - | - | N/A |
ChannelDropout | 11534 ± 306 | - | - | 2283 ± 24 | - | 5.05x |
ChannelShuffle | 6772 ± 109 | - | 1252 ± 26 | 1328 ± 44 | 4417 ± 234 | 1.53x |
CoarseDropout | 18962 ± 1346 | - | 1190 ± 22 | - | - | 15.93x |
ColorJitter | 1020 ± 91 | 418 ± 5 | - | 104 ± 4 | 87 ± 1 | 2.44x |
Contrast | 12394 ± 363 | 1379 ± 25 | 717 ± 5 | 1109 ± 41 | 602 ± 13 | 8.99x |
CornerIllumination | 484 ± 7 | - | - | 452 ± 3 | - | 1.07x |
Elastic | 374 ± 2 | - | 395 ± 14 | 1 ± 0 | 3 ± 0 | 0.95x |
Equalize | 1236 ± 21 | - | 814 ± 11 | 306 ± 1 | 795 ± 3 | 1.52x |
Erasing | 27451 ± 2794 | - | - | 1210 ± 27 | 3577 ± 49 | 7.67x |
GaussianBlur | 2350 ± 118 | 387 ± 4 | 1460 ± 23 | 254 ± 5 | 127 ± 4 | 1.61x |
GaussianIllumination | 720 ± 7 | - | - | 436 ± 13 | - | 1.65x |
GaussianNoise | 315 ± 4 | - | 263 ± 9 | 125 ± 1 | - | 1.20x |
Grayscale | 32284 ± 1130 | 6088 ± 107 | 3100 ± 24 | 1201 ± 52 | 2600 ± 23 | 5.30x |
HSV | 1197 ± 23 | - | - | - | - | N/A |
HorizontalFlip | 14460 ± 368 | 8808 ± 1012 | 9599 ± 495 | 1297 ± 13 | 2486 ± 107 | 1.51x |
Hue | 1944 ± 64 | - | - | 150 ± 1 | - | 12.98x |
Invert | 27665 ± 3803 | - | 3682 ± 79 | 2881 ± 43 | 4244 ± 30 | 6.52x |
JpegCompression | 1321 ± 33 | 1202 ± 19 | 687 ± 26 | 120 ± 1 | 889 ± 7 | 1.10x |
LinearIllumination | 479 ± 5 | - | - | 708 ± 6 | - | 0.68x |
MedianBlur | 1229 ± 9 | - | 1152 ± 14 | 6 ± 0 | - | 1.07x |
MotionBlur | 3521 ± 25 | - | 928 ± 37 | 159 ± 1 | - | 3.79x |
Normalize | 1819 ± 49 | - | - | 1251 ± 14 | 1018 ± 7 | 1.45x |
OpticalDistortion | 661 ± 7 | - | - | 174 ± 0 | - | 3.80x |
Pad | 48589 ± 2059 | - | - | - | 4889 ± 183 | 9.94x |
Perspective | 1206 ± 3 | - | 908 ± 8 | 154 ± 3 | 147 ± 5 | 1.33x |
PlankianJitter | 3221 ± 63 | - | - | 2150 ± 52 | - | 1.50x |
PlasmaBrightness | 168 ± 2 | - | - | 85 ± 1 | - | 1.98x |
PlasmaContrast | 145 ± 3 | - | - | 84 ± 0 | - | 1.71x |
PlasmaShadow | 183 ± 5 | - | - | 216 ± 5 | - | 0.85x |
Posterize | 12979 ± 1121 | - | 3111 ± 95 | 836 ± 30 | 4247 ± 26 | 3.06x |
RGBShift | 3391 ± 104 | - | - | 896 ± 9 | - | 3.79x |
Rain | 2043 ± 115 | - | - | 1493 ± 9 | - | 1.37x |
RandomCrop128 | 111859 ± 1374 | 45395 ± 934 | 21408 ± 622 | 2946 ± 42 | 31450 ± 249 | 2.46x |
RandomGamma | 12444 ± 753 | - | 3504 ± 72 | 230 ± 3 | - | 3.55x |
RandomResizedCrop | 4347 ± 37 | - | - | 661 ± 16 | 837 ± 37 | 5.19x |
Resize | 3532 ± 67 | 1083 ± 21 | 2995 ± 70 | 645 ± 13 | 260 ± 9 | 1.18x |
Rotate | 2912 ± 68 | 1739 ± 105 | 2574 ± 10 | 256 ± 2 | 258 ± 4 | 1.13x |
SaltAndPepper | 629 ± 6 | - | - | 480 ± 12 | - | 1.31x |
Saturation | 1596 ± 24 | - | 495 ± 3 | 155 ± 2 | - | 3.22x |
Sharpen | 2346 ± 10 | - | 1101 ± 30 | 201 ± 2 | 220 ± 3 | 2.13x |
Shear | 1299 ± 11 | - | 1244 ± 14 | 261 ± 1 | - | 1.04x |
Snow | 611 ± 9 | - | - | 143 ± 1 | - | 4.28x |
Solarize | 11756 ± 481 | - | 3843 ± 80 | 263 ± 6 | 1032 ± 14 | 3.06x |
ThinPlateSpline | 82 ± 1 | - | - | 58 ± 0 | - | 1.41x |
VerticalFlip | 32386 ± 936 | 16830 ± 1653 | 19935 ± 1708 | 2872 ± 37 | 4696 ± 161 | 1.62x |
To create a pull request to the repository, follow the documentation at CONTRIBUTING.md
If you find this library useful for your research, please consider citing Albumentations: Fast and Flexible Image Augmentations:
@Article{info11020125,
AUTHOR = {Buslaev, Alexander and Iglovikov, Vladimir I. and Khvedchenya, Eugene and Parinov, Alex and Druzhinin, Mikhail and Kalinin, Alexandr A.},
TITLE = {Albumentations: Fast and Flexible Image Augmentations},
JOURNAL = {Information},
VOLUME = {11},
YEAR = {2020},
NUMBER = {2},
ARTICLE-NUMBER = {125},
URL = {https://www.mdpi.com/2078-2489/11/2/125},
ISSN = {2078-2489},
DOI = {10.3390/info11020125}
}
Never miss updates, tutorials, and tips from the Albumentations team! Subscribe to our newsletter.