Evaluating Augmentation Invariance in CLIP and DINOv2

Authors

  • Nahumi Nugrahaningsih, Department of Informatics Engineering, Universitas Palangka Raya

DOI:

https://doi.org/10.47111/jointecoms.v5i3.25262

Keywords:

Image augmentation, CLIP, DINOv2, Augmentation invariance, Vision foundation models

Abstract

Vision foundation models (VFMs) are increasingly used as visual encoders across a wide range of computer vision tasks. Nevertheless, how stable the visual representations produced by pre-trained models remain under common image transformations is still not fully understood. This study analyzes augmentation sensitivity in two VFMs, CLIP ViT-B/32 and DINOv2 ViT-B/14, used as frozen encoders. Experiments were conducted on CIFAR-10 with five image-augmentation conditions: horizontal flip, random crop, color jitter, Gaussian blur, and a combination of these augmentations. Representation stability was measured with the cosine similarity between embeddings of original and augmented images and with intra-class embedding variance. Differences between the models were analyzed with the Wilcoxon signed-rank test under Benjamini–Hochberg false discovery rate correction, and the effect of augmentation type was tested with the Friedman test. The results show that CLIP consistently exhibits higher augmentation invariance than DINOv2 across all augmentation conditions (p < 0.001). The largest difference appears under Gaussian blur, with a large effect size (r = 0.866), while the smallest occurs under color jitter (r = 0.139). These results point to a trade-off between representational richness and augmentation stability in frozen vision foundation models. The findings offer an empirical account of the representational behavior of two models widely used in computer vision pipelines.
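
To make the measurement protocol concrete, below is a minimal sketch of the invariance metric described in the abstract: embed each CIFAR-10 image with a frozen encoder before and after augmentation, then take the cosine similarity between the two embeddings. It assumes the OpenAI `clip` package and the DINOv2 `torch.hub` entry point; the augmentation parameters, preprocessing, and sample size are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T
from torchvision.datasets import CIFAR10
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen encoders: loaded once, kept in eval mode, used under no_grad.
clip_model, clip_preprocess = clip.load("ViT-B/32", device=device)
clip_model.eval()
dino = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14").to(device).eval()

# DINOv2 expects ImageNet-normalized inputs with sides divisible by 14.
dino_preprocess = T.Compose([
    T.Resize(224), T.CenterCrop(224), T.ToTensor(),
    T.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

# The five augmentation conditions from the abstract (parameter values
# here are illustrative assumptions, not the paper's exact settings).
augmentations = {
    "horizontal_flip": T.RandomHorizontalFlip(p=1.0),
    "random_crop": T.RandomResizedCrop(32, scale=(0.6, 1.0)),
    "color_jitter": T.ColorJitter(0.4, 0.4, 0.4, 0.1),
    "gaussian_blur": T.GaussianBlur(kernel_size=3, sigma=(0.5, 1.5)),
}
augmentations["combined"] = T.Compose(list(augmentations.values()))

@torch.no_grad()
def cosine_invariance(encode, preprocess, images, aug):
    """Per-image cosine similarity between original and augmented embeddings."""
    orig = torch.stack([preprocess(im) for im in images]).to(device)
    augd = torch.stack([preprocess(aug(im)) for im in images]).to(device)
    z0 = F.normalize(encode(orig).float(), dim=-1)
    z1 = F.normalize(encode(augd).float(), dim=-1)
    return (z0 * z1).sum(dim=-1).cpu()

data = CIFAR10(root="data", download=True)   # yields PIL images
images = [data[i][0] for i in range(256)]    # small illustrative subset

encoders = {
    "CLIP ViT-B/32": (clip_model.encode_image, clip_preprocess),
    "DINOv2 ViT-B/14": (dino, dino_preprocess),
}
for model_name, (encode, pre) in encoders.items():
    for aug_name, aug in augmentations.items():
        sims = cosine_invariance(encode, pre, images, aug)
        print(f"{model_name:16s} {aug_name:16s} mean cos-sim = {sims.mean().item():.4f}")
```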
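
The abstract's second stability measure, intra-class embedding variance, can be read as the mean squared distance of each class's embeddings to its class centroid, averaged over classes. The helper `intra_class_variance` below is a hypothetical implementation of that reading; the paper may operationalize the term differently.

```python
import torch

def intra_class_variance(embeddings: torch.Tensor, labels: torch.Tensor) -> float:
    """Mean squared distance of embeddings to their class centroid, averaged
    over classes; lower values mean tighter (more stable) class clusters."""
    variances = []
    for c in labels.unique():
        z = embeddings[labels == c]
        centroid = z.mean(dim=0, keepdim=True)
        variances.append(((z - centroid) ** 2).sum(dim=1).mean())
    return torch.stack(variances).mean().item()

# Illustrative call on random data shaped like CLIP ViT-B/32 embeddings.
emb = torch.randn(256, 512)
lab = torch.randint(0, 10, (256,))
print(intra_class_variance(emb, lab))
```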
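
The statistical comparison named in the abstract (paired Wilcoxon signed-rank tests with Benjamini–Hochberg correction, a Friedman test, and effect sizes r) can likewise be sketched with SciPy and statsmodels. The `clip_sims`/`dino_sims` arrays below are random placeholders standing in for per-image similarities from the measurement sketch above, and the effect size is recovered as r = Z / sqrt(N) from the two-sided p-value, which is one common convention rather than necessarily the paper's exact procedure.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare, norm
from statsmodels.stats.multitest import multipletests

aug_names = ["horizontal_flip", "random_crop", "color_jitter",
             "gaussian_blur", "combined"]

# Placeholder per-image cosine similarities; in practice these come from
# cosine_invariance(...) above, paired over the same 256 images.
rng = np.random.default_rng(0)
clip_sims = {n: rng.uniform(0.85, 1.00, 256) for n in aug_names}
dino_sims = {n: rng.uniform(0.70, 1.00, 256) for n in aug_names}

# Paired Wilcoxon signed-rank test per augmentation (CLIP vs. DINOv2),
# with effect size r = Z / sqrt(N), Z recovered from the two-sided p-value.
pvals, effects = [], []
for n in aug_names:
    x, y = clip_sims[n], dino_sims[n]
    _, p = wilcoxon(x, y)
    z = norm.isf(p / 2)  # approximate |Z| for a two-sided test
    pvals.append(p)
    effects.append(z / np.sqrt(len(x)))

# Benjamini–Hochberg FDR correction across the five paired tests.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

# Friedman test: do the five augmentation conditions differ within a model?
chi2, p_friedman = friedmanchisquare(*(clip_sims[n] for n in aug_names))

for n, p, r, sig in zip(aug_names, p_adj, effects, reject):
    print(f"{n:16s} BH-adjusted p = {p:.2e}  r = {r:.3f}  significant: {sig}")
print(f"Friedman (CLIP): chi2 = {chi2:.2f}, p = {p_friedman:.2e}")
```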

Published

2025-09-30

How to Cite

Nugrahaningsih, N. (2025). Evaluating Augmentation Invariance in CLIP and DINOv2. Journal of Information Technology and Computer Science, 5(3), 320–327. https://doi.org/10.47111/jointecoms.v5i3.25262
