DINOv3 VITS16 CoreML INT8

CoreML conversion of facebook/dinov3-vits16-pretrain-lvd1689m optimized for Apple Silicon.

Model Details

  • Base Model: facebook/dinov3-vits16-pretrain-lvd1689m
  • Framework: CoreML
  • Precision: INT8
  • Input Size: 224×224
  • Model Size: 21.1 MB

Usage

Python (CoreML)

import coremltools as ct
from PIL import Image

# Load the CoreML package
model = ct.models.MLModel("dinov3_vits16_224x224_int8.mlpackage")

# Prepare image: the model expects a 224×224 RGB input
image = Image.open("image.jpg").convert("RGB").resize((224, 224))

# Extract features
output = model.predict({"image": image})
features = output["features"]  # Shape: [1, embed_dim, grid, grid] = [1, 384, 14, 14] for ViT-S/16 at 224×224
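The dense feature map is easiest to work with as a set of patch tokens. A minimal post-processing sketch, assuming the `[1, 384, 14, 14]` output shape above (a random array stands in for the model output so the snippet runs without the `.mlpackage`):

```python
import numpy as np

# Stand-in for the model's "features" output; a real run would use
# features = model.predict({"image": image})["features"]
features = np.random.rand(1, 384, 14, 14).astype(np.float32)

# Flatten the 14×14 spatial grid into 196 patch tokens of dim 384
tokens = features.reshape(384, -1).T  # [196, 384]

# L2-normalize each token so cosine similarity becomes a dot product
tokens /= np.linalg.norm(tokens, axis=1, keepdims=True)

# Simple global descriptor: mean-pooled, renormalized patch embedding
global_desc = tokens.mean(axis=0)
global_desc /= np.linalg.norm(global_desc)

print(tokens.shape, global_desc.shape)  # (196, 384) (384,)
```

The normalized tokens can feed retrieval, clustering, or the visualizations shown later in this card.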

Swift/iOS

import CoreML
import UIKit

// Load model
guard let model = try? MLModel(contentsOf: modelURL) else {
    fatalError("Failed to load model")
}

// Prepare image
guard let cgImage = UIImage(named: "image.jpg")?.cgImage else {
    fatalError("Failed to load image")
}

// Wrap the image in a feature value sized for the model's 224×224 input
let imageValue = try MLFeatureValue(
    cgImage: cgImage,
    pixelsWide: 224,
    pixelsHigh: 224,
    pixelFormatType: kCVPixelFormatType_32BGRA,
    options: nil
)

// Extract features (input/output names match the converted model)
let provider = try MLDictionaryFeatureProvider(dictionary: ["image": imageValue])
let output = try model.prediction(from: provider)
let features = output.featureValue(for: "features")?.multiArrayValue

Performance

Performance metrics on Apple Silicon:

CoreML Performance

  • Throughput: 110.43 FPS
  • Latency (mean ± std): 9.06 ± 4.22 ms
  • Min Latency: 7.99 ms
  • Max Latency: 38.58 ms
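Metrics like these can be reproduced with a simple timing harness. A sketch of one, with a `time.sleep` stand-in so it runs without the model loaded (in practice you would pass `lambda: model.predict({"image": image})`):

```python
import time
import statistics

def benchmark(predict, warmup=10, runs=100):
    """Time repeated predictions and report latency stats in ms."""
    for _ in range(warmup):  # warm caches / compile for the ANE before timing
        predict()
    latencies = []
    for _ in range(runs):
        t0 = time.perf_counter()
        predict()
        latencies.append((time.perf_counter() - t0) * 1000.0)
    return {
        "mean_ms": statistics.mean(latencies),
        "std_ms": statistics.stdev(latencies),
        "min_ms": min(latencies),
        "max_ms": max(latencies),
        "fps": 1000.0 / statistics.mean(latencies),
    }

# Stand-in workload; replace with the real CoreML predict call
stats = benchmark(lambda: time.sleep(0.001))
print(stats)
```

Warmup runs matter on Apple Silicon: the first few predictions pay one-time compilation and scheduling costs that would otherwise skew the max-latency figure.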

Model Specifications

  • Precision: INT8
  • Input Size: 224×224
  • Model Size: 21.1 MB

License

This model is released under the DINOv3 License. See LICENSE.md for details.

Citation

@article{dinov3,
  title={DINOv3: A Versatile Vision Foundation Model},
  author={Meta AI Research},
  journal={arXiv preprint arXiv:2508.10104},
  year={2025}
}

Reference: DINOv3 Paper

Key contributions:

  • Gram anchoring strategy for high-quality dense feature maps
  • Self-supervised learning on 1.689B images
  • Superior performance on dense vision tasks
  • Versatile across tasks and domains without fine-tuning

Demo Images

Input Image

*Sample input image for feature extraction demonstration*

Feature Visualization

Feature Comparison Visualization

The visualization shows:

  • PCA projection of high-dimensional features (RGB visualization)
  • Feature channel activations showing spatial patterns
  • Gram matrix analysis for object similarity detection
  • Side-by-side comparison with PyTorch reference implementation
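The PCA-to-RGB projection and the Gram matrix in that list can be sketched in a few lines of NumPy. Random tokens stand in for the `[196, 384]` patch features derived from the model output:

```python
import numpy as np

# Stand-in patch features; a real run would reshape the CoreML output
# [1, 384, 14, 14] into [196, 384]
tokens = np.random.rand(196, 384).astype(np.float32)

# PCA via SVD: project each patch onto the top 3 principal components
centered = tokens - tokens.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = centered @ vt[:3].T  # [196, 3]

# Min-max scale each component to [0, 1], lay out as a 14×14 RGB image
proj = (proj - proj.min(axis=0)) / (np.ptp(proj, axis=0) + 1e-8)
rgb = proj.reshape(14, 14, 3)

# Gram matrix of normalized tokens: patch-to-patch cosine similarity
norm = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
gram = norm @ norm.T  # [196, 196]

print(rgb.shape, gram.shape)
```

Running the same code on CoreML and PyTorch feature maps and comparing the resulting images is how the side-by-side check above is typically done.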

This comprehensive visualization demonstrates that CoreML conversion preserves the semantic structure and feature quality of the original DINOv3 model.

Powered By DINOv3

🌟 This model is powered by DINOv3 🌟

Converted by Aegis AI for optimized Apple Silicon deployment.

Last updated: 2025-11-03
