onnxruntime::download::vision::image_classification

Enum ImageClassification

pub enum ImageClassification {
    MobileNet,
    ResNet(ResNet),
    SqueezeNet,
    Vgg(Vgg),
    AlexNet,
    GoogleNet,
    CaffeNet,
    RcnnIlsvrc13,
    DenseNet121,
    Inception(InceptionVersion),
    ShuffleNet(ShuffleNetVersion),
    ZFNet512,
    EfficientNetLite4,
}

Image classification model

This collection of models takes images as input and classifies the major objects in the images into 1000 object categories, such as keyboard, mouse, pencil, and many animals.

Source: https://github.com/onnx/models#image-classification-
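As a minimal sketch of how these variants are used in practice, the snippet below downloads and loads one of the models through the crate's session builder. It assumes the crate is built with its model-fetching feature enabled and that `ImageClassification` is re-exported from `onnxruntime::download::vision`; the builder method names (`with_model_downloaded`, `with_optimization_level`) follow the crate's session-builder API, so check them against the version you depend on.

```rust
// Sketch: download one of the ImageClassification models and load it into a
// session. Assumes the onnxruntime crate's model-fetching support is enabled.
use onnxruntime::{
    download::vision::ImageClassification,
    environment::Environment,
    GraphOptimizationLevel, LoggingLevel,
};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // An Environment must outlive every session created from it.
    let environment = Environment::builder()
        .with_name("classify")
        .with_log_level(LoggingLevel::Warning)
        .build()?;

    // Pick a variant; SqueezeNet is small, so it downloads quickly.
    let _session = environment
        .new_session_builder()?
        .with_optimization_level(GraphOptimizationLevel::Basic)?
        .with_model_downloaded(ImageClassification::SqueezeNet)?;

    // The session is now ready for inference with the downloaded model.
    Ok(())
}
```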

Variants

MobileNet

Image classification aimed for mobile targets.

MobileNet models perform image classification: they take images as input and classify the major object in the image into a set of pre-defined classes. They are trained on the ImageNet dataset, which contains images from 1000 classes. MobileNet models are also very efficient in terms of speed and size, and hence are ideal for embedded and mobile applications.

Source: https://github.com/onnx/models/tree/master/vision/classification/mobilenet

Variant downloaded: ONNX Version 1.2.1 with Opset Version 7.

ResNet(ResNet)

Image classification, trained on ImageNet with 1000 classes.

ResNet models provide very high accuracies with affordable model sizes. They are ideal for cases when high accuracy of classification is required.

Source: https://github.com/onnx/models/tree/master/vision/classification/resnet

SqueezeNet

A small CNN with AlexNet-level accuracy on ImageNet with 50x fewer parameters.

SqueezeNet is a small CNN that achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. SqueezeNet requires less communication across servers during distributed training, needs less bandwidth to export a new model from the cloud to an autonomous car, and is more feasible to deploy on FPGAs and other hardware with limited memory.

Source: https://github.com/onnx/models/tree/master/vision/classification/squeezenet

Variant downloaded: SqueezeNet v1.1, ONNX Version 1.2.1 with Opset Version 7.

Vgg(Vgg)

Image classification, trained on ImageNet with 1000 classes.

VGG models provide very high accuracies but at the cost of increased model sizes. They are ideal for cases when high accuracy of classification is essential and there are limited constraints on model sizes.

Source: https://github.com/onnx/models/tree/master/vision/classification/vgg

AlexNet

Convolutional neural network for classification, which competed in the ImageNet Large Scale Visual Recognition Challenge in 2012.

Source: https://github.com/onnx/models/tree/master/vision/classification/alexnet

Variant downloaded: ONNX Version 1.4 with Opset Version 9.

GoogleNet

Convolutional neural network for classification, which competed in the ImageNet Large Scale Visual Recognition Challenge in 2014.

Source: https://github.com/onnx/models/tree/master/vision/classification/inception_and_googlenet/googlenet

Variant downloaded: ONNX Version 1.4 with Opset Version 9.

CaffeNet

A variant of AlexNet: a convolutional neural network for classification, which competed in the ImageNet Large Scale Visual Recognition Challenge in 2012.

Source: https://github.com/onnx/models/tree/master/vision/classification/caffenet

Variant downloaded: ONNX Version 1.4 with Opset Version 9.

RcnnIlsvrc13

Convolutional neural network for detection.

This model was made by transplanting the R-CNN SVM classifiers into a fc-rcnn classification layer.

Source: https://github.com/onnx/models/tree/master/vision/classification/rcnn_ilsvrc13

Variant downloaded: ONNX Version 1.4 with Opset Version 9.

DenseNet121

Convolutional neural network for classification.

Source: https://github.com/onnx/models/tree/master/vision/classification/densenet-121

Variant downloaded: ONNX Version 1.4 with Opset Version 9.

Inception(InceptionVersion)

Google’s Inception network for image classification.

ShuffleNet(ShuffleNetVersion)

Computationally efficient CNN architecture designed specifically for mobile devices with very limited computing power.

Source: https://github.com/onnx/models/tree/master/vision/classification/shufflenet

ZFNet512

Deep convolutional networks for classification.

This model’s 4th layer has 512 maps instead of the 1024 maps mentioned in the paper.

Source: https://github.com/onnx/models/tree/master/vision/classification/zfnet-512

EfficientNetLite4

Image classification model that achieves state-of-the-art accuracy.

It is designed to run on mobile CPU, GPU, and EdgeTPU devices, allowing for applications on mobile and IoT, where computational resources are limited.

Source: https://github.com/onnx/models/tree/master/vision/classification/efficientnet-lite4

Variant downloaded: ONNX Version 1.7.0 with Opset Version 11.

Trait Implementations

impl Clone for ImageClassification

    fn clone(&self) -> ImageClassification
        Returns a copy of the value.

    fn clone_from(&mut self, source: &Self)
        Performs copy-assignment from source.

impl Debug for ImageClassification

    fn fmt(&self, f: &mut Formatter<'_>) -> Result
        Formats the value using the given formatter.

impl From<ImageClassification> for AvailableOnnxModel

    fn from(model: ImageClassification) -> Self
        Converts to this type from the input type.
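The `From` impl above means any `ImageClassification` variant can be handed to an API that expects the crate-wide `AvailableOnnxModel` type, either explicitly or via `.into()`. A minimal sketch, assuming `AvailableOnnxModel` is exported from `onnxruntime::download`:

```rust
// Sketch: converting an ImageClassification variant into the general
// AvailableOnnxModel type via the From/Into pair.
use onnxruntime::download::{vision::ImageClassification, AvailableOnnxModel};

fn main() {
    // Explicit conversion through From:
    let _model = AvailableOnnxModel::from(ImageClassification::MobileNet);

    // Equivalent conversion through the blanket Into impl; either form can
    // be passed to functions that accept any downloadable model.
    let _model: AvailableOnnxModel = ImageClassification::MobileNet.into();
}
```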

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

    fn type_id(&self) -> TypeId
        Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

    fn borrow(&self) -> &T
        Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

    fn borrow_mut(&mut self) -> &mut T
        Mutably borrows from an owned value.

impl<T> CloneToUninit for T
where T: Clone,

    unsafe fn clone_to_uninit(&self, dst: *mut u8)
        🔬 This is a nightly-only experimental API. (clone_to_uninit)
        Performs copy-assignment from self to dst.

impl<T> From<T> for T

    fn from(t: T) -> T
        Returns the argument unchanged.

impl<T> Instrument for T

    fn instrument(self, span: Span) -> Instrumented<Self>
        Instruments this type with the provided Span, returning an Instrumented wrapper.

    fn in_current_span(self) -> Instrumented<Self>
        Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

    fn into(self) -> U
        Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> ToOwned for T
where T: Clone,

    type Owned = T
        The resulting type after obtaining ownership.

    fn to_owned(&self) -> T
        Creates owned data from borrowed data, usually by cloning.

    fn clone_into(&self, target: &mut T)
        Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

    type Error = Infallible
        The type returned in the event of a conversion error.

    fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
        Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

    type Error = <U as TryFrom<T>>::Error
        The type returned in the event of a conversion error.

    fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
        Performs the conversion.

impl<T> WithSubscriber for T

    fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
    where S: Into<Dispatch>,
        Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

    fn with_current_subscriber(self) -> WithDispatch<Self>
        Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.