## Description
Add useClassification docs.
### Type of change
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [x] Documentation update (improves or adds clarity to existing documentation)
### Checklist
- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [x] I have updated the documentation accordingly
- [ ] My changes generate no new warnings
---------
Co-authored-by: Norbert Klockiewicz <[email protected]>
Co-authored-by: chmjkb <[email protected]>
Image classification is the process of assigning a label to an image that best describes its contents. For example, when given an image of a puppy, the image classifier should assign the puppy class to that image.
:::info
Usually, the class with the highest probability is the one that is assigned to an image. However, if there are multiple classes with comparatively high probabilities, this may indicate that the model is not confident in its prediction.
:::
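The note above can be made concrete with a small helper. This is an illustrative sketch, not part of the library's API: given the `{ [category: string]: number }` object that `forward` resolves to, it returns the most probable label and flags the prediction as ambiguous when the runner-up class comes close. The `margin` threshold is an arbitrary choice for demonstration.

```typescript
type Probabilities = { [category: string]: number };

// Pick the most probable class and flag low-confidence predictions.
// `margin` is an arbitrary threshold chosen for illustration.
function topClass(
  probabilities: Probabilities,
  margin = 0.2
): { label: string; probability: number; ambiguous: boolean } {
  const sorted = Object.entries(probabilities).sort((a, b) => b[1] - a[1]);
  const [label, probability] = sorted[0];
  const runnerUp = sorted[1]?.[1] ?? 0;
  return { label, probability, ambiguous: probability - runnerUp < margin };
}

const result = topClass({ puppy: 0.91, kitten: 0.06, fox: 0.03 });
// result.label === 'puppy', result.ambiguous === false
```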
:::caution
It is recommended to use models provided by us, which are available at our [Hugging Face repository](https://huggingface.co/software-mansion/react-native-executorch-efficientnet-v2-s). You can also use [constants](https://github.com/software-mansion/react-native-executorch/tree/main/src/constants/modelUrls.ts) shipped with our library.
:::

### Arguments

`modelSource`

A string that specifies the location of the model binary. For more information, take a look at the [loading models](../fundamentals/loading-models.md) page.
### Returns

The hook returns an object with the following properties:

| Field | Type | Description |
| ----- | ---- | ----------- |
|`forward`|`(input: string) => Promise<{ [category: string]: number }>`| Executes the model's forward pass, where `input` can be a fetchable resource or a Base64-encoded string. |
|`error`| <code>string | null</code> | Contains the error message if the model failed to load. |
|`isGenerating`|`boolean`| Indicates whether the model is currently processing an inference. |
|`isReady`|`boolean`| Indicates whether the model has successfully loaded and is ready for inference. |
## Running the model
To run the model, you can use the `forward` method. It accepts one argument, which is the image. The image can be a remote URL, a local file URI, or a base64-encoded image. The function returns a promise, which can resolve either to an error or to an object mapping categories to their probabilities.
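The flow around `forward` can be sketched as follows. The `ClassificationModule` interface below mirrors the returned object's documented properties; the stubbed object is a stand-in for what `useClassification` actually returns and exists only so the sketch is self-contained and runnable:

```typescript
// Shape of the hook's result, mirroring the documented properties.
interface ClassificationModule {
  error: string | null;
  isReady: boolean;
  isGenerating: boolean;
  forward: (input: string) => Promise<{ [category: string]: number }>;
}

// Guard against calling forward before the model is ready,
// and surface load errors instead of running inference.
async function classify(
  model: ClassificationModule,
  imageSource: string
): Promise<{ [category: string]: number }> {
  if (model.error) throw new Error(model.error);
  if (!model.isReady) throw new Error("Model is still loading");
  return model.forward(imageSource);
}

// Stub standing in for the real hook's return value, for demonstration only.
const stub: ClassificationModule = {
  error: null,
  isReady: true,
  isGenerating: false,
  forward: async () => ({ puppy: 0.97, kitten: 0.03 }),
};

classify(stub, "https://example.com/puppy.png").then((probs) =>
  console.log(probs)
);
```

In a real component, the object returned by `useClassification` would be passed where the stub is used here.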
:::info[Info]
Images from external sources are stored in your application's temporary directory.
:::
Object detection is a computer vision technique that identifies and locates objects within images or video. It's commonly used in applications like image recognition, video surveillance, or autonomous driving.

`useObjectDetection` is a hook that lets you seamlessly integrate object detection into your React Native application. Currently, the SSDLite320Large model with a MobileNetV3 backbone is supported.
:::caution
It is recommended to use models provided by us, which are available at our [Hugging Face repository](https://huggingface.co/software-mansion/react-native-executorch-ssdlite320-mobilenet-v3-large). You can also use [constants](https://github.com/software-mansion/react-native-executorch/blob/69802ee1ca161d9df00def1dabe014d36341cfa9/src/constants/modelUrls.ts#L28) shipped with our library.
:::
```typescript
// ...
  modelSource: SSDLITE_320_MOBILENET_V3_LARGE, // alternatively, you can use require(...)
});
// ...
```

<details>
<summary>Type definitions</summary>
```typescript
interface Bbox {
  x1: number;
  x2: number;
  y1: number;
  y2: number;
}

interface Detection {
  bbox: Bbox;
  label: keyof typeof CocoLabel;
  score: number;
}

interface ObjectDetectionModule {
  error: string | null;
  isReady: boolean;
  isGenerating: boolean;
  forward: (input: string) => Promise<Detection[]>;
}
```
</details>
### Arguments
`modelSource`
A string that specifies the path to the model file. You can download the model from our [HuggingFace repository](https://huggingface.co/software-mansion/react-native-executorch-ssdlite320-mobilenet-v3-large/tree/main).
For more information on that topic, you can check out the [Loading models](https://docs.swmansion.com/react-native-executorch/fundamentals/loading-models) page.
### Returns
The hook returns an object with the following properties:
| Field | Type | Description |
| ----- | ---- | ----------- |
|`forward`|`(input: string) => Promise<Detection[]>`| A function that accepts an image (URL, base64) and returns an array of `Detection` objects. |
|`error`| <code>string | null</code> | Contains the error message if the model loading failed.|
|`isGenerating`|`boolean`| Indicates whether the model is currently processing an inference.|
|`isReady`|`boolean`| Indicates whether the model has successfully loaded and is ready for inference.|
## Running the model
To run the model, you can use the `forward` method. It accepts one argument, which is the image. The image can be a remote URL, a local file URI, or a base64-encoded image. The function returns an array of `Detection` objects. Each object contains coordinates of the bounding box, the label of the detected object, and the confidence score. For more information, please refer to the reference or type definitions.
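As an illustration of post-processing the result (this helper is not part of the library), here is how the returned array might be filtered down to confident detections. The types mirror the definitions above; `label` is simplified to `string`, and the `0.5` threshold is an arbitrary choice:

```typescript
interface Bbox {
  x1: number;
  x2: number;
  y1: number;
  y2: number;
}

interface Detection {
  bbox: Bbox;
  label: string; // `keyof typeof CocoLabel` in the library
  score: number;
}

// Keep only detections above a confidence threshold,
// most confident first. The 0.5 default is an arbitrary choice.
function confidentDetections(
  detections: Detection[],
  threshold = 0.5
): Detection[] {
  return detections
    .filter((d) => d.score >= threshold)
    .sort((a, b) => b.score - a.score);
}

const detections: Detection[] = [
  { bbox: { x1: 0, y1: 0, x2: 10, y2: 10 }, label: "dog", score: 0.92 },
  { bbox: { x1: 5, y1: 5, x2: 8, y2: 9 }, label: "cat", score: 0.31 },
];
// confidentDetections(detections) keeps only the "dog" detection
```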
## Detection object
The detection object is specified as follows:
```typescript
interface Bbox {
  x1: number;
  x2: number;
  y1: number;
  y2: number;
}

interface Detection {
  bbox: Bbox;
  label: keyof typeof CocoLabel;
  score: number;
}
```
The `bbox` property contains information about the bounding box of detected objects. It is represented as two points: one at the bottom-left corner of the bounding box (`x1`, `y1`) and the other at the top-right corner (`x2`, `y2`).
The `label` property contains the name of the detected object, which corresponds to one of the `CocoLabels`. The `score` represents the confidence score of the detected object.
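Given that corner convention, the box's width, height, and center can be derived directly. This is an illustrative helper, not part of the library:

```typescript
interface Bbox {
  x1: number;
  x2: number;
  y1: number;
  y2: number;
}

// Derive box dimensions from the two corners:
// (x1, y1) bottom-left, (x2, y2) top-right.
function boxGeometry(bbox: Bbox) {
  return {
    width: bbox.x2 - bbox.x1,
    height: bbox.y2 - bbox.y1,
    centerX: (bbox.x1 + bbox.x2) / 2,
    centerY: (bbox.y1 + bbox.y2) / 2,
  };
}

// boxGeometry({ x1: 0, y1: 0, x2: 10, y2: 20 })
// → { width: 10, height: 20, centerX: 5, centerY: 10 }
```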
`docs/docs/computer-vision/useStyleTransfer.mdx`:
---
title: useStyleTransfer
sidebar_position: 3
---
Style transfer is a technique used in computer graphics and machine learning where the visual style of one image is applied to the content of another. This is achieved using algorithms that manipulate data from both images, typically with the aid of a neural network. The result is a new image that combines the artistic elements of one picture with the structural details of another, effectively merging art with traditional imagery. React Native ExecuTorch offers a dedicated hook, `useStyleTransfer`, for this task. However, before you start, you'll need to obtain an ExecuTorch-compatible model binary.