
[WIP] Polygon tracking feature with Cutie #8261

Closed · wants to merge 4 commits

Conversation


@aashutoshpy aashutoshpy commented Aug 6, 2024

Motivation and context

In relation to the issue of integrating Video Object Segmentation for Polygon Tracking, which has been raised several times, this pull request proposes the following:

  • Integrate Cutie, a successor to the XMem tracker previously used in "Polygon tracking feature with XMem tracker" #7829, to enable the polygon tracking feature.
  • Help with multi-object segmentation in videos.
  • Lay the groundwork for integrating the recently introduced Segment Anything 2.0.

Related issues are as follows

How has this been tested?

It is still a work in progress. I am new to CVAT development and am looking for help/suggestions on the following:

  • Once drawing is complete (the Done button is pressed), nothing happens. What changes should be made to tools-control.tsx to send the shapes to the tracker?
  • Long-term memory might slow things down, since we need to restore weights before every inference. Is there a better way to achieve this? It might also be buggy and needs testing.
  • Right now there are separate buttons for rectangle tracking and polygon tracking; these should be merged into a single button that adapts to whichever tracker is in use.
  • Enable automatic video object segmentation that takes a mask from one frame (or a few frames) and propagates it to all other frames with a single click, and to videos from other camera views.
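
For context, the serverless handler in main.py parses a JSON body with a base64-encoded image, a list of shapes, and a list of opaque per-shape tracker states. A client-side sketch of building that payload (the helper name and sample values here are illustrative, not part of the PR) could look like:

```python
import base64
import json

def build_tracker_request(image_bytes, shapes, states=None):
    # Mirrors the fields the Cutie handler reads: data["image"] (base64),
    # data["shapes"] (flat [x0, y0, x1, y1, ...] point lists per object),
    # and data["states"] (tracker state returned from the previous frame).
    return json.dumps({
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "shapes": shapes,
        "states": states or [],  # empty on the first tracked frame
    })

payload = build_tracker_request(b"\x89PNG...", [[10.0, 10.0, 50.0, 10.0, 50.0, 50.0]])
```

The handler then returns updated `shapes` and `states` arrays, which the client passes back on the next frame.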

Checklist

  • I submit my changes into the develop branch
  • I have created a changelog fragment
  • I have updated the documentation accordingly
  • I have added tests to cover my changes
  • I have linked related issues (see GitHub docs)
  • I have increased versions of npm packages if it is necessary
    (cvat-canvas,
    cvat-core,
    cvat-data and
    cvat-ui)

License

  • I submit my code changes under the same MIT License that covers the project.
    Feel free to contact the maintainers if that's a concern.

Summary by CodeRabbit

  • New Features

    • Introduced polygon tracking capabilities, allowing users to track objects using polygons instead of just rectangles.
    • Added new buttons for tracking polygons, enhancing user interaction.
    • Implemented a comprehensive evaluation configuration for model assessments.
    • Introduced structured logging configurations for improved monitoring.
  • Enhancements

    • Enhanced model architecture configurations for better pixel data and object recognition tasks.
    • Implemented serverless function configurations for video object segmentation, leveraging GPU resources for improved performance.
  • Bug Fixes

    • Resolved issues related to state management and shape processing within the model handler.


coderabbitai bot commented Aug 6, 2024

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.
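
For instance, a minimal `.coderabbit.yaml` fragment using the key path named above would be:

```yaml
# .coderabbit.yaml: silence the "Review skipped" status message
reviews:
  review_status: false
```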

Walkthrough

This update enhances the CVAT framework by introducing a polygon tracking functionality in the annotation tools, allowing users to track objects using polygons instead of just rectangles. Additionally, a new evaluation configuration and logging setups are introduced for model evaluation and performance monitoring, improving the overall usability and efficiency of the system for video object segmentation tasks.

Changes

| File | Change Summary |
| --- | --- |
| cvat-ui/src/components/annotation-page/standard-workspace/controls-side-bar/tools-control.tsx | Updated shape handling from rectangles to polygons; modified UI buttons for tracking options. |
| serverless/pytorch/redefine/cutie/nuclio/config/eval_config.yaml | Introduced configuration for model evaluation, including datasets and memory management parameters. |
| serverless/pytorch/redefine/cutie/nuclio/config/hydra/job_logging/custom-no-rank.yaml | New logging configuration for Python tasks in a serverless environment. |
| serverless/pytorch/redefine/cutie/nuclio/config/hydra/job_logging/custom.yaml | Structured logging setup for Hydra-based tasks with console and file output configurations. |
| serverless/pytorch/redefine/cutie/nuclio/config/model/base.yaml | Comprehensive model architecture configuration for pixel data processing and object recognition. |
| serverless/pytorch/redefine/cutie/nuclio/function-gpu.yaml | Configuration for a serverless function using GPU for video object segmentation with detailed build steps. |
| serverless/pytorch/redefine/cutie/nuclio/function.yaml | Serverless function configuration for the Cutie tracker with Python environment setup. |
| serverless/pytorch/redefine/cutie/nuclio/main.py | Implemented serverless function logic for image processing and model handling. |
| serverless/pytorch/redefine/cutie/nuclio/model_handler.py | ModelHandler class for processing images, managing model states, and handling conversions between polygons and masks. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant ToolsControl
    participant ModelHandler

    User->>ToolsControl: Initiate Polygon Tracking
    ToolsControl->>ModelHandler: Send Image and Polygon Data
    ModelHandler->>ModelHandler: Process Image for Tracking
    ModelHandler->>ToolsControl: Return Updated Shapes
    ToolsControl->>User: Display Updated Tracking Information
```

Assessment against linked issues

  • Support polygon tracking for video segmentation (#5465)
  • Enhance multi-object tracking capabilities (#4915): the implementation does not explicitly address multi-camera tracking.

🐰 In fields of green, with polygons bright,
I hop and I skip, with joy and delight.
Tracking shapes is now easy as pie,
For every frame, let the masks fly high!
With tools so fine, our work's a great treat,
A rabbit’s new dance, oh, isn't it sweet! 🌟


Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Outside diff range, codebase verification and nitpick comments (1)
serverless/pytorch/redefine/cutie/nuclio/main.py (1)

1-3: Add a brief description of the file.

Consider adding a brief description at the beginning of the file to provide context about its purpose.

# SPDX-License-Identifier: MIT
#
# This file contains the main logic for the Cutie model handler.
Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between f3dcc3e and 26a6834.

Files ignored due to path filters (1)
  • serverless/pytorch/redefine/cutie/nuclio/sample/sample.jpg is excluded by !**/*.jpg
Files selected for processing (9)
  • cvat-ui/src/components/annotation-page/standard-workspace/controls-side-bar/tools-control.tsx (3 hunks)
  • serverless/pytorch/redefine/cutie/nuclio/config/eval_config.yaml (1 hunks)
  • serverless/pytorch/redefine/cutie/nuclio/config/hydra/job_logging/custom-no-rank.yaml (1 hunks)
  • serverless/pytorch/redefine/cutie/nuclio/config/hydra/job_logging/custom.yaml (1 hunks)
  • serverless/pytorch/redefine/cutie/nuclio/config/model/base.yaml (1 hunks)
  • serverless/pytorch/redefine/cutie/nuclio/function-gpu.yaml (1 hunks)
  • serverless/pytorch/redefine/cutie/nuclio/function.yaml (1 hunks)
  • serverless/pytorch/redefine/cutie/nuclio/main.py (1 hunks)
  • serverless/pytorch/redefine/cutie/nuclio/model_handler.py (1 hunks)
Additional context used
yamllint
serverless/pytorch/redefine/cutie/nuclio/config/hydra/job_logging/custom-no-rank.yaml

[error] 22-22: no new line character at the end of file

(new-line-at-end-of-file)

serverless/pytorch/redefine/cutie/nuclio/config/hydra/job_logging/custom.yaml

[error] 22-22: no new line character at the end of file

(new-line-at-end-of-file)

serverless/pytorch/redefine/cutie/nuclio/function.yaml

[error] 72-72: no new line character at the end of file

(new-line-at-end-of-file)

Additional comments not posted (29)
serverless/pytorch/redefine/cutie/nuclio/config/model/base.yaml (6)

1-2: LGTM! Pixel statistics are correctly defined.

The pixel mean, standard deviation, and dimension are standard values for image normalization.

Also applies to: 4-4


10-12: LGTM! Encoder settings are correctly defined.

The pixel and mask encoder types and dimensions are appropriate for the model configuration.

Also applies to: 14-16


18-19: LGTM! Positional encoding settings are correctly defined.

The scale and temperature values are appropriate for positional encoding.


21-41: LGTM! Transformer settings are correctly defined.

The embedding dimensions, feed-forward dimensions, number of heads, blocks, queries, and attention settings are appropriate for the transformer configuration.


43-46: LGTM! Summarizer settings are correctly defined.

The embedding dimensions, number of summaries, and positional encoding are appropriate for the summarizer configuration.


48-54: LGTM! Loss and decoder settings are correctly defined.

The sensory and query loss settings with weights and the upsampling dimensions for the decoder are appropriate for the configuration.

Also applies to: 56-58

serverless/pytorch/redefine/cutie/nuclio/function.yaml (6)

1-13: LGTM! Metadata is correctly defined.

The name, namespace, annotations, and help message are appropriate for the function configuration.


15-23: LGTM! Specifications are correctly defined.

The description, runtime, handler, event timeout, and environment variables are appropriate for the function configuration.


24-55: LGTM! Build directives are correctly defined.

The image, base image, and various run commands for setting up the environment and installing dependencies are appropriate for the function configuration.


56-62: LGTM! Triggers are correctly defined.

The HTTP trigger with max workers and attributes is appropriate for the function configuration.


63-65: LGTM! Resource limits are correctly defined.

The GPU limits are appropriate for the function configuration.


67-72: LGTM! Platform settings are correctly defined.

The restart policy and mount mode are appropriate for the function configuration.

Tools
yamllint

[error] 72-72: no new line character at the end of file

(new-line-at-end-of-file)

serverless/pytorch/redefine/cutie/nuclio/function-gpu.yaml (6)

1-13: LGTM! Metadata is correctly defined.

The name, namespace, annotations, and help message are appropriate for the function configuration.


15-23: LGTM! Specifications are correctly defined.

The description, runtime, handler, event timeout, and environment variables are appropriate for the function configuration.


24-58: LGTM! Build directives are correctly defined.

The image, base image, and various run commands for setting up the environment and installing dependencies are appropriate for the function configuration.


59-63: LGTM! Triggers are correctly defined.

The HTTP trigger with max workers and attributes is appropriate for the function configuration.


64-68: LGTM! Resource limits are correctly defined.

The GPU limits are appropriate for the function configuration.


70-75: LGTM! Platform settings are correctly defined.

The restart policy and mount mode are appropriate for the function configuration.

serverless/pytorch/redefine/cutie/nuclio/config/eval_config.yaml (5)

1-4: LGTM!

The defaults section correctly references the base configurations and overrides.


6-10: LGTM!

The hydra section correctly specifies the run directory and output subdirectory.


11-23: LGTM!

The main configuration parameters are correctly specified and make sense in the context of the evaluation configuration.


25-34: LGTM!

The dataset-specific parameters are correctly specified and make sense in the context of the evaluation configuration.


54-147: LGTM!

The dataset configurations are correctly specified and make sense in the context of the evaluation configuration.

serverless/pytorch/redefine/cutie/nuclio/model_handler.py (3)

15-29: LGTM!

The convert_polygon_to_mask function correctly converts a polygon to a mask.


31-73: LGTM!

The convert_mask_to_polygon function correctly converts a mask to a polygon.


74-93: LGTM!

The prepare_numpy_image function correctly converts a numpy image to a torch tensor with the correct shape and dtype.
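
As an illustration of what the polygon/mask helpers do (a generic sketch using Pillow, which the handler already imports; this is not the PR's exact implementation), a flat CVAT-style point list can be rasterized into a binary mask like so:

```python
import numpy as np
from PIL import Image, ImageDraw

def polygon_to_mask(points, height, width):
    """Rasterize a flat [x0, y0, x1, y1, ...] point list into a uint8 mask."""
    canvas = Image.new("L", (width, height), 0)
    # Pair up the flat coordinate list into (x, y) tuples
    xy = list(zip(points[0::2], points[1::2]))
    ImageDraw.Draw(canvas).polygon(xy, outline=1, fill=1)
    return np.asarray(canvas, dtype=np.uint8)

mask = polygon_to_mask([2, 2, 8, 2, 8, 8, 2, 8], height=10, width=10)
```

The inverse direction (mask to polygon) is typically done by tracing the mask's contour, as `convert_mask_to_polygon` in model_handler.py does.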

cvat-ui/src/components/annotation-page/standard-workspace/controls-side-bar/tools-control.tsx (3)

507-507: LGTM!

The change correctly updates the shape type from ShapeType.RECTANGLE to ShapeType.POLYGON.


595-595: LGTM!

The change correctly updates the shape type filter from shapeType === 'rectangle' to shapeType === 'polygon'.


1049-1074: LGTM!

The changes correctly update the user interface to include separate buttons for "Track Rectangle" and "Track Polygon."

Comment on lines +1 to +22
```yaml
# python logging configuration for tasks
version: 1
formatters:
  simple:
    format: '[%(asctime)s][%(levelname)s] - %(message)s'
    datefmt: '%Y-%m-%d %H:%M:%S'
handlers:
  console:
    class: logging.StreamHandler
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.FileHandler
    formatter: simple
    # absolute file path
    filename: ${hydra.runtime.output_dir}/${now:%Y-%m-%d_%H-%M-%S}-eval.log
    mode: w
root:
  level: INFO
  handlers: [console, file]

disable_existing_loggers: false
```
Add a new line character at the end of the file.

The file lacks a new line character at the end, which is a best practice for text files.


Comment on lines +1 to +22
```yaml
# python logging configuration for tasks
version: 1
formatters:
  simple:
    format: '[%(asctime)s][%(levelname)s][r${oc.env:LOCAL_RANK}] - %(message)s'
    datefmt: '%Y-%m-%d %H:%M:%S'
handlers:
  console:
    class: logging.StreamHandler
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.FileHandler
    formatter: simple
    # absolute file path
    filename: ${hydra.runtime.output_dir}/${now:%Y-%m-%d_%H-%M-%S}-rank${oc.env:LOCAL_RANK}.log
    mode: w
root:
  level: INFO
  handlers: [console, file]

disable_existing_loggers: false
```
Add a new line character at the end of the file.

The file lacks a new line character at the end, which is a best practice for text files.


Comment on lines +12 to +16
```python
def init_context(context):
    context.logger.info("Init context...  0%")
    model = ModelHandler()
    context.user_data.model = model
    context.logger.info("Init context...100%")
```
Consider handling exceptions during model initialization.

Add exception handling to manage potential errors during the model initialization.

```python
def init_context(context):
    context.logger.info("Init context...  0%")
    try:
        model = ModelHandler()
        context.user_data.model = model
        context.logger.info("Init context...100%")
    except Exception as e:
        context.logger.error(f"Failed to initialize model: {e}")
        raise
```

Comment on lines +18 to +42
```python
def handler(context, event):
    context.logger.info("Run cutie model")
    data = event.body
    buf = io.BytesIO(base64.b64decode(data["image"]))
    shapes = data.get("shapes")
    states = data.get("states")
    image = Image.open(buf).convert("RGB")
    image = np.asarray(image)

    results = {
        "shapes": [],
        "states": []
    }

    for i, shape in enumerate(shapes):
        shape, state = context.user_data.model.handle(image, shape, states[i] if i < len(states) else None)
        results["shapes"].append(shape)
        results["states"].append(state)

    return context.Response(
        body=json.dumps(results),
        headers={},
        content_type='application/json',
        status_code=200
    )
```
Consider handling exceptions during the handler execution.

Add exception handling to manage potential errors during the handler execution.

```python
def handler(context, event):
    context.logger.info("Run cutie model")
    try:
        data = event.body
        buf = io.BytesIO(base64.b64decode(data["image"]))
        shapes = data.get("shapes")
        states = data.get("states")
        image = Image.open(buf).convert("RGB")
        image = np.asarray(image)

        results = {
            "shapes": [],
            "states": []
        }

        for i, shape in enumerate(shapes):
            shape, state = context.user_data.model.handle(image, shape, states[i] if i < len(states) else None)
            results["shapes"].append(shape)
            results["states"].append(state)

        return context.Response(
            body=json.dumps(results),
            headers={},
            content_type='application/json',
            status_code=200
        )
    except Exception as e:
        context.logger.error(f"Handler execution failed: {e}")
        return context.Response(
            body=json.dumps({"error": str(e)}),
            headers={},
            content_type='application/json',
            status_code=500
        )
```

Comment on lines +95 to +164
```python
class ModelHandler:
    def __init__(self) -> CUTIE:
        self.device = 'cuda' if torch.cuda.is_available() else 'cpu'
        coco_lvis_weights = '/opt/nuclio/cutie/coco_lvis_h18_itermask.pth'
        cutie_base_weights = '/opt/nuclio/cutie/cutie-base-mega.pth'

        assert os.path.exists(cutie_base_weights), f"{cutie_base_weights} does not exist"

        # load configurations
        initialize(version_base='1.3.2', config_path="./config", job_name="eval_config")
        cfg = compose(config_name="eval_config")
        with open_dict(cfg):
            cfg['weights'] = cutie_base_weights
        get_dataset_cfg(cfg)

        # load model
        cutie = CUTIE(cfg).to(self.device).eval()
        model_weights = torch.load(cutie_base_weights, map_location=torch.device(self.device))
        cutie.load_weights(model_weights)

        # use one processor per video
        # self.processor = InferenceCore(cutie, cfg=cutie.cfg)
        # self.processor.max_internal_size = 480
        self.cutie = cutie

    def encode_state(self, state):
        # state.pop('net', None)

        for k, v in state.items():
            state[k] = jsonpickle.encode(v)

        return state

    def decode_state(self, state):
        for k, v in state.items():
            state[k] = jsonpickle.decode(v)

        # state['net'] = copy(self.cutie)

        self.cutie = state['net']
        self.processor = InferenceCore(self.cutie, cfg=self.cutie.cfg)
        self.processor.max_internal_size = 480

    def handle(self, image: np.array, shape: Optional[List[float]] = None, state: Optional[Dict] = None) -> Tuple[List[float], Optional[Dict]]:
        image = prepare_numpy_image(image, self.device)

        if state is None:
            mask = convert_polygon_to_mask(image, shape)  # TODO: only handles single object, need to make sure multi-object can be dealt with
            objects = np.unique(mask)
            # background '0' does not count as an object
            objects = objects[objects != 0].tolist()
            mask = torch.from_numpy(mask).to(self.device)
            self.processor = InferenceCore(self.cutie, cfg=self.cutie.cfg)
            self.processor.max_internal_size = 480
            output_prob = self.processor.step(image, mask, objects=objects)
            state = {}
            state['net'] = self.cutie
            state = self.encode_state(state)
        else:
            self.decode_state(state)
            output_prob = self.processor.step(image)
            state = self.encode_state(state)

        mask = self.processor.output_prob_to_mask(output_prob).cpu().numpy().astype(np.uint8)

        shape = convert_mask_to_polygon(mask)  # convert the mask back to a polygon

        return shape, state
```


Consider handling exceptions for model loading and inference.

The ModelHandler class correctly handles the model, but it would be beneficial to add exception handling for model loading and inference to ensure robustness.

```python
try:
    model_weights = torch.load(cutie_base_weights, map_location=torch.device(self.device))
    cutie.load_weights(model_weights)
except Exception as e:
    print(f"Error loading model weights: {e}")
    # Handle the exception as needed
```

```yaml
restartPolicy:
  name: always
  maximumRetryCount: 3
mountMode: volume
```

Add a new line at the end of the file.

The file is missing a new line character at the end.

sonarcloud bot commented Aug 6, 2024
