Executive Summary
I've analyzed the backend.ai-webui repository with a focus on the /react and /packages/backend.ai-ui directories. The repository has significant testing infrastructure in place, but test coverage is currently very low (~4% file coverage), with only 19 test files covering 482 source files.
Current State Analysis
Test Infrastructure
Testing Framework:
Jest 30.1.3 with React Testing Library
Tests located alongside source files with .test.ts or .test.tsx extensions
Coverage configuration already in place
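As an illustration of the colocation convention, a hypothetical helper and its neighboring test might look like the sketch below (formatBytes is invented for the example and is not a real repo helper; the Jest-style test is shown in comments):

```typescript
// --- src/helper/formatBytes.ts (hypothetical) ---
export function formatBytes(bytes: number, decimals = 1): string {
  if (bytes === 0) return '0 B';
  const units = ['B', 'KiB', 'MiB', 'GiB'];
  // Pick the largest unit that keeps the value >= 1
  const i = Math.min(
    Math.floor(Math.log(bytes) / Math.log(1024)),
    units.length - 1,
  );
  return `${(bytes / 1024 ** i).toFixed(decimals)} ${units[i]}`;
}

// --- src/helper/formatBytes.test.ts (colocated, Jest style) ---
// describe('formatBytes', () => {
//   it('formats binary units', () => {
//     expect(formatBytes(1536)).toBe('1.5 KiB');
//   });
// });
```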
Coverage Steps Configuration:
Configuration exists at .github/actions/daily-test-improver/coverage-steps/action.yml
Builds both React project and backend.ai-ui package
Generates coverage reports for each project separately
Attempts to merge the per-project coverage reports into a combined report
Current Test Coverage
Existing Test Files (19 total):
React Project (14 tests):
react/src/hooks/ - 6 test files (useMemoWithPrevious, backendai, useBackendAIImageMetaData, useControllableState, useResourceLimitAndRemaining, index)
react/src/components/ - 3 test files (EnvVarFormList, SessionFormItems/ResourceAllocationFormItems, MyResourceWithinResourceGroup)
react/src/helper/ - 5 test files (resultTypes, csv-util, graphql-transformer, big-number, index)
backend.ai-ui Package (11 tests):
packages/backend.ai-ui/src/hooks/ - 1 test file (useIntervalValue)
packages/backend.ai-ui/src/components/ - 9 test files (BAIBulkEditFormItem, BAIButton, BAIBackButton, BAIUnmountAfterClose, BAIStatistic, fragments/BAIDomainSelect, BAIFlex, BAIPropertyFilter, BAITag)
packages/backend.ai-ui/src/helper/ - 1 test file (index)
Test Coverage Ratio:
~4% of source files have tests (19 test files for 482 source files)
Testing Challenges Encountered
Environment Setup Issues:
Required --no-frozen-lockfile due to lockfile mismatch
jest not defined in matchMedia.mock.js
Module Resolution:
relay-compiler: not found (Relay GraphQL artifacts must be generated before tests run)
Configuration:
matchMedia.mock.js uses the Jest API (jest.fn()) instead of the Vitest API (vi.fn())
Testing Strategy and Plan
Phase 1: Fix Test Environment (Priority: HIGH)
Actions Needed:
Update matchMedia.mock.js to use the proper mock API (determine if using Jest or Vitest)
Phase 2: Low-Hanging Fruit - Utility Functions (Priority: HIGH)
Target: Helper/Utility Functions (easy to test, high impact)
React helper/ directory candidates (0% coverage):
bui-language.ts - Simple object exports, easy to test
customThemeConfig.ts - Theme loading and configuration
react-to-webcomponent.tsx - Web component wrapper
const-vars.ts - Constants file (likely trivial but should verify exports)
backend.ai-ui helper/ directory candidates:
useDebouncedDeferredValue.ts - Custom hook combining debounce and deferred value
reactQueryAlias.ts - React Query wrappers with custom logic
Phase 3: Component Testing - BAI UI Library (Priority: MEDIUM)
Target: backend.ai-ui components (some test patterns already established)
High-Value Candidates:
BAIModal - Modal component (widely used)
BAITable - Table component with custom features
BAIText - Text component with ellipsis features
BAISelect - Select component variations
BAICard - Card component with status
Phase 4: Hook Testing - React Project (Priority: MEDIUM)
Target: React hooks/ (many uncovered hooks)
High-Value Candidates:
useBAINotification.tsx - Notification management
useBackendAIAppLauncher.tsx - App launcher logic
useCurrentProject.tsx - Project context
useWebUIMenuItems.tsx - Menu generation
useStartSession.tsx - Session creation
Phase 5: Page Testing (Priority: LOW)
Target: Pages (most complex, requires extensive mocking)
Note: Pages are integration-heavy and may be better covered by E2E tests (Playwright tests exist in the e2e/ directory). Recommend focusing on unit/component tests first.
Commands for Build, Test, and Coverage
Install dependencies:
cd /home/runner/work/backend.ai-webui/backend.ai-webui
npx pnpm@latest install --no-frozen-lockfile --ignore-scripts
Build React GraphQL types (required before testing):
cd react
npx pnpm run relay
Run React tests with coverage:
cd react
NODE_OPTIONS='--no-deprecation --experimental-vm-modules' npx jest --coverage --coverageDirectory=coverage
Run backend.ai-ui tests with coverage:
cd packages/backend.ai-ui
NODE_OPTIONS='--no-deprecation --experimental-vm-modules' npx jest --coverage --coverageDirectory=coverage
Generate combined coverage report:
# After running both test suites, collect each suite's coverage-final.json
# into a shared temp directory before merging (paths assume Jest's default
# json coverage reporter is enabled)
mkdir -p coverage/temp coverage/combined
cp react/coverage/coverage-final.json coverage/temp/react.json
cp packages/backend.ai-ui/coverage/coverage-final.json coverage/temp/backend.ai-ui.json
npx nyc merge coverage/temp coverage/combined/coverage-final.json
npx nyc report --temp-dir coverage/combined --reporter=html --reporter=lcov --reporter=text-summary --report-dir coverage/combined
Test Organization
Tests follow a colocation pattern: {SourceFileName}.test.{ts|tsx}
Opportunities for Significant Coverage Increase
Automated Test Generation:
Use AI to generate basic test scaffolds for utility functions
Create test templates for common patterns (components, hooks)
Snapshot Testing:
Add snapshot tests for presentational components
Quick wins for visual regression protection
Integration with CI:
Enforce minimum coverage thresholds
Fail builds if coverage decreases
Generate coverage badges
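Threshold enforcement can be wired through Jest itself rather than a separate CI step. A minimal jest.config.js sketch, assuming Jest is the framework in use; the percentages are placeholders, not agreed project targets:

```javascript
// jest.config.js (sketch) - fail the test run when coverage drops below
// the configured bar. Numbers below are illustrative only.
module.exports = {
  collectCoverageFrom: ['src/**/*.{ts,tsx}', '!src/**/*.test.{ts,tsx}'],
  coverageThreshold: {
    global: {
      statements: 60,
      branches: 50,
      functions: 60,
      lines: 60,
    },
  },
};
```

With this in place, `jest --coverage` exits non-zero when any global threshold is missed, which makes "fail builds if coverage decreases" a one-line CI check.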
Test Utilities:
Create test helpers for common Relay mocking patterns
Build custom render utilities for testing with providers
Documentation:
Document testing patterns and best practices
Create examples for testing common scenarios (hooks with Relay, components with Ant Design)
Questions for Maintainers
Test Framework: Is the project using Jest or Vitest? matchMedia.mock.js uses the Jest API, but some of the configuration suggests Vitest.
Priority Areas: Which components/features are most critical and should be prioritized for test coverage?
Coverage Goals: What is the target coverage percentage? Should we aim for 80%, or is there a different goal?
Relay Testing: What's the preferred approach for mocking Relay queries in tests? Should we use relay-test-utils or another approach?
Breaking Changes: Are breaking changes acceptable if tests reveal bugs? Or should bugs be reported separately?
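The Jest-vs-Vitest ambiguity in the first question could also be sidestepped by making the matchMedia mock framework-agnostic. A hedged sketch (createMatchMediaMock is an invented name; the repo's actual matchMedia.mock.js may differ):

```javascript
// Hypothetical rewrite of matchMedia.mock.js: accept the spy factory as an
// argument so the same mock works under Jest (pass jest.fn) or Vitest
// (pass vi.fn). Names here are illustrative, not the repo's actual code.
function createMatchMediaMock(fn) {
  return (query) => ({
    matches: false,
    media: query,
    onchange: null,
    addListener: fn(),        // deprecated API, kept for older consumers
    removeListener: fn(),     // deprecated API
    addEventListener: fn(),
    removeEventListener: fn(),
    dispatchEvent: fn(),
  });
}

// In a Jest setup file:   window.matchMedia = createMatchMediaMock(jest.fn);
// In a Vitest setup file: window.matchMedia = createMatchMediaMock(vi.fn);
module.exports = { createMatchMediaMock };
```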
How to Control this Workflow
You can control this workflow using the following commands:
Disable the workflow:
gh aw disable daily-test-improver --repo lablup/backend.ai-webui
Enable the workflow:
gh aw enable daily-test-improver --repo lablup/backend.ai-webui
Run the workflow manually (with optional repeats):
# Run once
gh aw run daily-test-improver --repo lablup/backend.ai-webui
# Run multiple times (useful for testing or making rapid progress)
gh aw run daily-test-improver --repo lablup/backend.ai-webui --repeat 5
View workflow logs:
gh aw logs daily-test-improver --repo lablup/backend.ai-webui
Provide Feedback:
Add comments to this discussion with feedback or priorities
Close this discussion if you don't want automated test improvements
React to specific items in the plan to indicate priority
What Happens Next
The next time this workflow runs, Phase 2 will be performed:
The workflow will analyze the codebase to create coverage steps configuration
It will attempt to validate the configuration by running the coverage steps
A pull request will be created with the configuration file
If running in "repeat" mode, the workflow will automatically proceed after PR creation
After Phase 2 completes and the PR is merged, Phase 3 will begin on subsequent runs:
The workflow will select specific areas to improve test coverage
It will write new tests following the established patterns
Draft pull requests will be created with coverage improvements
Coverage metrics will be tracked and reported
Humans can review this research and add comments before the workflow continues to Phase 2.