feat(scraper): add scraping status dashboard #39
base: main
Conversation
…endered static content that doesn't embed client side input
Great job! Looking good, with a few things to consider:
const REFRESH_INTERVAL = 30000; // 30 sec
const ITEMS_PER_PAGE = 12;
// goes inside <script> tags
const scriptContent = `
It is not ideal or elegant to have a large block of JS code embedded as a string like this. You could move it to a separate JS file and run it as a service worker in the user's browser. I haven't looked into how to do this with Cloudflare Workers and Hono; you can look into it.
Also, we are currently long-polling data from the database, which is also not ideal. You should look into Cloudflare Durable Objects so we can use a WebSocket for real-time updates instead of long polling. Let me know if you need any help with it.
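To illustrate the Durable Object suggestion above, here is a minimal sketch of the push side: a hub that keeps a set of connected dashboard sockets and broadcasts status updates to them instead of clients polling. The class and method names (`DashboardHub`, `broadcast`) are hypothetical, not part of the PR or the Cloudflare API surface.

```typescript
// Hypothetical sketch: a hub (e.g. hosted in a Durable Object) that pushes
// status updates over WebSockets instead of clients long-polling the DB.
// Names are illustrative assumptions, not the project's actual API.

type SocketLike = { send(data: string): void };

export class DashboardHub {
  private sockets = new Set<SocketLike>();

  // Called after accepting a WebSocket upgrade in fetch().
  register(ws: SocketLike): void {
    this.sockets.add(ws);
  }

  unregister(ws: SocketLike): void {
    this.sockets.delete(ws);
  }

  // Push a status update to every connected dashboard; returns fan-out count.
  broadcast(update: unknown): number {
    const payload = JSON.stringify(update);
    for (const ws of this.sockets) ws.send(payload);
    return this.sockets.size;
  }
}
```

In a real Durable Object the sockets would come from `WebSocketPair` upgrades; the broadcast logic itself is independent of that plumbing.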
To answer some of your questions:
It's totally up to you. Looks good so far.
You can just ignore auth for now; there is no sensitive info displayed in the dashboard and the dashboard is purely presentational, so it should be fine.
It's up to you.
As I suggested in the code review, check out Durable Objects; we can use a WebSocket connection to provide real-time updates instead of long polling.
You can safely make this assumption, as we will not delete any job, and when creating an error record the job must exist.
IMO, we shouldn't delete any error record, for monitoring/debugging purposes, e.g. checking how many retries a given job took to succeed, the error details, and whatnot.
Yes, client-side filtering is good for this case.
It's up to you. Feel free to hold off on this for now.
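The client-side filtering and pagination agreed on above could be sketched roughly as follows. The `Job` shape, `filterJobs`, and `paginate` names are illustrative assumptions, not the PR's actual code.

```typescript
// Hypothetical sketch of client-side filtering + pagination over in-memory
// jobs; the Job shape and helper names are assumptions, not from the PR.

interface Job {
  id: number;
  status: "pending" | "running" | "succeeded" | "failed";
  type: string;
}

function filterJobs(jobs: Job[], status?: Job["status"]): Job[] {
  return status ? jobs.filter((j) => j.status === status) : jobs;
}

const ITEMS_PER_PAGE = 12;

// Return the 1-indexed page of items (at most ITEMS_PER_PAGE entries).
function paginate<T>(items: T[], page: number): T[] {
  const start = (page - 1) * ITEMS_PER_PAGE;
  return items.slice(start, start + ITEMS_PER_PAGE);
}
```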
Force-pushed from 7072356 to 503ca3f
…, just committing so I don't lose the code.
🤖 Manus AI Review

Summary

This PR adds a scraping status dashboard to the scraper app, including a new client-side JavaScript file for the dashboard UI, a seed script to populate the database with sample data, and updates to package.json for new scripts. The dashboard supports live data fetching, filtering, pagination, expandable error details, and visual stats bars for jobs and errors.

Issues

🟡 MEDIUM: The dashboard.js fetches data from the root path '/' expecting JSON data. This assumes the server serves JSON at this endpoint, which may conflict with other routes or static assets. There is no fallback or configurable API endpoint, which may reduce flexibility.
🟡 MEDIUM: The client-side code stores all jobs and errors in memory and performs filtering and pagination on the client. While this works for moderate data sizes, it may cause performance issues or high memory usage if the dataset grows large.
🟡 MEDIUM: The pagination logic wraps around when clicking previous on the first page or next on the last page, which may be unexpected UX behavior. For example, clicking 'prev' on page 1 jumps to the last page instead of disabling the button or staying on page 1.
🟡 MEDIUM: The error expansion rows are inserted directly as sibling rows in the table without ARIA or accessibility considerations, which may impact screen reader users.
🟢 LOW: The escapeHtml function uses DOM methods, which is good, but the code builds HTML via string concatenation, which can be error-prone and harder to maintain. Using templating or safer DOM manipulation methods would improve maintainability.
🟢 LOW: The seed.ts script contains a large amount of hardcoded seed data. While useful for development, it may be better to generate data programmatically or split into smaller chunks for maintainability.
🟢 LOW: The fetchAndUpdate function alerts on fetch failure, which may be intrusive. Consider using a less disruptive UI notification.
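The pagination wrap-around issue noted above can be avoided by clamping navigation to the valid page range. A minimal sketch, with hypothetical helper names (`totalPages`, `clampPage`) not taken from the PR:

```typescript
// Hypothetical sketch: clamp page navigation to [1, totalPages] instead of
// wrapping around at the boundaries. Helper names are assumptions.

function totalPages(itemCount: number, perPage: number): number {
  // At least one page, even when there are no items.
  return Math.max(1, Math.ceil(itemCount / perPage));
}

function clampPage(page: number, itemCount: number, perPage = 12): number {
  return Math.min(Math.max(page, 1), totalPages(itemCount, perPage));
}
```

With this, a 'prev' click on page 1 stays on page 1 (and the button can be disabled whenever `clampPage` returns the current page).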
Suggestions

💡 Make the API endpoint for fetching dashboard data configurable or use a dedicated endpoint (e.g., '/api/dashboard') to avoid conflicts and improve clarity.
💡 Consider implementing server-side filtering and pagination to improve performance and scalability for large datasets.
💡 Update pagination logic to disable navigation buttons at boundaries instead of wrapping around, to align with common UX patterns.
💡 Improve accessibility of expandable error rows by adding appropriate ARIA attributes and keyboard navigation support.
💡 Refactor HTML rendering in dashboard.js to use safer DOM manipulation or templating libraries to reduce risk of injection and improve maintainability.
💡 Consider programmatic generation or modularization of seed data in seed.ts to improve maintainability and reduce file size.
💡 Replace alert in fetchAndUpdate with a non-blocking UI notification to improve user experience.
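As one possible direction for the escaping/templating suggestions above, escaping can also be done as a pure function with no DOM dependency. This is a sketch under the assumption that rendering stays string-based; it is not the PR's actual `escapeHtml` (which uses DOM methods).

```typescript
// DOM-free variant of HTML escaping covering the five usual metacharacters.
// A sketch, not the PR's implementation.

const HTML_ESCAPES: Record<string, string> = {
  "&": "&amp;",
  "<": "&lt;",
  ">": "&gt;",
  '"': "&quot;",
  "'": "&#39;",
};

function escapeHtml(text: string): string {
  return text.replace(/[&<>"']/g, (ch) => HTML_ESCAPES[ch]);
}
```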
Positive Notes

✅ The dashboard implements client-side filtering, pagination, and expandable error details, providing a rich interactive UI.
✅ Use of the escapeHtml function to sanitize text content helps prevent XSS vulnerabilities.
✅ The stats bars provide a clear visual summary of job and error statuses with legends and tooltips.
✅ The code includes error handling for fetch failures and logs useful messages to the console.
✅ The seed script provides comprehensive sample data covering various job types and statuses, aiding development and testing.
✅ The auto-refresh toggle and manual refresh button give users control over data updates.
✅ Pagination buttons are disabled appropriately when no results are present, maintaining consistent UI layout.

Overall Assessment

This PR delivers a functional and user-friendly scraping status dashboard with good attention to security via escaping and error handling. The client-side architecture is straightforward and aligns with the project's existing patterns. However, there are some medium-impact issues related to scalability, UX behavior in pagination, and accessibility that should be addressed to improve maintainability and user experience. The seed data is extensive but could benefit from modularization. Overall, the quality is good and the feature is a valuable addition, but some refinements are recommended before merging.

This review was automatically generated by Manus AI. Please review and verify before committing to architectural decisions.
Force-pushed from deef787 to 1d1f688
Closes #38
📌 What’s Changed
Screen.Recording.2025-10-13.at.12.32.51.AM-2.mp4
✅ Actions
📝 Notes for Reviewer
To run:
navigate to the scraper directory
bun run db:generate
bun run db:migrate:local
bun run seed (this seeds your local DB with the generated data)
bun run dev
go to the wrangler:info URL shown in the terminal
Questions / issues
Notes on assumptions: