
Epicshop workshop review #3

Closed

kentcdodds wants to merge 1 commit into main from cursor/epicshop-workshop-review-9b80

Conversation

@kentcdodds
Member

Add learning-review.md to document the end-to-end workshop experience and feedback.

This file contains a step-by-step review of the EpicShop workshop, evaluating each exercise on learning outcomes, instructional clarity, cognitive load, examples, and mechanical correctness, as requested by the task. It highlights issues such as ambiguous UI, missing setup prerequisites, redundant dependency installs, and reliance on GUI-only workflows.



Document end-to-end exercise completion and note only material issues affecting learning or correctness.

Co-authored-by: me <me@kentcdodds.com>
@cursor

cursor bot commented Jan 14, 2026

Cursor Agent can help with this pull request. Just @cursor in comments and I'll start working on changes in this branch.
Learn more about Cursor Agents

@kentcdodds
Member Author

@kettanaito I gave cursor (GPT 5.2) the following prompt:

You are an experienced software engineer who has purchased this course for the learning outcomes it promises.

Using the epicshop CLI, navigate and investigate the workshop from the very beginning. Complete the workshop end-to-end, including setting up the playground and completing all exercises in the playground directory.

As you progress, document your experience in learning-review.md. For each exercise, complete it as instructed, then compare your solution to the official solution using the diff command from the epicshop CLI server.

For each exercise step, provide feedback only if there are issues that materially affect learning or correctness. Otherwise, write “no notes.”

Evaluate each exercise on the following dimensions:

  1. Learning outcomes

    • Are the goals of the exercise clear before starting?
    • After completing it, is it clear what skill or concept was learned?
    • Does the exercise meaningfully advance understanding of MCP servers rather than just executing steps?
  2. Instructional clarity

    • Are the instructions explicit, unambiguous, and complete?
    • Are any required steps implied rather than stated?
    • Are assumptions about prior knowledge reasonable for an experienced engineer new to MCP?
  3. Cognitive load and pacing

    • Is the amount of new information introduced appropriate for the exercise?
    • Are there points where missing context forces guesswork?
    • Is the exercise well-scoped, or does it feel rushed or bloated?
  4. Examples and exercise alignment

    • Do examples directly support the task being performed?
    • Are naming, structure, and patterns consistent across examples and exercises?
    • Does the exercise reinforce the example, or diverge from it in confusing ways?
  5. Mechanical correctness

    • Are commands, code snippets, and expected outputs correct?
    • Do links, references, and tooling instructions work as written?
    • Are there environment, versioning, or setup pitfalls that are not called out?

Reporting guidelines

  • Do not nitpick stylistic preferences or minor wording issues.

  • Only report issues that:

    • Block progress
    • Cause incorrect mental models
    • Create unnecessary confusion or backtracking
  • If the exercise delivers good learning outcomes with no notable friction, write “no notes.”


I ran that on a few workshop repos. If you think it's helpful, I can run it on the rest of them.

@kettanaito
Member

@kentcdodds, oh, this is interesting! I think we should do that once I merge #2, since main contains a largely outdated workshop structure that I've since revised and improved.

@kettanaito
Member

It seems the AI misread the instructions and couldn't look up the trace.zip file present at exercises/04.debugging/02.problem.trace-viewer.
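
For reference, a recorded trace like that one opens locally in Playwright's built-in trace viewer (the path shown is just where the exercise keeps the file):

	# open the recorded trace in Playwright's trace viewer
	npx playwright show-trace exercises/04.debugging/02.problem.trace-viewer/trace.zip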

@kettanaito
Member

This is irrelevant. The exercise is about using the UI mode locally. You cannot use it anywhere else!
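
For context, UI mode is launched locally through the Playwright CLI:

	# run the test suite in Playwright's interactive UI mode (local only)
	npx playwright test --ui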

@kettanaito
Member

Fixed after merging the new outline. The test is now passing locally 100% of the time and failing reliably in the trace.

@kettanaito
Member

Yep, because it's an exercise designed to be used with the extension.

@kettanaito
Member

Fixed this one in the new outline.

@kettanaito
Member

Also fixed this one: the dependencies have been properly aligned across all the exercises.

@kettanaito
Member

Not anymore. Setting anything in the playground automatically runs npm run setup, which prepares Prisma, generates typedefs from React Router, etc.
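
As a rough sketch of what that setup step covers (the exact script contents are an assumption here, not the workshop's actual code):

	# hypothetical equivalent of "npm run setup"
	npx prisma generate        # generate the Prisma client
	npx react-router typegen   # generate React Router route typedefs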

@kettanaito
Member

Outdated exercise, already fixed in the new outline:

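	// assert the landing-page heading is visible, matched by its full accessible name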
	await expect(
		page.getByRole('heading', {
			name: 'Full Stack Workshop Training for Professional Web Developers',
		}),
	).toBeVisible()

@kettanaito
Member

I will keep an eye on this once I get to this exercise (I'm recording from the middle outward). I haven't experienced any issues so far in any of the exercises I've recorded.

@kettanaito
Member

I'm going to close this one. Most of the feedback was either already addressed in the new outline (now merged to main) or turned out to be irrelevant because the AI misunderstood the exercise.

@kettanaito kettanaito closed this Feb 18, 2026
