Scenario: The Rulebook Prototype
Operator Briefing: You’ve been assigned to the “Old-School Lab” (the era when humans tried to write intelligence down as rules and logic). Your job is to build a rulebook AI with zero common sense, then watch it collapse when real life gets messy. This mission teaches you why rule-based systems can be powerful, and why they break.
Your mission: Complete the Tool Training Lab: choose a scenario, write 5 IF/THEN rules, then explain where your rulebook breaks and what data you’d collect if you wanted the computer to learn instead.
Step 1: Choose ONE scenario
Pick the one you understand best:
Option A: Trusting an online post
You’re deciding if something you saw online is trustworthy.
Option B: Studying tonight
You’re deciding what study plan to follow.
Option C: Spending limited money
You’re deciding what to buy when you don’t have much money.
Step 2: Write 5 IF/THEN rules
Write rules as if you’re programming a robot that has no common sense.
A good rule looks like this:
- IF (a clear situation) THEN (a clear action)
Example (Option A: trust an online post)
- IF the post has no author name, THEN don’t trust it yet.
- IF the claim sounds extreme, THEN search for the same claim on 2 other reliable sites.
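If you wanted to see those rules as a robot would follow them, here is a minimal sketch in Python. The post fields (`author`, `claim_sounds_extreme`, `confirmations`) are made-up names for this illustration, not any real API:

```python
# A literal IF/THEN version of the Option A rulebook.
# Field names (author, claim_sounds_extreme, confirmations) are
# hypothetical, invented just for this sketch.

def trust_decision(post):
    """Apply the rules top to bottom. The robot has no common sense,
    so anything the rules don't cover falls through to 'no answer'."""
    if post.get("author") is None:
        return "don't trust it yet"
    if post.get("claim_sounds_extreme"):
        if post.get("confirmations", 0) >= 2:
            return "tentatively trust"
        return "search 2 other reliable sites first"
    return "no rule matched: the rulebook has no answer"

print(trust_decision({"author": None}))
# A post the rules never anticipated simply falls through:
print(trust_decision({"author": "jo", "claim_sounds_extreme": False}))
```

Notice the last line of the function: it's where your rulebook runs out of rules, which is exactly what Step 3 asks you to find.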
Step 3: Where does your rulebook break? (2 bullets)
This is the main point of the lab: write 2 bullets describing real situations where your rules give the wrong answer, or no answer at all.
Real life has:
- exceptions
- mixed situations
- missing information
- “yeah but what if…” moments
Step 4: Teach it with examples instead
Now imagine you don’t want to write rules at all.
Instead, you want a computer to learn from examples.
So write one line answering:
If you wanted a computer to learn this instead of using rules, what examples/data would you collect?
Example (trusting online posts)
“I would collect lots of posts labelled ‘true’ or ‘false’, including the source, author, language style, and whether reliable sites confirmed it.”
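To make that answer concrete, here is a sketch of what those labelled examples might look like as data. The field names are illustrative, not a real dataset schema:

```python
# A sketch of labelled training examples for "is this post trustworthy?"
# Each example pairs features with a label; the field names here are
# hypothetical, chosen only to mirror the answer above.

examples = [
    {"has_author": True,  "extreme_claim": False,
     "confirmed_elsewhere": True,  "label": "true"},
    {"has_author": False, "extreme_claim": True,
     "confirmed_elsewhere": False, "label": "false"},
    # ...many more labelled posts: the learner finds the pattern itself,
    # instead of you writing the IF/THEN rules by hand.
]

true_count = sum(1 for e in examples if e["label"] == "true")
print(f"{len(examples)} examples, {true_count} labelled true")
```

The key shift: with rules, you supply the logic; with examples, you supply labelled data and the computer works out the logic, including cases your rulebook would have missed.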