Tokens are the basic units of text that LLMs process. For this activity:
1 token ≈ 0.75 words
To estimate tokens: Tokens = Word Count × 1.33
Input Tokens: Everything sent TO the model, including all previous messages in the conversation plus the current user message
Output Tokens: Everything generated BY the model (the assistant's response)
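If it helps to check estimates mechanically, here is a minimal Python sketch of the word-count rule above. The example message is purely illustrative, not from the transcript:

```python
# Rough token estimator for this activity, assuming 1 token ≈ 0.75 words
# (equivalently, tokens ≈ words × 1.33).
def estimate_tokens(text: str) -> int:
    word_count = len(text.split())
    return round(word_count * 1.33)

# Example: a 12-word message comes out to roughly 16 tokens.
print(estimate_tokens("Can you explain how photosynthesis works in plants, step by step, please?"))
```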
Billing Rates for This Activity:
Using the provided transcript, fill in the table below to estimate token usage and costs.
Remember: For each turn, the input includes ALL previous messages plus the current user message.
| Turn # | Who | Word Count | Est. Tokens (This Message Only): Words × 1.33 | Cumulative Input Tokens: Sum of all previous + current user | Output Tokens: Current assistant response |
|---|---|---|---|---|---|
| 1 | User | | | | 0 |
| 1 | Assistant | | | Same as above | |
| 2 | User | | | | 0 |
| 2 | Assistant | | | Same as above | |
| 3 | User | | | | 0 |
| 3 | Assistant | | | Same as above | |
| 4 | User | | | | 0 |
| 4 | Assistant | | | Same as above | |
| 5 | User | | | | 0 |
| 5 | Assistant | | | Same as above | |
| TOTALS: | | | | | |
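The sketch below shows the same per-turn accounting the table asks for, written out in Python. The word counts per turn and the two per-1K-token prices are placeholders, not the actual transcript or this activity's billing rates; substitute your own numbers:

```python
# Sketch of the per-turn accounting the table asks for.
# Word counts and prices below are placeholders, not real activity values.
TOKENS_PER_WORD = 1.33
PRICE_IN_PER_1K = 0.0005   # hypothetical input price (USD per 1K tokens)
PRICE_OUT_PER_1K = 0.0015  # hypothetical output price (USD per 1K tokens)

# (user_words, assistant_words) for each turn -- illustrative numbers only.
turns = [(40, 180), (25, 150), (30, 200), (20, 120), (15, 90)]

history_tokens = 0          # everything already in the conversation
total_in = total_out = 0

for i, (user_words, assistant_words) in enumerate(turns, start=1):
    user_tokens = round(user_words * TOKENS_PER_WORD)
    assistant_tokens = round(assistant_words * TOKENS_PER_WORD)

    # Input for this turn = all previous messages + the current user message.
    input_tokens = history_tokens + user_tokens
    total_in += input_tokens
    total_out += assistant_tokens

    # Both messages become part of the history sent with the next turn.
    history_tokens += user_tokens + assistant_tokens
    print(f"Turn {i}: input={input_tokens}, output={assistant_tokens}")

cost = total_in / 1000 * PRICE_IN_PER_1K + total_out / 1000 * PRICE_OUT_PER_1K
print(f"Total input: {total_in} tokens, total output: {total_out} tokens, cost ≈ ${cost:.4f}")
```

Notice how the input token count grows every turn even when the user's message stays short, which connects directly to question 2 below.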
1. Based on your estimate, did this conversation spend more on input tokens or output tokens? Why do you think that is?
2. How does the cost per turn change as the conversation gets longer? Why?
3. What are the limitations of this estimation method?
4. What are the implications of this cost structure for designing AI-powered learning activities?
5. How would you instruct students on interacting effectively (and economically) with AI assistants?
6. If you used Model B instead, how would the cost change? What might justify using a more expensive model?