Deconstructing Dialogue Dollars

Estimating LLM Conversation Costs Worksheet

Name:                         
Group:             
Date:             

Part 1: Key Concepts

Tokens are the basic units of text that LLMs process. For this activity:

1 token ≈ 0.75 words

To estimate tokens: Tokens = Word Count × 1.33 (since 1 ÷ 0.75 ≈ 1.33)
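If a quick check is helpful, the short Python sketch below applies this heuristic to a message's word count. It is only the rough classroom estimate used in this activity; the sample sentence is an invented example, and real tokenizers will give somewhat different counts.

```python
# A minimal sketch of the worksheet's heuristic: 1 token ≈ 0.75 words,
# so tokens ≈ word count × 1.33. Real tokenizers vary by model; treat
# this only as the rough classroom estimate used in this activity.

def estimate_tokens(text: str) -> int:
    """Estimate tokens from a simple whitespace word count."""
    word_count = len(text.split())
    return round(word_count * 1.33)

# Hypothetical example message (10 words -> roughly 13 tokens)
print(estimate_tokens("Explain photosynthesis to a ninth grader in three short sentences."))
```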

Input Tokens: Everything sent TO the model, including the system prompt (if any), all previous user and assistant messages in the conversation, and the current user message

Output Tokens: Everything generated BY the model (the assistant's response)

Billing Rates for This Activity:

Model A (Balanced): $0.50 per 1,000,000 input tokens; $1.50 per 1,000,000 output tokens
Model B (High-Performance): $10.00 per 1,000,000 input tokens; $30.00 per 1,000,000 output tokens

Part 2: Transcript Analysis & Token Estimation

Using the provided transcript, fill in the table below to estimate token usage and costs.

Remember: For each turn, the input includes ALL previous messages plus the current user message.

Turn # | Who       | Word Count (this message only) | Est. Tokens (Words × 1.33) | Cumulative Input Tokens (all previous + current user) | Output Tokens (current assistant response)
-------|-----------|--------------------------------|----------------------------|-------------------------------------------------------|-------------------------------------------
1      | User      |                                |                            |                                                       | 0
1      | Assistant |                                |                            | Same as above                                         |
2      | User      |                                |                            |                                                       | 0
2      | Assistant |                                |                            | Same as above                                         |
3      | User      |                                |                            |                                                       | 0
3      | Assistant |                                |                            | Same as above                                         |
4      | User      |                                |                            |                                                       | 0
4      | Assistant |                                |                            | Same as above                                         |
5      | User      |                                |                            |                                                       | 0
5      | Assistant |                                |                            | Same as above                                         |
TOTALS:
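To check the logic of the table above, here is a hedged Python sketch of how cumulative input tokens grow turn by turn: each turn's input is every previous message plus the new user message, and the assistant's reply then joins the history. The per-message token counts below are invented placeholders, not values from the transcript.

```python
# A sketch of the cumulative-input rule used in the table above.
# Token counts are made-up placeholders for illustration only.

# (user_tokens, assistant_tokens) for five hypothetical turns
turns = [(40, 120), (25, 90), (30, 150), (20, 80), (15, 60)]

history_tokens = 0   # everything said so far, both roles
total_input = 0      # billed input tokens across the whole conversation
total_output = 0     # billed output tokens across the whole conversation

for turn_number, (user_toks, assistant_toks) in enumerate(turns, start=1):
    input_this_turn = history_tokens + user_toks   # all previous + current user
    total_input += input_this_turn
    total_output += assistant_toks
    history_tokens += user_toks + assistant_toks   # response becomes context next turn
    print(f"Turn {turn_number}: input={input_this_turn}, output={assistant_toks}")

print(f"TOTALS: input={total_input}, output={total_output}")
```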

Part 3: Cost Calculation

Total Input Tokens:
Total Output Tokens:
Model Used (circle one): Model A (Balanced) / Model B (High-Performance)

Cost Calculation for Model A:

Input Cost:
(Total Input Tokens ÷ 1,000,000) × $0.50 = $
Output Cost:
(Total Output Tokens ÷ 1,000,000) × $1.50 = $
Total Conversation Cost:
Input Cost + Output Cost = $

Cost Calculation for Model B (Optional):

Input Cost:
(Total Input Tokens ÷ 1,000,000) × $10.00 = $
Output Cost:
(Total Output Tokens ÷ 1,000,000) × $30.00 = $
Total Conversation Cost:
Input Cost + Output Cost = $
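For anyone who wants to double-check their arithmetic, here is a short Python sketch of the Part 3 calculation using this activity's illustrative rates. The token totals in the example call are placeholders; substitute your own totals from Part 2.

```python
# A sketch of the Part 3 cost formulas with this worksheet's illustrative
# per-million-token rates. Replace the placeholder token counts below
# with your own Part 2 totals.

RATES = {  # USD per 1,000,000 tokens
    "Model A": {"input": 0.50, "output": 1.50},
    "Model B": {"input": 10.00, "output": 30.00},
}

def conversation_cost(input_tokens: int, output_tokens: int, model: str) -> float:
    """Input cost + output cost for one conversation, in dollars."""
    rate = RATES[model]
    input_cost = (input_tokens / 1_000_000) * rate["input"]
    output_cost = (output_tokens / 1_000_000) * rate["output"]
    return input_cost + output_cost

# Hypothetical totals: 3,000 input tokens and 500 output tokens
print(f"Model A: ${conversation_cost(3_000, 500, 'Model A'):.4f}")
print(f"Model B: ${conversation_cost(3_000, 500, 'Model B'):.4f}")
```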

Part 4: Analysis & Discussion Questions

1. Based on your estimate, did this conversation cost more in input tokens or in output tokens? Why do you think that is?

2. How does the cost per turn change as the conversation gets longer? Why?

3. What are the limitations of this estimation method?

4. What are the implications of this cost structure for designing AI-powered learning activities?

5. How would you instruct students on interacting effectively (and economically) with AI assistants?

6. If you used Model B instead, how would the cost change? What might justify using a more expensive model?