What You'll Learn in This Class

Key Points

  1. Main Frustration Among Authors:
    • The #1 challenge authors face is getting AI to write in their unique voice.
    • Generic prompts and base models often fail to replicate a distinctive style, especially if the author’s voice differs from mainstream training data.
  2. Solution: Fine-Tuning:
    • Fine-tuning customizes a model's style, tone, and output to match your writing.
    • You can fine-tune models on OpenAI, Mistral, and Google.
    • Mistral supports NSFW content—ideal for romance/erotica authors seeking more control over explicit scenes.
  3. What Fine-Tunes Don’t Do:
    • Fine-tunes do not teach a model new facts or improve factual recall; they shape voice and style, not knowledge.
    • They also don’t work well with generic prompts—you must use consistent, structured prompting for strong results.
  4. Why Prompting Matters:
    • Your existing prompt structure (scene briefs, beats, instructions) should be reused in your fine-tune.
    • Fine-tunes + consistent prompting = output that needs minimal editing.
  5. Authentic Voice Replication:
    • A strong fine-tune can produce outputs nearly indistinguishable from the author’s own writing.
    • Elizabeth cites her Jane Austen fanfic fine-tune as being indistinguishable from her original work—even fooling her longtime editor.
  6. Reader Expectations:
    • Readers care about getting good stories quickly, not about how those stories were written.
    • Fine-tunes help authors meet demand without compromising voice.
  7. Unique Use Case for Creatives:
    • Most industries use AI for generic tasks, not voice/style.
    • Fine-tuning for creative output is a rare but essential use case for fiction writers and narrative nonfiction authors.
  8. DIY Philosophy:
    • Future Fiction Academy emphasizes teaching authors to build their own fine-tunes rather than outsourcing it.
    • A mass-produced fine-tune used by thousands would defeat the purpose of having a unique voice.
  9. Dyno Trainer Tool:
    • Makes it easier to format and manage your fine-tune datasets.
    • Converts JSON (user-friendly, categorized files) into JSONL (required format for fine-tuning OpenAI/Mistral) or CSV (for Google).
  10. File Formats Covered:
    • JSON: Easier for humans to manage (includes headers, categories).
    • JSONL (JSON Lines): one training example as a complete JSON object per line; the format the fine-tuning services actually ingest. Very syntax-sensitive.
    • Dyno Trainer bridges the two: you work in JSON, and it converts that to JSONL or CSV for model ingestion (a rough sketch of that conversion follows this list).
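
To make the JSON-versus-JSONL distinction concrete, here is a minimal sketch of the kind of conversion Dyno Trainer automates. This is not Dyno Trainer's actual code or schema: the file names and fields (scene_brief, prose) are hypothetical, and the record layout is the chat-style format OpenAI's fine-tuning endpoint expects, with one JSON object per line. Note that the user message carries the same scene-brief structure you would use when prompting the finished fine-tune, which is why consistent prompting matters.

```python
import json

# Hypothetical human-friendly JSON file: a list of examples organized the way
# you might manage them by hand. The field names here are illustrative only.
with open("my_scenes.json", "r", encoding="utf-8") as f:
    examples = json.load(f)

# Reuse the same instruction you use in your normal prompting workflow, so the
# fine-tune is trained on the structure you will prompt it with later.
SYSTEM_PROMPT = "You are a fiction co-writer. Write the scene described in the brief, in the author's voice."

with open("training.jsonl", "w", encoding="utf-8") as out:
    for ex in examples:
        record = {
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": ex["scene_brief"]},   # your scene brief / beats
                {"role": "assistant", "content": ex["prose"]},    # the prose you actually wrote
            ]
        }
        # JSONL means exactly one complete JSON object per line, no trailing commas.
        out.write(json.dumps(record, ensure_ascii=False) + "\n")

print(f"Wrote {len(examples)} training examples to training.jsonl")
```
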

🎓 Summary Paragraph

In this lesson, Elizabeth explains why fine-tuning is the most effective way to get AI to write in your unique author voice—especially if your style differs from the generic data models were trained on. She covers which models support fine-tuning (like OpenAI, Google, and Mistral), the benefits of using Mistral for NSFW genres, and the limitations of fine-tuning (such as not improving factual accuracy). Elizabeth also introduces Dyno Trainer, a tool that simplifies the formatting process, converting easy-to-read JSON files into the strict JSONL or CSV formats needed for fine-tuning. This class is designed to empower you to create a model that writes like you, not like a generic AI.
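
For context on where a JSONL file like this goes next, the sketch below shows one way a fine-tuning job can be started with the OpenAI Python SDK. It is an illustration under assumptions, not part of the class tooling: the base model name is only an example, and Mistral and Google each have their own upload-and-train equivalents.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the JSONL training file, then start a fine-tuning job on it.
upload = client.files.create(file=open("training.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=upload.id,
    model="gpt-4o-mini-2024-07-18",  # example base model; choose one that supports fine-tuning
)

print(job.id, job.status)  # check progress later with client.fine_tuning.jobs.retrieve(job.id)
```
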
