My workflow for using LLMs to create Anki cards

First a warning: Don't live a cognitively sedentary lifestyle!

If you want to improve your cardiovascular fitness by riding a bicycle, you need to push your body to do some work while pedaling yourself around. You can't just sit back on a Class 2 E-bike and roll on the throttle without ever turning the pedals. Both approaches might get you from one end of the bike path to the other, but it should be obvious that the point of the activity isn't just to traverse the bike path. The point of the activity is to stress the heart, lungs, and leg muscles so that a physiological adaptation will occur over time.

Similarly, if you want to effectively learn a complex subject, there's no way to avoid doing cognitively demanding work.

Methodically working through every detail of an example, struggling through dead-end attempts to apply new techniques, and carefully articulating how new concepts fit into your existing knowledge framework might feel mentally draining and uncomfortable, but the cognitive strain is the point! It's what drives the learning! This strain is what forces the adaptations that build intuition and retain knowledge.

When you use a tool like Google's NotebookLM to produce topic outlines, fully worked exercises, mind-maps, or even an AI-generated 15-minute podcast conversation on a topic(!), you've done the cognitive equivalent of zooming down the bike path on a Super73 electric motorbike. Even if the AI output is high-quality (which is rarely the case), just consuming what it gives you won't have a substantial benefit. Creating or using these artifacts isn't the point. The point is to engage deeply with the learning material in a cognitively demanding way.

The marketing of AI products and many of the AI enthusiasts on the internet seem to assume that the best way to use these tools is to turn to them as soon as we experience difficulty in any situation. But I hope it's obvious that, if the goal is to learn things, that isn't a viable approach. That's living a cognitively sedentary lifestyle.

So how to use LLMs for learning?

This is not to say that AI tooling can't improve aspects of the learning experience, or that using an LLM to help make learning feel easier is entirely bad.

No one has an infinite supply of time, focus, or willpower, so there are many ways to apply these tools that'll be worthwhile, even if a use-case may not be "optimal" according to some abstract, idealized model of learning.

These are the basic guidelines I use when deciding if and how to utilize LLMs (or really any tooling or aids) in my learning:

My Basic Anki Workflow

With all that in mind, using Claude to help create Anki cards is a use-case that I've settled on that meets the above guidelines.

The value of spaced repetition systems like Anki is well-established. During my time as a student, when I made the effort to create Anki cards for subjects I was learning, my understanding of the material was obviously better. But creating a comprehensive set of Anki cards on my own is a time-consuming upfront investment, even if the payoff down the road is well worth it. Once I left school and started working, I very rarely made the time to create new Anki cards.

By using an LLM to help write Anki cards, I shortchange myself on the cognitive benefits of selecting and wording the cards myself. But in exchange, I get the cognitive effort of reviewing cards that otherwise wouldn't exist at all. Given that I still regularly read to learn things for my profession, this seemed like a worthwhile tradeoff to make.

Here's how I do things:

  1. Read a chapter or section of a book on material that I want to learn.
    • Make sure I properly read and understand it. No skimming or glossing over details.
    • I find that 12-18 pages is the sweet spot, but this varies based on the density of the material. Putting too much information into the model's context window degrades output quality and makes reviewing the output too time-consuming.
    • I typically do this reading on my e-ink tablet, since I can easily write notes or mark up the margins if that helps me to digest tricky details.
  2. On my desktop computer, I save off a separate PDF of those pages that I've just read.
    • The PDF should include the actual text, and not consist of scanned images of pages, since an LLM's OCR of scanned pages is less reliable than real embedded text.
  3. Input a basic prompt with guidelines for creating effective flashcards, and provide the PDF document.
    • It's important to specify that the scope of flashcard content should be limited to the material in the document, and to avoid making flashcards about content that isn't in the document. Otherwise, the output is likely to contain irrelevant or hallucinated details.
    • I also provide some direction on the styling and formatting of cards.
  4. Go back over the pages that I read, and evaluate/edit the generated flashcards.
    • Here, I'll usually delete a few of the cards, since they cover some details that are unimportant or are otherwise redundant.
    • I'll occasionally make some minor edits to the cards that are worth keeping.
  5. Import the cards into my Anki deck. They go into my usual daily review rotation.
    • For 12-18 pages of technical content, I typically end up with around 10-20 new cards.
    • I try to read and create cards for 2-3 new chapters/sections per week. This feels like a natural, leisurely pace of reading. Adding new cards every single day quickly makes the review burden more than I'm willing to put up with.
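The prompting step above can be sketched as a single API call. This is a minimal sketch, not my exact prompt: the prompt wording, helper name, and model string are illustrative, and the `document` content block follows Anthropic's PDF-support API as I understand it.

```python
import base64

# Illustrative prompt; my real prompt includes more detailed
# flashcard-writing guidelines and formatting rules.
FLASHCARD_PROMPT = (
    "Create Anki flashcards covering ONLY the material in the attached "
    "document. Do not make cards about content that is not in the "
    "document. Output one card per line as: front<TAB>back."
)

def build_request(pdf_bytes: bytes, model: str = "claude-sonnet-4-0") -> dict:
    """Assemble a messages payload that attaches the PDF as a base64
    document block alongside the flashcard prompt."""
    return {
        "model": model,
        "max_tokens": 4096,
        "messages": [{
            "role": "user",
            "content": [
                {
                    "type": "document",
                    "source": {
                        "type": "base64",
                        "media_type": "application/pdf",
                        "data": base64.standard_b64encode(pdf_bytes).decode("ascii"),
                    },
                },
                {"type": "text", "text": FLASHCARD_PROMPT},
            ],
        }],
    }

# The payload would then be sent with something like:
#   anthropic.Anthropic().messages.create(**build_request(pdf_bytes))
```

Keeping the prompt scoped to "only the attached document" is what step 3's first bullet is about; without it, the model happily pads the deck with outside (and sometimes hallucinated) details.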

Some improvements to workflow

My initial workflow was to do the above using the Claude chat interface. I'd copy/paste my prompt in, upload the PDF, then manually copy/paste the cards I wanted into Anki, one by one. This was fine in the beginning, but there's a lot of friction in doing things that way. In particular, the formatting Claude generated didn't always match what Anki expects on import.
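Anki's text importer is forgiving if you hand it one note per line with fields separated by tabs. A small sketch of that conversion (the function name is mine, not part of any Anki API):

```python
import csv
import html
import io

def to_anki_tsv(cards: list[tuple[str, str]]) -> str:
    """Render (front, back) pairs as tab-separated text that Anki's
    File > Import dialog can ingest, one note per line.

    Escaping HTML special characters keeps literal '<' and '&' in card
    text from being interpreted as markup when "Allow HTML in fields"
    is enabled during import.
    """
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t", lineterminator="\n",
                        quoting=csv.QUOTE_MINIMAL)
    for front, back in cards:
        writer.writerow([html.escape(front), html.escape(back)])
    return buf.getvalue()
```

For example, `to_anki_tsv([("a < b?", "yes & no")])` produces `"a &lt; b?\tyes &amp; no\n"`, which imports cleanly as a single two-field note.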

I tried using other models (Gemini 2.5 Pro and ChatGPT 4), but Claude (first Sonnet 3.7 and now 4) consistently produced the best results. The other models more often jumbled the card ordering, got subtle details wrong, or failed to follow my selection and formatting instructions.

Copy-pasting a lot of text around sucks. Once I saw the value of using LLMs to help create Anki cards in this way, I built a small web app to wrap the Anthropic API and streamline the prompting/checking/editing/exporting process. It's freely available here, but you will need to use your own API key. Using Claude Sonnet 4, creating cards for a 15-ish page chapter usually costs me between 10 and 15 cents of credits.
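That per-chapter figure is easy to sanity-check with back-of-the-envelope math. The prices below are assumptions (Sonnet's list price when I last checked: $3 per million input tokens, $15 per million output tokens); substitute current numbers from Anthropic's pricing page.

```python
# Assumed list prices, dollars per million tokens; check the current
# pricing page before relying on these.
PRICE_IN_PER_MTOK = 3.00
PRICE_OUT_PER_MTOK = 15.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one generation call at the assumed rates."""
    return (input_tokens * PRICE_IN_PER_MTOK
            + output_tokens * PRICE_OUT_PER_MTOK) / 1_000_000

# A 15-page PDF chapter plus prompt might run ~25k input tokens, and
# 10-20 cards a couple thousand output tokens:
#   estimate_cost(25_000, 2_000) -> 0.105
# which lines up with the 10-15 cents per chapter I see in practice.
```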

How the app improves on the manual workflow:

I've also added a text field option for input, so I can copy/paste material that isn't in PDF format. I don't use this non-PDF workflow super often, but sometimes it's helpful for material from web pages.

While this works well enough for my needs, there are some limitations.

Have LLM generated Anki cards actually helped me?

I think so.

I've used this workflow to read and create Anki cards for a large portion of the wonderfully written (and freely available) textbook Operating Systems: Three Easy Pieces. As a software engineer without a proper computer-science background, filling this gap in my knowledge has been great! I now have a much better mental model of how programs execute, and which responsibilities fall to the operating system versus the hardware.

When reading a textbook, if the important definitions and examples from earlier chapters are readily available in my memory, reading new chapters that build upon those earlier concepts becomes much more fruitful. There are far more hooks to hang new information on.

And from a psychological perspective, reading and creating cards for each new chapter yields the same kind of satisfaction as ticking off a to-do list item. This incentivizes me to read more often than I otherwise might.

I've also used this workflow for some language/library/tooling documentation, coding snippets and patterns, and a few math techniques. This has been helpful on the job. When the friction of recalling details is reduced, I find it much easier to "stay in the zone" while working.

There are a few other ways I'm using LLMs for learning, but this has had the highest return-on-investment.