Creating Anki cards with AI?
Do we even want to use LLMs for this?
When people talk about using LLMs in education, I think it's best to be highly skeptical. A lot of LLM output is slop, and if these tools are used as a way to avoid doing hard work, it's unlikely that much learning will occur.
However, no one has an infinite supply of time, focus, or willpower, so there are ways to apply these tools that'll be worthwhile, even if a use-case may not be "optimal" according to some abstract, idealized model of learning.
These are the guidelines I use when deciding if and how to utilize LLMs (or really any tooling or aids) in my learning:
- Remain aware that there is a tradeoff between effectiveness of learning and the subjective feeling of smoothness and ease. This can be difficult to do when it feels so much like you've understood something after having a chatbot explain it to you in a way that clicks.
- Try to use the tools in a way that enables you to exert cognitive effort in an effective way, not to replace the cognitive effort.
- If the time, willpower, or cognitive burden of a learning task is prohibitively high, it may be worth it to use LLM assistance to modify that task so that it reaches the threshold at which you are willing and able to do the task.
My Basic Anki Workflow
Using Claude to help create Anki cards is a use case that meets the above guidelines for me.
The value of spaced repetition systems like Anki is well-established. During my time as a student, when I would make the effort of creating Anki cards for subjects I was learning, my understanding of the material was obviously better than when I didn't. But creating a comprehensive set of Anki cards is a time-consuming upfront investment, even if the payoff down the road is well worth it. Once I left school and began working, I rarely made the time to create new Anki cards.
By using an LLM to help write Anki cards, I'd be shortchanging myself the cognitive benefits of selecting and wording the cards on my own. But in exchange, I'd be enabling the cognitive effort of reviewing the cards I wouldn't have otherwise created. Given that I still regularly read to learn things for my profession, this seemed like a worthwhile tradeoff to make.
Here's how I do things:
- Read a chapter or section of a book on material that I want to learn.
- Make sure I properly read and understand it. No skimming or glossing over details.
- I find that 12-18 pages is the sweet spot, but this varies based on the density of the material. Putting too much information into the model's context window degrades output quality and makes reviewing the output too time-consuming.
- I typically do this reading on my e-ink tablet, since I can easily write notes or mark up the margins if that helps me to digest tricky details.
- On my desktop computer, I save off a separate PDF of those pages that I've just read (one scripted way to do this is sketched after this list).
- The PDF should include the actual text, and not consist of scanned images of pages, since relying on an LLM's OCR won't work as effectively.
- Input a basic prompt with guidelines for creating effective flashcards, and provide the PDF document.
- It's important to specify that the scope of flashcard content should be limited to the material in the document, and to avoid making flashcards about content that isn't in the document. Otherwise, the output is likely to contain irrelevant or hallucinated details.
- I also provide some direction on the styling and formatting of cards.
- Go back over the pages that I read, and evaluate/edit the generated flashcards.
- Here, I'll usually delete a few of the cards, since they cover details that are unimportant or redundant.
- I'll occasionally make some minor edits to the cards that are worth keeping.
- Import the cards into my Anki deck. They go into my usual daily review rotation.
- For 12-18 pages of technical content, I typically end up with around 10-20 new cards.
- I try to read and create cards for 2-3 new chapters/sections per week. This feels like a natural, leisurely pace of reading. Adding new cards every single day quickly makes the review burden more than I'm willing to put up with.
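For anyone who'd rather script the page-extraction step than do it by hand, a rough sketch like the following works. It assumes the third-party pypdf package; the file names and page range are placeholders, not part of my actual tooling.

```python
# Sketch: carve the pages I just read out of the full textbook PDF.
# Assumes the third-party pypdf package; file names and the page range
# below are placeholders.
from pypdf import PdfReader, PdfWriter

reader = PdfReader("textbook.pdf")
writer = PdfWriter()

# Pages are zero-indexed; this grabs a ~15-page slice.
for page in reader.pages[120:135]:
    writer.add_page(page)

with open("chapter-extract.pdf", "wb") as f:
    writer.write(f)
```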
Some improvements to the workflow
My initial workflow was to do the above using the Claude chat interface. I'd copy/paste my prompt in, upload the PDF, then manually copy/paste the cards I wanted into Anki, one by one. This was fine in the beginning, but there's a lot of friction in doing things that way. In particular, the formatting that Claude generates doesn't always play nicely with what Anki expects.
I tried using other models (Gemini 2.5 Pro and ChatGPT 4), but Claude (first Sonnet 3.7 and now 4) consistently produced the best results. The other models more often jumbled the ordering, got subtle details wrong, or failed to follow my selection and formatting instructions.
Copy/pasting a lot of text around sucks. Once I saw the value of using LLMs to help create Anki cards in this way, I built a small web app to wrap the Anthropic API and streamline the prompting/checking/editing/exporting process. It's freely available here, but you will need to use your own API key. With Claude Sonnet 4.0, creating cards for a 15-ish page chapter usually costs me between 10 and 15 cents of credits.
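For a sense of what the core API call looks like, here's a minimal sketch using the Anthropic Python SDK. The model ID, file name, and prompt wording are illustrative stand-ins, not my exact prompts.

```python
# Minimal sketch of the prompting step: send the PDF plus flashcard
# guidelines to Claude. Model ID, file name, and prompt text are
# placeholders.
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("chapter-extract.pdf", "rb") as f:
    pdf_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

prompt = (
    "Create Anki flashcards from the attached pages. "
    "Only use material that actually appears in the document; do not add "
    "facts from outside it. Keep each card focused on a single idea, and "
    "format the output as Markdown with 'Front:' and 'Back:' labels."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "document",
                "source": {
                    "type": "base64",
                    "media_type": "application/pdf",
                    "data": pdf_b64,
                },
            },
            {"type": "text", "text": prompt},
        ],
    }],
)

print(response.content[0].text)
```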
How the app improves on the manual workflow:
- Management of multiple prompts: I use slightly tweaked instructions for the different sources I'm reading, so the app provides an easy way to save, edit, and switch between prompts.
- Making sure that the formatting is Anki-compatible: Anki cards use HTML tags for styling, but instructing Claude to directly generate well-formatted cards this way is inconsistent. Instead, the app instructs Claude to generate Markdown, and renders that Markdown into well-formatted HTML (there's a rough sketch of this step after this list).
- Simplifying the editing process: I added an integrated PDF Viewer alongside the rendered cards, so that the source material is readily visible during the checking/editing process.
- Exporting/importing cards: Once I'm done tweaking the generated cards, the full set is output as well-formed plaintext CSV, which can then be imported all at once into my Anki deck for review.
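As a rough illustration of the render-and-export steps, here's a sketch of converting Markdown card text into HTML and writing a CSV that Anki can import. It uses the third-party markdown package; the example card and file name are placeholders, not exactly what my app emits.

```python
# Sketch: render Markdown card text to HTML, then write a two-column
# (front, back) CSV for Anki import. Card content and file name are
# placeholders.
import csv
import markdown  # pip install markdown

cards = [
    {"front": "What does `fork()` return in the **child** process?",
     "back": "It returns `0` in the child and the child's PID in the parent."},
]

with open("cards.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for card in cards:
        front_html = markdown.markdown(card["front"])
        back_html = markdown.markdown(card["back"])
        writer.writerow([front_html, back_html])
```

When importing the file, Anki's "Allow HTML in fields" option needs to be enabled for the rendered formatting to show up on the cards.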
I've also added a text field option for input, so I can copy/paste material that isn't in PDF format. I don't use this non-PDF workflow super often, but sometimes it's helpful for material from web pages.
While this works well enough for my needs, there are some limitations:
- The LLM-generated output often stuffs too much information into a single card. Futzing with the prompts can only change things so much.
- Most of the material I've used this on is likely well represented in the LLM's training corpus and is straightforward knowledge. For particularly complex material (like newly published academic papers), the LLM might not be able to create worthwhile Anki cards.
- I also haven't made any serious attempts to apply this to material outside of the technical disciplines of computer engineering and mathematics, so I can't say if this would work well outside of these contexts. I wouldn't be surprised if Claude gets totally tripped up by humanities readings and produces a bunch of slop cards.
After I created this app to fit my workflow, I did a search and found a handful of other tools people have created that do something similar. Some appear to have fancier features, but I'm not sure if they're useful for my needs (and I don't want to pay for some subscription/credits just to use their product). I've kept my tool simple to focus on managing prompts, easing editing, and making sure the import/export process just works.
Have LLM generated Anki cards actually helped me?
I think so.
I've used this workflow to read and create Anki cards for a large portion of the wonderfully written (and freely available) textbook Operating Systems: Three Easy Pieces. As a software engineer without a proper computer-science background, filling this gap in my knowledge has been great! I now have a much better mental model of how programs execute, and what responsibilities the operating system and hardware handle.
When reading a textbook, if the important definitions and examples from earlier chapters are readily available in my memory, reading new chapters that build upon those earlier concepts becomes much more fruitful. There are far more hooks to hang new information on.
And from a psychological perspective, reading and creating cards for each new chapter yields the same kind of satisfaction as ticking off a todo list item. This incentivizes me to read more often than I otherwise might.
I've also used this workflow for some language/library/tooling documentation, coding snippets and patterns, and a few math techniques. This has been helpful on the job. When the friction of retrieval is reduced, I find that I can more effectively operate and "stay in the zone" of my execution mode.