Here are a few resources if you're curious about the tools we use behind the scenes:
Word embeddings
What it is: A way to convert words and phrases into lists of numbers, so computers can compare them based on meaning rather than just matching letters.
Think of it like: Plotting every word or concept on a giant map, where similar ideas are placed closer together.
Learn more:
The Illustrated Word2Vec by Jay Alammar
Google’s Introduction to Embeddings (TensorFlow)
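To make the "map of meaning" idea concrete, here is a minimal sketch in Python. The three-dimensional vectors are made up for illustration (real embeddings have hundreds of dimensions, learned from data); the similarity measure, cosine similarity, is the standard way to compare embedding vectors.

```python
import math

# Toy 3-dimensional "embeddings" — these numbers are invented for
# illustration. Real models learn vectors with hundreds of dimensions.
embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.85, 0.75, 0.2],
    "car": [0.1, 0.2, 0.95],
}

def cosine_similarity(a, b):
    """Score how similar two vectors are by the angle between them
    (1.0 means pointing the same way, 0.0 means unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "cat" and "dog" sit close together on the map; "car" is far away.
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # close to 1
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # much lower
```

The point of the sketch: once words are vectors, "similar meaning" becomes a geometry question the computer can answer with arithmetic.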
Clustering (k-means)
What it is: A method that groups similar things together, like sorting books into piles where each pile shares a common theme.
Think of it like: Asking the computer to find the most natural groupings in a list of topics, without being told what the groups are.
Learn more:
Khan Academy: Clustering with K-Means
StatQuest’s YouTube Explanation of K-Means (10 min)
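The "find the natural piles" loop can be sketched in a few lines of Python. This is a deliberately simplified one-dimensional k-means (the sample numbers are made up); real libraries like scikit-learn handle many dimensions and smarter initialization.

```python
import random

def kmeans(points, k, iterations=10, seed=0):
    """Minimal k-means: repeatedly assign each point to its nearest
    center, then move each center to the average of its group."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            groups[nearest].append(p)
        # Move each center to the mean of its group (keep it if empty).
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Two obvious "piles" of numbers: one near 1, one near 10.
points = [0.9, 1.0, 1.1, 9.8, 10.0, 10.2]
centers, groups = kmeans(points, k=2)
print(sorted(round(c, 1) for c in centers))  # roughly [1.0, 10.0]
```

Notice that we never told the algorithm where the groups were, only how many to look for; it discovers the two piles on its own.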
Neural networks
What it is: A kind of computer program inspired by how brains work. It "learns" by looking at lots of examples and adjusting itself to make better predictions.
Think of it like: Teaching a toddler to recognize cats and dogs—only the toddler is made of math.
Learn more:
3Blue1Brown: What is a Neural Network?
Neural Networks and Deep Learning by Michael Nielsen (Ch. 1)
Language models
What it is: A type of AI that predicts which words come next in a sentence. It can be trained to write, summarize, or even answer questions.
Think of it like: A really advanced autocomplete—but trained on billions of words.
Learn more:
Distill.pub: Visualizing a Neural Language Model
Google’s AI Blog: What are Language Models?
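The "advanced autocomplete" idea can be sketched with the simplest possible language model: count which word follows which in some text, then predict the most common follower. The sample sentence is made up; real large language models use neural networks rather than raw counts, but the core task, predicting the next word, is the same.

```python
from collections import Counter, defaultdict

# A tiny "language model": tally which word follows which.
text = "the cat sat on the mat the cat ate the fish"
words = text.split()

followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the text."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" appears after "the" most often
```

Train the same idea on billions of words with a neural network instead of a counting table, and you get something much closer to the models behind modern chatbots.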