
What is Tokenization and How does it work?

Oct 31, 2024 · 4 Min Read
Written by Ajay Patel

Tokenization is a fundamental process in Natural Language Processing (NLP) and plays a crucial role in preparing text data for machine learning models. This blog post will break down what tokenization is, why it's important, and how it works with a concrete example.

What is Tokenization?

Tokenization is the process of splitting text into smaller, manageable pieces called tokens. These tokens can be words, subwords, characters, or other units depending on the tokenization strategy. The purpose of tokenization is to transform text into a format that can be effectively processed by machine learning algorithms.

Why is Tokenization Important?

Before any NLP model can analyze and understand text, it needs to be converted into a numerical format. Tokenization is the first step in this conversion process. By breaking down text into tokens, we enable models to handle, learn from, and make predictions based on textual data.

How Tokenization Works

Let’s dive into a practical example to understand tokenization better. Consider the sentence:

"f22 Labs: A software studio based out of Chennai. We are the rocket fuel for other startups across the world, powering them with extremely high-quality software. We help entrepreneurs build their vision into beautiful software products."

Here’s a step-by-step breakdown of how tokenization works:

Step 1: Splitting the Sentence into Tokens

The first step in tokenization is breaking the sentence into smaller units. Depending on the tokenizer used, these tokens can be:


Words: ["f22", "Labs", ":", "A", "software", "studio", "based", "out", "of", "Chennai", ".", "We", "are", "the", "rocket", "fuel", "for", "other", "startups", "across", "the", "world", ",", "powering", "them", "with", "extremely", "high-quality", "software", ".", "We", "help", "entrepreneurs", "build", "their", "vision", "into", "beautiful", "software", "products", "."]

Subwords: The tokens might be more granular. For example: ["f22", "Lab", "s", ":", "A", "software", "studio", "based", "out", "of", "Chennai", ".", "We", "are", "the", "rock", "et", "fuel", "for", "other", "start", "ups", "across", "the", "world", ",", "power", "ing", "them", "with", "extremely", "high", "-", "quality", "software", ".", "We", "help", "entrepreneur", "s", "build", "their", "vision", "into", "beautiful", "software", "products", "."]

Characters: For character-level tokenization, the sentence would be split into individual characters: ["f", "2", "2", " ", "L", "a", "b", "s", ":", " ", "A", " ", "s", "o", "f", "t", "w", "a", "r", "e", " ", "s", "t", "u", "d", "i", "o", " ", "b", "a", "s", "e", "d", " ", "o", "u", "t", " ", "o", "f", " ", "C", "h", "e", "n", "n", "a", "i", ".", " ", "W", "e", " ", "a", "r", "e", " ", "t", "h", "e", " ", "r", "o", "c", "k", "e", "t", " ", "f", "u", "e", "l", " ", "f", "o", "r", " ", "o", "t", "h", "e", "r", " ", "s", "t", "a", "r", "t", "u", "p", "s", " ", "a", "c", "r", "o", "s", "s", " ", "t", "h", "e", " ", "w", "o", "r", "l", "d", ",", " ", "p", "o", "w", "e", "r", "i", "n", "g", " ", "t", "h", "e", "m", " ", "w", "i", "t", "h", " ", "e", "x", "t", "r", "e", "m", "e", "l", "y", " ", "h", "i", "g", "h", "-", "q", "u", "a", "l", "i", "t", "y", " ", "s", "o", "f", "t", "w", "a", "r", "e", ".", " ", "W", "e", " ", "h", "e", "l", "p", " ", "e", "n", "t", "r", "e", "p", "r", "e", "n", "e", "u", "r", "s", " ", "b", "u", "i", "l", "d", " ", "t", "h", "e", "i", "r", " ", "v", "i", "s", "i", "o", "n", " ", "i", "n", "t", "o", " ", "b", "e", "a", "u", "t", "i", "f", "u", "l", " ", "s", "o", "f", "t", "w", "a", "r", "e", " ", "p", "r", "o", "d", "u", "c", "t", "s", "."]
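To make the contrast concrete, here is a minimal Python sketch of the word-level and character-level splits. The regular expression is a hand-written stand-in for illustration only; real tokenizers (BPE, WordPiece, and similar) learn their splits from data, which is how subword pieces like "start" + "ups" arise.

import re

sentence = "f22 Labs: A software studio based out of Chennai."

# Word-level: keep hyphenated words together and treat punctuation
# as separate tokens. This regex is a toy rule, not a real tokenizer.
word_tokens = re.findall(r"\w+(?:-\w+)*|[^\w\s]", sentence)
print(word_tokens)
# ['f22', 'Labs', ':', 'A', 'software', 'studio', 'based', 'out', 'of', 'Chennai', '.']

# Character-level: every character, including spaces, becomes a token.
char_tokens = list(sentence)
print(char_tokens[:12])
# ['f', '2', '2', ' ', 'L', 'a', 'b', 's', ':', ' ', 'A', ' ']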

Step 2: Mapping Tokens to Numerical IDs

Once the sentence is tokenized, each token is mapped to a unique numerical ID using a vocabulary. The vocabulary is a predefined mapping that associates each token with a specific ID. For example:

Vocabulary:

{"f22": 1501, "Labs": 1022, ":": 3, "A": 4, "software": 2301, "studio": 2302, "based": 2303, "out": 2304, "of": 2305, "Chennai": 2306, ".": 5, "We": 6, "are": 7, "the": 8, "rocket": 2307, "fuel": 2308, "for": 2309, "other": 2310, "startups": 2311, "across": 2312, "world": 2313, ",": 9, "powering": 2314, "them": 2315, "with": 2316, "extremely": 2317, "high-quality": 2318, "products": 2319, "entrepreneurs": 2320, "build": 2321, "their": 2322, "vision": 2323, "into": 2324, "beautiful": 2325}

Token IDs:

[1501, 1022, 3, 4, 2301, 2302, 2303, 2304, 2305, 2306, 5, 6, 7, 8, 2307, 2308, 2309, 2310, 2311, 2312, 8, 2313, 9, 2314, 2315, 2316, 2317, 2318, 2301, 5, 6, 2326, 2320, 2321, 2322, 2323, 2324, 2325, 2301, 2319, 5]

So the original sentence is represented as this sequence of token IDs, with repeated tokens such as "the", "We", "software", and "." mapping to the same ID each time they appear.
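In code, this lookup is essentially a dictionary access. Here is a minimal Python sketch using an excerpt of the vocabulary above; the reserved ID 0 for unknown (out-of-vocabulary) tokens is an assumption for illustration:

vocab = {"f22": 1501, "Labs": 1022, ":": 3, "A": 4, "software": 2301}
UNK_ID = 0  # assumed ID reserved for out-of-vocabulary tokens

tokens = ["f22", "Labs", ":", "A", "software"]

# Look up each token, falling back to the unknown-token ID.
token_ids = [vocab.get(token, UNK_ID) for token in tokens]
print(token_ids)  # [1501, 1022, 3, 4, 2301]

Real tokenizers avoid most unknown tokens by falling back to subword or character pieces, which is one reason subword tokenization is so widely used.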

Real-World Tokenization

To see how a real tokenizer handles the example sentence, you can use OpenAI's tokenizer tool:

1. Visit the Tokenizer Tool: Open OpenAI's Tokenizer page in your browser.


2. Input Your Sentence: Enter your example sentence in the text box. 

3. View Tokens and IDs: The tool displays the tokens and their corresponding token IDs. Each word or subword is split according to the GPT tokenizer's rules, so you can see exactly how the sentence breaks down.
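If you would rather inspect tokens programmatically, here is a minimal sketch using OpenAI's open-source tiktoken library (installed with pip install tiktoken); "cl100k_base" is the encoding used by models such as GPT-4 and GPT-3.5-turbo:

import tiktoken

# Load the encoding used by recent OpenAI chat models.
enc = tiktoken.get_encoding("cl100k_base")

text = "f22 Labs: A software studio based out of Chennai."
token_ids = enc.encode(text)                   # text -> token IDs
tokens = [enc.decode([i]) for i in token_ids]  # each ID back to its text

print(token_ids)
print(tokens)
print(enc.decode(token_ids))  # round-trips to the original sentence

Note that a real tokenizer produces subword pieces, often with a leading space attached (for example " software" rather than "software"), so the token count will usually differ from a simple word split.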

[Screenshot: the example sentence broken into tokens and token IDs in OpenAI's tokenizer tool]

Suggested Reads: What is a Large Language Model (LLM)?

Conclusion

Tokenization is the crucial first step in transforming raw text into a format that machine learning models can understand. By breaking down sentences into tokens and converting them to numerical IDs, we prepare text data for further processing and analysis. Understanding how tokenization works is essential for anyone working with NLP tasks and models.

Ajay Patel

Hi, I am an AI engineer with 3.5 years of experience, passionate about building intelligent systems that solve real-world problems through cutting-edge technology and innovative solutions.

