Markdown lexical analysis and token representation.
This module provides the core lexical analysis functionality for parsing Markdown text into a structured token stream. It handles block-level elements such as headings and lists, as well as inline formatting such as emphasis and links.
The lexer maintains proper nesting of elements and handles edge cases in delimiter matching and whitespace according to the CommonMark specification.
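In typical use, the lexer is constructed from the raw Markdown string and driven once to produce the token stream. The sketch below assumes a Lexer::new constructor and a parse method returning Result<Vec<Token>, LexerError>; consult the Lexer documentation for the exact signatures.
use markdown2pdf::markdown::Lexer;
// Assumed API: Lexer::new(String) plus parse() -> Result<Vec<Token>, LexerError>.
let input = "# Title\n\nSome *italic* text with a [link](https://example.com).".to_string();
let mut lexer = Lexer::new(input);
match lexer.parse() {
    Ok(tokens) => println!("parsed {} top-level tokens", tokens.len()),
    Err(_) => eprintln!("lexing failed"),
}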
§Examples
use markdown2pdf::markdown::Token;
// Heading token with nested content (levels 1-6 are valid)
let heading = Token::Heading(vec![Token::Text("Title".to_string())], 1);
assert!(matches!(heading, Token::Heading(_, 1)));
// Emphasis token with nested content (levels 1-3 are valid)
let emphasis = Token::Emphasis {
    level: 1,
    content: vec![Token::Text("italic".to_string())],
};
assert!(matches!(emphasis, Token::Emphasis { level: 1, .. }));
// Link token with text and URL
let link = Token::Link(
"Click here".to_string(),
"https://example.com".to_string()
);
assert!(matches!(link, Token::Link(_, _)));
The nested Token structure looks like:
Token::Heading
└── Vec<Token>
    └── Token::Text("Title")
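Because token content is nested, consumers usually walk the tree recursively. The helper below, collect_text, is a hypothetical illustration (not part of the crate) that gathers plain text from the variants used in the examples above and ignores everything else.
use markdown2pdf::markdown::Token;
// Collects raw text from the nested token variants shown in the examples.
fn collect_text(token: &Token, out: &mut String) {
    match token {
        Token::Text(s) => out.push_str(s),
        Token::Heading(children, _level) => {
            for child in children {
                collect_text(child, out);
            }
        }
        Token::Emphasis { content, .. } => {
            for child in content {
                collect_text(child, out);
            }
        }
        Token::Link(text, _url) => out.push_str(text),
        // Remaining variants are not needed for this sketch.
        _ => {}
    }
}
let heading = Token::Heading(vec![Token::Text("Title".to_string())], 1);
let mut text = String::new();
collect_text(&heading, &mut text);
assert_eq!(text, "Title");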
Structs§
- Lexer - A lexical analyzer that converts Markdown text into a sequence of tokens. Handles nested structures and special Markdown syntax elements while maintaining proper context and state during parsing.
Enums§
- LexerError - Error types that can occur during lexical analysis.
- ParseContext - Parsing context that determines which tokens are valid in the current location.
- Token - Represents the different types of tokens that can be parsed from Markdown text. Each variant captures both the semantic meaning and the content/metadata needed to properly render the element.