Module markdown

Markdown lexical analysis and token representation.

This module provides the core lexical analysis functionality for parsing Markdown text into a structured token stream. It handles both block-level elements, such as headings and lists, and inline formatting, such as emphasis and links.

The lexer maintains proper nesting of elements and handles edge cases around delimiter matching and whitespace according to the CommonMark spec.
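
A minimal end-to-end sketch of driving the lexer is shown below. The constructor and method names (Lexer::new, parse) and the Result<Vec<Token>, LexerError> return type are assumptions inferred from the items listed on this page; check the Lexer documentation for the exact signatures.

use markdown2pdf::markdown::Lexer;

// Assumed API: Lexer::new takes the source text and parse() yields either a
// token stream or a LexerError. Adjust to the actual signatures if they differ.
let mut lexer = Lexer::new("# Title\n\nSome *italic* text.".to_string());
match lexer.parse() {
    Ok(tokens) => println!("parsed {} top-level tokens", tokens.len()),
    Err(_) => eprintln!("lexing failed"),
}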

§Examples

use markdown2pdf::markdown::Token;

// Heading token with nested content (levels 1-6 are valid)
let heading = Token::Heading(vec![Token::Text("Title".to_string())], 1);
assert!(matches!(heading, Token::Heading(_, 1)));

// Emphasis token with nested content (levels 1-3 are valid)
let emphasis = Token::Emphasis {
    level: 1,
    content: vec![Token::Text("italic".to_string())]
};
assert!(matches!(emphasis, Token::Emphasis { level: 1, .. }));

// Link token with text and URL
let link = Token::Link(
    "Click here".to_string(),
    "https://example.com".to_string()
);
assert!(matches!(link, Token::Link(_, _)));

The nested token structure looks like this:

Token::Heading
└── Vec
    ├── Token::Text
    ├── Token::Emphasis
    │   └── Vec
    │       └── Token::Text
    └── Token::Link
        ├── text: String
        └── url: String
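
Putting the pieces together, the tree above can be assembled directly from the variants shown in the examples:

let nested = Token::Heading(
    vec![
        Token::Text("See ".to_string()),
        Token::Emphasis {
            level: 1,
            content: vec![Token::Text("the".to_string())],
        },
        Token::Link("docs".to_string(), "https://example.com".to_string()),
    ],
    1,
);
assert!(matches!(nested, Token::Heading(_, 1)));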

Structs§

Lexer
A lexical analyzer that converts Markdown text into a sequence of tokens. Handles nested structures and special Markdown syntax elements while maintaining proper context and state during parsing.

Enums§

LexerError
Error types that can occur during lexical analysis.
ParseContext
Parsing context that determines which tokens are valid at the current location.
Token
Represents the different types of tokens that can be parsed from Markdown text. Each variant captures both the semantic meaning and the content or metadata needed to properly render the element; a short consumption sketch follows this list.
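
As a closing sketch, tokens can be consumed with an ordinary match. Only the variants demonstrated in the examples above are handled here; any other Token variants (not reproduced on this page) fall through the wildcard arm.

use markdown2pdf::markdown::Token;

// Print a rough outline of a token tree, recursing into nested content.
fn describe(token: &Token) {
    match token {
        Token::Heading(children, level) => {
            println!("heading (level {}) with {} children", level, children.len());
            children.iter().for_each(describe);
        }
        Token::Emphasis { level, content } => {
            println!("emphasis (level {}) with {} children", level, content.len());
            content.iter().for_each(describe);
        }
        Token::Link(text, url) => println!("link '{}' -> {}", text, url),
        Token::Text(text) => println!("text: {:?}", text),
        // The full variant list is not reproduced on this page.
        _ => println!("other token"),
    }
}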