Files
tamigo-cli/venv/lib/python3.12/site-packages/pygments/__pycache__/lexer.cpython-312.pyc

329 lines
38 KiB

[Compiled CPython 3.12 bytecode of pygments/lexer.py. The binary payload is
not text and is omitted; only the embedded strings (docstrings, names and
messages) are recoverable, arranged here as an outline of the module.]

Module docstring:
pygments.lexer
~~~~~~~~~~~~~~
Base lexer classes.
:copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.

Imports: re, sys, time; pygments.filter (apply_filters, Filter);
pygments.filters (get_filter_by_name); pygments.token (Error, Text, Other,
Whitespace, _TokenType); pygments.util (get_bool_opt, get_int_opt,
get_list_opt, make_analysator, Future, guess_decode);
pygments.regexopt (regex_opt)

__all__ = ['Lexer', 'RegexLexer', 'ExtendedRegexLexer', 'DelegatingLexer',
           'LexerContext', 'include', 'inherit', 'bygroups', 'using',
           'this', 'default', 'words', 'line_re']

line_re matches '.*?\n'; _encoding_map maps byte-order marks to 'utf-8',
'utf-32', 'utf-32be', 'utf-16' and 'utf-16be' for encoding guessing.

LexerMeta(type):
    This metaclass automagically converts ``analyse_text`` methods into
    static methods which always return float values.

Lexer(metaclass=LexerMeta):
Lexer for a specific language.
See also :doc:`lexerdevelopment`, a high-level guide to writing
lexers.
Lexer classes have attributes used for choosing the most appropriate
lexer based on various criteria.
.. autoattribute:: name
:no-value:
.. autoattribute:: aliases
:no-value:
.. autoattribute:: filenames
:no-value:
.. autoattribute:: alias_filenames
.. autoattribute:: mimetypes
:no-value:
.. autoattribute:: priority
Lexers included in Pygments should have two additional attributes:
.. autoattribute:: url
:no-value:
.. autoattribute:: version_added
:no-value:
Lexers included in Pygments may have additional attributes:
.. autoattribute:: _example
:no-value:
You can pass options to the constructor. The basic options recognized
by all lexers and processed by the base `Lexer` class are:
``stripnl``
Strip leading and trailing newlines from the input (default: True).
``stripall``
Strip all leading and trailing whitespace from the input
(default: False).
``ensurenl``
Make sure that the input ends with a newline (default: True). This
is required for some lexers that consume input linewise.
.. versionadded:: 1.3
``tabsize``
If given and greater than 0, expand tabs in the input (default: 0).
``encoding``
If given, must be an encoding name. This encoding will be used to
convert the input string to Unicode, if it is not already a Unicode
string (default: ``'guess'``, which uses a simple UTF-8 / Locale /
Latin1 detection). Can also be ``'chardet'`` to use the chardet
library, if it is installed.
``inencoding``
Overrides the ``encoding`` if given.

Lexer.__init__(**options):
This constructor takes arbitrary options as keyword arguments.
Every subclass must first process its own options and then call
the `Lexer` constructor, since it processes the basic
options like `stripnl`.
An example looks like this:
.. sourcecode:: python
def __init__(self, **options):
self.compress = options.get('compress', '')
Lexer.__init__(self, **options)
As these options must all be specifiable as strings (due to the
command line usage), there are various utility functions
available to help with that, see `Utilities`_.
    (reads stripnl, stripall, ensurenl, tabsize, encoding='guess' and
    inencoding from the options and registers the configured filters)

Lexer.__repr__()

Lexer.add_filter(filter_, **options):
Add a new stream filter to this lexer.

Lexer.analyse_text(text):
A static method which is called for lexer guessing.
It should analyse the text and return a float in the range
from ``0.0`` to ``1.0``. If it returns ``0.0``, the lexer
will not be selected as the most probable one; if it returns
``1.0``, it will be selected immediately. This is used by
`guess_lexer`.
The `LexerMeta` metaclass automatically wraps this function so
that it works like a static method (no ``self`` or ``cls``
parameter) and the return value is automatically converted to
`float`. If the return value is an object that is boolean `False`
it's the same as if the return value was ``0.0``.
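``analyse_text`` scores feed into pygments' ``guess_lexer()`` helper. A small sketch of how the scoring is consumed (assumes the pygments package is installed; the sample snippet is invented for illustration):

```python
from pygments.lexers import guess_lexer

# A shebang is a strong signal: the Python lexer's analyse_text returns
# a high score for it, so guess_lexer picks a Python lexer here.
lexer = guess_lexer('#!/usr/bin/env python\nprint("hi")\n')
print(lexer.name)
```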

Lexer._preprocess_lexer_input(text):
    Apply preprocessing such as decoding the input, removing BOM and
    normalizing newlines.
    (with encoding='chardet' but the library missing, it raises:
    "To enable chardet encoding guessing, please install the chardet
    library from http://chardet.feedparser.org/")

Lexer.get_tokens(text, unfiltered=False):
This method is the basic interface of a lexer. It is called by
the `highlight()` function. It must process the text and return an
iterable of ``(tokentype, value)`` pairs from `text`.
Normally, you don't need to override this method. The default
implementation processes the options recognized by all lexers
(`stripnl`, `stripall` and so on), and then yields all tokens
from `get_tokens_unprocessed()`, with the ``index`` dropped.
If `unfiltered` is set to `True`, the filtering mechanism is
bypassed even if filters are defined.
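The ``get_tokens()`` contract above can be exercised with any concrete lexer. A minimal sketch (assumes the pygments package is installed; ``PythonLexer`` is chosen purely as an example of a concrete subclass):

```python
from pygments.lexers import PythonLexer

# stripall strips surrounding whitespace; ensurenl (default True)
# guarantees a trailing newline on the preprocessed input.
lexer = PythonLexer(stripall=True)
tokens = list(lexer.get_tokens('  x = 1  '))

# Lexer contract: concatenating all token values reproduces the
# preprocessed text exactly.
print(''.join(value for _, value in tokens))
```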

Lexer.get_tokens_unprocessed(text):
This method should process the text and return an iterable of
``(index, tokentype, value)`` tuples where ``index`` is the starting
position of the token within the input text.
It must be overridden by subclasses. It is recommended to
implement it as a generator to maximize effectiveness.
    (the base class raises NotImplementedError)


DelegatingLexer(Lexer):
This lexer takes two lexers as arguments: a root lexer and
a language lexer. First everything is scanned using the language
lexer, afterwards all ``Other`` tokens are lexed using the root
lexer.
The lexers from the ``template`` lexer package use this base lexer.
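The pattern the template lexers use can be sketched as follows (assumes pygments is installed; the ``ToyHtmlPhpLexer`` name and combination are invented for illustration, mirroring pygments' real HTML+PHP template lexer):

```python
from pygments.lexer import DelegatingLexer
from pygments.lexers import HtmlLexer, PhpLexer

class ToyHtmlPhpLexer(DelegatingLexer):
    """Toy delegating lexer: PHP scans first, HTML re-lexes Other."""
    name = 'HTML+PHP (toy)'

    def __init__(self, **options):
        # PhpLexer emits Other for text outside <?php ... ?>,
        # which is then handed to the HtmlLexer root lexer.
        super().__init__(HtmlLexer, PhpLexer, **options)

toks = list(ToyHtmlPhpLexer().get_tokens('<b><?php echo 1; ?></b>'))
```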
DelegatingLexer.__init__(_root_lexer, _language_lexer, _needle=Other,
**options); get_tokens_unprocessed() buffers Other tokens and re-lexes
them with the root lexer via do_insertions().

include(str):
Indicates that a state should include rules from another state.

_inherit:
    Indicates that a state should inherit from its superclass.
inherit = _inherit()

combined(tuple):
Indicates a state combined from multiple states.

_PseudoMatch(start, text):
A pseudo match object constructed from a string.
    (implements start(), end(), group(), groups(), groupdict())

bygroups(*args):
Callback that yields multiple actions for each group in the match.
    (the returned callback yields one token per match group)

_This:
Special singleton used for indicating the caller class.
Used by ``using``.
this = _This()

using(_other, **kwargs):
Callback that processes the match with a different lexer.
The keyword arguments are forwarded to the lexer, except `state` which
is handled separately.
`state` specifies the state that the new lexer will start in, and can
be an enumerable such as ('root', 'inline', 'string') or a simple
string which is assumed to be on top of the root state.
Note: For that to work, `_other` must not be an `ExtendedRegexLexer`.
    ('state' may be a list/tuple used as the initial stack, or a string
    pushed on top of 'root')

default(state):
Indicates a state or state action (e.g. #pop) to apply.
For example default('#pop') is equivalent to ('', Token, '#pop')
Note that state tuples may be used as well.
.. versionadded:: 2.0

words(words, prefix='', suffix=''):
Indicates a list of literal words that is transformed into an optimized
regex that matches any of the words.
.. versionadded:: 2.0
    (words.get() returns regex_opt(self.words, prefix=..., suffix=...))

RegexLexerMeta(LexerMeta):
Metaclass for RegexLexer, creates the self._tokens attribute from
self.tokens on the first instantiation.
RegexLexerMeta._process_regex(regex, rflags, state):
    Preprocess the regular expression component of a token definition.
RegexLexerMeta._process_token(token):
    Preprocess the token component of a token definition
    ("token type must be simple type or callable").
RegexLexerMeta._process_new_state(new_state, unprocessed, processed):
    Preprocess the state transition action of a token definition
    ('#pop', '#pop:n', '#push', combined() states and state tuples).
RegexLexerMeta._process_state(unprocessed, processed, state):
    Preprocess a single state definition (reports "uncompilable regex
    ... in state ... of ..." for bad rules).
RegexLexerMeta.process_tokendef(name, tokendefs=None):
    Preprocess a dictionary of token definitions.
RegexLexerMeta.get_tokendefs():
    Merge tokens from superclasses in MRO order, returning a single
    tokendef dictionary.

    Any state that is not defined by a subclass will be inherited
    automatically. States that *are* defined by subclasses will, by
    default, override that state in the superclass. If a subclass wishes
    to inherit definitions from a superclass, it can use the special
    value "inherit", which will cause the superclass' state definition
    to be included at that point in the state.
RegexLexerMeta.__call__(*args, **kwds):
    Instantiate cls after preprocessing its token definitions.


RegexLexer(Lexer, metaclass=RegexLexerMeta):
Base for simple stateful regular expression-based lexers.
Simplifies the lexing process so that you need only
provide a list of states and regular expressions.
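A minimal subclass showing the ``tokens`` table together with the ``words()`` and ``bygroups()`` helpers described above (a sketch, assuming pygments is installed; the 'mini' grammar is invented for illustration):

```python
from pygments.lexer import RegexLexer, bygroups, words
from pygments.token import Keyword, Name, Number, Operator, Whitespace

class MiniLexer(RegexLexer):
    """Toy lexer for a hypothetical 'mini' language."""
    name = 'Mini'
    aliases = ['mini']

    tokens = {
        'root': [
            (r'\s+', Whitespace),
            # words() builds an optimized alternation via regex_opt()
            (words(('let', 'if', 'else'), suffix=r'\b'), Keyword),
            # bygroups() assigns each regex group its own token type
            (r'(\w+)(\s*)(=)', bygroups(Name.Variable, Whitespace, Operator)),
            (r'\d+', Number.Integer),
            (r'\w+', Name),
        ],
    }

toks = list(MiniLexer().get_tokens('let x = 42'))
```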
    flags = re.MULTILINE; tokens = {}

RegexLexer.get_tokens_unprocessed(text, stack=('root',)):
Split ``text`` into (tokentype, text) pairs.
``stack`` is the initial stack (default: ``['root']``)
    (at a position where no rule matches, '\n' resets the stack to
    ['root'] and is yielded as Whitespace; any other character is
    yielded as Error)


LexerContext(text, pos, stack=None, end=None):
A helper object that holds lexer position data.
    (__repr__ prints "LexerContext(<text>, <pos>, <stack>)")


ExtendedRegexLexer(RegexLexer):
A RegexLexer that uses a context object to store its state.

ExtendedRegexLexer.get_tokens_unprocessed(text=None, context=None):
Split ``text`` into (tokentype, text) pairs.
If ``context`` is given, use this lexer context instead.

do_insertions(insertions, tokens):
Helper for lexers which must combine the results of several
sublexers.
``insertions`` is a list of ``(index, itokens)`` pairs.
Each ``itokens`` iterable should be inserted at position
``index`` into the token stream given by the ``tokens``
argument.
The result is a combined token stream.
TODO: clean up the code here.
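The splicing behaviour can be seen on a hand-built token stream (a sketch assuming pygments is installed; the sample stream and marker value are invented):

```python
from pygments.lexer import do_insertions
from pygments.token import Comment, Text

# One token covering positions 0..6, with a comment to splice in at 3.
stream = [(0, Text, 'abcdef')]
ins = [(3, [(0, Comment, '<HERE>')])]

# do_insertions splits the Text token at the insertion point and
# re-indexes everything after the inserted tokens.
out = list(do_insertions(ins, iter(stream)))
```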

ProfilingRegexLexerMeta(RegexLexerMeta):
    Metaclass for ProfilingRegexLexer, collects regex timing info.
    (wraps each compiled regex in a match_func that accumulates call
    counts and total time in cls._prof_data)

ProfilingRegexLexer(RegexLexer, metaclass=ProfilingRegexLexerMeta):
    Drop-in replacement for RegexLexer that does profiling of its
    regexes.
    (after lexing, prints "Profiling result for %s lexing %d chars in
    %.3f ms" plus a table with ncalls, tottime and percall per regex)
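Profiling is transparent to callers: the token stream is unchanged, and the timing table goes to stdout when lexing finishes. A sketch (pygments assumed installed; the one-rule grammar is invented):

```python
from pygments.lexer import ProfilingRegexLexer
from pygments.token import Text

class ProfMini(ProfilingRegexLexer):
    """Toy profiled lexer: one rule, whole lines as Text."""
    name = 'ProfMini'
    tokens = {
        'root': [
            (r'.+\n', Text),
        ],
    }

# Yields (index, tokentype, value) tuples, then prints the profile.
toks = list(ProfMini().get_tokens_unprocessed('hello\n'))
```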