New prototypes for the implementation of the lexical analysis:

Identifying the elements of the input string (the lexical analysis that recognizes strings, operators, numerical values and characters with special meaning) inside the same routine as the syntax analysis leads to very complex code. To make the code more readable, the idea is to move this work into separate routines (as the Yacc & Lex tools do). This does not change the overall functionality; it is purely a code redesign. Two scanner prototypes are therefore provided here, together with some test and performance code. So far I have only verified that they work correctly, and the small test patterns did not show a noticeable difference in performance. For the time being I provide them "as is" in case someone is interested; their integration into the parsers will follow.

Currently the analysis is structured as follows:

    akt = Eingang[indexEin++];
    #ifndef EXT_FOR
    while (akt != '\0' && akt != '}') {
    #else
    while (akt != '\0' && akt != '}' && indexEin < to) {
    #endif
        if (akt == 40) {                                      /* '(' */
        } else if (akt == 41) {                               /* ')' */
        } else if ((akt >= 48 && akt <= 57) || akt == '.') {  /* 0-9 . */
        } else if (akt == '+' ||                              /* 43 */
                   akt == '-' ||                              /* 45 */
                   akt == '*' ||                              /* 42 */
                   akt == '/' ||                              /* 47 */
                   akt == '^') {
        } else if ((akt >= 65 && akt <= 90) || (akt >= 97 && akt <= 122)) {  /* letter A-Z a-z */
            if (0 == strcmp("sqrt", Funktion)) {
            } else if (0 == strcmp("exp", Funktion)) {
            ...
            } else if (0 == strcmp("if", Funktion)) {
            } else if (0 == strcmp("while", Funktion)) {
            ...
            } else {

With the separate lexical analysis it would look like this:

    while (-1 != token && indexEin < to) {
        switch (yylex()) {
        case openexpr:
        case closeexpr:
        case integer:
        case floating:
        case operator:
        case string:
            if (0 == strncmp("sqrt", yyval.strval.text, 4)) {
            } else if (0 == strncmp("if", yyval.strval.text, 2)) {
        case endcode:
            // To leave the loop and the routine.
            token = -1;
            break;
        case semicol:
            /* Empty the buffer up to the position that existed when parsen was called */
            break;
        default:
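
To make the intended split more concrete, the following is a minimal sketch of what such a separate scanner routine could look like. It is not one of the two prototypes mentioned above: it only reuses the token names (openexpr, closeexpr, integer, floating, operator, string, semicol, endcode), the yylex()/yyval convention from the excerpt and the buffer names Eingang/indexEin from the existing parsers. The token numbering, the value union and the buffer sizes are assumptions made here for illustration only.

    #include <ctype.h>

    /* Token codes as used in the switch above (numbering assumed). */
    enum tokentype {
        endcode = 0,   /* '\0' or '}' : end of the expression        */
        openexpr,      /* '('                                        */
        closeexpr,     /* ')'                                        */
        integer,       /* digit sequence without a decimal point     */
        floating,      /* digit sequence containing a decimal point  */
        operator,      /* + - * / ^                                  */
        string,        /* name of a function or variable             */
        semicol        /* ';'                                        */
    };

    /* Value of the last token; only strval is filled in this sketch. */
    union {
        long   intval;
        double fltval;
        struct { char text[64]; int len; } strval;
    } yyval;

    char Eingang[256];   /* input string, same name as in the parsers */
    int  indexEin = 0;   /* current read position in Eingang          */

    int yylex(void)
    {
        char akt = Eingang[indexEin++];

        while (akt == ' ' || akt == '\t')            /* skip white space */
            akt = Eingang[indexEin++];

        if (akt == '\0' || akt == '}') return endcode;
        if (akt == '(') return openexpr;
        if (akt == ')') return closeexpr;
        if (akt == ';') return semicol;

        if (akt == '+' || akt == '-' || akt == '*' || akt == '/' || akt == '^') {
            yyval.strval.text[0] = akt;
            yyval.strval.text[1] = '\0';
            yyval.strval.len = 1;
            return operator;
        }

        if (isdigit((unsigned char)akt) || akt == '.') {   /* number */
            int isfloat = 0, n = 0;
            do {
                if (akt == '.') isfloat = 1;
                yyval.strval.text[n++] = akt;
                akt = Eingang[indexEin++];
            } while (n < 63 && (isdigit((unsigned char)akt) || akt == '.'));
            yyval.strval.text[n] = '\0';
            yyval.strval.len = n;
            indexEin--;                    /* push back the lookahead */
            return isfloat ? floating : integer;
        }

        if (isalpha((unsigned char)akt)) {   /* function or variable name */
            int n = 0;
            do {
                yyval.strval.text[n++] = akt;
                akt = Eingang[indexEin++];
            } while (n < 63 && isalnum((unsigned char)akt));
            yyval.strval.text[n] = '\0';
            yyval.strval.len = n;
            indexEin--;                    /* push back the lookahead */
            return string;
        }

        return -1;                         /* unknown character       */
    }

With an input such as "sqrt(2)*x;" in Eingang, repeated calls to yylex() would return string, openexpr, integer, closeexpr, operator, string, semicol and finally endcode; the caller then dispatches on these codes exactly as in the switch statement shown above.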