Most of the resources on lexical analyzers and parsers (or so I understand) illustrate the use of streams to communicate between them. It is explained that the parser asks for the next token, say by calling a function getNextToken(), and the lexer responds by returning the next token. Are we supposed to think of them as two objects interacting within the same program, or as two different programs communicating through streams?
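For concreteness, here is a minimal sketch of the "two objects in one program" reading: the parser holds a reference to the lexer and pulls tokens on demand via a get_next_token() method. The class and method names are illustrative, not taken from any particular compiler or textbook.

```python
class Lexer:
    """Toy lexer: numbers and single-character operators. Illustrative only."""

    def __init__(self, source):
        self.source = source
        self.pos = 0

    def get_next_token(self):
        # Scan one lexeme each time the parser asks; no lookahead buffer.
        while self.pos < len(self.source) and self.source[self.pos].isspace():
            self.pos += 1
        if self.pos >= len(self.source):
            return ("EOF", "")
        ch = self.source[self.pos]
        if ch.isdigit():
            start = self.pos
            while self.pos < len(self.source) and self.source[self.pos].isdigit():
                self.pos += 1
            return ("NUMBER", self.source[start:self.pos])
        self.pos += 1
        return ("OP", ch)


class Parser:
    """Toy parser: drives the lexer by calling get_next_token() repeatedly."""

    def __init__(self, lexer):
        self.lexer = lexer

    def parse(self):
        # A real parser would build a tree; here we just collect the tokens
        # to show the pull-style interaction.
        tokens = []
        while True:
            tok = self.lexer.get_next_token()
            if tok[0] == "EOF":
                break
            tokens.append(tok)
        return tokens


print(Parser(Lexer("12 + 34")).parse())
# [('NUMBER', '12'), ('OP', '+'), ('NUMBER', '34')]
```

In this arrangement there is no inter-process stream at all: "stream" is just the abstraction of a token sequence consumed one element at a time, realized as an ordinary function call.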
Also, I haven't been able to understand why a serial approach isn't chosen, i.e. the lexical analyzer runs to the end of the provided source, and only then does the parser use the lexer's output for parsing. To be precise, if the lexical analyzer reads the next lexeme only when the parser asks for the next token, how is an error handled? In particular, if the error occurs toward the end of the file, all the computation done by the parser might be wasted (assuming a very basic parser without any error-handling capabilities). Is recent output cached?
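The serial alternative described above can also be sketched, under the same assumptions: the whole source is tokenized into a list up front, and the parser only ever touches that list. The token names and regexes below are hypothetical, chosen only to make the contrast with the on-demand version visible.

```python
import re

def tokenize_all(source):
    """Batch lexing: produce the complete token list before parsing begins."""
    token_spec = [
        ("NUMBER", r"\d+"),     # integer literals
        ("OP", r"[+\-*/]"),     # single-character arithmetic operators
        ("SKIP", r"\s+"),       # whitespace, discarded
    ]
    pattern = "|".join(f"(?P<{name}>{rx})" for name, rx in token_spec)
    tokens = []
    for m in re.finditer(pattern, source):
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
    return tokens

# The parser would then walk this pre-built list. Note that a syntax error
# near the end still discards the parse work done so far; only the lexing
# work is saved, at the cost of holding every token in memory at once.
print(tokenize_all("12 + 34 * 5"))
# [('NUMBER', '12'), ('OP', '+'), ('NUMBER', '34'), ('OP', '*'), ('NUMBER', '5')]
```

Comparing the two sketches shows what the question is really asking: whether buffering the full token list buys anything over producing the same sequence lazily, one get-next-token call at a time.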
Author: rajatkhanduja. Reproduced under the CC 4.0 BY-SA copyright license, with a link to the original source and this disclaimer.
Link to original article: https://stackoverflow.com/questions/9413893/lexical-analyser-and-parser-communication