Message ID | 20180808120334.10970-24-armbru@redhat.com
---|---
State | New
Series | json: Fixes, error reporting improvements, cleanups
On 08/08/2018 07:03 AM, Markus Armbruster wrote:
> Both the lexer and the parser (attempt to) validate UTF-8 in JSON
> strings.
>
> The commit before previous made the parser reject invalid UTF-8
> sequences.  Since then, anything the lexer rejects, the parser would
> reject as well.  Thus, the lexer's rejecting is unnecessary for
> correctness, and harmful for error reporting.

Nice analysis.

> However, we want to keep rejecting ASCII control characters in the
> lexer, because that produces the behavior we want for unclosed
> strings.
>
> We also need to keep rejecting \xFF in the lexer, because we
> documented that as a way to reset the JSON parser
> (docs/interop/qmp-spec.txt section 2.6 QGA Synchronization), which
> means we can't change how we recover from this error now.  I wish we
> hadn't done that.

Or, we can treat giving special meaning to 0xff (causing a lexer reset
without also emitting an error message) as a design decision.  (Doesn't
change this patch - that would be a change on top.)

> I think we should treat \xFE the same as \xFF.

Reasonable, as it would cover byte-order marks.

> Change the lexer to accept \xC0..\xC1 and \xF5..\xFD.  It now rejects
> only \x00..\x1F and \xFE..\xFF.  Error reporting for invalid UTF-8 in
> strings is much improved, except for \xFE and \xFF.  For the example
> above, the lexer now produces
>
>     JSON_LCURLY   {
>     JSON_STRING   "abc\xC0\xAFijk"
>     JSON_COLON    :
>     JSON_INTEGER  1
>     JSON_RCURLY
>
> and the parser reports just
>
>     JSON parse error, invalid UTF-8 sequence in string
>
> Signed-off-by: Markus Armbruster <armbru@redhat.com>
> ---
>  qobject/json-lexer.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)

Reviewed-by: Eric Blake <eblake@redhat.com>
diff --git a/qobject/json-lexer.c b/qobject/json-lexer.c
index 109a7d8bb8..ca1e0e2c03 100644
--- a/qobject/json-lexer.c
+++ b/qobject/json-lexer.c
@@ -177,8 +177,7 @@ static const uint8_t json_lexer[][256] = {
         ['u'] = IN_DQ_UCODE0,
     },
     [IN_DQ_STRING] = {
-        [0x20 ... 0xBF] = IN_DQ_STRING,
-        [0xC2 ... 0xF4] = IN_DQ_STRING,
+        [0x20 ... 0xFD] = IN_DQ_STRING,
         ['\\'] = IN_DQ_STRING_ESCAPE,
         ['"'] = JSON_STRING,
     },
@@ -217,8 +216,7 @@ static const uint8_t json_lexer[][256] = {
         ['u'] = IN_SQ_UCODE0,
     },
     [IN_SQ_STRING] = {
-        [0x20 ... 0xBF] = IN_SQ_STRING,
-        [0xC2 ... 0xF4] = IN_SQ_STRING,
+        [0x20 ... 0xFD] = IN_SQ_STRING,
         ['\\'] = IN_SQ_STRING_ESCAPE,
         ['\''] = JSON_STRING,
     },
Both the lexer and the parser (attempt to) validate UTF-8 in JSON
strings.

The lexer rejects bytes that can't occur in valid UTF-8: \xC0..\xC1,
\xF5..\xFF.  This rejects some, but not all invalid UTF-8.  It also
rejects ASCII control characters \x00..\x1F, in accordance with RFC
7159 (see recent commit "json: Reject unescaped control characters").

When the lexer rejects, it ends the token right after the first bad
byte.  Good when the bad byte is a newline.  Not so good when it's
something like an overlong sequence in the middle of a string.  For
instance, input

    {"abc\xC0\xAFijk": 1}\n

produces the tokens

    JSON_LCURLY   {
    JSON_ERROR    "abc\xC0
    JSON_ERROR    \xAF
    JSON_KEYWORD  ijk
    JSON_ERROR    ": 1}\n

The parser then reports four errors

    Invalid JSON syntax
    Invalid JSON syntax
    JSON parse error, invalid keyword 'ijk'
    Invalid JSON syntax

before it recovers at the newline.

The commit before previous made the parser reject invalid UTF-8
sequences.  Since then, anything the lexer rejects, the parser would
reject as well.  Thus, the lexer's rejecting is unnecessary for
correctness, and harmful for error reporting.

However, we want to keep rejecting ASCII control characters in the
lexer, because that produces the behavior we want for unclosed
strings.

We also need to keep rejecting \xFF in the lexer, because we
documented that as a way to reset the JSON parser
(docs/interop/qmp-spec.txt section 2.6 QGA Synchronization), which
means we can't change how we recover from this error now.  I wish we
hadn't done that.

I think we should treat \xFE the same as \xFF.

Change the lexer to accept \xC0..\xC1 and \xF5..\xFD.  It now rejects
only \x00..\x1F and \xFE..\xFF.  Error reporting for invalid UTF-8 in
strings is much improved, except for \xFE and \xFF.

For the example above, the lexer now produces

    JSON_LCURLY   {
    JSON_STRING   "abc\xC0\xAFijk"
    JSON_COLON    :
    JSON_INTEGER  1
    JSON_RCURLY

and the parser reports just

    JSON parse error, invalid UTF-8 sequence in string

Signed-off-by: Markus Armbruster <armbru@redhat.com>
---
 qobject/json-lexer.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)