Message ID: 1359128620-14226-2-git-send-email-armbru@redhat.com
State: New
comments in-line

On 01/25/13 16:43, Markus Armbruster wrote:
> We use atomic operations to keep track of dropped events.
>
> Inconveniently, GLib supports only int and void * atomics, but the
> counter dropped_events is uint64_t.

That's too bad. Instead of (or in addition to) int, glib should target
the (biggest) natural word size, ie. the integer whose size equals that
of "void *". size_t or uintptr_t looks like a good choice.

> Can't stop commit 62bab732: a
> quick (gint *)&dropped_events bludgeons the compiler into submission.
>
> That cast is okay only when int is exactly 64 bits wide, which it
> commonly isn't.
>
> If int is even wider, we clobber whatever follows dropped_events. Not
> worth worrying about, as none of the machines that interest us have
> such morbidly obese ints.
>
> That leaves the common case: int narrower than 64 bits.
>
> Harmless on little endian hosts: we just don't access the most
> significant bits of dropped_events. They remain zero.
>
> On big endian hosts, we use only the most significant bits of
> dropped_events as counter. The least significant bits remain zero.
> However, we write out the full value, which is the correct counter
> shifted left a bunch of places.
>
> Fix by changing the variables involved to int.
>
> There's another, equally suspicious-looking (gint *)&trace_idx
> argument to g_atomic_int_compare_and_exchange(), but that one casts
> unsigned *, so it's okay. But it's also superfluous, because GLib's
> atomic int operations work just fine for unsigned. Drop it.

Agree with "OK", disagree with "superfluous".

In the intersection of signed int's and unsigned int's domains, the
object representation is indeed required to be the same. But that
doesn't make "signed int" a type compatible with "unsigned int".

Since we have a prototype for g_atomic_int_compare_and_exchange(), "the
arguments are implicitly converted, as if by assignment"; however, for
the pointer-to-pointer assignment here, the argument-ptr and the
parameter-ptr would have to point to compatible types, which "signed
int" and "unsigned int" are not.

gcc's "-pedantic" emits

    warning: pointer targets in passing argument N of 'F' differ in signedness

Anyway I don't insist on reinstating the cast, I'm just saying that
ISO C would require it.

> Signed-off-by: Markus Armbruster <armbru@redhat.com>
> ---
>  trace/simple.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/trace/simple.c b/trace/simple.c
> index ce17d64..ccbdb6a 100644
> --- a/trace/simple.c
> +++ b/trace/simple.c
> @@ -53,7 +53,7 @@ enum {
>  uint8_t trace_buf[TRACE_BUF_LEN];
>  static unsigned int trace_idx;
>  static unsigned int writeout_idx;
> -static uint64_t dropped_events;
> +static int dropped_events;
>  static FILE *trace_fp;
>  static char *trace_file_name;
>
> @@ -63,7 +63,7 @@ typedef struct {
>      uint64_t timestamp_ns;
>      uint32_t length; /* in bytes */
>      uint32_t reserved; /* unused */
> -    uint8_t arguments[];
> +    uint64_t arguments[];
>  } TraceRecord;
>
>  typedef struct {
> @@ -160,7 +160,7 @@ static gpointer writeout_thread(gpointer opaque)
>          uint8_t bytes[sizeof(TraceRecord) + sizeof(uint64_t)];
>      } dropped;
>      unsigned int idx = 0;
> -    uint64_t dropped_count;
> +    int dropped_count;
>      size_t unused __attribute__ ((unused));
>
>      for (;;) {
> @@ -169,16 +169,16 @@ static gpointer writeout_thread(gpointer opaque)
>          if (dropped_events) {
>              dropped.rec.event = DROPPED_EVENT_ID,
>              dropped.rec.timestamp_ns = get_clock();
> -            dropped.rec.length = sizeof(TraceRecord) + sizeof(dropped_events),
> +            dropped.rec.length = sizeof(TraceRecord) + sizeof(uint64_t),
>              dropped.rec.reserved = 0;
>              while (1) {
>                  dropped_count = dropped_events;
> -                if (g_atomic_int_compare_and_exchange((gint *)&dropped_events,
> +                if (g_atomic_int_compare_and_exchange(&dropped_events,
>                                                        dropped_count, 0)) {
>                      break;
>                  }
>              }
> -            memcpy(dropped.rec.arguments, &dropped_count, sizeof(uint64_t));
> +            dropped.rec.arguments[0] = dropped_count;

I *think* I'd prefer if the element type of TraceRecord.arguments[] were
not changed; an array of u8's seems to be more flexible. The element
type change appears both unrelated and not really necessary for this
patch. (You'd have to keep the memcpy(), but it doesn't hurt.)

BTW I wonder if trace files are endianness-dependent by design --
normally you'd store them in network byte order (= big endian) only and
use htonX() when writing, and ntohX() when reading.

>              unused = fwrite(&dropped.rec, dropped.rec.length, 1, trace_fp);
>          }
>
> @@ -220,11 +220,11 @@ int trace_record_start(TraceBufferRecord *rec, TraceEventID event, size_t datasi
>
>          if (new_idx - writeout_idx > TRACE_BUF_LEN) {
>              /* Trace Buffer Full, Event dropped ! */
> -            g_atomic_int_inc((gint *)&dropped_events);
> +            g_atomic_int_inc(&dropped_events);
>              return -ENOSPC;
>          }
>
> -        if (g_atomic_int_compare_and_exchange((gint *)&trace_idx,
> +        if (g_atomic_int_compare_and_exchange(&trace_idx,
>                                                old_idx, new_idx)) {
>              break;
>          }

You decide if a respin is warranted...

Reviewed-by: Laszlo Ersek <lersek@redhat.com>
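The big-endian hazard the commit message describes can be reproduced outside QEMU. The sketch below (not QEMU code; `simulate_narrow_store` is a made-up name) mimics what storing through `(gint *)&dropped_events` does on a host where int is 32 bits wide: only the first four bytes of the 64-bit object are written, and which bytes those are depends on host byte order.

```c
#include <stdint.h>
#include <string.h>

/* Mimic a 32-bit store aimed at the start of a 64-bit object, as the
 * (gint *)&dropped_events cast would do on a 32-bit-int host. */
static uint64_t simulate_narrow_store(int32_t counter)
{
    uint64_t dropped_events = 0;
    memcpy(&dropped_events, &counter, sizeof counter);  /* first 4 bytes only */
    return dropped_events;
}

/* Does this host store the least significant byte first? */
static int host_is_little_endian(void)
{
    uint16_t probe = 1;
    uint8_t first;
    memcpy(&first, &probe, 1);
    return first == 1;
}
```

On a little endian host `simulate_narrow_store(5)` reads back as 5; on a big endian host it reads back as 5 << 32 — exactly the "correct counter shifted left a bunch of places" that got written out to the trace file.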
On 01/25/2013 09:13 PM, Markus Armbruster wrote:
> We use atomic operations to keep track of dropped events.
>
> Inconveniently, GLib supports only int and void * atomics, but the
> counter dropped_events is uint64_t. Can't stop commit 62bab732: a
> quick (gint *)&dropped_events bludgeons the compiler into submission.
>
> That cast is okay only when int is exactly 64 bits wide, which it
> commonly isn't.
>
> If int is even wider, we clobber whatever follows dropped_events. Not
> worth worrying about, as none of the machines that interest us have
> such morbidly obese ints.
>
> That leaves the common case: int narrower than 64 bits.
>
> Harmless on little endian hosts: we just don't access the most
> significant bits of dropped_events. They remain zero.
>
> On big endian hosts, we use only the most significant bits of
> dropped_events as counter. The least significant bits remain zero.
> However, we write out the full value, which is the correct counter
> shifted left a bunch of places.
>
> Fix by changing the variables involved to int.
>
> There's another, equally suspicious-looking (gint *)&trace_idx
> argument to g_atomic_int_compare_and_exchange(), but that one casts
> unsigned *, so it's okay. But it's also superfluous, because GLib's
> atomic int operations work just fine for unsigned. Drop it.
>
> Signed-off-by: Markus Armbruster <armbru@redhat.com>
> ---
>  trace/simple.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/trace/simple.c b/trace/simple.c
> index ce17d64..ccbdb6a 100644
> --- a/trace/simple.c
> +++ b/trace/simple.c
> @@ -53,7 +53,7 @@ enum {
>  uint8_t trace_buf[TRACE_BUF_LEN];
>  static unsigned int trace_idx;
>  static unsigned int writeout_idx;
> -static uint64_t dropped_events;
> +static int dropped_events;
>  static FILE *trace_fp;
>  static char *trace_file_name;
>
> @@ -63,7 +63,7 @@ typedef struct {
>      uint64_t timestamp_ns;
>      uint32_t length; /* in bytes */
>      uint32_t reserved; /* unused */
> -    uint8_t arguments[];
> +    uint64_t arguments[];

I am just paranoid about this change; however, if the patch has gone
through stress testing, I have no objections.

>  } TraceRecord;
>
>  typedef struct {
> @@ -160,7 +160,7 @@ static gpointer writeout_thread(gpointer opaque)
>          uint8_t bytes[sizeof(TraceRecord) + sizeof(uint64_t)];
>      } dropped;
>      unsigned int idx = 0;
> -    uint64_t dropped_count;
> +    int dropped_count;

Just because of GLib's limitations we are limiting our counting
ability; that's bad.

>      size_t unused __attribute__ ((unused));
>
>      for (;;) {
> @@ -169,16 +169,16 @@ static gpointer writeout_thread(gpointer opaque)
>          if (dropped_events) {
>              dropped.rec.event = DROPPED_EVENT_ID,
>              dropped.rec.timestamp_ns = get_clock();
> -            dropped.rec.length = sizeof(TraceRecord) + sizeof(dropped_events),
> +            dropped.rec.length = sizeof(TraceRecord) + sizeof(uint64_t),
>              dropped.rec.reserved = 0;
>              while (1) {
>                  dropped_count = dropped_events;
> -                if (g_atomic_int_compare_and_exchange((gint *)&dropped_events,
> +                if (g_atomic_int_compare_and_exchange(&dropped_events,
>                                                        dropped_count, 0)) {
>                      break;
>                  }
>              }
> -            memcpy(dropped.rec.arguments, &dropped_count, sizeof(uint64_t));
> +            dropped.rec.arguments[0] = dropped_count;

I like the above improvement for the price paid. Let's see what Stefan
has to say about this patch.

thanks,
Harsh

>              unused = fwrite(&dropped.rec, dropped.rec.length, 1, trace_fp);
>          }
>
> @@ -220,11 +220,11 @@ int trace_record_start(TraceBufferRecord *rec, TraceEventID event, size_t datasi
>
>          if (new_idx - writeout_idx > TRACE_BUF_LEN) {
>              /* Trace Buffer Full, Event dropped ! */
> -            g_atomic_int_inc((gint *)&dropped_events);
> +            g_atomic_int_inc(&dropped_events);
>              return -ENOSPC;
>          }
>
> -        if (g_atomic_int_compare_and_exchange((gint *)&trace_idx,
> +        if (g_atomic_int_compare_and_exchange(&trace_idx,
>                                                old_idx, new_idx)) {
>              break;
>          }
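For what it's worth, the counting limitation is a constraint of GLib's 2013-era atomics, not of the hardware. A hypothetical alternative, had C11 atomics been available to QEMU at the time, would keep the full 64-bit counter without any casts (illustrative sketch, not the patch's approach):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Full-width dropped-events counter using C11 atomics (hypothetical;
 * the patch under review instead narrows the counter to int). */
static _Atomic uint64_t dropped_events;

/* Count one dropped event, as trace_record_start() would on ENOSPC. */
static void drop_one(void)
{
    atomic_fetch_add(&dropped_events, 1);
}

/* Atomically read and reset the counter, replacing the CAS loop in
 * writeout_thread(). */
static uint64_t drain(void)
{
    return atomic_exchange(&dropped_events, 0);
}
```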
Laszlo Ersek <lersek@redhat.com> writes:

> comments in-line
>
> On 01/25/13 16:43, Markus Armbruster wrote:
>> We use atomic operations to keep track of dropped events.
>>
>> Inconveniently, GLib supports only int and void * atomics, but the
>> counter dropped_events is uint64_t.
>
> That's too bad. Instead of (or in addition to) int, glib should target
> the (biggest) natural word size, ie. the integer whose size equals that
> of "void *". size_t or uintptr_t looks like a good choice.
>
>> Can't stop commit 62bab732: a
>> quick (gint *)&dropped_events bludgeons the compiler into submission.
>>
>> That cast is okay only when int is exactly 64 bits wide, which it
>> commonly isn't.
>>
>> If int is even wider, we clobber whatever follows dropped_events. Not
>> worth worrying about, as none of the machines that interest us have
>> such morbidly obese ints.
>>
>> That leaves the common case: int narrower than 64 bits.
>>
>> Harmless on little endian hosts: we just don't access the most
>> significant bits of dropped_events. They remain zero.
>>
>> On big endian hosts, we use only the most significant bits of
>> dropped_events as counter. The least significant bits remain zero.
>> However, we write out the full value, which is the correct counter
>> shifted left a bunch of places.
>>
>> Fix by changing the variables involved to int.
>>
>> There's another, equally suspicious-looking (gint *)&trace_idx
>> argument to g_atomic_int_compare_and_exchange(), but that one casts
>> unsigned *, so it's okay. But it's also superfluous, because GLib's
>> atomic int operations work just fine for unsigned. Drop it.
>
> Agree with "OK", disagree with "superfluous".

http://developer.gnome.org/glib/stable/glib-Atomic-Operations.html

    The following is a collection of compiler macros to provide atomic
    access to integer and pointer-sized values. The macros that have
    'int' in the name will operate on pointers to gint and guint.

> In the intersection of signed int's and unsigned int's domains, the
> object representation is indeed required to be the same. But that
> doesn't make "signed int" a type compatible with "unsigned int".
>
> Since we have a prototype for g_atomic_int_compare_and_exchange(), "the
> arguments are implicitly converted, as if by assignment"; however, for
> the pointer-to-pointer assignment here, the argument-ptr and the
> parameter-ptr would have to point to compatible types, which "signed
> int" and "unsigned int" are not.
>
> gcc's "-pedantic" emits
>
>     warning: pointer targets in passing argument N of 'F' differ in signedness

So GLib reneges on the promise it made in its documentation. Meh.

> Anyway I don't insist on reinstating the cast, I'm just saying that
> ISO C would require it.

I'm leaving the decision to the maintainer. Stefan?

>> Signed-off-by: Markus Armbruster <armbru@redhat.com>
>> ---
>>  trace/simple.c | 16 ++++++++--------
>>  1 file changed, 8 insertions(+), 8 deletions(-)
>>
>> diff --git a/trace/simple.c b/trace/simple.c
>> index ce17d64..ccbdb6a 100644
>> --- a/trace/simple.c
>> +++ b/trace/simple.c
>> @@ -53,7 +53,7 @@ enum {
>>  uint8_t trace_buf[TRACE_BUF_LEN];
>>  static unsigned int trace_idx;
>>  static unsigned int writeout_idx;
>> -static uint64_t dropped_events;
>> +static int dropped_events;
>>  static FILE *trace_fp;
>>  static char *trace_file_name;
>>
>> @@ -63,7 +63,7 @@ typedef struct {
>>      uint64_t timestamp_ns;
>>      uint32_t length; /* in bytes */
>>      uint32_t reserved; /* unused */
>> -    uint8_t arguments[];
>> +    uint64_t arguments[];
>>  } TraceRecord;
>>
>>  typedef struct {
>> @@ -160,7 +160,7 @@ static gpointer writeout_thread(gpointer opaque)
>>          uint8_t bytes[sizeof(TraceRecord) + sizeof(uint64_t)];
>>      } dropped;
>>      unsigned int idx = 0;
>> -    uint64_t dropped_count;
>> +    int dropped_count;
>>      size_t unused __attribute__ ((unused));
>>
>>      for (;;) {
>> @@ -169,16 +169,16 @@ static gpointer writeout_thread(gpointer opaque)
>>          if (dropped_events) {
>>              dropped.rec.event = DROPPED_EVENT_ID,
>>              dropped.rec.timestamp_ns = get_clock();
>> -            dropped.rec.length = sizeof(TraceRecord) + sizeof(dropped_events),
>> +            dropped.rec.length = sizeof(TraceRecord) + sizeof(uint64_t),
>>              dropped.rec.reserved = 0;
>>              while (1) {
>>                  dropped_count = dropped_events;
>> -                if (g_atomic_int_compare_and_exchange((gint *)&dropped_events,
>> +                if (g_atomic_int_compare_and_exchange(&dropped_events,
>>                                                        dropped_count, 0)) {
>>                      break;
>>                  }
>>              }
>> -            memcpy(dropped.rec.arguments, &dropped_count, sizeof(uint64_t));
>> +            dropped.rec.arguments[0] = dropped_count;
>
> I *think* I'd prefer if the element type of TraceRecord.arguments[] were
> not changed; an array of u8's seems to be more flexible. The element
> type change appears both unrelated and not really necessary for this
> patch. (You'd have to keep the memcpy(), but it doesn't hurt.)

Casting uint64_t[] to uint8_t * is safe. Casting uint8_t[] to FOO *
isn't when alignof(FOO) > 1. That's why I really don't like uint8_t[]
there.

It happens to be amply aligned, and it happens to be used only with
memcpy(). But neither is immediately obvious, or obviously bound to
stay that way.

> BTW I wonder if trace files are endianness-dependent by design --
> normally you'd store them in network byte order (= big endian) only and
> use htonX() when writing, and ntohX() when reading.

Good point, but outside the scope of this series.

>>              unused = fwrite(&dropped.rec, dropped.rec.length, 1, trace_fp);
>>          }
>>
>> @@ -220,11 +220,11 @@ int trace_record_start(TraceBufferRecord *rec, TraceEventID event, size_t datasi
>>
>>          if (new_idx - writeout_idx > TRACE_BUF_LEN) {
>>              /* Trace Buffer Full, Event dropped ! */
>> -            g_atomic_int_inc((gint *)&dropped_events);
>> +            g_atomic_int_inc(&dropped_events);
>>              return -ENOSPC;
>>          }
>>
>> -        if (g_atomic_int_compare_and_exchange((gint *)&trace_idx,
>> +        if (g_atomic_int_compare_and_exchange(&trace_idx,
>>                                                old_idx, new_idx)) {
>>              break;
>>          }
>
> You decide if a respin is warranted...
>
> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
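The alignment argument above can be made concrete with a small sketch (a simplified stand-in struct, not the actual TraceRecord): memcpy() places no alignment requirement on either pointer, so it works no matter how the u8 flexible array member happens to be aligned, whereas a direct `*(uint64_t *)rec->arguments = val` would be undefined behaviour if arguments[] were ever misaligned for uint64_t.

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for TraceRecord, keeping the u8 flexible array
 * member (illustrative; not the QEMU struct). */
typedef struct {
    uint64_t event;
    uint64_t timestamp_ns;
    uint8_t arguments[];      /* alignment requirement: 1 */
} Rec;

/* Store one 64-bit argument into arguments[] and read it back, both
 * via memcpy(), which is safe regardless of alignment. */
static uint64_t roundtrip_argument(uint64_t val)
{
    union {
        Rec rec;
        uint8_t bytes[sizeof(Rec) + sizeof(uint64_t)];
    } dropped;
    uint64_t out;

    memcpy(dropped.rec.arguments, &val, sizeof val);
    memcpy(&out, dropped.rec.arguments, sizeof out);
    return out;
}
```

Here arguments[] happens to follow two uint64_t members, so it is in fact 8-byte aligned — which is exactly Markus's point: the code is correct, but only by a property nothing in the type system enforces.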
On 01/31/13 13:10, Markus Armbruster wrote:
> Laszlo Ersek <lersek@redhat.com> writes:
>> On 01/25/13 16:43, Markus Armbruster wrote:
>>> @@ -63,7 +63,7 @@ typedef struct {
>>>      uint64_t timestamp_ns;
>>>      uint32_t length; /* in bytes */
>>>      uint32_t reserved; /* unused */
>>> -    uint8_t arguments[];
>>> +    uint64_t arguments[];
>>>  } TraceRecord;
>>>
>>>  typedef struct {
>>> @@ -160,7 +160,7 @@ static gpointer writeout_thread(gpointer opaque)
>>>          uint8_t bytes[sizeof(TraceRecord) + sizeof(uint64_t)];
>>>      } dropped;
>>>      unsigned int idx = 0;
>>> -    uint64_t dropped_count;
>>> +    int dropped_count;
>>>      size_t unused __attribute__ ((unused));
>>>
>>>      for (;;) {
>>> @@ -169,16 +169,16 @@ static gpointer writeout_thread(gpointer opaque)
>>>          if (dropped_events) {
>>>              dropped.rec.event = DROPPED_EVENT_ID,
>>>              dropped.rec.timestamp_ns = get_clock();
>>> -            dropped.rec.length = sizeof(TraceRecord) + sizeof(dropped_events),
>>> +            dropped.rec.length = sizeof(TraceRecord) + sizeof(uint64_t),
>>>              dropped.rec.reserved = 0;
>>>              while (1) {
>>>                  dropped_count = dropped_events;
>>> -                if (g_atomic_int_compare_and_exchange((gint *)&dropped_events,
>>> +                if (g_atomic_int_compare_and_exchange(&dropped_events,
>>>                                                        dropped_count, 0)) {
>>>                      break;
>>>                  }
>>>              }
>>> -            memcpy(dropped.rec.arguments, &dropped_count, sizeof(uint64_t));
>>> +            dropped.rec.arguments[0] = dropped_count;
>>
>> I *think* I'd prefer if the element type of TraceRecord.arguments[] were
>> not changed; an array of u8's seems to be more flexible. The element
>> type change appears both unrelated and not really necessary for this
>> patch. (You'd have to keep the memcpy(), but it doesn't hurt.)
>
> Casting uint64_t[] to uint8_t * is safe. Casting uint8_t[] to FOO *
> isn't when alignof(FOO) > 1. That's why I really don't like uint8_t[]
> there.

The thought of casting &arguments[N] to another pointer type didn't
cross my mind (independently of the element type). Who would do such a
thing? memcpy() is pretty much a requirement in this case, in my world.

> It happens to be amply aligned, and it happens to be used only with
> memcpy(). But neither is immediately obvious, or obviously bound to
> stay that way.

Changing the element type to uint64_t doesn't guarantee in general that
(type *)&arguments[N] will be a correctly aligned pointer. (The
guaranteed cases are when "type" is a character type, void, or
uint64_t.)

Anyway my reviewed-by stands.

Laszlo
On Thu, Jan 31, 2013 at 12:18:52PM +0100, Laszlo Ersek wrote:
> BTW I wonder if trace files are endianness-dependent by design --
> normally you'd store them in network byte order (= big endian) only and
> use htonX() when writing, and ntohX() when reading.

Yes, they are host endian, but the magic number tells you whether they
are in big or little endian. So if a parser cared it could handle both
endiannesses.

Stefan
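The magic-number trick Stefan describes can be sketched like this. The magic value below is a placeholder, not the real simpletrace constant (that lives in the QEMU source); the point is only the detection logic: if the magic read from the file matches byte-swapped, the file was written on an opposite-endian host and every multi-byte field must be swapped on read.

```c
#include <stdint.h>

/* Placeholder magic value -- NOT the real simpletrace constant. */
#define TRACE_MAGIC UINT64_C(0x0123456789abcdef)

/* Portable 64-bit byte swap: adjacent bytes, then 16-bit pairs,
 * then the 32-bit halves. */
static uint64_t bswap64(uint64_t v)
{
    v = ((v & UINT64_C(0x00ff00ff00ff00ff)) << 8) |
        ((v >> 8) & UINT64_C(0x00ff00ff00ff00ff));
    v = ((v & UINT64_C(0x0000ffff0000ffff)) << 16) |
        ((v >> 16) & UINT64_C(0x0000ffff0000ffff));
    return (v << 32) | (v >> 32);
}

/* Returns 0 if the file is native-endian, 1 if its fields need
 * byte-swapping, -1 if it is not a trace file at all. */
static int needs_swap(uint64_t magic_from_file)
{
    if (magic_from_file == TRACE_MAGIC) {
        return 0;   /* written on a same-endian host */
    }
    if (bswap64(magic_from_file) == TRACE_MAGIC) {
        return 1;   /* written on an opposite-endian host */
    }
    return -1;
}
```

A parser would call needs_swap() on the first 8 bytes of the file and, if it returns 1, run every uint32_t/uint64_t record field through the matching swap before use.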
diff --git a/trace/simple.c b/trace/simple.c
index ce17d64..ccbdb6a 100644
--- a/trace/simple.c
+++ b/trace/simple.c
@@ -53,7 +53,7 @@ enum {
 uint8_t trace_buf[TRACE_BUF_LEN];
 static unsigned int trace_idx;
 static unsigned int writeout_idx;
-static uint64_t dropped_events;
+static int dropped_events;
 static FILE *trace_fp;
 static char *trace_file_name;
 
@@ -63,7 +63,7 @@ typedef struct {
     uint64_t timestamp_ns;
     uint32_t length; /* in bytes */
     uint32_t reserved; /* unused */
-    uint8_t arguments[];
+    uint64_t arguments[];
 } TraceRecord;
 
 typedef struct {
@@ -160,7 +160,7 @@ static gpointer writeout_thread(gpointer opaque)
         uint8_t bytes[sizeof(TraceRecord) + sizeof(uint64_t)];
     } dropped;
     unsigned int idx = 0;
-    uint64_t dropped_count;
+    int dropped_count;
     size_t unused __attribute__ ((unused));
 
     for (;;) {
@@ -169,16 +169,16 @@ static gpointer writeout_thread(gpointer opaque)
         if (dropped_events) {
             dropped.rec.event = DROPPED_EVENT_ID,
             dropped.rec.timestamp_ns = get_clock();
-            dropped.rec.length = sizeof(TraceRecord) + sizeof(dropped_events),
+            dropped.rec.length = sizeof(TraceRecord) + sizeof(uint64_t),
             dropped.rec.reserved = 0;
             while (1) {
                 dropped_count = dropped_events;
-                if (g_atomic_int_compare_and_exchange((gint *)&dropped_events,
+                if (g_atomic_int_compare_and_exchange(&dropped_events,
                                                       dropped_count, 0)) {
                     break;
                 }
             }
-            memcpy(dropped.rec.arguments, &dropped_count, sizeof(uint64_t));
+            dropped.rec.arguments[0] = dropped_count;
             unused = fwrite(&dropped.rec, dropped.rec.length, 1, trace_fp);
         }
 
@@ -220,11 +220,11 @@ int trace_record_start(TraceBufferRecord *rec, TraceEventID event, size_t datasi
 
         if (new_idx - writeout_idx > TRACE_BUF_LEN) {
             /* Trace Buffer Full, Event dropped ! */
-            g_atomic_int_inc((gint *)&dropped_events);
+            g_atomic_int_inc(&dropped_events);
             return -ENOSPC;
         }
 
-        if (g_atomic_int_compare_and_exchange((gint *)&trace_idx,
+        if (g_atomic_int_compare_and_exchange(&trace_idx,
                                               old_idx, new_idx)) {
             break;
         }
We use atomic operations to keep track of dropped events.

Inconveniently, GLib supports only int and void * atomics, but the
counter dropped_events is uint64_t. Can't stop commit 62bab732: a
quick (gint *)&dropped_events bludgeons the compiler into submission.

That cast is okay only when int is exactly 64 bits wide, which it
commonly isn't.

If int is even wider, we clobber whatever follows dropped_events. Not
worth worrying about, as none of the machines that interest us have
such morbidly obese ints.

That leaves the common case: int narrower than 64 bits.

Harmless on little endian hosts: we just don't access the most
significant bits of dropped_events. They remain zero.

On big endian hosts, we use only the most significant bits of
dropped_events as counter. The least significant bits remain zero.
However, we write out the full value, which is the correct counter
shifted left a bunch of places.

Fix by changing the variables involved to int.

There's another, equally suspicious-looking (gint *)&trace_idx
argument to g_atomic_int_compare_and_exchange(), but that one casts
unsigned *, so it's okay. But it's also superfluous, because GLib's
atomic int operations work just fine for unsigned. Drop it.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
---
 trace/simple.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)