
[RFC] Track time spent in each command

Message ID CAH2r5ms+oJ8PpAbsmTmRDqoS4C=dcNV=xYv3tMWnoigA5JqFsA@mail.gmail.com
State New
Series [RFC] Track time spent in each command

Commit Message

Steve French March 26, 2019, 5:39 a.m. UTC
I created a small experimental patch (leveraging the existing code for
the slow command check we already have in mid processing) to track the
total time spent per SMB2/SMB3 command. It seems like it could be
useful - here is sample output (see below) from running seven xfstests
to generate the stats (these stats require CONFIG_CIFS_STATS2 to be
compiled in, as do the slow command counters). I wanted to keep the
patch small and low risk, so this seemed like the best place to
capture it. This could be helpful in calculating the average time per
command (and perhaps help isolate performance problems and what is
really causing slowdowns). As with the other counters, "echo 0 >
/proc/fs/cifs/Stats" resets them. The new stats counters start at the
line "Total time spent processing each command".

Thoughts?

$ cat /proc/fs/cifs/Stats
Resources in use
CIFS Session: 1
Share (unique mount targets): 3
SMB Request/Response Buffer: 1 Pool size: 5
SMB Small Req/Resp Buffer: 2 Pool size: 30
Total Large 60864 Small 1165977 Allocations
Operations (MIDs): 1

0 session 0 share reconnects
Total vfs operations: 280236 maximum at one time: 3
Total time spent processing each command
cmd: 0 jiffies: 0 seconds 0
cmd: 1 jiffies: 2 seconds 0
cmd: 2 jiffies: 0 seconds 0
cmd: 3 jiffies: 1 seconds 0
cmd: 4 jiffies: 1 seconds 0
cmd: 5 jiffies: 6212 seconds 24
cmd: 6 jiffies: 3993 seconds 15
cmd: 7 jiffies: 135 seconds 0
cmd: 8 jiffies: 83 seconds 0
cmd: 9 jiffies: 44 seconds 0
cmd: 10 jiffies: 0 seconds 0
cmd: 11 jiffies: 1 seconds 0
cmd: 12 jiffies: 0 seconds 0
cmd: 13 jiffies: 0 seconds 0
cmd: 14 jiffies: 47 seconds 0
cmd: 15 jiffies: 0 seconds 0
cmd: 16 jiffies: 1890 seconds 7
cmd: 17 jiffies: 1368 seconds 5
cmd: 18 jiffies: 0 seconds 0

1) \\localhost\btrfs-scratch
SMBs: 42255
Bytes read: 25379425  Bytes written: 53689661
Open files: 1 total (local), 1 open on server
TreeConnects: 1 total 0 failed
TreeDisconnects: 0 total 0 failed
Creates: 14012 total 0 failed
Closes: 14009 total 0 failed
Flushes: 27 total 0 failed
Reads: 41 total 0 failed
Writes: 112 total 0 failed
Locks: 0 total 0 failed
IOCTLs: 2 total 1 failed
QueryDirectories: 28 total 0 failed
ChangeNotifies: 0 total 0 failed
QueryInfos: 11312 total 0 failed
SetInfos: 2711 total 0 failed
OplockBreaks: 0 sent 0 failed
2) \\localhost\btrfs
SMBs: 77
Bytes read: 282  Bytes written: 94
Open files: 0 total (local), 0 open on server
TreeConnects: 1 total 0 failed
TreeDisconnects: 0 total 0 failed
Creates: 23 total 0 failed
Closes: 21 total 0 failed
Flushes: 0 total 0 failed
Reads: 3 total 0 failed
Writes: 1 total 0 failed
Locks: 0 total 0 failed

Comments

ronnie sahlberg March 26, 2019, 6:38 a.m. UTC | #1
Very nice.
But maybe add min and max for each command as well.
Having all three of min/max/avg is very useful.

At least avg and max will tell you if there are any abnormal outliers.

On Tue, Mar 26, 2019 at 3:39 PM Steve French <smfrench@gmail.com> wrote:
>
> I created a small experimental patch (leveraging the existing code for
> the slow command check we already have in mid processing) to track the
> total time per SMB2/SMB3 command.
> [...]
Steve French March 26, 2019, 6:54 p.m. UTC | #2
How about this updated patch - note that I now display min/max
(although min in this case will always be zero, since jiffies are of
pretty coarse granularity - maybe it would be more helpful to run this
example over a slow link, or I could just make the patch simpler and
not save the minimum time for each command). Note that the new stats
begin with the line "Total time spent processing by command ..."




$ cat /proc/fs/cifs/Stats
Resources in use
CIFS Session: 1
Share (unique mount targets): 3
SMB Request/Response Buffer: 1 Pool size: 5
SMB Small Req/Resp Buffer: 1 Pool size: 30
Total Large 38854 Small 1039857 Allocations
Operations (MIDs): 0

0 session 0 share reconnects
Total vfs operations: 272653 maximum at one time: 3

Total time spent processing by command. Time units are jiffies (250 per second)
  SMB3 CMD    Number    Total Time    Fastest    Slowest
  --------    ------    ----------    -------    -------
  0        1    1        1    1
  1        2    1        0    1
  2        0    0        0    0
  3        6    1        0    1
  4        3    0        0    0
  5        95353    5163        0    12
  6        95314    3012        0    1
  7        2    2        1    1
  8        4263    57        0    1
  9        1101    33        0    1
  10        0    0        0    0
  11        16    0        0    0
  12        0    0        0    0
  13        0    0        0    0
  14        121    37        0    1
  15        0    0        0    0
  16        56357    1100        0    1
  17        16703    1089        0    1
  18        0    0        0    0

1) \\localhost\btrfs-scratch
SMBs: 72
Bytes read: 0  Bytes written: 210729
Open files: 1 total (local), 2 open on server
TreeConnects: 1 total 0 failed
TreeDisconnects: 0 total 0 failed
Creates: 21 total 2 failed
Closes: 15 total 0 failed
Flushes: 0 total 0 failed
Reads: 0 total 0 failed
Writes: 11 total 0 failed
Locks: 0 total 0 failed
IOCTLs: 2 total 1 failed
QueryDirectories: 0 total 0 failed
ChangeNotifies: 0 total 0 failed
QueryInfos: 22 total 1 failed
SetInfos: 0 total 0 failed
OplockBreaks: 0 sent 0 failed
2) \\localhost\btrfs
SMBs: 269059
Bytes read: 115059522  Bytes written: 87813235
Open files: 0 total (local), 0 open on server
TreeConnects: 1 total 0 failed
TreeDisconnects: 0 total 0 failed
Creates: 95304 total 14 failed
Closes: 95277 total 0 failed
Flushes: 2 total 0 failed
Reads: 4263 total 0 failed
Writes: 1090 total 0 failed
Locks: 0 total 0 failed
IOCTLs: 2 total 1 failed
QueryDirectories: 117 total 0 failed
ChangeNotifies: 0 total 0 failed
QueryInfos: 56300 total 0 failed
SetInfos: 16703 total 0 failed
OplockBreaks: 0 sent 0 failed


On Tue, Mar 26, 2019 at 1:38 AM ronnie sahlberg
<ronniesahlberg@gmail.com> wrote:
>
> Very nice.
> But maybe min and max to each command as well.
> I have all three of min/max/avg is very useful.
>
> At least avg and max will tell you if there are any abnormal outliers.
> [...]
Pavel Shilovsky March 26, 2019, 7:06 p.m. UTC | #3
Tue, Mar 26, 2019 at 11:55, Steve French <smfrench@gmail.com>:
>
> How about this updated patch - note how I now display min/max
> (although min in this case will always be zero since jiffies are
> pretty coarse granularity - maybe would be more helpful to run this
> example over a slow link, or I could just make the patch simpler and
> not save the minimum time for each command).  Note that the new stats
> begin with the line "Total time spent processing by command ..."
>
>
>
>
> $ cat /proc/fs/cifs/Stats
> Resources in use
> CIFS Session: 1
> Share (unique mount targets): 3
> SMB Request/Response Buffer: 1 Pool size: 5
> SMB Small Req/Resp Buffer: 1 Pool size: 30
> Total Large 38854 Small 1039857 Allocations
> Operations (MIDs): 0
>
> 0 session 0 share reconnects
> Total vfs operations: 272653 maximum at one time: 3
>
> Total time spent processing by command. Time units are jiffies (250 per second)
>   SMB3 CMD    Number    Total Time    Fastest    Slowest
>   --------    ------    ----------    -------    -------
>   0        1    1        1    1
>   1        2    1        0    1
>   2        0    0        0    0
>   3        6    1        0    1
>   4        3    0        0    0
>   5        95353    5163        0    12
>   6        95314    3012        0    1
>   7        2    2        1    1
>   8        4263    57        0    1
>   9        1101    33        0    1
>   10        0    0        0    0
>   11        16    0        0    0
>   12        0    0        0    0
>   13        0    0        0    0
>   14        121    37        0    1
>   15        0    0        0    0
>   16        56357    1100        0    1
>   17        16703    1089        0    1
>   18        0    0        0    0


Looks good. Probably "SMB3 CMD" should be just "CMD", and extra tabs
are needed for proper indentation.

--
Best regards,
Pavel Shilovsky
ronnie sahlberg March 26, 2019, 7:57 p.m. UTC | #4
Looks good.

On Wed, Mar 27, 2019 at 4:54 AM Steve French <smfrench@gmail.com> wrote:
>
> How about this updated patch - note how I now display min/max
> (although min in this case will always be zero since jiffies are
> pretty coarse granularity - maybe would be more helpful to run this
> example over a slow link, or I could just make the patch simpler and
> not save the minimum time for each command).
> [...]

Patch

From 3f1ea43711cc8298263323a25e26227cbb22873d Mon Sep 17 00:00:00 2001
From: Steve French <stfrench@microsoft.com>
Date: Tue, 26 Mar 2019 00:32:31 -0500
Subject: [PATCH] SMB3: track total time spent per command

This allows us to calculate average time spent in each SMB2/SMB3 command and
also helps determine where bottlenecks could be.

Signed-off-by: Steve French <stfrench@microsoft.com>
---
 fs/cifs/cifs_debug.c | 9 ++++++++-
 fs/cifs/cifsglob.h   | 3 ++-
 fs/cifs/transport.c  | 3 +++
 3 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
index 13c1288b04a7..8fcfc7770ebb 100644
--- a/fs/cifs/cifs_debug.c
+++ b/fs/cifs/cifs_debug.c
@@ -462,8 +462,10 @@  static ssize_t cifs_stats_proc_write(struct file *file,
 			server = list_entry(tmp1, struct TCP_Server_Info,
 					    tcp_ses_list);
 #ifdef CONFIG_CIFS_STATS2
-			for (i = 0; i < NUMBER_OF_SMB2_COMMANDS; i++)
+			for (i = 0; i < NUMBER_OF_SMB2_COMMANDS; i++) {
 				atomic_set(&server->smb2slowcmd[i], 0);
+				server->time_per_cmd[i] = 0;
+			}
 #endif /* CONFIG_CIFS_STATS2 */
 			list_for_each(tmp2, &server->smb_ses_list) {
 				ses = list_entry(tmp2, struct cifs_ses,
@@ -531,6 +533,11 @@  static int cifs_stats_proc_show(struct seq_file *m, void *v)
 		server = list_entry(tmp1, struct TCP_Server_Info,
 				    tcp_ses_list);
 #ifdef CONFIG_CIFS_STATS2
+		seq_printf(m, "Total time spent processing each command\n");
+		for (j = 0; j < NUMBER_OF_SMB2_COMMANDS; j++)
+			seq_printf(m, "cmd: %d jiffies: %llu seconds %llu\n", j,
+				server->time_per_cmd[j],
+				server->time_per_cmd[j]/HZ);
 		for (j = 0; j < NUMBER_OF_SMB2_COMMANDS; j++)
 			if (atomic_read(&server->smb2slowcmd[j]))
 				seq_printf(m, "%d slow responses from %s for command %d\n",
diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
index 38feae812b47..7232788fffc0 100644
--- a/fs/cifs/cifsglob.h
+++ b/fs/cifs/cifsglob.h
@@ -18,7 +18,7 @@ 
  */
 #ifndef _CIFS_GLOB_H
 #define _CIFS_GLOB_H
-
+#define CONFIG_CIFS_STATS2 1 /* BB REMOVEME */
 #include <linux/in.h>
 #include <linux/in6.h>
 #include <linux/slab.h>
@@ -708,6 +708,7 @@  struct TCP_Server_Info {
 	atomic_t in_send; /* requests trying to send */
 	atomic_t num_waiters;   /* blocked waiting to get in sendrecv */
 	atomic_t smb2slowcmd[NUMBER_OF_SMB2_COMMANDS]; /* count resps > 1 sec */
+	__u64 time_per_cmd[NUMBER_OF_SMB2_COMMANDS]; /* total time per cmd */
 #endif /* STATS2 */
 	unsigned int	max_read;
 	unsigned int	max_write;
diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
index 1de8e996e566..0c7f264403bd 100644
--- a/fs/cifs/transport.c
+++ b/fs/cifs/transport.c
@@ -114,6 +114,9 @@  DeleteMidQEntry(struct mid_q_entry *midEntry)
 		cifs_small_buf_release(midEntry->resp_buf);
 #ifdef CONFIG_CIFS_STATS2
 	now = jiffies;
+	if (le16_to_cpu(midEntry->command) < NUMBER_OF_SMB2_COMMANDS)
+		midEntry->server->time_per_cmd[le16_to_cpu(midEntry->command)]
+			+= (now - midEntry->when_alloc);
 	/*
 	 * commands taking longer than one second (default) can be indications
 	 * that something is wrong, unless it is quite a slow link or a very
-- 
2.17.1