.TH filetop 8 "2016-02-08" "USER COMMANDS"
.SH NAME
filetop \- File reads and writes by filename and process. Top for files.
.SH SYNOPSIS
.B filetop [\-h] [\-a] [\-C] [\-r MAXROWS] [\-s {reads,writes,rbytes,wbytes}] [\-p PID] [interval] [count]
.SH DESCRIPTION
This is top for files.

This traces file reads and writes, and prints a per-file summary every interval
(by default, 1 second). By default the summary is sorted on the highest read
throughput (Kbytes). The sorting order can be changed via the -s option. By
default only I/O on regular files is shown; the -a option will list all file
types (sockets, FIFOs, etc).

This uses in-kernel eBPF maps to store per-process summaries for efficiency.

This script works by tracing the __vfs_read() and __vfs_write() functions using
kernel dynamic tracing, which instruments explicit read and write calls. If
files are read or written by other means (e.g., via mmap()), they will not be
visible to this tool. Also, this tool will need updating to match any code
changes to those VFS functions.
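.PP
The following is a minimal bcc sketch of this technique, not the actual filetop
source: it attaches kprobes to the same VFS functions and keeps a per-process
event count in an in-kernel BPF map. The map layout is a simplification, and
the traced symbol names may need adjusting for your kernel version.
.nf
#!/usr/bin/env python
# Minimal illustrative sketch (not the real filetop): count VFS read/write
# calls per PID using an in-kernel BPF hash map, then dump it once.
from bcc import BPF
from time import sleep

prog = """
#include <uapi/linux/ptrace.h>

BPF_HASH(counts, u32, u64);   // PID -> event count, aggregated in the kernel

int trace_rw(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    counts.increment(pid);    // per-process counter updated in kernel context
    return 0;
}
"""

b = BPF(text=prog)
# Instrument the explicit VFS read/write calls; these symbol names follow the
# description above and may differ on newer kernels (e.g. vfs_read()).
b.attach_kprobe(event="__vfs_read", fn_name="trace_rw")
b.attach_kprobe(event="__vfs_write", fn_name="trace_rw")

sleep(1)   # trace for one interval
for pid, count in sorted(b["counts"].items(), key=lambda kv: -kv[1].value):
    print("PID %d: %d VFS read/write calls" % (pid.value, count.value))
.fi
.PP
filetop itself aggregates by process and filename and also tracks byte totals,
but the pattern is the same: summarize in the kernel and read the map from
user space once per interval.
.PP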
This should be useful for file system workload characterization when analyzing
the performance of applications.

Note that tracing VFS level reads and writes can be a frequent activity, and
this tool can begin to cost measurable overhead at high I/O rates.

Since this uses BPF, only the root user can use this tool.
.SH REQUIREMENTS
CONFIG_BPF and bcc.
.SH OPTIONS
.TP
\-a
Include non-regular file types (sockets, FIFOs, etc).
.TP
\-C
Don't clear the screen.
.TP
\-r MAXROWS
Maximum number of rows to print. Default is 20.
.TP
\-s {reads,writes,rbytes,wbytes}
Sort column. Default is rbytes (read throughput).
.TP
\-p PID
Trace this PID only.
.TP
interval
Interval between updates, seconds.
.TP
count
Number of interval summaries.
.SH EXAMPLES
.TP
Summarize file reads and writes by process, 1 second screen refresh:
#
.B filetop
.TP
Don't clear the screen, and show the top 8 rows only:
#
.B filetop -Cr 8
.TP
5 second summaries, 10 times only:
#
.B filetop 5 10
.SH FIELDS
.TP
loadavg:
The contents of /proc/loadavg
.TP
PID
Process ID.
.TP
COMM
Process name.
.TP
READS
Count of reads during interval.
.TP
WRITES
Count of writes during interval.
.TP
R_Kb
Total read Kbytes during interval.
.TP
W_Kb
Total write Kbytes during interval.
.TP
T
Type of file: R == regular, S == socket, O == other (pipe, etc).
.SH OVERHEAD
Depending on the frequency of application reads and writes, overhead can become
significant, in the worst case slowing applications by over 50%. Hopefully for
real-world workloads the overhead is much less -- test before use. The reason
for the high overhead is that VFS reads and writes can be frequent events, and
although the per-event eBPF overhead is very small, multiplying that small
overhead by a million events per second makes it a million times worse.
Literally. You can gauge the number of reads and writes using the vfsstat(8)
tool, also from bcc.
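.PP
As a rough worked example (the per-event cost here is an assumed
order-of-magnitude figure, not a measured value): if each traced read or write
added about one microsecond, an application issuing one million reads and
writes per second would incur roughly one million x 1 microsecond = 1 second of
extra CPU time for every second traced, i.e. about one full CPU consumed by the
instrumentation alone.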
.SH SOURCE
This is from bcc.
.IP
https://github.com/iovisor/bcc
.PP
Also look in the bcc distribution for a companion _examples.txt file containing
example usage, output, and commentary for this tool.
.SH OS
Linux
.SH STABILITY
Unstable - in development.
.SH AUTHOR
Brendan Gregg
.SH INSPIRATION
top(1) by William LeFebvre
.SH SEE ALSO
vfsstat(8), vfscount(8), fileslower(8)