Documentation for /proc/sys/net/*
	(c) 1999		Terrehon Bowden <terrehon@pacbell.net>
				Bodo Bauer <bb@ricochet.net>
	(c) 2000		Jorge Nerin <comandante@zaralinux.com>
	(c) 2009		Shen Feng <shen@cn.fujitsu.com>

For general info and legal blurb, please look in README.

==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/net

The interface to the networking parts of the kernel is located in
/proc/sys/net. The following table shows all possible subdirectories. You may
see only some of them, depending on your kernel's configuration.


Table : Subdirectories in /proc/sys/net
..............................................................................
 Directory Content             Directory  Content
 core      General parameter   appletalk  Appletalk protocol
 unix      Unix domain sockets netrom     NET/ROM
 802       E802 protocol       ax25       AX25
 ethernet  Ethernet protocol   rose       X.25 PLP layer
 ipv4      IP version 4        x25        X.25 protocol
 ipx       IPX                 token-ring IBM token ring
 bridge    Bridging            decnet     DEC net
 ipv6      IP version 6        tipc       TIPC
..............................................................................

1. /proc/sys/net/core - Network core options
-------------------------------------------------------

bpf_jit_enable
--------------

This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
and efficient infrastructure that allows executing bytecode at various
hook points. It is used in a number of Linux kernel subsystems such
as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints)
and security (e.g. seccomp). LLVM has a BPF back end that can compile
restricted C into a sequence of BPF instructions. After a program has
been loaded through bpf(2) and has passed the in-kernel verifier, a JIT
then translates these BPF programs into native CPU instructions. There
are two flavors of JITs, the newer eBPF JIT currently supported on:
  - x86_64
  - x86_32
  - arm64
  - arm32
  - ppc64
  - sparc64
  - mips64
  - s390x

and the older cBPF JIT supported on the following archs:
  - mips
  - ppc
  - sparc

eBPF JITs are a superset of cBPF JITs, meaning the kernel will
migrate cBPF instructions into eBPF instructions and then JIT
compile them transparently. Older cBPF JITs can only translate
tcpdump filters, seccomp rules, etc, but not eBPF programs
loaded through bpf(2).

Values :
	0 - disable the JIT (default value)
	1 - enable the JIT
	2 - enable the JIT and ask the compiler to emit traces on kernel log.

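As a sketch, the JIT can be toggled at runtime through sysctl. This assumes
root privileges and a kernel built with a JIT for your architecture (so that
the sysctl file exists):

```shell
# Enable the BPF JIT compiler (requires root).
sysctl -w net.core.bpf_jit_enable=1

# Equivalent direct write to procfs:
echo 1 > /proc/sys/net/core/bpf_jit_enable

# Read the current value back.
cat /proc/sys/net/core/bpf_jit_enable
```

Setting the value to 2 is intended for JIT developers only, since the emitted
traces end up in the kernel log.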
bpf_jit_harden
--------------

This enables hardening for the BPF JIT compiler. It is supported by the
eBPF JIT backends. Enabling hardening trades off performance, but can
mitigate JIT spraying.
Values :
	0 - disable JIT hardening (default value)
	1 - enable JIT hardening for unprivileged users only
	2 - enable JIT hardening for all users

bpf_jit_kallsyms
----------------

When the BPF JIT compiler is enabled, the compiled images are at
addresses unknown to the kernel, meaning they show up neither in traces
nor in /proc/kallsyms. This enables export of these addresses, which can
be used for debugging/tracing. If bpf_jit_harden is enabled, this
feature is disabled.
Values :
	0 - disable JIT kallsyms export (default value)
	1 - enable JIT kallsyms export for privileged users only

dev_weight
----------

The maximum number of packets that the kernel can handle on a NAPI
interrupt; it is a per-CPU variable. For drivers that support LRO or
GRO_HW, a hardware-aggregated packet is counted as one packet in this
context.

Default: 64

dev_weight_rx_bias
------------------

RPS (e.g. RFS, aRFS) processing competes with the registered NAPI poll
function of the driver for the per-softirq-cycle netdev_budget. This
parameter influences the proportion of the configured netdev_budget that
is spent on RPS-based packet processing during RX softirq cycles. It is
further meant for making the current dev_weight adaptable for asymmetric
CPU needs on the RX/TX side of the network stack (see
dev_weight_tx_bias). It is effective on a per-CPU basis. Determination
is based on dev_weight and is calculated multiplicatively
(dev_weight * dev_weight_rx_bias).
Default: 1

dev_weight_tx_bias
------------------

Scales the maximum number of packets that can be processed during a TX
softirq cycle. Effective on a per-CPU basis. Allows scaling of the
current dev_weight for asymmetric net stack processing needs. Be careful
to avoid making TX softirq processing a CPU hog.
Calculation is based on dev_weight (dev_weight * dev_weight_tx_bias).
Default: 1

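As a sketch of the arithmetic, with the default dev_weight of 64 and
hypothetical bias values of 1 (RX) and 4 (TX), the effective per-CPU,
per-cycle quotas are simple products:

```shell
# Hypothetical values for illustration; read the real ones from
# /proc/sys/net/core/ on a live system.
dev_weight=64
dev_weight_rx_bias=1
dev_weight_tx_bias=4

# Effective quotas are dev_weight scaled by the respective bias.
echo "RX quota: $((dev_weight * dev_weight_rx_bias))"   # prints "RX quota: 64"
echo "TX quota: $((dev_weight * dev_weight_tx_bias))"   # prints "TX quota: 256"
```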
default_qdisc
-------------

The default queuing discipline to use for network devices. This allows
overriding the default of pfifo_fast with an alternative. Since the
default queuing discipline is created without additional parameters, it
is best suited to queuing disciplines that work well without
configuration, like stochastic fair queue (sfq), CoDel (codel) or fair
queue CoDel (fq_codel). Don't use queuing disciplines like Hierarchical
Token Bucket or Deficit Round Robin which require setting up classes and
bandwidths. Note that physical multiqueue interfaces still use mq as
root qdisc, which in turn uses this default for its leaves. Virtual
devices (like e.g. lo or veth) ignore this setting and instead default
to noqueue.
Default: pfifo_fast

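For instance, switching the default to fq_codel might look like the sketch
below (requires root; "eth0" is a placeholder interface name, and existing
devices keep whatever qdisc they already have):

```shell
# Make fq_codel the default qdisc for devices created from now on
# (requires root).
sysctl -w net.core.default_qdisc=fq_codel

# Verify the setting.
cat /proc/sys/net/core/default_qdisc

# Inspect the qdisc actually attached to a device ("eth0" is a
# placeholder; substitute a real interface name).
tc qdisc show dev eth0
```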
busy_read
---------
Low latency busy poll timeout for socket reads. (needs CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for packets on the device queue.
This sets the default value of the SO_BUSY_POLL socket option.
It can be set or overridden per socket via the SO_BUSY_POLL socket option,
which is the preferred method of enabling it. If you need to enable the
feature globally via sysctl, a value of 50 is recommended.
Will increase power usage.
Default: 0 (off)

busy_poll
---------
Low latency busy poll timeout for poll and select. (needs CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for events.
The recommended value depends on the number of sockets you poll on:
50 for a few sockets, 100 for several hundred.
For more than that you probably want to use epoll.
Note that only sockets with SO_BUSY_POLL set will be busy polled,
so you want to either selectively set SO_BUSY_POLL on those sockets or set
the net.core.busy_read sysctl globally.
Will increase power usage.
Default: 0 (off)

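A sketch of enabling busy polling globally with the recommended starting
values (requires root and a kernel built with CONFIG_NET_RX_BUSY_POLL; note
the increased power usage):

```shell
# Busy-poll for up to 50 us on socket reads, and up to 50 us in
# poll/select (requires root; 50 is the suggested starting point,
# not a mandated value).
sysctl -w net.core.busy_read=50
sysctl -w net.core.busy_poll=50
```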
rmem_default
------------

The default setting of the socket receive buffer in bytes.

rmem_max
--------

The maximum receive socket buffer size in bytes.

tstamp_allow_data
-----------------
Allow processes to receive tx timestamps looped together with the original
packet contents. If disabled, transmit timestamp requests from unprivileged
processes are dropped unless the socket option SOF_TIMESTAMPING_OPT_TSONLY
is set.
Default: 1 (on)


wmem_default
------------

The default setting (in bytes) of the socket send buffer.

wmem_max
--------

The maximum send socket buffer size in bytes.

message_burst and message_cost
------------------------------

These parameters are used to limit the warning messages written to the kernel
log from the networking code. They enforce a rate limit to make a
denial-of-service attack impossible. A higher message_cost factor results in
fewer messages being written. message_burst controls when messages will
be dropped. The default settings limit warning messages to one every five
seconds.

warnings
--------

This sysctl is now unused.

This was used to control console messages from the networking stack that
occur because of problems on the network, like duplicate addresses or bad
checksums.

These messages are now emitted at KERN_DEBUG and can generally be enabled
and controlled by the dynamic_debug facility.

netdev_budget
-------------

Maximum number of packets taken from all interfaces in one polling cycle (NAPI
poll). In one polling cycle interfaces which are registered to polling are
probed in a round-robin manner. Also, a polling cycle may not exceed
netdev_budget_usecs microseconds, even if netdev_budget has not been
exhausted.

netdev_budget_usecs
-------------------

Maximum number of microseconds in one NAPI polling cycle. Polling
will exit when either netdev_budget_usecs have elapsed during the
poll cycle or the number of packets processed reaches netdev_budget.

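A sketch of inspecting and raising both limits together, e.g. on a host that
drops packets under RX load (requires root; 600 and 8000 are purely
illustrative values, not recommendations):

```shell
# Current per-softirq-cycle limits.
cat /proc/sys/net/core/netdev_budget
cat /proc/sys/net/core/netdev_budget_usecs

# Raise the packet and time budgets together (requires root; the
# values below are illustrative only).
sysctl -w net.core.netdev_budget=600
sysctl -w net.core.netdev_budget_usecs=8000
```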
netdev_max_backlog
------------------

Maximum number of packets queued on the INPUT side when the interface
receives packets faster than the kernel can process them.

netdev_rss_key
--------------

RSS (Receive Side Scaling) enabled drivers use a 40-byte host key that is
randomly generated.
Some user space might need to gather its content even if drivers do not
provide ethtool -x support yet.

myhost:~# cat /proc/sys/net/core/netdev_rss_key
84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8: ... (52 bytes total)

The file contains nul bytes if no driver ever called the
netdev_rss_key_fill() function.
Note:
/proc/sys/net/core/netdev_rss_key contains 52 bytes of key,
but most drivers only use 40 bytes of it.

myhost:~# ethtool -x eth0
RX flow hash indirection table for eth0 with 8 RX ring(s):
    0:    0     1     2     3     4     5     6     7
RSS hash key:
84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89

netdev_tstamp_prequeue
----------------------

If set to 0, RX packet timestamps can be sampled after RPS processing, when
the target CPU processes packets. This might add some delay to the
timestamps, but permits distributing the load across several CPUs.

If set to 1 (default), timestamps are sampled as soon as possible, before
queueing.

optmem_max
----------

Maximum ancillary buffer size allowed per socket. Ancillary data is a sequence
of struct cmsghdr structures with appended data.

fb_tunnels_only_for_init_net
----------------------------

Controls whether fallback tunnels (like tunl0, gre0, gretap0, erspan0,
sit0, ip6tnl0, ip6gre0) are automatically created when a new network
namespace is created, if the corresponding tunnel is present in the
initial network namespace.
If set to 1, these devices are not automatically created, and
user space is responsible for creating them if needed.

Default: 0 (for compatibility reasons)

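A sketch of suppressing fallback tunnels for new namespaces (requires root
and the ip(8) tool; it assumes a tunnel module such as ipip is loaded, so
that tunl0 exists in the initial namespace, and "demo" is a hypothetical
namespace name):

```shell
# Suppress automatic fallback tunnel creation in new network
# namespaces (requires root).
sysctl -w net.core.fb_tunnels_only_for_init_net=1

# A freshly created namespace now starts without tunl0 and friends;
# "demo" is a placeholder name.
ip netns add demo
ip netns exec demo ip link show
ip netns del demo
```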
2. /proc/sys/net/unix - Parameters for Unix domain sockets
-------------------------------------------------------

There is only one file in this directory.
unix_dgram_qlen limits the maximum number of datagrams queued in a Unix
domain socket's buffer. It takes effect only for PF_UNIX datagram
sockets.


3. /proc/sys/net/ipv4 - IPV4 settings
-------------------------------------------------------
Please see: Documentation/networking/ip-sysctl.txt and ipvs-sysctl.txt for
descriptions of these entries.


4. Appletalk
-------------------------------------------------------

The /proc/sys/net/appletalk directory holds the Appletalk configuration data
when Appletalk is loaded. The configurable parameters are:

aarp-expiry-time
----------------

The amount of time we keep an ARP entry before expiring it. Used to age out
old hosts.

aarp-resolve-time
-----------------

The amount of time we will spend trying to resolve an Appletalk address.

aarp-retransmit-limit
---------------------

The number of times we will retransmit a query before giving up.

aarp-tick-time
--------------

Controls the rate at which expires are checked.

The directory /proc/net/appletalk holds the list of active Appletalk sockets
on a machine.

The fields indicate the DDP type, the local address (in network:node format),
the remote address, the size of the transmit pending queue, the size of the
received queue (bytes waiting for applications to read), the state and the
uid owning the socket.

/proc/net/atalk_iface lists all the interfaces configured for Appletalk. It
shows the name of the interface, its Appletalk address, the network range on
that address (or network number for phase 1 networks), and the status of the
interface.

/proc/net/atalk_route lists each known network route. It lists the target
(network) that the route leads to, the router (may be directly connected), the
route flags, and the device the route is using.


5. IPX
-------------------------------------------------------

The IPX protocol has no tunable values in proc/sys/net.

The IPX protocol does, however, provide proc/net/ipx. This lists each IPX
socket giving the local and remote addresses in Novell format (that is
network:node:port). In accordance with the strange Novell tradition,
everything but the port is in hex. Not_Connected is displayed for sockets that
are not tied to a specific remote address. The Tx and Rx queue sizes indicate
the number of bytes pending for transmission and reception. The state
indicates the state the socket is in and the uid is the owning uid of the
socket.

The /proc/net/ipx_interface file lists all IPX interfaces. For each interface
it gives the network number, the node number, and indicates if the network is
the primary network. It also indicates which device it is bound to (or
Internal for internal networks) and the Frame Type if appropriate. Linux
supports 802.3, 802.2, 802.2 SNAP and DIX (Blue Book) ethernet framing for
IPX.

The /proc/net/ipx_route table holds a list of IPX routes. For each route it
gives the destination network, the router node (or Directly) and the network
address of the router (or Connected) for internal networks.

6. TIPC
-------------------------------------------------------

tipc_rmem
---------

The TIPC protocol now has a tunable for the receive memory, similar to
tcp_rmem - i.e. a vector of 3 INTEGERs: (min, default, max)

    # cat /proc/sys/net/tipc/tipc_rmem
    4252725 34021800        68043600
    #

The max value is set to CONN_OVERLOAD_LIMIT, and the default and min values
are scaled (shifted) versions of that same value.  Note that the min value
is not at this point in time used in any meaningful way, but the triplet is
preserved in order to be consistent with things like tcp_rmem.

named_timeout
-------------

TIPC name table updates are distributed asynchronously in a cluster, without
any form of transaction handling. This means that different race scenarios are
possible. One such scenario is that a name withdrawal sent out by one node and
received by another node may arrive after a second, overlapping name
publication has already been accepted from a third node, even though the
conflicting updates originally may have been issued in the correct sequential
order.
If named_timeout is nonzero, failed topology updates will be placed on a defer
queue until another event arrives that clears the error, or until the timeout
expires. The value is in milliseconds.