netstat -p protocol

The netstat -p protocol command shows statistics for the value specified for the protocol variable (udp, tcp, sctp, ip, icmp), which is either a well-known name for a protocol or an alias for it.

Some protocol names and aliases are listed in the /etc/protocols file. A null response indicates that there are no numbers to report. If there is no statistics routine for the specified protocol, the program reports that the value of the protocol variable is unknown.
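A protocol name can be confirmed against /etc/protocols-style entries; the following is a minimal sketch with a few sample lines embedded so it runs without the real file (on a live system you could run the same awk filter against /etc/protocols itself):

```shell
# Look up the protocol number for a name; the heredoc holds sample
# /etc/protocols entries, not the contents of any real system file.
cat <<'EOF' | awk '$1 == "udp" { print $1, $2 }'
ip      0       IP
icmp    1       ICMP
tcp     6       TCP
udp     17      UDP
EOF
```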

The following example shows the output for the ip protocol:
# netstat -p ip
ip:
        45775 total packets received
        0 bad header checksums
        0 with size smaller than minimum
        0 with data size < data length
        0 with header length < data size
        0 with data length < header length
        0 with bad options
        0 with incorrect version number
        0 fragments received
        0 fragments dropped (dup or out of space)
        0 fragments dropped after timeout
        0 packets reassembled ok
        45721 packets for this host
        51 packets for unknown/unsupported protocol
        0 packets forwarded
        4 packets not forwardable
        0 redirects sent
        33877 packets sent from this host
        0 packets sent with fabricated ip header
        0 output packets dropped due to no bufs, etc.
        0 output packets discarded due to no route
        0 output datagrams fragmented
        0 fragments created
        0 datagrams that can't be fragmented
        0 IP Multicast packets dropped due to no receiver
        0 successful path MTU discovery cycles
        1 path MTU rediscovery cycle attempted
        3 path MTU discovery no-response estimates
        3 path MTU discovery response timeouts
        1 path MTU discovery decrease detected
        8 path MTU discovery packets sent
        0 path MTU discovery memory allocation failures
        0 ipintrq overflows
        0 with illegal source
        0 packets processed by threads
        0 packets dropped by threads
        0 packets dropped due to the full socket receive buffer
        0 dead gateway detection packets sent
        0 dead gateway detection packet allocation failures
        0 dead gateway detection gateway allocation failures

The fields of interest are described as follows:

  • Total Packets Received

    Total number of IP datagrams received.

  • Bad Header Checksum or Fragments Dropped

    If the output shows bad header checksum or fragments dropped due to dup or out of space, this indicates either a network that is corrupting packets or device driver receive queues that are not large enough.

  • Fragments Received

    Total number of fragments received.

  • Dropped after Timeout

    If the fragments dropped after timeout value is nonzero, then the time-to-live counter of the IP fragments expired, due to a busy network, before all fragments of the datagram arrived. To avoid this, use the no command to increase the value of the ipfragttl network parameter. Another reason could be a lack of mbufs; in that case, increase the value of the thewall parameter.

  • Packets Sent from this Host

    Number of IP datagrams that were created and sent out from this system. This counter does not include the forwarded datagrams (passthrough traffic).

  • Fragments Created

    Number of fragments created in this system when IP datagrams were sent out.
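The ipfragttl and thewall tuning mentioned under Dropped after Timeout is done with the AIX no command; the following is a minimal sketch, with purely illustrative values rather than recommendations (on recent AIX levels thewall is sized automatically and may not be tunable):

```shell
# Display the current fragment time-to-live and mbuf memory ceiling
no -o ipfragttl
no -o thewall

# Illustrative increases only; choose values appropriate for your system
no -o ipfragttl=120
no -o thewall=131072
```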

When viewing IP statistics, look at the ratio of packets received to fragments received. As a guideline for small MTU networks, if 10 percent or more of the packets are getting fragmented, you should investigate further to determine the cause. A large number of fragments indicates that protocols above the IP layer on remote hosts are passing data to IP with data sizes larger than the MTU for the interface. Gateways/routers in the network path might also have a much smaller MTU size than the other nodes in the network. The same logic can be applied to packets sent and fragments created.
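The ratio check can be scripted; the following is a minimal sketch with illustrative sample counters embedded in a heredoc so it runs anywhere (on a live system, pipe the real netstat -p ip output into the awk filter instead):

```shell
# Compute fragments received as a percentage of total packets received.
# The heredoc holds illustrative sample counters, not real output.
netstat_ip_sample() {
    cat <<'EOF'
ip:
        45775 total packets received
        1200 fragments received
EOF
}

netstat_ip_sample | awk '
    /total packets received/ { recv = $1 }
    /fragments received/     { frag = $1 }
    END {
        pct = 100 * frag / recv
        printf "fragments received: %.1f%% of packets\n", pct
        # 10 percent or more suggests investigating MTU sizes on the path
        if (pct >= 10) print "above the 10 percent guideline"
    }'
```

With the sample counters above this prints "fragments received: 2.6% of packets"; the same filter applied to packets sent and fragments created covers the outbound side of the guideline.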

Fragmentation results in additional CPU overhead so it is important to determine its cause. Be aware that some applications, by their very nature, can cause fragmentation to occur. For example, an application that sends small amounts of data can cause fragments to occur. However, if you know the application is sending large amounts of data and fragmentation is still occurring, determine the cause. It is likely that the MTU size used is not the MTU size configured on the systems.

The following example shows the output for the udp protocol:
# netstat -p udp
udp:
        11623 datagrams received
        0 incomplete headers
        0 bad data length fields
        0 bad checksums
        620 dropped due to no socket
        10989 broadcast/multicast datagrams dropped due to no socket
        0 socket buffer overflows
        14 delivered
        12 datagrams output

Statistics of interest are:

  • Bad Checksums

    Bad checksums could happen due to hardware card or cable failure.

  • Dropped Due to No Socket

    Number of received UDP datagrams for which the destination socket port was not open. As a result, an ICMP Destination Unreachable - Port Unreachable message was sent out. However, if the received UDP datagrams were broadcast datagrams, no ICMP errors are generated. If this value is high, investigate how the application is handling sockets.

  • Socket Buffer Overflows

    Socket buffer overflows could be due to insufficient transmit and receive UDP sockets, too few nfsd daemons, or too small nfs_socketsize, udp_recvspace and sb_max values.

If the netstat -p udp command indicates socket overflows, then you might need to increase the number of nfsd daemons on the server. First, check the affected system for CPU or I/O saturation, and verify the recommended settings for the other communication layers by using the no -a command. If the system is saturated, you must either reduce its load or increase its resources.
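On AIX, those checks might look like the following; the tunable names come from the no command, and the values shown are purely illustrative, not recommendations:

```shell
# Display the current UDP and socket buffer tunables (AIX `no` command)
no -a | grep -E 'udp_sendspace|udp_recvspace|sb_max'

# One way to confirm nfsd is running on the NFS server
ps -ef | grep -c '[n]fsd'

# Illustrative increases only; keep udp_recvspace below sb_max
no -o udp_recvspace=262144
no -o sb_max=1048576
```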

The following example shows the output for the tcp protocol:
# netstat -p tcp
tcp:
        576 packets sent
                512 data packets (62323 bytes)
                0 data packets (0 bytes) retransmitted
                55 ack-only packets (28 delayed)
                0 URG only packets
                0 window probe packets
                0 window update packets
                9 control packets
                0 large sends
                0 bytes sent using largesend
                0 bytes is the biggest largesend
        719 packets received
                504 acks (for 62334 bytes)
                19 duplicate acks
                0 acks for unsent data
                449 packets (4291 bytes) received in-sequence
                8 completely duplicate packets (8 bytes)
                0 old duplicate packets
                0 packets with some dup. data (0 bytes duped)
                5 out-of-order packets (0 bytes)
                0 packets (0 bytes) of data after window
                0 window probes
                2 window update packets
                0 packets received after close
                0 packets with bad hardware assisted checksum
                0 discarded for bad checksums
                0 discarded for bad header offset fields
                0 discarded because packet too short
                0 discarded by listeners
                0 discarded due to listener's queue full
                71 ack packet headers correctly predicted
                172 data packet headers correctly predicted
        6 connection requests
        8 connection accepts
        14 connections established (including accepts)
        6 connections closed (including 0 drops)
        0 connections with ECN capability
        0 times responded to ECN
        0 embryonic connections dropped
        504 segments updated rtt (of 505 attempts)
        0 segments with congestion window reduced bit set
        0 segments with congestion experienced bit set
        0 resends due to path MTU discovery
        0 path MTU discovery terminations due to retransmits
        0 retransmit timeouts
                0 connections dropped by rexmit timeout
        0 fast retransmits
                0 when congestion window less than 4 segments
        0 newreno retransmits
        0 times avoided false fast retransmits
        0 persist timeouts
                0 connections dropped due to persist timeout
        16 keepalive timeouts
                16 keepalive probes sent
                0 connections dropped by keepalive
        0 times SACK blocks array is extended
        0 times SACK holes array is extended
        0 packets dropped due to memory allocation failure
        0 connections in timewait reused
        0 delayed ACKs for SYN
        0 delayed ACKs for FIN
        0 send_and_disconnects
        0 spliced connections
        0 spliced connections closed
        0 spliced connections reset
        0 spliced connections timeout
        0 spliced connections persist timeout
        0 spliced connections keepalive timeout

Statistics of interest are:

  • Packets Sent
  • Data Packets
  • Data Packets Retransmitted
  • Packets Received
  • Completely Duplicate Packets
  • Retransmit Timeouts

For the TCP statistics, compare the number of packets sent with the number of data packets retransmitted. If the number of packets retransmitted is over 10-15 percent of the total packets sent, TCP is experiencing timeouts, indicating that network traffic may be too high for acknowledgments (ACKs) to return before a timeout. A bottleneck on the receiving node or general network problems can also cause TCP retransmissions, which increase network traffic, further adding to any network performance problems.

Also, compare the number of packets received with the number of completely duplicate packets. If TCP on a sending node times out before an ACK is received from the receiving node, it will retransmit the packet. Duplicate packets occur when the receiving node eventually receives all the retransmitted packets. If the number of duplicate packets exceeds 10-15 percent, the problem may again be too much network traffic or a bottleneck at the receiving node. Duplicate packets increase network traffic.
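Both guideline checks, retransmitted versus sent and duplicates versus received, can be computed in one pass. The following is a minimal sketch; the heredoc holds illustrative sample counters (note the nonzero retransmit count chosen to exercise the check), and on a live system you would pipe the real netstat -p tcp output into the awk filter instead:

```shell
# Compute retransmit and duplicate-packet percentages from
# netstat -p tcp style output. Sample counters are illustrative.
netstat_tcp_sample() {
    cat <<'EOF'
tcp:
        576 packets sent
                512 data packets (62323 bytes)
                90 data packets (9000 bytes) retransmitted
        719 packets received
                8 completely duplicate packets (8 bytes)
EOF
}

netstat_tcp_sample | awk '
    /packets sent/                 { sent = $1 }
    /retransmitted/                { rexmit = $1 }
    /^ *[0-9]+ packets received/   { recv = $1 }
    /completely duplicate packets/ { dup = $1 }
    END {
        printf "retransmitted: %.1f%% of packets sent\n", 100 * rexmit / sent
        printf "duplicates:    %.1f%% of packets received\n", 100 * dup / recv
    }'
```

With the sample counters, the retransmit ratio (15.6 percent) exceeds the 10-15 percent guideline while the duplicate ratio (1.1 percent) does not.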

The retransmit timeouts counter is incremented when TCP sends a packet but does not receive an ACK in time and must resend it, and again for each subsequent retransmission. These continuous retransmissions drive CPU utilization higher, and if the receiving node still does not receive the packet, the connection is eventually dropped.