[pfSense] Remote syslog issue (syslogd: sendto: No buffer space available) with Traffic Shaping

Seb wzd4j9jxq2 at snkmail.com
Thu May 3 10:52:59 EDT 2012


Hi list,
 
I have set up a remote syslog server, and pfSense is set to log everything
to it, including packets blocked by the default rule.  I have also added
Traffic Shaping to limit our bandwidth so that we don't get billed by
our ISPs for exceeding the contracted speed.  However, the result
is loads of messages like this:
 
	 syslogd: sendto: No buffer space available
 
on the Status: System logs: System page
(<https://192.168.4.23/diag_logs.php>).  (Some, but not all, of these
messages also reach the remote syslog server; only around 20% of them
make it.)
 
If I turn off logging of packets blocked by the default rule, the number of
these messages drops considerably, but that is what you would expect, as we
are then logging a lot less.
 
If I remove the traffic shaper, these messages stop entirely, so it is
not an issue with the syslog server or the network.  (The syslog server is
on the LAN.)
 
These messages also occur on the backup CARP firewall, but much less
frequently.  Then again, the backup blocks far fewer packets, as most of
them go to the primary.
 
Also, possibly related, the master and backup CARP members switch about once
a day with messages like:
 
vip16: MASTER -> BACKUP (more frequent advertisement received)

Or:
 
vip16: BACKUP -> MASTER (preempting a slower master)
 
I would really like to get to the bottom of that issue too, which is why I
am trying to solve the syslog issue first, in case the two are related.
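My working theory on the flapping, in case it helps: a CARP member
advertises every advbase + advskew/256 seconds, and a backup preempts when
it believes it can advertise faster than the master it is hearing.  So if
the shaper delays or drops the master's advertisements, the backup would
see a "slower master" and take over, exactly as in the log lines above.
Quick arithmetic sketch (the advbase/advskew values are assumed for
illustration, not our actual config):

```python
def carp_adv_interval(advbase, advskew):
    """CARP advertisement interval in seconds: advbase + advskew/256.

    advbase is whole seconds; advskew is in 1/256-second units, so a
    higher advskew means slower advertisements (i.e. a backup role).
    """
    return advbase + advskew / 256.0

# Typical primary/backup skews (assumed values):
primary = carp_adv_interval(1, 0)    # 1.0 s between advertisements
backup = carp_adv_interval(1, 100)   # ~1.39 s between advertisements

# The backup stays backup only while it keeps hearing advertisements
# faster than its own interval; a shaper-induced delay of a few hundred
# milliseconds on the primary's advertisements closes that gap.
assert primary < backup
```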
 
I have googled the syslog error above and found several threads on the
forums, but it looks like none was solved to the poster's satisfaction.
 
In case it helps, here is netstat -m on the primary, currently in backup
role:
 
# netstat -m
1400/790/2190 mbufs in use (current/cache/total)
1311/689/2000/25600 mbuf clusters in use (current/cache/total/max)
1309/355 mbuf+clusters out of packet secondary zone in use (current/cache)
0/65/65/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
2975K/1835K/4811K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/8/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
#


And the secondary, currently in master role:

# netstat -m
1402/1673/3075 mbufs in use (current/cache/total)
1311/1013/2324/25600 mbuf clusters in use (current/cache/total/max)
1309/483 mbuf+clusters out of packet secondary zone in use (current/cache)
0/103/103/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
2976K/2856K/5832K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/9/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines


Both machines show:

# netstat -s | grep buffer
        0 dropped due to full socket buffers
        0 messages dropped due to full socket buffers


We do have drops occurring on the default LAN queue, but I have added a rule
so that all local traffic goes into a qLocal queue with higher bandwidth,
and this queue is not seeing any drops.  (I tested this queue using an scp
file transfer, but it is set to match all protocols.)
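For reference, the rule I added is equivalent to something like the
following pf rule (a sketch only; the actual rule was created in the
pfSense GUI, and $lan_if / $lan_net are placeholder macros, not our real
interface names):

	# Send all LAN-to-LAN traffic (any protocol) into the
	# higher-bandwidth qLocal queue instead of the default LAN queue:
	pass in quick on $lan_if from $lan_net to $lan_net keep state queue qLocal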

Can anyone suggest what is wrong, or how to find out what is wrong?

Thanks and kind regards, 

Seb

 
