[clue] samba performance tuning?

Quentin Hartman qhartman at gmail.com
Wed Sep 18 11:12:25 MDT 2013


Does the admin user actually exist? And does it fail only sometimes, or
always?
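One quick way to answer the first question (a minimal sketch; assumes a Linux box where the posted config's `guest account = admin` has to map to a real Unix account):

```shell
# check_user NAME: succeeds if NAME resolves through NSS (/etc/passwd, LDAP, ...)
check_user() { getent passwd "$1" >/dev/null; }

# "admin" is the guest account from the posted smb.conf; if it doesn't
# resolve to a Unix user, guest mapping is a likely cause of
# NT_STATUS_NO_SUCH_USER.
check_user admin || echo "no Unix user 'admin'"
```

Even with `security = share`, guest access still needs the guest account to exist as a Unix user, so this is the first thing to rule out.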


On Wed, Sep 18, 2013 at 10:56 AM, Mike Bean <beandaemon at gmail.com> wrote:

> This is getting interesting:  the common thread is this in the samba log:
>
> [2013/09/18 16:54:11.396694,  2] auth/auth.c:319(check_ntlm_password)
>   check_ntlm_password:  Authentication for user [admin] -> [admin] FAILED
> with error NT_STATUS_NO_SUCH_USER
>
>
> On Wed, Sep 18, 2013 at 9:48 AM, Quentin Hartman <qhartman at gmail.com>wrote:
>
>> Graphite++. I use it for a number of application-level metrics at work
>> and it's suuuuuuper nice.
>>
>>
>> On Wed, Sep 18, 2013 at 9:41 AM, Chris Fedde <chris at fedde.us> wrote:
>>
>>> Add interface bytes in/out and error counts to your monitoring so you
>>> can look back and see whether there are any interesting issues there. Also,
>>> if you have smart switches, add monitoring there too. It's hard to know what
>>> changed when you don't have some kind of baseline.
>>>
>>> I use nagios and nagios graph for monitoring right now but I'm starting
>>> to get more interested in using Graphite for the time series data.
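A baseline like this can start as small as a periodic snapshot of `/proc/net/dev`; a hypothetical helper, with field positions per the standard Linux layout:

```shell
# net_counters FILE: print per-interface RX/TX byte and error counters from a
# /proc/net/dev-format file; diff two snapshots taken over time to get rates.
net_counters() {
  awk -F'[: ]+' 'NR > 2 {
    # after the colon: 8 RX fields (bytes packets errs ...) then 8 TX fields
    printf "%s rx_bytes=%s rx_errs=%s tx_bytes=%s tx_errs=%s\n", $2, $3, $5, $11, $13
  }' "$1"
}

net_counters /proc/net/dev
```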
>>>
>>>
>>> On Tue, Sep 17, 2013 at 8:57 PM, Mike Bean <beandaemon at gmail.com> wrote:
>>>
>>>> Don't believe it,  I've been messing with it for a while, but I don't
>>>> understand the appeal. GPFS just feels like another open source project IBM
>>>> bought up so they could take credit for the work.  (Used to be mmfs).
>>>>
>>>>
>>>> On Tue, Sep 17, 2013 at 4:51 PM, Quentin Hartman <qhartman at gmail.com>wrote:
>>>>
>>>>> I'd throw some instrumentation on the server hosting the samba share
>>>>> and see if you're simply saturating the links (users <-> samba <-> gpfs ),
>>>>> if nothing else has changed and the storage performance is generally within
>>>>> "ok" tolerances. I've not worked with gpfs before; it sounds like a cool tech.
>>>>>
>>>>> I like Ganglia for general system instrumentation like this. It can
>>>>> take some voodoo to make it work, but the graphs and trends it creates are
>>>>> usually worth it. If you need something quicker / lighter, collectd works
>>>>> well, but I haven't used it in a long time. As I recall it needs a plugin
>>>>> to do net stats.
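For the record, that plugin is collectd's bundled `interface` plugin; a minimal sketch of the collectd.conf stanza (the interface name is an example):

```
LoadPlugin interface

<Plugin interface>
  Interface "eth0"
  IgnoreSelected false
</Plugin>
```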
>>>>>
>>>>> QH
>>>>>
>>>>>
>>>>> On Tue, Sep 17, 2013 at 4:26 PM, Mike Bean <beandaemon at gmail.com>wrote:
>>>>>
>>>>>> Samba version 3.6.5
>>>>>>
>>>>>> "What is slow?"
>>>>>> A good, worthy question to ask.  I don't have an answer yet because
>>>>>> I'm still trying to get in touch with the users.   When I pull up that
>>>>>> share, it's not appreciably degraded to me.  I consider it fine for use.
>>>>>>
>>>>>> Others
>>>>>>
>>>>>> That's the part that gets a little complicated.  The host system
>>>>>> itself is running on a gpfs cluster, but there have been exactly zero
>>>>>> complaints about gpfs access.  The cluster, for all intents and purposes,
>>>>>> seems fine; the only issue they're having is the rate of access on this one
>>>>>> share over the last few weeks, and we aren't aware of any appreciable
>>>>>> change around the time it started, roughly two weeks ago.
>>>>>>
>>>>>> (I did notice smbstatus didn't work until I softlinked our .conf file
>>>>>> to /usr/local/samba/lib... flimsy, I admit, but a coincidence?)
>>>>>>
>>>>>>
>>>>>> On Tue, Sep 17, 2013 at 3:52 PM, Quentin Hartman <qhartman at gmail.com>wrote:
>>>>>>
>>>>>>> Nothing from your config snippet jumps out at me.
>>>>>>>
>>>>>>> Some more info about the environment would be useful:
>>>>>>>
>>>>>>> - What version of Samba?
>>>>>>> - What's the hardware like?
>>>>>>> - Is it a local FS, or is it an NFS export or something like that?
>>>>>>> - What is "slow"?
>>>>>>> - Are only transfers slow, or is it directory browsing? Both?
>>>>>>> - Is the network getting saturated?
>>>>>>> - Is there anything else going on that might be competing for IO?
>>>>>>>
>>>>>>>
>>>>>>> The few times I've had complaints about Samba performance, it's been
>>>>>>> either an IO problem (backup running while people were hitting the server),
>>>>>>> DNS problems (server trying to do lookups and waiting for timeouts), poorly
>>>>>>> configured NFS export from another machine (wsize, rsize anyone?), or the
>>>>>>> network has simply been saturated (adding a second bonded interface solved
>>>>>>> that one). You may be noticing a theme in that the "samba problems" I've
>>>>>>> faced haven't been samba problems at all....
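On the NFS case, the knobs in question are mount options; a hypothetical /etc/fstab line (server, export, and sizes are examples, and modern clients usually negotiate these reasonably on their own):

```
fileserver:/export  /mnt/export  nfs  rsize=32768,wsize=32768,hard  0 0
```

`grep /mnt/export /proc/mounts` shows the rsize/wsize the kernel actually negotiated, which is worth checking before blaming Samba.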
>>>>>>>
>>>>>>> Hope this helps.
>>>>>>>
>>>>>>> QH
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Sep 17, 2013 at 3:26 PM, Mike Bean <beandaemon at gmail.com>wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> At the risk of being really blunt, CLUE has always given me good
>>>>>>>> advice in the past, so I thought I'd ask for some pointers.  We have a
>>>>>>>> situation at work where some of our code monkeys are complaining about the
>>>>>>>> performance on a samba share mounted on a RHEL6.1 server;  I'm trying to
>>>>>>>> get a path out of them (the monkeys) so I can reproduce the issue, but in
>>>>>>>> the meantime we're not seeing an appreciable performance problem or
>>>>>>>> evidence of any large errors.   We're thinking it's going to come down to
>>>>>>>> Samba performance tuning, and wouldn't you know, I know exactly spit and
>>>>>>>> nothing about Samba performance tuning.
>>>>>>>>
>>>>>>>> Prayed at the google altar, as usual, and unless my questing has
>>>>>>>> served me poorly, the biggest gains are to be had in TCP_NODELAY, which was
>>>>>>>> already in our conf.
>>>>>>>>
>>>>>>>> Here's our smb.conf:
>>>>>>>>
>>>>>>>> # Global parameters
>>>>>>>> [global]
>>>>>>>>         socket options = IPTOS_LOWDELAY TCP_NODELAY SO_RCVBUF=65536
>>>>>>>> SO_SNDBUF=65535
>>>>>>>>         encrypt passwords = Yes
>>>>>>>>         log level = 2
>>>>>>>>         log file = /var/log/samba.log.%m
>>>>>>>>         guest account = admin
>>>>>>>>         security = share
>>>>>>>>         kernel oplocks = no
>>>>>>>>         dead time = 15                     # Default is 0
>>>>>>>>         getwd cache = yes
>>>>>>>>         lpq cache = 30
>>>>>>>>
>>>>>>>> [dqm_share]
>>>>>>>>    comment = Some Share
>>>>>>>>    path = /xxxx/yyyyyyyyyyyy
>>>>>>>>    public = yes
>>>>>>>>    writable = yes
>>>>>>>>    printable = no
>>>>>>>>    create mask = 0664
>>>>>>>>    directory mask = 0775
>>>>>>>> #  strict locking = no                  # commented out to test its effects
>>>>>>>>
>>>>>>>> As I see it, there's not much tuning I can do without benchmarking
>>>>>>>> the share and that's a whole new can of worms;  so I thought I'd solicit
>>>>>>>> suggestions/advice from CLUE members willing to give it.
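Benchmarking doesn't have to be a big production, though. A crude throughput probe (hypothetical helper and paths) is enough to separate "Samba is slow" from "the underlying filesystem is slow":

```shell
# write_mb DIR N: write N MiB of zeros into DIR with an fsync, report dd's
# throughput summary line, then clean up. Crude, but enough for
# before/after comparisons between the local disk, GPFS, and the share.
write_mb() {
  dd if=/dev/zero of="$1/clue_bench.tmp" bs=1M count="$2" conv=fsync 2>&1 | tail -n 1
  rm -f "$1/clue_bench.tmp"
}

write_mb /tmp 10                  # local-disk baseline
# write_mb /gpfs/mount/point 10       # same write straight to GPFS
# time smbclient //host/dqm_share -N -c 'put bigfile'   # same write via Samba
```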
>>>>>>>>
>>>>>>>> thanks,
>>>>>>>>
>>>>>>>> Mike Bean
>>>>>>>>
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> clue mailing list: clue at cluedenver.org
>>>>>>>> For information, account preferences, or to unsubscribe see:
>>>>>>>> http://cluedenver.org/mailman/listinfo/clue
>>>>>>>>