From: Josef Bacik <jbacik <at> fusionio.com>
Subject: Re: Varying Leafsize and Nodesize in Btrfs
Newsgroups: gmane.comp.file-systems.btrfs
Date: Thursday 30th August 2012 21:50:08 UTC
On Thu, Aug 30, 2012 at 03:34:49PM -0600, Martin Steigerwald wrote:
> Am Donnerstag, 30. August 2012 schrieb Josef Bacik:
> > On Thu, Aug 30, 2012 at 09:18:07AM -0600, Mitch Harder wrote:
> > > I've been trying out different leafsize/nodesize settings by
> > > benchmarking some typical operations.
> > > 
> > > These changes had more impact than I expected.  Using a
> > > leafsize/nodesize of either 8192 or 16384 provided a noticeable
> > > improvement in my limited testing.
> > > 
> > > These results are similar to some that Chris Mason has already
> > > reported:  https://oss.oracle.com/~mason/blocksizes/
> > > 
> > > I noticed that metadata allocation was more efficient with bigger
> > > block sizes.  My data was git kernel sources, which will utilize
> > > btrfs' inlining.  This may have tilted the scales.
> > > 
> > > Read operations seemed to benefit the most.  Write operations seemed
> > > to get punished when the leafsize/nodesize was increased to 64K.
> > > 
> > > Are there any known downsides to using a leafsize/nodesize bigger
> > > than the default 4096?
> > 
> > Once you cross some hardware-dependent threshold (usually past 32k) you
> > start incurring high memmove() overhead in most workloads.  Like all
> > benchmarking, it's good to test your workload and see what works best,
> > but 16k should generally be the best option.  Thanks,
> 
> I wanted to ask about 32k as well.
> 
> I used 32k on one 2.5 inch external eSATA disk. But I never measured 
> anything so far.
> 
> I wonder what a good value for SSD might be. I tend not to use more 
> than 16k, but that's just a gut feeling right now. Nothing based on a 
> well-founded explanation.
>

32k really starts to depend on your workload.  Generally speaking everybody
will be faster with 16k, but 32k starts to depend on your workload and
hardware, and then anything above 64k really starts to hurt with memmove().
With this sort of thing SSD vs not isn't going to make much of a difference;
erase blocks tend to be several megs in size, so you aren't going to get
anywhere close to avoiding the internal RMW cycle inside the SSD.  Thanks,
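To illustrate the memmove() cost mentioned above: inserting into the sorted
item array of a B-tree node shifts, on average, half the node's item headers
to make room, so per-insert copy cost grows roughly linearly with the node
size. A minimal back-of-the-envelope sketch (not btrfs code; the 25-byte
item_size is an assumption based on the on-disk btrfs_item header size):

```python
def avg_memmove_bytes(nodesize, item_size=25):
    """Rough average bytes shifted per insert into a sorted node:
    about half of the node's item headers move to open a slot.
    item_size=25 is an assumed on-disk btrfs_item header size."""
    items = nodesize // item_size          # headers that fit in one node
    return (items // 2) * item_size        # half of them move on average

# Copy cost per insert grows roughly linearly with node size:
for n in (4096, 16384, 32768, 65536):
    print(n, avg_memmove_bytes(n))
```

Note that the node size is fixed at mkfs time; in the btrfs-progs of this
era it was set with the mkfs.btrfs -l/-n (leafsize/nodesize) options.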

Josef
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html