'''External sorting''' is a term for a class of [[sorting]] [[algorithm]]s that can handle massive amounts of [[data]]. External sorting is required when the data being sorted do not fit into the [[main memory]] of a computing device (usually [[RAM]]) and must instead reside in the slower [[external memory]] (usually a [[hard drive]]).  External sorting typically uses a [[hybrid algorithm|hybrid]] sort-merge strategy.  In the sorting phase, chunks of data small enough to fit in main memory are read, sorted, and written out to a temporary file.  In the merge phase, the sorted subfiles are combined into a single larger file.
 
==External merge sort==
One example of external sorting is the external [[merge sort]] algorithm, which sorts chunks that each fit in RAM, then merges the sorted chunks together.<ref>[[Donald Knuth]], ''The Art of Computer Programming'', Volume 3: ''Sorting and Searching'', Second Edition. Addison-Wesley, 1998, ISBN 0-201-89685-0, Section 5.4: External Sorting, pp. 248&ndash;379.</ref><ref>[[Ellis Horowitz]] and [[Sartaj Sahni]], ''Fundamentals of Data Structures'', W. H. Freeman & Co., ISBN 0-7167-8042-9.</ref> For example, for sorting 900 [[megabyte]]s of data using only 100 megabytes of RAM:
# Read 100 MB of the data in main memory and sort by some conventional method, like [[quicksort]].
# Write the sorted data to disk.
# Repeat steps 1 and 2 until all of the data is in sorted 100 MB chunks (there are 900 MB / 100 MB = 9 chunks), which now need to be merged into one single output file.
# Read the first 10 MB (= 100 MB / (9 chunks + 1)) of each sorted chunk into input buffers in main memory and allocate the remaining 10 MB for an output buffer.  (In practice, it might provide better performance to make the output buffer larger and the input buffers slightly smaller.)
# Perform a 9-way [[Merge algorithm|merge]] and store the result in the output buffer. Whenever the output buffer fills, write it to the final sorted file and empty it. Whenever any of the 9 input buffers empties, fill it with the next 10 MB of its associated 100 MB sorted chunk until no more data from the chunk is available. This is the key step that makes external merge sort work externally: because the merge algorithm makes only one sequential pass through each chunk, no chunk has to be loaded completely; instead, sequential parts of each chunk are loaded as needed. (A minimal code sketch of the whole process follows the list.)
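
The steps above can be condensed into a short program. The following is a minimal sketch, assuming the records are lines of a text file and that a fixed number of lines fits comfortably in memory; the chunk size and function name are illustrative, not part of any standard API. The k-way merge uses the Python standard library's lazy <code>heapq.merge</code>, so only a buffered portion of each sorted run is held in memory at a time, mirroring step 5:

<syntaxhighlight lang="python">
# Minimal sketch of a two-phase external merge sort for a text file with one
# record per line.  max_lines_in_memory stands in for "chunks that fit in RAM".
import heapq
import itertools
import os
import tempfile

def external_merge_sort(input_path, output_path, max_lines_in_memory=1_000_000):
    run_paths = []

    # Sorting phase: read a chunk that fits in memory, sort it with a
    # conventional in-memory sort, and write it out as a temporary "run" file.
    with open(input_path) as infile:
        while True:
            chunk = list(itertools.islice(infile, max_lines_in_memory))
            if not chunk:
                break
            chunk.sort()
            fd, run_path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as run_file:
                run_file.writelines(chunk)
            run_paths.append(run_path)

    # Merge phase: open every run and perform a single k-way merge.
    # heapq.merge consumes its inputs lazily, so each run is streamed from
    # disk in buffered pieces rather than loaded completely.
    run_files = [open(p) for p in run_paths]
    try:
        with open(output_path, "w") as outfile:
            outfile.writelines(heapq.merge(*run_files))
    finally:
        for f in run_files:
            f.close()
        for p in run_paths:
            os.remove(p)
</syntaxhighlight>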
 
===Additional passes===
That example shows a two-pass sort: a sort pass followed by a merge pass.  Note that a single merge pass combined all nine chunks at once, rather than merging two chunks per pass as in an ordinary merge sort, which would take about <math>\log_2 n</math> merge passes in total. The reason is that every merge pass requires reading and writing ''every value'' in the data from and to disk once. Disk access is usually slow, so reads and writes should be avoided as much as possible.
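More generally, if the data comprise <math>N</math> bytes and main memory holds <math>M</math> bytes (so the sort pass produces about <math>N/M</math> sorted runs), and each merge pass combines up to <math>k</math> runs at a time, then the number of merge passes needed is roughly

:<math>\left\lceil \log_k \frac{N}{M} \right\rceil,</math>

a standard estimate; the symbols are introduced here only for illustration. In the example above, <math>N/M = 9</math> and <math>k = 9</math>, so a single merge pass suffices.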
 
However, there is a trade-off with using fewer merge passes. As the number of chunks increases, the amount of data that can be read from each chunk at a time during the merge decreases. For sorting, say, 50 GB in 100 MB of RAM, a single merge pass is not efficient: the disk seeks required to fill the input buffers with data from each of the 500 chunks (only 100 MB / 501 ≈ 200 KB is read from each chunk at a time) take up most of the sort time.  Using two merge passes solves the problem.  The sorting process might then look like this (a rough count of the seeks saved follows the list):
 
# Run the initial chunk-sorting pass as before.
# Run a first merge pass combining 25 chunks at a time, resulting in 20 larger sorted chunks.
# Run a second merge pass to merge the 20 larger sorted chunks.
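
The benefit can be gauged by counting buffer refills, each of which costs roughly one seek (the figures below are rough illustrations for this example, ignoring the output buffer and rounding). With a single 500-way merge, each input buffer holds only about 100 MB / 501 ≈ 200 KB, so reading 50 GB requires about

:<math>\frac{50\ \text{GB}}{200\ \text{KB}} \approx 250{,}000</math>

refills. With two passes, the 25-way merges use input buffers of about 100 MB / 26 ≈ 3.8 MB and the final 20-way merge uses buffers of about 100 MB / 21 ≈ 4.8 MB, giving roughly

:<math>\frac{50\ \text{GB}}{3.8\ \text{MB}} + \frac{50\ \text{GB}}{4.8\ \text{MB}} \approx 24{,}000</math>

refills in total, at the cost of reading and writing every value one extra time.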
 
Like in-memory sorts, efficient external sorts require [[Big O notation|O]](''n'' log ''n'') time: exponential increases in data size require linear increases in the number of passes.  If one makes liberal use of the gigabytes of RAM provided by modern computers, the logarithmic factor grows very slowly: under reasonable assumptions, one could sort at least 500 GB of data using 1 GB of main memory before a third pass became advantageous, and could sort many times that before a fourth pass became useful.<ref>Assume a single disk with 200 MB/s transfer, 20 ms seek time, 1 GB of buffers, and 500 GB to sort.  The merging phase will have 500 buffers of 2 MB each, and will need to do 250,000 seeks while reading and then writing 500 GB.  It will spend 5,000 seconds seeking and 5,000 seconds transferring. Doing two passes as described above would nearly eliminate the seek time but add an additional 5,000 seconds of reading and writing, so this is approximately the break-even point between a two-pass and three-pass sort.</ref>
 
Doubling the memory dedicated to sorting both halves the number of chunks ''and'' doubles the size of each buffer-filling read during the merge phase, potentially reducing the number of seeks required by about three-quarters. So, dedicating more RAM to sorting can be an effective way to increase speed if it allows the number of passes to be reduced, or if disk seek time accounts for a substantial part of the sorting time.
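In symbols, for a single-pass merge of <math>N</math> bytes with <math>M</math> bytes of sort memory (an approximation that ignores the output buffer): there are about <math>N/M</math> runs, each input buffer holds about <math>M^2/N</math> bytes, and the number of buffer refills, and hence seeks, is roughly

:<math>\frac{N}{M^2/N} = \frac{N^2}{M^2},</math>

so doubling <math>M</math> cuts the seek count by about a factor of four.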
 
===Tuning performance===
The [http://sortbenchmark.org/ Sort Benchmark], created by computer scientist [[Jim Gray (computer scientist)|Jim Gray]], compares external sorting algorithms implemented using finely tuned hardware and software.  Winning implementations use several techniques:
 
* '''Using parallelism'''
** Multiple disk drives can be used in parallel in order to improve sequential read and write speed.  This can be a very cost-efficient improvement: a Sort Benchmark winner in the cost-centric Penny Sort category uses six hard drives in an otherwise midrange machine.<ref>Nikolas Askitis, [http://sortbenchmark.org/ozsort-2010.pdf OzSort 2.0: Sorting up to 252GB for a Penny]</ref>
** Sorting software can use [[Thread (computer science)|multiple threads]], to speed up the process on modern multicore computers.
** Software can use [[asynchronous I/O]] so that one run of data can be sorted or merged while other runs are being read from or written to disk.
** Multiple machines connected by fast network links can each sort part of a huge dataset in parallel.<ref>Rasmussen et al., [http://sortbenchmark.org/tritonsort_2010_May_15.pdf TritonSort]</ref>
* '''Increasing hardware speed'''
** Using more RAM for sorting can reduce the number of disk seeks and avoid the need for more passes.
** Fast external memory, like 15K RPM disks or [[solid-state drives]], can speed sorts (but adds substantial costs proportional to the data size).
** ''Many'' other factors can affect hardware's maximum sorting speed: CPU speed and number of cores, RAM access latency, input/output bandwidth, disk read/write speed, disk seek time, and others. "Balancing" the hardware to minimize bottlenecks is an important part of designing an efficient sorting system.
** Cost-efficiency as well as absolute speed can be critical, especially in cluster environments where lower node costs allow purchasing more nodes.
* '''Increasing software speed'''
** Some Sort Benchmark entrants use a variation on [[radix sort]] for the first phase of sorting: they separate records into one of many "bins" based on the beginning of each record's key. Sort Benchmark data is random and especially well-suited to this optimization.
** Compressing the input, intermediate files, and output can reduce time spent on I/O, but is not allowed in the Sort Benchmark.
** Because the Sort Benchmark sorts long (100-byte) records using short (10-byte) keys, sorting software sometimes rearranges the keys separately from the values to reduce memory I/O volume (a brief sketch of this technique follows the list).
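
As a toy illustration of the last point, assuming the benchmark's layout of 100-byte records with a 10-byte leading key (the function and constants below are hypothetical, for illustration only), one in-memory chunk can be sorted by moving only small (key, index) pairs and permuting the full records once at the end:

<syntaxhighlight lang="python">
# Hypothetical illustration of key/pointer sorting for one in-memory chunk of
# fixed-size records: compare and move only the 10-byte keys plus an index,
# then write the 100-byte records out in key order in a single final pass.
RECORD_SIZE = 100
KEY_SIZE = 10

def sort_chunk(records: list[bytes]) -> list[bytes]:
    # Sort indices by each record's leading key instead of shuffling whole
    # 100-byte records during the sort, reducing memory traffic.
    order = sorted(range(len(records)), key=lambda i: records[i][:KEY_SIZE])
    return [records[i] for i in order]
</syntaxhighlight>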
 
==Other algorithms==
External merge sort is not the only external sorting algorithm; there are also ''distribution sorts'', which work by partitioning the unsorted values into smaller "buckets" that can each be sorted in main memory.  Just as external merge sort has [[merge sort]] as its main-memory counterpart, external distribution sort has [[bucket sort]].  There is a [[Duality (mathematics)|duality]], or fundamental similarity, between merge- and distribution-based algorithms that can aid in thinking about sorting and other external memory algorithms.<ref>[[J. S. Vitter]], ''[http://www.ittc.ku.edu/~jsv/Papers/Vit.IO_book.pdf Algorithms and Data Structures for External Memory]'', Series on Foundations and Trends in Theoretical Computer Science, now Publishers, Hanover, MA, 2008, ISBN 978-1-60198-106-6.</ref> There are also [[in-place algorithm]]s for external sorting, which require no additional disk space beyond that occupied by the original data.
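
The following is a minimal sketch of the distribution approach, assuming numeric records (one integer per line of a text file) with a known value range and enough buckets that each bucket fits in main memory; the bucket count, value range, and function name are illustrative assumptions:

<syntaxhighlight lang="python">
# Sketch of an external distribution (bucket) sort for a text file containing
# one integer per line, with values assumed to lie in [0, max_value).
import os
import tempfile

def distribution_sort(input_path, output_path, max_value, num_buckets=100):
    buckets = [tempfile.NamedTemporaryFile("w+", delete=False)
               for _ in range(num_buckets)]

    # Partition phase: stream the input once and append each record to the
    # temporary file for the bucket covering its value range.
    with open(input_path) as infile:
        for line in infile:
            value = int(line)
            index = min(value * num_buckets // max_value, num_buckets - 1)
            buckets[index].write(f"{value}\n")

    # Sort phase: each bucket is assumed to fit in main memory, so sort it
    # conventionally and append it to the output, in increasing bucket order.
    with open(output_path, "w") as outfile:
        for bucket in buckets:
            bucket.seek(0)
            outfile.writelines(sorted(bucket, key=int))
            bucket.close()
            os.remove(bucket.name)
</syntaxhighlight>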
 
==References==
{{reflist|30em}}
 
==External links==
*[http://stxxl.sourceforge.net/ STXXL, an algorithm toolkit including external mergesort]
*[http://cis.stvincent.edu/html/tutorials/swd/extsort/extsort.html An external mergesort example]
*[http://code.google.com/p/kway A K-Way Merge Implementation]
*[http://code.google.com/p/externalsortinginjava/ External-Memory Sorting in Java]
*[http://code.google.com/p/judyarray A sample pennysort implementation using Judy Arrays]
*[http://sortbenchmark.org/ Sort Benchmark]
 
[[Category:Sorting algorithms]]
[[Category:External memory algorithms]]
