=Algorithms=
 
==Algorithmic analysis==
 
:<math>f(n) \text{ is } \Theta(g(n)) \iff \exists\ c_1, c_2 > 0,\ n_0 \text{ s.t. } c_1 g(n) \le f(n) \le c_2 g(n)\ \forall\ n > n_0.</math>
 
:<math>f(n) \text{ is } O(g(n)) \iff \exists\ c > 0,\ n_0 \text{ s.t. } c g(n) \ge f(n)\ \forall\ n > n_0.</math>
 
:<math>f(n) \text{ is } \Omega(g(n)) \iff \exists\ c > 0,\ n_0 \text{ s.t. } c g(n) \le f(n)\ \forall\ n > n_0.</math>
 
Order of functions with increasing growth rate:
 
<math>1,\; \log{\log{n}},\; \log{n},\; \sqrt{n},\; n,\; n\log{n},\; n^2,\; n^2\log{n},\; n^3,\; 2^n,\; n2^n,\; 3^n.</math>
 
==Sorting==
 
A sorting algorithm may have the following properties:
 
* '''in-place''', if it requires Θ(1) extra space.
* '''stable''', if items with the same sorting key remain in the same relative order as before the sort.
 
===Selection sort===
 
Best time: Θ(n²).<br />
Average time: Θ(n²).<br />
Worst time: Θ(n²).
 
Iterate through decreasing sublists, finding the largest item, and swapping that item with the item at the end.
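A minimal Python sketch of the passes described above (function name is illustrative):

```python
def selection_sort(a):
    """In-place selection sort: repeatedly swap the largest
    remaining item to the end of a shrinking sublist."""
    for end in range(len(a) - 1, 0, -1):
        # Find the index of the largest item in a[0..end].
        largest = 0
        for i in range(1, end + 1):
            if a[i] > a[largest]:
                largest = i
        # Swap it with the item at the end of the sublist.
        a[largest], a[end] = a[end], a[largest]
    return a
```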
 
===Insertion sort===
 
Best time: Θ(n).<br />
Average time: Θ(n²).<br />
Worst time: Θ(n²).
 
Iterate through increasing sublists, finding the correct position for the item immediately following the sublist by shifting all greater elements in the sublist one place towards the end. Finally, place that item in its correct position.
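The shift-and-place step can be sketched as follows (a minimal illustration, not a reference implementation):

```python
def insertion_sort(a):
    """In-place insertion sort: grow a sorted prefix one item at a time."""
    for i in range(1, len(a)):
        item = a[i]
        j = i - 1
        # Shift all greater elements in the sorted sublist one place
        # towards the end.
        while j >= 0 and a[j] > item:
            a[j + 1] = a[j]
            j -= 1
        # Place the item in its correct position.
        a[j + 1] = item
    return a
```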
 
===Bubble sort===
 
Best time: Θ(n), if a pass with no swaps ends the sort early.<br />
Average time: Θ(n²).<br />
Worst time: Θ(n²).
 
Iterate through decreasing sublists, swapping two adjacent items if they are not in order, until the list is sorted.
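A sketch of the pass structure, assuming an early-exit check once a pass performs no swaps:

```python
def bubble_sort(a):
    """In-place bubble sort over decreasing sublists."""
    for end in range(len(a) - 1, 0, -1):
        swapped = False
        # Swap adjacent items that are out of order.
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        # No swaps means the list is already sorted.
        if not swapped:
            break
    return a
```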
 
===HeapSort===
 
Worst time: Θ(n log n).
 
Take an array; starting with the parent of the last element and working backwards, sift each node down to establish the heap property (see the remove operation on a heap). Then repeatedly remove the item at the top and place it in the slot vacated at the end of the shrinking heap, leaving the list in ascending order.
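The build-then-extract procedure can be sketched in Python (a standalone sift-down is assumed, matching the heap's remove operation described later):

```python
def heapsort(a):
    """In-place heapsort using a max-heap laid out in the array."""
    n = len(a)

    def sift_down(root, end):
        # Restore the max-heap property for the subtree at `root`,
        # considering only indices < end.
        while True:
            child = 2 * root + 1
            if child >= end:
                return
            # Pick the larger of the two children.
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child

    # Build the heap, starting with the parent of the last element.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    # Repeatedly move the max into the vacated slot at the end.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a
```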
 
===QuickSort===
 
Average time: Θ(n log n).<br />
Worst time: Θ(n²).
Average space: Θ(log n).<br />
Worst space: Θ(n), which can be remedied to Θ(log n) using a custom stack (recursing into the smaller sublist first).
 
Select a pivot as the first item in the list, then place items less than the pivot before it, and those greater after it. Use the final position of the pivot to split the list up into two sublists. Repeat (substituting the sublists for the lists) until the sublists are of size one or zero.
 
The worst case for running time occurs when the list is already sorted, since the first item is always chosen as the pivot. To combat this, the pivot can be selected randomly.
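A minimal sketch of the partition-and-recurse scheme with a random pivot (this uses a Lomuto-style partition; the names are my own):

```python
import random

def quicksort(a, lo=0, hi=None):
    """In-place quicksort with a randomly selected pivot."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    # Random pivot guards against the already-sorted worst case.
    p = random.randint(lo, hi)
    a[lo], a[p] = a[p], a[lo]
    pivot = a[lo]
    # Partition: items less than the pivot before it, the rest after.
    i = lo
    for j in range(lo + 1, hi + 1):
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[lo], a[i] = a[i], a[lo]   # pivot lands at its final position i
    # Recurse on the two sublists either side of the pivot.
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)
    return a
```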
 
===Radix sort===
 
Worst time: Θ(dn), where d is the number of digits (Θ(n) when d is constant).
 
Place items into ten lists based on the least significant digit. Concatenate the lists. Repeat with the next most significant digit, until we pass the most significant digit of any of the numbers.
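A sketch of the distribute-and-concatenate rounds for non-negative integers in base 10:

```python
def radix_sort(nums):
    """LSD radix sort for non-negative integers, base 10."""
    if not nums:
        return nums
    digit = 1
    while digit <= max(nums):
        # Ten buckets, one per digit value; distribution is stable.
        buckets = [[] for _ in range(10)]
        for x in nums:
            buckets[(x // digit) % 10].append(x)
        # Concatenate the ten lists back into one.
        nums = [x for b in buckets for x in b]
        digit *= 10
    return nums
```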
 
==Searching==
 
===Linear search===
 
Best time: Θ(1).<br />
Average time: Θ(n).<br />
Worst time: Θ(n).
 
Iterate through the list, until we find the desired item, if it exists.
 
===Binary search===
 
Works on lists sorted in non-decreasing order.
 
Best time: Θ(1).<br />
Worst time: Θ(log n).
 
Compare the target with the item in the middle of the list. If the target is larger, check the upper half; if smaller, check the lower half; if they match, return.
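The halving step can be sketched iteratively (returning -1 when the item is absent is my own convention):

```python
def binary_search(a, target):
    """Return the index of `target` in the sorted list `a`, or -1."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if target > a[mid]:
            lo = mid + 1      # check the upper half
        elif target < a[mid]:
            hi = mid - 1      # check the lower half
        else:
            return mid        # matched
    return -1
```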
 
=Data structures=
 
==Stack==
 
push(x): add an item to the top of the stack.<br />
pop(): returns and removes the item at the top of the stack.<br />
makeEmpty(): removes all items.<br />
isEmpty(): returns true if empty, false otherwise.
 
push is Θ(1) if the underlying array isn't full, and Θ(n) if the underlying array is full. pop is Θ(1).
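An array-backed sketch, assuming the usual grow-by-doubling scheme when the array is full (no underflow checks, for brevity):

```python
class ArrayStack:
    """Stack on a fixed array; push doubles the array when full,
    so a single push is Θ(n) then but Θ(1) otherwise."""

    def __init__(self, capacity=4):
        self._items = [None] * capacity
        self._size = 0

    def push(self, x):
        if self._size == len(self._items):
            # Array full: copy into an array twice the size, Θ(n).
            self._items = self._items + [None] * len(self._items)
        self._items[self._size] = x
        self._size += 1

    def pop(self):
        # Θ(1): return and remove the top item.
        self._size -= 1
        x = self._items[self._size]
        self._items[self._size] = None
        return x

    def make_empty(self):
        self._size = 0

    def is_empty(self):
        return self._size == 0
```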
 
==Queue==
 
enqueue(x): add x to the end of the queue.<br />
dequeue(): return and remove the item at the front of the queue.<br />
makeEmpty(): removes all items.<br />
isEmpty(): returns true if empty, false otherwise.
 
==List==
 
length(): return length of list.<br />
begin(): return position at 0.<br />
end(): return position at end.<br />
next(p): return position at p + 1.<br />
prev(p): return position at p - 1.
 
get(p): return item at p.<br />
set(p, x): set item at position p to x.<br />
insert(p, x): insert item x at position p, right-shifting subsequent items.<br />
remove(p): remove item at position p.
 
find(x): return first position at which x exists.<br />
nth(k): return the k-th position at which x exists.
 
===Array implementation===
 
Operations insert, remove and find have a worst case running time of Θ(n). Everything else is Θ(1).
 
===Linked list implementation===
 
Insert and remove have a running time of Θ(1), random access is Θ(n).
 
Can have a header node, a tail pointer, and can be doubly linked, which improves certain operations.
 
==Priority queue==
 
insert(x, p): add x to the queue, with priority p.<br />
remove(): return and remove the item with the highest priority.<br />
makeEmpty(): removes all items.<br />
isEmpty(): returns true if empty, false otherwise.
 
===List which inserts in sorted order (array)===
 
remove is Θ(1), but insert is Θ(n).
 
===Unsorted list===
 
insert is Θ(1), but remove is Θ(n).
 
===Heap===
 
insert and remove have a worst time of Θ(log n).
 
A complete binary tree which satisfies the heap property: the priority of every node is greater than or equal to the priority of its child nodes. This can be represented as an array, where the children of the node at index k are located at indices 2k+1 and 2k+2.
 
insert appends the item to the array (or inserts it into the newly vacated spot), then repeatedly swaps it with its parent while it has a higher priority than the parent.
 
remove returns and removes the item at the top of the array, then moves the item at the bottom of the array to the top. It then repeatedly swaps the newly placed item with whichever child has the greater priority, until neither child's priority exceeds its own.
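Both operations can be sketched on a plain Python list used as the array (a minimal max-heap, without empty-heap checks):

```python
class MaxHeap:
    """Binary max-heap in an array: children of index k are 2k+1, 2k+2."""

    def __init__(self):
        self._a = []

    def insert(self, x):
        a = self._a
        a.append(x)                 # append to the end of the array
        k = len(a) - 1
        # Sift up: swap with the parent while priority is higher.
        while k > 0 and a[k] > a[(k - 1) // 2]:
            parent = (k - 1) // 2
            a[k], a[parent] = a[parent], a[k]
            k = parent

    def remove(self):
        a = self._a
        top = a[0]
        a[0] = a[-1]                # move the last item to the top
        a.pop()
        # Sift down: swap with the larger child while smaller than it.
        k = 0
        while 2 * k + 1 < len(a):
            child = 2 * k + 1
            if child + 1 < len(a) and a[child + 1] > a[child]:
                child += 1
            if a[k] >= a[child]:
                break
            a[k], a[child] = a[child], a[k]
            k = child
        return top
```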
 
==Set==
 
add(x): adds x, if it doesn't already exist.<br />
remove(x): removes x.<br />
contains(x): tests for containership of x.<br />
size(): returns the size.
 
===Open hash table===
 
All operations have an average running time of Θ(1).
 
We have an array of a certain size, each slot containing a pointer to an (initially empty) list. Calculating a hash of the item gives us the array index; append the item to the list located at that index.
 
The idea is to keep about 25% of the table empty and to use an efficient hash function, so that the average length of each list is zero or one.
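A sketch of the array-of-lists idea, with a fixed table size for simplicity (a real table would resize to keep the lists short):

```python
class OpenHashSet:
    """Set via an open hash table (separate chaining): an array of
    lists, indexed by the hash of the item."""

    def __init__(self, capacity=16):
        self._buckets = [[] for _ in range(capacity)]

    def _bucket(self, x):
        # The hash, reduced modulo the table size, picks the list.
        return self._buckets[hash(x) % len(self._buckets)]

    def add(self, x):
        b = self._bucket(x)
        if x not in b:
            b.append(x)

    def remove(self, x):
        b = self._bucket(x)
        if x in b:
            b.remove(x)

    def contains(self, x):
        return x in self._bucket(x)

    def size(self):
        return sum(len(b) for b in self._buckets)
```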
 
===Closed hash table===
 
Use just one array, but in the case of collisions, insert the item by probing linearly from its desired location, until we find an empty spot. Finding items relies on the same strategy.
 
To avoid clustering, try quadratic probing.
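Linear probing can be sketched as below; resizing and deletion (which needs tombstone markers) are omitted, so this assumes the table never fills:

```python
class ClosedHashSet:
    """Set via a closed hash table with linear probing (sketch only:
    no resizing, no deletion, so the table must not fill up)."""

    def __init__(self, capacity=16):
        self._slots = [None] * capacity

    def _probe(self, x):
        # Start at the hashed slot; step linearly past collisions
        # until we find x or an empty slot.
        i = hash(x) % len(self._slots)
        while self._slots[i] is not None and self._slots[i] != x:
            i = (i + 1) % len(self._slots)
        return i

    def add(self, x):
        self._slots[self._probe(x)] = x

    def contains(self, x):
        return self._slots[self._probe(x)] == x
```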
 
==Map==
 
get(k): returns the value v if the (k, v) pair exists in the map.<br />
put(k, v): puts (k, v) into the map, or replaces the v if the k already exists.<br />
remove(k): removes the (k, v) if there is such a pair.<br />
containsKey(k): tests to see if the map contains a pair with key k.<br />
size(): returns the size.
 
For implementations, we can use either of the implementations for the Set, storing the (k, v) pair and hashing on the key k.

Revision as of 15:59, 25 August 2014