cin >> n;
sum = 0;
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        sum++;
* How fast will the program above run? What criteria determine its speed?
* We can check experimentally how long the program runs.
* To investigate its behavior more generally, we run it with different values of n.
* The results are summarized in the following table:
* The table shows that when we increase n (the size of the input) 10 times, the execution time increases about 100 times.
* Comparison of two functions: g1(n) = 2n^2 and g2(n) = 200n, which give the execution times of two algorithms A1 and A2 as functions of n.
* Asymptotically, algorithm A2 is faster: its complexity is linear, while that of A1 is quadratic.
* Consider a task in which the size of the input data is determined by an integer n.
* Almost all the tasks we will look at have this property.
* We will illustrate this by looking at a few examples:
Asymptotic notation
* When speaking about the complexity of an algorithm, we are most often interested in how it behaves for a sufficiently large input size n.
* When formally evaluating the complexity of algorithms, we therefore examine their behavior for "sufficiently large" n.
1. O(f) denotes the set of all functions g that grow no faster than f, i.e. there is a constant c > 0 such that g(n) <= c*f(n) for all sufficiently large values of n.
2. Theta(f) denotes the set of all functions g that grow exactly as fast as f (up to a constant factor), i.e. there are constants c1 > 0 and c2 > 0 such that c1*f(n) <= g(n) <= c2*f(n) for all sufficiently large values of n.
3. Omega(f) denotes the set of all functions g that grow no slower than f, i.e. there is a constant c > 0 such that g(n) >= c*f(n) for all sufficiently large values of n.
O(f): properties and examples
* The notation O(f) is the most commonly used when evaluating the complexity of algorithms and programs.
* The most important properties of O(f) (~ denotes "belongs to"):
- f ~ O(f);
- c*f ~ O(f) for any constant c > 0;
- O(f) + O(g) ~ O(max(f, g));
- O(f) * O(g) ~ O(f*g).
Growth rate of commonly used functions (slowest to fastest):
1 < log n < n < n log n < n^2 < n^3 < 2^n < n!
** Determining the complexity of an algorithm
* Finding the function that relates the size of the input data to the running time.
* We always look at the worst case - the worst-case inputs.
- elementary operation - does not depend on the amount of data processed - O(1);
- sequence of operators - determined by the asymptotically slowest one - f + g ~ max(O(f), O(g));
- composition of operators - the complexities multiply - f(g) ~ O(f*g);
- conditional operators - determined by the asymptotically slowest among the condition and the branches;
- one loop, two nested loops, p nested loops - O(n), O(n^2), O(n^p).
// 1
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++, sum++);
// 2
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        if (a[i] == b[j]) return;
// 3
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        if (a[i] != b[j]) return;
// 4
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        if (a[i] == a[j]) return;
// 5
for (i = 0; i < n; i++)
    for (j = 0; j < i; j++)
        sum++;
// 6
for (i = 0; i < n; i++)
    for (j = 0; j < n*n; j++)
        sum++;
// 7
for (i = 0; i < n; i++)
    for (j = 0; j < i*i; j++)
        sum++;
// 8
for (i = 0; i < n; i++)
    for (j = 0; j < i*i; j++)
        for (k = 0; k < j*j; k++)
            sum++;
Let's look at the loop:
for (sum = 0, i = 1; i < n; i *= 2) sum++;
The variable i takes the values 1, 2, 4, ..., 2^k, ... until it reaches n. (Note that i must start from 1, not 0: doubling 0 would leave it at 0 forever.) The loop body runs approximately log n times, so the complexity is O(log n).
Calculation of recursion complexity
* Binary search in a sorted array - a recursive algorithm.
int binary_search(const vector<int>& v, int from, int to, int a) {
    // v is passed by const reference: copying it would cost O(n) per call
    if (from > to) return -1;
    int mid = (from + to) / 2;
    if (v[mid] == a) return mid;
    else if (v[mid] < a) return binary_search(v, mid + 1, to, a);
    else return binary_search(v, from, mid - 1, a);
}
* We count the accesses to the elements of the array.
* The recursive function examines the middle element and makes a recursive call on a subarray half as large.
* Therefore, if T(n) is the function giving the number of accesses, then T(n) = T(n/2) + 1.
* From the equality
T(n) = T(n/2) + 1 = T(n/4) + 2 = T(n/8) + 3 = ... = T(n/2^k) + k
we obtain for n = 2^k that T(n) = T(1) + log n, i.e. the complexity of the algorithm is O(log n).