We've only really scratched the surface of algorithms in this chapter, as there are millions of different algorithms for millions of different problems!
Algorithms are used in maths, route planning, network planning and operation, problem solving, artificial intelligence, genetic programming, computer vision, and many other areas; the list goes on and on!
But by going through this chapter you should have gained an understanding of the key concepts of algorithms and will be well prepared to tackle more complicated ones in the future.

The algorithms introduced in this chapter aren't necessarily the best choice for every situation; there are several other common ways of searching (e.g. hashing and search trees) and sorting (e.g. mergesort), and a computer scientist needs to know them, and be able to choose and fine-tune the right one for a given situation.
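As a taste of one of these alternatives, here is a minimal sketch of mergesort in Python. This is an illustrative version, not an implementation from this chapter: it repeatedly splits the list in half, sorts each half, and merges the sorted halves back together.

```python
def merge_sort(items):
    """Sort a list by recursively splitting it and merging the sorted halves."""
    if len(items) <= 1:
        return items  # a list of 0 or 1 items is already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # whatever remains is already sorted
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))  # → [1, 2, 3, 5, 8, 9]
```

Unlike selection sort, mergesort makes roughly n log n comparisons even in the worst case, which is one reason a computer scientist might pick it for large inputs.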

In this chapter we have only talked about two 'costs' of an algorithm: the number of comparisons it makes, and the amount of time a program implementing it takes to complete. There are actually many other ways of measuring the cost of an algorithm, including the amount of memory it uses and its computational complexity.

An algorithm often uses computer memory to store temporary data such as a partial sum of a list of numbers or a list of products that match some search criteria.
Given the large size of modern computer memory, this may seem less important than the number of steps an algorithm takes, but an algorithm that performs poorly in terms of memory may be unable to work with the large data sets common in many industry applications.
For example, a query algorithm that stored even a single bit for each record it searched could quickly overwhelm a web server's memory if it was searching a large data set such as Netflix's current movie offerings.
Minimising memory usage while also minimising the number of steps an algorithm takes is not always possible; there is often a tradeoff between computation and memory usage.
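One small illustration of this tradeoff in Python (a sketch added here, not an example from the chapter) is summing a large sequence of values. Storing every intermediate value first uses memory proportional to the size of the data, while a generator produces one value at a time and holds almost nothing in memory:

```python
# Two ways to sum the squares of the first million integers.

# Approach 1: build the whole list of squares in memory, then sum it.
# Simple, but memory use grows with the size of the data.
squares_list = [n * n for n in range(1_000_000)]
total_from_list = sum(squares_list)

# Approach 2: a generator yields one square at a time, so only a
# single value is held in memory at once. Same answer, far less memory.
total_from_generator = sum(n * n for n in range(1_000_000))

print(total_from_list == total_from_generator)  # → True
```

Here the memory saving costs nothing extra, but in other situations (for example, caching partial results to avoid recomputing them) using less memory genuinely means doing more computation.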

Computer scientists use 'Big O notation' to describe the performance or complexity of an algorithm more precisely, and you are likely to come across this notation very quickly when investigating the performance of algorithms. It characterises the resources an algorithm needs, and is usually applied to its execution time, or sometimes to the space it uses.

Big O notation, however, requires some advanced mathematics to explore thoroughly, so it has been intentionally left out of this main chapter; if you want to learn more, check out the Further reading section. These topics are also looked at in more depth in the Complexity and Tractability chapter.
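Without going into the mathematics, the idea behind Big O can be glimpsed by counting comparisons, as we did earlier in the chapter. The sketch below (an illustration added here, not one of the chapter's own examples) counts the comparisons made by a sequential search and a binary search on the same million-item sorted list; the gap between the two counts is exactly what Big O notation captures.

```python
def linear_search_comparisons(items, target):
    """Count the comparisons a sequential search makes before finding target."""
    count = 0
    for item in items:
        count += 1
        if item == target:
            break
    return count

def binary_search_comparisons(items, target):
    """Count the comparisons a binary search makes on a sorted list."""
    low, high = 0, len(items) - 1
    count = 0
    while low <= high:
        mid = (low + high) // 2
        count += 1
        if items[mid] == target:
            break
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return count

data = list(range(1_000_000))
print(linear_search_comparisons(data, 999_999))  # → 1000000
print(binary_search_comparisons(data, 999_999))  # around 20
```

A computer scientist would summarise this by saying sequential search is O(n) while binary search is O(log n).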

To make things even more complicated, theoretical analysis techniques such as Big O notation are extremely useful when designing algorithms and predicting their performance, but empirical analysis (such as stopwatch timing) shows that in practice an algorithm's performance can vary greatly due to hardware and operating system design.
Most computers have cached memory and virtual memory, so the time to access a particular value can be very short or very long depending on where that value is currently stored. There is a whole range of algorithms designed for this situation, to make sure that programs still run efficiently in such environments. These algorithms are still based on the ideas we've looked at in this chapter, but require some clever adjustments to work well.
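You can see this kind of empirical variation yourself. The sketch below (an added illustration, using Python's standard timeit module) times exactly the same search several times; even on identical input, the wall-clock timings fluctuate from run to run because of caching, other running programs, and operating system scheduling.

```python
import timeit

data = list(range(100_000))

# Time the same membership test five times, 100 repetitions each.
# The five results will differ slightly even though the work is identical,
# due to caching and operating system scheduling.
timings = timeit.repeat(lambda: 99_999 in data, number=100, repeat=5)

print(min(timings), max(timings))  # fastest and slowest of the five runs
```

This is why careful empirical analysis reports the spread of several timed runs (or the minimum) rather than a single stopwatch reading.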