Data is all around us, and it always has been. We use data to share photos, finish a biology report, and determine what route to take to a friend's house. We are also creating it constantly: with each day that passes, hundreds of millions of data points are generated and collected in many different ways. You can visualise it as a stream (or a raging river!) of information constantly rushing by, with more and more data being produced all the time.
When you search online for penguins and end up on Wikipedia, get directions to your friend's house on Google Maps, or click a link on Facebook, that generates data points (potentially hundreds of them) that are recorded, along with data from many other people. Together, these numerous data points are referred to as "big data", and they are used to identify patterns and trends that help predict user behaviour, or to uncover relationships between different variables and the decisions people make. Companies use these predictions in many ways, for example to make whatever you search for online easier to find. Big data isn't only collected through websites, though. For example, whenever you swipe a loyalty card at a store, that transaction and the details of your purchases are recorded, and will likely be used by the store to improve its products and marketing in the future.
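To make this more concrete, here is a minimal sketch in Python of what a single recorded data point from a click might look like. The field names, values, and log file name are purely illustrative assumptions, not any real company's logging format.

```python
import json
from datetime import datetime, timezone

# A single hypothetical "data point" recorded when a user clicks a link.
# All field names and values here are illustrative assumptions.
click_event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "user_id": "anonymised-12345",   # who (often anonymised in practice)
    "action": "click",               # what the user did
    "page": "https://en.wikipedia.org/wiki/Penguin",  # where it happened
    "referrer": "search",            # how the user arrived at the page
}

# Events like this are typically appended to a log, millions at a time,
# and analysed later to find patterns and trends.
with open("clickstream.log", "a") as log:
    log.write(json.dumps(click_event) + "\n")
```

A single event like this tells you very little; the patterns only emerge once millions of such records from many people are analysed together.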
There are non-commercial uses for big data as well. DNA mapping generates enormous amounts of data, and environmental sensors around the world are constantly collecting data about things like temperature, air pressure, or pollution levels. Big data from sources like these can be used to identify genetic disorders, study cancer cells, predict the weather, or investigate the impacts of climate change.
The availability of big data has revolutionised the way data is used in many industries. Data sets this enormous and complex require specialised methods for storing, processing, analysing, and visualising the information they contain.
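One simple idea behind many of these specialised methods is to process data as a stream, one record at a time, rather than loading an entire data set into memory at once. The sketch below illustrates this with a hypothetical file, readings.csv, containing a temperature column; the file name and column name are assumptions for the example.

```python
import csv

# A minimal sketch of stream-style processing: compute an average
# temperature from a file too large to hold in memory all at once.
# "readings.csv" and its "temperature" column are hypothetical.
total = 0.0
count = 0
with open("readings.csv", newline="") as f:
    for row in csv.DictReader(f):  # reads one row at a time
        total += float(row["temperature"])
        count += 1

if count:
    print(f"Average temperature over {count} readings: {total / count:.2f}")
```

Because only one row is held in memory at a time, the same approach works whether the file contains a thousand readings or a billion; real big data tools build on this idea at much larger scale, distributing the work across many machines.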