Data and Information Relationship: The Place of Data on the Road to Knowledge

26/06/2025
2 minute read

Have you ever noticed that the words “information” and “data” are often used interchangeably in everyday life? In fact, we rarely use the word “data” on its own; instead, we tend to assume that data simply means information. In reality, acquiring information involves placing multiple pieces of data within a context and becoming aware of the meaning that the data conveys.

The word "data" is the global equivalent of the Turkish word "veri." In the literature, it is defined as follows: "Information, especially facts or numbers, that are examined and evaluated to assist in decision-making, or information in electronic form that can be stored and used by a computer."

Etymologically, the word “data” is the plural form of the Latin word datum, which means “a given thing.” Its first use in English dates back to the 1640s. The term entered the realm of computing in 1946, when it began to be used to refer to “information that can be transferred and stored by a computer.”

The English word “information” comes from the Latin term informatio(n), which means “comprehension,” “instruction,” or “creation.” In information science, information refers to data that has been processed—recorded, stored, queried, organized, and summarized—by an information system. While raw data on its own carries no inherent meaning, it serves as a bridge in the process of acquiring knowledge once it has gone through these processes.

The Relationship Between Data and Information

Data and information are two closely related concepts. Meaningful information emerges only after data is collected and analyzed, and the critical point in the process of acquiring information lies in that analysis. The value of a particular data point is often directly related to how “unexpected” it is: the less predictable a piece of data is, the more informative it tends to be. The most widely accepted approach to quantifying the value of information is entropy theory, which measures how much uncertainty a message removes.

In entropy theory, this value depends on how unexpected or surprising a piece of data is. When a high-probability event occurs, the message conveying it carries little information. In contrast, when a low-probability event occurs, the message is highly informative, because it reduces far more uncertainty.
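The idea above can be sketched numerically. Shannon's measure of self-information assigns −log₂(p) bits to an event of probability p, and entropy is the expected self-information over a whole distribution. Below is a minimal Python illustration (the function names are my own, not from the original text):

```python
import math

def self_information(p: float) -> float:
    """Information content, in bits, of an event with probability p."""
    if not 0 < p <= 1:
        raise ValueError("probability must be in (0, 1]")
    return -math.log2(p)

def entropy(probabilities: list[float]) -> float:
    """Shannon entropy: the expected self-information of a distribution."""
    return sum(p * self_information(p) for p in probabilities if p > 0)

# A near-certain event carries almost no information...
print(self_information(0.99))   # ~0.0145 bits
# ...while a rare, surprising event carries much more.
print(self_information(0.01))   # ~6.64 bits

# A fair coin flip is maximally uncertain for two outcomes: 1 bit.
print(entropy([0.5, 0.5]))      # 1.0
```

The contrast between the two `self_information` calls captures the point exactly: the improbable event (p = 0.01) is hundreds of times more informative than the near-certain one (p = 0.99).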

To illustrate the relationship between data and information, consider a researcher interested in birdwatching. Suppose this person has access to data on which bird species are active at certain times of day or which months they migrate, along with other relevant contextual details. Using this data, the birdwatcher can determine the ideal time and location for observation. The combination of individual data points and the characteristics they represent is what we define as information. Data is commonly regarded as the more concrete of the two, while information is the more abstract: data provides the raw foundation, and information emerges from its meaningful interpretation within a context.
