On February 27, 2010, an earthquake measuring 8.8 on the Richter scale struck Chile. A day later, copper futures prices skyrocketed. It turns out that Chile holds the world’s largest proven copper reserves, nearly a quarter of the global total, so the earthquake was bound to affect copper supply. Hence some people were filled with regret: why hadn’t they stocked up on copper the day before?
The most basic feature of “intelligence” is the unification of three elements: perception, decision-making, and execution. Hearing the news of the earthquake in Chile, immediately deciding to buy copper futures, and then acting on that decision is intelligent behavior.
The opposite of intelligent is, presumably, “unintelligent”: those who failed to stock up in time did not produce intelligent behavior. What is the difference between the two?
Logically speaking, there is a “distance” between “obtaining information” and “making a decision”. I call the process of crossing this distance “cognition”. For example, getting from knowing that “an earthquake has occurred” to realizing that “the price of copper will rise” is a process of cognition. Those who regretted not buying had a breakpoint in the cognition link, which blocked the generation of intelligence. A great deal of everyday information could be converted into business opportunities; what we lack is often the cognitive ability. When Ouyeel Cloud Merchant was founded a few years ago, Baosteel’s stock hit its daily limit. I had been involved in the planning, yet had absolutely no idea it would be connected to the stock price. That, too, is a typical example.
The “breakpoint” between “perception” and “cognition” is filled by knowledge. For example, recognizing that “copper prices will rise” requires knowledge such as “Chile has large copper mines”.
The intelligence we are promoting now is largely the process of turning human knowledge into machine intelligence, that is, turning the knowledge in human minds into code a computer can execute. Imagine a system in which the role of each location in the supply chain is recorded; whenever a disaster strikes a location, the system automatically reminds investors to pay attention to the related procurement business. That would be an intelligent system.
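Such a system can be sketched in a few lines. Everything below is a hypothetical illustration, not a real product: the lookup table encodes human expert knowledge (which location plays which role in which supply chain), and the function turns a perceived event into a cognitive conclusion.

```python
# Hypothetical sketch of a disaster-to-opportunity alert system.
# The table below is the human expert's knowledge, hand-encoded:
# which locations matter to which commodity supply chains.
SUPPLY_CHAIN_ROLES = {
    "Chile": [("copper", "largest proven reserves, roughly a quarter of the world total")],
    # ... further locations and roles, entered by domain experts
}

def alert_on_disaster(location: str) -> list[str]:
    """Cross the perception-to-cognition 'distance': from
    'a disaster occurred at location' to 'these supply chains
    may be affected, review related positions'."""
    alerts = []
    for commodity, reason in SUPPLY_CHAIN_ROLES.get(location, []):
        alerts.append(
            f"Disaster in {location}: {commodity} supply may tighten "
            f"({reason}). Review related procurement and futures positions."
        )
    return alerts

print(alert_on_disaster("Chile"))
```

The intelligence here lives entirely in the table: the code merely executes knowledge that humans supplied, which is exactly the point of the paragraph above.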
One often sees people overemphasize the role of machine learning while neglecting to extract the knowledge already in human brains. This is a mistake. Why?
In many cases, the knowledge obtainable through machine learning is limited; humans are far better at acquiring knowledge than machines. Take earthquakes: learning purely from data, a machine would need many earthquakes in Chile before it could learn the link between earthquakes and copper prices. A human can gain the knowledge from a single experience, and can generalize it as well: from Chile to other countries, from earthquakes to all kinds of disasters and accidents, from copper mines to other supply chains… This kind of ability is beyond the reach of machines. Recently, after the Meng Wanzhou incident, all of Huawei’s suppliers were quickly identified on the Internet and their market values reassessed. That is the same logic.
Turning the knowledge in human brains into machine knowledge is often more reliable than direct machine learning. In other words, it is the norm for an intelligent system’s knowledge to be provided by humans. In an intelligent system, the knowledge in human brains is the “staple food”, while knowledge acquired by machine learning is often the “MSG”. Of course, in some cases the role of machine learning is irreplaceable. A traditional difficulty of intelligence is that perceptual knowledge cannot be hand-coded. For example, a human driver brakes at a red traffic light; for a machine to achieve this cognition, it must learn to recognize red traffic lights, and in the past machine image recognition was weak. An important reason for the recent rapid progress of smart cars is that machine learning has mastered this kind of knowledge. However, such cases mainly involve simulating human perceptual cognition and are less common in industrial production settings.
Cybernetics regards the unity of “perception, decision-making, and execution” as intelligence, yet control theory rarely talks about “cognitive ability”. I think an important reason is that the input signals of a control system have clear meanings, such as pressure and flow, so there is often a “short circuit” between perception and cognition. In the move from “automation” to “intelligence”, however, the importance of “cognition” has grown.
We often speak of data analysis techniques, which in essence improve cognitive ability based on data: what does a change in some piece of information mean? For example, judging comprehensively from boiler thermal efficiency, flue gas temperature, and other data whether the chimney is blocked. That is, in fact, cognition.
This kind of knowledge is also generally the product of humans and machines combined: humans know that a chimney can clog, and know which parameters may reflect the clogging. But knowledge in the human brain can be ambiguous and needs to be quantified precisely through data analysis. In any case, the knowledge in human brains remains the mainstay.
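The division of labor described above can be sketched as follows. The human expert supplies the shape of the rule (a blocked chimney tends to lower thermal efficiency and raise flue gas temperature); data analysis supplies the exact thresholds. The threshold values here are hypothetical stand-ins for numbers that would be fitted from historical plant data:

```python
# Human knowledge (qualitative): a clogged chimney tends to
# lower boiler thermal efficiency and raise flue gas temperature.
# Data analysis (quantitative): the thresholds below are
# hypothetical, standing in for values fitted from plant history.
EFFICIENCY_FLOOR = 0.82    # normal thermal efficiency rarely drops below this
FLUE_TEMP_CEILING = 180.0  # deg C; normal flue gas rarely exceeds this

def chimney_blocked(thermal_efficiency: float, flue_gas_temp: float) -> bool:
    """Comprehensive judgment: flag a blockage only when both
    symptoms named by the human expert are present."""
    return (thermal_efficiency < EFFICIENCY_FLOOR
            and flue_gas_temp > FLUE_TEMP_CEILING)

print(chimney_blocked(0.79, 195.0))  # both symptoms present -> True
print(chimney_blocked(0.85, 150.0))  # normal operation -> False
```

Requiring both symptoms, rather than either one alone, reflects the “comprehensive judgment” in the text: a single anomalous reading could have many other causes.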