Not all computer users know what cache memory is, even though it is used by virtually every processor manufacturer and software developer. Some users who have only recently begun to master a personal computer complain on online forums about the poor performance of their machines. If five seconds pass between clicking an office program's shortcut and the appearance of its window, that is seen as an eternity; the 10-15 seconds needed to boot the operating system from a conventional hard disk are called a waste of time. Yet just a dozen years ago, launching a program could take almost half a minute, and that was considered fast. One thing is obvious: computer performance has increased dramatically, and the processor cache has played a significant part in that.
The memory modules used in computers are based on DRAM (dynamic random access memory) technology. Its hallmarks are low cost, high reliability and... relatively low speed. DRAM was already in use ten years ago, though in even slower variants. Back then, access to memory cells took about 200 nanoseconds; today that figure has dropped below 20 ns. It would seem the speed should be simply fantastic. However, processor bus bandwidth grew in parallel with DRAM improvements, so the overall ratio changed far less than it might have. This brings us to the question of what cache memory is. How can the performance of the memory subsystem be increased? The obvious answer is to replace the aging DRAM with something more advanced. But Intel's ill-fated experience with expensive Rambus (RDRAM) modules showed that a replacement must not significantly increase the total cost of the system.
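To put the remaining gap in perspective, here is a quick back-of-the-envelope calculation. The 3 GHz clock frequency is an assumed, typical value for illustration; only the 200 ns and 20 ns figures come from the text above.

```python
# Rough illustration: even a 20 ns DRAM access is long on a CPU's time scale.
dram_latency_ns = 20    # modern DRAM access time cited above
old_latency_ns = 200    # access time from roughly a decade earlier
cpu_clock_ghz = 3.0     # assumed CPU clock, for illustration only

cycle_ns = 1.0 / cpu_clock_ghz             # one clock cycle ~ 0.33 ns
stall_cycles = dram_latency_ns / cycle_ns  # cycles spent waiting on one access

print(f"DRAM got {old_latency_ns // dram_latency_ns}x faster, "
      f"but one access still costs about {stall_cycles:.0f} CPU cycles")
```

At an assumed 3 GHz, a single 20 ns memory access still stalls the processor for roughly 60 cycles, which is why a fast buffer in front of DRAM pays off.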
Without that cost constraint, no one would ever have asked what cache memory is, because such a mechanism would make little sense: it would be enough to replace DRAM with the faster SRAM (static random access memory) and the problem would be solved. But that would entail a significant increase in price. So a compromise was proposed, and it proved so successful that it has been in use since the days of the 80286. To raise system performance, blocks of high-speed memory are placed between the relatively slow RAM modules and the fast processor. Compared with the number of DRAM cells, their capacity is tiny, from 8 KB (the first level, L1) to tens of megabytes (level L3). A special controller passes the bidirectional data stream through itself and copies part of it into the fast memory. On subsequent processor requests to DRAM, the controller checks whether the needed data is already "in stock"; if it is, the data is delivered to the processor from the cache. As you can see, the principle of operation is quite simple. The difficulties lie in the implementation: developers must decide which data to duplicate, how to keep it up to date, how to maximize the hit rate, and so on. That is a very broad topic, so further details are best sought in specialized sources.
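The controller's behavior can be sketched in a few lines of Python. This is a toy model, not how hardware actually works: the backing "RAM" is a dictionary, and the replacement policy is least-recently-used (LRU), chosen here purely for illustration; real CPU caches use hardware-specific schemes.

```python
from collections import OrderedDict

class ToyCache:
    """A small, fast buffer in front of slow 'RAM', with LRU eviction."""

    def __init__(self, ram, capacity=4):
        self.ram = ram              # the slow backing store (a dict here)
        self.capacity = capacity    # cache is tiny relative to RAM
        self.lines = OrderedDict()  # address -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, address):
        if address in self.lines:              # data found "in stock"
            self.hits += 1
            self.lines.move_to_end(address)    # mark as recently used
            return self.lines[address]
        self.misses += 1                       # miss: fetch from slow RAM
        data = self.ram[address]
        if len(self.lines) >= self.capacity:   # full? evict the LRU line
            self.lines.popitem(last=False)
        self.lines[address] = data             # copy into the fast buffer
        return data

ram = {addr: addr * 10 for addr in range(100)}
cache = ToyCache(ram)
for addr in [1, 2, 1, 1, 3, 2]:   # repeated addresses hit the cache
    cache.read(addr)
print(cache.hits, cache.misses)   # prints: 3 3
```

Repeated accesses to the same addresses are served from the buffer, which is exactly the effect that makes real caches worthwhile.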
Thus, the answer to the question of what cache memory is can be formulated as follows: a cache is a buffer into which the controller writes, and from which it reads, copies of data, thereby increasing the performance of the memory subsystem.
And what is a cache in network applications? A browser uses the same idea when you surf the Internet. The first time you open a page, many of its elements (images, data) are saved to a folder on disk; on subsequent visits they are not downloaded again over the slow network but taken from that folder, speeding up browsing. The acceleration is especially noticeable on slow Internet connections.
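The same check-the-folder-first logic can be sketched as follows. This is a minimal illustration, not a real browser's cache: `slow_download` is a stand-in for an actual network fetch, and the function and folder names are assumptions of this example.

```python
import hashlib
import pathlib
import tempfile

# A throwaway folder standing in for the browser's cache directory.
CACHE_DIR = pathlib.Path(tempfile.mkdtemp())

def slow_download(url):
    """Placeholder for a real (slow) network request."""
    return f"contents of {url}".encode()

def fetch(url):
    key = hashlib.sha256(url.encode()).hexdigest()  # filename-safe cache key
    cached = CACHE_DIR / key
    if cached.exists():              # cache hit: read from the local folder
        return cached.read_bytes()
    data = slow_download(url)        # cache miss: go out to the "network"
    cached.write_bytes(data)         # save a copy for next time
    return data

first = fetch("https://example.com/logo.png")   # "downloaded"
second = fetch("https://example.com/logo.png")  # served from disk
```

The second call never touches the network at all, which is why cached pages open noticeably faster on a slow connection.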