Modern database management systems are used almost everywhere, yet far from everyone understands what they are or what functions a DBMS performs. These tools offer an enormous range of capabilities, so to use them effectively you should understand what they can do and how those capabilities benefit the user.
Data management
First of all, the functions of a DBMS include managing data in external memory. This means providing the basic external-memory structures needed not only to store the data that belongs directly to the database, but also to perform various service tasks, such as speeding up access to files in various scenarios. Some implementations actively use the facilities of an existing file system, while others work directly at the level of external storage devices. Notably, in a well-developed DBMS the user is never told whether a file system is used at all or, if so, how the files are organized; in particular, the system maintains its own naming scheme for the objects stored in the database.
RAM buffer management
In the overwhelming majority of cases, a DBMS works with fairly large databases — often much larger than the available RAM. Naturally, if every access to a data element required an exchange with external memory, the whole system would run only as fast as that external memory, so practically the only way to increase real performance is to buffer data in RAM. Even when the operating system provides system-wide buffering, as UNIX does, this is not enough for the DBMS, because the DBMS knows far more about the usefulness of buffering each specific part of the database. For this reason, mature systems maintain their own set of buffers together with their own replacement discipline.
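The idea of a DBMS-managed buffer pool with its own replacement discipline can be sketched as follows. This is a minimal illustration, assuming an LRU (least-recently-used) policy and a `read_page` callback standing in for external memory; all names here are invented for the example.

```python
# A minimal sketch of a DBMS-managed buffer pool with LRU replacement;
# pages are fetched from "external memory" via a read_page callback.
from collections import OrderedDict

class BufferPool:
    def __init__(self, capacity, read_page):
        self.capacity = capacity
        self.read_page = read_page      # fetches a page from external memory
        self.pages = OrderedDict()      # page id -> contents, kept in LRU order

    def get(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)   # hit: mark as most recently used
            return self.pages[page_id]
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)    # evict the least recently used page
        self.pages[page_id] = self.read_page(page_id)
        return self.pages[page_id]
```

A real DBMS would refine the policy per database area (for example, pinning index pages longer than data pages), which is exactly the knowledge a general-purpose OS buffer cache lacks.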
It is worth noting that there is a separate line of development: systems designed to keep the entire database permanently resident in RAM. This direction is based on the assumption that RAM sizes will soon grow large enough that buffering ceases to be a concern, and this is where the main functions of such a DBMS come into play. At the moment, this work remains at the experimental stage.
Transaction management
A transaction is a sequence of operations on the database that the management system treats as a single whole. If the transaction completes successfully, the system makes the changes it performed permanent in external memory; otherwise, none of those changes affect the state of the database at all. This all-or-nothing behavior is required to maintain the logical integrity of the database. Note that a correctly working transaction mechanism is necessary even in single-user DBMSs, whose purpose and functions otherwise differ significantly from other kinds of systems.

The property that every transaction begins with the database in a consistent state and leaves it in a consistent state on completion makes the transaction extremely convenient as a unit of activity against the database. When the management system handles concurrently executing transactions properly, each individual user can, in principle, feel like the sole owner of the database. This is a somewhat idealized picture: in many situations, users of a multi-user system will still notice the presence of their colleagues, but that, too, is part of what a DBMS is. In multi-user DBMSs, transaction management is also tied to concepts such as the serial execution plan and serialization.
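The all-or-nothing property described above can be demonstrated concretely. This is a minimal sketch using SQLite as a stand-in for a full DBMS; the table, column names, and the simulated failure are all illustrative.

```python
# A minimal sketch of transaction atomicity: a transfer either takes
# full effect or leaves no trace at all.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100), (2, 0)])
conn.commit()

try:
    # Two updates that must succeed or fail as a single whole.
    conn.execute("UPDATE account SET balance = balance - 30 WHERE id = 1")
    conn.execute("UPDATE account SET balance = balance + 30 WHERE id = 2")
    raise RuntimeError("simulated failure before commit")
    conn.commit()  # never reached in this run
except RuntimeError:
    conn.rollback()  # the half-done transfer is undone completely

balances = dict(conn.execute("SELECT id, balance FROM account"))
```

Because the failure strikes before the commit, the rollback restores both balances to their original values, preserving the logical integrity of the database.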
What do they mean?
Serialization of concurrently executing transactions means constructing a special plan for their work in which the total effect of the mix of transactions is equivalent to the result of executing them one after another.
A serial execution plan is a defined structure of actions that leads to serialization. Of course, if the system succeeds in providing a truly serializable execution of a mix of transactions, then for any user who starts a transaction, the presence of the others is completely invisible — except that everything runs a little more slowly than in single-user mode.
There are several basic serialization algorithms. In centralized systems, the most popular today are algorithms based on synchronization locks on database objects. Whatever serialization algorithm is used, conflicts are possible between two or more transactions accessing the same database objects. In that situation, to preserve serializability, one or more transactions must be rolled back — that is, all changes they made to the database are undone. This is one of the situations in which a user of a multi-user system does feel the presence of others.
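The synchronization-lock approach can be sketched with a tiny lock table. This is an illustration only, assuming two classic lock modes — shared ("S") for reading and exclusive ("X") for writing — and reporting a conflict instead of blocking; all class and method names are invented for the example.

```python
# A minimal sketch of a lock table for synchronization locks on
# database objects; a real DBMS would queue waiters and detect deadlocks.
class LockConflict(Exception):
    pass

class LockTable:
    def __init__(self):
        # object id -> (mode, set of transaction ids holding the lock)
        self.locks = {}

    def acquire(self, txn, obj, mode):
        held = self.locks.get(obj)
        if held is None:
            self.locks[obj] = (mode, {txn})
        elif mode == "S" and held[0] == "S":
            held[1].add(txn)                  # shared locks are compatible
        elif held[1] == {txn}:
            # Sole holder: allow re-acquire or upgrade from S to X.
            self.locks[obj] = ("X" if mode == "X" else held[0], {txn})
        else:
            raise LockConflict(f"{txn} blocked on {obj}")

    def release_all(self, txn):
        for obj in list(self.locks):
            mode, holders = self.locks[obj]
            holders.discard(txn)
            if not holders:
                del self.locks[obj]
```

In this scheme, the conflict raised when two transactions want incompatible locks on the same object is exactly the point at which a real system would either make one transaction wait or roll it back.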
Logging
One of the main requirements for modern systems is reliable storage of data in external memory. In particular, this means the main DBMS functions must include the ability to restore the last consistent state of the database after any software or hardware failure. Two kinds of hardware failure are usually considered:
- soft failures, which can be interpreted as an unexpected shutdown of the computer (the most common case being an emergency power loss);
- hard failures, characterized by partial or complete loss of the data stored on external media.
Examples of software failures include a crash of the system itself when trying to use some feature outside the core DBMS functions, or a crash of a user program that leaves a transaction unfinished. The first case can be treated as a special kind of soft failure; in the second, only the effects of the single unfinished transaction need to be eliminated.
Of course, in either case recovering the database requires some amount of additional information. In other words, reliable data storage requires redundancy, and the redundant data used during recovery must itself be stored especially reliably. The most common way of maintaining such redundant data is a change log.
What is it and how is it used?
The log is a special part of the database that is not accessible through the normal DBMS functions and is maintained with particular care. In some cases, two copies of the log are even kept simultaneously on different physical media. The log receives records of every change made to the main part of the database, and different systems log changes at different levels: in some, a log record corresponds to a single logical modification operation; in others, to a minimal internal operation that modifies a page of external memory; and some DBMSs combine the two approaches.
In every case, the log follows the so-called write-ahead strategy: the record describing a change to any database object reaches the external memory of the log before the changed object itself does. It is known that if the DBMS correctly follows this protocol, the log is sufficient to solve any problem of restoring the database after any kind of failure.
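The write-ahead rule can be sketched in a few lines. This is a deliberately simplified illustration, assuming a single JSON data file and an append-only text log; the file names and record format are invented for the example, and a real DBMS works with pages, not whole files.

```python
# A minimal sketch of the write-ahead strategy: the log record is
# forced to stable storage before the object itself is modified.
import json, os, tempfile

DIR = tempfile.mkdtemp()
LOG = os.path.join(DIR, "wal.log")
DATA = os.path.join(DIR, "data.json")

def load():
    return json.load(open(DATA)) if os.path.exists(DATA) else {}

def write(key, value):
    db = load()
    # 1. The log record reaches external memory first...
    with open(LOG, "a") as log:
        log.write(json.dumps(
            {"op": "set", "key": key, "old": db.get(key), "new": value}) + "\n")
        log.flush()
        os.fsync(log.fileno())
    # 2. ...and only then is the object itself written.
    db[key] = value
    with open(DATA, "w") as f:
        json.dump(db, f)
```

Because each record stores the old value alongside the new one, a crash between step 1 and step 2 leaves enough information in the log to either redo or undo the change.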
Rollback
The simplest recovery situation is rolling back an individual transaction. Strictly speaking, this does not require the system-wide change log: it would be enough to keep a local log of modification operations for each transaction and roll the transaction back by performing the inverse operations, starting from the end of that log. Some DBMS designs do use exactly such a structure, but in most cases local logs are not maintained; instead, an individual rollback — even of a single transaction — uses the system-wide log, in which all records of each transaction are linked into a backward list.
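Rolling back one transaction from a shared log amounts to walking its records backwards and applying the inverse of each operation. The sketch below assumes each log record stores the old value; the record format is invented for the illustration.

```python
# A minimal sketch of individual transaction rollback: walk the
# transaction's log records from the end, restoring old values.
def rollback(db, log, txn_id):
    for rec in reversed([r for r in log if r["txn"] == txn_id]):
        if rec["old"] is None:
            db.pop(rec["key"], None)     # inverse of an insert is a delete
        else:
            db[rec["key"]] = rec["old"]  # inverse of an update restores the old value
```

Filtering by transaction id here plays the role of the backward list that links each transaction's records inside the system-wide log.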

In the event of a soft failure, the external memory of the database may contain objects modified by transactions that had not completed at the moment of the failure, and may lack objects modified by transactions that committed successfully before it — because the contents of the RAM buffers are lost completely when such a failure occurs. If the write-ahead protocol is respected, however, log records describing the modification of every such object are guaranteed to be present in external memory.
The main goal of recovery after a soft failure is to bring the external memory of the main database to the state it would have if the changes of all committed transactions were present in external memory and no traces of uncommitted ones remained. To achieve this, the DBMS rolls back the uncommitted transactions and replays those operations whose results never reached external memory. This process involves a large number of subtleties, mostly concerning the coordinated management of the log and the buffers.
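The two halves of soft-failure recovery — replaying committed work and undoing uncommitted work — can be sketched as a redo pass followed by a backward undo pass. This is an illustration only: it assumes log records carry both old and new values and that the set of committed transaction ids is known, which real systems derive from commit records in the log.

```python
# A minimal sketch of soft-failure recovery: redo committed changes,
# then undo (in reverse order) everything from uncommitted transactions.
def recover(db, log, committed):
    # Redo pass: reapply every change of a committed transaction, in case
    # its result never left the RAM buffers before the failure.
    for rec in log:
        if rec["txn"] in committed:
            db[rec["key"]] = rec["new"]
    # Undo pass: walk backwards, removing effects of uncommitted transactions.
    for rec in reversed(log):
        if rec["txn"] not in committed:
            if rec["old"] is None:
                db.pop(rec["key"], None)
            else:
                db[rec["key"]] = rec["old"]
    return db
```

The ordering matters: redo runs forward so later committed writes win, while undo runs backward so each uncommitted change is reversed on top of the state it produced.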
Hard crashes
To restore the database after a hard failure, not only the log is used but also an archive copy of the database. The archive copy is a complete copy of the database made at the moment the log was started. Of course, normal recovery requires the log itself to be intact, which is why, as mentioned earlier, such strict requirements are placed on how it is preserved in external memory. Recovery then consists of replaying, against the archive copy, all transactions that had committed by the time of the failure. In principle, the work of unfinished transactions could even be reproduced and continued after recovery completes, but most real systems do not do this, because recovery from a hard failure is in itself a rather lengthy procedure.
Language support
Various languages are used to work with modern databases. Early DBMSs, whose purpose and functions differed significantly from modern systems, supported several highly specialized languages — chiefly SDL and DML, designed to define the database schema and to manipulate data, respectively.
SDL was used to define the logical structure of the database, that is, the specific database structure presented to users. DML, in turn, comprised a whole set of data-manipulation operators that allowed data to be entered into the database and existing data to be deleted, modified, or retrieved.
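In modern SQL, both roles survive as the DDL and DML parts of one language. The sketch below illustrates the division of labor using SQLite; the schema and data are invented for the example.

```python
# A minimal sketch of the two roles in one language: schema definition
# (what SDL did) and data manipulation (what DML did).
import sqlite3

conn = sqlite3.connect(":memory:")

# Schema definition: describe the logical structure presented to users.
conn.execute(
    "CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

# Data manipulation: enter, modify, delete, and read data.
conn.execute("INSERT INTO employee VALUES (1, 'Ivanov', 'sales')")
conn.execute("UPDATE employee SET dept = 'support' WHERE id = 1")
rows = conn.execute("SELECT name, dept FROM employee").fetchall()
```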
Modern DBMS functions include support for a single integrated language that provides all the facilities needed to work with a database — from its initial creation onward — and offers a standard user interface. The standard language providing the basic functions of today's most common relational DBMSs is SQL.
What is it like?
First of all, this language combines the basic functions of DML and SDL: it makes it possible both to define the specific semantics of a relational database and to manipulate the necessary data. Naming of database objects is supported directly at the language level, in the sense that the compiler translates object names into their internal identifiers using specially maintained service catalog tables. The core of the system, in principle, never deals with tables or their individual columns by name.
The SQL language also includes a whole set of facilities for defining database integrity constraints. Such constraints, too, are stored in special catalog tables, and integrity is enforced directly at the language level: while compiling individual database modification statements, the compiler, relying on the integrity constraints defined in the database, generates the corresponding program code.
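A concrete example of a language-level integrity constraint is a CHECK clause. The sketch below uses SQLite as a stand-in for a full SQL DBMS; the table and column names are invented for the illustration.

```python
# A minimal sketch of declaring an integrity constraint and seeing the
# system itself reject a statement that would violate it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE account (
        id      INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)  -- integrity constraint
    )
""")
conn.execute("INSERT INTO account (id, balance) VALUES (1, 100)")

# The update below would drive the balance negative, so the system
# rejects it before it can corrupt the data.
rejected = False
try:
    conn.execute("UPDATE account SET balance = -50 WHERE id = 1")
except sqlite3.IntegrityError:
    rejected = True
```

The application never has to re-check the rule itself: once the constraint is declared in the schema, every modification statement is validated against it.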