Quicksort as a programming method

In 1960, C.A.R. Hoare developed a method for quickly sorting data that went on to become the best known. Today it is widely used in programming because it has many positive properties: it handles the general case, requires only a small amount of additional memory, works with different kinds of lists, and is convenient to implement. But quicksort also has drawbacks: implementations are error-prone, and the sort is not stable (equal elements may end up reordered).

Nevertheless, it is the most thoroughly studied sorting method. After Hoare's first publications appeared, many researchers took up its close study. A large body of theory on estimating its running time was built up and backed by empirical data, and real proposals appeared for improving the basic algorithm and increasing its speed.

Quicksort is very common; you can meet it everywhere. It underlies the TList.Sort method, which exists in all versions of Delphi (except version 1), and the run-time library function qsort in C/C++.
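As an illustration, the library qsort mentioned above is called with a user-supplied comparison function. A minimal sketch (the comparator name cmp_int is ours, not from the original):

```cpp
#include <cassert>
#include <cstdlib>

// qsort takes the array, its length, the element size and a comparator
// that returns negative, zero or positive for <, ==, > respectively.
static int cmp_int(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}
```

Calling `std::qsort(v, n, sizeof(int), cmp_int)` then sorts the n integers in v in ascending order.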

The basic principle of its operation can be summed up as "divide and conquer". The list is broken into two groups, and each part is sorted on its own. It follows that most of the attention goes to the partitioning step, during which the following happens: a base element is chosen, and the whole list is rearranged relative to it. A group of candidates whose values are smaller forms on the left; all the others are moved to the right. As a result, the base element ends up at its rightful place in the sorted list. The next step is to call the sorting function recursively for both sides of the list relative to the base. The process ends when a part contains only one element, that is, when it is already sorted. Thus, to master a programming technique such as quicksort, you need to understand two lower-level algorithms: a) the choice of the base element; b) the most efficient rearrangement of the list into two sets of smaller and larger values.
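The scheme above, where the base element lands at its final position and the recursion stops on one-element parts, can be sketched as follows. This is a minimal illustration under our own naming (quicksort, partition); it is not presented as Hoare's original code:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Partition a[lo..hi] around the last element: smaller values are
// collected on the left, and the base element is placed at its final
// position p, which is then returned.
int partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[hi];
    int p = lo;
    for (int i = lo; i < hi; ++i)
        if (a[i] < pivot) std::swap(a[i], a[p++]);
    std::swap(a[p], a[hi]);  // base element now at its rightful place
    return p;
}

// Recursive driver: a part of one element (lo >= hi) is already sorted.
void quicksort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int p = partition(a, lo, hi);
    quicksort(a, lo, p - 1);   // left part: smaller values
    quicksort(a, p + 1, hi);   // right part: the rest
}
```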

Let us look at the first of these. When choosing the base element, ideally the median of the list should be selected: the partition then splits the list into two equal halves. But computing the median of a list is expensive, so even the fastest implementations sidestep this calculation. Choosing the element with the maximum or minimum value is not a good option either: with such a choice, one of the resulting parts is guaranteed to be empty while the other holds everything else. Hence the conclusion: the base element should be one that lies closer to the average and farther from the maximum and minimum.
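The text does not name a concrete rule, but a widely used compromise consistent with this advice is "median of three": take as the base the middle value among the first, middle and last elements. A sketch (the helper name choose_pivot is ours):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Median-of-three: returns the middle value of a[lo], a[mid] and a[hi],
// which avoids the degenerate empty/overflowing split on sorted input.
int choose_pivot(const std::vector<int>& a, int lo, int hi) {
    int mid = lo + (hi - lo) / 2;
    int x = a[lo], y = a[mid], z = a[hi];
    return std::max(std::min(x, y), std::min(std::max(x, y), z));
}
```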

Once the base element is chosen, we can move on to the partitioning algorithm, the so-called inner loops of quicksort. Everything is built on two fast-moving indices: the first walks through the elements from left to right, the second, vice versa, from right to left. Scanning starts on the right: the index walks through the list and compares each value with the base one. The scan stops when an element less than or equal to the base is found; with each comparison the index decreases. On the left side, the scan stops when a greater or equal value is found; here the index increases with each comparison.

At this stage of the partitioning algorithm that quicksort contains, two situations can arise. The first is that the left index is still less than the right one. This indicates that the elements the indices point to are in the wrong order in the list; the way out is to exchange their places. The second situation is when the two indices are equal or have crossed. This indicates a successful division of the list, and the partition can be considered complete.
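The two scanning indices and the two outcomes just described correspond to Hoare's classic partition scheme. A sketch, with names of our choosing (hoare_partition):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Two indices scan toward each other; pairs in the wrong order are
// exchanged. The loop ends when the indices meet or cross, and j is
// returned so that a[lo..j] holds the smaller-or-equal group and
// a[j+1..hi] holds the rest.
int hoare_partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[lo + (hi - lo) / 2];  // base element
    int i = lo - 1, j = hi + 1;
    while (true) {
        do { ++i; } while (a[i] < pivot);  // left scan: stop at value >= base
        do { --j; } while (a[j] > pivot);  // right scan: stop at value <= base
        if (i >= j) return j;              // indices equal or crossed: done
        std::swap(a[i], a[j]);             // wrong order: exchange places
    }
}
```

Note that here, unlike in the scheme where the base element lands at its final position, the returned index only separates the two groups; the base element itself may end up on either side.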