Website loading speed optimization

Site loading speed affects a site's ranking in search engines both directly and indirectly. Search engine algorithms favor fast sites, and users prefer them too, especially when they are also optimized for mobile devices. It should be clarified here that site speed is evaluated not only on desktop computers but also on smartphones and tablets.

A site may need a general speed overhaul (when pages load sluggishly even on a PC), or it may need reworking only for mobile devices. A page that renders quickly on a desktop computer often loads noticeably slower on a small mobile device, for the obvious reason that the hardware differs. There are several tactics for speeding up website loading:

  • reducing the volume of the loaded page;

  • reducing the share of graphic content on the site;

  • reducing the number of browser requests;

  • effective caching;

  • minifying the CSS and JavaScript code.

In fact, many of the operations above are forms of page shrinking (caching, removing graphics, minifying CSS and JavaScript). One could say that optimizing website load speed is 80-90% a matter of reducing page volume. This is hardly surprising, because it is the "weight" of the page that is the determining factor in loading speed. While some measures, such as reducing the number of images on a page, are fairly simple, others deserve detailed consideration.
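
As a quick way to see what a page actually "weighs" over the wire, here is a minimal Python sketch (the URL is a placeholder to substitute with your own page) that fetches a page and reports the transferred size and whether the server compressed the response:

    import gzip
    import urllib.request

    # Placeholder URL; substitute the page you want to measure.
    url = "https://example.com/"

    # Ask the server for a gzip-compressed response.
    request = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(request) as response:
        body = response.read()
        encoding = response.headers.get("Content-Encoding", "none")

    print(f"Transferred: {len(body)} bytes (Content-Encoding: {encoding})")
    if encoding == "gzip":
        # urllib does not decompress automatically, so body is the raw payload.
        print(f"Uncompressed: {len(gzip.decompress(body))} bytes")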

Reducing the page weight

For this, the gzip utility (GNU Zip) is used, which is based on the Deflate algorithm. Deflate is a lossless compression algorithm: after compression, the reverse recovery restores the file 100%, down to the last bit. While micro-losses invisible to the human eye or ear are acceptable for graphic and audio files, for files containing source code or system data the loss of even a couple of characters may mean that the program instructions no longer perform their functions. This is why lossless compression is so important here.
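
A minimal sketch with Python's standard gzip module illustrates the lossless property described above: the roundtrip restores the original data down to the last bit.

    import gzip

    # Any text-like payload: HTML, CSS, JavaScript source, etc.
    original = b"<html>" + b"<p>Hello, world!</p>" * 500 + b"</html>"

    compressed = gzip.compress(original)    # Deflate-based compression
    restored = gzip.decompress(compressed)  # reverse recovery

    print(f"original:   {len(original)} bytes")
    print(f"compressed: {len(compressed)} bytes")
    assert restored == original  # restored 100%, down to the last bit

Repetitive markup like this compresses extremely well, which is why enabling gzip on a web server is one of the cheapest speed optimizations available.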

Such compression, and the arrangement of information in general, is closely related to the Dirichlet principle (the "pigeonhole principle") in combinatorics: if there are more rabbits than cages, then at least one cage must contain more than one rabbit; and if there are fewer rabbits than cages, then at least one cage must remain empty.

The proof of the theorem is very easy and amounts to comparing the two counts (cages and rabbits). In either case there is a remainder (an extra rabbit or an empty cage), which is exactly the stated claim.

Moreover, only the "at least one" part of the statement matters (at least one cage will remain free, or at least one cage will hold more than one rabbit), since the rabbits can be arranged in many ways. For example, take 7 rabbits and 5 cages. You can put 2 rabbits in each of 2 cages and 1 rabbit in each of the remaining 3. You can put 3 rabbits in one cage and 1 in each of the rest. Or you can put all seven rabbits in one cage, leaving all the others free. Using the binomial coefficient from combinatorics, n! / ((n − k)! · k!), where n is the number of rabbits, k is the number of cages, and "!" denotes the factorial (e.g. 5! = 1 · 2 · 3 · 4 · 5 = 120), we get 7! / (5! · 2!) = 21 options for placing 7 rabbits in 5 cages. And in all of them the Dirichlet principle will be observed.
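
The same computation in Python, using the standard math.comb for the binomial coefficient:

    import math

    n, k = 7, 5  # n rabbits, k cages

    # Binomial coefficient: n! / ((n - k)! * k!)
    options = math.comb(n, k)
    print(options)  # 21

    # The same value written out with factorials:
    assert options == math.factorial(n) // (math.factorial(n - k) * math.factorial(k))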

Mathematically, Dirichlet’s principle can be stated as follows: there is no injection from a larger set into a smaller one (that is, the rule "each rabbit in its own cage" cannot be satisfied if there are more rabbits than cages). When working with electronic storage media, the Dirichlet principle is one of the fundamentals. In particular, for data compression: if there are fewer memory "cells" than the initial amount of information (in bytes), then at least one cell must hold more bytes than the "norm". However, in computer science, data compression does not mean simply tamping large amounts of information into a limited sector of memory. It is useful here to consider how the Huffman prefix code, which is used in the Deflate algorithm, works.

As you know, a computer ultimately works exclusively with binary code, and programming languages are tools through which a human user manipulates that binary code, thereby giving commands to the computer. Suppose you want to compress a piece of binary code in which zeros greatly outnumber ones (that is, in the source file the symbol 0 is far more frequent than 1). A prefix code exploits this skew by assigning shorter codewords to the more frequent symbols; for example, a frequent pair 00 can be encoded by a single, shorter codeword, shrinking the file without losing the information it carries.
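
A compact sketch of this idea in Python (using the standard heapq module; the input string is an arbitrary example, and this illustrates Huffman coding in general, not the exact table construction Deflate performs):

    import heapq
    from collections import Counter

    def huffman_codes(text: str) -> dict[str, str]:
        """Build a Huffman prefix code: frequent symbols get shorter codewords.

        Assumes at least two distinct symbols in the input.
        """
        # Heap entries: (subtree frequency, tiebreaker, {symbol: codeword so far}).
        heap = [(freq, i, {sym: ""})
                for i, (sym, freq) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        tiebreak = len(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)   # two least frequent subtrees
            f2, _, right = heapq.heappop(heap)
            # Prefix the codewords of the two merged subtrees with 0 and 1.
            merged = {s: "0" + c for s, c in left.items()}
            merged.update({s: "1" + c for s, c in right.items()})
            heapq.heappush(heap, (f1 + f2, tiebreak, merged))
            tiebreak += 1
        return heap[0][2]

    text = "abracadabra"
    codes = huffman_codes(text)
    print(codes)  # 'a' (the most frequent symbol) gets the shortest codeword
    encoded = "".join(codes[ch] for ch in text)
    print(len(encoded), "bits instead of", 8 * len(text), "at one byte per symbol")

Running this on "abracadabra" gives the most frequent symbol, 'a', a one-bit codeword while the rarer symbols get three bits each, so the encoded text takes 23 bits instead of 88.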
