Does freeing up disk space speed up your computer?
From the author: it looks like I accidentally wrote a whole book. Pour yourself a cup of coffee before you start reading.
> Does freeing up disk space speed up your computer?

Freeing up disk space does not speed up a computer, at least not by itself. It is a really common myth. The myth is so widespread because filling up your hard drive often happens at the same time as other processes that traditionally can* slow down your computer. SSD performance can degrade as the drive fills up, but that is a relatively new problem, specific to SSDs, and in practice barely noticeable to ordinary users. In general, a lack of free space is just a red herring (translator's note: i.e., a distraction).
Author's note: * "Slow down" is a term with a very broad interpretation. Here I use it for processes that are I/O-bound (that is, if your computer is doing pure computation, the contents of the disk have no effect), or CPU-bound and competing with something that consumes a lot of CPU resources (for example, an antivirus scanning a large number of files).
For example, things like:
**File fragmentation.** File fragmentation *is* a problem**, but lack of free space, while one of *many* contributing factors, is not the *only* cause of it. The key points:

Author's note: ** Fragmentation *does* affect SSDs, because sequential reads are usually much faster than random access, even though SSDs are not subject to the same restrictions as mechanical devices (and even then, freedom from fragmentation does not guarantee sequential access, because of wear leveling and similar processes). In practically any typical usage scenario, however, this is not a problem. Differences in SSD performance caused by fragmentation are usually invisible when launching applications, booting the computer, and so on.

The probability that a file becomes fragmented is not related to the amount of free disk space. It depends on the size of the largest contiguous block of free space on the disk (i.e., the "gaps" of free space), which the amount of free space merely *bounds from above*. It also depends on how the file system handles allocation (more on that later). A toy model illustrating the "largest gap" point follows this list of examples.

For example: if a disk is 95% full and all the remaining space is one single contiguous block, a new file will be fragmented with 0% probability (unless, of course, a normal file system goes out of its way to fragment files; translator's note). The probability that an *extended* file fragments likewise does not depend on the amount of free space. On the other hand, a disk that is only 5% full, with the data spread evenly across it, has a very high probability of fragmentation.

Note that file fragmentation affects performance *only when the fragmented files are accessed*. For example: you have a nice, defragmented disk with plenty of "gaps" on it. A typical situation; everything works fine. At some point, though, you run out of large free blocks. You download a big movie, and the file ends up heavily fragmented. *This does not slow down your computer.* Your application files and everything else that was in perfect order do not suddenly become fragmented. The movie may well take longer to load (although typical movie bitrates are so much lower than hard drive read speeds that this will most likely go unnoticed), and it may affect I/O performance while the movie is loading, but nothing else changes.

Finally, while fragmentation is a problem, it is often compensated for by caching and buffering in the operating system and the hardware. Delayed writes, read-ahead, and the like all help to mitigate the problems fragmentation causes. In general, *you will not notice anything* until fragmentation gets really bad (I would even venture to say that as long as your swap file is not fragmented, you will never notice anything).

**Search indexing** is another example. Suppose you have automatic indexing enabled and an operating system that does not implement it gracefully. As you store more and more indexed content on your computer (documents and the like), indexing takes longer and longer and may start to have a noticeable effect on the perceived performance of the computer while it runs, eating both I/O and CPU time. This is not related to free space; it is related to the amount of content being indexed. Running out of disk space, however, happens at the same time as storing more content, so many people infer the wrong relationship.

**Antivirus software.** Very similar to the search indexing example. Say you have antivirus software performing a background scan of your disk. As you accumulate more and more files to scan, the scan consumes more and more I/O and CPU resources, possibly interfering with your work. Again, the problem is related to the amount of content being scanned. More content often means less free space, but the lack of free space is not what causes the problem.

**Installed programs.** Suppose many of your programs run when the computer starts, which slows down boot time. The slowdown happens because a lot of programs are being loaded. At the same time, the installed programs take up disk space. So the amount of free space decreases at the same time as the computer slows down, which can lead to wrong conclusions.

There are many other similar examples which, taken together, give the *illusion* of a connection between exhausted disk space and degraded performance.
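To make the "largest gap" point concrete, here is a minimal sketch in Python. It models a disk as a hypothetical bitmap of blocks (1 = used, 0 = free); this is an illustration, not any real file system's on-disk structure:

```python
def largest_free_run(bitmap: list[int]) -> int:
    """Length of the longest run of free (0) blocks."""
    best = run = 0
    for bit in bitmap:
        run = run + 1 if bit == 0 else 0
        best = max(best, run)
    return best

# Disk A: 75% full, but all of the free space forms one contiguous gap.
disk_a = [1] * 12 + [0] * 4
# Disk B: only 25% full, but the data is spread evenly across it.
disk_b = [0, 0, 0, 1] * 4

print(largest_free_run(disk_a))  # 4 -> a 4-block file fits in one piece
print(largest_free_run(disk_b))  # 3 -> the same file must fragment
```

The nearly full disk can take a new 4-block file without fragmenting it, while the nearly empty one cannot: total free space only sets an upper bound on the gap size.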
The above also illustrates another reason this myth is so widespread: although running out of free space is not a direct cause of slowdown, uninstalling applications, removing indexed or scanned content, and so on sometimes (but not always; such cases are beyond the scope of this text) leads to an *increase* in performance, for reasons that have nothing to do with the amount of free space. Freeing disk space is just a natural side effect, and so a false link forms between "more free space" and "a faster computer".
> Does it have anything to do with searching for a place to put files?
No, it does not. There are two important points here:
1. Your hard drive does not search for a place to put your files.
2. Your operating system does not search for a place to put them either.

There is no "searching". File systems such as FAT32 (old DOS and Windows computers), NTFS (newer Windows systems), HFS+ (Mac), ext4 (some Linux systems), and many others take care of this. Even the concepts of a "file" and a "directory" ("folders"; translator's note) are merely inventions of a typical file system: hard drives know nothing about beasts called "files". The details are beyond the scope of this text. In essence, however, all common file systems include a way of tracking free disk space, so that "searching" for free space is, under normal circumstances (i.e., with the file system in a healthy state), unnecessary. Examples:
NTFS contains a master file table, which includes special files (for example, $Bitmap) and plenty of metadata describing the drive. In essence, it keeps track of where the free blocks are, so files can be written to disk without scanning the whole disk every time.

Another example: ext4 has what is called the "bitmap allocator", an improvement over ext2 and ext3, which helps it directly determine the position of free blocks instead of scanning a list of free blocks. ext4 also supports "delayed allocation", which is essentially the operating system buffering data in RAM before writing it to disk, in order to make a better placement decision and reduce fragmentation.

And there are many other examples.
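As an illustration of these two ideas, here is a minimal sketch in Python: a free-space bitmap that is consulted directly (a stand-in for structures like NTFS's $Bitmap), plus delayed allocation that buffers writes in memory so a single placement decision can be made at flush time. All names and sizes are invented for the example; this is not how NTFS or ext4 actually implement it:

```python
BLOCK = 4096  # invented block size for the sketch

def find_free_run(bitmap: list[int], n: int):
    """First index of n contiguous free (0) blocks, or None."""
    run = start = 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run == 0:
                start = i
            run += 1
            if run == n:
                return start
        else:
            run = 0
    return None

class DelayedAllocFile:
    """Buffers writes in RAM; blocks are chosen once, at flush time."""
    def __init__(self, bitmap: list[int]):
        self.bitmap = bitmap
        self.buffer = bytearray()

    def write(self, data: bytes) -> None:
        self.buffer += data                 # nothing is placed on disk yet

    def flush(self):
        n = (len(self.buffer) + BLOCK - 1) // BLOCK   # blocks needed
        start = find_free_run(self.bitmap, n)         # one direct lookup
        if start is not None:
            self.bitmap[start:start + n] = [1] * n    # allocate contiguously
        return start

f = DelayedAllocFile(bitmap=[1, 0, 0, 1, 0, 0, 0, 0, 1])
for _ in range(3):
    f.write(b"x" * BLOCK)   # three separate writes...
print(f.flush())            # ...one allocation: blocks 4-6, contiguous
```

Because the allocation is postponed until the file's full size is known, the three writes land in one three-block run instead of being placed one block at a time as they arrive.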
> Wouldn't it make sense to move files back and forth, to leave one long contiguous block of free space for saving new files?
No, that does not happen, at least not in any file system I know of. Files simply become fragmented.
The process of "moving files back and forth to create one long contiguous block" is called *defragmentation*. It does not happen when files are written. It happens when you run your disk defragmenter. On newer Windows systems, at least, this is done automatically on a schedule, but writing a file is never what triggers it.
Being able to *avoid* moving files around like this is crucial to file system performance, and it is the reason fragmentation happens at all, with defragmentation kept as a separate step.
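To make the distinction concrete, here is a deliberately naive compaction pass in Python. It only illustrates the concept; a real defragmenter must update file system metadata, cope with open or immovable files, and work incrementally:

```python
def defragment(disk: list):
    """Pack used blocks (labeled "<file><index>") to the front so the
    free space becomes one contiguous run; sorting the labels groups
    each file's blocks together, standing in for the real work of
    making each file contiguous as well."""
    used = sorted(b for b in disk if b is not None)
    return used + [None] * (len(disk) - len(used))

disk = ["b2", None, "a1", None, "b1", "a2", None]
print(defragment(disk))
# ['a1', 'a2', 'b1', 'b2', None, None, None]
```

Note how much data has to move even in this toy case: that is exactly the work file systems avoid at write time by tolerating fragmentation and deferring cleanup to a separate scheduled pass.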
> How much free space should I leave on the disk?
This is a trickier question, and I have already written a lot.
Some basic rules you can follow:
For all types of drives:
- Most importantly, leave enough free space *to use the computer effectively*. If you are running out of space, you may need a bigger disk.
- Many disk defragmentation utilities require a certain minimum of free space to work with (I believe the one that ships with Windows requires 15% free space in the worst case). They use this space as temporary storage for file fragments while everything is rearranged.
- Leave room for other operating system functions. For example, if your computer does not have much physical RAM and virtual memory is enabled with a dynamically sized paging file, leave enough space for the paging file to grow to its maximum size. If you have a laptop that you put into hibernation, you will need enough free space for the hibernation state file. Things like that.

As for SSDs:
For optimal reliability (and, to a lesser extent, performance), an SSD needs some free space which, without going into detail, is used to spread data across the drive so the same place is not written to over and over (which wears out the flash). The concept of reserving this space is called *over-provisioning*. It is important, but *on many SSDs the mandatory reserved space is already set aside*: the drive often contains several dozen gigabytes more flash than it exposes to the operating system. Cheaper drives often require you to leave part of the space unpartitioned yourself, but *on drives with forced over-provisioning this is not required*. An important note: the extra space is often taken only from unpartitioned regions, so simply leaving free space on a partition that occupies the whole disk *will not always work*. Manual over-provisioning requires you to make your partition smaller than the disk. Consult your SSD's user manual. TRIM, garbage collection, and similar mechanisms also have an effect, but they are beyond the scope of this text.
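As a rough worked example of built-in over-provisioning (the numbers are typical marketing figures, not any specific model): raw flash tends to come in binary gigabytes, while the advertised capacity is decimal, and the controller keeps the difference for itself:

```python
raw_nand   = 256 * 2**30   # 256 GiB of flash actually on the board
advertised = 240 * 10**9   # sold as a "240 GB" drive
spare = (raw_nand - advertised) / raw_nand
print(f"built-in over-provisioning: {spare:.1%}")   # ~12.7%
```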
Personally, I usually buy a new, larger drive when I have about 20-25% free space left. This has nothing to do with performance; it simply means that when I reach that point, the space will run out soon, and it is time to buy a new disk.
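If you want to keep an eye on that threshold, a quick check needs only Python's standard library (`shutil.disk_usage`; `"/"` is just an example mount point, use something like `"C:\\"` on Windows):

```python
import shutil

# disk_usage returns total, used, and free bytes for the given path
total, used, free = shutil.disk_usage("/")
print(f"{free / total:.0%} free ({free / 2**30:.1f} GiB)")
if free / total < 0.20:
    print("getting close, time to shop for a bigger disk")
```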
More important than watching free space is making sure that scheduled defragmentation is enabled where appropriate (not on SSDs), so that you never reach the point where fragmentation is bad enough to matter.
Afterword
There is one more thing worth mentioning. One of the other answers to this question mentions that half-duplex SATA makes it impossible to read and write at the same time. While true, this is a big oversimplification, and it is mostly unrelated to the performance issues discussed here. What it really means is that data cannot be transferred *over the wire* in two directions at the same time. However, the SATA specification includes tiny maximum block sizes (I believe around 8 kB per block on the wire), queues of read and write operations, and so on, and nothing prevents data from being written to a buffer while a read is in progress, or similar interleaved operations.
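A toy illustration of that last point in Python (the sizes are invented; real SATA uses frame information structures and native command queuing, which are more involved): because transfers are chopped into small frames and commands are queued, the two directions interleave at fine granularity instead of one whole operation blocking the other:

```python
from collections import deque

FRAME = 8 * 1024   # rough per-frame payload; invented for the sketch
queue = deque([("read", 32 * 1024), ("write", 32 * 1024)])

wire = []                              # order of frames on the cable
while queue:
    op, left = queue.popleft()
    wire.append(op)                    # one frame of this command goes out
    if left > FRAME:
        queue.append((op, left - FRAME))

print(wire)  # ['read', 'write', 'read', 'write', ...] -> interleaved
```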
Any blocking that does occur comes from contention for physical resources, which is usually mitigated by generous amounts of cache. The half-duplex mode of SATA has almost nothing to do with it.
Source: habrahabr.ru/post/256231/