Many web projects these days are built on some CMS, so the 'sore thumb' of system performance is frequently the speed of the database that stores the site's content.
There are several partial solutions to this problem: adding more memory, caching as aggressively as possible, tuning one parameter or another... but in the end, everything usually comes down to the disk.
On our own projects we have run into the same bottlenecks, occasionally seeing disk utilization close to 100% in iostat. The same goes for shared hosting servers: too many websites, too many MySQL queries.
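The %util figure that iostat reports is simply the time the device spent servicing I/O divided by the wall-clock sampling interval. A minimal sketch of that calculation (in Python, using hypothetical counter values rather than a real read of `/proc/diskstats`, whose field 13 holds the cumulative "time doing I/Os" in milliseconds):

```python
def disk_util_percent(busy_ms_start, busy_ms_end, interval_ms):
    """Approximate iostat's %util: the fraction of the sampling
    interval the disk was busy with requests, as a percentage."""
    busy = busy_ms_end - busy_ms_start
    return min(100.0, 100.0 * busy / interval_ms)

# Hypothetical counters sampled one second apart:
# the disk was busy for 950 ms out of a 1000 ms interval.
print(disk_util_percent(481200, 482150, 1000))
```

A value hovering near 100 over repeated samples is the symptom described above: the disk simply cannot keep up with the query load.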
The obvious solution is faster storage: either assemble a RAID array from several disks to spread the load among them, or switch to faster SSDs.
We chose the second option, and for more than a year now we have been using fast SSDs as the disk subsystem for our clients' databases.
In all that time we have had none of the SSD failures that are talked about so much. Everything runs stably, so we consider the approach justified, proven, and our default going forward.
That said, we still back up users' files automatically on schedule.
Best regards, Besthosting LLC.
Need help? Get in touch.