
How to configure RAID Systems – A complete walkthrough

31st August 2015 — by That IT Guy

If you haven’t done so yet, I recommend you read “RAID Systems – What are they and why on earth do we care?” first!

Is my computer RAID compatible?

The first thing we need to check is whether our computer can use Firmware RAID. The reason I specify Firmware RAID is that every computer with Windows 7 or newer (or certain Linux distros) and 2 or more hard drives can do Software RAID, as it does not depend on specific hardware. Hardware RAID implies you already have a dedicated RAID PCI/PCI-E card and you know how to use it (and even if you didn’t, it’d be pointless to try to write a guide, since every RAID card is different). If after the following you still have doubts about compatibility, I recommend you read your motherboard’s manual to find out whether it has RAID capability.

What follows is a list of compatible chipsets (covering the last 5-6 years) from Intel, AMD and VIA that come with RAID functionality (remember, these are chipsets, not motherboards; all common modern-day motherboards use a chipset manufactured by one of these 3 companies).

Intel Chipset

H87 – H97
X79 – X99
Z68 – Z77 – Z87 – Z97

Essentially, all chipsets except Hx1 and Bx5 have integrated Firmware RAID. For older models, please check your manual.

AMD Chipset

A58 – A78
A85X – A88X
970 – 990X – 990FX

Essentially, all chipsets except AM1 have integrated Firmware RAID. For older models, please check your manual.

VIA Chipset

VX(CX)700 – VX800(UT) – VX820(UT) – VX900 – VX900M
VT6410 – VT6420 – VT6421 – VT6421/A
VT8237 – VT8237R – VT8237R Plus – VT8237S – VT8237A – VT8251

Due to the nature of VIA systems, pretty much all chipsets have integrated Firmware RAID. For older models, please check your manual.

Unsure of what chipset you have?

If your first thought is “this list is useless to me as I have no idea what chipset I have”, do not panic. You will not have to sacrifice a goat in some ritualistic fashion in order to find out; I have a simple (and frankly, considerably less messy) solution! HWiNFO will detect what chipset you have, and you can download it right here!

Once installed, you’ll be able to identify your chipset (and see if it’s on the previously detailed list).


How do I enable RAID on my PC?

As mentioned in the previous article, there are 3 types of RAID. Because “Hardware RAID” (a dedicated PCI/PCI-E card) has hundreds of models and variations, I will not go into it, since each one has its own installation and configuration process and in some cases it is not even possible to use the resulting RAID array as an operating system boot drive. Because of this, in this guide I will only show you how to configure and use “Firmware RAID” and “Software RAID”.

Firmware RAID:

Now that we’ve determined that our motherboard is based around a RAID-compatible chipset, we can move forward with the configuration. Since most, if not all, of the chipsets mentioned (except VIA chipsets) have a UEFI BIOS rather than a classic BIOS, I will be using UEFI screenshots to show you where to go. That said, the terms tend to be the same, so if your compatible chipset is not UEFI enabled, you can configure RAID through the classic BIOS by simply looking for the same terms.

The first thing we need to do (and this is VERY important) is save all data from the hard drives (or SSDs) elsewhere, as drives that become part of a RAID array are wiped clean once the process is complete. Once we’re finished we can safely move the files back onto the resulting drive. Furthermore, if we’re building a RAID array where the operating system will live (for example, 2x SSDs in RAID 0 for an ultra-fast experience), we will need to reinstall Windows on it afterwards (obviously, since all files are deleted).

I’ll be using an ASUS motherboard for this guide. Note that all UEFIs differ a bit in design depending on the manufacturer, but the terms used tend to be the same, so don’t let the fact that yours looks different discourage you (yes, I’m sure there’s a joke in there somewhere). Of course, if you have any questions, feel free to leave a comment at the end of the article.


Step 1: Under the Advanced menu, set the SATA mode to [RAID Mode] (by default it will normally be in IDE or AHCI mode).

Step 2: Under the Boot/CSM section, set [Boot from Storage Device] from [Legacy OpROM First] to [UEFI Driver First], then press the “F10” key to save and exit.


Step 3: Upon the next POST, re-enter the BIOS. You will see the following differences in the BIOS options: for the X79 series (for example, Rampage IV), you will find the [Intel(R) Rapid Storage Technology] menu under the Advanced menu, which did not show up before.

Step 3a: For Z77 series (Maximus V), just press the right key from the [Tool] tab and you will reveal additional UEFI options.


Step 4: Now we choose the type of RAID we want.

Step 5: Select the settings. Generally speaking we want the biggest possible “Strip Size”, and we can, if we want to, leave some of the available space out of the RAID array to use as a traditional partition later on (though that doesn’t make much sense).


Step 6: …or delete them if we’ve made a mistake.

Step 7: After everything is done, save changes and exit. Your computer will now restart. If you’ve configured a RAID array that includes the drive where Windows lives, you can now proceed to reinstall Windows as you would in a normal hard drive scenario. If you’ve only set up RAID on storage drives, you can now start Windows, the resulting logical drive will appear as a new drive, and you’re all done.

I have UEFI, I see the IRST option but not the rest…

I have an Intel chipset that’s mentioned in the compatible list, I have UEFI and I see the “Intel Rapid Storage Technology” (or IRST) option, but I simply get the Enable or Disable options within it and nothing else, or I do not get any RAID creation options at all, just an option to enable it.

No problem, no need to panic. Many modern motherboards place RAID configuration outside of the UEFI, arguably so it doesn’t depend on us installing an operating system in UEFI mode (which older operating systems such as XP, Vista and, in some cases, 7 do not support, in which case we would ignore Step 2 in the visual guide above).

If this is the case, we activate (or enable) “IRST” within the UEFI, save changes and reboot. Just after the first screen where we see the manufacturer logo (or POST details, if you do not have that enabled) we will see a RAID status screen which will enumerate all drives (pay attention, as this screen only lasts a second or two!), at which point we need to press “Ctrl+I” to enter the configuration screen.

IRST Configuration

After repeatedly pressing “Ctrl+I” and entering the configuration menu, the first thing we will notice is that it’s not as pretty as the UEFI interface but rather looks more like an old MS-DOS program. You could say that this makes it simpler and straight to the point. The bottom of the screen tells us what keys to use to navigate the menu, and we will be able to create our RAID array from here; you’ll notice it offers basically the same options as the steps above, just not as pretty.

Once we’ve finished configuring our new RAID array, we save and reboot. At this point, if you’ve had to reinstall Windows (because you’re using your newly created RAID array as a boot device, where your operating system lives), we need to go back into the UEFI or BIOS and tell it to boot from our new RAID array (it will appear in the hard drive list just as if it was one big drive).

Software RAID:

In order to explain how to use Software RAID I’ll be using Windows 7 as the example. The process is pretty much the same on Windows 8, 8.1 and 10, because while these operating systems may look different, their control panel options regarding drive operations do not change.

Even though I mentioned it in the previous article, where I explain what RAID is and the types that are available, I will mention again that Software RAID only applies to storage drives: Windows must already be installed, since we need its control panel in order to set RAID up, and therefore only secondary drives can be added to the RAID array.

Software RAID is a decent option if we want redundancy for our files (aside from the fact that we should always keep our files on a secondary drive, so that if Windows gets corrupted beyond fixing and we need to reinstall, we will not have to worry about losing them) or more speed if we’re using these secondary drives as program or game installation locations, especially if our motherboard does not have a RAID-compatible chipset.

We should keep in mind that if Windows stops working due to corruption, a dead drive or any other reason, or we simply wish to reinstall and start fresh, we will not be able to access our RAID array, and therefore our files, straight away. HOWEVER, once our fresh new install of Windows is finished we can simply go to “Control Panel – System and Security – Administrative Tools – Computer Management – Disk Management” and it will automatically detect that we had a previous RAID array in Windows and reconfigure it for us without losing any files.

As you can see in the video, I show you how to setup RAID 1 (Mirror, for data redundancy). You’ll also notice other options and it’s entirely up to you which one to use (you can read the previous article for reference on what RAID to choose). For this to work, you’ll need 2 or more drives that have no partition (if yours do, simply select the partition and delete it, remember to move your files elsewhere first of course).
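If you prefer the command line to the Disk Management GUI, the same mirror can be created with Windows’ built-in diskpart tool. This is only a sketch of the session, not a script to paste blindly: the disk numbers 1 and 2, the volume label and the drive letter E are placeholders, so check the output of list disk first, and remember this wipes the selected disks.

```shell
REM Run diskpart from an elevated Command Prompt.
REM Disk numbers, label and drive letter below are placeholders.
diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> convert dynamic
DISKPART> select disk 2
DISKPART> convert dynamic
DISKPART> create volume mirror disk=1,2
DISKPART> format fs=ntfs quick label="MirrorData"
DISKPART> assign letter=E
DISKPART> exit
```

If a selected disk still holds partitions, you may need to delete them (or use clean, which erases the whole disk) before convert dynamic will succeed, which is the same “delete the partition first” step mentioned above.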

It’s worth mentioning that this converts the selected drives to Dynamic drives, which means we will not be able to install any operating system on them (in case you were thinking about installing a secondary system on them).

And that’s it! I hope you found this guide helpful and of course if you have any questions about it or simply can’t figure out how to do it on your computer, feel free to leave a comment below.


RAID Systems – What are they and why on earth do we care?

26th August 2015 — by That IT Guy


Lately I’ve been recommending RAID systems to most of my clients (of course, depending on their needs) in different flavours (some just need speed, some just need peace of mind and some are just greedy and want both).

Unfortunately, explaining WHY every single time can be somewhat tedious, as it’s not as easy as explaining why the latest Fantastic Four movie is awful in every possible way, and I wish the cinema had some sort of refund policy similar to what Steam has (if you don’t know what Steam is, it’s a digital distribution platform that sells you games. Yes, I do play games from time to time, I’m guilty).

So, I thought it’d be a good moment to explain what RAID Systems are and why on earth you should care, because after all, don’t we want to use the full potential of the PC we paid for?

RAID, What is it?

RAID (Redundant Array of Independent Disks) is a data storage virtualization technology that combines multiple drives (magnetic [HDD] or solid state [SSD]) into one single logical unit for the purpose of data redundancy (security), an increase in transfer rate (speed) or, in certain types of RAID, both.

How does this sorcery work?

Data is distributed across several drives in different ways, which are called RAID levels (nothing to do with what level your Pokemon or World of Warcraft character is, though), followed by a number (for example, RAID 0 or RAID 1). Each level provides a balance of pros and cons depending on our needs. All levels other than RAID 0 provide a degree of data redundancy (in layman’s terms, they keep your data safe from hard drive failure) against non-recoverable segments and read errors, including complete hard drive failure. This aside, there are three ways of obtaining RAID: “Hardware RAID”, “Software RAID” and “Firmware RAID”.

A brief history lesson:

The term “RAID” was coined by David Patterson, Garth A. Gibson and Randy Katz in 1987 at the University of California, Berkeley. The concept, however, has existed (at least in part) since 1977. In 1983, the tech company DEC started to sell RA8X disk systems (now known as RAID 1), and in 1986 IBM registered a patent which would later turn into its modern-day equivalent, RAID 5. As the technology became a standard in the industry, the word represented by the “I” in the RAID acronym went from meaning “inexpensive” (a marketing strategy at the time, considering hardware prices) to “independent”, which is a more accurate representation of the technology itself anyway.

Until a few years ago, RAID was mostly an industry exclusive and, there, pretty much a standard: it was unheard of not to set up some sort of RAID system when it came to industry data, as losing data there wasn’t exactly the same as losing a few photos your parents took of you as a kid (which in retrospect, considering the general weirdness of those, might not be a bad thing, and therefore not a great example). It was uncommon in consumer-grade hardware, however, due to it being incredibly expensive and therefore not affordable for the average family. Today, where SATA is the standard and IDE has disappeared, RAID has become a technology that, at a basic level, is cheap and easy to manufacture, which means most motherboards include it at one level or another.

What “levels” can I use?

I’m going to explain which levels we can generally find in today’s consumer-market motherboards, without much technical jargon, in order to make it as simple as possible so you can decide which to choose if and when you do so, discarding other levels that, while they exist, are generally reserved for enterprise-grade systems or servers. If you’re curious, however, here are all the modern-day RAID levels: Conventional: 0, 1, 2, 3, 4, 5, 5E, 6E. Hybrid: 0+1, 1+0 (or 10), 30, 50, 100, 10+1. Proprietary: 50EE, Double Parity, 1.5, 7, S (or Parity RAID), Matrix (no, not like the film), Linux MD RAID 10, IBM ServeRAID 1E, Z.


RAID 0 uses 2 or more drives (any amount above 2 works) in a “stripe”. Regardless of what operating system we use (any version of Windows, Linux, etc.), it will detect these drives in RAID 0 as one single drive. Its size is the sum of the sizes of all drives we chose to add to the RAID 0 array, and it brings a considerable (if not huge) speed increase. The formula to estimate that speed is the following:

“Speed of the slowest drive” x “amount of drives” – “5% of the total”.
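To make the formula concrete, here is a tiny Python sketch of it (the 5% figure is just the rule-of-thumb overhead from the formula above, not a measured constant, and the drive speeds are illustrative):

```python
def raid0_speed(drive_speeds):
    """Estimate RAID 0 throughput (MB/s) using the rule of thumb:
    speed of the slowest drive x number of drives, minus 5% overhead."""
    total = min(drive_speeds) * len(drive_speeds)
    return total * 0.95  # subtract the ~5% overhead

# Two SATA SSDs, 500 MB/s and 480 MB/s: the slower one sets the pace.
print(raid0_speed([500, 480]))  # 912.0
```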

Unlike every other RAID level, RAID 0 has no redundancy or security, which unfortunately means that before you think of adding 4 SSDs in RAID 0 for a 2Gb-per-second read rate, you should know that if one of the drives dies, you lose the information on that drive, and unless we have some very specific equipment and tools, we essentially lose the information on the other drives within that RAID 0 array too. This is because, being a single logical volume, it does not stop and think about where to store each bit, which means that every drive within the array has a huge chance of not holding one single complete file. This in turn is what makes it so fast, but it comes at a cost, so I do not recommend, no matter how tempting it may be, using more than 2 drives in a RAID 0 array, and you should use it for something that benefits from the speed but does not keep critical files (for example, the operating system; worst case scenario, you can just reinstall it).
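The risk grows with the drive count because any single failure kills the whole array. Here is a quick back-of-the-envelope check in Python (the 3% annual failure rate is an assumed, purely illustrative figure):

```python
def raid0_failure_chance(per_drive, drives):
    """Chance the array fails, given each drive independently fails
    with probability per_drive over the same period: the array dies
    unless every single drive survives."""
    return 1 - (1 - per_drive) ** drives

# Assuming a 3% annual failure rate per drive:
for n in (2, 4):
    print(n, "drives:", round(raid0_failure_chance(0.03, n) * 100, 2), "%")
```

Going from 2 to 4 drives roughly doubles the chance of losing everything, which is one more reason to cap a RAID 0 array at two drives.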


RAID 1 is a basic redundancy level. It requires 2 or more drives, but always in pairs (2, 4, 6, 8, 10, etc.). By using RAID 1 we lose the ability to use one out of every two drives. Our operating system will see just one logical drive (just like in RAID 0) that is the size of the smallest drive within the array and is as fast as the slowest drive within the array.

Each written bit is simultaneously written to all disks within the RAID 1 array. Because of this if one of the drives suffers from read errors or has corrupted sectors or even just plain dies our system will not be affected, it will just let us know that one of the drives needs replacing as soon as possible but Windows or whatever OS you’re running will continue to work, thus preventing data and time loss.

So basically, RAID 1 protects us from data loss caused by drive errors (not by human error; if you delete a file, that file is gone, obviously) at the expense of losing access to half the drives that take part in the array. A RAID 1 array can survive the sudden death of up to half its drives, at which point we simply replace the dead drives and resync the remaining drives with the new ones. Today, the whole process can be done without even turning off our PC thanks to “hot plug” technology (as long as we have it active within our BIOS/UEFI), which is available in almost all motherboards that have SATA connections and is commonly used in basic servers.

As with RAID 0, it’s best to use drives that have the same speed, because the overall speed will be that of the slowest drive (the difference being that in RAID 0 we can experience speed fluctuations depending on which drive is in use at that very second, but in RAID 1 the speed is consistent, since all drives are used at exactly the same time all the time and therefore stick to the speed of the slowest drive), so it would be wasteful to use fast and slow drives in the same RAID 1 array. We should also use drives of the same size: if we use one 500Gb and one 1000Gb drive, the resulting logical drive will only be 500Gb, so we’d be wasting space.
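The two rules above (size of the smallest drive, speed of the slowest) can be sketched in a couple of lines of Python; the drive figures below are illustrative:

```python
def raid1_properties(drives):
    """drives: list of (size_gb, speed_mb_s) pairs in the mirror.
    RAID 1 exposes the smallest size and runs at the slowest speed."""
    sizes = [size for size, _ in drives]
    speeds = [speed for _, speed in drives]
    return min(sizes), min(speeds)

# A 500Gb/550MB/s drive mirrored with a 1000Gb/520MB/s drive:
size, speed = raid1_properties([(500, 550), (1000, 520)])
print(size, speed)  # 500 520 -- half the big drive is wasted
```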


RAID 5 is a level we’re not going to talk much about because, while it is available in consumer motherboards, it just isn’t something I’d (or most people would) recommend, due to its high write cost, which reduces the life expectancy of our drives considerably. Its original intention is to expand the functionality of RAID 1 with a lower cost in drives (not at a monetary level but rather at a unit and usable-space level).

RAID 5 requires a minimum of 3 drives. As with RAID 1, it is a redundancy level, but unlike RAID 1, we do not lose 50% of our storage; with 3 drives we only lose 33% (one drive’s worth of parity). The mirror information, unlike RAID 1’s direct copy, is spread across all drives, which is why there’s a bigger continuous write cost and why it isn’t recommended much today: RAID 6 (not available in consumer motherboards) solves this issue and pretty much replaces RAID 5 when it comes to servers and enterprise-level systems.
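That usable fraction follows directly from keeping one drive's worth of parity, so the loss shrinks as drives are added. A minimal sketch of the arithmetic:

```python
def raid5_usable_fraction(n):
    """One drive's worth of parity is spread across the array,
    so the usable fraction is (n - 1) / n."""
    if n < 3:
        raise ValueError("RAID 5 requires at least 3 drives")
    return (n - 1) / n

print(round(raid5_usable_fraction(3) * 100))  # 67 -> we lose ~33%
print(round(raid5_usable_fraction(4) * 100))  # 75 -> we lose 25%
```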

RAID 0+1

Hybrid RAID (0+1 & 1+0) is, in essence, a RAID array of two other RAID arrays. We’re going to focus on RAID 0 and RAID 1. On one hand we have the speed and appeal (especially using SSDs) that RAID 0 offers, but also the constant fear that if one of the drives within the RAID 0 array dies, we’ll lose the data on all of them. On the other hand we have the peace of mind and tranquility that RAID 1 offers, but the annoyance of not gaining speed and “losing” storage space, as half of the space is being used for what is, in essence, an insta-backup. From this we get the brilliant (yes, brilliant, there’s no other way of describing it) Hybrid RAID, which allows us to have in one single array the advantages of both and only some of the disadvantages. Within this type of RAID there are two variants.

RAID 0+1 creates two RAID 0 arrays (speed and capacity with no redundancy) and binds them in a RAID 1 array which gives us redundancy in a system that normally wouldn’t have it.

This system can sustain one or more drive failures in one of the 2 RAID 0 arrays that compose our RAID 0+1 system, even all of the drives within one of the two RAID 0 arrays, but if we lose drives in both RAID 0 arrays (regardless of quantity) we will lose all our data. So, we would have to lose a drive in each RAID 0 array at the same time, which, while unlikely, is not impossible, and due to this risk this system is no longer used much at an enterprise level.


RAID 1+0 (or simply RAID 10) is, as far as I’m concerned, the ideal everyday consumer solution, as it offers us something similar to RAID 0+1 but with fewer disadvantages. RAID 1+0 creates a RAID 0 array out of two RAID 1 arrays. As with RAID 0+1 we obtain an increase in speed, but somewhat less, because in essence we’re only obtaining the speed of 2 drives put together (remember, each RAID 1 runs at the speed of the slowest drive within its array), following the formula specified in the description of RAID 0. That said, we can lose drives from both RAID 1 arrays as long as we don’t lose all the drives from any one array. You could argue that this still has the same chance of complete failure; after all, we have the same unlikely chance of losing both drives in a RAID 1 array within a RAID 1+0 as of losing one disk in each array within the RAID 0+1, which is why there’s an ideal setup to prevent this.

The ideal setup would be a RAID 1+0 of 8 drives arranged as 4 mirrored RAID 1 pairs striped together, giving us the capacity of 4 drives overall (following the formula in the RAID 0 explanation for speed), with the peace of mind that the array only dies if both drives of the very same pair fail at the very same time, which is incredibly unlikely (if one drive from each of several different pairs dies, we can continue working as if nothing happened).

An ideal example of personal use would be 8x 256Gb SSDs in RAID 1+0 giving us 1000Gb of usable storage, an average of 1Gb/s read-write rate with the peace of mind that we’d have to be the most unlucky person in the world in order to lose our data due to drive failure.
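Sanity-checking that example in Python (assuming the standard mirrored-pairs layout, where half the raw capacity is usable):

```python
def raid10_usable_gb(n_drives, size_gb):
    """Standard RAID 1+0: drives form mirrored pairs and the pairs
    are striped together, so half of the raw capacity is usable."""
    if n_drives % 2:
        raise ValueError("RAID 1+0 needs an even number of drives")
    return n_drives * size_gb // 2

print(raid10_usable_gb(8, 256))  # 1024 -- roughly the 1000Gb quoted
```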

What methods can I use to obtain RAID?

RAID levels aside, there are 3 ways of obtaining and using RAID. In some cases we will have more than one of these at our disposal, and each way is considerably different from the rest, so I’ve ordered them from least desirable to most desirable; this way you’ll know which to go for if you have more than one available.

Software RAID

This way of acquiring RAID does not require any specific hardware and can be done on any computer that has 2 or more drives (even over IDE). The only thing we need is Windows 7 or newer, or certain Linux distros. The obvious advantage of this is that we can set up RAID on any system without worrying about compatibility, and we can use drives of any connection type, size or speed. The disadvantage, however, is a considerable one, especially on low-powered computers, since Software RAID uses a considerable amount of processor and RAM resources. We also do not have the option to easily swap out one drive for another (on Hardware and Firmware RAID we can do so without any issue: we just swap the drive or drives, synchronise, and done), and because of this it is not advisable to use Software RAID on the drive that our operating system lives on, since if our RAID array fails we will not be able to boot into Windows or Linux to fix the issue. So, Software RAID should only be used for storage purposes.

Firmware RAID

This is the most common way of acquiring RAID today, as Firmware RAID refers to the integrated systems that come with consumer-grade motherboards, which means that for most readers, assuming their motherboard is compatible (which you’ll find out about in our next article on the matter), this is the way to go. The main difference between this and Hardware RAID is that we continue to depend on our processor and RAM resources; but considering the motherboards that contain this technology, plus the power of modern-day processors and RAM, we will not notice the resources lost by using this type of RAID.

Hardware RAID

There’s a strong chance you will never use or even see a Hardware RAID system. These depend on a PCI/PCI-E card that has its own processor and RAM, which means it does not require any of our resources, which is great. On the other hand, the main issue with Hardware RAID is that if the card dies (which, granted, is unlikely for many years, as these things are seriously built to last) and is a discontinued model (again, it would have to be very old for this to happen), you will have a problem, because you will need the very same model in order to use the RAID system you had when your card died and therefore get access to your data.

That said, not all PCI/PCI-E RAID cards fall within the Hardware RAID category as many of the cheaper models do not have their own processor and/or ram so these fall within the Firmware RAID category. Unfortunately, the ones that do fit within the Hardware RAID category are incredibly expensive and due to this tend to be used almost exclusively in the world of servers and enterprise systems that require maximum security and efficiency.

Would you like to know if your computer is compatible and how to set up a RAID system to get the most of your PC?
Click here!