Friday, March 26, 2010

RAID Technology


RAID stands for Redundant Array of Inexpensive (or sometimes "Independent") Disks.

RAID is a method of combining several hard disk drives into one logical unit (two or more disks grouped together to appear as a single device to the host system). RAID technology was developed to address the fault-tolerance and performance limitations of conventional disk storage. It can offer fault tolerance and higher throughput levels than a single hard drive or group of independent hard drives. While arrays were once considered complex and relatively specialized storage solutions, today they are easy to use and essential for a broad spectrum of client/server applications.

HISTORY
RAID technology was first defined by a group of computer scientists at the University of California at Berkeley in 1987. The scientists studied the possibility of combining two or more disks so that they appear as a single device to the host system.
Although the array's performance was better than that of large, single-disk storage systems, reliability was unacceptably low. To address this, the scientists proposed redundant architectures to provide ways of achieving storage fault tolerance. In addition to defining RAID levels 1 through 5, the scientists also studied data striping -- a non-redundant array configuration that distributes files across multiple disks in an array. Often known as RAID 0, this configuration actually provides no data protection. However, it does offer maximum throughput for some data-intensive applications such as desktop digital video production.
THE DRIVING FACTORS BEHIND RAID

A number of factors are responsible for the growing adoption of arrays for critical network storage.
More and more organizations have created enterprise-wide networks to improve productivity and streamline information flow. While the distributed data stored on network servers provides substantial cost benefits, these savings can be quickly offset if information is frequently lost or becomes inaccessible. As today's applications create larger files, network storage needs have increased proportionately. In addition, accelerating CPU speeds have outstripped data transfer rates to storage media, creating bottlenecks in today's systems.
RAID storage solutions overcome these challenges by providing a combination of outstanding data availability, extraordinary and highly scalable performance, high capacity, and recovery with no loss of data or interruption of user access.
By integrating multiple drives into a single array -- which is viewed by the network operating system as a single disk drive -- organizations can create cost-effective, minicomputer-sized solutions of up to a terabyte or more of storage.

RAID LEVELS

There are several different RAID "levels" or redundancy schemes, each with inherent cost, performance, and availability (fault-tolerance) characteristics designed to meet different storage needs. No individual RAID level is inherently superior to any other. Each of the five array architectures is well-suited for certain types of applications and computing environments. For client/server applications, storage systems based on RAID levels 1, 0/1, and 5 have been the most widely used. This is because popular NOSs such as Windows NT® Server and NetWare manage data in ways similar to how these RAID architectures perform.

RAID 0
Data striping without redundancy (no protection).

• Minimum number of drives: 2
• Strengths: Highest performance.

• Weaknesses: No data protection; If one drive fails, all data in the array is lost.


DRIVE 1 DRIVE 2
Data A Data B
Data C Data D
Data E Data F
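
To make the striping idea concrete, here is a minimal Python sketch (not taken from this article) of how a RAID 0 set might map a logical byte offset onto a member drive; the 64 KB chunk size and two-drive layout are illustrative assumptions.

# Sketch: mapping logical offsets onto a hypothetical RAID 0 stripe set.
CHUNK_SIZE = 64 * 1024   # bytes per stripe unit (assumed value)
NUM_DRIVES = 2           # assumed two-drive array, as in the table above

def raid0_locate(logical_offset):
    """Return (drive_index, offset_on_drive) for a logical byte offset."""
    chunk = logical_offset // CHUNK_SIZE
    drive = chunk % NUM_DRIVES                  # chunks alternate across drives
    offset_on_drive = (chunk // NUM_DRIVES) * CHUNK_SIZE + logical_offset % CHUNK_SIZE
    return drive, offset_on_drive

# Consecutive chunks land on alternating drives, which is where the speed
# comes from, and why losing any single drive loses the whole array.
for offset in range(0, 4 * CHUNK_SIZE, CHUNK_SIZE):
    print(offset, raid0_locate(offset))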

RAID 1
Disk mirroring.

• Minimum number of drives: 2
• Strengths: Very high performance; Very high data protection; Very minimal penalty on write performance.

• Weaknesses: High redundancy cost overhead; Because all data is duplicated, twice the storage capacity is required.
Mirroring (one standard host adapter):

DRIVE 1 DRIVE 2
Data A Data A
Data B Data B
Data C Data C
Original Data Mirrored Data

Duplexing (standard host adapter 1 and standard host adapter 2):

DRIVE 1 DRIVE 2
Data A Data A
Data B Data B
Data C Data C
Original Data Mirrored Data
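
As a rough illustration of the mirroring described above, the following Python sketch (an illustration only, with dictionaries standing in for real devices) duplicates every write to both members and shows that a read still succeeds after one member fails.

# Sketch: RAID 1 behaviour with two in-memory stand-in "drives".
drive_1 = {}   # original
drive_2 = {}   # mirror

def mirrored_write(block_no, data):
    # Every write goes to both members; this is the redundancy cost of
    # RAID 1: twice the capacity is consumed for the same data.
    drive_1[block_no] = data
    drive_2[block_no] = data

def mirrored_read(block_no):
    # Either member can serve the read; fall back to the mirror if the
    # original has failed (simulated here as a missing entry).
    return drive_1.get(block_no, drive_2.get(block_no))

mirrored_write(0, b"Data A")
del drive_1[0]                 # simulate losing drive 1
print(mirrored_read(0))        # b'Data A' is still available from the mirror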

RAID 2

No practical use.

• Minimum number of drives: Not used in LAN environments
• Strengths: Previously used for RAM error correction (known as Hamming code) and in disk drives before the use of embedded error correction.

• Weaknesses: No practical use; Same performance can be achieved by RAID 3 at lower cost

RAID 3
Byte-level data striping with dedicated parity drive.

• Minimum number of drives: 3
• Strengths: Excellent performance for large, sequential data requests.

• Weaknesses: Not well-suited for transaction-oriented network applications; Single parity drive does not support multiple, simultaneous read and write requests.
RAID 4
Block-level data striping with dedicated parity drive.

• Minimum number of drives: 3 (Not widely used)
• Strengths: Data striping supports multiple simultaneous read requests.

• Weaknesses: Write requests suffer from the same single parity-drive bottleneck as RAID 3; RAID 5 offers equal data protection and better performance at the same cost.

RAID 5
Block-level data striping with distributed parity.

• Minimum number of drives: 3
• Strengths: Best cost/performance for transaction-oriented networks; Very high performance, very high data protection; Supports multiple simultaneous reads and writes; Can also be optimized for large, sequential requests.

• Weaknesses: Write performance is slower than RAID 0 or RAID 1.
DRIVE 1 DRIVE 2 DRIVE 3
Parity A+B Data A Data B
Data C Parity C+D Data D
Data E Data F Parity E+F
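
The table above shows the parity block rotating from drive to drive. Here is a minimal Python sketch of that placement rule, assuming the simple one-drive-per-stripe rotation shown in the table (only one of several layouts real controllers use):

# Sketch: which member holds parity for each stripe in a 3-drive RAID 5 set.
NUM_DRIVES = 3

def parity_drive(stripe_no):
    # Rotating parity per stripe (as in the table above) avoids the
    # dedicated parity-drive bottleneck of RAID 3 and RAID 4.
    return stripe_no % NUM_DRIVES

for stripe in range(3):
    layout = ["data"] * NUM_DRIVES
    layout[parity_drive(stripe)] = "parity"
    print("stripe", stripe, layout)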

RAID 01 (0+1) AND RAID 10 (1+0)
Combination of RAID 0 (data striping) and RAID 1 (mirroring). RAID 01 (0+1) is a mirrored configuration of two striped sets (mirror of stripes); RAID 10 (1+0) is a stripe across a number of mirrored sets (stripe of mirrors). RAID 10 provides better fault tolerance and rebuild performance than RAID 01. Both array types provide very good to excellent overall performance by combining the speed of RAID 0 with the redundancy of RAID 1 without requiring parity calculations.

• Minimum number of drives: 4
• Strengths: Highest performance, highest data protection (can tolerate multiple drive failures).

• Weaknesses: High redundancy cost overhead; Because all data is duplicated, twice the storage capacity is required; Requires a minimum of four drives.

RAID 01 (0+1 mirror of stripes)
DRIVE 1 DRIVE 2 DRIVE 3 DRIVE 4
Data A Data B mA mB
Data C Data D mC mD
Data E Data F mE mF
Original Data Original Data Mirrored Data Mirrored Data

RAID 10 (1+0 stripe of mirrors)
DRIVE 1 DRIVE 2 DRIVE 3 DRIVE 4
Data A mA Data B mB
Data C mC Data D mD
Data E mE Data F mF
Original Data Mirrored Data Original Data Mirrored Data
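
To see why RAID 10 tolerates drive failures better than RAID 01, the following Python sketch enumerates every two-drive failure in a hypothetical four-drive array under both groupings (the pairings follow the tables above and are assumptions for illustration). With these groupings, RAID 10 survives four of the six possible two-drive failures, while RAID 01 survives only two.

# Sketch: comparing two-drive failure tolerance of RAID 10 and RAID 01
# for a hypothetical four-drive array.
from itertools import combinations

DRIVES = [1, 2, 3, 4]
RAID10_MIRROR_PAIRS = [(1, 2), (3, 4)]   # stripe across two mirror pairs
RAID01_STRIPE_SETS = [(1, 2), (3, 4)]    # mirror of two stripe sets

def raid10_survives(failed):
    # Data survives as long as no mirror pair loses both of its members.
    return all(not set(pair) <= set(failed) for pair in RAID10_MIRROR_PAIRS)

def raid01_survives(failed):
    # Data survives only if at least one whole stripe set is untouched.
    return any(not (set(stripe) & set(failed)) for stripe in RAID01_STRIPE_SETS)

for failed in combinations(DRIVES, 2):
    print("failed drives", failed,
          "RAID 10 survives:", raid10_survives(failed),
          "RAID 01 survives:", raid01_survives(failed))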

TYPES OF RAID

There are three primary array implementations: software-based arrays, bus-based array adapters/controllers, and subsystem-based external array controllers. As with the various RAID levels, no one implementation is clearly better than another -- although software-based arrays are rapidly losing favor as high-performance, low-cost array adapters become increasingly available. Each array solution meets different server and network requirements, depending on the number of users, applications, and storage requirements.
It is important to note that all RAID code is based on software. The difference among the solutions is where that software code is executed -- on the host CPU (software-based arrays) or offloaded to an on-board processor (bus-based and external array controllers).
Software-based RAID
Description: Primarily used with entry-level servers, software-based arrays rely on a standard host adapter and execute all I/O commands and mathematically intensive RAID algorithms on the host server CPU. This can slow system performance by increasing host PCI bus traffic, CPU utilization, and CPU interrupts. Some NOSs such as NetWare and Windows NT include embedded RAID software. The chief advantage of this embedded RAID software has been its lower cost compared to higher-priced RAID alternatives. However, this advantage is disappearing with the advent of lower-cost, bus-based array adapters.
Advantages:
• Low price
• Requires only a standard controller

Hardware-based RAID
Description: Unlike software-based arrays, bus-based array adapters/controllers plug into a host bus slot (typically a 133 MB/sec PCI bus) and offload some or all of the I/O commands and RAID operations to one or more secondary processors. Originally used only with mid- to high-end servers due to cost, lower-cost bus-based array adapters are now available specifically for entry-level server network applications.

In addition to offering the fault-tolerant benefits of RAID, bus-based array adapters/controllers perform connectivity functions that are similar to standard host adapters. By residing directly on a host PCI bus, they provide the highest performance of all array types. Bus-based arrays also deliver more robust fault-tolerant features than embedded NOS RAID software.

As newer, high-end technologies such as Fibre Channel become readily available, the performance advantage of bus-based arrays compared to external array controller solutions may diminish.
Advantages:
• Data protection and performance benefits of RAID
• More robust fault-tolerant features and increased performance versus software-based RAID

External Hardware RAID Card
Description: Intelligent external array controllers "bridge" between one or more server I/O interfaces and single- or multiple-device channels. These controllers feature an on-board microprocessor, which provides high performance and handles functions such as executing RAID software code and supporting data caching.

External array controllers offer complete operating-system independence, the highest availability, and the ability to scale storage to extraordinarily large capacities (up to a terabyte and beyond). These controllers are usually installed in networks of stand-alone Intel-based and UNIX-based servers as well as clustered server environments.
Advantages:
• OS independent
• Can build super-high-capacity storage systems for high-end servers


SERVER TECHNOLOGY COMPARISON

UDMA
• Best suited for: Low-cost, entry-level servers with limited expandability
• Advantages: Uses low-cost ATA drives

SCSI
• Best suited for: Low- to high-end servers when scalability is desired
• Advantages: Performance up to 160 MB/s; Reliability; Connectivity to the largest variety of peripherals; Expandability

Fibre Channel
• Best suited for: Server-to-server campus networks
• Advantages: Performance up to 100 MB/s; Dual active loop data path capability; Infinitely scalable

PARITY

The concept behind RAID is relatively simple. The fundamental premise is to be able to recover data on-line in the event of a disk failure by using a form of redundancy called parity. In its simplest form, parity is the sum of the data stored across the drives in an array. Recovery from a drive failure is achieved by reading the remaining good data and checking it against the parity data stored by the array. Parity is used by RAID levels 2, 3, 4, and 5. RAID 1 does not use parity because all data is completely duplicated (mirrored). RAID 0, used only to increase performance, offers no data redundancy at all.
A + B + C + D = PARITY

1 + 2 + 3 + 4 = 10
1 + 2 + X + 4 = 10
        7 + X = 10
            X = 10 - 7
            X = 3

X (the missing data) is recovered as 3 from the surviving data and the parity value.
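
In real arrays the "addition" above is a bitwise XOR rather than decimal addition, but the recovery principle is identical: XOR the surviving data blocks with the stored parity and the missing block reappears. A minimal Python sketch with made-up four-byte blocks:

# Sketch: parity creation and recovery with XOR, as used by RAID 3, 4 and 5.
def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # data blocks on four drives
parity = xor_blocks(data)                     # stored on the parity drive

# Simulate losing the third drive, then rebuild it from survivors + parity.
survivors = data[:2] + data[3:]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == b"CCCC"                     # the "missing" block is recovered
print(rebuilt)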


FAULT TOLERANCE

RAID technology does not prevent drive failures. However, RAID does provide insurance against disk drive failures by enabling real-time data recovery without data loss.
The fault tolerance of arrays can also be significantly enhanced by choosing the right storage enclosure. Enclosures that feature redundant, hot-swappable drives, power supplies, and fans can greatly increase storage subsystem uptime based on a number of widely accepted measures:

• MTDL:
Mean Time to Data Loss. The average time before the failure of an array component causes data to be lost or corrupted.
• MTDA:
Mean Time between Data Access (or availability). The average time before non-redundant components fail, causing data inaccessibility without loss or corruption.
• MTTR:
Mean Time To Repair. The average time required to bring an array storage subsystem back to full fault tolerance.
• MTBF:
Mean Time Between Failures. Used to measure the average reliability/life expectancy of computer components. MTBF is not as well-suited for measuring the reliability of array storage systems as MTDL, MTTR, or MTDA (defined above) because it does not account for an array's ability to recover from a drive failure. In addition, the enhanced enclosure environments used with arrays to increase uptime can further limit the applicability of MTBF ratings for array solutions.
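
These measures can also be combined into back-of-the-envelope reliability estimates. One commonly used approximation for the mean time to data loss of an N-drive, single-parity array is MTBF^2 / (N x (N - 1) x MTTR); the Python sketch below applies it with assumed figures that do not come from this article.

# Sketch: rough MTDL estimate for a single-parity array.
# All input figures are illustrative assumptions.
DRIVE_MTBF_HOURS = 500_000    # assumed per-drive MTBF
MTTR_HOURS = 24               # assumed time to replace and rebuild a drive
NUM_DRIVES = 5

# Data is lost only if a second drive fails while the first is being rebuilt,
# which is why a short MTTR (e.g. hot-swappable drives) matters so much.
mtdl_hours = DRIVE_MTBF_HOURS ** 2 / (NUM_DRIVES * (NUM_DRIVES - 1) * MTTR_HOURS)
print("Estimated MTDL: about", round(mtdl_hours / 8760), "years")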

Friday, March 12, 2010

1. You have installed Windows Service Pack 2 and, after updating Windows to Service Pack 3, you are able to log in to the system but keep receiving a message that your copy of Windows is not genuine. What solutions are available for this problem, both legal and illegal?
Illegal: The Windows Genuine Advantage Notification popped up because Windows Update installed it again via Automatic Updates. It appears while a user logs in to Windows, displays a message near the system tray, and keeps reminding you in between work that the copy of Windows is not genuine. It has been reported since its first release that even genuine users get this prompt, so Microsoft has itself released instructions for its removal. When I searched Google about this issue, I landed on pages offering many removal methods, including patching existing files with cracked versions, which I would highly recommend avoiding, as they might contain malicious code and get you into more trouble. I found this method of removing the Windows Genuine Advantage Notification:

1. Launch Windows Task Manager.
2. End the wgatray.exe process in Task Manager.
3. Restart Windows XP in Safe Mode.
4. Delete WgaTray.exe from C:\Windows\System32.
5. Delete WgaTray.exe from C:\Windows\System32\dllcache.
6. Launch RegEdit.
7. Browse to the following location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify
8. Delete the 'WgaLogon' key and all its contents.
9. Reboot Windows XP.

Note that the latest version of the notification tool is a little trickier to handle: it pops up again as soon as you end it from Task Manager, and while it is running in memory you cannot delete it either.
Illegal: Download a patch from the Internet and run it on your copy of Windows.
Legal: Register your copy of Windows through the official Microsoft website.

2. You downloaded Windows 7 from the official Microsoft website in December 2009, and at present your system is rebooting every 2 hours. What solutions, legal or illegal, are available to overcome this problem?

Ans: If you have a warm, fuzzy feeling inside when thinking about Microsoft and their decision to let you play with their new OS for free until August next year, get ready for the kicker. From March, the release candidate you are running is going to start reminding you, in the most intrusive way possible, that a commercial copy of the OS needs to be purchased to continue enjoying the benefits of Windows 7.

You can understand Microsoft wanting to remind users that they need to buy Windows 7, but it is the method they have decided to employ that is going to annoy and frustrate users. From March 2010 the Windows 7 RC will start automatically rebooting your PC every two hours. So, if you happen to be doing something important, you will have to stop as the friendly "buy me!" shutdown reminder is invoked.

For the RC, bi-hourly shutdowns begin on March 1st, 2010. You will be alerted to install a released version of Windows, and your PC will shut down automatically every 2 hours. On June 1st, 2010, if you are still on the Windows 7 RC, your license for the RC will expire and the non-genuine experience is triggered: your wallpaper is removed and "This copy of Windows is not genuine" is displayed in the lower right corner above the taskbar.

This isn't a new tactic to remind users they need to upgrade; Microsoft did the same thing with the Vista previews. Windows 7 was expected to release in October 2009, and at the very latest by January 2010, giving you plenty of time to buy a copy before the automatic shutdowns begin.

Wednesday, March 10, 2010

NTFS vs FAT

FAT16


1.) The FAT16 file system was introduced way back with MS–DOS in 1981, and it's showing its age. It was designed originally to handle files on a floppy drive, and has had minor modifications over the years so it can handle hard disks, and even file names longer than the original limitation of 8.3 characters, but it's still the lowest common denominator. The biggest advantage of FAT16 is that it is compatible across a wide variety of operating systems, including Windows 95/98/Me, OS/2, Linux, and some versions of UNIX. The biggest problem of FAT16 is that it has a fixed maximum number of clusters per partition, so as hard disks get bigger and bigger, the size of each cluster has to get larger. In a 2–GB partition, each cluster is 32 kilobytes, meaning that even the smallest file on the partition will take up 32 KB of space. FAT16 also doesn't support compression, encryption, or advanced security using access control lists.
2.) A FAT volume has a maximum size of 2GB and supports MS-DOS as well as being used for some dual boot configurations, but backward compatibility is about the only reason one can think of that FAT should ever be used, other than for the occasional floppy diskette.

FAT32

1.) The FAT32 file system, originally introduced with Windows 95 OSR2 (OEM Service Release 2), is really just an extension of the original FAT16 file system that provides for a much larger number of clusters per partition. As such, it greatly improves overall disk utilization compared to FAT16. However, FAT32 shares all of the other limitations of FAT16 and adds an important additional limitation: many operating systems that can recognize FAT16 will not work with FAT32, most notably Windows NT, but also Linux and UNIX. Now this isn't a problem if you're running FAT32 on a Windows XP computer and sharing your drive out to other computers on your network; they don't need to know (and generally don't care) what your underlying file system is.

NTFS

1.) The NTFS file system, introduced with the first version of Windows NT, is a completely different file system from FAT. It provides for greatly increased security, file-by-file compression, quotas, and even encryption. It is the default file system for new installations of Windows XP, and if you're doing an upgrade from a previous version of Windows, you'll be asked if you want to convert your existing file systems to NTFS. Don't worry. If you've already upgraded to Windows XP and didn't do the conversion then, it's not a problem. You can convert FAT16 or FAT32 volumes to NTFS at any point. Just remember that you can't easily go back to FAT or FAT32 (without reformatting the drive or partition), not that I think you'll want to.
The NTFS file system is generally not compatible with other operating systems installed on the same computer, nor is it available when you've booted a computer from a floppy disk. For this reason, many system administrators, myself included, used to recommend that users format at least a small partition at the beginning of their main hard disk as FAT. This partition provided a place to store emergency recovery tools or special drivers needed for reinstallation, and was a mechanism for digging yourself out of the hole you'd just dug into. But with the enhanced recovery abilities built into Windows XP (more on that in a future column), I don't think it's necessary or desirable to create that initial FAT partition.



FAT16, FAT32 and NTFS each use different cluster sizes depending on the size of the volume, and each file system has a maximum number of clusters it can support. The smaller the cluster size, the more efficiently a disk stores information because unused space within a cluster cannot be used by other files; the more clusters supported, the larger the volumes or partitions that can be created.
The table below provides a comparison of volume and default cluster sizes for the different Windows file systems still commonly in use:


Which File System to Choose?
As much as everyone would like for there to be a stock answer to the selection question, there isn't. Different situations and needs will play a large role in the decision of which file system to adopt. There isn't any argument that NTFS offers better security and reliability. Some also say that NTFS is more flexible, but that can get rather subjective depending on the situation and work habits, whereas NTFS superiority in security and reliability is seldom challenged. Listed below are some of the most common factors to consider when deciding between FAT32 and NTFS.
· Security
FAT32 provides very little security. A user with access to a drive using FAT32 has access to the files on that drive.
NTFS allows the use of NTFS permissions. They are much more difficult to implement, but folder and file access can be controlled individually, down to an extreme degree if necessary. The downside of using NTFS permissions is that the chance of making an error and breaking the system is greatly magnified.
Windows XP Professional supports file encryption.
· Compatibility
NTFS volumes are not recognized by Windows 95/98/Me. This is only a concern when the system is set up for dual or multi-booting. FAT32 must be used for any drives that must be accessed when the computer is booted from Windows 95/98 or Windows Me.
An additional note to the previous statement. Users on the network have access to shared folders no matter what disk format is being used or what version of Windows is installed.
FAT and FAT32 volumes can be converted to NTFS volumes. NTFS cannot be converted to FAT32 without reformatting.
· Space Efficiency
NTFS supports disk quotas, allowing you to control the amount of disk usage on a per user basis.
NTFS supports file compression. FAT32 does not.
How a volume manages data is outside the scope of this article, but once you pass the 8GB partition size, NTFS handles space management much more efficiently than FAT32. Cluster sizes play an important part in how much disk space is wasted storing files. NTFS provides smaller cluster sizes and less disk space waste than FAT32 (a small sketch of this effect follows these points).
In Windows XP, the maximum partition size that can be created using FAT32 is 32GB. This increases to 16TB (terabytes) using NTFS. There is a workaround for the 32GB limitation under FAT32, but it is a nuisance especially considering the size of drives currently being manufactured.
· Reliability
FAT32 drives are much more susceptible to disk errors.
NTFS volumes have the ability to recover from errors more readily than similar FAT32 volumes.
Log files are created under NTFS which can be used for automatic file system repairs.
NTFS supports dynamic cluster remapping for bad sectors, preventing them from being used in the future.
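
Returning to the space-efficiency point above, here is a minimal Python sketch of the "slack space" effect: every file is rounded up to a whole number of clusters, so larger clusters waste more space. The file sizes and the 4 KB versus 32 KB cluster sizes are illustrative assumptions, not measurements.

# Sketch: disk space wasted when files are rounded up to whole clusters.
import math

def allocated_bytes(file_size, cluster_size):
    # Even a tiny file consumes at least one full cluster on disk.
    return math.ceil(file_size / cluster_size) * cluster_size

FILE_SIZES = [700, 3_000, 50_000, 1_200_000]      # assumed file sizes in bytes

for cluster_size in (4 * 1024, 32 * 1024):        # e.g. small versus large clusters
    wasted = sum(allocated_bytes(f, cluster_size) - f for f in FILE_SIZES)
    print(cluster_size // 1024, "KB clusters waste", wasted, "bytes for these files")
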
The Final Choice

As prior versions of Windows continue to age and are replaced in the home and workplace, there will be no need for the older file systems. Hard drives aren't going to get smaller, networks are likely to get larger and more complex, and security is evolving almost daily as more and more users become connected. For all the innovations that Windows 95 brought to the desktop, it's now a virtual dinosaur. Windows 98 is fast on the way out, and that leaves NT and Windows 2000, both well suited to NTFS. To wrap up, there may be compelling reasons why your current situation requires a file system other than NTFS, or a combination of different systems for compatibility, but if at all possible go with NTFS. Even if you don't utilize its full scope of features, the stability and reliability it offers make it the hands-down choice.