Sunday, February 21, 2010

novell netware

Novell NetWare


The NetWare console screen (August 22, 2006)
Company / developer: Novell, Inc.
Working state: Current
Source model: Closed source
Initial release: 1983
Latest stable release: 6.5 SP8 / May 6, 2009
Available language(s): English
Kernel type: Hybrid kernel
Default user interface: Command-line interface
License: Proprietary
Official website: www.novell.com

NetWare is a network operating system developed by Novell, Inc. It initially used cooperative multitasking to run various services on a personal computer, and the network protocols were based on the archetypal Xerox Network Systems stack.
NetWare has been superseded by Open Enterprise Server (OES). The latest version of NetWare is v6.5 Support Pack 8, which is identical to OES 2 SP1 running the NetWare kernel.
History
NetWare evolved from a very simple concept: file sharing instead of disk sharing. In 1983, when the first versions of NetWare were designed, all other competing products were based on the concept of providing shared direct disk access. Novell's alternative approach was validated by IBM in 1984, which helped promote the product.
With Novell NetWare, disk space was shared in the form of NetWare volumes, comparable to DOS volumes. Clients running MS-DOS would run a special terminate and stay resident (TSR) program that allowed them to map a local drive letter to a NetWare volume. Clients had to log in to a server in order to be allowed to map volumes, and access could be restricted according to the login name. Similarly, they could connect to shared printers on the dedicated server, and print as if the printer was connected locally.
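For example, after logging in, a DOS workstation could map a drive with a command along the lines of MAP F:=FS1/SYS:PUBLIC (FS1 being a hypothetical server name here), after which the PUBLIC directory on the server's SYS volume appeared to DOS programs as local drive F:.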
At the end of the 1990s, with Internet connectivity booming, the Internet's TCP/IP protocol became dominant on LANs. Novell had introduced limited TCP/IP support in NetWare v3.x (circa 1992) and v4.x (circa 1995), consisting mainly of FTP services and UNIX-style LPR/LPD printing (available in NetWare v3.x), and a Novell-developed web server (in NetWare v4.x). Native TCP/IP support for the client file and print services normally associated with NetWare was introduced in NetWare v5.0 (released in 1998).
During the early-to-mid 1980s Microsoft introduced their own LAN system in LAN Manager based on the competing NBF protocol. Early attempts to muscle in on NetWare were not successful, but this changed with the inclusion of improved networking support in Windows for Workgroups, and then the hugely successful Windows NT and Windows 95. NT, in particular, offered services similar to those offered by NetWare, but on a system that could also be used on a desktop, and connected directly to other Windows desktops where NBF was now almost universal.
The rise of NetWare
The popular use and growth of Novell NetWare began in 1985 with the simultaneous release of NetWare 286 2.0a and the Intel 80286 16-bit processor. The 80286 CPU featured a new 16-bit protected mode that provided access to up to 16 MB of RAM as well as new mechanisms to aid multi-tasking. Prior to the 80286, servers were based on the Intel 8086/8088 8/16-bit processors, which were limited to an address space of 1 MB with not more than 640 KB of directly addressable RAM.
The combination of a higher 16 MB RAM limit, 80286 processor feature utilization, and 256 MB NetWare volume size limit allowed reliable, cost-effective server-based local area networks to be built for the first time. The 16 MB RAM limit was especially important, since it made enough RAM available for disk caching to significantly improve performance. This became the key to Novell's performance while also allowing larger networks to be built.
Another significant difference of NetWare 286 was that it was hardware-independent, unlike competing server systems from 3Com. Novell servers could be assembled using any brand system with an Intel 80286 or higher CPU, any MFM, RLL, ESDI, or SCSI hard drive and any 8- or 16-bit network adapter for which NetWare drivers were available.
Novell also designed a compact and simple DOS client software program that allowed DOS stations to connect to a server and access the shared server hard drive. While the NetWare server file system introduced a new, proprietary file system design, it looked like a standard DOS volume to the workstation, ensuring compatibility with all existing DOS programs.
Early years
NetWare was based on the consulting work by SuperSet Software, a group founded by the friends Drew Major, Dale Neibaur, Kyle Powell and later Mark Hurst. This work was based on their classwork at Brigham Young University in Provo, Utah, starting in October 1981.
In 1983, Raymond Noorda engaged the work by the SuperSet team. The team was originally assigned to create a CP/M disk sharing system to help network the CP/M hardware that Novell was selling at the time. The team was privately convinced that CP/M was a doomed platform and instead came up with a successful file sharing system for the newly introduced IBM-compatible PC. They also wrote an application called Snipes, a text-mode game, and used it to test the new network and demonstrate its capabilities. Snipes was the first network application ever written for a commercial personal computer, and it is recognized as one of the precursors of many popular multiplayer games such as Doom and Quake.
This network operating system (NOS) was later called Novell NetWare. NetWare was based on the NetWare Core Protocol (NCP), which is a packet-based protocol that enables a client to send requests to and receive replies from a NetWare server. Initially NCP was directly tied to the IPX/SPX protocol, and NetWare communicated natively using only IPX/SPX.
The first product to bear the NetWare name was released in 1983. It was called NetWare 68 (aka S-Net); it ran on the Motorola 68000 processor on a proprietary Novell-built file server and used a star network topology. This was soon joined by NetWare 86 V4.x, which was written for the Intel 8086. This was replaced in 1985 with Advanced NetWare 86 version 1.0a, which allowed more than one server on the same network. In 1986, after the Intel 80286 processor became available, Novell released Advanced NetWare 286 V1.0a and subsequently V2.0B (which used IPX routing to allow up to 4 network cards in a server). In 1989, with the Intel 80386 available, Novell released NetWare 386. Later Novell consolidated the numbering of their NetWare releases, with NetWare 386 becoming NetWare 3.x.
NetWare 286 2.x
NetWare version 2 was notoriously difficult to configure, since the operating system was provided as a set of compiled object modules that required configuration and linking. Compounding this inconvenience was that the process was designed to run from multiple diskettes, which was slow and unreliable. Any change to the operating system required a re-linking of the kernel and a reboot of the system, requiring at least 20 diskette swaps. An additional complication in early versions was that the installation included COMPSURF, a proprietary low-level format program for MFM hard drives that ran automatically before the software could be loaded.
NetWare was administered using text-based utilities such as SYSCON. The file system used by NetWare 2 was NetWare File System 286, or NWFS 286, supporting volumes of up to 256 MB. NetWare 286 recognized 80286 protected mode, extending NetWare's support of RAM from 1 MB to the full 16 MB addressable by the 80286. A minimum of 2 MB was required to start up the operating system; any additional RAM was used for FAT, DET and file caching. Since 16-bit protected mode was implemented in the i80286 and every subsequent Intel x86 processor, NetWare 286 version 2.x would run on any 80286 or later compatible processor.
NetWare 2 implemented a number of features inspired by mainframe and minicomputer systems that were not available in other operating systems of the day. The System Fault Tolerance (SFT) features included standard read-after-write verification (SFT-I) with on-the-fly bad block re-mapping (at the time, disks did not have that feature built in) and software RAID1 (disk mirroring, SFT-II). The Transaction Tracking System (TTS) optionally protected files against incomplete updates. For single files, this required only a file attribute to be set. Transactions over multiple files and controlled roll-backs were possible by programming to the TTS API.
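The following Python sketch illustrates the all-or-nothing idea behind TTS; it is purely conceptual, and the class and file names are invented (the real TTS was exposed through file attributes and a C-level API, not anything like this):

class Transaction:
    """Toy model of TTS-style all-or-nothing updates."""
    def __init__(self, files):
        self.files = files
        self.pending = {}

    def write(self, name, data):
        self.pending[name] = data          # buffered; not yet visible to others

    def commit(self):
        self.files.update(self.pending)    # all updates land together
        self.pending.clear()

    def rollback(self):
        self.pending.clear()               # incomplete update discarded

files = {"ACCOUNTS.DAT": "balance=100"}
tx = Transaction(files)
tx.write("ACCOUNTS.DAT", "balance=250")
tx.rollback()                              # e.g. the client crashed mid-update
print(files["ACCOUNTS.DAT"])               # still "balance=100"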
NetWare 286 2.x supported two modes of operation: dedicated and non-dedicated. In dedicated mode, the server used DOS only as a boot loader to execute the operating system file net$os.exe. All memory was allocated to NetWare; no DOS ran on the server. For non-dedicated operation, DOS 3.3 or higher would remain in memory, and the processor would time-slice between the DOS and NetWare programs, allowing the server computer to be used simultaneously as network file server and as a user workstation. All extended memory (RAM above 1 MB) was allocated to NetWare, so DOS was limited to only 640 KB; an expanded memory manager would not work because NetWare 286 had control of 80286 protected mode and the upper RAM, both of which were required for DOS to use expanded memory. Time slicing was accomplished using the keyboard interrupt. This feature required strict compliance with the IBM PC design model; otherwise performance was affected. Non-dedicated NetWare was popular on small networks, although it was more susceptible to lockups due to DOS program problems. In some implementations, users would experience significant network slowdown when someone was using the console as a workstation. NetWare 386 3.x and later supported only dedicated operation.
Server licensing on early versions of NetWare 286 was accomplished by using a key card. The key card was designed for an 8-bit ISA bus, and had a serial number encoded on a ROM chip. The serial number had to match the serial number of the NetWare software running on the server. To broaden the hardware base, particularly to machines using the IBM MCA bus, later versions of NetWare 2.x did not require the key card; serialized license floppy disks were used in place of the key cards.
NetWare 3.x
Starting with NetWare 3.x, support for 32-bit protected mode was added, eliminating the 16 MB memory limit of NetWare 286. This allowed larger hard drives to be supported, since NetWare 3.x cached (copied) the entire file allocation table (FAT) and directory entry table (DET) into memory for improved performance.
By accident or design, the initial releases of the client TSR programs modified the high 16 bits of the 32-bit 80386 registers, making them unusable by any other program until this was fixed. The problem was noticed by Phil Katz who added a switch to his PKZIP suite of programs to enable 32-bit register use only when the Netware TSRs were not present.
NetWare version 3 eased development and administration by modularization. Each piece of functionality was controlled by a software module called a NetWare Loadable Module (NLM), loaded either at startup or when it was needed. It was then possible to add functionality such as anti-virus software, backup software, database and web servers, long name support (standard filenames were limited to 8 characters plus a three-letter extension, matching MS-DOS) or Macintosh-style files.
NetWare continued to be administered using console-based utilities. The file system introduced by NetWare 3.x and used by default until NetWare 5.x was NetWare File System 386, or NWFS 386, which significantly extended volume capacity (1 TB, 4 GB files) and could handle up to 16 volume segments spanning multiple physical disk drives. Volume segments could be added while the server was in use and the volume was mounted, allowing a server to be expanded without interruption.
Initially, NetWare used Bindery services for authentication. This was a stand-alone database system where all user access and security data resided individually on each server. When an infrastructure contained more than one server, users had to log-in to each of them individually, and each server had to be configured with the list of all allowed users.
"NetWare Name Services" was a product that allowed user data to be extended across multiple servers, and the Windows "Domain" concept is functionally equivalent to NetWare v3.x Bindery services with NetWare Name Services added on (e.g. a 2-dimensional database, with a flat namespace and a static schema).
For a while, Novell also marketed an OEM version of NetWare 3, called Portable NetWare, together with OEMs such as Hewlett-Packard, DEC and Data General, who ported Novell source code to run on top of their Unix operating systems. Portable NetWare did not sell well.
While NetWare 3.x was current, Novell introduced its first high-availability clustering system, named NetWare SFT-III, which allowed a logical server to be completely mirrored to a separate physical machine. Implemented as a shared-nothing cluster, under SFT-III the OS was logically split into an interrupt-driven I/O engine and the event-driven OS core. The I/O engines serialized their interrupts (disk, network etc.) into a combined event stream that was fed to two identical copies of the system engine through a fast (typically 100 Mbit/s) inter-server link. Because of its non-preemptive nature, the OS core, stripped of non-deterministic I/O, behaved deterministically, like a large finite state machine.
The outputs of the two system engines were compared to ensure proper operation, and copies were fed back to the I/O engines. Using the existing SFT-II software RAID functionality present in the core, disks could be mirrored between the two machines without special hardware. The two machines could be separated as far as the server-to-server link would permit. In case of a server or disk failure, the surviving server could take over client sessions transparently after a short pause since it had full state information and did not, for example, have to re-mount the volumes - a process at which NetWare was notoriously slow. SFT-III was the first NetWare version able to make use of SMP hardware - the I/O engine could optionally be run on its own CPU. The modern incarnation of NetWare's clustering, Novell Cluster Services (introduced in NetWare v5.0), is very different from SFT-III. NetWare SFT-III, ahead of its time in several ways, was a mixed success.
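The lockstep principle behind SFT-III can be sketched in a few lines of Python; the event format and engine internals below are invented for illustration, but the core idea is the same: two identical deterministic engines consume one serialized event stream and must produce identical outputs.

class Engine:
    """Deterministic OS core: same events in, same state and outputs out."""
    def __init__(self):
        self.files = {}

    def apply(self, event):
        op, name, data = event
        if op == "write":
            self.files[name] = data
        return (op, name, self.files.get(name))

primary, secondary = Engine(), Engine()
event_stream = [("write", "A.TXT", "hello"),   # the serialized I/O events
                ("read",  "A.TXT", None)]

for event in event_stream:
    # Both engines see exactly the same stream; any divergence is a fault.
    assert primary.apply(event) == secondary.apply(event)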
NetWare 386 3.x was designed to run all applications on the server at the same level of processor memory protection, known as "ring 0". While this provided the best possible performance, it sacrificed reliability. The result was that crashes (known as abends, short for abnormal ends) were possible and would bring the whole system to a halt. Starting with NetWare 5.x, software modules (NetWare Loadable Modules or NLMs) could be assigned to run in different processor protection rings, ensuring that a software error would not crash the system.
NetWare 4.x
Version 4, released in 1993, introduced NetWare Directory Services, later re-branded as Novell Directory Services (NDS), based on X.500. NDS replaced the Bindery with a global directory service, in which the infrastructure was described and managed in a single place. Additionally, NDS provided an extensible schema, allowing the introduction of new object types. This allowed a single user authentication to NDS to govern access to any server in the directory tree structure. Users could therefore access network resources no matter on which server they resided, although user license counts were still tied to individual servers. (Large enterprises could opt for a license model giving them essentially unlimited per-server users if they let Novell audit their total user count.)
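Conceptually, NDS arranged all users, servers and other resources into one global tree of typed objects. The Python sketch below is only a toy model of that idea - the tree contents, object classes and slash-separated name syntax are all invented for illustration:

# Toy model of a directory tree with typed objects, in the spirit of NDS.
tree = {
    "O=Acme": {
        "OU=Engineering": {
            "CN=jsmith": {"class": "User"},
            "CN=FS1":    {"class": "Server"},
        },
    },
}

def resolve(dn):
    """Walk the tree from the root, e.g. 'O=Acme/OU=Engineering/CN=jsmith'."""
    node = tree
    for part in dn.split("/"):
        node = node[part]
    return node

print(resolve("O=Acme/OU=Engineering/CN=FS1"))   # {'class': 'Server'}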
Version 4 also introduced a number of useful tools and features, such as transparent compression at file system level and RSA public/private encryption.
Another new feature was the NetWare Asynchronous Services Interface (NASI). It allowed network sharing of multiple serial devices, such as modems. Client port redirection occurred via an MS-DOS or Microsoft Windows driver allowing companies to consolidate modems and analog phone lines.[2]
Strategic mistakes
Novell's strategy with NetWare 286 2.x and 3.x was very successful; before the arrival of Windows NT Server, Novell claimed 90% of the market for PC-based servers.
While the design of NetWare 3.x and later involved a DOS partition to load NetWare server files, this feature became a liability as new users preferred the Windows graphical interface to learning DOS commands necessary to build and control a NetWare server. Novell could have eliminated this technical liability by retaining the design of NetWare 286, which installed the server file into a Novell partition and allowed the server to boot from the Novell partition without creating a bootable DOS partition. Novell finally added support for this in a Support Pack for NetWare 6.5.
As Novell used IPX/SPX instead of TCP/IP, they were poorly positioned to take advantage of the Internet in 1995. This resulted in Novell servers being bypassed for routing and Internet access, in favor of hardware routers, Unix-based operating systems such as FreeBSD, and SOCKS and HTTP Proxy Servers on Windows and other operating systems.[citation needed]
NetWare 4.1x and NetWare for Small Business: Novell begins to recover
Novell priced NetWare 4.10 similarly to NetWare 3.12, allowing customers who resisted NDS (typically small businesses) to try it at no cost.
Later Novell released NetWare version 4.11 in 1996, which included many enhancements that made the operating system easier to install, easier to operate, faster, and more stable. It also included the first full 32-bit client for Microsoft Windows-based workstations, SMP support and the NetWare Administrator (NWADMIN or NWADMN32), a GUI-based administration tool for NetWare. Previous administration tools, such as SYSCON and PCONSOLE, used the character-based Cworthy interface with its blue text background. Some of these tools survive to this day, for instance MONITOR.NLM.
Novell packaged NetWare 4.11 with its Web server, TCP/IP support and Netscape browser into a bundle dubbed IntranetWare (also written as intraNetWare). A version designed for networks of 25 or fewer users was named IntranetWare for Small Business and contained a limited version of NDS and tried to simplify NDS administration. The intranetWare name was dropped in NetWare 5.
During this time Novell also began to leverage its directory service, NDS, by tying their other products into the directory. Their e-mail system, GroupWise, was integrated with NDS, and Novell released many other directory-enabled products such as ZENworks and BorderManager.
NetWare still required IPX/SPX as NCP used it, but Novell started to acknowledge the demand for TCP/IP with NetWare 4.11 by including tools and utilities that made it easier to create intranets and link networks to the Internet. Novell bundled tools, such as the IPX/IP gateway, to ease the connection between IPX workstations and IP networks. It also began integrating Internet technologies and support through features such as a natively hosted web server.
NetWare 5.x
With the release of NetWare 5 in October 1998, Novell finally acknowledged the prominence of the Internet by switching its primary NCP interface from the IPX/SPX network protocol to TCP/IP. IPX/SPX was still supported, but the emphasis shifted to TCP/IP. Novell also added a GUI to NetWare. Other new features were:
• Novell Storage Services (NSS), a new file system to replace the traditional NetWare File System - which was still supported
• Java virtual machine for NetWare
• Novell Distributed Print Services (NDPS)
• ConsoleOne, a new Java-based GUI administration console
• directory-enabled Public key infrastructure services (PKIS)
• directory-enabled DNS and DHCP servers
• support for Storage Area Networks (SANs)
• Novell Cluster Services (NCS)
• Oracle 8i with a 5-user license
The Cluster Services were a major advance over SFT-III, as NCS does not require specialized hardware or identical server configurations.
NetWare 5 was released during a time when NetWare market share dropped precipitously; many companies and organizations were replacing their NetWare servers with servers running Microsoft's Windows NT operating system. Novell also released their last upgrade to the NetWare 4 operating system, NetWare 4.2.
NetWare 5.1 was released in January 2000, shortly after its predecessor. It introduced a number of useful tools, such as:
• IBM WebSphere Application Server
• NetWare Management Portal (later renamed Novell Remote Manager), web-based management of the operating system
• FTP, NNTP and streaming media servers
• NetWare Web Search Server
• WebDAV support
NetWare 6.0
NetWare 6 was released in October 2001. This version introduced a simplified licensing scheme based on users rather than servers, allowing unlimited connections per user.
NetWare 6.5
NetWare 6.5 was released in August 2003. Some of the new features in this version were:
• more open-source products such as PHP, MySQL and OpenSSH
• a port of the Bash shell and many traditional Unix utilities such as wget, grep, awk and sed to provide additional capabilities for scripting
• iSCSI support (both target and initiator)
• Virtual Office - an "out of the box" web portal for end users providing access to e-mail, personal file storage, company address book, etc.
• Domain controller functionality
• Universal password
• DirXML Starter Pack - synchronization of user accounts with another eDirectory tree, a Windows NT domain or Active Directory.
• exteNd Application Server - a J2EE 1.3-compatible application server
• support for customized printer driver profiles and printer usage auditing
• NX bit support
• support for USB storage devices
• support for encrypted volumes
The latest - and apparently last - Service Pack for NetWare 6.5 is SP8, released October 2008.
Open Enterprise Server
Main article: Novell Open Enterprise Server
1.0
In 2003, Novell announced the successor product to NetWare: Open Enterprise Server (OES). First released in March 2005, OES completes the separation of the services traditionally associated with NetWare (e.g. Directory Services, file-and-print) from the platform underlying the delivery of those services. OES is essentially a set of applications (eDirectory, NetWare Core Protocol services, iPrint, etc.) that can run atop either a Linux or a NetWare kernel platform. Clustered OES implementations can even migrate services from Linux to NetWare and back again, making Novell one of the very few vendors to offer a multi-platform clustering solution.
Consequent to Novell's acquisitions of Ximian and SuSE, a German Linux distributor, it is widely observed that Novell is moving away from NetWare and shifting its focus towards Linux. Much recent marketing seems to be focused on getting faithful NetWare users to move to the Linux platform in future releases. The clearest indication of this direction is Novell's controversial decision to release Open Enterprise Server in Linux form only. Novell later watered down this decision and stated that NetWare's 90 million users would be supported until at least 2015. Some of Novell's more dedicated NetWare supporters have taken it upon themselves to petition Novell to keep NetWare in development.
2.0
OES 2 was released on October 8, 2007. It includes NetWare 6.5 SP7, which supports running as a paravirtualized guest inside the Xen hypervisor, and a new Linux-based version using SLES 10.
New features include:
• 64-bit support
• virtualization
• Dynamic Storage Technology, which provides shadow volumes
• Domain Services for Windows (provided in OES 2 Service Pack 1)
Current NetWare situation
While Novell NetWare is still used by some organizations, its ongoing decline in popularity began in the mid-1990s, when NetWare was the de facto standard for file and print software for the Intel x86 server platform. Modern (2009) NetWare and OES installations are used by larger organizations that may need the added flexibility they provide.
Microsoft successfully shifted market share away from NetWare products toward their own in the late 1990s. Microsoft's more aggressive marketing was aimed directly at management through major magazines; Novell marketed NetWare through IT specialist magazines with distribution limited to select IT personnel.
Novell did not adapt their pricing structure accordingly, and NetWare sales suffered at the hands of those corporate decision makers whose valuation was based on initial licensing fees. As a result, organizations that still use NetWare, eDirectory, and Novell software often have a hybrid infrastructure of NetWare, Linux, and Windows servers.
Netware Lite / Personal Netware
In 1991 Novell introduced a radically different and cheaper product - NetWare Lite - in answer to Artisoft's similar LANtastic. Both were peer-to-peer systems, in which no dedicated server was required; instead, all PCs on the network could share their resources.
The product line became Personal NetWare in 1993.
Performance
NetWare dominated the network operating system (NOS) market from the mid-1980s through the mid-to-late 1990s due to its extremely high performance relative to other NOS technologies. Most benchmarks during this period demonstrated a 5:1 to 10:1 performance advantage over products from Microsoft, Banyan, and others. One noteworthy benchmark pitted NetWare 3.x running NFS services over TCP/IP (not NetWare's native IPX protocol) against a dedicated Auspex NFS server and a SCO Unix server running NFS service. NetWare NFS outperformed both 'native' NFS systems and claimed a 2:1 performance advantage over SCO Unix NFS on the same hardware. There were several reasons for NetWare's performance.
File service instead of disk service
At the time NetWare was first developed, nearly all LAN storage was based on the disk server model. This meant that if a client computer wanted to read a particular block from a particular file it would have to issue the following requests across the relatively slow LAN:
1. Read the first block of the directory
2. Continue reading subsequent directory blocks until the directory block containing the information on the desired file was found (this could take many reads)
3. Read through multiple file entry blocks until the block containing the location of the desired data block was found (again, potentially many reads)
4. Read the desired data block
NetWare, since it was based on a file service model, interacted with the client at the file API level:
1. Send file open request (if this hadn't already been done)
2. Send a request for the desired data from the file
All of the work of searching the directory to figure out where the desired data was physically located on the disk was performed at high speed locally on the server. By the mid-1980s, most NOS products had shifted from the disk service to the file service model. Today, the disk service model is making a comeback; see storage area networks (SANs).
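The traffic difference is easy to put numbers on. The Python sketch below is back-of-the-envelope only - the block counts are invented - but it shows why the file service model scales so much better over a slow LAN:

def disk_service_reads(directory_blocks, file_entry_blocks):
    # Disk service: every block lookup is its own request over the LAN.
    return directory_blocks + file_entry_blocks + 1     # +1 for the data block

def file_service_reads(file_already_open):
    # File service: the server resolves the path locally; the client sends
    # at most an open request plus a single read request.
    return 1 if file_already_open else 2

print(disk_service_reads(4, 3))    # 8 LAN round trips for one block of data
print(file_service_reads(False))   # 2 LAN round trips for the same result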
Aggressive caching
From the start, NetWare was designed to be used on servers with copious amounts of RAM. The entire file allocation table (FAT) was read into RAM when a volume was mounted, thereby requiring a minimum amount of RAM proportional to online disk space; adding a disk to a server would often require a RAM upgrade as well. Unlike most competing network operating systems prior to Windows NT, NetWare automatically used all otherwise unused RAM for caching active files, employing delayed write-backs to facilitate re-ordering of disk requests (elevator seeks). An unexpected shutdown could therefore corrupt data, making an uninterruptible power supply practically a mandatory part of a server installation.
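A minimal Python sketch of this delayed write-back scheme, with a stub standing in for the disk driver: writes are acknowledged immediately from cache and flushed later in ascending block order, so the head makes one sweep across the disk (the elevator):

dirty_cache = {}                     # block number -> data awaiting flush

def disk_write(block, data):
    print(f"flushing block {block}") # stand-in for a real disk driver call

def write(block, data):
    dirty_cache[block] = data        # acknowledged now; disk write deferred

def flush():
    for block in sorted(dirty_cache):          # elevator order: one head sweep
        disk_write(block, dirty_cache[block])
    dirty_cache.clear()

write(90, b"x"); write(7, b"y"); write(42, b"z")
flush()                              # flushes blocks 7, 42, 90 in that order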
The default dirty cache delay time was fixed at 2.2 seconds in NetWare 286 versions 2.x. Starting with NetWare 386 3.x, the dirty disk cache delay time and dirty directory cache delay time settings controlled the amount of time the server would cache changed ("dirty") data before saving (flushing) the data to a hard drive. The default setting of 3.3 seconds could be decreased to 0.5 seconds but not reduced to zero, while the maximum delay was 10 seconds. The option to increase the cache delay to 10 seconds provided a significant performance boost. Windows 2000 and Windows Server 2003 do not allow adjustment of the cache delay time; instead, they use an algorithm that adjusts the cache delay automatically.
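On the NetWare console these settings were adjusted with SET commands, for example SET DIRTY DISK CACHE DELAY TIME = 0.5 (the value shown here is only an example).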
Efficiency of NetWare Core Protocol (NCP)
Most network protocols in use at the time NetWare was developed didn't trust the network to deliver messages. A typical client file read would work something like this:
1. Client sends read request to server
2. Server acknowledges request
3. Client acknowledges acknowledgement
4. Server sends requested data to client
5. Client acknowledges data
6. Server acknowledges acknowledgement
In contrast, NCP was based on the idea that networks worked perfectly most of the time, so the reply to a request served as the acknowledgement. Here is an example of a client read request using this model:
1. Client sends read request to server
2. Server sends requested data to client
All requests contained a sequence number, so if the client didn't receive a response within an appropriate amount of time it would re-send the request with the same sequence number. If the server had already processed the request, it would resend the cached response; if it had not yet had time to process the request, it would send only a "positive acknowledgement". The bottom line of this 'trust the network' approach was a two-thirds reduction in network transactions and the associated latency.
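A minimal Python sketch of this sequence-number scheme, with invented names and message formats: the reply doubles as the acknowledgement, and a duplicate request is answered from the cached reply rather than re-executed.

class Server:
    def __init__(self):
        self.last_seq = None          # sequence number of the last request served
        self.last_reply = None        # cached reply, kept for retransmission

    def handle(self, seq, request):
        if seq == self.last_seq:      # duplicate: the reply must have been lost
            return self.last_reply    # resend the cached response; don't redo work
        reply = f"data for {request}" # stand-in for the real file read
        self.last_seq, self.last_reply = seq, reply
        return reply

class Client:
    def __init__(self, server):
        self.server, self.seq = server, 0

    def read(self, path):
        self.seq += 1
        # One request, one reply - the reply itself is the acknowledgement.
        return self.server.handle(self.seq, path)

client = Client(Server())
print(client.read("SYS:PUBLIC/README.TXT"))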

Non-preemptive OS designed for network services
One of the raging debates of the 1990s was whether it was more appropriate for network file service to be performed by a software layer running on top of a general-purpose operating system, or by a special-purpose operating system. NetWare was a special-purpose operating system, not a timesharing OS. It was written from the ground up as a platform for client-server processing services. Initially it focused on file and print services, but later demonstrated its flexibility by running database, email, web and other services as well. It also performed efficiently as a router, supporting IPX, TCP/IP, and AppleTalk, though it never offered the flexibility of a 'hardware' router.
In 4.x and earlier versions, NetWare did not support preemption, virtual memory, graphical user interfaces, etc. Processes and services running under the NetWare OS were expected to be cooperative, that is to process a request and return control to the OS in a timely fashion. On the down side, this trust of application processes to manage themselves could lead to a misbehaving application bringing down the server.
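This cooperative model can be sketched with Python generators standing in for NetWare processes (purely illustrative): each service runs until it voluntarily yields, and a service that never yielded would starve all the others - exactly the failure mode described above.

from collections import deque

def service(name, steps):
    for i in range(steps):
        print(f"{name}: unit of work {i}")
        yield                          # voluntarily return control to the OS

ready = deque([service("file", 2), service("print", 2)])
while ready:
    task = ready.popleft()
    try:
        next(task)                     # run the task until it yields
        ready.append(task)             # well-behaved: reschedule it
    except StopIteration:
        pass                           # task finished; drop it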
By comparison, general-purpose operating systems such as Unix or Microsoft Windows were based on an interactive, time-sharing model where competing programs would consume all available resources if not held in check by the operating system. Such environments operated by preemption, memory virtualization, and so on, generating significant overhead because there were never enough resources to do everything every application desired. These systems improved over time as network services shed their "application" stigma and moved deeper into the kernel of the "general purpose" OS, but they never equaled the efficiency of NetWare.

Probably the single greatest reason for Novell's success during the 1980s and 1990s was the efficiency of NetWare compared to general-purpose operating systems. However, as microprocessors increased in power, efficiency became less and less of an issue. With the introduction of the Pentium processor, NetWare's performance advantage began to be outweighed by the complexity of managing and developing applications for the NetWare environment.

novell netware

Novell NetWare


The NetWare console screen (August 22, 2006)
Company / developer
Novell, Inc.

Working state Current
Source model Closed source

Initial release 1983
Latest stable release
6.5 SP8 / May 6, 2009
Available language(s)
English

Kernel type
Hybrid kernel

Default user interface
Command line interface

License
Proprietary

Official Website
www.novell.com

NetWare is a network operating system developed by Novell, Inc. It initially used cooperative multitasking to run various services on a personal computer, and the network protocols were based on the archetypal Xerox Network Systems stack.
NetWare has been superseded by Open Enterprise Server (OES). The latest version of NetWare is v6.5 Support Pack 8, which is identical to OES 2 SP1, NetWare Kernel.
History
NetWare evolved from a very simple concept: file sharing instead of disk sharing. In 1983 when the first versions of NetWare were designed, all other competing products were based on the concept of providing shared direct disk access. Novell's alternative approach was validated by IBM in 1984 and helped promote their product.
With Novell NetWare, disk space was shared in the form of NetWare volumes, comparable to DOS volumes. Clients running MS-DOS would run a special terminate and stay resident (TSR) program that allowed them to map a local drive letter to a NetWare volume. Clients had to log in to a server in order to be allowed to map volumes, and access could be restricted according to the login name. Similarly, they could connect to shared printers on the dedicated server, and print as if the printer was connected locally.
At the end of the 1990s, with Internet connectivity booming, the Internet's TCP/IP protocol became dominant on LANs. Novell had introduced limited TCP/IP support in NetWare v3.x (circa 1992) and v4.x (circa 1995), consisting mainly of FTP services and UNIX-style LPR/LPD printing (available in NetWare v3.x), and a Novell-developed webserver (in NetWare v4.x). Native TCP/IP support for the client file and print services normally associated with NetWare was introduced in NetWare v5.0 (released in 1998).
During the early-to-mid 1980s Microsoft introduced their own LAN system in LAN Manager based on the competing NBF protocol. Early attempts to muscle in on NetWare were not successful, but this changed with the inclusion of improved networking support in Windows for Workgroups, and then the hugely successful Windows NT and Windows 95. NT, in particular, offered services similar to those offered by NetWare, but on a system that could also be used on a desktop, and connected directly to other Windows desktops where NBF was now almost universal.
The rise of NetWare
The popular use and growth of Novell NetWare began in 1985 with the simultaneous release of NetWare 286 2.0a and the Intel 80286 16-bit processor. The 80286 CPU featured a new 16-bit protected mode that provided access to up to 16 MB RAM as well as new mechanisms to aid multi-tasking. Prior to the 80286 CPU servers were based on the Intel 8086/8088 8/16-bit processors, which were limited to an address space of 1MB with not more than 640 KB of directly addressable RAM.
The combination of a higher 16 MB RAM limit, 80286 processor feature utilization, and 256 MB NetWare volume size limit allowed reliable, cost-effective server-based local area networks to be built for the first time. The 16 MB RAM limit was especially important, since it made enough RAM available for disk caching to significantly improve performance. This became the key to Novell's performance while also allowing larger networks to be built.
Another significant difference of NetWare 286 was that it was hardware-independent, unlike competing server systems from 3Com. Novell servers could be assembled using any brand system with an Intel 80286 or higher CPU, any MFM, RLL, ESDI, or SCSI hard drive and any 8- or 16-bit network adapter for which Netware drivers were available.
Novell also designed a compact and simple DOS client software program that allowed DOS stations to connect to a server and access the shared server hard drive. While the NetWare server file system introduced a new, proprietary file system design, it looked like a standard DOS volume to the workstation, ensuring compatibility with all existing DOS programs.
Early years
NetWare was based on the consulting work by SuperSet Software, a group founded by the friends Drew Major, Dale Neibaur, Kyle Powell and later Mark Hurst. This work was based on their classwork at Brigham Young University in Provo, Utah, starting in October 1981.
In 1983, Raymond Noorda engaged the work by the SuperSet team. The team was originally assigned to create a CP/M disk sharing system to help network the CP/M hardware that Novell was selling at the time. The team was privately convinced that CP/M was a doomed platform and instead came up with a successful file sharing system for the newly introduced IBM-compatible PC. They also wrote an application called Snipes, a text-mode game and used it to test the new network and demonstrate its capabilities. Snipes was the first network application ever written for a commercial personal computer, and it is recognized as one of the precursors of many popular multiplayer games such as Doom and Quake.
This network operating system (NOS) was later called Novell NetWare. NetWare was based on the NetWare Core Protocol (NCP), which is a packet-based protocol that enables a client to send requests to and receive replies from a NetWare server. Initially NCP was directly tied to the IPX/SPX protocol, and NetWare communicated natively using only IPX/SPX.
The first product to bear the NetWare name was released in 1983. It was called Netware 68 (aka S-Net); it ran on the Motorola 68000 processor on a proprietary Novell-built file server and used a star network topology. This was soon joined by NetWare 86 V4.x, which was written for the Intel 8086. This was replaced in 1985 with Advanced NetWare 86 version 1.0a which allowed more than one server on the same network. In 1986, after the Intel 80286 processor became available, Novell released Advanced NetWare 286 V1.0a and subsequently V2.0B (that used IPX routing to allow up to 4 network cards in a server). In 1989, with the Intel 80386 available, Novell released NetWare 386. Later Novell consolidated the numbering of their NetWare releases, with NetWare 386 becoming NetWare 3.x.
NetWare 286 2.x
NetWare version 2 was notoriously difficult to configure, since the operating system was provided as a set of compiled object modules that required configuration and linking. Compounding this inconvenience was that the process was designed to run from multiple diskettes, which was slow and unreliable. Any change to the operating system required a re-linking of the kernel and a reboot of the system, requiring at least 20 diskette swaps. An additional complication in early versions was that the installation contained a proprietary low-level format program for MFM hard drives, which was run automatically before the software could be loaded, called COMPSURF.
NetWare was administered using text-based utilities such as SYSCON. The file system used by NetWare 2 was NetWare File System 286, or NWFS 286, supporting volumes of up to 256 MB. NetWare 286 recognized 80286 protected mode, extending NetWare's support of RAM from 1 MB to the full 16 MB addressable by the 80286. A minimum of 2 MB was required to start up the operating system; any additional RAM was used for FAT, DET and file caching. Since 16-bit protected mode was implemented the i80286 and every subsequent Intel x86 processor, NetWare 286 version 2.x would run on any 80286 or later compatible processor.
NetWare 2 implemented a number of features inspired by mainframe and minicomputer systems that were not available in other operating systems of the day. The System Fault Tolerance (SFT) features included standard read-after-write verification (SFT-I) with on-the-fly bad block re-mapping (at the time, disks did not have that feature built in) and software RAID1 (disk mirroring, SFT-II). The Transaction Tracking System (TTS) optionally protected files against incomplete updates. For single files, this required only a file attribute to be set. Transactions over multiple files and controlled roll-backs were possible by programming to the TTS API.
NetWare 286 2.x supported two modes of operation: dedicated and non-dedicated. In dedicated mode, the server used DOS only as a boot loader to execute the operating system file net$os.exe. All memory was allocated to NetWare; no DOS ran on the server. For non-dedicated operation, DOS 3.3 or higher would remain in memory, and the processor would time-slice between the DOS and NetWare programs, allowing the server computer to be used simultaneously as network file server and as a user workstation. All extended memory (RAM above 1 MB) was allocated to NetWare, so DOS was limited to only 640kB; an expanded memory manager would not work because NetWare 286 had control of 80286 protected mode and the upper RAM, both of which were required for DOS to use expanded memory. Time slicing was accomplished using the keyboard interrupt. This feature required strict compliance with the IBM PC design model, otherwise performance was affected. Non-dedicated NetWare was popular on small networks, although it was more susceptible to lockups due to DOS program problems. In some implementations, users would experience significant network slowdown when someone was using the console as a workstation. NetWare 386 3.x and later supported only dedicated operation.
Server licensing on early versions of NetWare 286 was accomplished by using a key card. The key card was designed for an 8-bit ISA bus, and had a serial number encoded on a ROM chip. The serial number had to match the serial number of the NetWare software running on the server. To broaden the hardware base, particularly to machines using the IBM MCA bus, later versions of NetWare 2.x did not require the key card; serialised license floppy disks were used in place of the key cards.
NetWare 3.x
Starting with NetWare 3.x, support for 32-bit protected mode was added, eliminating the 16 mb memory limit of NetWare 286. This allowed larger hard drives to be supported, since NetWare 3.x cached (copied) the entire file allocation table (FAT) and directory entry table (DET) into memory for improved performance.
By accident or design, the initial releases of the client TSR programs modified the high 16 bits of the 32-bit 80386 registers, making them unusable by any other program until this was fixed. The problem was noticed by Phil Katz who added a switch to his PKZIP suite of programs to enable 32-bit register use only when the Netware TSRs were not present.
NetWare version 3 eased development and administration by modularization. Each functionality was controlled by a software module called a NetWare Loadable Module (NLM) loaded either at startup or when it was needed. It was then possible to add functionality such as anti-virus software, backup software, database and web servers, long name support (standard filenames were limited to 8 characters plus a three letter extension, matching MS-DOS) or Macintosh style files.
NetWare continued to be administered using console-based utilities. The file system introduced by NetWare 3.x and used by default until NetWare 5.x was NetWare File System 386, or NWFS 386, which significantly extended volume capacity (1 TB, 4 GB files) and could handle up to 16 volume segments spanning multiple physical disk drives. Volume segments could be added while the server was in use and the volume was mounted, allowing a server to be expanded without interruption.
Initially, NetWare used Bindery services for authentication. This was a stand-alone database system where all user access and security data resided individually on each server. When an infrastructure contained more than one server, users had to log-in to each of them individually, and each server had to be configured with the list of all allowed users.
"NetWare Name Services" was a product that allowed user data to be extended across multiple servers, and the Windows "Domain" concept is functionally equivalent to NetWare v3.x Bindery services with NetWare Name Services added on (e.g. a 2-dimensional database, with a flat namespace and a static schema).
For a while, Novell also marketed an OEM version of NetWare 3, called Portable NetWare, together with OEMs such as Hewlett-Packard, DEC and Data General, who ported Novell source code to run on top of their Unix operating systems. Portable NetWare did not sell well.
While Netware 3.x was current, Novell introduced its first high-availability clustering system, named NetWare SFT-III, which allowed a logical server to be completely mirrored to a separate physical machine. Implemented as a shared-nothing cluster, under SFT-III the OS was logically split into an interrupt-driven I/O engine and the event-driven OS core. The I/O engines serialized their interrupts (disk, network etc.) into a combined event stream that was fed to two identical copies of the system engine through a fast (typically 100 Mbit/s) inter-server link. Because of its non-preemptive nature, the OS core, stripped of non-deterministic I/O, behaves deterministically, like a large finite state machine.
The outputs of the two system engines were compared to ensure proper operation, and two copies fed back to the I/O engines. Using the existing SFT-II software RAID functionality present in the core, disks could be mirrored between the two machines without special hardware. The two machines could be separated as far as the server-to-server link would permit. In case of a server or disk failure, the surviving server could take over client sessions transparently after a short pause since it had full state information and did not, for example, have to re-mount the volumes - a process at which NetWare was notoriously slow. SFT-III was the first NetWare version able to make use of SMP hardware - the I/O engine could optionally be run on its own CPU. The modern incarnation of NetWare's clustering, Novell Cluster Services (introduced in NetWare v5.0), is very different from SFT-III. NetWare SFT-III, ahead of its time in several ways, was a mixed success.
NetWare 386 3.x was designed to run all applications on the server at the same level of processor memory protection, known as "ring 0". While this provided the best possible performance, it sacrificed reliability. The result was that crashing (known as abends, short for abnormal ends) were possible and would result in stopping the system. Starting with NetWare 5.x, software modules (NetWare Loadable Modules or NLM's) could be assigned to run in different processor protection rings, ensuring that a software error would not crash the system.
NetWare 4.x
Version 4 in 1993 also introduced NetWare Directory Services, later re-branded as Novell Directory Services (NDS), based on X.500, which replaced the Bindery with a global directory service, in which the infrastructure was described and managed in a single place. Additionally, NDS provided an extensible schema, allowing the introduction of new object types. This allowed a single user authentication to NDS to govern access to any server in the directory tree structure. Users could therefore access network resources no matter on which server they resided, although user license counts were still tied to individual servers. (Large enterprises could opt for a license model giving them essentially unlimited per-server users if they let Novell audit their total user count)
Version 4 also introduced a number of useful tools and features, such as transparent compression at file system level and RSA public/private encryption.
Another new feature was the NetWare Asynchronous Services Interface (NASI). It allowed network sharing of multiple serial devices, such as modems. Client port redirection occurred via an MS-DOS or Microsoft Windows driver allowing companies to consolidate modems and analog phone lines.[2]
Strategic mistakes
Novell's strategy with NetWare 286 2.x and 3.x was very successful; before the arrival of Windows NT Server, Novell claimed 90% of the market for PC based servers.
While the design of NetWare 3.x and later involved a DOS partition to load NetWare server files, this feature became a liability as new users preferred the Windows graphical interface to learning DOS commands necessary to build and control a NetWare server. Novell could have eliminated this technical liability by retaining the design of NetWare 286, which installed the server file into a Novell partition and allowed the server to boot from the Novell partition without creating a bootable DOS partition. Novell finally added support for this in a Support Pack for NetWare 6.5.
As Novell used IPX/SPX instead of TCP/IP, they were poorly positioned to take advantage of the Internet in 1995. This resulted in Novell servers being bypassed for routing and Internet access, in favor of hardware routers, Unix-based operating systems such as FreeBSD, and SOCKS and HTTP Proxy Servers on Windows and other operating systems.[citation needed]
NetWare 4.1x and NetWare for Small Business: Novell begins to recover
Novell priced NetWare 4.10 similarly to NetWare 3.12, allowing customers who resisted NDS (typically small businesses) to try it at no cost.
Later Novell released NetWare version 4.11 in 1996 which included many enhancements that made the operating system easier to install, easier to operate, faster, and more stable. It also included the first full 32-bit client for Microsoft Windows-based workstations, SMP support and the NetWare Administrator (NWADMIN or NWADMN32), a GUI-based administration tool for NetWare. Previous administration tools used the Cworthy interface, the character-based GUI tools such as SYSCON and PCONSOLE with blue text-based background. Some of these tools survive to this day, for instance MONITOR.NLM.
Novell packaged NetWare 4.11 with its Web server, TCP/IP support and Netscape browser into a bundle dubbed IntranetWare (also written as intraNetWare). A version designed for networks of 25 or fewer users was named IntranetWare for Small Business and contained a limited version of NDS and tried to simplify NDS administration. The intranetWare name was dropped in NetWare 5.
During this time Novell also began to leverage its directory service, NDS, by tying their other products into the directory. Their e-mail system, GroupWise, was integrated with NDS, and Novell released many other directory-enabled products such as ZENworks and BorderManager.
NetWare still required IPX/SPX as NCP used it, but Novell started to acknowledge the demand for TCP/IP with NetWare 4.11 by including tools and utilities that made it easier to create intranets and link networks to the Internet. Novell bundled tools, such as the IPX/IP gateway, to ease the connection between IPX workstations and IP networks. It also began integrating Internet technologies and support through features such as a natively hosted web server.
NetWare 5.x
With the release of NetWare 5 in October 1998, Novell finally acknowledged the prominence of the Internet by switching its primary NCP interface from the IPX/SPX network protocol to TCP/IP. IPX/SPX was still supported, but the emphasis shifted to TCP/IP. Novell also added a GUI to NetWare. Other new features were:
• Novell Storage Services (NSS), a new file system to replace the traditional NetWare File System - which was still supported
• Java virtual machine for NetWare
• Novell Distributed Print Services (NDPS)
• ConsoleOne, a new Java-based GUI administration console
• directory-enabled Public key infrastructure services (PKIS)
• directory-enabled DNS and DHCP servers
• support for Storage Area Networks (SANs)
• Novell Cluster Services (NCS)
• Oracle 8i with a 5-user license
The Cluster Services were a major advance over SFT-III, as NCS does not require specialized hardware or identical server configurations.
NetWare 5 was released during a time when NetWare market share dropped precipitously; many companies and organizations were replacing their NetWare servers with servers running Microsoft's Windows NT operating system. Novell also released their last upgrade to the NetWare 4 operating system, NetWare 4.2.
NetWare 5.1 was released in January 2000, shortly after its predecessor. It introduced a number of useful tools, such as:
• IBM WebSphere Application Server
• NetWare Management Portal (later renamed Novell Remote Manager), web-based management of the operating system
• FTP, NNTP and streaming media servers
• NetWare Web Search Server
• WebDAV support
NetWare 6.0
NetWare 6 was released in October 2001. This version has a simplified licensing scheme based on users, not servers. This allows unlimited connections per user.
NetWare 6.5
NetWare 6.5 was released in August 2003. Some of the new features in this version were:
• more open-source products such as PHP, MySQL and OpenSSH
• a port of the Bash shell and a lot of traditional Unix utilities such as wget, grep, awk and sed to provide additional capabilities for scripting
• iSCSI support (both target and initiator)
• Virtual Office - an "out of the box" web portal for end users providing access to e-mail, personal file storage, company address book, etc.
• Domain controller functionality
• Universal password
• DirXML Starter Pack - synchronization of user accounts with another eDirectory tree, a Windows NT domain or Active Directory.
• exteNd Application Server - a J2EE 1.3-compatible application server
• support for customized printer driver profiles and printer usage auditing
• NX bit support
• support for USB storage devices
• support for encrypted volumes
The latest - and apparently last - Service Pack for Netware 6.5 is SP8, released October 2008.
Open Enterprise Server
Main article: Novell Open Enterprise Server
1.0
In 2003, Novell announced the successor product to NetWare: Open Enterprise Server (OES). First released in March 2005, OES completes the separation of the services traditionally associated with NetWare (e.g. Directory Services, file-and-print) from the platform underlying the delivery of those services. OES is essentially a set of applications (eDirectory, NetWare Core Protocol services, iPrint, etc.) that can run atop either a Linux or a NetWare kernel platform. Clustered OES implementations can even migrate services from Linux to NetWare and back again, making Novell one of the very few vendors to offer a multi-platform clustering solution.
Consequent to Novell's acquisitions of Ximian and SuSE, a German Linux distributor, it is widely observed that Novell is moving away from NetWare and shifting its focus towards Linux. Much recent marketing seems to be focussed on getting faithful NetWare users to move to the Linux platform in future releases. The clearest indication of this direction is Novell's controversial decision to release Open Enterprise Server in Linux form only. Novell later watered down this decision and stated that NetWare's 90 million users would be supported until at least 2015. Some of Novell's more perverse NetWare supporters have taken it upon themselves to petition Novell to keep NetWare in development.
2.0
OES 2 was released on October 8, 2007. It includes NetWare 6.5 SP7, which supports running as a paravirtualized guest inside the Xen hypervisor and new Linux based version using SLES10.
New features include
• 64bit support
• Virtualization
• Dynamic Storage Technology, which provide Shadow Volumes
• Domain services for Windows (provided in OES 2 service pack 1)
Current NetWare situation
While Novell NetWare is still used by some organizations, its ongoing decline in popularity began in the mid-1990s, when NetWare was the de facto standard for file and print software for the Intel x86 server platform. Modern (2009) NetWare and OES installations are used by larger organizations that may need the added flexibility they provide.
Microsoft successfully shifted market share away from NetWare products toward their own in the late-1990s. Microsoft's more aggressive marketing was aimed directly to management through major magazines; Novell NetWare's was through IT specialist magazines with distribution limited to select IT personnel.
Novell did not adapt their pricing structure accordingly and NetWare sales suffered at the hands of those corporate decision makers whose valuation was based on initial licensing fees. As a result organizations that still use NetWare, eDirectory, and Novell software often have a hybrid infrastructure of NetWare, Linux, and Windows servers.
NetWare Lite / Personal NetWare
In 1991 Novell introduced a radically different and cheaper product - NetWare Lite - in answer to Artisoft's similar LANtastic. Both were peer-to-peer systems, in which no dedicated server was required; instead, all PCs on the network could share their resources.
The product line became Personal NetWare in 1993.
Performance
NetWare dominated the network operating system (NOS) market from the mid-1980s through the mid-to-late 1990s due to its extremely high performance relative to other NOS technologies. Most benchmarks during this period demonstrated a 5:1 to 10:1 performance advantage over products from Microsoft, Banyan, and others. One noteworthy benchmark pitted NetWare 3.x running NFS services over TCP/IP (not NetWare's native IPX protocol) against a dedicated Auspex NFS server and a SCO Unix server running NFS service. NetWare NFS outperformed both 'native' NFS systems and claimed a 2:1 performance advantage over SCO Unix NFS on the same hardware. There were several reasons for NetWare's performance.
File service instead of disk service
At the time NetWare was first developed, nearly all LAN storage was based on the disk server model. This meant that if a client computer wanted to read a particular block from a particular file it would have to issue the following requests across the relatively slow LAN:
1. Read the first block of the directory
2. Continue reading subsequent directory blocks until the directory block containing the information on the desired file was found (this could take many directory-block reads)
3. Read through multiple file entry blocks until the block containing the location of the desired data block was found (again, potentially many reads)
4. Read the desired data block
NetWare, since it was based on a file service model, interacted with the client at the file API level:
1. Send file open request (if this hadn't already been done)
2. Send a request for the desired data from the file
All of the work of searching the directory to figure out where the desired data was physically located on the disk was performed at high speed locally on the server. By the mid-1980s, most NOS products had shifted from the disk service to the file service model. Today, the disk service model is making a comeback; see storage area networks (SANs).
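As a rough illustration of why the file service model saves so much traffic, here is a toy Python sketch (invented names; the block counts are illustrative, not NetWare measurements) that counts one list entry per LAN round trip under each model.

# Toy comparison of the two models; each list element is one round trip.

def disk_service_reads(directory_blocks_searched, file_entry_blocks_searched):
    """Disk service: the client walks the on-disk structures itself."""
    trips = []
    trips += ["read directory block"] * directory_blocks_searched
    trips += ["read file entry block"] * file_entry_blocks_searched
    trips.append("read desired data block")
    return trips

def file_service_reads():
    """File service: the server walks the structures; the client talks files."""
    return ["send file open request", "send read request for desired data"]

if __name__ == "__main__":
    disk = disk_service_reads(directory_blocks_searched=4,
                              file_entry_blocks_searched=2)
    print("disk service:", len(disk), "round trips")                  # 7 here
    print("file service:", len(file_service_reads()), "round trips")  # always 2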
Aggressive caching
From the start, NetWare was designed to be used on servers with copious amounts of RAM. The entire file allocation table (FAT) was read into RAM when a volume was mounted, thereby requiring a minimum amount of RAM proportional to online disk space; adding a disk to a server would often require a RAM upgrade as well. Unlike most competing network operating systems prior to Windows NT, NetWare automatically used all otherwise unused RAM for caching active files, employing delayed write-backs to facilitate re-ordering of disk requests (elevator seeks). An unexpected shutdown could therefore corrupt data, making an uninterruptible power supply practically a mandatory part of a server installation.
The default dirty cache delay time was fixed at 2.2 seconds in NetWare 286 versions 2.x. Starting with NetWare 386 3.x, the dirty disk cache delay time and dirty directory cache delay time settings controlled how long the server would cache changed ("dirty") data before saving (flushing) it to the hard drive. The default setting of 3.3 seconds could be decreased to 0.5 seconds but not reduced to zero, while the maximum delay was 10 seconds. Increasing the cache delay to 10 seconds provided a significant performance boost. Windows 2000 and Windows Server 2003 do not allow the cache delay time to be adjusted; instead, they use an algorithm that adjusts the delay automatically.
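To make the mechanism concrete, here is a minimal Python sketch of a delayed write-back cache with a configurable dirty delay. It is illustrative only (the class and method names are invented, not NetWare code): changed blocks sit in memory until their delay expires, which is what lets writes be batched and reordered, and also what makes an unexpected power loss dangerous.

import time

class WriteBackCache:
    """Toy delayed write-back cache; dirty_delay plays the role of the
    dirty disk cache delay time setting described above."""

    def __init__(self, dirty_delay=3.3):
        self.dirty_delay = dirty_delay
        self.dirty = {}   # block number -> (data, time block first became dirty)

    def write(self, block, data):
        now = time.monotonic()
        # Keep the time of the FIRST write so repeated writes to a hot
        # block cannot postpone its flush deadline forever.
        first = self.dirty.get(block, (None, now))[1]
        self.dirty[block] = (data, first)

    def flush_due(self):
        """Flush every block that has been dirty longer than the delay."""
        now = time.monotonic()
        due = [b for b, (_, t) in self.dirty.items()
               if now - t >= self.dirty_delay]
        for b in sorted(due):   # sorted order: crude stand-in for elevator seeks
            data, _ = self.dirty.pop(b)
            print("flushing block", b, "->", data)

if __name__ == "__main__":
    cache = WriteBackCache(dirty_delay=0.0)  # zero delay so the demo flushes now
    cache.write(12, b"new data")
    cache.write(3, b"more data")
    cache.flush_due()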
Efficiency of NetWare Core Protocol (NCP)
Most network protocols in use at the time NetWare was developed didn't trust the network to deliver messages. A typical client file read would work something like this:
1. Client sends read request to server
2. Server acknowledges request
3. Client acknowledges acknowledgement
4. Server sends requested data to client
5. Client acknowledges data
6. Server acknowledges acknowledgement
In contrast, NCP was based on the idea that networks worked perfectly most of the time, so the reply to a request served as the acknowledgement. Here is an example of a client read request using this model:
1. Client sends read request to server
2. Server sends requested data to client
All requests contained a sequence number, so if the client didn't receive a response within an appropriate amount of time it would re-send the request with the same sequence number. If the server had already processed the request, it would resend the cached response; if it had not yet had time to process the request, it would send only a "positive acknowledgement". The bottom line of this 'trust the network' approach was a two-thirds reduction in network transactions and the associated latency.
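The sequence-number logic is simple enough to sketch. The following toy Python server (invented names; a sketch of the idea, not actual NCP code) answers each request directly and caches its last reply per client, so a re-sent request with the same sequence number is satisfied from the cache instead of being executed twice.

class TrustingServer:
    """Toy version of 'the reply is the acknowledgement': the server
    remembers its last reply per client so a re-sent request (same
    sequence number) is answered from the cache, not executed twice."""

    def __init__(self):
        self.last = {}   # client id -> (sequence number, cached reply)

    def handle(self, client, seq, request):
        seen = self.last.get(client)
        if seen is not None and seen[0] == seq:
            return seen[1]                 # duplicate request: resend cached reply
        reply = "data for " + request      # stand-in for actually reading the block
        self.last[client] = (seq, reply)
        return reply

def client_call(server, client, seq, request, reply_was_lost=False):
    reply = server.handle(client, seq, request)
    if reply_was_lost:                     # timeout: re-send with the SAME number
        reply = server.handle(client, seq, request)
    return reply

if __name__ == "__main__":
    s = TrustingServer()
    print(client_call(s, "pc1", 1, "block 7"))
    print(client_call(s, "pc1", 2, "block 8", reply_was_lost=True))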

Non-preemptive OS designed for network services
One of the raging debates of the 1990s was whether it was more appropriate for network file service to be performed by a software layer running on top of a general purpose operating system, or by a special purpose operating system. NetWare was a special purpose operating system, not a timesharing OS. It was written from the ground up as a platform for client-server processing services. Initially it focused on file and print services, but later demonstrated its flexibility by running database, email, web and other services as well. It also performed efficiently as a router, supporting IPX, TCP/IP, and AppleTalk, though it never offered the flexibility of a 'hardware' router.
In 4.x and earlier versions, NetWare did not support preemption, virtual memory, graphical user interfaces, etc. Processes and services running under the NetWare OS were expected to be cooperative, that is, to process a request and return control to the OS in a timely fashion. On the downside, this trust in application processes to manage themselves could lead to a misbehaving application bringing down the server.
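The cooperative model is easy to demonstrate with Python generators. In this toy sketch (invented names; NetWare services were native code, not Python), each task runs one unit of work and then voluntarily yields; a task that looped without ever yielding would stall the whole loop, which is exactly how one misbehaving module could bring down a cooperative server.

def task(name, items):
    """A well-behaved cooperative task: one unit of work per turn, then yield."""
    for item in items:
        print(name, "handled", item)
        yield                         # voluntarily return control to the OS

def run_cooperatively(tasks):
    """Round-robin loop that depends entirely on tasks yielding."""
    while tasks:
        current = tasks.pop(0)
        try:
            next(current)             # run the task until its next yield
            tasks.append(current)     # not finished: queue it again
        except StopIteration:
            pass                      # task ran to completion

if __name__ == "__main__":
    run_cooperatively([task("print job", ["page 1", "page 2"]),
                       task("file copy", ["block A", "block B"])])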
By comparison, general purpose operating systems such as Unix or Microsoft Windows were based on an interactive, time-sharing model, in which competing programs would consume all available resources if not held in check by the operating system. Such environments operated by preemption, memory virtualization, etc., generating significant overhead, because there were never enough resources to do everything every application desired. These systems improved over time as network services shed their "application" stigma and moved deeper into the kernel of the "general purpose" OS, but they never equaled the efficiency of NetWare.

Probably the single greatest reason for Novell's success during the '80s and '90s was the efficiency of NetWare compared to general purpose operating systems. However, as microprocessors increased in power, efficiency became less and less of an issue. With the introduction of the Pentium processor, NetWare's performance advantage began to be outweighed by the complexity of managing and developing applications for the NetWare environment.

Monday, February 15, 2010

MEMORY MANAGEMENT

Memory
Monoprogramming without Swapping or Paging
Monoprogramming with fixed partitions
Swapping
Virtual Memory

MEMORY

Memory is the electronic holding place for instructions and data that the computer's microprocessor can reach quickly. When the computer is in normal operation, its memory usually contains the main parts of the operating system and some or all of the application programs and related data that are being used. Memory is often used as a shorter synonym for random access memory (RAM). This kind of memory is located on one or more microchips that are physically close to the microprocessor. Most desktop and notebook computers sold today include at least 16 megabytes of RAM and are upgradeable to include more. The more RAM you have, the less frequently the computer has to fetch instructions and data from the much more slowly accessed hard disk.
Memory is sometimes distinguished from storage, or the physical medium that holds the much larger amounts of data that won't fit into RAM and may not be immediately needed there. Storage devices include hard disks, floppy disks, CD-ROM, and tape backup systems. The terms auxiliary storage, auxiliary memory, and secondary memory have also been used for this kind of data repository.
Additional kinds of integrated and quickly accessible memory are read-only memory (ROM), programmable ROM (PROM), and erasable programmable ROM (EPROM). These are used to keep special programs and data, such as the basic input/output system, that need to be in the computer all the time.


The memory is a resource that needs to be managed carefully. Most computers have a memory hierarchy, with a small amount of very fast, expensive, volatile cache memory, some number of megabytes of medium-speed, medium-price, volatile main memory (RAM), and hundreds of thousands of megabytes of slow, cheap, non-volatile disk storage. It is the job of the operating system to coordinate how these memories are used.
The part of the operating system that manages the memory hierarchy is the memory manager. It keeps track of which parts of memory are in use and which are not, allocates memory to processes when they need it and de-allocates it when they are done, and manages swapping between main memory and disk when main memory is too small to hold all the processes.
Systems for managing memory can be divided into two categories: those that move processes back and forth between main memory and disk during execution (known as swapping and paging), and those that do not.
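As a concrete illustration of that bookkeeping, here is a toy first-fit allocator in Python (invented names; a deliberately simplified sketch, not how any particular operating system implements it) that tracks free holes, carves allocations out of them, and coalesces freed space back together.

class MemoryManager:
    """Toy first-fit allocator: free memory is a sorted list of
    (start, size) holes."""

    def __init__(self, total_size):
        self.holes = [(0, total_size)]

    def allocate(self, size):
        for i, (start, hole_size) in enumerate(self.holes):
            if hole_size >= size:
                if hole_size == size:
                    self.holes.pop(i)          # hole consumed exactly
                else:
                    self.holes[i] = (start + size, hole_size - size)
                return start                   # base address of the allocation
        raise MemoryError("no hole large enough")

    def free(self, start, size):
        self.holes.append((start, size))
        self.holes.sort()
        merged = [self.holes[0]]
        for s, sz in self.holes[1:]:           # coalesce adjacent holes
            last_s, last_sz = merged[-1]
            if last_s + last_sz == s:
                merged[-1] = (last_s, last_sz + sz)
            else:
                merged.append((s, sz))
        self.holes = merged

if __name__ == "__main__":
    mm = MemoryManager(1024)
    a = mm.allocate(100)
    b = mm.allocate(200)
    mm.free(a, 100)
    print(mm.holes)    # [(0, 100), (300, 724)]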

Monoprogramming without Swapping or Paging

The simplest memory management scheme is to run one program at a time, sharing the memory between that program and the operating system. There are three variations of this scheme: the operating system may be at the bottom of memory in RAM (random access memory); it may be in ROM (read-only memory) at the top of memory; or the device drivers may be at the top of memory in ROM with the rest of the system in RAM below.

With the system organised in this way, only one process at a time can be running. As soon as the user types a command, the operating system copies the requested program from disk to memory and executes it. When the process finishes, the operating system displays a prompt character and waits for a new command. When it receives the command, it loads a new program into memory, overwriting the first one.

Monoprogramming with fixed partitions

Even on simple operating systems it is often desirable to allow multiple processes to run at the same time, which is why multiprogramming is sometimes used. On time-sharing systems, having multiple processes in memory at once means that when one process is blocked waiting for I/O to finish, another one can use the CPU; in this way, multiprogramming increases CPU utilisation. Being able to run two or more programs at once is desirable even on personal computers.

Swapping

The process of organising memory into fixed partitions on batch systems is simple compared to time-sharing systems or graphically oriented personal computers. On batch systems, each job is loaded into a partition when it gets to the head of the queue, and it stays in memory until it has finished. As long as enough jobs can be kept in memory to keep the CPU busy all the time, there is no reason to use anything more complicated. On time-sharing machines, sometimes there is not enough main memory to hold all the currently active processes, so excess processes must be kept on disk and brought in to run dynamically.
Swapping as an approach to memory management consists of bringing each process in its entirety, running it for a while, then putting it back on the disk. For details and illustrations, see OS design and implementation, A.S. Tanenbaum & A.S. Woodhull, Prentice Hall 1997, pg.310.
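A toy Python sketch of the idea (invented names; grossly simplified, with whole processes moving between 'disk' and a RAM that holds only a few of them at a time):

import collections

def swap_and_run(processes, memory_slots):
    """Toy swapper: RAM holds at most memory_slots whole processes; everything
    else waits on 'disk'. Each process is a (name, work units left) pair."""
    disk = collections.deque(processes)
    ram = []
    while disk or ram:
        while disk and len(ram) < memory_slots:
            incoming = disk.popleft()
            print("swap in: ", incoming[0])
            ram.append(incoming)
        name, work = ram.pop(0)
        work -= 1                         # run the process for one quantum
        if work > 0:
            print("swap out:", name, "(", work, "units left)")
            disk.append((name, work))     # back to disk, in its entirety
        else:
            print("finished:", name)

if __name__ == "__main__":
    swap_and_run([("editor", 2), ("compiler", 3)], memory_slots=1)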


Virtual Memory

Another strategy for managing memory is virtual memory, which allows programs to run even when they are only partially in main memory. The basic idea behind this strategy is that the combined size of the program, data, and stack may exceed the amount of physical memory available. The operating system keeps the parts of the program currently in use in main memory, and the rest on disk.

Virtual memory can also work in a multiprogramming system, with bits and pieces of many programs in memory at once. While a program is waiting for part of itself to be brought in, it is waiting for I/O and cannot run, so the CPU can be given to another process, just as in any other multiprogramming system. Most virtual memory systems use a technique called paging.
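Paging is easy to sketch. In the toy Python function below (invented names; real memory management units do this in hardware), a virtual address is split into a page number and an offset, the page table maps pages to physical frames, and a missing page triggers a "page fault" that must load the page from disk before the access can complete.

PAGE_SIZE = 4096

def translate(page_table, virtual_address, load_from_disk):
    """Toy paging: split the virtual address into (page, offset), look the
    page up in the page table, and fault the page in if it is not resident."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:                     # page fault: page is not in RAM
        frame = load_from_disk(page)      # bring it in, get its new frame
        page_table[page] = frame
    return frame * PAGE_SIZE + offset     # physical address

if __name__ == "__main__":
    table = {0: 3, 1: None}               # page 1 currently out on disk
    fetch = lambda page: 7                # pretend frame 7 was free
    print(translate(table, 4100, fetch))  # page 1, offset 4 -> 28676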

PROCESS SCHEDULING

Scheduling is a key concept in computer multitasking, multiprocessing operating system and real-time operating system designs. Scheduling refers to the way processes are assigned to run on the available CPUs, since there are typically many more processes running than there are available CPUs. This assignment is carried out by software known as a scheduler or dispatcher.
The scheduler is concerned mainly with:
CPU utilization - to keep the CPU as busy as possible.
Throughput - number of processes that complete their execution per time unit.
Turnaround - total time between submission of a process and its completion.
Waiting time - amount of time a process has been waiting in the ready queue.
Response time - amount of time it takes from when a request was submitted until the first response is produced.
Fairness - Equal CPU time to each thread.
In real-time environments, such as embedded systems for automatic control in industry (for example, robotics), the scheduler also must ensure that processes can meet deadlines; this is crucial for keeping the system stable. Scheduled tasks can also be distributed to remote devices across a network and managed through an administrative back end.

Types of operating system schedulers

Operating systems may feature up to three distinct types of scheduler: a long-term scheduler (also known as an admission scheduler or high-level scheduler), a mid-term or medium-term scheduler, and a short-term scheduler (also known as a dispatcher). The names suggest the relative frequency with which these functions are performed.

Long-term Scheduler

The long-term, or admission, scheduler decides which jobs or processes are to be admitted to the ready queue; that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this scheduler dictates which processes are to run on a system and the degree of concurrency to be supported at any one time - i.e., whether a high or low number of processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. In modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks; without proper real-time scheduling, modern GUI interfaces would seem sluggish [Stallings, 399].
Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers and render farms. In these cases, special purpose job scheduler software is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system.

Mid-term Scheduler

The mid-term scheduler temporarily removes processes from main memory and places them in secondary memory (such as a disk drive), or vice versa. This is commonly referred to as "swapping out" or "swapping in" (also, incorrectly, as "paging out" or "paging in"). The mid-term scheduler may decide to swap out a process that has not been active for some time, a process with a low priority, a process that is page faulting frequently, or a process that is taking up a large amount of memory, in order to free up main memory for other processes; it swaps the process back in later when more memory is available, or when the process has been unblocked and is no longer waiting for a resource. [Stallings, 396] [Stallings, 370]
In many systems today (those that support mapping virtual address space to secondary storage other than the swap file), the mid-term scheduler may actually perform the role of the long-term scheduler, by treating binaries as "swapped out processes" upon their execution. In this way, when a segment of the binary is required it can be swapped in on demand, or "lazy loaded".

Short-term Scheduler

The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory processes is to be executed (allocated a CPU) next, following a clock interrupt, an I/O interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers - a scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as "voluntary" or "co-operative"), in which case the scheduler is unable to "force" processes off the CPU.
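A preemptive short-term scheduler can be sketched in a few lines. This toy Python round-robin simulation (invented names; one abstract time unit per tick) forcibly removes each job from the CPU when its quantum expires and re-queues it at the back of the ready queue.

import collections

def round_robin(jobs, quantum):
    """Toy preemptive scheduler: each job gets at most one quantum, then is
    forcibly moved to the back of the ready queue (the clock interrupt)."""
    ready = collections.deque(jobs)    # (name, CPU time still needed) pairs
    clock = 0
    while ready:
        name, left = ready.popleft()   # dispatch the next ready job
        slice_used = min(quantum, left)
        clock += slice_used
        if left > slice_used:
            ready.append((name, left - slice_used))   # preempted, not done yet
        else:
            print(name, "finished at t =", clock)

if __name__ == "__main__":
    round_robin([("A", 5), ("B", 2), ("C", 4)], quantum=2)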

Dispatcher

Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.
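The three steps can be mimicked in a toy Python sketch (invented names; real dispatching manipulates hardware registers and privilege levels, not dictionaries), and everything between the two calls below is, in effect, dispatch latency.

def dispatch(cpu, outgoing, incoming):
    """Toy dispatcher: (1) switch context, (2) switch to user mode,
    (3) resume the incoming program at its saved program counter."""
    if outgoing is not None:
        outgoing["saved"] = dict(cpu)   # save the outgoing context
    cpu.clear()
    cpu.update(incoming["saved"])       # restore the incoming context
    cpu["mode"] = "user"                # drop back to user mode
    print("resuming at pc =", cpu["pc"])
    return incoming

if __name__ == "__main__":
    cpu = {}
    p1 = {"saved": {"pc": 100}}
    p2 = {"saved": {"pc": 200}}
    running = dispatch(cpu, None, p1)
    running = dispatch(cpu, running, p2)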

Scheduling criteria

Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms. Many criteria have been suggested for comparing CPU scheduling algorithms. Which characteristics are used for comparison can make a substantial difference in which algorithm is judged to be best. The criteria include the following:
CPU Utilization. We want to keep the CPU as busy as possible.
Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, it may be 10 processes per second.
Turnaround time. From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Waiting time. The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
Response time. In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time. In most cases, we optimize the average measure. However, under some circumstances, it is desirable to optimize the minimum or maximum values rather than the average. For example, to guarantee that all users get good service, we may want to minimize the maximum response time. Investigators have suggested that, for interactive systems, it is more important to minimize the variance in the response time than to minimize the average response time. A system with reasonable and predictable response time may be considered more desirable than a system that is faster on the average but is highly variable. However, little work has been done on CPU-scheduling algorithms that minimize variance.
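A small worked example shows how these criteria are computed. The toy Python function below (invented names; all processes assumed to arrive at time 0) derives waiting and turnaround times under first-come-first-served scheduling.

def fcfs_metrics(bursts):
    """Waiting and turnaround times under first-come-first-served,
    assuming every process arrives at time 0."""
    clock, rows = 0, []
    for name, burst in bursts:
        waiting = clock                       # time spent in the ready queue
        clock += burst                        # the process runs to completion
        rows.append((name, waiting, clock))   # turnaround = completion time here
    return rows

if __name__ == "__main__":
    for name, wait, turnaround in fcfs_metrics([("P1", 24), ("P2", 3), ("P3", 3)]):
        print(name, "waiting =", wait, "turnaround =", turnaround)

With these bursts the average waiting time is (0 + 24 + 27) / 3 = 17 time units; running the two short jobs first would cut it to (0 + 3 + 6) / 3 = 3, which illustrates why the choice of algorithm, and of the criterion used to judge it, matters.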

Wednesday, February 3, 2010

kernel

Kernel Definition
The kernel is a program that constitutes the central core of a computer operating system. It has complete control over everything that occurs in the system.
A kernel can be contrasted with a shell (such as bash, csh or ksh in Unix-like operating systems), which is the outermost part of an operating system and a program that interacts with user commands. The kernel itself does not interact directly with the user, but rather interacts with the shell and other programs as well as with the hardware devices on the system, including the processor (also called the central processing unit or CPU), memory and disk drives.
The kernel is the first part of the operating system to load into memory during booting (i.e., system startup), and it remains there for the entire duration of the computer session because its services are required continuously. Thus it is important for it to be as small as possible while still providing all the essential services needed by the other parts of the operating system and by the various application programs.
Because of its critical nature, the kernel code is usually loaded into a protected area of memory, which prevents it from being overwritten by other, less frequently used parts of the operating system or by application programs. The kernel performs its tasks, such as executing processes and handling interrupts, in kernel space, whereas everything a user normally does, such as writing text in a text editor or running programs in a GUI (graphical user interface), is done in user space. This separation is made in order to prevent user data and kernel data from interfering with each other and thereby diminishing performance or causing the system to become unstable (and possibly crashing).
When a computer crashes, it actually means the kernel has crashed; if only a single program has crashed and the rest of the system remains in operation, then the kernel itself has not crashed. A crash is the situation in which a program, either a user application or a part of the operating system, stops performing its expected function(s) and stops responding to other parts of the system. The program might appear to the user to freeze. If such a program is critical to the operation of the kernel, the entire computer could stall or shut down.
The kernel provides basic services for all other parts of the operating system, typically including memory management, process management, file management and I/O (input/output) management (i.e., accessing the peripheral devices). These services are requested by other parts of the operating system or by application programs through a specified set of program interfaces referred to as system calls.
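In a high-level language the system call boundary is usually hidden behind library wrappers. For example, in Python on a Unix-like system, each os-module call below is a thin wrapper over a kernel system call (open, write, read and close); "demo.txt" is just a scratch file name chosen for this example.

import os

# User space asks; the kernel performs the actual device I/O.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY, 0o644)
os.write(fd, b"hello from user space\n")   # write() system call
os.close(fd)

fd = os.open("demo.txt", os.O_RDONLY)
print(os.read(fd, 64))                     # read(): the kernel copies data back
os.close(fd)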
Process management, possibly the most obvious aspect of a kernel to the user, is the part of the kernel that ensures that each process obtains its turn to run on the processor and that the individual processes do not interfere with each other by writing to each other's areas of memory. A process, also referred to as a task, can be defined as an executing (i.e., running) instance of a program.
The contents of a kernel vary considerably according to the operating system, but they typically include (1) a scheduler, which determines how the various processes share the kernel's processing time (including in what order), (2) a supervisor, which grants use of the computer to each process when it is scheduled, (3) an interrupt handler, which handles all requests from the various hardware devices (such as disk drives and the keyboard) that compete for the kernel's services and (4) a memory manager, which allocates the system's address spaces (i.e., locations in memory) among all users of the kernel's services.
The kernel should not be confused with the BIOS (Basic Input/Output System). The BIOS is an independent program stored in a chip on the motherboard (the main circuit board of a computer) that is used during the booting process for such tasks as initializing the hardware and loading the kernel into memory. Whereas the BIOS always remains in the computer and is specific to its particular hardware, the kernel can be easily replaced or upgraded by changing or upgrading the operating system or, in the case of Linux, by adding a newer kernel or modifying an existing kernel.
Most kernels have been developed for a specific operating system, and there is usually only one version available for each operating system. For example, the Microsoft Windows 2000 kernel is the only kernel for Microsoft Windows 2000 and the Microsoft Windows 98 kernel is the only kernel for Microsoft Windows 98. Linux is far more flexible in that there are numerous versions of the Linux kernel, and each of these can be modified in innumerable ways by an informed user.
A few kernels have been designed with the goal of being suitable for use with any operating system. The best known of these is the Mach kernel, which was developed at Carnegie-Mellon University and is used in the Macintosh OS X operating system.
It is not necessary for a computer to have a kernel in order to be usable, because it is not necessary for it to have an operating system at all. That is, it is possible to load and run programs directly on bare metal machines (i.e., computers without any operating system installed), although this is usually not very practical.
In fact, the first generations of computers used bare metal operation. However, it was eventually realized that convenience and efficiency could be increased by retaining small utility programs, such as program loaders and debuggers, in memory between applications. These programs gradually evolved into operating system kernels.
The term kernel is frequently used in books and discussions about Linux, whereas it is used less often when discussing some other operating systems, such as the Microsoft Windows systems. The reasons are that the kernel is highly configurable in the case of Linux and users are encouraged to learn about and modify it and to download and install updated versions. With the Microsoft Windows operating systems, in contrast, there is relatively little point in discussing kernels because they cannot be modified or replaced.
Categories of Kernels
Kernels can be classified into four broad categories: monolithic kernels, microkernels, hybrid kernels and exokernels. Each has its own advocates and detractors.
Monolithic kernels, which have traditionally been used by Unix-like operating systems, contain all the operating system core functions and the device drivers (small programs that allow the operating system to interact with hardware devices, such as disk drives, video cards and printers). Modern monolithic kernels, such as those of Linux and FreeBSD, both of which fall into the category of Unix-like operating systems, feature the ability to load modules at runtime, thereby allowing easy extension of the kernel's capabilities as required, while helping to minimize the amount of code running in kernel space.
A microkernel usually provides only minimal services, such as defining memory address spaces, interprocess communication (IPC) and process management. All other functions, such as hardware management, are implemented as processes running independently of the kernel. Examples of microkernel operating systems are AIX, BeOS, Hurd, Mach, Mac OS X, MINIX and QNX.
Hybrid kernels are similar to microkernels, except that they include additional code in kernel space so that such code can run more swiftly than it would were it in user space. These kernels represent a compromise that was implemented by some developers before it was demonstrated that pure microkernels can provide high performance. Hybrid kernels should not be confused with monolithic kernels that can load modules after booting (such as Linux).
Most modern operating systems use hybrid kernels, including Microsoft Windows NT, 2000 and XP. DragonFly BSD, a recent fork (i.e., variant) of FreeBSD, is the first non-Mach based BSD operating system to employ a hybrid kernel architecture.
Exokernels are a still experimental approach to operating system design. They differ from the other types of kernels in that their functionality is limited to the protection and multiplexing of the raw hardware, and they provide no hardware abstractions on top of which applications can be constructed. This separation of hardware protection from hardware management enables application developers to determine how to make the most efficient use of the available hardware for each specific program.
Exokernels in themselves are extremely small. However, they are accompanied by library operating systems, which provide application developers with the conventional functionalities of a complete operating system. A major advantage of exokernel-based systems is that they can incorporate multiple library operating systems, each exporting a different API (application programming interface), such as one for Linux and one for Microsoft Windows, thus making it possible to simultaneously run both Linux and Windows applications.
The Monolithic Versus Micro Controversy
In the early 1990s, many computer scientists considered monolithic kernels to be obsolete, and they predicted that microkernels would revolutionize operating system design. In fact, the development of Linux as a monolithic kernel rather than a microkernel led to a famous flame war (i.e., a war of words on the Internet) between Andrew Tanenbaum, the developer of the MINIX operating system, and Linus Torvalds, who originally developed Linux based largely on MINIX.
Proponents of microkernels point out that monolithic kernels have the disadvantage that an error in the kernel can cause the entire system to crash. However, with a microkernel, if a kernel process crashes, it is still possible to prevent a crash of the system as a whole by merely restarting the service that caused the error. Although this sounds sensible, it is questionable how important it is in reality, because operating systems with monolithic kernels such as Linux have become extremely stable and can run for years without crashing.
Another disadvantage cited for monolithic kernels is that they are not portable; that is, they must be rewritten for each new architecture (i.e., processor type) that the operating system is to be used on. However, in practice, this has not appeared to be a major disadvantage, and it has not prevented Linux from being ported to numerous processors.
Monolithic kernels also appear to have the disadvantage that their source code can become extremely large. Source code is the version of software as it is originally written (i.e., typed into a computer) by a human in plain text (i.e., human readable alphanumeric characters) and before it is converted by a compiler into object code that a computer's processor can directly read and execute.
For example, the source code for the Linux kernel version 2.4.0 is approximately 100MB and contains nearly 3.38 million lines, and that for version 2.6.0 is 212MB and contains 5.93 million lines. This adds to the complexity of maintaining the kernel, and it also makes it difficult for new generations of computer science students to study and comprehend the kernel. However, the advocates of monolithic kernels claim that in spite of their size such kernels are easier to design correctly, and thus they can be improved more quickly than can microkernel-based systems.
Moreover, the size of the compiled kernel is only a tiny fraction of that of the source code, for example roughly 1.1MB in the case of Linux version 2.4 on a typical Red Hat Linux 9 desktop installation. Contributing to the small size of the compiled Linux kernel is its ability to dynamically load modules at runtime, so that the basic kernel contains only those components that are necessary for the system to start itself and to load modules.
The monolithic Linux kernel can be made extremely small not only because of its ability to dynamically load modules but also because of its ease of customization. In fact, there are some versions that are small enough to fit together with a large number of utilities and other programs on a single floppy disk and still provide a fully functional operating system (one of the most popular of which is muLinux). This ability to miniaturize its kernel has also led to a rapid growth in the use of Linux in embedded systems (i.e., computer circuitry built into other products).
Although microkernels are very small by themselves, in combination with all their required auxiliary code they are, in fact, often larger than monolithic kernels. Advocates of monolithic kernels also point out that the two-tiered structure of microkernel systems, in which most of the operating system does not interact directly with the hardware, creates a not-insignificant cost in terms of system efficiency.

features of windows xp

1. Built-in driver support for peripherals such as keyboards, mice and USB pen drives.
2. Dramatically reduced reboot scenarios.
3. Windows Firewall.
4. Windows Security Center.
5. Search: Microsoft introduced animated "Search Companions" in an attempt to make searching more engaging and friendly; the default character is a puppy named Rover, with three other characters (Merlin the magician, Earl the surfer, and Courtney) also available. These search companions, powered by Microsoft Agent technology, bear a great deal of similarity to Microsoft Office's Office Assistants, even incorporating "tricks" and sound effects. The search companion can be turned off, and the user can revert to the classic search interface.
6. Image handling: Windows XP improves image preview by offering a Filmstrip view, which shows images in a single horizontal row with a large preview of the currently selected image above it. "Back" and "Previous" buttons facilitate navigation through the pictures, and a pair of "Rotate" buttons offer 90-degree clockwise and counter-clockwise rotation of images. Aside from the Filmstrip view mode, there is a Thumbnails view, which displays thumbnail-sized images in the folder and also displays images a subfolder may contain (four by default) overlaid on a large folder icon. A folder's thumbnail view can be customized from the Customize tab accessible from its Properties, where users can also change the folder's icon and specify a template type (pictures, music, videos, documents) for that folder and optionally all its subfolders.
7. Faster boot and logon: The ability to boot in 30 seconds was a design goal for Windows XP, and Microsoft's developers made efforts to streamline the system as much as possible. The Prefetcher is a significant part of this: it monitors which files are loaded during boot, optimizes the locations of these files on disk so that less time is spent waiting for the hard drive's heads to move, and issues large I/O requests that can be overlapped with device detection and initialization. The Prefetcher also uses the same algorithms to reduce application startup times. Activities such as system and service cleanup, and layout optimization of files and file metadata, take place at idle time.
8. Power management improvements: support for the Simple Boot Specification; Wake-on-Battery support, so that the system has time to power off or hibernate; LCD dimming when on battery power; processor power and performance control, including C-states (running in a lower power state when idle) and throttling; the USB selective suspend feature; significantly faster boot and resume from hibernation compared to previous Windows versions, owing to the boot loader caching file and directory metadata sequentially and in large chunks in a most-recently-used manner, overlapping device initialization (hibernation is faster because memory pages are compressed using an improved algorithm, unused memory pages are freed, and DMA transfers are used during I/O); faster resume from standby, as the algorithm used by the Power Manager for notifying hardware and software of power state changes by dispatching power IRPs has been rewritten to maximize parallelism, and important system drivers (PCMCIA, keyboard, mouse) have been rewritten to eliminate blocking interactions; and built-in support for processor power management technologies such as Intel SpeedStep and AMD PowerNow!.
9. Other hardware improvements: Windows XP's user interface for Plug and Play changed, with all messages shown in the notification area as balloon tips. Other improvements include generic USB 2.0 EHCI drivers beginning with Windows XP SP1; support for USB device classes such as Bluetooth, the USB video device class, imaging (the still image capture device class) and the Media Transfer Protocol with Windows Media Player 10; GFX filter support for USB audio devices; FireWire (IEEE 1394) support for digital video cameras and audio/video devices, with MPEG-2 and S/PDIF across IEEE 1394; AutoPlay; automatic removal of the read-only attribute of files and folders when copying files from optical media using Windows Explorer; improved mouse pointer ballistics; DualView for multi-monitor setups; improvements to the Scanner and Camera Wizard (based on Windows Image Acquisition) and other common dialogs for WIA devices, which can now show media information and metadata, rotate images as necessary, categorize them into subfolders, capture images and video from a still or video camera, and crop and scan images to a single- or multi-page TIFF from a scanner; multichannel audio output and playback of additional audio formats, with per-speaker volume control in a multichannel configuration; Acoustic Echo Cancellation for USB microphones; a Global Effects (GFX) engine; support for reading UDF 2.01, upgradeable to UDF 2.50 by installing the Windows Feature Pack for Storage; and Sound Blaster 2.0 emulation support in NTVDM.
10. System administration improvements:
10.1 Remote Desktop: Users can log into Windows XP Professional remotely through the Remote Desktop service. It is built on Terminal Services technology (RDP) and is similar to Remote Assistance, but allows remote users to access local resources such as printers. Any Terminal Services client, a special "Remote Desktop Connection" client, or a web-based client using an ActiveX control may be used to connect to the Remote Desktop. (Remote Desktop clients for earlier versions of Windows, namely Windows 95, Windows 98 and 98 Second Edition, Windows Me, Windows NT 4.0 and Windows 2000, have been made available by Microsoft; this permits those earlier versions of Windows to connect to a Windows XP system running Remote Desktop, but not vice versa.) There are several resources that users can redirect from the remote server machine to the local client, depending upon the capabilities of the client software used. For instance, "File System Redirection" allows users to use their local files on a remote desktop within the terminal session, while "Printer Redirection" allows users to use their local printer within the terminal session as they would with a locally or network-shared printer. "Port Redirection" allows applications running within the terminal session to access local serial and parallel ports directly, and "Audio" allows users to run an audio program on the remote desktop and have the sound redirected to their local computer. The clipboard can also be shared between the remote computer and the local computer.
10.2 Remote Assistance: Remote Assistance allows a Windows XP user to temporarily take over a remote Windows XP computer over a network or the Internet to resolve issues. As it can be a hassle for system administrators to personally visit the affected computer, Remote Assistance allows them to diagnose and possibly even repair problems with a computer without ever visiting it in person.
10.3 Fast user switching: Fast user switching allows another user to log in and use the system without having to log out the previous user and quit his or her applications. Previously (on both Windows Me and Windows 2000) only one user at a time could be logged in (except through Terminal Services), which was a serious drawback to multi-user activity. Fast User Switching, like Terminal Services, requires more system resources than having only a single user logged in at a time, and although more than one user can be logged in, only one user can be actively using their account at a time. This feature is not available when the Welcome Screen is turned off, such as when the machine is joined to a Windows Server domain or has the Novell Client installed.
10.4 Other manageability features: Windows Disk Defragmenter was updated to alleviate some restrictions. It no longer relies on the Windows NT Cache Manager, which prevented the defragmenter from moving pieces of a file that cross a 256 KB boundary within the file, and NTFS metadata files can also be defragmented. A command-line tool, defrag.exe, has been included, providing access to the defragmenter from cmd.exe and Task Scheduler. Non-persistent Shadow Copy support was added, and NTBackup has a wizard-based interface for ease of use and supports backing up locked files using Shadow Copy.
10.5 Windows Error Reporting and other tools: Driver Verifier Manager, a GUI for Driver Verifier; Automated System Recovery; a unified Registry editor; the Files and Settings Transfer Wizard and User State Migration Tool; several deployment tool improvements, including enhancements to Sysprep and Setup Manager and the introduction of WinPE; and an increased number of Group Policies, along with the Resultant Set of Policy (RSoP) management console, which allows administrators to see applied policies in logging mode or to simulate, in planning mode, policy settings that would be applied before committing changes to objects. A Desktop Cleanup Wizard was also introduced to help users reduce clutter on their desktops by looking at the shortcuts on the Desktop and moving any unused ones into a directory called "Unused Desktop Shortcuts"; it operates as a scheduled task that runs once a day to determine whether it has been 60 days since the last time the wizard was run.