Windows NT Basics


By Jim Mohr

Chapter 1 from Supporting Windows NT and 2000 Workstation and Server, published by Prentice Hall

On This Page

Introduction to Operating Systems
What is an Operating System?
Processes
File Access
Windows NT Internals
Task Priorities and Resources
Environment Subsystems
NT Executive
Memory Management
Changing the Size of the Paging File
The Kernel's Dispatcher and Process Scheduling
Changing Default Task Priority
Input/Output
File Systems
The File Allocation Table (FAT) File System
NTFS
NTFS Attributes
Some System-Defined Attributes
Security on the NTFS
NTFS Strategy: Recoverable File System
NTFS Advantages
The System Registry
Viewing the Registry
Starting Windows

Introduction to Operating Systems

It is a common occurrence to find users who are not even aware of what operating system they are running. On occasion, you also find an administrator who may know the name of the operating system but nothing about its inner workings. Many have no time to learn more, as they are often clerical workers or other personnel who were reluctantly appointed to be the system administrator.

Microsoft has sold Windows NT as a system that is easy to use and to administer. Although this is true to some extent, few operating systems are easy to use or to administer when things go wrong. I know many administrators whose most common administrative tool is the big red switch at the back of the computer. They do not know enough about computers, let alone Windows NT, to do much else. Fortunately, this corrects the symptoms in most cases, but it does not solve the underlying problem. Being able to run or work on a Windows NT system does not mean you have to understand the intricate details of how it functions internally. However, there are some operating system concepts that will help you and your users not only interact better with the system, but also serve as the foundation for many of the issues we're going to cover in this book.

In this chapter, we are going to go through the basics of what makes an operating system. First, we will talk about what an operating system is and why it is important. We are also going to address how the different components work and work together.

My goal is not to make you an expert on operating system concepts. Instead, I want to provide you with a starting point from which we can go on to other topics. If you want to go into more detail about operating systems, I would suggest Modern Operating Systems by Andrew Tanenbaum, published by Prentice Hall, and Operating System Concepts by Silberschatz, Peterson, and Galvin, published by Addison-Wesley. For an excellent examination of Windows internals, take a look at Inside Windows NT by Helen Custer, from Microsoft Press.

What is an Operating System?

In simple terms, the operating system is a manager. It manages all the resources available on a computer. These resources can be the hard disk, a printer, or the monitor screen. Even memory is a resource that needs to be managed. Within an operating system are the management functions that determine who gets to read data from the hard disk, what file is going to be printed next, what characters appear on the screen, and how much memory a certain program gets.

Once upon a time, there was no such thing as operating systems. The computers of 40 years ago ran one program at a time. The computer programmer would load the program he (they were almost universally male at that time) had written and run it. If there was a mistake that caused the program to stop sooner than expected, the programmer had to start over. Because there were many other people waiting for their turn to try their programs, it may have been several days before the first programmer got another chance to run his deck of cards through the machine. Even if the program did run correctly, the programmer probably never got to work on the machine directly. The program (punched cards) was fed into the computer by an operator who then passed the printed output back to the programmer several hours later.

As technology advanced, many such programs, or jobs, were all loaded onto a single tape. This tape was then loaded and manipulated by another program, which was the ancestor of today's operating systems. This program would monitor the behavior of the running program, and if it misbehaved (crashed), the monitor could then immediately load and run another. Such programs were called (logically) monitors.

In the 1960s, technology and operating system theory advanced to the point that many different programs could be held in memory at once. This was the concept of "multiprogramming." If one program needed to wait for some external event, such as the tape to rewind to the right spot, another program could have access to the CPU. This improved performance dramatically and allowed the CPU to be busy almost 100% of the time.

By the end of the 1960s, something wonderful happened: UNIX was born. It began as a one-man project designed by Ken Thompson of Bell Labs and has grown to become the most widely used operating system. In the time since UNIX was first developed, it has gone through many different generations and even mutations. Some differ substantially from the original version, like BSD (Berkeley Software Distribution) UNIX or Linux. Others still contain major portions that are based on the original source code. (A friend of mine described UNIX as the only operating system where you can throw the manual onto the keyboard and get a real command.)

In the early 1990s, Microsoft introduced its new operating system, Windows NT. Windows NT is one of many operating systems, such as DOS, VMS, OS/360, and CP/M. It performs many of the same tasks in very similar ways. It is the manager and administrator of all of the system resources and facilities. Without it, nothing works. Despite this, most users can go on indefinitely without knowing even which operating system they are on, let alone the basics of how an operating system works.

For example, if you own a car, you don't really need to know the details of the internal combustion engine to understand that this is what makes the car move forward. You don't need to know the principles of hydraulics to understand what isn't happening if pressing the brake pedal has no effect.

An operating system is like that. You can work productively for years without even knowing what operating system you're running on, let alone how it works. Sometimes things go wrong. In many companies, you are given a number to call when problems arise, you tell them what happened, and they deal with it.

If the computer is not back up within a few minutes, you get upset and call back, demanding to know when "that darned thing will be up and running again." When the technician (or whoever has to deal with the problem) tries to explain what is happening and what is being done to correct the problem, the response is usually along the lines of, "Well, yeah, I need it back up now."

The problem is that many people heard the explanation but didn't understand it. It is not surprising that people don't want to acknowledge that they didn't understand the answer. Instead, they try to deflect the other person's attention away from that fact. Had they understood the explanation, they would be in a better position to understand what the technician was doing and to see that the problem was actually being worked on.

By having a working knowledge of the principles of an operating system, you are in a better position to understand not only the problems that can arise, but also what steps are necessary to find a solution. You also tend to have a better relationship with things you understand. As with a car, if you see steam pouring out from under the hood, you know that you need to add water. This logic also applies to an operating system.

In this section, that's what we're going to talk about. What goes into an operating system and what does it do? How does it do it? How are you, the user, affected by all this?

Because of advances in both hardware design and performance, computers are able to process increasingly larger amounts of information. The speed at which computer transactions occur is often talked about in terms of billionths of a second. Because of this speed, today's computers can give the appearance of doing many things simultaneously by actually switching back and forth between each task extremely fast. This is the concept of multitasking. That is, the computer is working on multiple tasks at the same time.

Another function of the operating system is to keep track of what each program is doing. That is, the operating system needs to keep track of whose program, or task, is currently writing its file to the printer, which program needs to read a certain spot on the hard disk, etc. This is the concept of multiuser, as multiple users have access to the same resources.

Processes

One of the basic concepts of an operating system is the process. If we think of the program as the file stored on the hard disk or floppy and the process as the program stored in memory, we can better understand the difference between a program and a process. Although these two terms are often interchanged or even misused in "casual" conversation, the difference is very important for issues that we talk about later.

A process is more than just a program. Especially in a multitasking operating system such as Windows NT, there is much more to consider. Each program has a set of data that it uses to do what it requires. Often this data is not part of the program. For example, if you are using a text editor, the file you are editing is not part of the program on disk but is part of the process in memory. If someone else were to be using the same editor, both of you would be using the same program. However, each of you would have a different process in memory. How this looks graphically is seen in Figure 1-1.

Although Windows NT is not a multiuser operating system like UNIX, many different users can still access resources on the system at the same time, or the system can be doing things on their behalf. In other words, the users all have processes that are in memory at the same time. The system needs to keep track of what user is running what process, which terminal the process is being run on, and what other resources the process has (such as open files). All of this is part of the process, or task, as it is often called in Windows NT.


Figure 1-1: From program to process

When you log on, you might want to start MS Word. As you are writing your letter, you notice you need information out of a database, so you start up MS Access. The database is large, so it takes a while to find what you want, and you go back to the letter while you wait. As you continue to edit, you delete words, insert new lines, sort your text, and write it occasionally to the disk. All this time, the database search is continuing. Someone else on the system may be accessing a file in a directory you share. Another is using a printer on your system. No one seems to notice that there are other people using the system. For them, the processor is working for them alone. Well, that's the way it seems.

As I am writing this sentence, the operating system needs to know whether the keys I press are part of the text or are commands I want to pass to the editor. Each keystroke needs to be interpreted. Despite the fact that I can clip along at about 30 words per minute, the central processing unit (CPU) is spending approximately 95% of its time doing nothing.

The reason for this is that for a computer, the time between successive keystrokes is an eternity. Let's take my Intel Pentium, running at a clock speed of 133 MHz as an example. The clock speed of 133 MHz means that there are 133 million(!) clock cycles per second. Because the Pentium gets close to one instruction per clock cycle, this means that within 1 second, the CPU can execute close to 133 million instructions! No wonder it is spending most of its time idle. (Note: This is an oversimplification of what is actually happening.)

A single computer instruction doesn't really do much by itself. However, being able to do 133 million little things in 1 second allows the CPU to give the user an impression of being the only person on the system. Actually, it is simply switching between the different processes so fast that no one is aware of it happening.

Compare this with an operating system such as standard Windows (not Windows NT). The program will hang on to the CPU until it decides to give it up. An ill-behaved program can hold on to the CPU forever. This is the cause of many system hangs, because nothing, not even the operating system itself, can gain control of the CPU.

Depending on the load of the system (how busy it is) a process may get several time-slices per second. However, after it has run through its time-slice, the operating system checks to see if some other process needs a turn. If so, that process gets to run for a time-slice, and then it's someone else's turn: maybe the first process, maybe a new one.

As your process is running, it will be given full use of the CPU for the entire time-slice unless one of three things happens. Your process may need to wait for some event. For example, the editor I am writing this text in is waiting for me to type in characters. I said that I type about 30 words per minute, so if we assume an average of 6 letters per word, that's 180 characters per minute or 3 characters per second. That means that on the average, a character is pressed once every 1/3 of a second. Assuming a time-slice is 1/100th of a second, over 30 processes can have a turn on the CPU between each keystroke! Rather than tying everything up, the program waits until the next key is pressed. It puts itself to sleep until it is awakened by some external event, such as my pressing a key. Compare this with a "busy loop," where the process keeps checking for a key being pressed.
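To make the arithmetic concrete, here is a small back-of-the-envelope check in C. It is not Windows NT code; the 1/100-second time-slice is simply the assumption used in the text.

/* A quick sanity check of the arithmetic above: 30 words per minute at
 * roughly 6 letters per word, with an assumed 1/100-second time-slice. */
#include <stdio.h>

int main(void)
{
    double words_per_minute = 30.0;
    double chars_per_second = words_per_minute * 6.0 / 60.0;  /* = 3 chars/sec */
    double seconds_per_keystroke = 1.0 / chars_per_second;    /* ~0.33 sec */
    double time_slice = 1.0 / 100.0;                          /* assumed 10 ms */

    printf("Time between keystrokes : %.2f seconds\n", seconds_per_keystroke);
    printf("Time-slices in that gap : %.0f\n", seconds_per_keystroke / time_slice);
    return 0;
}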

File Access

When I want to write to the disk to save my file, it may appear that it happens instantaneously, but like the "complete-use-of-the-CPU myth," this is only appearance. The system will gather requests to write to or read from the disk and do it in chunks. This is much more efficient than satisfying everyone's request when they ask for it.

Gathering up requests and accessing the disk at once has another advantage. Often, the data that were just written are needed again, for example, in a database application. If the system were to write everything to the disk immediately, you would have to perform another read to get back that same data. Instead, the system holds that data in a special buffer; it "caches" the data in the buffer.

If a file is being written to or read from, the system first checks the cache. If on a read, it finds what it's looking for in that buffer, it has just saved itself a trip to the disk. Because the cache is in memory, it is substantially faster to read from memory than from the disk. Writes are normally written to the buffer cache, which is then written out in larger chunks. If the data being written already exists in the buffer cache, it is overwritten.
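The toy program below sketches this idea in plain C: a write lands in a cache slot, and a later read of the same block is satisfied from memory. It is purely conceptual, not the Windows NT cache manager, and the block numbers and sizes are invented.

/* A toy write-back buffer cache.  Conceptual sketch only. */
#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 4
#define BLOCK_SIZE  16

struct cache_entry {
    int  valid;             /* does this slot hold a block? */
    int  dirty;             /* modified, but not yet written to disk? */
    long block_no;          /* which disk block is cached here */
    char data[BLOCK_SIZE];
};

static struct cache_entry cache[CACHE_SLOTS];

/* Look for a block in the cache; return its slot or -1 on a miss. */
static int cache_lookup(long block_no)
{
    int i;
    for (i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].valid && cache[i].block_no == block_no)
            return i;
    return -1;
}

int main(void)
{
    int slot;

    /* A write lands in the cache; the disk is updated later, in larger chunks. */
    cache[0].valid = 1;
    cache[0].dirty = 1;
    cache[0].block_no = 42;
    strcpy(cache[0].data, "hello");

    /* A later read of block 42 hits the cache and never touches the disk. */
    slot = cache_lookup(42);
    if (slot >= 0)
        printf("cache hit: %s\n", cache[slot].data);
    else
        printf("cache miss: read the block from disk\n");
    return 0;
}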

When your process is running and you make a request to read from the hard disk, you can't do anything until that disk access has completed. If you haven't completed your time-slice yet, it would be a waste not to let someone else have a turn. That's exactly what the system does. If you decide you need access to some resource that the system cannot immediately give to you, you are "put to sleep" to wait. It is said that you are put to sleep waiting on an event: the event being the disk access. This is the second case where you may not get your full time on the CPU.

The third way that you might not get your full time-slice is the result of an external event as well. If a device (such as a keyboard, the clock, hard disk, etc.) needs to communicate with the operating system, it signals this need through the use of an interrupt (which behaves somewhat like an alarm). When an interrupt is generated, the CPU itself will stop execution of the process and immediately start executing a routine in the operating system to handle interrupts. Once the operating system has satisfied this interrupt, it returns to its regularly scheduled process. (Note: Things are much more complicated than that. The "priority" of both the interrupt and the process are factors here.)

As I mentioned earlier, there are certain things that the operating system keeps track of when a process is running. The information the operating system is keeping track of is referred to as the process's context. This might be the terminal you are running on or what files you have open. The context even includes the internal state of the CPU; that is, what the content of each register is.
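To give a feel for what a context might contain, here is a purely hypothetical C structure. The field names are my own invention for illustration; the real Windows NT structures are far more elaborate.

/* An illustrative, made-up sketch of the information in a process's context. */
#include <stdio.h>

struct cpu_state {
    unsigned long instruction_pointer;   /* where to resume execution */
    unsigned long stack_pointer;
    unsigned long general_regs[8];       /* contents of the CPU registers */
    unsigned long flags;
};

struct process_context {
    int              process_id;
    struct cpu_state saved_cpu;          /* CPU state saved at the last switch */
    int              open_files[16];     /* handles to files the process has open */
    char             terminal[32];       /* where its input and output go */
    int              priority;           /* scheduling priority */
};

int main(void)
{
    printf("context record size in this sketch: %u bytes\n",
           (unsigned)sizeof(struct process_context));
    return 0;
}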

What happens when a process's time-slice has run out or for some other reason another process gets to run? Well, if things go right (and they usually do), eventually that process gets another turn. However, to do things right, the process must be allowed to return to the exact place where it left off. Any difference could result in disaster.

You may have heard of the classic banking problem of deducting an amount from an account. If the process returned to the balance that preceded the deduction, it would have deducted it twice. If the deduction hadn't yet been made, but the process started up again at a point after it would have made the deduction, it appears as if the deduction was made. Good for you, not so good for the bank. Therefore, everything must be put back the way it was.

The processors used by Windows NT have built-in capabilities to manage both multiple users and multiple tasks. We will get into the details of this in later chapters. For now, just be aware of the fact that the CPU assists the operating system in managing users and processes.

In addition to user processes, such as Windows Explorer, word processors, and databases, there are system processes that are running. These are processes that were started by the system. They deal with security, mail, printing, and other tasks that we take for granted. In principle, both user and system processes are identical. However, system processes can run at much higher priorities and therefore run more often than user processes.

I have talked to customers who have complained about the system grinding to a halt although they had nothing running. The misconception is that because they didn't "see" any process running, it must not be taking up any resources. (Out of sight, out of mind.) The issue here is that, even though the process is running in the background and you can't see it, it still behaves like any other process, taking up memory and other resources.

Windows NT Internals

Windows NT is provided in two versions: Workstation and Server. For most users, there is no noticeable functionality difference; rather, the differences lie in areas that are of more interest to administrators. Windows NT workstations can function either as clients within a network consisting of one or more Windows NT servers, or they can be part of a peer-to-peer network, such as a workgroup. Servers provide additional functionality, such as maintaining domain-wide user and security information as well as providing authentication services. In addition, Windows NT Server allows unlimited connections, whereas the Workstation is limited to 10.

Although a Windows NT Server can be configured in "stand-alone" mode to function within a workgroup, it can best show its functionality as part of a domain. A workgroup is a logical collection of machines that share resources. Normally, workgroups are created by the computers within a single department or within a company, if it is small enough. Each computer can make resources available to, as well as use resources from, all the other computers. In essence, all computers are of equal status. Within a company, different departments may be workgroups, and each may have a unique name to identify the workgroup. Because each computer is independent, each is responsible for authenticating users itself.

Like a workgroup, a domain is a logical collection of computers that share resources. One of the key differences is that there is a single server that is responsible for managing security and other user-related information for the domain. This server allows logon validation, by which a user logs into the domain and not into just a single computer.

Microsoft has called Windows NT a "multiple-personality operating system," as it was designed to support more than one application programming interface (API). This makes it easier to provide emulation for older OS environments as well as the ability to more easily add new interfaces without requiring major changes to the system. The technique that Windows NT uses is called a "microkernel" and was influenced by the Mach microkernel developed at Carnegie Mellon University. (The kernel is the central part of the operating system.)

Windows NT was designed as a modular operating system. This has advantages in both the development and actual operation of the system. A substantial number of operating system components were implemented as modules and as such have well-defined interfaces that other modules can easily use. From a design standpoint, this enables new components to be added to the system without affecting the existing modules. For example, a new file system driver could be added without affecting the other file system drivers.

In addition, the total amount of code needed is smaller. Many functions need only be implemented once and are shared by the other modules. If changes are made to that function, only one set of instructions needs to be changed. Provided it maintains the same interface, the other modules are not even aware of the change.

The other major advantage is when the system is running. Because certain code is shared, it does not need to be loaded more than once. First, that saves memory, as there are not multiple copies of certain functions in memory. Second, it saves time, as the functions do not need to be loaded repeatedly.

Although all versions of Windows support multitasking, the major difference is that Windows 3.1x, along with Windows 95 and Windows 98, implements nonpreemptive or cooperative multitasking, which means that each program must voluntarily give up control of the processor to allow other programs a chance to run. If a program fails to voluntarily yield control to other programs, the system will stop responding to user actions and will appear to be hung. Windows NT resolves this problem by implementing the same type of preemptive multitasking that had already been available on UNIX machines for many years.

The operating system fully controls which program runs at any given time and for how long. Each program is allowed to run for its time-slice, and when the time-slice expires, someone else has a turn. Windows NT saves the necessary information about the executing program's state to allow it to continue where it left off.

In addition to being a preemptive multitasking operating system, Windows NT is also multithreaded. A thread is often referred to as a "lightweight" process. In some aspects, a thread (or lightweight process) is the same as a normal process. In both cases, the processor state is maintained, along with both a user and a kernel stack. (The stack is an area of memory used to keep track of which functions have been called, as well as the values of different variables.)

One key difference is that threads do not have their own address space or access tokens. Instead, all threads of a process share the same memory, access tokens, resource handles, and other resources. This simplifies programming in one regard, because the data can be shared without any new data structures. However, it is up to the programmer to ensure that the threads do not interfere with one another—which makes the programming trickier.

One aspect of this is the very nature of threads. They have no separate data space and therefore have the same access rights to the data and other parts of memory of other threads of the same process. This makes it easier to share data but also makes it easier to corrupt it.

To avoid this problem, programmers need to work with what are called "critical sections" of code. When one thread is in a critical section, it needs to ensure that no other thread can access the data in any way. It is not sufficient to simply say that other threads cannot change the data, as they may expect the data to be in one state when the thread in the critical section changes it. In other words, access to an object must be synchronized between threads. Windows NT accomplishes this through the use of synchronization objects.

Windows NT actually uses different kinds of synchronization objects depending on the needs of the programmer. In addition, the testing and setting of the synchronization object can be done atomically; that is, in one uninterruptible step. If a thread is waiting for a synchronization object, it will be "suspended" until the other thread has completed its critical section. However, it is up to the programmer to create the synchronization object when the program is initialized.
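The short program below sketches how a programmer might use one such synchronization object, a Win32 critical section, to protect data shared by two threads of the same process. The counter and loop count are arbitrary values chosen for illustration; compile it on Windows with any C compiler that can see windows.h.

/* A minimal sketch: two threads incrementing shared data under a critical section. */
#include <windows.h>
#include <stdio.h>

static CRITICAL_SECTION cs;     /* the synchronization object */
static long counter = 0;        /* data shared by all threads of the process */

static DWORD WINAPI worker(LPVOID arg)
{
    int i;
    for (i = 0; i < 100000; i++) {
        EnterCriticalSection(&cs);   /* only one thread at a time from here... */
        counter++;
        LeaveCriticalSection(&cs);   /* ...to here */
    }
    return 0;
}

int main(void)
{
    HANDLE threads[2];
    int i;

    InitializeCriticalSection(&cs);  /* created once, when the program starts */

    for (i = 0; i < 2; i++)
        threads[i] = CreateThread(NULL, 0, worker, NULL, 0, NULL);

    WaitForMultipleObjects(2, threads, TRUE, INFINITE);
    printf("counter = %ld\n", counter);   /* 200000 with the lock in place */

    DeleteCriticalSection(&cs);
    return 0;
}

Without the EnterCriticalSection/LeaveCriticalSection pair, the two threads could interleave their updates and the final count would be unpredictable, which is exactly the kind of interference described above.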

Because no additional structures are needed, threads have an advantage over normal processes. No time needs to be spent creating the new structures to be able to share this information; it simply is available. Also, threads do not have their own address space and are therefore easier (and quicker) to create.

As one would expect, threads are treated like objects, just as processes are. Both processes and threads are managed by the Process Manager. The function of the Process Manager includes creation, management, and destruction of processes and threads.

Although you may not realize it, you use threads with a large number of applications. The most common example is printing a document while you continue to use your word processor. Although you could pass the file (data) to another process, using threads makes this much more efficient for the reasons just discussed.

Like processes, only one thread can use the processor at any given time. However, what happens if you have multiple processors? Windows NT, like many UNIX dialects, has a built-in capability to use multiple processors. If you do have multiple processors, different processes can be running on each processor at any given time. Taking this one step further, different threads could be running on different processors at the same time, thus significantly speeding up the execution of your program.

Threads (and therefore processes) under Windows NT operate in a number of different states. The three most significant are running, ready, and waiting. When a thread is running, it is currently using the processor. In a single-processor environment, only a single thread can be in the running state at any given time.

Ready means that the thread is ready to run. This means that it has all the resources it needs and is just waiting for its turn. As with UNIX, the thread that runs is determined by a scheduling algorithm, which is based on the original priority of the process and therefore of each respective thread.

When a thread is waiting, there is some event that needs to occur before it can be ready again. As with UNIX, that event can be anything from waiting for a key to be pressed to waiting for a synchronization object to become unlocked.

One aspect of the modular design of Windows NT shows itself in the kernel, which provides the means for a certain activity but does not itself decide when these activities should take place. In other words, it provides the mechanism to do some task but does not determine when it occurs. This is accomplished through the various kernel objects, two of which are the process object and the thread object. These are very similar objects, but they are not identical and must therefore be treated differently by the kernel.

Task Priorities and Resources

You can see what tasks are currently running on your system as well as their priorities by using the Task Manager. Either right-click the taskbar and select the "Task Manager" entry or press CTRL-ALT-DEL and click the "Task Manager" button. When the window appears, click the "Processes" tab. This will show something similar to Figure 1-2.


Figure 1-2: Task Manager processes.

The Processes tab displays the following columns:

Name: the program that was started to create the process

PID: the process ID

CPU %: average CPU usage for that process

CPU Time: total amount of time that process has been on the CPU since the system started

Memory: amount of virtual memory that the process is currently using

Under the View menu, you can select additional columns to display. I have rarely used any of the others, and a discussion of them goes beyond the scope of this book. However, sometimes I include the Base Priority column to see the priorities of the various processes. You can change a process's priority by right-clicking the process name and selecting the appropriate menu entry. Be extremely careful with this, as you can cause major problems if the wrong processes have too high a priority.
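For completeness, the same kind of adjustment can be made from a program. The sketch below uses the Win32 SetPriorityClass() call to lower the priority class of the calling process; treat it as an illustration, and note that the same warning about giving the wrong process too high a priority applies here as well.

/* A sketch of changing a process's priority class in code. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Lower the priority class of the current process to "idle". */
    if (!SetPriorityClass(GetCurrentProcess(), IDLE_PRIORITY_CLASS)) {
        printf("SetPriorityClass failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Priority class is now: 0x%lx\n",
           GetPriorityClass(GetCurrentProcess()));
    return 0;
}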

At the bottom of the screen, you see the total number of processes, the average CPU usage (not including the idle process), and the memory usage. The first value is the total memory usage, followed by the maximum available memory.

Environment Subsystems

One of the key design goals of Windows NT was compatibility with existing applications. If Microsoft had decided to ignore this issue and concentrate solely on performance, most (if not all) of the existing Windows and DOS applications would have become useless. Because this would have meant purchasing new applications, businesses might have opted to move completely away from Microsoft.

While still improving performance considerably, Microsoft was able to accomplish the goal of compatibility by implementing a set of operating system "environment emulators." These are the so-called environment subsystems that form the intermediate layer between user applications and the core of the NT operating system. Because of the modular design of the operating system, Windows NT is capable of supporting Windows/DOS, OS/2, and POSIX applications.

The Windows NT environment subsystems run as user-mode processes, and they are multithreaded. Much of the functionality provided by the environment subsystems is only available through system-level processes on other operating systems. On the one hand, this provides an additional level of protection, as less of the system is running in the highest privileged mode. However, in order for Windows NT to access the lower-level functions, a context switch may be necessary. Due to the time involved in making the context switch, performance may suffer.

These are also responsible for defining the syntax used for device and file names that the calling process uses. For example, the Win32 subsystem uses the "MS-DOS-style" syntax (C:\dir\filename.typ), whereas the POSIX subsystem uses file name syntax similar to that of UNIX (/dir/filename). The MS-DOS-style device names (like A:, B:, C:) are actually symbolic links within the Windows NT I/O system. Windows NT actually uses names like "Floppy0" or "Disk1" to access devices.

Essentially, each program is bound to a single environment subsystem. Therefore, you cannot mix calls between subsystems. For example, a POSIX program cannot call services defined by the Win32 subsystem or vice versa. This presents problems when one subsystem provides features that another needs.

The interaction between the application and the appropriate environment subsystem is like a client-server system. Here, the client is the user application, and the server is the environment subsystem. When a user application wishes to utilize some low-level aspect of the system (i.e., the hardware), two things can happen. If the call maps directly to one of the Windows NT Executive services, it is passed there directly. However, if the call needs to be serviced by an environment subsystem, then the appropriate environment subsystem (i.e., Win32, POSIX) is called.

One example where the same environment subsystem is used is when a new process is created. Typically, a process will create another process that uses the same environment subsystem. For example, a POSIX program will normally create another POSIX program. Because the POSIX environment subsystem needs to be aware of all POSIX processes, it is called to create a new one.

On the other hand, only the Win32 subsystem can write to the screen. Therefore, all processes must call the Win32 environment subsystem to process screen I/O. In both cases, a context switch occurs that allows the appropriate subsystem to run. This obviously decreases the performance of the system. (A context switch is the process by which the system takes one task off the CPU to let another one run.)

Note that this problem does not exist in most UNIX versions. When a process is accessing the hardware, it is still running within its own context. Granted, it may be running in kernel mode, but the system is still running the original process. To solve this problem, Windows NT uses the technique of shared memory (also available in UNIX for years). This is a portion of memory that is accessible to both the application and the appropriate environment subsystem. Therefore, the system does not spend time copying the data that need to be accessed by both the application and the environment subsystem.

Another benefit is achieved by changing the process scheduler slightly. Remember that threads are an integral part of Windows NT. The client thread of the application only needs to call the server thread in the environment subsystem. Because no new process is created, the process scheduler does not need to schedule a new process.

Remember that the Win32 subsystem controls the display. Any time an application wants to write to the display, the Win32 subsystem needs to be called. Instead of calling the Win32 for every screen update, all graphics calls are buffered. For example, if you wanted to create a new window of a particular size, location, background color, and so forth, each of these would require different calls. By waiting and "batching" the calls together, you can save time.

MS-DOS and 16-bit Windows programs actually run as a Win32 process. The program that creates the virtual MS-DOS environments for these to run in (NTVDM.EXE) is a Win32 program. A virtual DOS machine (VDM) is created for the program, so it thinks that it is running on a machine by itself. However, the virtual machine is a normal 32-bit Windows NT process and is subject to the same rules of preemptive multitasking as other programs. However, this only applies to MS-DOS programs. By default, Windows applications share a common virtual machine and are still subject to nonpreemptive multitasking. However, each application can be configured to run within its own virtual machine. When calls are made to the 16-bit Windows API, they are translated into 32-bit calls so that they can be executed by the Win32 subsystem.

Essentially, a VDM emulates an 80286 machine running DOS. MS-DOS applications normally own the machine and can do anything they want, including directly accessing the hardware and even overwriting memory used by the system. Therefore, an extra level of protection is needed. For this reason, Windows NT implemented the VDM mechanism to protect the system from errant MS-DOS applications. When a DOS program running in a VDM "misbehaves," the worst that can happen is that it crashes its own VDM. Other processes are not affected.

Although an MS-DOS application may try to access hardware directly, it cannot: Windows NT prevents this. As with other aspects, Windows NT provides a set of virtual device drivers. Any attempt to access the hardware must pass through this virtual machine. The virtual machine is then able to intercept the calls and process them with the appropriate Windows NT virtual device drivers.

To run 16-bit Windows applications, NT uses a VDM that contains an extra software layer called the Win16 on Win32 (WOW) layer. Although the VDM for Windows shares some of the code for the MS-DOS VDM, all Windows applications share the same VDM. This is done to simulate the environment that 16-bit Windows applications normally run in. Because Windows applications might want to communicate with one another, a single VDM is used. Each 16-bit application runs as a thread of the VDM; however, the WOW layer ensures that only one of these threads is running at any given time.

Along with the benefits that the single VDM brings, it also brings disadvantages. Even though it is running as a thread of the VDM, a 16-bit application can completely take over the VDM, just like it can take over a regular Windows machine. Because it is running as a thread of the VDM, it has access to all the memory of the VDM and therefore to the other 16-bit applications (which are running in that same VDM), just as with a regular Windows machine (except that Windows NT will multitask the VDM itself, just like any other process).

NT Executive

One of the most important aspects of Windows NT is the NT Executive. Here lie most of the functions that we normally call the "operating system." This includes memory and process management, preemptive multitasking, I/O, security, and interprocess communication. Each component of NT Executive provides a set of functions, referred to as "native services" or "executive services." These services form the API to NT Executive. Application programs on Windows NT do not call NT Executive directly; rather, "system service" calls go through another layer of code that maps the user interface to the appropriate Windows NT Executive service.

NT Executive sits on top of the kernel and the Hardware Abstraction Layer (HAL), which then provides the lowest level functions, such as directly accessing the hardware. Because of this separation, the majority of NT Executive code can be hardware independent and therefore easily ported to different hardware platforms.

The HAL is the lowest level of the Windows NT operating system (see Figure 1-3). It is said to "virtualize" hardware interfaces, as the actual interface to the hardware is hidden from the programmer. It is no longer necessary to know the details of a specific type of hardware, but rather just the interface provided by the HAL. This allows Windows NT to be portable from one hardware platform to another.

The HAL is a kernel-mode set of hardware access and manipulating routines. Many are provided by Microsoft, but due to the modular nature of Windows NT, these can also be provided by the hardware manufacturer. The HAL is part of NT Executive and lies at the lowest level between the hardware and the rest of the operating system.

The reason it is called such is that the HAL hides, or abstracts, the details of the hardware behind standard entry points. As a result, different platforms and architectures look alike to the operating system and therefore to the programmer. Because the programming interface is the same, much of the code is the same between hardware platforms. Hardware-dependent details, such as I/O interfaces, interrupt controllers, and multiprocessor communication mechanisms, are hidden and become less of a concern to the programmers.


Figure 1-3: The layers of Windows NT.

This is also a disadvantage to the system administrator, as there is no longer a direct connection to the hardware. Should an application have difficulty accessing a specific piece of hardware, it is not possible to access the hardware without going through the programming interface. Although it may make programming easier, it does make troubleshooting hardware more difficult.

Windows NT does not provide compatibility with device drivers written for MS-DOS or Windows. This does not mean that specific hardware will not work with Windows NT, but rather that a new driver needs to be specifically written for NT. Because the device driver architecture is modular in design, device drivers can be broken up into layers of smaller independent device drivers. Parts of a driver that provide common functionality need only be written once. This code can then be shared among various components, making the kernel smaller.

Because NT Executive runs in kernel mode, it has complete access to all the system resources, including all of memory, and can therefore issue any machine instructions that it wants. All other code, including that of the environment subsystems, runs in user mode and can only access memory that NT Executive has given it permission to access.

When a subsystem calls an executive service, the call is "trapped" and sent to NT Executive. The processor is switched from user to kernel mode so that NT Executive can issue the instructions and access the memory it needs to execute the requested service. Once the service is complete, NT Executive switches the processor back to user mode and returns control to the subsystem.

Note that the kernel is not an independent module but is part of NT Executive. In addition to managing the thread scheduling, the kernel is responsible for many other aspects of the system. The kernel is also treated specially, in that it can neither be paged to disk nor can it be preempted. So that the kernel does not take up too much space, it is made as compact as possible. Also, to keep it from using up too much CPU time, it is made as fast as possible.

Memory Management

Virtual memory on a Windows NT machine is handled by a component of NT Executive called the virtual memory manager. As with other operating systems, the job of the virtual memory manager is to manage the relationship between the memory seen by the application (virtual memory) and the memory seen by the system (physical memory). Because Windows NT is intended to run multiple processes simultaneously, the virtual memory manager must not only protect processes from one another, but it must also protect the operating system from the applications. In addition, the virtual memory manager was designed to be able to run on different hardware platforms, regardless of what memory addressing scheme they use.

One of the major drawbacks of MS-DOS and Windows 3.1x is that they are 16-bit systems. As a result, they need to utilize a segmented memory model, which means memory is accessed in small, 64-Kbyte chunks called segments. Windows NT followed the lead of UNIX vendors by switching to a flat memory model, in which memory is seen as one large chunk. As of this writing, Windows NT is still a 32-bit operating system, meaning that it can access 2^32 bytes, or 4 Gbytes, of memory. (Note that Microsoft has promised a 64-bit version of Windows NT sometime after the initial release of Windows 2000.)

Both the Intel and Alpha families of processors (the only ones for which Windows NT is currently being developed) provide certain protections for the system. When these protections are enabled, the CPU will prevent processes from accessing memory they are not allowed to access. Within the 4-Gbyte flat memory space, the upper 2 Gbytes can only be accessed by code within NT Executive. Because this is also where NT Executive resides, it is safe from all processes running in user mode.

Separate tables are maintained for each process, which translate the virtual memory that the process sees to the physical memory that the system sees. The virtual memory manager is responsible for ensuring that the translation is done correctly. By accessing physical memory only through these tables, the virtual memory manager can ensure that two processes do not have access to each other's physical memory.

What happens when two processes need to access the same physical memory? There are several methods of sharing data between running processes. The most obvious way is a file on the disk somewhere. However, this requires going through the file system and hardware drivers, as well as some mechanism to tell the other process where the data lie. The processes can send signals to one another, but these are extremely limited. The best alternative in many cases is to simply share a region of memory.

The virtual memory manager takes care of this by changing the appropriate entries in the address translation tables so that both processes point to the same location. In addition, the NT Executive provides synchronization objects that the two processes use to ensure that they both get the memory location that they need.
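Here is a minimal sketch of how one process might create such a shared region through the Win32 API, using a file mapping backed by the paging file. The object name "MySharedRegion" and the 4096-byte size are arbitrary choices for illustration; a second process would open the same region by name.

/* A sketch of creating a named, pagefile-backed shared memory region. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    HANDLE map;
    char  *region;

    /* Backed by the paging file (INVALID_HANDLE_VALUE), 4096 bytes, named so
     * that another process can open the same region with OpenFileMapping(). */
    map = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                             0, 4096, "MySharedRegion");
    if (map == NULL) {
        printf("CreateFileMapping failed: %lu\n", GetLastError());
        return 1;
    }

    region = (char *)MapViewOfFile(map, FILE_MAP_ALL_ACCESS, 0, 0, 4096);
    if (region == NULL) {
        printf("MapViewOfFile failed: %lu\n", GetLastError());
        return 1;
    }

    strcpy(region, "hello from process one");   /* visible to the other process */

    UnmapViewOfFile(region);
    CloseHandle(map);
    return 0;
}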

When physical memory runs low, it is the job of the virtual memory manager to free up space. Memory is accessed in units called pages. Although different page sizes are possible on some platforms, Windows NT universally uses a 4-Kbyte page. This helps to make porting to other systems easier, as the Intel i386 family also uses 4-Kbyte pages.

Windows NT uses a temporary storage area on the hard disk to store pages of processes that are currently not running. This is called the page file (sometimes referred to as a paging file). Because both physical memory and the page file are accessed in 4-Kbyte pages, copying pages back and forth is fairly easy.

Changing the Size of the Paging File

Although there is no real substitute for RAM, you can often increase performance by increasing the amount of virtual memory available. This is done by increasing the size of the paging file. Start the System applet in the Control Panel and select the "Performance" tab. This brings you to Figure 1-4.

In the top part you see a list of only the local drives, as you cannot put a paging file on a network drive. By selecting the appropriate drive letter, you can set the minimum and maximum size of your paging file.

If you have multiple hard disks, you can improve performance by having a paging file on each drive. Even though the system is reading from one drive, it can make a request of the second drive. (Note that this only works for SCSI drives, not (E)IDE.)

You can also increase speed by creating a paging file that does not change its size; that is, where the minimum and maximum are the same size. To increase speed even further, this should be done on a partition that has very little fragmentation. You can completely move the paging file to a different partition, defragment the first drive, then move the paging file back. The system will then create a single file that is unfragmented.

Note that you can also change the maximum size of your registry here as well.


Figure 1-4: Virtual memory configuration.

When pages are copied from memory to a page file, the address translation tables for the process are marked to indicate that these pages are not in physical memory. When an application attempts to access a page that is not in physical memory, a fault occurs. The virtual memory manager needs to determine whether the fault was generated because the page was never in physical memory or because it was copied to the paging file. Once the page is copied back into physical memory, the address translation table is marked to indicate its new location.

Windows NT uses another mechanism from UNIX to improve efficiency: mapped file I/O. In essence, this is a process by which a file on the file system is treated as if it were located in memory. All or part of the file is mapped to a range of virtual memory within the process. The process can then access the file as if it were located in memory and does not have to go through the usual steps of opening, accessing, and then closing the file.

Conceptually, there is nothing different between memory-mapped I/O and the use of the paging file. In both cases, a file on the hard disk is treated as an extension of virtual memory. For this reason, mapped file I/O is managed by the Windows NT virtual memory manager.

This process can come in handy when the size of the file to be read is larger than the size of physical RAM and the paging file combined. Normally, the system would not be able to load the entire file. Instead, only those pieces that are currently being used are loaded into memory. The system then loads or writes pages just as it would with the paging file.
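The sketch below shows what mapped file I/O looks like to a Win32 program: the file is mapped into the process's address space and then read as ordinary memory, with pages brought in on demand. The file name "example.txt" is a placeholder, and error handling is kept to a minimum.

/* A sketch of reading a file through a memory mapping instead of ReadFile(). */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE file, mapping;
    const char *view;
    DWORD size;

    /* Open an existing file for reading; "example.txt" is a placeholder name. */
    file = CreateFileA("example.txt", GENERIC_READ, FILE_SHARE_READ, NULL,
                       OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return 1;

    size = GetFileSize(file, NULL);

    /* Map the whole file into the address space of this process. */
    mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    view = (const char *)MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);

    /* The file can now be read like ordinary memory, with no ReadFile() calls. */
    if (view != NULL && size > 0)
        printf("first byte of the file: %c\n", view[0]);

    if (view != NULL)
        UnmapViewOfFile(view);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}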

When deciding what pages to pull into memory, the virtual memory manager uses three policies. The first is called "fetch." This occurs when a page is first accessed. In general, the system will wait until an attempt is made to access a page before it is loaded into memory. Based on the principle of locality, it is likely that the next page (or pages) will be needed as well. Windows NT takes advantage of this principle by loading several pages at once. Because it is likely that subsequent pages will be needed, why wait?

The term "placement" refers to the process by which the virtual memory manager stores a page in physical RAM by loading each page into the first free page in RAM. There is no need to store pages in contiguous locations in RAM. Because there is no physical movement, accessing memory locations is the same no matter where the memory is located. The physical location is given to the address decode circuits on the motherboard, and the contents are returned. Although most modern computers also take advantage of the locality principle and return more than what is actually requested (i.e., pipeline burst cache), the speed at which the memory is accessed is independent of the physical location.

If there is no more room in physical memory (i.e., no free pages), the virtual memory manager has to make room. It uses its "replacement policy" to determine which pages should be replaced. Windows NT uses a very simple replacement scheme: first-in, first-out (FIFO). This means that pages that have been in memory the longest (they were the first ones in) are replaced with newer pages (they are the first to be paged out). This has the disadvantage that pages that were pulled in early but are used regularly have a greater chance of being paged out.
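The following toy simulation, in plain C, illustrates the FIFO policy just described. The number of page frames and the reference string are made up for the example.

/* A toy FIFO page-replacement simulation: the page resident longest is evicted. */
#include <stdio.h>

#define FRAMES 3

int main(void)
{
    int refs[] = { 1, 2, 3, 1, 4, 5 };   /* pages touched by a process */
    int nrefs = sizeof(refs) / sizeof(refs[0]);
    int frames[FRAMES] = { -1, -1, -1 }; /* physical page frames, initially empty */
    int next_victim = 0;                 /* FIFO pointer: the oldest frame */
    int faults = 0, i, j, present;

    for (i = 0; i < nrefs; i++) {
        present = 0;
        for (j = 0; j < FRAMES; j++)
            if (frames[j] == refs[i])
                present = 1;

        if (!present) {                      /* page fault */
            frames[next_victim] = refs[i];   /* evict the oldest page */
            next_victim = (next_victim + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults: %d of %d references\n", faults, nrefs);
    return 0;
}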

The Kernel's Dispatcher and Process Scheduling

The Windows NT dispatcher schedules threads to run on the processor. Windows NT uses a 32-level priority scheme (0–31), where the higher the number, the higher the priority. Only threads that are ready to run can be scheduled, even if a thread that is not ready has the highest priority. For example, if a process were waiting for data from the hard disk, it would not be scheduled to run.

When a process or thread starts up, it has a specific "base priority." The dispatcher adjusts the priority based on the thread's behavior. Interactive threads, such as a word processor, spend a lot of their time waiting (i.e., for actions such as key presses.) Because they regularly give up their time on the CPU voluntarily, the dispatcher may return the favor by increasing their priority. Therefore, when the user finally does press a key, it is responded to almost immediately. Threads that do not give up the processor voluntarily (e.g., large database queries) may have their priority decreased. Which thread actually gets to run is very straightforward: The process that is ready to run and has the highest priority will run.
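The selection rule itself can be expressed in a few lines. The sketch below is not the NT dispatcher, just an illustration of "the highest-priority ready thread runs," using some invented threads and priorities.

/* A toy illustration of picking the highest-priority ready thread. */
#include <stdio.h>

struct thread {
    const char *name;
    int priority;    /* 0-31, higher runs first */
    int ready;       /* 1 = ready to run, 0 = waiting on some event */
};

int main(void)
{
    struct thread threads[] = {
        { "word processor", 9,  0 },   /* waiting for a keystroke */
        { "database query", 7,  1 },
        { "print spooler",  13, 1 },
    };
    int n = sizeof(threads) / sizeof(threads[0]);
    int i, best = -1;

    for (i = 0; i < n; i++)
        if (threads[i].ready &&
            (best < 0 || threads[i].priority > threads[best].priority))
            best = i;

    if (best >= 0)
        printf("dispatcher runs: %s (priority %d)\n",
               threads[best].name, threads[best].priority);
    return 0;
}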

Certain threads are referred to as "real-time" threads, and they run at the same priority for the entire time they are running. Real-time threads have a priority between 16 and 31, which means they are in the upper half (highest priorities) of the scale. These are normally used to control or monitor systems that require responses within precisely measured times.

In contrast with real-time threads, there are "variable priority" threads. As their name implies, their priority can change while they are running. While such a thread is running, the dispatcher can vary its priority by up to two levels above or below its base priority.

The hardware clock generates an interrupt at regular intervals. The kernel dispatcher is called and schedules the thread with the highest priority. However, it is possible that the process that was running gets to continue running. This happens frequently, when a thread is running at high priority and all of the other threads are running at a lower priority. Even if the system were to decrease the priority of that thread, it would continue at that priority for as long as it is active.

Changing Default Task Priority

Although this type of scheduling is simple to program and does well in many cases, it does have limitations. Obviously, a thread with a real-time priority could "hog" the CPU so that no other process can run. This might be fine in some circumstances, but a programmer could decide to put a thread at a real-time priority even though it doesn't need one. The result is that this single thread could take complete control of the system. In reality, the priority calculation is much more complicated than described here; the intent, however, is to allow everyone a fair share of time on the CPU.

Windows NT does allow you to adjust the priority, although this ability is very limited compared with that of UNIX. Changing the default priority is done from the System applet in the Control Panel under the "Performance" tab (see Figure 1-5).

Here you can set the "boost" that foreground (active) applications get. The Windows NT default is that foreground applications get the highest priority, which means that their base priority is increased by two levels.


Figure 1-5: Performance in system applets.

Input/Output

Accessing peripheral devices was one area where the Windows NT designers really had to clean house. The routines used to access hardware in MS-DOS, and therefore Windows 3.x, had some major limitations. Written in 80x86 assembly language, they are not portable at all. Assembly language often makes for faster code, but each time it is brought to a new hardware platform, the code must be rewritten from scratch.

Another problem is the sheer age of the MS-DOS drivers. At the time they were written, most programmers were not thinking along the lines of objects or modules. As a result, there is a lot of repetitive code.

If moved to the Windows NT environment, these drivers would be completely inappropriate. In addition to the problems that existed under DOS, these drivers are not designed to work in a multitasking environment. Windows had to do a lot of work to get these drivers to function correctly, even in the nonpreemptive environment of Windows 3.x. Needless to say, these types of drivers would be totally useless in the multiprocessor environment of Windows NT.

As you may have guessed by now, I/O under Windows NT is composed of several modules. These are coordinated by the I/O manager, which is part of NT Executive. One of the primary functions of the I/O manager is to manage communication between the various drivers. To do this properly, each driver must maintain the standard interface that the I/O manager uses, no matter what physical device is being accessed.

At the lowest level are the device drivers that actually talk to the hardware. These need to know about all of the physical characteristics of the hardware. In few cases can the code be shared, so there is a driver for each device: not only for each kind of device, but a different device driver for each manufacturer and sometimes even for each model.

Above the device drivers are higher-level drivers, which are what user applications are more likely to interface with. When these drivers access hardware, what they are actually doing is passing a logical request. The I/O manager then processes the request and passes it to the appropriate physical device driver.

As previously discussed, Windows NT relies heavily on modular and layered drivers. The Windows NT I/O subsystem also takes advantage of the benefits of this scheme. Within the I/O subsystem, however, the advantages of this modular approach become even more apparent. Because changes in one module rarely have an impact on other modules, new or updated hardware drivers can be implemented without affecting the other layers. A good example of this would be the development of a new kind of hard disk that requires a new driver. The file system driver does not need to be changed; only the driver directly accessing the hardware does. Another example: I have two partitions, one formatted with the DOS File Allocation Table (FAT) file system and the other with the NT File System (NTFS). Both the FAT file system and the NTFS must access the same hardware. Although it is accessed twice (once by each file system driver), the hardware driver need only be loaded once.
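The following sketch illustrates, in plain C, why this layering saves code: two file system drivers call the same low-level disk driver through one shared interface. The interface and the names are hypothetical and are not the actual NT driver model.

/* A hypothetical layered-driver sketch: one disk driver, two file systems. */
#include <stdio.h>

struct block_device {
    const char *name;
    int (*read_block)(long block_no, void *buf);   /* shared hardware driver */
};

static int disk_read_block(long block_no, void *buf)
{
    printf("disk driver: reading block %ld\n", block_no);
    return 0;
}

static struct block_device disk0 = { "Disk0", disk_read_block };

/* Two different file system drivers, one piece of hardware code. */
static void fat_read_file(struct block_device *dev)  { dev->read_block(100, NULL); }
static void ntfs_read_file(struct block_device *dev) { dev->read_block(200, NULL); }

int main(void)
{
    fat_read_file(&disk0);    /* FAT partition */
    ntfs_read_file(&disk0);   /* NTFS partition, same loaded disk driver */
    return 0;
}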

When a process makes an I/O request, it does so through the appropriate environment subsystem, which then issues the request to the I/O subsystem. Upon receipt of the request, the I/O manager creates one or more so-called I/O request packets, which it transfers back and forth between the different driver layers. These packets will be different for different drivers, so it is the responsibility of the I/O manager to know how to create the packets for each driver.

For example, suppose an application wishes to read a file from an NTFS volume on the hard disk. It makes the request of the Win32 environment subsystem, which communicates with the I/O manager. The I/O manager creates the I/O request packet and hands it to the NTFS driver. The NTFS driver then calls back to the I/O manager, whereupon the I/O manager calls the device drivers that actually manipulate the hardware. Each of these layers must send return status to the previous layers (which is done in reverse order) until the status is finally passed back to the original application.

Depending on the application, it can continue working while the I/O request is being processed. This is called asynchronous I/O. In such cases, the I/O manager does not wait for the task to complete but returns to the calling environment subsystem immediately, rather than waiting for the file system and the device driver to complete their work. The request is put into a queue and processed later.

Another advantage of this mechanism is that I/O requests can be handled in a more-efficient order and not necessarily in the order in which they were received. This way, similar requests can be gathered together (e.g., all requests from the hard disk). Once the request has been satisfied, the calling process is signaled that the task has been completed.

This has an obvious advantage when writing to the disk. Because the process relies on the system to write the data to the disk, it can continue working without any problem. However, with reads, the process needs to be a lot more careful. Granted, it could continue working (or perhaps running only a single thread), but it must be sure that the data are available before the process attempts to use it.
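
To make asynchronous I/O concrete, here is a minimal Win32 sketch of an overlapped read. The file name "data.bin" is only an example; the point is that ReadFile returns right away (reporting ERROR_IO_PENDING when the request has been queued) and the program collects the result later.

```
/* Sketch of asynchronous (overlapped) I/O with the Win32 API. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    char buf[4096];
    OVERLAPPED ov = {0};   /* read from offset 0, no event object */
    DWORD read = 0;

    /* ReadFile returns immediately; ERROR_IO_PENDING means the request
       was queued and this thread is free to do other work. */
    if (!ReadFile(h, buf, sizeof(buf), NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING) {
        fprintf(stderr, "ReadFile failed: %lu\n", GetLastError());
        CloseHandle(h);
        return 1;
    }

    /* ... the program could do other work here ... */

    /* Block until the request completes and fetch the byte count. */
    if (GetOverlappedResult(h, &ov, &read, TRUE))
        printf("Read %lu bytes asynchronously\n", read);

    CloseHandle(h);
    return 0;
}
```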

File Systems

Windows NT supports a number of different file systems. Because compatibility with older Microsoft systems was important in the development of Windows NT, it is not surprising that Windows NT supports the MS-DOS FAT file system as well as the OS/2 High-Performance File System (HPFS) and the standard CD-ROM file system (ISO9660). A new file system that Windows NT brings with it is the NTFS, which provides many advanced features.

Compatibility with older systems is not the only reason you would choose one file system over another. The NTFS offers a wide range of advantages over the FAT file system, but there are cases when you might prefer the FAT to the NTFS. Because I support users running Windows for Workgroups as well as Windows NT, I need to have access to both. Therefore, I cannot create an NTFS-only file system. In my case, it is simpler to leave the C: drive as FAT. In addition, on many machines I use, I run different versions of UNIX (SCO, Solaris, Linux); the only file system that is compatible with all three is the FAT.

The File Allocation Table (FAT) File System

To better understand the advantage of the NTFS, we need to compare it with something else we know. Therefore, I am going to first talk about the MS-DOS FAT file system. Although it does offer some advantages, the HPFS in OS/2 is beyond the scope of this book. In addition, the HPFS driver is not included if you install Windows NT 4.0 from scratch; it is present only when you upgrade from 3.51.

The fact that the FAT is a very simple file system is one reason why it is supported by so many operating systems. It is often referred to, although incorrectly, as simply FAT and not as the FAT File System. The actual file allocation table is a structure that is used to manage the file system. Because the file system is characterized by how access is made through the FAT, it is properly referred to as the FAT file system.

The smallest unit of space on a hard disk or floppy is a block. On most media today, whether for a DOS PC or a Sun Workstation, the block size is 512 bytes of data. Due to the nature of the FAT, it manages the blocks in larger groups called clusters. How many blocks are contained within the cluster is dependent on the size of the hard disk. (Note: As a safety measure, there are actually two copies of the FAT in case one gets damaged.)

One of the major limitations of the FAT is its size. The FAT stems from the early days of computers, when the largest number anyone could imagine was 64K. As a result, there is a maximum of 64K (65,536) entries in the FAT, which means there can be, at most, 64K files and directories on a FAT file system. It also means that the larger the hard disk or floppy, the larger the cluster. For example, if the file system were 64 Mbytes, then 64K entries would mean that a cluster would be 1 Kbyte in size. If the file system were 640 Mbytes, each cluster would be 10 Kbytes in size: the file system is 10 times as large, so the clusters need to be 10 times as large. This limitation is based not on the total size of the hard disk but rather on a single partition. (Microsoft increased the limit of the FAT with the FAT32 file system in Windows 95, but FAT32 is not supported by Windows NT.)

Actually, it does not work exactly this way. Cluster sizes come in powers of 2, such as 2 Kbytes, 4 Kbytes, 8 Kbytes, and so forth. For example, I have a 500-Mbyte hard disk with a single partition, and each cluster is 8 Kbytes in size, whereas on the 640-Mbyte file system mentioned above, the cluster would actually be 16 Kbytes. On my 2-Gbyte hard disk, each cluster is 32 Kbytes! If I have a file that is only 10 Kbytes, I lose 22 Kbytes, because the remainder of the cluster is unused. If the file is 33 Kbytes, I lose 31 Kbytes: the entire first 32-Kbyte cluster is used, but only 1 Kbyte of the second, so the last 31 Kbytes are left unused. In one case, converting one 500-Mbyte partition to two 250-Mbyte partitions, I ended up with over 70 Mbytes more free space!
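
This arithmetic is easy to reproduce. The following throwaway sketch simply rounds a file size up to whole clusters and reports the slack, using the 10-Kbyte and 33-Kbyte examples on 32-Kbyte clusters.

```
/* Back-of-the-envelope sketch of FAT cluster slack: how much space a file
   wastes for a given cluster size. Purely illustrative arithmetic. */
#include <stdio.h>

static unsigned long slack(unsigned long file_size, unsigned long cluster)
{
    unsigned long clusters = (file_size + cluster - 1) / cluster; /* round up */
    return clusters * cluster - file_size;
}

int main(void)
{
    /* The 10-Kbyte and 33-Kbyte files from the text, on 32-Kbyte clusters. */
    printf("10 KB file: %lu KB wasted\n", slack(10 * 1024, 32 * 1024) / 1024);
    printf("33 KB file: %lu KB wasted\n", slack(33 * 1024, 32 * 1024) / 1024);
    return 0;
}
```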

Another limitation shows itself when trying to implement the FAT file system in a multiuser environment. There is nothing built into the file system to deal with security. Although I could make a file read-only, anyone with access to the system could change this and then write to the file.

The FAT file system also has a limitation on the names that can be used for files. Each file name is made up of two parts: the name and an extension. The name can be up to eight characters long and can be composed of ASCII characters (with a few exceptions). There is a separator (a dot) between the name and the extension, but it is not actually stored as part of the file name within the FAT. The extension can be up to three characters long, but there does not have to be an extension at all. By convention, this naming scheme is referred to as the "8.3 file naming convention."
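
As a toy illustration of the rule (ignoring the character-set restrictions, which are not checked here), a name fits the 8.3 convention if the base is one to eight characters and the optional extension is at most three:

```
/* Toy check of the "8.3" rule described above. Character-set
   restrictions are deliberately ignored. */
#include <stdio.h>
#include <string.h>

static int is_8dot3(const char *name)
{
    const char *dot = strrchr(name, '.');
    size_t base = dot ? (size_t)(dot - name) : strlen(name);
    size_t ext  = dot ? strlen(dot + 1) : 0;
    return base >= 1 && base <= 8 && ext <= 3;
}

int main(void)
{
    printf("%d\n", is_8dot3("AUTOEXEC.BAT"));     /* 1: fits              */
    printf("%d\n", is_8dot3("LONGFILENAME.TXT")); /* 0: name part too long */
    return 0;
}
```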

If you have a small hard disk (e.g., under 250 Mbytes), then the FAT file system may be useful, because there is little space wasted on administrative overhead. However, as the size of the disk increases, so does the size of the clusters, and you end up losing the space that you would otherwise save.

As I mentioned, the FAT file system is named for its method of organization, the file allocation table (FAT). The FAT resides at the beginning of the volume and essentially contains pointers to the data on the hard disk. The FAT file system has several disadvantages as a result of its design. It was originally designed for small disks and had a simple folder structure. Although more-efficient and therefore more-complex file systems were available at the time, the FAT file system provided the necessary functionality in the least amount of space.

Despite a common misconception, the FAT file system does provide a limited safety mechanism. There are actually two copies of the FAT on the system; should the primary one become damaged, it can be rebuilt using the second. In addition, should the file system become corrupted, the information in both FATs can be compared to return the file system to a reasonable state.

At the beginning of a FAT formatted partition is the partition boot sector, similar to that found on any other file system. Following that are the two copies of the FATs. Following the second FAT is the root directory, followed by all remaining files and directories.

The root directory has the same structure and content as any other directory on the system. The only difference is that the root directory is at a specific location and has a limited (fixed) size: on a hard disk it can contain up to 512 entries, including files and other directories. A directory contains an entry for each file and subdirectory within it. Each entry is 32 bytes long and contains the following information (a C structure sketching this layout follows the list):

  • Name (8.3 format)

  • Attribute byte

  • Create time (24 bits)

  • Create date (16 bits)

  • Last access date (16 bits)

  • Last modified time (16 bits)

  • Last modified date (16 bits)

  • Starting cluster number in the FAT (16 bits)

  • File size (32 bits)
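
As promised above, here is how those 32 bytes might be declared in C. This is a sketch of the classic FAT16 on-disk layout; the field names are mine, and the 24-bit create time appears here as a 10-ms byte plus a 16-bit time field.

```
/* Sketch of the 32-byte FAT directory entry described above
   (FAT16 layout; field names are mine, not Microsoft's). */
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    char     name[8];           /* base name, space padded                 */
    char     ext[3];            /* extension, space padded                 */
    uint8_t  attributes;        /* read-only, hidden, system, volume, ...  */
    uint8_t  reserved;
    uint8_t  create_time_10ms;  /* fine-grained part of the create time    */
    uint16_t create_time;
    uint16_t create_date;
    uint16_t last_access_date;
    uint16_t reserved2;         /* used for the high cluster word on FAT32 */
    uint16_t last_write_time;
    uint16_t last_write_date;
    uint16_t first_cluster;     /* starting cluster number in the FAT      */
    uint32_t file_size;         /* in bytes                                */
} FatDirEntry;
#pragma pack(pop)

/* Compile-time check that the layout really is 32 bytes. */
typedef char fat_entry_size_check[(sizeof(FatDirEntry) == 32) ? 1 : -1];
```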

There is no attempt on the part of the system to organize the FAT file system. Files are allocated on a first-come, first-served basis. Over time, as files are removed and new ones are created, files become scattered around the disk as the system searches for free space in which to write them. The result is that the file system becomes "fragmented." This happens as a result of the FAT file system design. As mentioned, the starting cluster for each file is contained within the directory. The entry in the FAT that corresponds to that cluster contains a pointer to the next cluster, whose FAT entry in turn points to the next cluster, and so on. The last cluster contains the value 0xFFFF, which indicates that it is the last cluster.
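
Following such a chain looks roughly like the sketch below, which uses a small in-memory FAT16 table. (The end-of-chain markers are actually the range 0xFFF8 through 0xFFFF; the 0xFFFF mentioned in the text is one of them.)

```
/* Sketch of following a FAT16 cluster chain: starting from the cluster
   stored in the directory entry, each FAT slot names the next cluster
   until an end-of-chain marker is reached. */
#include <stdint.h>
#include <stdio.h>

#define FAT16_EOC 0xFFF8   /* values >= this mark the last cluster */

static void print_chain(const uint16_t *fat, uint16_t first_cluster)
{
    uint16_t c = first_cluster;
    while (c < FAT16_EOC) {
        printf("cluster %u -> ", c);
        c = fat[c];            /* the FAT entry points to the next cluster */
    }
    printf("end of file\n");
}

int main(void)
{
    /* Toy FAT: a file occupying clusters 2 -> 3 -> 7. */
    uint16_t fat[16] = {0};
    fat[2] = 3;
    fat[3] = 7;
    fat[7] = 0xFFFF;
    print_chain(fat, 2);
    return 0;
}
```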

To support long file names, Windows NT (as well as Windows 95) uses the attributes in a way that does not interfere with MS-DOS. When a file is created with a long file name, Windows NT creates the standard 8.3 file name but uses additional directory entries to store the long file name, one entry for each 13 characters in the name. Based on how the volume, read-only, system, and hidden attributes are set in the subsequent directory entries, Windows NT determines the long name of the file. (Generally, if all four attributes are set, MS-DOS will ignore these entries.)

In addition, Windows NT can create names that do not adhere to the other MS-DOS limitations, such as which characters can be used, because the names stored in the subsequent directory entries are in Unicode. Also, both Windows NT and Windows 95 use the same algorithm to create long file names. Therefore, if you have a dual-boot system (where you can boot either Windows 95 or Windows NT), both will be able to read the long file names on a FAT partition.

NTFS

The NTFS offers some significant advantages over the FAT file system. In addition to increased reliability and performance, the NTFS does not suffer from the limitations of the FAT file system. In addition, the NTFS supports more file attributes, such as those related to security, and was designed to allow additional attributes that were not thought of when the file system was first designed.

The NTFS is said to be "recoverable" because it keeps track of activity on the hard disk. These activities are referred to as "transactions" and are kept in a transaction log. This makes recovering from a system crash or other problem simple, in that the system need only check the transaction log and use it for recovery: the system is "rolled back" to the last "commit" point, which is a place where the system wrote everything to disk. When a check is performed on a FAT file system, by contrast, the consistency of pointers within all of the directories is checked, along with the file tables and the actual allocation. For example, it checks that entries in all directories actually point to data on the disk.

One of the most welcome additions in the NTFS was the extension of the maximum file name length to 255 characters, along with the ability to use new characters such as spaces. However, the NTFS still does not support all the functionality of most UNIX file systems. For example, the NTFS will generally preserve case when you create a new file, but it is still case insensitive. Therefore, under NT, FILE.TXT and file.txt are the same file, whereas they would be different files under UNIX, as they should be.

An interesting aspect of the NTFS is the fact that all space on the partition that is in use is part of a file. This includes the bootstrap information and the system files used to implement the NTFS structure. When the partition is formatted for an NTFS, it is said that it contains an NTFS "volume."

The Master File Table (MFT) is the central management unit on an NTFS and contains at least one record for each file on the volume, including one for itself. Each record is 2 Kbytes in size, which means that there is a lot more space necessary on an NTFS just to manage the files. This makes an NTFS appear very much like a relational database. Each file is identified by a file number, which is created from the position of the file in the MFT. Each file also has a sequence number and a set of attributes.

Another interesting aspect is that the NTFS uses a clustering scheme similar to that of the FAT, with cluster sizes that are always multiples of the sector size on the physical medium. One important difference is that you are not limited to 64K entries in a table. Instead, you can choose a cluster size of 512, 1024, 2048, or 4096 bytes. Therefore, prior to formatting the disk, you can give some consideration to the kind of data that will be stored there. If you expect a lot of small files, you should consider a small cluster size. However, a larger cluster size means access times are reduced, because there are less data to search.

The file system driver accesses files by their cluster number and is therefore unaware of the sector size or any other information about the physical drive. When accessing the disk, the logical cluster number is multiplied by the cluster factor. This makes accessing the files on the disk faster.

The NTFS boot sector is located at the beginning of the volume, with a duplicate located in the middle of the volume. It contains various information about the volume, including the start locations of the Master File Table (MFT) and the Master File Table Mirror (MFT2) as well as the number of sectors in the volume.

Unlike the directory entries of many other file systems, such as the FAT, the file name is an attribute of the file and not just an entry within a directory. In fact, a file on the NTFS can have more than one file name attribute; this is how it can have hard links like those available on UNIX systems. However, hard links are only supported by the POSIX environment subsystem, so they are not available from most Windows applications. Among the other information contained within the file are a header, standard information, a security descriptor, and the data. The structure of the MFT is shown in Figure 1-6.

IMAGE OF MFT

  • Header (H)

  • Standard Information (SI)

  • Security Descriptor (SD)

  • File Name (FN)

  • Data

Let's assume we have a file that is approximately 1500 bytes long. This size, added to the information stored with the MFT record, is less than the 2-Kbyte size of the record. Therefore, the data together with the attributes can be stored within a single MFT record.

The first 16 records of the MFT are reserved for special system information, the first of which describes the MFT itself. The second record is an MFT record that points to a mirror of the MFT, in case the first becomes corrupt. The locations of both the MFT and the MFT mirror are stored within the boot record. To increase recoverability even further, a duplicate of the boot sector is located at the logical center of the disk.

Starting with the seventeenth record, you have the records that point to the files within the file system. Like many other systems, the NTFS sees directories as files that just happen to have a specific structure. Instead of what we humans consider to be data, the data for a directory are indexes into other locations within the MFT. Like files, small directories are kept within the data portion of the MFT record (see Figure 1-7). With larger directories, the indexes point to other records, which in turn point to the actual file entries.

This scheme provides some speed enhancements over the FAT and many UNIX file systems for smaller files. In both the FAT and UNIX file systems, finding the data is a two-step process: first the system locates the appropriate entry for the file (such as in the inode table under UNIX), and then it must find the physical location on the hard disk. With the NTFS, the moment the entry in the MFT is found, the data are located as well.


Figure 1-6: The Windows NT MFT.


Figure 1-7: MFT record for a small file or directory.

If the data cannot fit into a single record, the data portion of the record contains pointers to the actual data on the disk. Each pointer, called a virtual cluster number (Vcn), points to the first cluster in what is called a data run (or simply a run) and records the number of contiguous clusters in that run. If the file is very large, there may be so many runs that pointing to all of them is more than can fit in the data record. In that case, the data record points to another MFT record, which then points to the data.

NTFS Attributes

Each NTFS file consists of one or more attributes. Each attribute consists of an attribute type, length, value, and optionally an attribute name. Attributes are ordered by attribute type, and some attribute types can appear many times.

With the NTFS, there are both system-defined and user-defined attributes. It is through these attributes that every aspect of the file is accessed, including the data. This is because file data are just an attribute of the file. This data attribute contains what we normally think of as data as well as the normal file sizes and sizes for all attributes.

System-defined attributes are defined by the NTFS volume structure and have fixed names and attribute type codes. The name and format of user-defined attributes are determined solely by users, with the attribute type codes established uniquely for each NTFS volume.

Attributes are stored either as resident or nonresident. When an attribute is stored within the MFT record, it is considered resident. However, it can happen that the attribute is too large to fit within the MFT. In this case, the attribute is stored elsewhere and therefore considered nonresident.

Some System-Defined Attributes

  • **Attribute List—**defines the valid attributes for this particular file

  • **File Name—**contains the long file name and the file number of the parent directory file. Must always be a resident attribute

  • **MS-DOS Name—**contains the 8.3 file name as well as the case-insensitive file name

  • **Security Descriptor—**attribute that contains the file's Access Control List (ACL) and audit field (activities on this file to be audited)

  • **Reference Count—**similar to the reference count in UNIX; this is basically the number of "directories" that contain this particular file

Security on the NTFS

Like other aspects of Windows NT, security information for the NTFS is stored in ACLs. Each entry in an ACL is referred to as an Access Control Entry (ACE). ACEs are ordered (sorted) first by those that deny access, then by those that grant access. In this way, if you are denied access to any object by an ACE, it will be found first before any that grant you access. In addition, you need to keep in mind that "deny" will always override "allow" if there is ever a conflict.
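
You can see this ordering for yourself by walking a file's DACL with the Win32 security API. Here is a rough sketch; the file name is only an example, and the program must be linked with Advapi32.lib.

```
/* Sketch: list the ACEs in a file's DACL in the order Windows stores them.
   The path is only an example. Link with Advapi32.lib. */
#include <windows.h>
#include <aclapi.h>
#include <stdio.h>

int main(void)
{
    PACL dacl = NULL;
    PSECURITY_DESCRIPTOR sd = NULL;

    if (GetNamedSecurityInfoA("C:\\example.txt", SE_FILE_OBJECT,
                              DACL_SECURITY_INFORMATION,
                              NULL, NULL, &dacl, NULL, &sd) != ERROR_SUCCESS) {
        fprintf(stderr, "could not read security information\n");
        return 1;
    }

    ACL_SIZE_INFORMATION info;
    if (dacl && GetAclInformation(dacl, &info, sizeof(info), AclSizeInformation)) {
        for (DWORD i = 0; i < info.AceCount; i++) {
            ACE_HEADER *ace;
            if (GetAce(dacl, i, (LPVOID *)&ace))
                printf("ACE %lu: %s\n", i,
                       ace->AceType == ACCESS_DENIED_ACE_TYPE ? "deny" : "allow");
        }
    }

    LocalFree(sd);
    return 0;
}
```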

When an object is created, one or more ACLs are usually assigned to that object. Windows NT uses a scheme of "ACL inheritance," which is intended to allow access control information to be passed on to subsequent objects. Using the concept of containers, a Windows NT ACL can be thought of as three separate ACLs:

  1. Effective ACL—ACL pertaining to the container object

  2. Object Inherit ACL—ACL to be inherited by sub-noncontainer objects

  3. Container Inherit ACL—ACL to be inherited by subcontainer objects, such as subdirectories

When a new object is created, what access is given will be determined by a couple of things. First, what kind of object it is. If the new object is a noncontainer (i.e., a file), it may have a different ACL than if it were a container (i.e., a directory). For example, when a new file is created, the effective ACL of the file is the same as the Object Inherit ACL. Because the new object is not a container, the Container Inherit ACL has no effect (not inherited).

If the new object is a container (i.e., directory), the process is slightly more complicated. The Object Inherit ACL is simply transferred to the new object. That is, the Object Inherit ACL of the parent object becomes the Object Inherit ACL of the child. The Container Inherit ACL will become both the Effective and Container Inherit ACL for the new container. Essentially, this means that all directories and subdirectories end up with the same ACL (provided no one changes things).

Although it seems that there are three separate ACLs for each object, there is actually only one. Each ACE can be defined to have different inheritance characteristics. That is, it can be marked for no inheritance or inheritance by subcontainers, noncontainer objects, or both.

In general, children inherit the properties of their parent, both when an object is newly created and when it is copied. For example, if you are inside MS Word and create a new file, it inherits the security properties of the directory. If you create a new directory inside MS Word (or something like Windows Explorer), the new directory also inherits the permissions. Copied files and directories inherit the permissions of their new parent, because they are new objects. However, if files or directories are moved, they retain their original permissions.

NTFS Strategy: Recoverable File System

There are several mechanisms within the NTFS that help ensure recoverability in the event of a system crash. To aid in this, the NTFS is transaction based. A transaction is a collection of smaller actions; either all of them are completed or none are. Each action on the system that modifies it in some way is considered a transaction. Transactions are logged, which helps in recovering the file system.

When a file is changed, the Log File Service logs all redo and undo information for that transaction. Redo information is essentially the information needed to repeat the action, and undo information is the information needed to make the system think that nothing has happened.

After a transaction is logged, it is passed to the Cache Manager, which checks the Memory Manager for free memory resources. If the resources are available, the Cache Manager sends the transaction instructions to the NTFS to make the requested changes to the file. If not, the Memory Manager needs to free up resources.

If the transaction completes successfully, the transaction (file update) is committed. If the transaction is incomplete, the NTFS will roll back the transaction by following the instructions in the undo information. If the NTFS detects an error in the transaction, the transaction will be ended and then also rolled back using the undo information.

The cache is used to slightly speed up performance. Keep in mind the logging requires a certain amount of time and therefore slightly decreases system performance. Windows NT uses something called a "lazy commit," which is similar to a "lazy write." Instead of writing the commitment information to the disk, it is cached and written to the log as a background process. If the system crashes before the commit has been logged, when the system restarts the NTFS checks the transaction log to see whether or not it was successfully completed. If not, the NTFS will undo the transaction.

At regular intervals (every 8 seconds), the NTFS checks the status of the lazy writer in the cache and marks a checkpoint in the log file. If the system crashes after the checkpoint, the system knows to go back only as far as that specific checkpoint for recovery purposes. Because the system does not have to figure out how far the transactions had progressed, checkpointing results in faster recovery times.

NTFS Advantages

The NTFS file system is best for use on partitions of about 400 MBytes or more, because performance does not degrade with larger partition sizes under the NTFS as it does under the FAT file system.

The NTFS enables you to assign permissions to individual files, so you can specify who is allowed various kinds of access to a file or directory. The NTFS offers more permissions than the FAT file system, and you can set permissions for individual users or groups of users. The FAT file system only provides permissions at the directory level, and FAT permissions either allow or deny access to all users. However, there is no file encryption built into the NTFS. Therefore, someone can start the system under MS-DOS or another operating system and then use a low-level disk editing utility to view data stored on an NTFS partition.

The recoverability designed into the NTFS file system is such that a user should seldom have to run any disk repair utility on an NTFS partition. In the event of a system crash, the NTFS uses its log file and checkpoint information to automatically restore the consistency of the file system.

Even with these benefits, the NTFS is still subject to fragmentation. Unlike memory, if parts of files are stored in different physical locations on the hard disk, it takes longer to read the file. Unfortunately, Microsoft has not provided any defragmentation software for Windows NT, which is available in abundance for Windows 95/98.

Fortunately, Executive Software (https://www.executive.com) provides a solution with their Diskeeper products. There are both workstation and server versions of the software, which can be configured to automatically defragment the system as it is running. The functionality of Diskeeper is so well proven that Microsoft will reportedly include it as a standard component of Windows 2000. A lite version is provided on the accompanying CD for you to try out.

The System Registry

In Windows 3.1, configuration information was stored in various files spread out all over the system. Although they generally had a common extension (.INI), it was often difficult to figure out exactly which file was responsible for which aspect of the configuration. Windows NT changed that by introducing the system registry, a central repository for all of the system's configuration information.

The registry is a kind of hierarchically organized database. It is divided into several groups of entries, which are referred to as hives. These contain keys, which themselves can contain subkeys, and it is within these keys and subkeys that the information is actually stored. The function of each hive is shown in Table 1-1.

Each hive is contained in a separate file as well as in a single .log file. Normally, the system-related hives are found in the directory %SYSTEMROOT%\system32\config and the user-related hives are in %SYSTEMROOT%\profiles\username (where username is the name of the respective user; see Table 1-2). The system hives have the names DEFAULT, SAM, SECURITY, SOFTWARE, and SYSTEM. The following table lists the hives and the files in which they reside.

Table 1-1. Function of the Registry Hives

Name of the Primary Key    Description
HKEY_LOCAL_MACHINE         Contains information about the local machine
HKEY_CLASSES_ROOT          Contains information about file association and Object Linking & Embedding (OLE)
HKEY_CURRENT_USER          Contains information about the current user
HKEY_USERS                 Contains information for all user profiles, including the current user
HKEY_CURRENT_CONFIG        Contains information about the current hardware configuration

The system administrator can specify the amount of space that the registry uses on the hard disk. Sometimes the default of 2 Mbytes is insufficient, so the space needs to be increased. This is only really necessary on the domain controllers, because they contain the user account information, which grows a lot faster than other configuration information. When the system gets close to using up the allocated space, it generates a message that gives you enough time to make the necessary changes.

When the system is first installed, a base copy of the registry is loaded, and changes are made based on what configuration options you chose. Each time the system is changed, such as when hardware or software is installed, the registry gets changed. Even when you boot your system, the NTDETECT.COM program can make changes to the registry.

Viewing the Registry

The registry can be viewed or changed directly using REGEDT32.EXE (Figure 1-8). Note that each hive controls many different aspects of your system. If you must edit the registry by hand, make sure you make a copy before you start.

Each key consists of three components: the key name, the data type, and the value. How long the value can be and what format it takes depend on the data type. You can have such data types as binary data, text strings, and multipart text.
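
Programs read these keys and values through the registry API. The following is a small sketch; the key shown is a standard one under HKEY_LOCAL_MACHINE, but treat the exact value name as an example, and link with Advapi32.lib.

```
/* Sketch of reading a key/type/value triple from the registry. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    char buf[256];
    DWORD type = 0, size = sizeof(buf);

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion",
                      0, KEY_READ, &key) != ERROR_SUCCESS) {
        fprintf(stderr, "could not open key\n");
        return 1;
    }

    if (RegQueryValueExA(key, "CurrentVersion", NULL, &type,
                         (LPBYTE)buf, &size) == ERROR_SUCCESS &&
        type == REG_SZ) {
        buf[sizeof(buf) - 1] = '\0';   /* be safe about termination */
        printf("CurrentVersion = %s\n", buf);
    }

    RegCloseKey(key);
    return 0;
}
```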

Table 1-2. Registry Hives and Their Associated Files

Registry Hive                    File Name
HKEY_LOCAL_MACHINE\SAM           Sam and Sam.log
HKEY_LOCAL_MACHINE\SECURITY      Security and Security.log
HKEY_LOCAL_MACHINE\SOFTWARE      Software and Software.log
HKEY_LOCAL_MACHINE\SYSTEM        System and System.log
HKEY_CURRENT_USER                Ntuser.dat and Ntuser.dat.log
HKEY_USERS\.DEFAULT              Default and Default.log


Figure 1-8: Registry Editor.

Warning: Be extremely careful when you make changes directly to the registry, because you can make your system nonbootable.

Starting Windows

The process of turning on your computer and having it jump through its hoops to bring up the operating system is called booting. This derives from the term bootstrapping. This is an allusion to the idea that a computer pulls itself up by its bootstraps, in that smaller pieces of simple code start larger, more complex ones to get the system running.

The process a computer goes through is similar across the different computer types, whether it is a PC, a Macintosh, or a SPARC workstation. In the next section, I will be talking specifically about the PC. However, the concepts are still valid for other machines.

The very first thing that happens is the Power-On Self-Test (POST). Here, the hardware is checking itself to see that things are all right. One thing that is done is to compare the hardware settings in the CMOS (Complementary Metal-Oxide Semiconductor) to what is physically on the system. Some errors, like the floppy types not matching, are annoying, but your system still can boot. Others, like the lack of a video card, can keep the system from continuing. Oftentimes, there is nothing to indicate what the problem is, except for a few little "beeps." On some systems, you can configure the system to behave differently for different errors or problems it detects in the CMOS.

Once the POST is completed, the hardware jumps to a specific, predefined location in RAM. The instructions that are located here are relatively simple and basically tell the hardware to go look for a boot device. Depending on how your CMOS is configured, first your floppy is checked and then your hard disk.

When a boot device is found (let's assume that it's a hard disk), the hardware is told to go to the first sector of the disk (cylinder 0, head 0, sector 1), then load and execute the instructions there. This is the master boot record (MBR), sometimes called the master boot block. This code is small enough to fit into one sector but is intelligent enough to read the partition table (located just past the boot code, at the end of the same sector) and find the active partition. Once it finds the active partition, it begins reading and executing the instructions contained within the first block of that partition.
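
The layout the boot code works with can be sketched as a C structure: 446 bytes of boot code, four 16-byte partition-table entries, and a two-byte signature. The field names are mine.

```
/* Sketch of the 512-byte master boot record layout described above. */
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  boot_flag;       /* 0x80 marks the active partition      */
    uint8_t  start_chs[3];    /* cylinder/head/sector of first sector */
    uint8_t  type;            /* partition type (e.g., FAT, NTFS)     */
    uint8_t  end_chs[3];      /* cylinder/head/sector of last sector  */
    uint32_t start_lba;       /* first sector, counted from 0         */
    uint32_t sector_count;    /* size of the partition in sectors     */
} PartitionEntry;

typedef struct {
    uint8_t        code[446];      /* master boot code      */
    PartitionEntry partitions[4];  /* the partition table   */
    uint16_t       signature;      /* 0xAA55                */
} MasterBootRecord;
#pragma pack(pop)

/* Compile-time check that the layout is exactly one sector. */
typedef char mbr_size_check[(sizeof(MasterBootRecord) == 512) ? 1 : -1];
```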

It is at this point that viruses can affect/infect a Windows NT system. The master boot block is the same format for essentially all PC-based operating systems. All the master boot block does is to find and execute code at the beginning of the active partition. Instead, the master boot block could contain code that told it to go to the very last sector of the hard disk and execute the code there. If that last sector contained code that told the system to find and execute code at the beginning of the active partition, you would never know anything was wrong.

Let's assume that the instructions at the very end of the disk are larger than a single 512-byte sector. If they took up a couple of kilobytes, you could get some fairly complicated code. Because it is at the end of the disk, you would probably never know it was there. What if that code checked the date in the CMOS, and if the day of the week was Friday and the day of the month was the 13th, it erased the first few kilobytes of your hard disk? If that were the case, then your system would be infected with the Friday the 13th virus, and you could no longer boot your hard disk.

Viruses that behave in this way are called "boot viruses," because they affect the master boot block and can only damage your system if this is the disk you are booting from. These kinds of viruses can affect all PC-based systems. Some computers will allow you to configure the CMOS so that you cannot write to the master boot block. Although this is a good safeguard against older viruses, the newer ones can change the CMOS to allow writing to master boot block. So, just because you have enabled this feature, it does not mean your system is safe. Therefore, you need to be especially careful when booting from floppies.

Now back to our story. . . .

As I mentioned, the code in the master boot block finds the active partition and begins executing the code there. On an MS-DOS system, these are the IO.SYS and MSDOS.SYS files. On an NT system, this is the NTLDR, which reads the contents of the BOOT.INI file and provides you with the various boot options. In addition, it is also the responsibility of the NTLDR program to switch x86-based CPUs from real mode to 32-bit, protected mode.

If you choose to boot into DOS or Windows at this point, the NTLDR program loads the file BOOTSECT.DOS and runs it. This file is a copy of the boot sector that existed before you installed Windows NT (assuming you installed NT on top of DOS or Windows). If you choose to boot Windows NT, NTLDR loads the program NTDETECT.COM, which is responsible for detecting your hardware. If you have configured multiple hardware profiles, it is NTDETECT.COM that presents you with the choices. Next, NTDETECT.COM loads the program NTOSKRNL.EXE and passes it the information it has collected about your hardware. However, on RISC-based computers, the OSLOADER carries out all of these functions.

About the Author

Jim Mohr is currently responsible for supporting over 1,000 Windows users worldwide. He is author of several books, including UNIX Web Server Administrator's Interactive Workbook and Linux User's Resource (Prentice Hall PTR).

We at Microsoft Corporation hope that the information in this work is valuable to you. Your use of the information contained in this work, however, is at your sole risk. All information in this work is provided "as-is", without any warranty, whether express or implied, of its accuracy, completeness, fitness for a particular purpose, title or non-infringement, and none of the third-party products or information mentioned in the work are authored, recommended, supported or guaranteed by Microsoft Corporation. Microsoft Corporation shall not be liable for any damages you may sustain by using this information, whether direct, indirect, special, incidental or consequential, even if it has been advised of the possibility of such damages. All prices for products mentioned in this document are subject to change without notice.
