Physical and Virtual Memory in Windows 10

Anonymous
2015-11-11T12:21:14+00:00

System Memory:

Just as a movie ticket acts as a controlling agent between demand and the seats in a theatre, virtual memory issues tickets to processes that must occupy slots in physical memory (RAM). If physical memory is running at full capacity, then processes holding a committed ticket in virtual memory must wait until some processes exit physical memory and make room for the new processes to enter.

A process is essentially a set of tasks contained in a program, one or more of which executes through a sequence of instructions called threads. Processes run on the operating system, and the operating system manages the hardware resources for all running processes. The operating system provides a virtual memory to every process, while the processes ultimately run in physical memory. The address space of the virtual memory in bytes is 2^(address bits), where the number of address bits is determined by the system architecture (32-bit or 64-bit). The address space of the physical memory depends on the amount of RAM installed. The virtual memory manager of the operating system applies a method called Paging to map the virtual address space to the physical address space, in such a way that every process gets to run in physical memory. If physical memory is large enough to accommodate all processes, the committed virtual address space is mapped to the physical address space at once. If physical memory is smaller than the combined memory demand of all processes, the virtual memory manager loads processes into physical memory in turn, waits for a process to end, then writes a new process over that physical memory, and repeats this until all processes committed to virtual memory have executed.

The operating system (OS) acts as a platform: it interfaces with all processes through its virtual memory, and it runs those processes on the hardware by mapping their virtual memory to physical memory. This mandates that all processes be designed for a specific OS, and prevents processes from communicating directly with the hardware, which remains shielded and under the sole control of the OS.

This makes processes hardware independent, meaning that a process designed for a particular OS will work irrespective of the hardware platform on which that OS is installed.

The virtual memory manager of the operating system uses two paging techniques, Disc Paging and Demand Paging, to overcome the space limitation of physical memory.

Disc Paging extends the computer's physical memory (RAM) by reserving space on the hard disc, called the Page File, which the processor views as non-volatile RAM. When there is not enough memory available in RAM to allocate to a new process, the virtual memory manager moves data from RAM to the Page File. Moving data to the Page File frees up RAM, making room for the new process to complete its work. However, accessing data from the Page File degrades system performance considerably. Disc Paging is of use mainly on systems with limited physical memory, which would otherwise not allow a large process or multiple processes to run concurrently. Increasing virtual memory will not improve performance on such a system.
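The figures Disc Paging works with can be read programmatically. Below is a minimal sketch (not part of the original article) using the documented Windows GlobalMemoryStatusEx call; the commit limit it reports is roughly physical memory plus the current Page File, so comparing it with the physical total shows how far the Page File extends RAM on a given machine.

```c
/* Minimal sketch: reading the sizes Disc Paging works with via GlobalMemoryStatusEx. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);                       /* must be set before the call */
    if (!GlobalMemoryStatusEx(&ms)) {
        fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Physical memory total : %llu MB\n", ms.ullTotalPhys     / (1024 * 1024));
    printf("Physical memory free  : %llu MB\n", ms.ullAvailPhys     / (1024 * 1024));
    printf("Commit limit (RAM+PF) : %llu MB\n", ms.ullTotalPageFile / (1024 * 1024));
    printf("Commit available      : %llu MB\n", ms.ullAvailPageFile / (1024 * 1024));
    printf("Memory load           : %lu %%\n",  ms.dwMemoryLoad);
    return 0;
}
```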

When a process references a virtual memory page that is on the disc because it has been paged out, the referenced page must be paged in, and this may cause one or more other pages to be paged out using a page replacement algorithm. If the CPU becomes so busy swapping pages that it cannot respond to user requests, the system grinds to a halt; this state is called Thrashing. If thrashing occurs repeatedly, the only real solution is to add more RAM.

The concept of Disc Paging evolved when systems had 256 MB of memory or less. In modern systems with a sufficient amount of memory (≥ 4 GB), use of the Page File is largely limited to caching pre-fetched data. However, with the advent of mobile devices, Disc Paging has again become an attractive way to extend RAM using ultra-fast solid-state drives.

Demand Paging is the key to using physical memory when a number of processes must run with a combined memory demand exceeding the available physical memory. This is achieved by segmenting each process into smaller tasks: instead of allowing the entire process into memory, only the threads corresponding to the specific task required for immediate processing are loaded. Services that run in the background are typically threads of a larger process. The exception is graphics and hardware drivers, for which memory is reserved and is not available to the memory manager.

Windows Memory Representation

Resource Monitor and Task Manager screenshots

The amount of physical memory used by a process is called its Working Set. The Working Set of a process comprises its Private working set and its Sharable working set, both of which are owned by that process. The Private working set is the physical memory in use for tasks dedicated to the process. The Sharable working set is the physical memory in use by the process for tasks that can be shared with other processes.

Working Set of a process = Private Working Set + Sharable Working Set.

For example, when you open a Word document, winword.exe lives in the Sharable working set while the contents of the document are put in the Private working set. Thus several Word documents opened simultaneously can share the same winword.exe while their data remains private and distinct. Shared system resources such as DLLs and system drivers (kernel/OS .sys files) exist solely as shared processes with no private working set. Tasks and Services comprise the executable processes (.exe), which are primarily private working sets with some sharable resources.

The Commit and Working Set of all running processes can be viewed on the Memory tab of the Resource Monitor. The Commit is the amount of virtual memory reserved by the operating system for a particular process. This amount is only an estimate of the Page File size needed by the process, and is not allocated until it becomes necessary for the system to page out the Private working set of the process from physical memory to the Page File. It is for this reason that the Commit size is normally larger than the Private working set of a process.
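As an illustration of the Commit versus Working Set distinction, the sketch below (an assumption-laden example, not from the article) queries the calling process's own counters through the documented GetProcessMemoryInfo API; WorkingSetSize corresponds to the Working Set column and PrivateUsage roughly to the Commit column of Resource Monitor.

```c
/* Minimal sketch: a process inspecting its own Working Set and Commit.
 * Link with psapi.lib on older toolchains. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PROCESS_MEMORY_COUNTERS_EX pmc;
    if (!GetProcessMemoryInfo(GetCurrentProcess(),
                              (PROCESS_MEMORY_COUNTERS *)&pmc, sizeof(pmc))) {
        fprintf(stderr, "GetProcessMemoryInfo failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Working set : %zu KB\n", pmc.WorkingSetSize / 1024);
    printf("Commit      : %zu KB\n", pmc.PrivateUsage   / 1024);  /* normally larger */
    return 0;
}
```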

The sum of all process Commits therefore indicates the virtual memory demand on the system, which would need to be met from the Page File. However, the Page File can be configured much smaller than this demand, because the system would essentially become non-functional long before the private working sets of all running processes had to be paged out.

Virtual Memory or Page File size can be viewed/modified through System Properties > Advanced tab > Performance Settings > Advanced tab > Virtual Memory section (left screenshot, below). Perhaps the best decision is to allow the system to manage this for itself.

The modules in the Sharable working set are not shown in the Resource Monitor. You can view the list of running DLLs and system drivers through System Information (Run > msinfo32) > Software Environment (left menu) > System Drivers (kernel and OS drivers), Loaded Modules (DLLs), Running Tasks (the processes seen in Resource Monitor) and Services (contained in Running Tasks).

Tip: You can manually copy the entire process list in the Memory tab of the Resource Monitor and paste it into an Excel worksheet for analysis. However, these values change rapidly, so any cross-reference to the static memory values in System Information should be captured at the same point in time.

The Committed value (1.7 GB) seen on the Task Manager > Performance tab > Memory section is the sum of the working sets of all running processes plus the amount of data cached in the Page File; in other words, the virtual memory in use.

Total Virtual Memory = Total Physical + Page File Size

= 3800 + 1024 = 4824 MB = 4.71 GB.

Total Virtual Memory is also referred to as Commit Size Limit.

Virtual Memory usage can be seen through System Information (Run > msinfo32), right screenshot above:

Total Virtual Memory = Committed (Virtual In Use) + Available Virtual Memory

Or, Committed (Virtual In Use) = Total Virtual – Available Virtual

= 4.71 GB – 2.96 GB = 1.75 GB (as seen on Task Manager).
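These system-wide figures can also be read programmatically. The sketch below (illustrative only) uses the documented GetPerformanceInfo API, whose CommitTotal and CommitLimit counters correspond to the Committed and Total Virtual Memory values above; it also reports the kernel Paged and Non-paged pool sizes discussed further down.

```c
/* Minimal sketch: system-wide commit and kernel pool figures via GetPerformanceInfo.
 * Counters are in pages, so multiply by PageSize. Link with psapi.lib on older toolchains. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PERFORMANCE_INFORMATION pi;
    if (!GetPerformanceInfo(&pi, sizeof(pi))) {
        fprintf(stderr, "GetPerformanceInfo failed: %lu\n", GetLastError());
        return 1;
    }
    unsigned long long page = pi.PageSize;
    printf("Commit Total (Virtual In Use) : %llu MB\n", pi.CommitTotal    * page / (1024 * 1024));
    printf("Commit Limit (Total Virtual)  : %llu MB\n", pi.CommitLimit    * page / (1024 * 1024));
    printf("Kernel Paged pool             : %llu MB\n", pi.KernelPaged    * page / (1024 * 1024));
    printf("Kernel Non-paged pool         : %llu MB\n", pi.KernelNonpaged * page / (1024 * 1024));
    return 0;
}
```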

The term Virtual Memory in Windows is actually used in more than one sense. In memory representation, in the strict technical sense, Virtual Memory = Physical Memory + Page File, although the term is often used as a synonym for the Page File alone.

Virtual Memory also refers to the operating system's abstraction of memory into a per-process directory, where each folder is a collection of Pages (4 KB segments) of a process working set scattered in memory, with the underlying page location data for each process residing in a distinct Page Table. This Page Table is a sequential ordering of the pages of a process working set, where pages are identified by an assigned index ranging from 0 up to 2^(address bits), and each page entry specifies the arbitrary location of that page in physical memory. This enables the OS software to index sequentially through the pages of a process and execute its code by referring to the physical memory location recorded for each page in the Page Table.

When the machine boots, the operating system creates two dynamically sized pools in physical memory for allocations by kernel-mode components. These two pools are known as the Paged Pool and the Non-paged Pool.

The Paged Pool value (247 MB, refer to the Task Manager screenshot at the top) indicates the amount of physical memory used by the kernel to store objects that can be written to disc (paged out) when they are no longer in use.

The Non-paged Pool value (140 MB, refer to the Task Manager screenshot at the top) indicates the amount of physical memory used by the kernel to store objects that cannot be written to disc and must remain in physical memory at all times.

Physical Memory:

Physical memory refers to actual RAM chips or modules, typically installed on a computer’s motherboard.

Hardware Reserved (296 MB) is the memory space reserved by hardware and drivers, which must always remain in RAM. This memory is essentially locked and is not available to the memory manager. A major part of it is the dedicated graphics memory reserved through the BIOS.

In Use (1648 MB) is the sum of the working sets of all running processes, owned by the operating system, the kernel (non-paged pool), drivers and the various applications.

Modified (52 MB) memory contains modified pages that have been removed from process working sets because they were idle for long. Modified memory contents must be written to disc before the pages can be repurposed. As such, this memory is not currently in use but can still be quickly pulled back into service if needed. If memory demand runs high or the modified list grows beyond a certain threshold, the memory manager writes pages from the modified list to the Page File on the hard disc and moves them from the Modified list to the Standby list.

Standby (1111 MB): When a process exits normally, the memory manager moves the unmodified pages of its working set to Standby memory, which effectively makes Standby memory a cache of recently used data. By keeping these pages on the Standby list, the memory manager avoids having to re-fetch the information from a disc, network or web location, which is far more involved and time consuming, should the pages be required again.

When a process requests a page, the memory manager first looks for it in Standby memory and, if it is there, returns it to the process working set. This is called repurposing a page. If the page is not present in Standby, the memory manager loads it from the hard disc into Free memory, which then becomes part of the process working set.

All pages on the Standby list are available for memory allocation requests. The pages in the Standby list are prioritised in the range 0–7, with 7 being the highest. When a process requests additional memory and there is not enough in the Free list, the memory manager takes a low-priority page from the Standby list and releases it to the Free list for the new allocation to occupy.

Free (989 MB) memory consists of locations that have not yet been allocated to any process, or that were previously allocated but were returned to the memory manager when their process ended. All memory demand on the system is ultimately met from Free memory.

The memory manager maintains a thread that wakes up periodically to initialize pages on the Free page list so that it can move them to the Zero page list. The Zero page list contains pages that have been initialized to zero, ready for use when the memory manager needs a new page.

Installed Physical Memory is the RAM installed = 4GB = 4096 MB.

Installed Memory = In Use + Modified + Standby + Free + Hardware Reserved

   = 1648 + 52 + 1111 + 989 + 296 = 4096 MB.

If installed RAM is sufficiently greater than the Committed (peak value over time), then the system is unlikely to experience any memory constraint.

Total Physical Memory = Installed - Hardware Reserved = 4096 - 296 = 3800 MB.

This is the Usable memory available to the Memory Manager.

Cached Memory = Modified + Standby = 52 + 1111 = 1163 MB.

Cached memory holds data or program code that has been fetched into memory during the current session but is no longer in use now. If necessary, the Windows memory manager will flush the contents of cached memory and release it to the free memory.

Available Memory = Standby + Free = 1111 + 989 = 2100 MB.

Committed (Virtual In Use) = Physical Memory in Use + Page file in Use

Or, Page file in Use = Committed - Physical Memory in Use

= 1.75 GB - 1648 MB = 1792 MB - 1648 MB = 144 MB.

Virtual Memory Organisation:

Every process needs space in memory to store its data and code; this is called the address space of the process. The original idea of virtual memory was to expand physical memory with a reserved space on the hard disc, the Page File, which programs would view as RAM; this was termed Disc Paging. The virtual memory concept was later redefined: processes run on the operating system, and the operating system provides a virtual memory to all processes while managing the physical memory on behalf of itself and all of its processes. This virtual memory is limited only by the theoretical address space possible on the computer architecture, thus maximising the address space presented to a process. Virtual memory also establishes memory protection, by preventing processes from directly addressing physical memory, which has no awareness of the other processes using it.

A process runs from the virtual memory provided by the operating system, while the CPU works with the operating system to fetch and execute its instructions from physical memory. The operating system maintains a translation table, called a Page Table, ordered by the virtual memory addresses assigned to each process and cross-referenced to physical memory addresses. The CPU incorporates hardware logic to generate these virtual addresses sequentially and passes them to its own Memory Management Unit (MMU), which consults the Page Table to locate the physical address where each instruction is stored.

The Virtual Memory Manager (VMM) of the operating system manages the memory requests made by all processes. The VMM allocates consecutive addresses in virtual memory, in units called Pages, to each process that must run. Initially each Page is initialised with the storage location of the process files on the hard disc, which is subsequently replaced by a physical memory address once the process files are loaded into memory. This allows the operating system to provide an address space to every process in virtual memory, irrespective of the amount of physical memory available.

The mapping of a virtual Page to physical memory occurs in units called Page Frames, which are the same size as a Page and are formed by consecutive bytes in physical memory. However, while Pages are contiguous in virtual memory, the corresponding Page Frames are scattered across physical memory. A Page Frame is an array of memory bytes and is essentially a convenient unit of memory size for allocating to processes. The collection of all page frames of a process comprises its Working Set.

Thus the operating system maintains a Page Table for each process, containing the process pages in virtual memory, each mapped either to its page frame in physical memory or initialised to the page's location on the hard disc. A process's Page Table is created in physical memory when the process starts.

Each process can access the entire virtual address space (2^32 or 2^64 bytes) on its own without restriction, so every process uses the same set of virtual memory addresses for its pages. The virtual memory manager knows which process owns a page from the Page Table it belongs to, and maps identical virtual addresses of different processes to separate page frames in physical memory. Conversely, multiple processes can each own a shared resource by defining pages in their respective Page Tables that map to the same page frames in physical memory.

A Page is a virtual entity, so its memory space has no physical existence. A Page exists only in the form of a page entry in a Page Table, which is a physical entity. The Page Table is essentially the virtual memory, and a Page is analogous to a ticket in virtual memory. The counterpart of a Page is the Page Frame in physical memory, which is the physical identity of the Page.

A Page Table is a collection of page addresses in an array, where the indexes of the array represent the virtual page addresses, and the array values, called Page Table Entries (PTEs), contain the corresponding Page Frame addresses in physical memory. PTEs also contain process control information and are typically 4 bytes on 32-bit systems.

Control bits provide information about the state and attributes of a page. These control bits include a Valid bit (the entry refers to a page frame in physical memory), a Modified bit (the page data has been modified), R/W access protection bits and Process ID bits. These bits help the memory manager understand the status of a page and initiate action accordingly.
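A PTE can be pictured as a small packed record. The sketch below is illustrative only: it models a simplified 4-byte entry loosely after the classic 32-bit x86 layout, with the exact bit positions being an assumption rather than something taken from the article.

```c
/* Illustrative sketch: a simplified 4-byte PTE with control bits and a 20-bit PPN. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t valid    : 1;   /* 1 = page is resident in a physical page frame */
    uint32_t writable : 1;   /* R/W access protection bit                     */
    uint32_t user     : 1;   /* user-mode vs kernel-mode access               */
    uint32_t accessed : 1;   /* set by hardware when the page is referenced   */
    uint32_t modified : 1;   /* "dirty" bit: page data changed since load     */
    uint32_t reserved : 7;   /* remaining control / OS bookkeeping bits       */
    uint32_t ppn      : 20;  /* physical page number (frame address / 4 KB)   */
} pte_t;

int main(void)
{
    pte_t pte = { .valid = 1, .writable = 1, .modified = 0, .ppn = 0x3A2B1 };
    printf("PTE size: %zu bytes, frame base: 0x%08X\n",
           sizeof(pte_t), (unsigned)pte.ppn << 12);   /* PPN * page size = frame address */
    return 0;
}
```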

Page Size actually refers to its real-world counterpart, the Page Frame, whose size is typically between 1 KB and 8 KB and is generally 4 KB on 32-bit systems.

For a 32 bit system, with 4KB Page Size (2^12 bytes),

Number of Pages = Virtual Memory Space / Page Size = 2^32 / 2^12 = 2^20.

Or, Virtual Memory Space = Number of Pages * Page Size

2^32 = 2^20 * 2^12

Translating this into the language of the system in terms of address bits,

32 bit Address Space = 20 bit Virtual Page Number (VPN) + 12 bit OFFSET

The VPN and OFFSET can be thought of as transforming the 32-bit address space from a single array into multiple arrays arranged in rows, where the 20-bit VPN addresses each array and the 12-bit OFFSET indexes the elements within it. Thus 2^20 is the number of arrays (Pages) and 2^12 is the number of elements (bytes) in each array, i.e. the Page size.
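In code, splitting a virtual address into VPN and offset is just a shift and a mask. The fragment below is a minimal sketch with an arbitrary sample address.

```c
/* Minimal sketch: splitting a 32-bit virtual address into 20-bit VPN and 12-bit offset. */
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 12u
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1u)    /* 0x00000FFF */

int main(void)
{
    uint32_t vaddr  = 0x12345678;                 /* any 32-bit virtual address */
    uint32_t vpn    = vaddr >> OFFSET_BITS;       /* which page:   0x12345      */
    uint32_t offset = vaddr &  OFFSET_MASK;       /* byte in page: 0x678        */
    printf("vaddr=0x%08X  VPN=0x%05X  offset=0x%03X\n", vaddr, vpn, offset);
    return 0;
}
```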

The Offset is implemented in the CPU hardware and is used in various contexts. The Offset bits are used as an index to access the bytes within a Page Frame sequentially; the Offset is therefore a counter that increments in bytes through the memory elements of a Page Frame. The Page size is fixed by the Offset maximum, which sets the upper limit of a Page Frame in physical memory. Within a Page Table entry, the bits corresponding to the Offset are available to store process control information, since the Offset has no significance in the virtual-to-frame mapping.

When a Page Frame is created in physical memory, it is assigned a 20-bit frame address called the Physical Page Number (PPN). The Offset defines the Page Frame size, which is 2^12 = 4096 bytes = 4 KB. While VPNs are consecutive indexes of a Page Table, PPNs are likewise consecutive frame addresses in physical memory. However, each VPN is assigned an arbitrary PPN depending on where 4096 consecutive bytes of free memory are available for loading the frame data.

**The address of a physical memory location is formed by concatenating the PPN with the Offset.** In other words, a physical memory location is specified by the address of an array (the page frame) together with the index (offset) of an element within that array.
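Extending the previous fragment, the sketch below walks a tiny made-up page table: it splits the virtual address, looks up the PPN for the VPN, and concatenates PPN and offset to form the physical address. The table contents are invented for illustration.

```c
/* Illustrative sketch: toy virtual-to-physical translation by table lookup + concatenation. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12u
#define NUM_PAGES  8u                       /* toy table; a real 32-bit table has 2^20 entries */

/* toy page table: index = VPN, value = PPN; a real PTE also carries control bits */
static uint32_t page_table[NUM_PAGES] = { 0x00F3, 0x0021, 0x09A7, 0x0002,
                                          0x0140, 0x0777, 0x0ABC, 0x0055 };

static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1u);
    uint32_t ppn    = page_table[vpn % NUM_PAGES];       /* wrap into the toy table     */
    return (ppn << PAGE_SHIFT) | offset;                 /* concatenate PPN with offset */
}

int main(void)
{
    uint32_t vaddr = (3u << PAGE_SHIFT) | 0x1A4;          /* byte 0x1A4 of virtual page 3 */
    printf("virtual 0x%08X -> physical 0x%08X\n", vaddr, translate(vaddr));
    return 0;
}
```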

Number of Page Frames possible with 20 bits = 2^20

Total Physical Memory that can be occupied by Page Frames = No. of Page Frames * Frame size = 2^20 * 4096 bytes = 4 GB.

Unlike the virtual address space, where each process can occupy the entire 2^32 = 4 GB on its own, the physical address space is at most 4 GB and must ultimately be shared by all processes. Sharing requires processes to enter physical memory in batches, be executed by the processor and then exit memory to make space for new processes. This is further optimised by loading into memory only the part of a process's page table that comprises its essential tasks, which came to be known as Demand Paging, an improvement over paging in entire processes.

The 4 GB address-space limitation of 32-bit systems is overcome by a mechanism called Physical Address Extension (PAE). PAE systems use 36 physical address lines, capable of addressing 2^36 = 64 GB of physical memory. Processes, however, must still run in a 32-bit virtual address space to comply with instructions compiled for the 32-bit platform architecture. The operating system in turn reads and writes physical memory through 32-bit virtual addresses, which effectively demarcates a 4 GB working region within the installed RAM as the active region in which processes run at any moment. Memory above that region is outside the scope of a single 32-bit mapping, but remains usable through the wider PAE page tables. When a process ends, the operating system can relocate page frames between regions by changing the PPNs in the process page tables. Increasing RAM in effect enhances the capability of the system to cache more process pages in memory, thereby minimising the need for disc paging.

Demand Paging is a memory management technique based on principles that are independent of the system architecture or installed RAM and applies to PAE as well as 64-bit systems.

Demand Paging

The operating system is responsible for managing all information related to a process in a Process Control Block (PCB) within the OS kernel. This information is broadly classified as process identification data, process state data and process control data. The process identification data holds the mapping between the Process ID and the **Page Table Base Address (PTBA)**, which is the starting address of the process's Page Table in physical memory. The CPU has a Page Table Base Register (PTBR), accessible only to the operating system, which stores the current page table base address. During a process switch, the operating system simply changes the page table base address held in the PTBR.

The CPU has a Program Counter that holds the virtual address in two parts. The high-order bits contain the VPN, a counter that increments by virtual page number, and the low-order bits contain the Offset, a byte counter that indexes the memory elements within a page. The VPNs are identical across Page Tables because the virtual addresses generated by the Program Counter are the same for every process. The Program Counter is driven by logic built into the CPU hardware.

The Memory Management Unit (MMU) of the CPU first uses the page table base address in the PTBR to locate the Page Table in physical memory, then uses the VPN part of the virtual address in the Program Counter to index into the Page Table and find the PPN corresponding to that VPN. The MMU forms the physical address by concatenating the PPN with the Offset part of the virtual address and stores this physical address in the Memory Address Register (MAR) of the CPU. The CPU reads the address in the MAR to fetch the instruction from physical memory and execute it. The Offset in the Program Counter then increments to point to the next virtual memory location and the cycle repeats. When the Offset passes its maximum value (all 1s), the VPN part of the Program Counter is incremented to point to the next VPN in the Page Table.

A process thus executes through its VPNs arranged sequentially as virtual memory, which the MMU translates into PPNs in order to fetch each instruction from physical memory.

In the language of the OS software, the processor executes a process by sequentially fetching instructions from virtual memory, where the virtual address indexing the page table array serves as a MEMORY POINTER to the PTE containing the physical page address. Thus all memory read/write operations are performed through pointers that reference virtual memory.

When a process starts, the operating system creates its page table by mapping virtual addresses to the locations of the process files on the hard disc. The OS sets Valid bit = 1 for pages in the page table that correspond to tasks that must be loaded into memory immediately. Pages that are not to be loaded at this initial stage are marked invalid with Valid bit = 0. Next, the page frames are created in physical memory and loaded with code from the process files on the hard disc. Finally, the page frame address (PPN) replaces the disc location with which each loaded page was initialised. This is the essence of Demand Paging.

Once the page frames are created and the process is ready for execution, the OS updates the CPU's PTBR with the page table base address and resets the Program Counter. The MMU indexes into the Page Table and loads the MAR only if the entry has Valid bit = 1.

If a fetched instruction is a pointer to another memory location containing the actual instruction, the MAR is updated with that pointer address. The image of the PTBR, the Program Counter and the other CPU registers is saved in memory before execution jumps to the location now in the MAR. On completion of the instruction(s), the return reloads the registers from the saved image, so that execution returns to the original virtual address and resumes sequential operation. This is called context switching.

The MMU raises a Page Fault if the virtual page has no frame number in the page table, if the frame number does not exist in physical memory, or if the page is not part of the current working set but sits on the Standby or Modified list.

The Page Fault invokes the page-fault handler of the operating system, which resolves it according to the type of fault. The handler first ensures that the page request is genuine; if it is not, the process is terminated. If it is a genuine request, the operating system determines the missing code or data and loads it from the hard disc into a free location in memory, creating a new page frame. The new page frame address is written to the faulting entry in the Page Table and the page is marked valid (Valid bit = 1). If the page frame is already in Standby or Modified memory, the memory manager simply updates the frame number on the faulting entry, which brings the page back from Standby into the working set. The handler finally returns to the original process, which resumes execution at the faulting instruction.
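The flow just described can be sketched in C. Everything below is a conceptual toy, not Windows kernel code; the helper names and data are hypothetical stubs so the sequence (validate, allocate a free frame, read from disc, update the PTE, mark it valid) can be compiled and followed.

```c
/* Conceptual sketch of page-fault resolution; all helpers and data are hypothetical stubs. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { uint32_t valid : 1; uint32_t ppn : 20; } pte_t;  /* simplified PTE */

static pte_t    page_table[8];          /* toy page table, all pages initially invalid */
static uint32_t next_free_frame = 0x40; /* pretend frames 0x40, 0x41, ... are free      */

static bool     is_genuine(uint32_t vpn)          { return vpn < 8; }
static uint32_t allocate_free_frame(void)         { return next_free_frame++; }
static void     read_page_from_disc(uint32_t vpn,
                                    uint32_t ppn) { printf("load VPN %u into frame 0x%X\n", vpn, ppn); }

/* Called when the MMU raises a page fault for virtual page 'vpn'. */
static void handle_page_fault(uint32_t vpn)
{
    if (!is_genuine(vpn)) {                  /* not a valid reference: terminate   */
        printf("access violation on VPN %u\n", vpn);
        return;
    }
    uint32_t ppn = allocate_free_frame();    /* take a page from the Free list     */
    read_page_from_disc(vpn, ppn);           /* bring the missing code/data in     */
    page_table[vpn].ppn   = ppn;             /* update the faulting PTE            */
    page_table[vpn].valid = 1;               /* mark valid; the access is retried  */
}

int main(void)
{
    handle_page_fault(3);                    /* first touch of virtual page 3      */
    printf("VPN 3 -> valid=%u frame=0x%X\n", page_table[3].valid, page_table[3].ppn);
    return 0;
}
```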

When a process needs to execute a new task, the operating system creates the new task in physical memory, marks one or more of the task's VPNs (initially invalid) in the page table as valid, and defines their corresponding PPNs in the PTEs. If the new task is a shared process, the PTBR is updated with the page table base address of the shared process, which triggers execution of the task. Alternatively, the page frames of a shared process can be mapped within the Page Table of another process.

Translation Lookaside Buffer (TLB)

The performance of the Page Table is improved by a Translation Lookaside Buffer (TLB), a cache maintained inside the processor's MMU that stores recently used entries of the page table. When the MMU needs to read a page entry, it first tries the TLB, which is much faster than reading from memory. If the entry is not in the TLB, the MMU copies it from the page table in memory into the TLB and then reads it from there. The TLB is an associative memory, meaning that its elements are addressed by content (values) rather than by address; both the VPNs and the PTEs of a Page Table therefore reside as values in the TLB.
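The sketch below illustrates the TLB idea with a tiny fully associative cache of recent VPN-to-PPN translations consulted before the page table. The size, the FIFO replacement and the stand-in page-table walk are assumptions made for the example.

```c
/* Illustrative sketch: a tiny fully associative TLB consulted before the page table. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define TLB_ENTRIES 4u

typedef struct { uint32_t vpn, ppn; bool valid; } tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];
static unsigned    next_victim;                /* simple FIFO replacement            */

/* stand-in for the page-table walk performed on a TLB miss */
static uint32_t walk_page_table(uint32_t vpn) { return vpn + 0x100; }  /* made-up mapping */

static uint32_t lookup(uint32_t vpn)
{
    for (unsigned i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            printf("TLB hit  VPN %u\n", vpn);
            return tlb[i].ppn;                 /* no page-table access needed        */
        }
    printf("TLB miss VPN %u\n", vpn);
    uint32_t ppn = walk_page_table(vpn);       /* costs a memory access               */
    tlb[next_victim] = (tlb_entry_t){ vpn, ppn, true };
    next_victim = (next_victim + 1) % TLB_ENTRIES;
    return ppn;
}

int main(void)
{
    lookup(7);   /* miss: filled from the page table */
    lookup(7);   /* hit:  temporal locality          */
    lookup(8);   /* miss                             */
    return 0;
}
```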

The success of the TLB rests on the statistical principle of locality: a page accessed recently is likely to be accessed again shortly (Temporal Locality), and pages near a recently accessed page have a high probability of being accessed within the next few memory references (Spatial Locality).

Temporal Locality is supported by caching recently accessed pages in the TLB, whereas Spatial Locality is supported by Prepaging, which caches the valid pages of a Page Table before they are actually requested. Thus on a page fault, pages that were earlier part of the working set are added to the TLB instead of only the faulting page. This avoids invoking the page handler repeatedly for every page missing from the TLB.

When a page entry is found in the TLB it is called a TLB Hit, and when it is not found it is called a **TLB Miss**. A TLB hit eliminates the need to access the Page Table in memory, whereas a TLB miss incurs a Page Table access in memory. The principle of locality is the cornerstone of the TLB's success, with hit rates above 99%; in other words, a Page Table access in memory is needed less than once in every 100 references.

Segmentation

Segmentation is a comprehensive memory addressing method in its own right. It can be implemented on its own to manage physical memory directly with 16-bit addressing (Real Mode), or as virtual addressing on 32-bit architectures (Protected Mode), where it can be further combined with Demand Paging.

The earliest memory addressing method was 16-bit Real Mode, where programs were loaded directly into physical memory in segments at fixed locations, and the processor executed the code sequentially from the addresses generated by the Program Counter.




7 answers

  1. Anonymous
    2016-04-21T21:55:02+00:00

    Can I print this discussion?

    113 people found this answer helpful.
  2. Anonymous
    2016-08-26T04:24:32+00:00

This is way too technical. All I want to do is find and change virtual memory. It used to be easy to find; now it's hidden somewhere. I've been searching for the last hour trying to find something that should be a simple click to find.

    135 people found this answer helpful.
  3. Anonymous
    2016-08-26T05:29:50+00:00

    Please refer to the lines just above the third illustration from top.

    Edit:

    To open the System Properties window:

    Control Panel > System > Computer name... (section) > click on "Change settings".

    Regards,

    Sushovon Sinha

    12 people found this answer helpful.
  4. Anonymous
    2017-09-13T14:35:29+00:00

    Hello Sushovon Sinha,

    thanks for your excellent explanation.

If anyone wants to read the Dutch translation of your article,

    this is the link to my pdf.

    Jan

    8 people found this answer helpful.
  5. Anonymous
    2017-09-13T16:42:41+00:00

Thanks for the reply. Microsoft has gotten too technical. Your customer base is not techs but end users. We don't care about open code, open source, or whatever. We want productivity, not to have to change code or read papers to get to one task. I typed 'how to change virtual memory' in Cortana and this is what I received:

    https://www.bing.com/search?q=how+to+change+virtual+memory&form=WNSGPH&qs=SC&cvid=0b43d58b7fe64aeaaa2e791fec686542&pq=how+to+change+vitual&cc=US&setlang=en-US&nclid=EFB4BBE6655C48CEEB2BFCD0B22995F8&ts=1505320799755&elv=AY3%21uAY7tbNNZGZ2yiGNjfNgoI6IkurJtHSdJPB45kd%21yNM5AG23kXMLP0fClqk8fnyzyBLdS412oyyQMGdzUMI1qf9s9QHtP2PgmwkTuyKv

It didn't even address the issue. My suggestion is to stop chasing Chrome, and be excellent again. Microsoft used to lead the pack, no longer.

    9 people found this answer helpful.