Title:
OBJECT MEMORY MANAGEMENT UNIT
Document Type and Number:
WIPO Patent Application WO/2016/172297
Kind Code:
A1
Abstract:
Techniques to facilitate enhanced addressing of local and network resources from a computing system are provided herein. In one implementation, a method of operating an object-based memory management unit on a computing system to unify addressing of local and network resources includes maintaining a mapping of virtual addresses to local addresses and network addresses, and identifying resource requests that use the virtual addresses. The method further provides handling the resource requests per the mapping, and wherein a given request of the resource requests implicates a network resource, accessing the network resource associated with the given request over at least the network.

Inventors:
MEDOVICH MARK (US)
PAREKH RAJESH (US)
SASTRI BHARAT (US)
Application Number:
PCT/US2016/028573
Publication Date:
October 27, 2016
Filing Date:
April 21, 2016
Assignee:
COLORTOKENS INC (US)
International Classes:
G06F12/00
Foreign References:
US20130332700A12013-12-12
US20120297158A12012-11-22
US20030182363A12003-09-25
US20130205139A12013-08-08
US20130204849A12013-08-08
US20140025770A12014-01-23
US20110060883A12011-03-10
Other References:
See also references of EP 3286652A4
Attorney, Agent or Firm:
ARMENT, Keith M. et al. (Building A Suite 20, Westminster Colorado, US)
Claims:
CLAIMS

What is claimed is:

1. A computing apparatus comprising:

one or more computer readable storage media;

a processing system operatively coupled with the one or more computer readable storage media; and

program instructions stored on the one or more computer readable storage media to implement an object-based memory management unit that, when read and executed by the processing system, direct the processing system to at least:

maintain a mapping of virtual addresses in a computing system to local addresses that address local resources of the computing system and network addresses that address network resources external to the computing system over at least a network;

receive resource requests that identify resources using the virtual addresses;

for at least a request of the resource requests that implicates a network resource, identify a network address for the network resource based on the mapping between the network address and a virtual address identified in the request; and

access the network resource over at least the network using the network address.

2. The computing apparatus of claim 1 wherein the network addresses comprise Uniform Resource Identifiers (URIs).

3. The computing apparatus of claim 1 wherein the local resources comprise disk storage and dynamic random-access memory (DRAM).

4. The computing apparatus of claim 1 wherein the resource requests comprise read requests and write requests.

5. The computing apparatus of claim 1 wherein the resource requests comprise page requests.

6. The computing apparatus of claim 1 wherein the program instructions that direct the processing system to access the network resource over at least the network using the network address direct the processing system to translate the request into a HTTPS command and access the network resource over at least the network using the network address and the HTTPS command.

7. The computing apparatus of claim 1 wherein the virtual addresses mapped to the network addresses comprise virtual addresses mapped to network addresses and encryption keys, and wherein the program instructions that direct the processing system to access the network resource over at least the network using the network address direct the processing system to identify an encryption key for the request based on the virtual address in the request and access the network resource over at least the network using the network address and the encryption key.

8. The computing apparatus of claim 1 wherein the program instructions further direct the processing system to:

for at least a second request of the resource requests that implicates a local resource, identify a local address for the local resource based on the mapping between the local address and a second virtual address identified in the second request; and

access the local resource of the computing system using the local address.

9. A method of operating an object-based memory management unit on a computing system, the method comprising:

maintaining a mapping of virtual addresses in a computing system to local addresses that address local resources of the computing system and network addresses that address network resources external to the computing system over at least a network;

receiving resource requests that identify resources using the virtual addresses;

for at least a request of the resource requests that implicates a network resource, identifying a network address for the network resource based on the mapping between the network address and a virtual address identified in the request; and

accessing the network resource over at least the network using the network address.

10. The method of claim 9 wherein the network addresses comprise Uniform Resource Identifiers (URIs).

11. The method of claim 9 wherein the local resources comprise disk storage and dynamic random-access memory (DRAM).

12. The method of claim 9 wherein the resource requests comprise read requests and write requests.

13. The method of claim 9 wherein the resource requests comprise page requests.

14. The method of claim 9 wherein accessing the network resource over at least the network using the network address comprises translating the request into a HTTPS command and accessing the network resource over at least the network using the network address and the HTTPS command.

15. The method of claim 9 wherein the virtual addresses mapped to the network addresses comprise virtual addresses mapped to network addresses and encryption keys, and wherein accessing the network resource over at least the network using the network address comprises identifying an encryption key for the request based on the virtual address in the request and accessing the network resource over at least the network using the network address and the encryption key.

16. The method of claim 9 further comprising:

for at least a second request of the resource requests that implicates a local resource, identifying a local address for the local resource based on the mapping between the local address and a second virtual address identified in the second resource request; and

accessing the local resource of the computing system using the local address.

17. An apparatus comprising:

one or more computer readable media;

program instructions stored on the one or more computer readable media to implement an object-based memory management unit that, when read and executed by a processing system, direct the processing system to at least:

maintain a mapping of virtual addresses in a computing system to local addresses that address local resources of the computing system and network addresses that address network resources external to the computing system over at least a network;

receive resource requests that identify resources using the virtual addresses;

for at least a request of the resource requests that implicates a network resource, identify a network address for the network resource based on the mapping between the network address and a virtual address identified in the request; and

access the network resource over at least the network using the network address.

18. The apparatus of claim 17 wherein the network addresses comprise Uniform Resource Identifiers (URIs).

19. The apparatus of claim 17 wherein the local resources comprise disk storage and dynamic random-access memory (DRAM).

20. The apparatus of claim 17 wherein the processing system is further configured to execute the program instructions to:

for at least a second request of the resource requests that implicates a local resource, identify a local address for the local resource based on the mapping between the local address and a second virtual address identified in the second request; and

access the local resource of the computing system using the local address.

Description:
OBJECT MEMORY MANAGEMENT UNIT

RELATED APPLICATIONS

[0001] This application claims the benefit of, and priority to, U.S. Provisional Patent Application No. 62/151,045, entitled "OBJECT MEMORY MANAGEMENT UNIT", filed April 22, 2015, and U.S. Patent Application No. 15/134,053, entitled "OBJECT MEMORY MANAGEMENT UNIT", filed April 20, 2016, which are hereby incorporated by reference in their entirety for all purposes.

TECHNICAL FIELD

[0002] Aspects of the disclosure are related to computing hardware and software technology, and in particular to computer architecture, cloud computing, and virtualization technology.

TECHNICAL BACKGROUND

[0003] Today, computing is increasingly being delivered as a utility service over the Internet. Through the deployment of cloud computing and virtualization technology, compute, storage, and application services are available for on-demand consumption over the Internet. In this model of delivery, a user is not required to have knowledge of the physical locations and the configurations of the compute and storage resources in order to utilize the service.

[0004] End users of cloud computing often organize the resources available into "hybrid clouds" that comprise "private clouds" that include servers and storage systems at a private data center, and also "public clouds" that include servers and storage systems located at multi-tenant public data centers such as Amazon Web Services, Google Compute Engine, or Microsoft Azure. These clouds use virtualization technology such as that offered by VMWare ESX or KVM to group computing resources for easy management. End users may also create cloud groups based on workload requirements for various end-user groups.

[0005] The existing methodology to create these groups requires manual assignment, typically by a cloud service provider, of the necessary compute, storage, network, and Internet resources. In fact, to enable easy consumption of services and resources by the compute node, the complexity of deploying and configuring the network topology and the available compute, storage, and network resources is typically handled by the cloud service provider. The sheer number of network devices and tools makes it very onerous and inefficient for systems administrators at the service provider to deploy cloud resources that can deliver a level of performance that is guaranteed via a contractual obligation.

[0006] The fundamental reason for this problem is that the basic monolithic building block needed to build the cloud is a "motherboard". In its most basic implementation, this "motherboard" is typically comprised of a CPU, memory, and a network interface controller (NIC) connected together on a circuit board. Each "motherboard" on a network may be identified by a physical or virtual internet protocol (IP) address, or a physical media access control (MAC) address embedded in the NIC device. This "motherboard" may be implemented in a plurality of ways including but not limited to personal computer (PC) motherboards and blade server plug-in boards, multiples of which are required to build large servers as is common in the cloud. These "motherboards" are then used to deploy operating systems, which in turn allow the deployment of virtualization technology in the form of virtual machines (VMs) and virtual networks to create the end cloud product that supports guest operating systems, thereby enabling the consumption of computing resources as a service. In order to achieve this virtualization, the user that is creating the cloud resources typically needs to know the IP addresses of all of the computing, storage, and Internet resources needed to be connected together. Consequently, it is very problematic to create the cloud groups that provide the necessary resources to deliver the level of service required to handle the user workloads efficiently.

OVERVIEW

[0007] Provided herein are systems, methods, and software to enhance addressing of local and network resources for a computing system. In at least one implementation, a computing apparatus includes one or more computer readable storage media and a processing system operatively coupled with the one or more computer readable storage media. The computing apparatus further includes program instructions stored on the one or more computer readable storage media to implement an object-based memory management unit that, when read and executed by the processing system, direct the processing system to at least maintain a mapping of virtual addresses in the computing system to local addresses that address local resources of the computing system and network addresses that address network resources external to the computing system over at least a network. The program instructions further direct the processing system to receive resource requests that identify resources using the virtual addresses and, for at least a request of the resource requests that implicates a network resource, identify a network address for the network resource based on the mapping between the network address and a virtual address identified in the resource request. The program instructions also direct the processing system to access the network resource over at least the network using the network address.

[0008] This Overview is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. It should be understood that this Overview is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] Many aspects of the disclosure can be better understood with reference to the following drawings. While several implementations are described in connection with these drawings, the disclosure is not limited to the implementations disclosed herein. On the contrary, the intent is to cover all alternatives, modifications, and equivalents.

[0010] Figure 1 is a block diagram that illustrates a system that uses physical addressing in an exemplary implementation.

[0011] Figure 2 is a block diagram that illustrates a system that uses virtual addressing in an exemplary implementation.

[0012] Figure 3 is a block diagram that illustrates an operation of address translation from virtual address space to physical address space in an exemplary implementation.

[0013] Figure 4 is a block diagram that illustrates an operation of address translation from virtual address space to physical address space using an on-chip translation lookaside buffer cache in an exemplary implementation.

[0014] Figure 5 is a block diagram that illustrates virtual memory mapped to physical memory in an exemplary implementation.

[0015] Figure 6 is a block diagram that illustrates an operation of using an object memory management unit with expanded virtual memory space in an exemplary implementation.

[0016] Figure 7 is a block diagram that illustrates an object table having entries mapped to physical memory or virtual memory in an exemplary implementation.

[0017] Figure 8 is a flow diagram that illustrates an operation of a pre-boot execution environment (PXE) boot procedure in an exemplary implementation.

[0018] Figure 9 is a block diagram that illustrates an operation of a bootstrap process in an exemplary implementation.

[0019] Figure 10 is a block diagram that illustrates an object memory management unit (OMMU) system in an exemplary implementation.

DETAILED DESCRIPTION

[0020] The following description and associated figures teach the best mode of the invention. For the purpose of teaching inventive principles, some conventional aspects of the best mode may be simplified or omitted. The following claims specify the scope of the invention. Note that some aspects of the best mode may not fall within the scope of the invention as specified by the claims. Thus, those skilled in the art will appreciate variations from the best mode that fall within the scope of the invention. Those skilled in the art will appreciate that the features described below can be combined in various ways to form multiple variations of the invention. As a result, the invention is not limited to the specific examples described below, but only by the claims and their equivalents.

[0021] The following discussion presents techniques to federate or unify a plurality of physical and virtual compute, storage, and Internet resources and make them available as a local resource to any compute node. The techniques provide on-demand deployment and presentation of a compute resource that incorporates discrete physical and virtual compute, storage, and Internet resources, available either locally or in the cloud, as a unified local compute resource to a user.

[0022] In at least one implementation, a plurality of compute infrastructures may be deployed, dynamically federated from a plurality of available discrete physical and virtual compute, storage, and Internet resources, whether they are local or available in the cloud, as a single unified local resource to execute a plurality of workloads. This may be accomplished through the use of an object memory management unit (OMMU). In some implementations, the OMMU can provide a computer with the ability to map all of the authorized compute, storage, and Internet resources available to execute its workload, regardless of whether the resource is a physical resource such as a central processing unit (CPU) implemented on a computer motherboard, or virtual such as a virtual machine (VM), local or in the cloud, into a single unified local physical resource that can execute a plurality of workloads. The OMMU may be implemented as a software program executing on a computer or a virtual machine to provide this memory mapping functionality for both a physical machine as well as a virtual machine. Further, the OMMU may also be implemented as a functional block in one or more silicon devices, including but not restricted to, commercial CPU devices such as those from companies like Intel, AMD, ARM, discrete Memory Management Unit VLSI, Motherboard VLSI chipsets, and other devices typically used to implement a computer motherboard.

[0023] The present disclosure describes a novel apparatus and method that enables the deployment and federation of compute, storage, and Internet resources regardless of where they might exist physically, and presents the federated resources as a single unified local resource under program code control. In at least one implementation, a federated cloud computing resource may be created on-demand that is controlled by a software program. The creation of this federated cloud computing resource involves the use of a "bootstrap" protocol such as, but not limited to, PXE (Pre-eXecution Environment as implemented by Intel Corporation) for the user's "motherboard," and the implementation of a resource and memory mapping apparatus called an Object Memory Management Unit (OMMU) in the firmware or the CPU silicon of the "motherboard." This OMMU apparatus and its operation will be described later in detail below.

[0024] Conventionally, a bootstrap protocol such as PXE allows a "motherboard" to boot up under program control in a predetermined sequence. A sequence of program instructions identifies local resources available to the CPU such as memory, network interfaces, and other components available on the motherboard, initializes their state, and finally loads and executes the operating system and all the necessary services such as TCP/IP internetworking to get the "motherboard" ready for use.

[0025] The present disclosure provides an enhanced bootstrap technique that utilizes the OMMU apparatus to create a virtual memory system that provides a virtual address to physical address translation that maps not only local motherboard devices as outlined above but also a plurality of network, Internet, or cloud (compute, storage, and other Internet) resources commonly referred to by those skilled in the art as universal resource identifiers (URIs). This enhanced bootstrap technique employing the OMMU results in an inventory of URI resources that appear as local resources, which in turn allows the "motherboard" to bootstrap with a much expanded capability by incorporating these network, Internet and cloud resources, referred to as universal resource identifiers (URIs), as a single unified local resource.

[0026] In addition to providing a mechanism to translate or map the local "motherboard" virtual address space to a local physical address space that incorporates main memory and disk storage as the media to define virtual storage, the technique expands the virtual address space to include a universal resource address space (URAS) in the virtual memory of the local system. In turn, the bootstrap code will also identify and deploy desired and authorized individual Internet resources, known as universal resource identifiers (URIs), and map them into the local system's physical address space. For example, in one embodiment, an inventory of pre-authorized URIs may be downloaded and cached in the system main memory, managed by the OMMU apparatus, and may be updated dynamically under program control.

[0027] The OMMU apparatus provides a mechanism to map or translate an expanded virtual address space that incorporates the universal resource address space to a local physical address space in a manner similar to that of a conventional memory management unit (MMU) implemented in CPU silicon. A PXE or similar bootstrap protocol in conjunction with the OMMU apparatus enables the federation and presentation of a plurality of compute, storage, network, and Internet resources (URIs) as a single unified local physical resource to the local system compute node as the end result of the boot process. The OMMU may use the universal address space as a trust repository, e.g. a cache of private keys that enable the encryption and decryption of URIs, object table entries in the OMMU, file read/write operations, and others.

[0028] In at least one exemplary embodiment, a plurality of systems implementations could comprise combinations of "motherboard" hardware running operating systems such as Microsoft Windows, Linux, OSX, and the like, and network protocols such as hypertext transfer protocol (HTTP) using representational state transfer (ReST) protocols. Another exemplary embodiment could comprise a stand-alone computer program executing on a physical computer system or a virtual machine (VM) either locally, or on a remote computing system, or on a virtual machine in the cloud, or at both clients and servers simultaneously. In yet another embodiment, an individual user could utilize a computing system, either physical or virtual, comprising an OMMU to unify all of the user's resources and devices as a single local resource of the computing system, which could include local, on-premise resources, such as a local network attached storage (NAS) drive on a private, lower-layer cloud, in addition to compute, storage, and network resources (URIs) available to the user over the Internet.

[0029] Referring now to the drawings, Figure 1 illustrates a system that uses only physical addressing, while Figure 2 illustrates a system that uses virtual addressing using physical main memory as the page table cache. Figure 3 illustrates the steps required to provide address translation from virtual address space to physical address space. Figure 4 illustrates the steps that the CPU hardware performs to provide address translation from virtual address space to physical address space using on-chip translation lookaside buffer cache memory. Figure 5 illustrates a virtual memory system organized as an array of N contiguous byte-sized cells called a virtual page, where part of the virtual address space is mapped to local storage disk and part is mapped to universal resource address space. Figure 6 illustrates a system using an Object Memory Management Unit with expanded virtual memory space that includes the universal resource address space in an exemplary implementation. Figure 7 illustrates the basic organization of an object table in an exemplary implementation. Figure 8 illustrates a standard pre-execution environment (PXE) boot procedure. Figure 9 illustrates an enhanced bootstrap process. Figure 10 is a block diagram that illustrates a computing system.

[0030] Most computing systems support the notion of virtual memory. Virtual memory plays a key role in the design of hardware exceptions, assemblers, linkers, loaders, shared objects, files, and processes. Virtual memory makes it possible to read or modify the contents of a disk file by reading or writing memory locations. It also permits loading or transferring of the contents of a file into memory without performing an explicit copy operation.

[0031] To understand the leverage that virtual memory provides, we first define the concept of an address space. An address space is an ordered set of nonnegative integer addresses (i.e., 0, 1, 2, . . ., N). If the integers in the address space are consecutive, then we say that it is a linear address space or LAS. A basic computer system has a physical address space or PAS that corresponds to the M bytes of physical memory in the system (i.e., 0, 1, 2, . . ., M - 1).

[0032] The concept of an address space makes a clean distinction between data objects (e.g., bytes) and their attributes (e.g., addresses). We can thus generalize and allow each object to have multiple independent addresses, each chosen from a different address space. Thus, each byte of main memory has a virtual address chosen from a virtual address space and a physical address chosen from a physical address space.

[0033] In a system with virtual memory, the CPU generates virtual addresses from an address space of N = 2^n addresses called the virtual address space or VAS: {0, 1, 2, . . ., N - 1}. The size of an address space is characterized by the number of bits that are needed to represent the largest address. For example, a virtual address space with N = 2^n addresses is called an n-bit address space. Modern systems typically support either 32-bit or 64-bit virtual address spaces.
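As a small worked example of the N = 2^n relationship above (nothing here is specific to the disclosed apparatus):

```python
# Worked example: an n-bit virtual address space holds 2**n byte addresses.
for n in (32, 64):
    size = 2 ** n
    print(f"{n}-bit virtual address space: 2**{n} = {size:,} addresses")

# 32-bit: 4,294,967,296 addresses (4 GiB of byte addresses);
# 64-bit: 18,446,744,073,709,551,616 addresses.
```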

[0034] Virtual memory provides three important capabilities:

[0035] (1) It uses main memory efficiently by treating it as a cache for an address space stored on disk, keeping only the active areas in main memory, and transferring data back and forth between local storage disk and physical memory as needed.

[0036] (2) It simplifies memory management by providing each process with a uniform address space.

[0037] (3) It protects the address space of each process from corruption by other processes.

[0038] Virtual address space varies according to the system's architecture and operating system. Virtual address space depends on the architecture of the system because it is the architecture that defines how many bits are available for addressing purposes. Virtual address space also depends on the operating system because the manner in which the operating system was implemented may introduce additional limits over and above those imposed by the architecture.

[0039] Formally, address translation is a mapping between the elements of an N-element virtual address space (VAS) and an M-element physical address space (PAS) and is defined as:

MAP: VAS → PAS ∪ {∅}

Where

MAP(A) = A′, if data at virtual address A in VAS is present at physical address A′ in PAS (also called a page hit);

Else

MAP(A) = ∅, if data at virtual address A in VAS is not present in physical memory (also called a page miss).
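The MAP relation above can be read as a partial function from the VAS to the PAS. The following sketch, with a purely hypothetical page table, makes the page-hit and page-miss cases explicit:

```python
# Illustrative sketch of the MAP relation; the VA -> PA pairs are made up.
PAGE_TABLE = {0x1000: 0x7F000, 0x2000: 0x3A000}

def MAP(virtual_address):
    physical_address = PAGE_TABLE.get(virtual_address)
    if physical_address is not None:
        return physical_address   # page hit: data resident in physical memory
    return None                   # page miss: data must be faulted in from disk

assert MAP(0x1000) == 0x7F000     # hit
assert MAP(0x5000) is None        # miss
```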

[0040] The terms ''page hit" and "page miss" are terms that are familiar to those skilled in the art. Those skilled in the art are also aware that to support the translation of virtual addresses to physical address on the fly, special memory management hardware known as a MMU (Memory Management Unit) is implemented in the Central Processing Unit (CPU).

[0041] Referring now to Figure 1, a system is illustrated that uses only physical addressing. Figure 1 includes a central processing unit (CPU) 100 and main memory 101, which is organized as an array of byte-sized cells. The first byte has an address of 0 (i.e. 00000000), the next byte has an address of 1 (i.e. 00000001), and so on until M-1. In this simple setup, the CPU 100 has access to main memory 101 by generating a physical address 102. The main memory 101 responds with a data word 103 comprising four bytes of data, in this case starting at memory address 4 (i.e. 00000004). Figure 1 demonstrates that in the absence of an MMU, when the CPU accesses physical memory in the form of dynamic random-access memory (DRAM), the actual DRAM locations never change (i.e., memory address 128 is always the same physical location within DRAM).

[0042] Figure 2 is a block diagram that illustrates a system that uses virtual addressing. The system of Figure 2 includes CPU 200 and main memory 205. CPU 200 comprises processor 201 and memory management unit (MMU) 203. Processor 201 is connected to MMU 203 via virtual address bus 202. MMU 203 is connected to main memory 205, which is organized as an array of byte-sized cells, via a physical address bus 204. The first byte has an address of 0 (i.e. 00000000), the next byte has an address of 1 (i.e. 00000001), and so on until M-1.

[0043] Figure 2 demonstrates that with MMU 203, virtual memory addresses go through a translation step prior to each physical memory access. In the scenario of Figure 2, the processor 201 accesses main memory 205 by generating virtual address 202, which is then converted into a physical address 204 by MMU 203 inside the CPU 200. MMU 203 translates virtual addresses on the fly using a lookup table stored in main memory 205 whose contents are managed by the operating system. Main memory 205 responds with a data word 206 starting at memory address 4 (i.e. 00000004).

[0044] Some systems utilize data structures called page tables to perform the virtual address space to physical address space translation efficiently. Figure 3 demonstrates how the MMU 302 uses a data structure called a page table 304 to perform virtual address space to physical address space translation. The page table 304 is conventionally stored in physical main memory 303, which is typically dynamic random-access memory (DRAM).

[0045] The steps that the CPU 300 hardware performs to translate an address from virtual address space to physical address space are enumerated below:

• Step 1: The processor 301 generates a virtual address (VA) 305 and sends it to the MMU 302.

• Step 2: The MMU 302 generates the page table entry address (PTEA) 306 and requests it from main memory 303, which performs a look-up in page table 304.

• Step 3: Main memory 303 returns the page table entry (PTE) 307 to the MMU 302.

• Step 4: The MMU 302 constructs the physical address (PA) 308 and sends it to cache/main memory 303.

• Step 5: The cache/main memory 303 returns the requested data word 309 to the processor 301.

[0046] Thus every time the processor 301 generates a virtual address 305, the MMU 302 must refer to page table 304 in order to translate the virtual address into a physical address. To elaborate, a control register in processor 301, called the page table base register (PTBR), points to the current page table in main memory 303. The n-bit virtual address has two components: a p-bit virtual page offset (VPO) and an (n - p)-bit virtual page number (VPN). The MMU 302 uses the VPN to select the appropriate PTE 307. For example, VPN 0 selects PTE 0, VPN 1 selects PTE 1, and so on. The corresponding physical address is the concatenation of the physical page number (PPN) from the page table entry and the VPO from the virtual address. Notice that since the physical and virtual pages are both P bytes, the physical page offset (PPO) is identical to the VPO.
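The address arithmetic just described can be sketched as follows, assuming for concreteness a 32-bit virtual address and 4 KiB pages (p = 12); the constants are illustrative and not taken from the disclosure:

```python
# Sketch of VPN/VPO decomposition and physical-address construction.
P_BITS = 12                          # page offset width: pages of 2**12 = 4096 bytes

def split_virtual_address(va: int):
    vpn = va >> P_BITS               # virtual page number, used to select the PTE
    vpo = va & ((1 << P_BITS) - 1)   # virtual page offset, identical to the PPO
    return vpn, vpo

def build_physical_address(ppn: int, vpo: int) -> int:
    return (ppn << P_BITS) | vpo     # concatenate the PPN with the unchanged offset

vpn, vpo = split_virtual_address(0x0040_3ABC)
pa = build_physical_address(ppn=0x000FF, vpo=vpo)
print(hex(vpn), hex(vpo), hex(pa))   # 0x403 0xabc 0xffabc
```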

[0047] Figure 4 illustrates an enhanced version of an MMU-based virtual memory system. To improve performance of the virtual address space to physical address space translations, most CPUs provide memory caches that store the page tables 404 on the CPU chip 400 itself in a cache memory called the translation lookaside buffer (TLB) 403. With a TLB 403, the process of translating a virtual address to a physical address is identical to that shown in Figure 3, with the exception of the page table 404 being resident in a TLB 403 cached on-chip in the CPU 400, which provides a performance improvement.

[0048] Figure 4 illustrates the steps that the CPU 400 hardware performs for address translation from virtual address space to physical address space using on-chip translation lookaside buffer (TLB) 403 cache memory:

• Step 1: The processor 401 generates a virtual address (VA) 406 and sends it to the MMU 402.

• Step 2: The MMU 402 generates the page table entry address (PTEA) 407 and requests it from the TLB cache memory 403, which stores a page table 404.

• Step 3: The page table 404 returns the page table entry (PTE) 408 to the MMU 402.

• Step 4: The MMU 402 constructs the physical address (PA) 409 and sends it to the main memory 405.

• Step 5: The main memory 405 returns the requested data word 410 to the processor 401.
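A condensed sketch of the TLB fast path described above is given below; the dictionaries are hypothetical stand-ins for the on-chip TLB 403 and the in-memory page table 404.

```python
# Translation with a TLB: consult the on-chip cache first, fall back to the
# page table in main memory on a miss. Table contents are illustrative.
PAGE_TABLE = {0x1: 0x7F, 0x2: 0x3A, 0x9: 0x10}   # VPN -> PPN, held in main memory
TLB = {}                                          # VPN -> PPN, on-chip cache

def translate(vpn):
    if vpn in TLB:                # TLB hit: no memory access needed for the PTE
        return TLB[vpn]
    ppn = PAGE_TABLE.get(vpn)     # TLB miss: walk the page table in memory
    if ppn is not None:
        TLB[vpn] = ppn            # install the translation for future accesses
    return ppn                    # None models a page fault

print(translate(0x2))   # first access: TLB miss, filled from the page table
print(translate(0x2))   # second access: TLB hit
```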

[0049] In a cloud computing environment, it becomes necessary to provide address translation from a new address space comprised of the universe of all internet addresses. This disclosure provides an abstraction of a new virtual address space defined as the universal resource address space (URAS). This universal resource address space enables translation for an access to a location in the system's virtual memory space into an access to a memory-mapped address that corresponds to a physical or virtual resource on the Internet or cloud. Those skilled in the art know that these resource addresses are referred to as universal resource identifiers or URIs on the Internet. Thus, a universal resource address space (URAS) may be defined as an ordered set of universal resource locators (URLs), e.g. {http://192.156.1.1, http://192.156.1.2, . . ., http://192.156.1.N} where each entry is the URL (i.e., the Internet address) of a non-local compute, storage, or service resource. This URL format is familiar to those skilled in the art as the fundamental resource locator format used on the Internet.

[0050] It should also be evident to those skilled in the art that the conventional MMU apparatus shown in Figure 4 will be inadequate to perform a URAS-to-physical address space translation.

[0051] Figure 5 illustrates a virtual memory 500 organized as an array of N contiguous byte-sized cells called a virtual page (VP) 504. Part of the virtual address space is mapped to the local storage disk 501 and part is mapped to the universal resource address space 502. Each byte has a unique virtual address that serves as an index into the array. The contents of the array on disk are cached in main memory in physical pages called PP 505. As with any other cache in the memory hierarchy, the data on disk is partitioned into blocks that serve as the transfer units between the disk and the main memory or between the Universal Resource Address Space and main memory. Virtual memory systems handle this by partitioning the virtual memory into fixed-sized blocks called virtual pages (VPs) 504. Each virtual page is P = 2^p bytes in size. Similarly, physical memory 503 is partitioned into physical pages (PPs) 505, also P bytes in size (physical pages are also referred to as page frames).

[0052] At any point in time, the set of virtual pages 504 is partitioned into three disjoint subsets:

• Unallocated: Pages that have not yet been allocated (or created) by the VM system. Unallocated blocks do not have any data associated with them, and thus do not occupy any space on disk 501 or the URAS 502.

• Mapped: Allocated pages that are currently cached in physical memory.

• Unmapped: Allocated pages that are not cached in physical memory.

[0053] The example in Figure 5 shows a small virtual memory with eight virtual pages. Virtual pages 0 and 3 have not been allocated yet, and thus do not yet exist on disk. Virtual pages 1, 4, and 6 are cached in physical memory. Virtual pages 2, 5, and 7 are allocated, but are not currently cached in main memory.

[0054] Each byte has a unique virtual address that serves as an index into the array. The contents of the array on disk 501 or the URAS 502 are cached in main memory 503 in physical pages 505. As with any other cache in the memory hierarchy, the data on disk is partitioned into blocks that serve as the transfer units between the disk 501 and the main memory 503, or between the universal resource address space 502 and main memory 503.

[0055] Figure 6 discloses a novel apparatus called the Object Memory Management Unit (OMMU) 602 that overcomes such limitations of an MMU. The OMMU 602 apparatus disclosed herein is designed to perform unique translation from an address in the expanded virtual address space that incorporates the universal resource address space to the physical memory address space of a local machine. This mapping or translation allows any compute, storage, or Internet resource to be mapped into the local system's physical memory space and hence appear as a local resource. As described above, each entry in the universal resource address space may be a URL of an existing plurality of Internet resources, such as a block of compute resources at Amazon Web Services, for example. Resources located at each URL may be accessed using existing Internet protocols for data exchange. For example, the data exchange may be facilitated by using the HTTPS protocol and its associated command set of GET, PUT, POST, and DELETE. More advanced data exchanges between a local node and the memory-mapped resource at any URL may be achieved by using advanced programming techniques such as TCP/IP, Unix sockets, and leveraging a plurality of software libraries such as the curl URL request library (cURL) that are very well known to those skilled in the art. In this example, virtual memory is provided in the form of universal resource address space 611 and disk storage 613.
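As an illustration of the data exchange described above, the sketch below reads and writes a memory-mapped URI object using HTTPS GET and PUT. The URL is a placeholder, only Python standard-library calls are used, and this is not presented as the disclosed implementation.

```python
# Reads and writes against a memory-mapped URI object carried over HTTPS verbs.
import urllib.request

MAPPED_URI = "https://objects.example.com/bucket/page-42"   # hypothetical URI object

def read_object(uri: str) -> bytes:
    # A read of the mapped region becomes an HTTPS GET.
    with urllib.request.urlopen(uri) as response:
        return response.read()

def write_object(uri: str, payload: bytes) -> int:
    # A write to the mapped region becomes an HTTPS PUT carrying the new contents.
    request = urllib.request.Request(uri, data=payload, method="PUT")
    with urllib.request.urlopen(request) as response:
        return response.status

# data = read_object(MAPPED_URI)
# status = write_object(MAPPED_URI, b"updated bytes")
```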

[0056] The OMMU 602 incorporates a data structure called the object table (OT) 605. The object table 605 data structure is similar to the page table 604 data structure used with conventional MMUs. Multiple copies of the OT exist in the object translation buffer 603 cache memory implemented on the OMMU 602. An object table 605 contains a plurality of resource addresses or URIs, and thus it becomes possible to create a plurality of cloud resource clusters, each defined by unique resource addresses, that are orthogonal to each other, isolated from each other, and thus provide data security, process security, and isolation without the need for any additional apparatus such as firewalls, virtual local area networks (LANs), or virtual switches. Nodes or processes that are mapped into a particular resource address space can only be aware of other nodes, processes, or resources in the same object table 605, and all other resource address spaces are completely invisible to them. This provides fine-grained privacy at the process level on a physical or virtual machine.

[0057] The OMMU 602 keeps track of which URLs are currently permitted and mapped into the client system's physical address space similar to how a swap file is utilized (i.e., files stored on the disk are mapped into the memory space of a system by an MMU). The OMMU 602 may be programmed by the firmware, the operating system, the guest operating system of a physical or virtual machine, or be hardwired to create a plurality of resource topologies that combine local and cloud-based resources for any single system on the Internet. This also allows for the creation of secure private cloud environments by virtue of populating the OT 605 tables with only the URLs of authorized and authenticated resources available to a specific user or process executing at any node.

[0058] The steps that the CPU 600 hardware performs to translate an address from Virtual Address Space to Resource Address Space are enumerated below:

• Step 1: The processor 601 generates a local resource address 606 (e.g., 10.10.1.1) and sends it to the OMMU 602.

• Step 2: The OMMU 602 generates the object table entry address (OTEA) 607 and requests it from the object translation buffer 603 cache memory.

• Step 3: The OTB 603 performs a look-up in object table 605 to check for a matching entry.

• Step 4: If there is a match or "hit", the OTB 603 returns the object table entry (OTE) 608 to the OMMU 602.

• Step 5: The OMMU 602 constructs the physical address (i.e., a memory-mapped URL address 609, shown as 192.156.1.1 in this example) and sends it to main memory 606.

• Step 6: The main memory 606 returns the requested universal resource identifier (URI) object located at 192.156.1.1 for the processor 601 to access using a known protocol such as HTTPS.
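The six steps above can be condensed into the following sketch, in which a local resource address indexes a hypothetical object table and a hit yields the memory-mapped URL that the processor then accesses over HTTPS:

```python
# Condensed sketch of the Figure 6 look-up flow; addresses and URLs are made up.
OBJECT_TABLE = {
    "10.10.1.1": "https://192.156.1.1/",   # local resource address -> mapped URI object
}

def ommu_translate(local_resource_address):
    # Models the OTEA generation and OTB/OT look-up (Steps 2-4).
    return OBJECT_TABLE.get(local_resource_address)   # URL on a hit, None on a miss

url = ommu_translate("10.10.1.1")
# On a hit, the processor would access `url` using a known protocol such as HTTPS (Step 6).
```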

[0059] The present disclosure provides a very efficient technique to facilitate the activation, discovery, and connection of any node to a plurality of internet resources using a simple representational state transfer (ReST) interface. A major benefit of using the OMMU 602 apparatus is that once the OMMU 602 maps a URI object 610 as outlined in Figure 6, the data interchange with that URI object 610 may be achieved in a trusted and secure manner by using a ReST interface. Those practiced in the art know that a ReST interface allows the creation and presentation of complex Internet services in a minimalist way by the use of simple GET, PUT, POST and DELETE commands over HTTPS, which supports the use of secure socket layer (SSL) and transport layer security (TLS) encryption for all transactions between any two endpoints in a resource address space. Thus, a ReST-ful interface may be employed as an enhanced boot method for a "motherboard" and its constellation of approved and authenticated cloud resources in a given resource address space, and map them into the "motherboard's" local address space in a secure manner. The actual process of booting up using a HTTPS GET/SET request protocol is described in further detail later in this disclosure.

[0060] In Figure 7 an object table (OT) 700 is shown as an array of object table entries (OTEs) 701. Each page in the virtual address space has an OTE 701 at a fixed offset in the page table. Each OTE 701 consists of a valid bit 702 and an n-bit address field 703. Each entry in the OT 700 is either a pointer into physical memory 705 or a pointer to virtual memory in the form of the resource address space 704 or the disk 706. The conventional page table 304, 404, and 603 used by virtual memory systems is a modality of the object table 700 (i.e., any object table entry 701 may be designated as either a conventional page table entry 703 or as a universal resource address table entry 703).
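A minimal sketch of the object table organization in Figure 7 is shown below: each entry carries a valid bit and an address field that may designate either a conventional physical page or a universal resource address. The field names and values are illustrative only.

```python
# Object table entries that point either into physical memory or into the URAS.
from dataclasses import dataclass
from typing import Union

@dataclass
class ObjectTableEntry:
    valid: bool
    target: Union[int, str]   # int -> physical page number, str -> URL in the URAS

object_table = [
    ObjectTableEntry(valid=True,  target=0x0007F),                 # cached in DRAM
    ObjectTableEntry(valid=True,  target="https://192.156.1.2/"),  # mapped URI object
    ObjectTableEntry(valid=False, target=0x0),                     # unallocated or on disk
]

def is_network_backed(entry: ObjectTableEntry) -> bool:
    return entry.valid and isinstance(entry.target, str)
```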

[0061] Most operating systems provide a separate page table, and thus a separate virtual address space, for each process that is executing under the OS. The OMMU also allows mapping of virtual addresses to universal resource addresses (URI or URL) on a per-process basis, for each process executing on the "motherboard." This is important because in a cloud infrastructure there is an increasing use of "containers" as a mechanism to execute a plurality of applications in parallel under a common operating system. In essence, "containers" are nothing but stand-alone processes that are bound to a virtualized kernel on a per-process basis. In Linux, these "containers" are called Linux Containers or LXC and are well known to those skilled in the art. However, "containers" cannot guarantee security of each process' data because it is not difficult to have a rogue process executing as a "container" assume root privileges and thus be fully empowered to read, write, and execute data and code belonging to other processes or "containers."

[0062] In at least one implementation, the "motherboard" may use the OMMU apparatus and bootstrap method to ensure that processes (i.e., "containers") only boot from a predetermined virtual address space (i.e., only authorized universal resource addresses which are loaded into the object table 700 using a ReST protocol from authorized servers). By abstracting the "container" virtual address space for network and Internet resources and mapping it to a restricted universal resource address space, a "container" is prevented from launching a rogue process that can compromise the data of other containers.
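The per-process isolation described above can be sketched as follows, with each container holding its own object table of authorized URLs so that anything outside that table simply fails to resolve; container names and URLs are hypothetical.

```python
# Per-container object tables: a URL outside a container's table is invisible to it.
PER_CONTAINER_TABLES = {
    "container-a": {"https://storage.example.com/a/", "https://api.example.com/a/"},
    "container-b": {"https://storage.example.com/b/"},
}

def resolve_for_container(container: str, url: str):
    allowed = PER_CONTAINER_TABLES.get(container, set())
    if url not in allowed:
        return None          # not in this container's object table: no address, no access
    return url

assert resolve_for_container("container-b", "https://storage.example.com/a/") is None
```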

[0063] Referring now to Figure 8, a standard preboot execution environment (PXE) boot procedure is illustrated. The PXE specification describes a standardized client-server environment that boots a software assembly, retrieved from a network, on PXE-enabled clients. On the client side it requires only a PXE-capable network interface controller (NIC), and uses a small set of industry-standard network protocols such as dynamic host configuration protocol (DHCP) and trivial file transfer protocol (TFTP).

[0064] The idea of PXE originated along with other well-known protocols like the bootstrap protocol (BOOTP), DHCP, and TFTP, and forms part of the unified extensible firmware interface (UEFI) standard. Given fast and reliable local area networks (LANs), PXE is the most frequent choice for operating system boot, installation, and deployment.

[0065] Figure 8 illustrates how the PXE protocol operates. The client initiates the protocol by broadcasting a DHCPDISCOVER containing an extension that identifies the request as coming from a client that implements the PXE protocol. Assuming that a DHCP server or a Proxy DHCP server implementing this extended protocol is available, after several intermediate steps, the server sends the client a list of appropriate boot servers. The client then discovers a boot server of the type selected and receives the name of an executable file on the chosen boot server. The client uses TFTP to download the executable from the boot server. Finally, the client initiates execution of the downloaded image. At this point, the client's state must meet certain requirements that provide a predictable execution environment for the image. Important aspects of this environment include the availability of certain areas of the client's main memory, and the availability of basic network I/O services.

[0066] The sequence of events illustrated in Figure 8 is as follows:

[0067] Step 1. The client broadcasts a DHCPDISCOVER message to the standard DHCP port 67. An option field in this packet contains the following:

• A tag for client identifier (i.e. universally unique identifier, or UUID).

• A tag for the client universal network driver interface (UNDI) version.

• A tag for the client system architecture.

• A DHCP option 60, Class ID, set to "PXEClient:Arch:xxxxx:UNDI:yyyzzz".

[0068] Step 2. The DHCP or Proxy DHCP service responds by sending a DHCPOFFER message to the client on the standard DHCP reply port 68. If this is a Proxy DHCP service, then the client IP address field is null (0.0.0.0). If this is a DHCP service, then the returned client IP address field is valid.

[0069] At this point, other DHCP services and BOOTP services also respond with DHCP offers or BOOTP reply messages to port 68. Each message contains standard DHCP parameters, including an IP address for the client and any other parameters that the administrator might have configured on the DHCP or Proxy DHCP service.

[0070] The timeout for a reply from a DHCP server is standard. The timeout for re-broadcasting to receive a Proxy DHCPOFFER or a DHCPOFFER with PXE extensions is based on the standard DHCP timeout, but is substantially shorter to allow reasonable operation of the client in standard BOOTP or DHCP environments that do not provide a DHCPOFFER with PXE extensions.

[0071] Step 3. From the DHCPOFFER(s) that it receives, the client records the following:

• The client IP address (and other parameters) offered by a standard DHCP or BOOTP Service.

• The boot server list from the boot server field in the PXE tags from the DHCPOFFER.

• The discovery control options (if provided).

• The multicast discovery IP address (if provided).

[0072] Step 4. If the client selects an IP address offered by a DHCP service, then it must complete the standard DHCP protocol by sending a request for the address back to the service and then waiting for an acknowledgment from the service. If the client selects an IP address from a BOOTP reply, it can simply use the address.

[0073] Step 5. The client selects and discovers a boot server. This packet may be sent broadcast (port 67), multicast (port 4011), or unicast (port 4011) depending on discovery control options included in the previous DHCPOFFER containing the PXE service extension tags. This packet is the same as the initial DHCPDISCOVER in Step 1, except that it is coded as a DHCPREQUEST and now contains the following:

• The IP address assigned to the client from a DHCP service.

• A tag for client identifier (UUID).

• A tag for the client UNDI version.

• A tag for the client system architecture.

• A DHCP option 60, Class ID, set to "PXEClient:Arch:xxxxx:UNDI:yyyzzz".

• The boot server type in a PXE option field.

[0074] Step 6. The Boot Server unicasts a DHCPACK packet back to the client on the client source port. This reply packet contains:

• Boot file name.

• Multicast trivial file transfer protocol MTFTP1 configuration parameters.

• Any other options the network bootstrap program (NBP) requires before it can be successfully executed.

[0075] Step 7. The client downloads the executable file using either standard TFTP (port 69) or MTFTP (port assigned in the boot server Ack packet). The file downloaded and the placement of the downloaded code in memory is dependent on the client's CPU architecture.

[0076] Step 8. The PXE client determines whether an authenticity test on the downloaded file is required. If the test is required, the client sends another DHCPREQUEST message to the boot server requesting a credentials file for the previously downloaded boot file, downloads the credentials via TFTP or MTFTP, and performs the authenticity test.

[0077] Step 9. Finally, if the authenticity test succeeded or was not required, then the PXE client initiates execution of the downloaded code.

[0078] It will be evident to those practiced in the art that the PXE boot process is complex and has many potential security flaws. For example, the fundamental assumption is that there is a secure DHCP server available to the client. This cannot be guaranteed because a malicious attacker could easily provide a spoofed IP address for a DHCP server and fool the client into connecting with a malicious DHCP server. Once the client connects to the malicious server, the attacker can download any malware and gain entry into the computer and network environment.

[0079] The UEFI specification tries to remedy this potential security loophole by proposing that a trusted relationship may be created between the motherboard (platform), the motherboard firmware, and the operating system. This trust mechanism uses two pairs of asymmetric keys and an elaborate protocol to validate any data exchange. Those practiced in the art will agree that this protocol is complex and requires the storage of keys in non-erasable, tamper-proof, non-volatile memory. This complex protocol enables passing public keys from the OS to the platform firmware so that these keys can be used to securely pass information between the OS and the platform firmware.

[0080] Typically, the OS has been unable to communicate sensitive information or enforce any sort of policy because of the possibility of spoofing by a malicious software agent. That is, the platform firmware has been unable to trust the OS. By enrolling these public keys, authorized by the platform owner, the platform firmware can now check the signature of data passed by the operating system. Of course, if the malicious software agent is running as part of the OS, such as a rootkit, then any communication between the firmware and operating system still remains subject to spoofing, as the malicious code has access to the exchange key.

[0081] A mechanism to provide the same functionality as described above with respect to Figure 8, but with more security by using a ReST-ful interface over HTTPS, will now be discussed with respect to Figure 9. The technique of Figure 9 leverages the low-level functionality of the platform's network interface controller that supports PXE (i.e., is capable of supporting TCP/IP sockets in an OS-absent environment).

[0082] Figure 9 illustrates an exemplary implementation of a boot process. The description of Figure 9 makes reference to items identified in Figure 6 for ease of explanation, but note that the boot process provided in Figure 9 is not limited to the specific implementation of Figure 6. The process is as follows:

[0083] Step 1. When a "motherboard" 900 powers up, a piece of stand-alone software called a low-level client agent (LLCA) 901 is executed as the initial program load (IPL) process. This LLCA 901 may be programmed into the firmware, loaded into the motherboard via a USB drive, loaded from a local disk, or provided in any other way.

[0084] Step 2. The LLCA 901 will utilize the network interface controller (NIC) 902 device built in or attached to motherboard 900 to send out an HTTPS GET 904 request to an authentication server 905 listening on a pre-determined port address on a specific URL selected by the system administrator. This port address may be changed periodically by the system administrator for security purposes. This prevents the malicious snooping and tampering that is made possible in the DHCP protocol by the common knowledge that port 67 is the designated DHCP port. In addition, since the request is made using an HTTPS GET request, the data interchange is secure because HTTPS uses SSL/TLS.

[0085] All data generated by the requesting "motherboard" is tagged with the MAC address 903 of the network interface controller (NIC) 902 of the requesting "motherboard" 900 and a time stamp. This provides traceability of all requests that is useful for maintenance, debugging, and security of the network.
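A sketch of Step 2 under stated assumptions is given below: the LLCA issues an HTTPS GET to an administrator-chosen URL and port and tags the request with the NIC's MAC address and a timestamp. The server URL, port, and header names are hypothetical, and only standard-library calls are used.

```python
# LLCA hello: HTTPS GET to the authentication server, tagged with MAC and timestamp.
import time
import urllib.request
import uuid

AUTH_SERVER = "https://auth.example.com:8443/boot"   # admin-selected URL and port

def llca_hello() -> bytes:
    mac = "%012x" % uuid.getnode()                   # MAC address of the requesting NIC
    request = urllib.request.Request(
        AUTH_SERVER,
        headers={
            "X-Client-MAC": mac,                     # hypothetical tag headers
            "X-Timestamp": str(int(time.time())),
        },
        method="GET",
    )
    with urllib.request.urlopen(request) as response:
        return response.read()                       # the server replies with a challenge
```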

[0086] Step 3. The authentication server 905 will respond with a challenge 906 to the "motherboard" that generated the HTTPS GET request.

[0087] Step 4. Upon receiving a valid response, the authentication server 905 will download the resource addresses (URLs) of a set of trusted URI objects into an object table 605 data structure in the object translation buffer 603 on the OMMU 602. For systems that do not have a physical OMMU 602 available, the object table 605 data structure may be cached in physical main memory 606 and a software address translation may be employed by the requesting "motherboard."

[0088] Step 5. Next the client agent can generate HTTPS GET/SET requests 907 to start data interchange with the authorized URI objects, and download any specific operating system or virtual machine software images required to fully configure the requesting "motherboard" per policy-based protocol.
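Steps 4 and 5 can be sketched as follows under stated assumptions: after authentication, the client downloads the list of trusted URI objects, caches it as its object table, and issues further HTTPS requests only against entries in that table. The endpoint path and JSON shape are hypothetical.

```python
# Fetch the trusted-URI inventory, then allow downloads only from that inventory.
import json
import urllib.request

def fetch_trusted_uris(auth_server: str):
    with urllib.request.urlopen(auth_server + "/trusted-uris") as response:
        return json.loads(response.read())           # e.g. ["https://192.156.1.1/", ...]

def download_image(object_table, uri: str) -> bytes:
    if uri not in object_table:
        raise PermissionError("URI is not in the authorized object table")
    with urllib.request.urlopen(uri) as response:
        return response.read()                       # OS or VM image bytes

# object_table = fetch_trusted_uris("https://auth.example.com:8443")
# image = download_image(object_table, object_table[0])
```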

[0089] Step 6. During this data interchange, based on applicable security and privacy policies, the local LLCA 901 is pre-empted by a downloaded system level client agent (SLCA) 908 software module that is matched to the desired operating system. The SLCA 908 then executes the code that sets up the object tables 909 in the OMMU and maps Internet and cloud resources into the physical memory space of the requesting "motherboard."

[0090] Step 7. When the SLCA 908 completes its tasks, the desired operating environment comprising all the authorized Internet and cloud resources is available as local resources to the requesting "motherboard."

[0091] It should be clear to those skilled in the art that the OMMU 602 apparatus provides a major benefit in this process. Once the OMMU 602 maps a URI object 610 as outlined in Figure 6, the data interchange with that URI object 610 may be achieved in a trusted and secure manner by using a representational state transfer (ReST) interface. A ReST interface allows the creation and presentation of complex Internet services in a minimalist way through simple GET, PUT, POST, and DELETE commands over HTTPS, which supports the use of SSL/TLS encryption for all transactions between any two endpoints in a resource address space. Thus, a ReST-ful interface may be employed as an enhanced boot method for a "motherboard" and its constellation of approved and authenticated cloud resources in a given resource address space, mapping them into the "motherboard's" local address space in a secure manner.
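
A minimal sketch of such a ReST-style interchange with a mapped URI object 610 follows; the helper names are hypothetical, and POST and DELETE would follow the same pattern.

    import requests

    def rest_read(uri: str) -> bytes:
        """GET the current representation of a mapped URI object 610 over HTTPS."""
        response = requests.get(uri, timeout=30)
        response.raise_for_status()
        return response.content

    def rest_write(uri: str, data: bytes) -> None:
        """PUT a new representation of the mapped URI object 610 over HTTPS."""
        requests.put(uri, data=data, timeout=30).raise_for_status()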

[0092] Some example implementations will now be presented.

[0093] Example 1: An apparatus and method to federate a plurality of physical and virtual compute, storage, and internet resources and make them available as a local resource to any compute node; said apparatus comprising an Object Memory Management Unit with a built-in cache memory that is used to translate virtual addresses generated by a CPU into physical addresses on an end-user node on the internet; said Object Memory Management Unit providing the capability to expand the virtual address space to include the Universal Resource Address Space comprising all the Universal Resource Locators or URLs on the internet; said Object Memory Management Unit providing the capability to translate virtual addresses from the Universal Resource Address Space into physical addresses, enabling local access to URLs on the internet; said Object Memory Management Unit providing the capability to map a plurality of internet and cloud based compute, storage, and service resources into the local physical main memory of an end-user node on the internet; said method comprising a novel bootstrap method that implements a ReST interface to connect to resources on the internet via the HTTPS protocol to enable the bootstrap process of any motherboard platform; said method executing a Low Level Client Agent software program on the motherboard; said Low Level Client Agent either being loaded onto the motherboard via a USB drive, or embedded in the motherboard firmware, or downloaded to the motherboard via the network interface of the motherboard; said method utilizing a challenge/response authentication protocol over a network connection with a dedicated authentication server that is set to listen on a predetermined port address; said authentication server port address being selected by an authorized systems administrator; said port address of the authentication server being capable of being changed at random to maintain access security; said method utilizing the HTTPS protocol to implement the said challenge/response protocol; said method utilizing the HTTPS protocol to download the System Level Client Agent program code onto the motherboard from the authentication server; said System Level Client Agent utilizing the HTTPS protocol to download a list of authorized and available Universal Resource Addresses and loading said list into the Object Table data structure in the cache memory of said Object Memory Management Unit; said System Level Client Agent utilizing the HTTPS protocol to download the Operating System onto the motherboard and transferring control to the downloaded Operating System; said Operating System's virtual memory management system using the Object Table within the Object Memory Management Unit cache memory to map virtual addresses on the local disk drive to physical addresses in main memory, as well as using the Object Table to map Universal Resource Addresses (i.e., URLs) into the local physical memory to enable the end-user node to access those resources as local resources.
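
To make the address translation recited in Example 1 more concrete, the following sketch resolves a CPU-generated virtual address either to a local physical address or to a Universal Resource Address; the page size, the two tables, and the address split are illustrative assumptions rather than part of the recited method.

    PAGE_SIZE = 4096
    network_pages: dict[int, str] = {}   # virtual page -> Universal Resource Address (URL)
    local_pages: dict[int, int] = {}     # virtual page -> physical frame address

    def translate(virtual_address: int):
        """Return a local physical address or a URL for the given virtual address."""
        page = virtual_address & ~(PAGE_SIZE - 1)
        offset = virtual_address & (PAGE_SIZE - 1)
        if page in network_pages:
            return network_pages[page]        # resource reached over the internet
        return local_pages[page] + offset     # conventional local translation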

[0094] Example 2: The apparatus and method recited in Example 1, wherein the Object Memory Management Unit hardware apparatus is integrated onto the CPU silicon.

[0095] Example 3: The apparatus and method recited in Example 1 where the Object Memory Management Unit apparatus is implemented as a standalone silicon device to be interfaced to a commercially available CPU.

[0096] Example 4: The apparatus and method recited in Example 1 where the Object Memory Management Unit hardware does not implement an integrated cache memory.

[0097] Example 5: The apparatus and method recited in Example 1, where the Object Memory Management Unit apparatus is implemented as a software program and uses the Main Memory as the location for the Object Table data storage.

[0098] Example 6: The apparatus recited in Example 1 where the OMMU supports the storage of encrypted data within an Object Table Entry; said Object Table Entry being composed of a plurality of encrypted and unencrypted fields; said encrypted fields encapsulating "objects" containing data structures that are exchanged between any two points that are connected over a network or internet to deploy and access services at either end; said "objects" containing attributes and methods that are interpreted and acted upon by the receiving node to deploy and provide services or data requested by the other end of a network or internet connection; said "objects" encapsulating private keys to encrypt and decrypt data to be exchanged between any two connected nodes in a secure manner; said "objects" providing network address translation and performing encrypted/decrypted I/O on resources bounded by the translated addresses, including network addresses.

[0099] Example 7: The apparatus recited in Example 1 used in conjunction with existing PXE bootstrap protocols to allow the download and deployment of internet resources and permit them to be mapped into the local main memory and permit access to those resources as local resources.
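
To illustrate the encrypted Object Table Entry described in Example 6 above, the sketch below uses the third-party cryptography package and a symmetric Fernet key as a stand-in for the per-object keys; the class name and fields are assumptions, not the recited structure.

    from cryptography.fernet import Fernet  # third-party "cryptography" package

    class SecureObjectEntry:
        """Object Table Entry carrying a per-object key for encrypted exchange."""

        def __init__(self, uri: str):
            self.uri = uri                       # unencrypted field
            self._key = Fernet.generate_key()    # key held in an encrypted field
            self._cipher = Fernet(self._key)

        def seal(self, data: bytes) -> bytes:
            """Encrypt data before it is exchanged with the remote endpoint."""
            return self._cipher.encrypt(data)

        def unseal(self, token: bytes) -> bytes:
            """Decrypt data received from the remote endpoint."""
            return self._cipher.decrypt(token)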

[00100] Example 8: The apparatus and method recited in Example 1 where the compute node is a Virtual Machine and the OMMU is implemented in hardware on the host CPU.

[00101] Example 9: The apparatus and method recited in Example 1 where the compute node is a virtual machine and the OMMU function is implemented as a software program module.

[00102] Example 10: The apparatus and method recited in Example 1 where the compute node is an individual process or "container" executing on a host physical CPU or a Virtual Machine and has other compute, storage, network and internet resources and services mapped into the process's or "container's" virtual address space; where said process or container utilizes the OMMU implemented in hardware on the host CPU, or an OMMU implemented as a software program on a Virtual Machine executing on a host CPU, to perform virtual to physical address translation that is unique to each process or container, thus securing the privacy of each process or "container" from a plurality of other co-resident processes or "containers" on the same physical or virtual machine; where said virtual to physical address translation performed by the OMMU permits any process or "container" to map a plurality of compute, storage, network and internet services, including cloud resources, as logical local resources for the process or "container" to provide file I/O (i.e., read/write operations) on any file or memory-mapped device included in the said mapping; where a restricted and authorized set of Universal Resource Addresses are mapped into the virtual address space and the OMMU apparatus provides a virtualization of network and internet addresses on a per process or per "container" basis, thus controlling the access to a plurality of other processes or "containers" executing simultaneously on either a physical machine or a virtual machine.

[00103] Figure 10 is a block diagram that illustrates an OMMU system in an exemplary implementation. OMMU system 1000 is an example of OMMU 602, although other examples may exist as described herein. OMMU system 1000 includes processing system 1002 and storage system 1004. Storage system 1004 further includes software 1006, which is configured to operate OMMU system 1000 as described herein.

[00104] Processing system 1002 may comprise a microprocessor and other circuitry that retrieves and executes software 1006 from storage system 1004. Software 1006 includes map module 1008 and request module 1010. Processing system 1002 may be implemented within a single processing device, but may also be distributed across multiple processing devices or sub-systems that cooperate in executing program instructions. Examples of processing system 1002 include general-purpose central processing units, application-specific processors, and logic devices, as well as any other type of processing device. In some implementations, processing system 1002 may comprise the main CPU or CPUs of a computing system; however, in other examples, processing system 1002 may comprise separate hardware within a computing system.

[00105] Storage system 1004 may comprise any storage media readable by processing system 1002 and capable of storing software 1006. Storage system 1004 may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Storage system 1004 may be implemented as a single storage device, but may also be implemented across multiple storage devices or sub-systems. Storage system 1004 may comprise additional elements, such as a controller to read software 1006 in some examples.

[00106] Examples of storage media include random access memory, read only memory, magnetic disks, optical disks, and flash memory, as well as any combination or variation thereof, or any other type of storage media. In some implementations, the storage media may be a non-transitory storage media. In some instances, at least a portion of the storage media may be transitory. It should be understood that in no case is the storage media a propagated signal.

[00107] In operation, processing system 1002 executes software 1006 to provide the desired OMMU operations described herein. In at least one example, map module 1008, when executed by processing system 1002, directs processing system 1002 to map virtual addresses to local addresses and network addresses, wherein the local addresses correspond to local resources of a computing system, such as dynamic random-access memory (DRAM) or disk storage (flash media, hard disk drives, etc.), and wherein the network addresses correspond to network resources addressed by URIs and the like. As the mapping is maintained via map module 1008, request module 1010, when executed by processing system 1002, directs processing system 1002 to identify data requests that use the virtual addresses and handle the data requests per the maintained mapping. For example, if a request with a virtual address were mapped to a local address for a local resource, processing system 1002 may access data from the local resource of the computing system. In contrast, if a request with a virtual address were mapped to a network address for a network resource, processing system 1002 may access data in the network resource over the network. The data requests may be generated by the operating system on the host computing system for OMMU system 1000, may be generated by applications executing on the host computing system for OMMU system 1000, or may be generated by any other process on the computing system.
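
The dispatch performed by map module 1008 and request module 1010 can be sketched as follows; the in-memory tables, the DRAM stand-in, and the helper name are assumptions for the sketch and do not reflect the modules' actual interfaces.

    import requests

    PAGE_SIZE = 4096
    DRAM = bytearray(64 * 1024 * 1024)   # stand-in for local main memory
    mapping: dict[int, object] = {}      # virtual page -> int physical frame or str URL

    def handle_request(virtual_address: int) -> bytes:
        """Handle a read-style data request per the maintained mapping."""
        page = virtual_address & ~(PAGE_SIZE - 1)
        offset = virtual_address & (PAGE_SIZE - 1)
        target = mapping[page]                      # map module 1008 lookup
        if isinstance(target, str):                 # network address: fetch over the network
            response = requests.get(target, timeout=30)
            response.raise_for_status()
            return response.content
        # local address: return the remainder of the page from the DRAM stand-in
        return bytes(DRAM[target + offset : target + PAGE_SIZE])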

[00108] The functional block diagrams, operational sequences, and flow diagrams provided in the Figures are representative of exemplary architectures, environments, and methodologies for performing novel aspects of the disclosure. While, for purposes of simplicity of explanation, methods included herein may be in the form of a functional diagram, operational sequence, or flow diagram, and may be described as a series of acts, it is to be understood and appreciated that the methods are not limited by the order of acts, as some acts may, in accordance therewith, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a method could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all acts illustrated in a methodology may be required for a novel implementation.