

Title:
COMPUTER SYSTEM MODEL
Document Type and Number:
WIPO Patent Application WO/2006/054099
Kind Code:
A8
Inventors:
SKIMMING WILLIAM GLEN (GB)
Application Number:
PCT/GB2005/004453
Publication Date:
August 20, 2009
Filing Date:
November 17, 2005
Assignee:
SKIMMING WILLIAM GLEN (GB)
International Classes:
G06F15/00; G06F15/76
Attorney, Agent or Firm:
CRAWFORD, Andrew, B. et al. (235 High Holborn, London WC1V 7LE, GB)
Claims

1 Computer system comprising meta-model processing means for receiving, transmitting and corroborating computer system meta-information, the meta-processing means comprising metaSA means, metaCA means and metaIA means, the metaSA processing means comprising meta-information for receiving, transmitting, corroborating and fabricating a plurality of system architecture properties, the metaCA processing means comprising meta-information for receiving, transmitting, corroborating and fabricating a plurality of computer architecture properties, the metaIA processing means comprising meta-information for receiving, transmitting, corroborating and fabricating a plurality of implementation architecture properties; in addition, the computer system comprising processing means for receiving, transmitting and processing any information for application or system purposes, the processing means comprising a plurality of configurable algorithmic processing blocks, in addition to the arrangement of configured algorithmic processing means handling active information processing tasks; moreover, all types of processing means operate concurrently on mutually independent tasks; in addition, the active execution of the processing instruments is guarded by the security processing means at both the meta-information and instance information levels.

2 A computer system as claimed in claim 1, wherein the meta-information forms an active self-describing stream of the system that is periodically stored in meta-engrams, both in memory and storage, maintaining accurate corroborative state information used for services such as robust recovery cases, re-engineering cases, optimisation cases, operation cases, maintenance cases and administration cases; these self-describing streams form a polymer-metacode model operated on by the metaSA, metaCA and metaIA, and are used to dynamically chain meta-streams and instance streams, each bearing a technical effect on the configuration and abilities of the computer system.

3 A computer system as claimed in claim 1, wherein the metaSA processing instruments are reconfigurable to any design fashion, such that their processing means enable the formulation of proactive and reactive assessments of the best course of action for the running of system architecture meta-models.

4 A computer system as claimed in claim 1, wherein the metaCA processing instruments are reconfigurable to any design fashion, such that their processing means enable the formulation of proactive and reactive assessments of the best course of action for the running of computer architecture meta-models.

5 A computer system as claimed in claim 1, wherein the metaIA processing instruments are reconfigurable to any design fashion, such that their processing means enable the formulation of proactive and reactive assessments of the best course of action for the running of implementation architecture meta-models.

6 A computer system as claimed in claim 1, wherein the metaIA is embodied in a plurality of processing agents managing the instances fabricated by the metaSA processing instruments and the metaCA processing instruments, the plurality of processing agents sustaining harmony between the system architecture information component instances and the computer architecture component instances; in addition, the arrangement of processing agent instruments enables decision-pattern processing at all abstraction levels of the computer system; moreover, the processing agent instruments make proactive and reactive decisions that configure the environment to suit the requirements of the information instance.

7 A computer system as claimed in claim 1, wherein a plurality of algorithmic processing instruments form a reconfigurable block, and each reconfigurable block owns a plurality of interface processing instruments capable of concurrent information streaming and access, which govern the means of information flow in and out of such quiddities and algorithmic processing instruments, wherein a plurality of such quiddities form a mesh commanded by the embodiment of a metaIA agent instrument, and the plurality of such agent instruments are commanded by a primary metaIA processing agent, forming a loosely coupled heterogeneous arrangement of quiddity and agent instruments; the primary metaIA agent processing instrument takes as input meta-information and information instances from the metaCA and the metaSA, the metaIA output being the organisation and OAM of the mesh to suit the task's requirements.

8 A computer system as claimed in claim 7, wherein the arrangement of quiddities and agents can handle a plurality of independent information threads, co-ordinated by an agent instrument, forming a multi-thread-aware computer system.

9 A computer system as claimed in claim 7, wherein a plurality of quiddities can be assigned to a group and a plurality of groups can be configured such that each group handles application threads, the grouping of such threads being dependent on their application type, enabling the computer system to be application centric without being application specific.

10 A computer system as claimed in claim 9, wherein the mesh of processing instruments enables high-bandwidth information processing; in addition, such meshes can form redundancy units that can recover or sustain a thread of processing should another group meet some degree of failure.

11 A computer system as claimed in any preceding claim, wherein the computer system can run a plurality of mixed execution models of either spatial or temporal fashion.

12 A computer system as claimed in any preceding claim, wherein the arrangement of quiddities is such as to provide a triumvirate of processing instruments, wherein one embodied triumvir agent has the combined roles of the metaSA and metaCA, comprising the etiquette of the system; another embodied triumvir agent commands the role of the metaIA, comprising the delegation and organisation of the system; and the remaining embodied triumvir commands the role of action, comprising the application threads under the guidance of the two control triumvirs.

13 A computer system comprising a plurality, in any design fashion, of computer system components as claimed in any previous claim, forming a community of distributed units, wherein the roles of a computer system are distributed to its members, such roles comprising application processing, memory processing, presentation processing, connectivity/communication processing, security processing, peripheral processing and storage processing.

14 A computer system as claimed in claim 13, wherein the memory unit has an agent instrument that administers, organises, validates and directs the flow of tasks to the remaining distributed units. The application unit has agent-processing instruments each dedicated to a family set of application tasks. The presentation unit has agent-processing instruments each dedicated to a presentation type such as screen display, documents, web pages and visual peripherals. The storage unit has agent-processing instruments each dedicated to storage-related tasks for devices such as DVDs, CDs, hard drives and floppies. The connectivity/communication unit has agent-processing instruments dedicated to handling communication and connectivity tasks such as network protocols, network interfaces and middleware-related devices and protocols. The security unit has agent-processing instruments dedicated to handling the security model(s). The peripheral unit has agent-processing instruments dedicated to devices such as pointing devices, printers, scanners and cameras.

15 A computer system as claimed in claims 13 and 14, wherein the computer system can be scaled within a single device or multiple devices to form a computer system of any fashion.

16 Computer system comprising processing means for receiving/transmitting and processing an instruction stream, the processing means comprising etiquette means, delegation means and action means,

the etiquette means comprising semiotic and idiom processing means for receiving and interpreting the instruction stream, the instruction stream including schemas defining the instruction language, and etiquette processing means for monitoring and communicating requests for tasks to the delegation means,

the delegation means comprising judgment processing means for monitoring and communicating requests for services to the action means, in response to communication with the etiquette means, and reporting to the etiquette means requests for information from the action means, and discovery processing means for retrieving configuration information relating to available services from external sources in response to communication with the etiquette means,

the action means comprising action processing means configurable to carry out different processing functions, factory processing means for configuring the action processing means in response to communication with the delegation means, and interface processor means dynamically configurable by the action processing means to allow communication between the action means and external sources/destinations,

the processing means further comprising memory processor means, the memory processor means including storage means for storing schemas and communication control information, the memory processor means allowing communication with each of the etiquette, delegation and action processing means,

wherein communication within the system takes place via interfaces which are dynamically configurable by the processing means by which they are owned, in accordance with dynamically determined exchange protocols,

and the etiquette, delegation and action processing means are arranged to enable communication with the corresponding means of a different system.
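
As an illustrative sketch only (the class names, the schema shape and the single toy instruction below are invented, not taken from the application), the etiquette/delegation/action flow of this claim can be modelled as: the etiquette means validates an instruction against the schema, the delegation means routes the service request, and the action means runs a configured execution engine.

```python
# Hypothetical sketch of the etiquette -> delegation -> action flow.
class ActionUnit:
    """Action means: configurable processing functions ("execution engines")."""
    def __init__(self):
        self.engines = {}

    def configure(self, name, func):
        # Factory processing: install an execution engine on request.
        self.engines[name] = func
        return True  # report success back to the delegation means

    def run(self, name, payload):
        return self.engines[name](payload)

class DelegationUnit:
    """Delegation means: routes requests for services to the action means."""
    def __init__(self, action):
        self.action = action

    def request_service(self, engine, payload):
        return self.action.run(engine, payload)

class EtiquetteUnit:
    """Etiquette means: interprets the instruction stream against a schema."""
    def __init__(self, delegation, schema):
        self.delegation = delegation
        self.schema = schema  # the schema defines the instruction language

    def handle(self, instruction):
        op, payload = instruction
        if op not in self.schema:   # unauthorised instructions are rejected
            raise ValueError("unknown instruction: " + op)
        return self.delegation.request_service(op, payload)

# Usage: configure one engine, then drive it through the instruction stream.
action = ActionUnit()
action.configure("double", lambda x: 2 * x)
etiquette = EtiquetteUnit(DelegationUnit(action), schema={"double"})
print(etiquette.handle(("double", 21)))  # 42
```

The sketch shows only the routing discipline of the claim, not the configurable hardware, memory processor means or inter-system communication it also recites.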



21 A computer system as claimed in claim 16, wherein the etiquette means comprises security processing means for validating the authenticity of instructions contained in the instruction stream, on the basis of security information contained in the instruction stream.

22. A computer system as claimed in claim 21, wherein the etiquette means is arranged to prevent unauthorised instructions from being processed.

23. A computer system as claimed in any preceding claim, wherein the instruction stream includes schema instances comprising instructions according to rules defined in the schema.

24. A computer system as claimed in claim 21 or 22, wherein the instruction stream includes schema instances comprising instructions according to rules defined in the schema, and the security processing means are arranged to validate each schema instance.

25. A computer system as claimed in any preceding claim, wherein the dynamically determined exchange protocols are determined on the basis of a self-describing portion of the information being transmitted, which contains details of the nature and form of the information.

26. A computer system as claimed in claim 25, wherein the self-describing portion contains routing information.

27. A computer system as claimed in any preceding claim, wherein the etiquette means are arranged to interpret schemas including system information specific to a particular application.

28. A computer system as claimed in claim 27, wherein the system information comprises information regarding hardware requirements and/or task management information.

29. A computer system as claimed in claim 27 or 28, wherein the etiquette means are arranged to communicate system information obtained from the schema to the memory processor means.

30. A computer system as claimed in claim 29, wherein the memory processor means are arranged to provide a content-addressable memory, whereby to allow different processing means to access relevant system information.
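
A content-addressable memory returns entries matched by their content rather than by an address. As a loose software analogy (the class and the example fields below are invented for illustration, not part of the application):

```python
# Sketch of a content-addressable memory: lookup by matching content,
# not by address. All field names here are hypothetical.
class ContentAddressableMemory:
    def __init__(self):
        self._entries = []

    def store(self, entry):
        self._entries.append(dict(entry))

    def lookup(self, **fields):
        # Return every stored entry whose content matches all given fields.
        return [e for e in self._entries
                if all(e.get(k) == v for k, v in fields.items())]

cam = ContentAddressableMemory()
cam.store({"schema": "render", "unit": "presentation", "priority": 2})
cam.store({"schema": "store", "unit": "storage", "priority": 1})
print(cam.lookup(unit="storage"))  # the single matching storage entry
```

This illustrates how different processing means could retrieve "relevant system information" by describing it rather than knowing where it is stored.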

31. A computer system as claimed in claim 29 or 30, wherein the delegation means are arranged to retrieve system information from the memory processor means.

32. A computer system as claimed in any of claims 27 to 31, wherein, in response to a schema received and interpreted by the etiquette means, the etiquette means is arranged to communicate system information to the delegation means based on the schema, and the delegation means is arranged to instruct the action means in respect of required execution engines, the action means being arranged to configure the action processing means in response to the instructions from the delegation means.

33. A computer system as claimed in claim 32, wherein the etiquette means is arranged to receive schemas from an external requestor by means of an assignment cycle, the assignment cycle comprising the etiquette means being invoked by the requestor, validating and decoding the schema, communicating the decoded schema to the memory processor means, and confirming completion of the assignment cycle to the requestor.

34. A computer system as claimed in claim 33, wherein the etiquette means is arranged to open communication with the delegation means following completion of the assignment cycle, instruct the delegation means of the presence of the schema and of information regarding usage of the schema, and close communication with the delegation means.

35. A computer system as claimed in claim 34, wherein the delegation means is arranged to access the schema from the memory processor means in response to the communication with the etiquette means, configure the judgment processing means and the discovery processing means on the basis of information from the etiquette means and the schema, open communication with the action means, instruct the action means in respect of required execution engines on the basis of information contained in the schema, receive information from the action means regarding the success or failure of the configuration of the action processing means, close communication with the action means, and report the success or failure to the etiquette means.

36. A computer system as claimed in claim 35, wherein the action means is arranged to configure the action processing means in response to the communication with the delegation means, test the configuration, report the success or failure of the configuration to the delegation means, and listen for requests for processing functions from external sources.
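
The assignment cycle of claims 33 to 36 can be summarised as a sequence of hand-offs. The sketch below is purely hypothetical (every function name, dictionary key and log string is invented) and walks through that sequence in order:

```python
# Hypothetical walk-through of the assignment cycle (claims 33-36).
log = []

def configure_action(schema):
    # Action means: configure and self-test the requested execution engines.
    # Success is (arbitrarily) assumed only if engines are specified.
    return "engines" in schema

def assignment_cycle(schema):
    log.append("etiquette: invoked by requestor")
    log.append("etiquette: validate and decode the schema")
    memory = {"schema": schema}          # hand the decoded schema to memory
    log.append("etiquette: confirm completion to the requestor")

    log.append("etiquette -> delegation: schema present, usage information")
    log.append("delegation: fetch schema from memory; configure judgment/discovery")
    ok = configure_action(memory["schema"])  # instruct required execution engines
    log.append("action -> delegation: configuration "
               + ("succeeded" if ok else "failed"))
    log.append("delegation -> etiquette: report result")
    return ok

print(assignment_cycle({"engines": ["fft"]}))  # True
```

The log strings stand in for the open/close communication steps the claims recite; a real implementation would carry schema content and configuration state instead.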

37. A computer system as claimed in any preceding claim, wherein the discovery processing means are arranged to retrieve configuration information from an external system in response to a request from the etiquette means or the action means.

38. A computer system as claimed in any preceding claim, wherein the discovery processing means are arranged to coordinate with the corresponding means of an external system to enable certain processing functions required by the instruction stream to be carried out by the external system.

39. A computer system as claimed in any preceding claim, wherein the judgment processing means schedules requests to the action means, whereby to optimise the occupancy of the action processing means.

40. A computer system as claimed in any preceding claim, wherein the action processing means are configurable to provide a plurality of distinct execution engines for independently carrying out processing functions.

41. A computer system as claimed in claim 40, wherein the distinct execution engines are configured to transmit and receive data independently from one another, to and from locations specified by the delegation means.

42. A computer system as claimed in any preceding claim, wherein the schema includes UML models providing information about the system, and the etiquette means is arranged to process the models in real time to allow configuration of the system on the basis of expected kinds of future instructions in the instruction stream.

43. A computer system as claimed in any preceding claim, wherein the schema defines portions of the configurable action processing means which can be assigned on the basis of subsequent received instructions to be configured to provide execution engines required for a particular task.

44. A computer system as claimed in any preceding claim, wherein the etiquette means is arranged to interpret information from the schema which defines or extends the instruction set used by the computer system.

45. A computer system as claimed in any preceding claim, wherein the etiquette means is arranged to interpret from the schema information for performing hardware updates within the configurable action processing means.

Description:

Computer System Model

1.1 Technical Field

A high-level architect would describe this application as a self-describing computer system model. This application defines Conceptual Logic (CL) as a set of ontologies that self-describes its computer system and any part of it. Thus, the set of ontologies that describes a computer system forms the meta-semiotics of conceptual logic.

1.2 Definition of Application terms

The sections below break down the reasoning behind the technical name Conceptual Logic - Computer System Model (CL-CSM). Also, since people of varying expertise and skills will read this document, what follows is a systematic definition of the terms used throughout this application. Their importance is to ensure a level-set between reader and inventor.

Conceptual Ontology

In this application, conceptual ontology is a map of ideas and their relationships.

Ontology:

In this application, ontology is the undertaking to develop an exhaustive and rigorous conceptual schema within a given domain: a typically hierarchical data structure containing all the relevant entities and their relationships and rules within that domain.

Schema:

In this application, a schema is a specialised information set that describes the anatomy and action of some quiddity. Schemas are an internal representation of the quiddity.

Idiom

In this application, idiom means the meta-meta-language or vocabulary comprised of metadata structures bound by some grammar constraints. Idiom provides the syntax (pattern rules of semantic metadata).

Etiquette

In this application, etiquette means the rules and behaviour coded in the metadata that prescribe the idiom, moiety or quiddity.

Information

In this application, information is a term whose meaning depends on the context in which it is applied; as a rule, information is used to describe, at any abstraction level: ontologies and schemas, meta-messages and models, patterns, hardware, software, knowledge, instruction, communication, representation, data and stimulus.

Information: as a Message

In this application, when information is a message, it means something is communicated from the sender to the receiver, as opposed to noise, which is something that inhibits the flow of communication. When information is a message, it does not have to be accurate. CL-CSM assumes a sender and a receiver, and does not attach any data-processing significance to the information. Information in this sense is simply any message the sender chooses to create, in response to a requestor or as part of the contract-building relationship.

Information: as a Pattern

In this application, when information is a pattern, it assumes a separation among agents and a representation between an agent and its quiddity. In CL-CSM, messages, data and metadata are forms of information conforming to some pattern. Thus, information is any type of pattern that influences the formation or transformation of other patterns. For example, DNA is a pattern sequence of nucleotides that influences the formation and development of an organism.

Patterns

In this application, a pattern can be a model, a set of rules, a template, a framework or an idiom. Patterns direct the action of building, routing or command issuing of quiddities, moieties or information units, especially if the quiddity instances have enough in common for the underlying pattern to be inferred or discerned, in which case the quiddities are said to exhibit the pattern.

Meta-Information

In this application, meta-information is the metadata on any information form, such as its ontology or an instance, whether in a stream, packet, cyber-structure or coded storage pattern. Meta-information is the basic root data for all data types.

Metaware

In this application, metaware is the embodiment of a self-describing design pattern stream. Metaware shapes configurable hardware blocks, known as quiddities, into role-processing units, known as agents. Metaware shapes and organises software applications and applets. Metaware is a template that describes instances of any conceptual logic schema. Metaware goes beyond the software or firmware models, as it contains hardware information which reconfigures the configurable logic; thus, it naturally has a technical effect.

Metadata

In this application, metadata means data about data. Metadata describes information about the data's structure. Metadata is definitional data that provides information about, or documentation of, other data managed within a quiddity environment. Metadata may include descriptive information about the context, quality and condition, or characteristics of the data.
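
As a concrete, invented illustration of the definition above (the readings and every metadata field are made up), metadata acts as definitional data documenting the structure, quality and context of other data:

```python
# Invented example: metadata describing a small data set.
data = [12.1, 11.8, 12.4]
metadata = {
    "describes": "temperature readings",
    "unit": "degrees Celsius",
    "structure": "list of floats",
    "quality": "uncalibrated sensor",
    "context": "lab bench readings",
}
# A consumer can interpret the data from its metadata alone:
print(len(data), metadata["describes"], "in", metadata["unit"])
# 3 temperature readings in degrees Celsius
```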

Meta-assembly

In this application, meta-assembly means the collection of metadata patterns into a single assembly unit. Meta-assembly is the embodiment of a metadata pattern, either in storage or as an instance. Meta-assembly is a set of metaware units that may be composed of any combination of moiety, quiddity and agent.

Meta-semiotics

In this application, meta-semiotics means the totality of the general rules of construction, syntax, semantics and pragmatics, which characterise the metaware unit. Meta-semiotics describes the language structure, pattern, meaning, context and rules used by a metaware unit.

Meta-language

In this application, meta-language provides the self-describing information for the compilation, assembly and interpretation rules for the ontology. Meta-language provides the frame concept to the structuring of the language to be used, and the core language constructs.

Logic:

In this application, logic means the language that computer systems use.

Moiety

In this application, a moiety is a part or portion of a quiddity or agent entity. To lessen the complexity of a quiddity, its facets are factored into clear, clean parts called moieties. Thus, a moiety is owned by its quiddity. Moieties offer a set of facilities to the quiddity, whereas a quiddity offers services.

Quiddity

In this application, a quiddity is something that has behaviour; in this context, 'something' means it has no role and is a free resource. As the behaviour and model develop, the quiddity will morph from a 'something' into a specific role-playing agent entity. The embodiment of a quiddity is a raw, pliable, intelligent cell-unit, and at its core is a combinatorial configurable logic form.

Agent

In this application, an agent is a quiddity initialised to some purposeful role for the computer system. Thus, an agent is an autonomous entity with an ontological commitment and an agenda of its own. Each agent possesses the ability to act autonomously; this is an important distinction, because a simple act of obedience to a command does not qualify an entity as an agent. An agent may interact or negotiate with its environment and with other agents. Agents make decisions; therefore, an agent is both proactive and reactive to its environment, following the CL-CSM meta-model. Agents are clearly identifiable problem-solving entities with well-defined boundaries and interfaces, embedded in a particular environment over which they have partial control and observability.
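
A loose software analogy of the quiddity-to-agent morphing described above (the class, the role name and the behaviour are assumptions for illustration, not taken from the application):

```python
# Sketch: a quiddity is a raw configurable block with no role; installing
# a role and a behaviour turns it into a role-playing agent.
class Quiddity:
    def __init__(self):
        self.role = None                 # no role yet: a free resource
        self.behaviour = None

    def configure(self, role_name, behaviour):
        # Shaping by metaware: the block takes on a purposeful role.
        self.role = role_name
        self.behaviour = behaviour
        return self                      # the quiddity now acts as an agent

    def act(self, x):
        if self.role is None:
            raise RuntimeError("an unconfigured quiddity has no behaviour")
        return self.behaviour(x)

agent = Quiddity().configure("adder", lambda x: x + 1)
print(agent.role, agent.act(41))  # adder 42
```

The autonomy, negotiation and proactive/reactive decision-making attributed to agents in the text are, of course, beyond this toy sketch.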

Service

In this application, a service is a set of well-defined functions that has a registered and well-defined signature. Registered services are discovered through introspection or from some registered service agent.

Facility

In this application, a facility is a closely related set of technical services. The arrangement of directly related facilities into a service creates a well-defined, crisp service architecture.

Computer System

In this application, a computer system is the set of hardware and software, which processes data for some purpose. A computer system is a technical implementation of its computer system model.

Hardware

In this application, hardware is a comprehensive term for all the physical parts of a computer system. Thus, hardware is distinguishable from the data it contains or operates on, or from the software that provides instructions to the hardware to perform purposeful tasks, as being some physical entity.

Software

In this application, software refers to one or several computer programs and data held in storage on a computer for some purpose. Software performs the functionality of the program it carries, either by directly issuing instructions to the computer hardware or by serving as input to another piece of software. Data software exists solely for its eventual use by other program software.

Data

In this application, data is the plural of datum, and means a large class of statements. In a data context, statements are measurements, observations of a variable, or results from a software operation. Such statements may comprise numbers, words or images.

Interface

In this application, interfaces are entry-points to conceptual logic components, entry-points to any metaware component, entry-points of services, facilities, operations and any software method, and entry-points from one embodiment to another. Interfaces offer more than a gateway to something; interfaces also discover the behaviour and scope of a unit, organise and route streams, track exchange patterns, organise and preserve the active network mesh, and provide a level of security. Interfaces offer the only means to receive or dispatch information.

Message

In this application, a message is a structure containing all or part of either, information or data. Information can be metadata.

Message Exchange

In this application, a message exchange is the exchange of information among two or more units.

Message Exchange Pattern (MEP)

In this application, MEP is the pattern and protocol to which quiddities cohere. MEPs form part of the etiquette of the system and are described within the meta-etiquette.

Computer System Model (CSM)

In this application, a CSM is the functional abstraction of computer systems. Thus, a CSM is the root of many computer systems. Each computer system differs in its subtle technical implementation from the common CSM. Though a CSM is a blueprint for infinite computer system designs and implementations, it also binds them to a common functional distribution with common constraints.

Conceptual Logic - Computer System Model (CL-CSM):

CL-CSM is an aggregation of several core ontologies that describe what it is at the CSM level. A core ontology is a further aggregation of schemas; schemas contain specific meta-information on what it is, what it does and how it performs. Schemas contain self-describing information on their organisation, structure, etiquette and action patterns. To summarise, conceptual logic is an arrangement of self-describing models that enable a computer system to perform introspection and understanding on other CL-CSM based systems.

Reconfigurable Computing

Reconfigurable computing is computer processing with flexible computing fabrics. The principal difference when compared with using ordinary microprocessors is the ability to make substantial changes to the data path itself as well as the control flow.

1.3 Understanding the Application Arrangement

The decision to make subtle changes to the format of a standard patent application was not taken lightly. However, given the scope of the application, the number of step changes it makes, and the issues a reader may have, the following arrangement was chosen.

The History section sets out the pattern of how the technology developed, providing a level-set between reader and inventor. The Problem Domain section describes a development and follows with the problems (in italic text) that it created. For example, a description of CISC, what it solved, and its impact on computing, followed by the problems it created for compilers and ISAs. The Summary section then collects all these problems, which are then used to best show this application's innovation.

The two sections, History and Problem Domain, are not there to show knowledge of computer systems, or the extent and scope of understanding. Their presence is to highlight the scope and areas of innovation this application is in, and to identify the step change the application brings to computing.

2. History

Computer systems are not new; arguably, Charles Babbage's Difference Engine in 1833 would be the first. Progressive developments such as Howard Aiken's Harvard Mark I, or John von Neumann's 1945 Princeton machine, are just some of the milestones in this field. Nevertheless, these are computer implementation architectures, each winning its own rightful place in history. However, as the abstract section addressed, a computer system model scopes more than implementation.

The history section's objective is to prime the reader in the progressive development of the computer system model. The point is to describe the trend of development that shaped designers' thinking and, in consequence, the patent filings. The purpose of detailing the computer system history is to show why this application is unique.

2.1 System Modelling

The pattern of computer development has a common theme, which is to perform complex calculations fast, reliably and repetitiously. In its infancy, computer science did not have a recognised methodology or modelling technique.

Alan Turing is the recognised father of computer system modelling. Given the mind-set of the time (1936), Turing's technique was Instruction Set Architecture (ISA) oriented, designed to perform calculations faster and more reliably than any human. Turing developed a simple modelling technique called a Turing Machine (TM) comprising:

•An infinite length of tape

•A simple control box

•A simple fetch engine to get datum

•A write-back engine

•A simple data-path box

•A simple execution unit to work on the data in the designed manner

Figure 3 describes the UML components of a TM. A description of how those components interact is in the UML sequence diagram shown in Figure 4: Sequence diagram of the Turing Machine. In UML, a diagram is an abstraction level of the model.

The TM sequence steps are:

Step 01. The fetch engine reads datum from a location on the tape

Step 02. The fetch engine then issues the instruction or datum to the execution unit

Step 03. The execution unit then processes the datum guided by the instruction engine

Step 04. The execution unit sends the result, invoking the write-back service

Step 05. The write-back engine then writes the result to a location on the tape

Step 06. The write-back engine then tells the fetch engine that it is ready for the next instruction
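The six steps above can be sketched as a minimal fetch–execute–write-back loop. This is only an illustrative sketch: the tape alphabet, the state names, and the bit-inverting transition table below are assumptions for demonstration, not taken from the application.

```python
# Minimal Turing Machine sketch: fetch -> execute -> write back (Steps 01-06).
# The transition table (a binary inverter) is an illustrative assumption.

def run_tm(tape, transitions, state="start", head=0, max_steps=1000):
    """Run a TM until it reaches the 'halt' state; returns the final tape."""
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head]                      # Step 01: fetch datum from tape
        new_state, write, move = transitions[(state, symbol)]
        tape[head] = write                       # Steps 03-05: execute, write back
        head += {"R": 1, "L": -1, "N": 0}[move]  # move the head along the tape
        state = new_state                        # Step 06: ready for the next step
    return "".join(tape)

# Transition table: invert every bit, halt at the blank marker '_'.
INVERT = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "N"),
}

print(run_tm("1011_", INVERT))  # -> 0100_
```

The transition table is the finite control; the loop plays the roles of the fetch and write-back engines, matching the summary below that a TM is a finite state machine writing onto a tape.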

Effectively, the Turing Machine is an abstract model of computer execution and storage, giving a mathematically precise definition of an algorithm or 'mechanical procedure'. Turing Machines are still widely used by computer architects, in theoretical computer science, and in the theory of computation. To summarise, a Turing Machine (TM) is just a Finite State Machine (FSM) which receives input patterns and writes the results onto an infinite tape.

Thus, a TM can solve a 'finite set' of FSMs. Below is a UML state diagram representing the states the CPU transitions through as it flows through the sequence.

Turing realised that a TM models an execution sequence; to encapsulate the set of sequences another type of TM is needed. In 1947, Turing described a universal Turing machine as the solution to simulating any other TM as:

It can be shown that a single special machine of that type can be made to do the work of all. It could in fact be made to work as a model of any other machine. The special machine may be called the universal machine.

2.2 Turing Machine and the Computer System Model

The popular approach of using TMs to model a computer system meant the focus started in the computer architecture, not ordered by the component layers shown in Figure 4: Computer System Model. This conflicts with a general design approach of either top-down or bottom-up; nevertheless, the focus has always started from the Computer Architecture (CA).

TM modellers would develop the ISA using hunches and trends provided by market researchers, and this became the contract between the System Architecture and the Computer Architecture. Thus, computer architects define the set of machine attributes in the ISA that a system architect should understand in order to program the microprocessor. This approach is fraught with problems, as described in the following section.

2.3 Definition of Computer System Model Parts

A standard CSM comprises three main components, as shown in Figure 1: Standard computer system model. These are also considered the CSM abstraction levels:

•SA - System Architecture

•CA - Computer Architecture

•IA - Implementation Architecture

2.3.1 System Architecture (SA) - abstraction levels

The SA component in Figure 6: CSM - system architecture component, has the generic functional architecture represented in Figure 7: Generic architecture of an operating system. A breakdown of the primary components that make up the system architecture follows.

Application

In this application, a software application is a subclass of computer software that employs the capabilities of a computer directly to a task the user wishes to perform.

OS

In this application, an operating system (shown in Figure 7: Generic architecture of an operating system), is the system software responsible for the direct control and management of hardware and basic system operations, as well as running application software.

Compiler

In this application, compilers translate source code written in a high-level language to object code or machine language, which the computer executes. Thus, compilers must understand the contract rules of the ISA; the sequence diagram in Figure 8: Compiler relationship with the ISA provides a general view of how they interact.

Firmware

In this application, firmware is software that is embedded in a hardware device. The BIOS firmware is an integral part of the system hardware. It provides low-level communication, operation and configuration to the hardware of a system.

2.3.2 Computer Architecture (CA) - abstraction levels

The CA component in Figure 9: CSM - computer architecture component, has the generic functional architecture represented in Figure 10: Generic CPU architecture. A breakdown of the computer architecture components follows.

Instruction Set Architecture - ISA

In this application, the Instruction Set Architecture (ISA) forms the contract between computer hardware and software. The ISA is a specification describing what a computer's CPU should be able to understand and execute. The ISA describes the programmer's model of the microprocessor to the system architect, including the native datatypes, instructions, registers, addressing modes, memory architecture, interrupt and fault system, and external I/O.

Instruction Set Processor - ISP

In this application, the Instruction Set Processor (ISP) implements the instruction set together with the execution model of the processor. Thus, the ISP understands how instructions are encoded and the details of operand addressing and register usage. The ISP forms the control subcomponent of the CPU model described in Figure 10: Generic CPU architecture; the control unit is also known as the instruction engine.

I/O

In this application, Input and Output (I/O) is the collection of interfaces that different functional units of an information processing system use to communicate with each other, or to the information sent through those interfaces.

Datapath - Execution Engine

In this application, Datapath (described in Figure 10: Generic CPU architecture) will be referred to as the execution engine. An execution engine is a part of a CPU that performs the operations and calculations called for by the program. It often has its own control unit, registers, and other electronics, such as an arithmetic and logic unit or floating-point unit, or some smaller, more specific component.

CPU Digital Design

In this application, CPU digital design is the abstraction level that has two clear parts: its functional and technical design. Functional design is an abstraction level that organises and distributes the various functional units into a logical design structure, or architect's vision.

2.3.3 Implementation Architecture (IA) — abstraction levels

The IA component in Figure 11: CSM - implementation architecture component, has the generic functional architecture represented in Figure 12: Generic organisation of the SA and CA parts. A breakdown of the implementation architecture components follows.

Digital Design

In this application, the technical design is a design proposal that carries the functional design of the CPU digital design model into a computer system.

Circuit Design

In this application, circuit design is the collection of digital circuits formed in integrated circuits, interconnected in such a fashion that the whole performs the work of some computer system.

Layout Design

In this application, layout design is the physical organisation and implementation of the parts from some technical design. For example, a PC system model design has an implementation model that organises key modules at the system level into some architecture that best fits the requirements and design.

Organisation

Figure 12: Generic organisation of the SA and CA parts shows a standard organisation of the hardware and software that make a PC model. Implementation architecture describes these interface organisations, their relationships, and the physical devices. In this application, organisation means the machine, the software, the interfaces, and their interactions.

2.4 History Summary

As stipulated at the beginning of this section, the objective is to show the trend of CSM development through the development of its subcomponents. CSM history has been a design pattern based on:

•A focus on improving the numerical computational ability of a CPU

•A focus on improving temporal instruction performance

•A focus on developing better ISAs for the CPU architecture

•Using Turing's method of modelling

•How best to get the CA to react to the instruction and data streams

•The computer architecture being the core of a CSM, impacting the remaining CSM components to make them a CPU-driven solution

Computer systems developed from the need to perform ever increasing hard-working, repetitive and complex tasks in a fast and reliable way. Early computer systems were improvised, rather than being developed to some methodology. The adoption of Alan Turing's method changed that and shaped most designers' way of viewing the computer system.

The following section highlights this problem domain and identifies the current trend of overcoming these problems.

3. Define the standard Computer System Model (CSM)

The first problem with the trend of CSM development is that, though each subcomponent has recognised business practices, they are dissonant from each other. There is no common framework. For example, Intel makes CISC/RISC chips and defines ISAs with little help or input from system architects. Microsoft makes operating systems; they only consult Intel on the ISA and then go their own way. No common or standard framework exists that can integrate these architecture disciplines. Identifying this fact will facilitate understanding of this application.

CPU design influences the remaining CSM components, making computer system development ISA-centric. The first analysis of the problem domain is therefore the computer architecture.

3.1 Computer Architecture issues

Figure 9: CSM - computer architecture component describes the various components that form the computer architecture. Until IBM, this area had no standard methodology even within a company; thus, one ISA would be totally different from, and incompatible with, another.

IBM's innovation was to develop a reference instruction set for all their current and future computer systems. To lessen complexity from a developer's perspective, they designed the ISA to execute many complex actions from a single instruction. Thus, the computers would have to make fewer instruction fetches from main memory, enabling this store to be smaller than in other implementations. This form of Complex Instruction Set Computing (CISC) architecture became the main computer type.

IBM's reference instruction set had many innovations; a notable one is designing the ISA for data processing, rather than just mathematical calculation. The ISA design manipulated not only simple integer numbers, but also text, scientific floating point, and the decimal arithmetic needed by accounting systems. IBM's successful design influenced others to follow their paradigm.

The CISC ISA is an orthogonal instruction set, which means the ISA has the property that any instruction can use any register in any addressing mode. In addition, CISC appeared to address the issue of the growing gap between the machine language of the CPU and the need for high-level, human-oriented languages.

The problem is the illusion that CISC is the right solution, based on its small microcode, high-speed memory access, and the orthogonal organisation of instructions. In addition, compiler writers could not fully capitalise on the ISA. The result is that real-life instruction usage exercised only a limited set of the available instructions. Thus, CISC did not bridge the 'semantic gap'. CISC compilers did highlight the performance gap between high-level languages and machine languages. In addition, the complexity of grammars meant optimisation could work against the programmer's intent.

In Figure 13: Trend of CPU development, the CPU design on the left, labelled Simple CPU, describes the CISC CPU architecture. Though CISC development evolved from 4 to 32 bits wide, the complexity of the core engines highlighted the next problem with this approach.

An apparent solution to CISC limitations came from research carried out by IBM and UC Berkeley. They discovered the compiler only used a small subset of the available CISC instructions; thus, much of the power of the CPU was under-utilised in real-world use. The conclusion: make the computer simpler and less orthogonal. The result is the Reduced Instruction Set Computer (RISC). RISCs had larger numbers of registers, accessed by simpler instructions. The resultant model is a simple CPU core running at high speed, supporting exactly the sorts of operations the compilers were using anyway.

RISC's high speed highlighted the need to improve instruction and data feeds. Techniques such as instruction pipelining, coupled with executing multiple instructions in parallel on separate execution engines, had a great impact. Figure 13: Trend of CPU development describes this superscalar approach (middle architecture). The simplest superscalar arrangement is to use one bus manager or arbiter to manage the memory interface, an instruction engine to manage the feed of instructions, and two execution engines to perform calculations.

Superscalar CPUs highlighted the need to 'predict' branches, as branches imposed a severe impact and overhead on the CPU instruction engine. For example, in the centre of Figure 13: Trend of CPU development is a simple superscalar CPU. In a use case, a running program needs to add two numbers and branch to a different code segment if the result is bigger than a third number. In such a use case, when the branch operation is issued to the second execution engine for processing, the active execution engine must wait for the result of that addition. Thus, it runs no faster than if there were only one execution engine.

All pipelined processors like superscalar CPUs perform some form of branch prediction, because they all must guess the address of the next instruction to fetch before performing the current instruction. A branch predictor is the part of a CPU's instruction engine that decides whether a branch in the instruction flow is likely. Branch predictors are crucial in superscalar processors in achieving their high performance. In theory, they allow processors to fetch and execute instructions without waiting for a branch to be resolved.
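One common concrete predictor scheme (offered here as an illustrative sketch, not something specified by this application) is a table of two-bit saturating counters indexed by the branch address: counter values 0-1 predict not-taken, 2-3 predict taken, and each resolved branch nudges its counter toward the observed outcome.

```python
# Sketch of a 2-bit saturating-counter branch predictor (illustrative).
# Counter states: 0-1 predict not-taken, 2-3 predict taken.

class TwoBitPredictor:
    def __init__(self, table_bits=4):
        self.mask = (1 << table_bits) - 1
        self.table = [2] * (1 << table_bits)  # start in the weakly-taken state

    def predict(self, pc):
        """Predict the branch at address pc: True = taken."""
        return self.table[pc & self.mask] >= 2

    def update(self, pc, taken):
        """After the branch resolves, nudge the counter toward the outcome."""
        i = pc & self.mask
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

# A loop branch taken 9 times then falling through is mispredicted only once.
p = TwoBitPredictor()
hits = 0
for actual in [True] * 9 + [False]:
    hits += (p.predict(0x40) == actual)
    p.update(0x40, actual)
print(f"{hits}/10 correct")  # -> 9/10 correct
```

The two-bit hysteresis is the design point: a single anomalous outcome (such as a loop exit) does not immediately flip the prediction for the next visit to the loop.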

The problem is that RISC promised a lot, and every attempt to get RISC CPUs to perform highlighted another flaw or the true complexity of the problem. Large caches holding data and instructions for the execution engine needed ever more elaborate predictor circuitry. The instruction pipeline idea suggested it would increase instruction and data throughput by a factor equal to the number of stages applied. In reality, the time taken by the extra logic added to store the intermediate values resulted in diminishing returns. In addition, this extra logic meant an increased instruction engine size, which influenced the power consumption of the device.

Another limitation is instruction-level parallelism (ILP), which means the number of non-dependent instructions in the program code. Observable data showed that specific applications, notably graphics, did benefit from ILP. However, general-purpose applications that do not exhibit ILP showed no benefit from it.

In an attempt to reduce the complexity of the instruction engine, and realising the limits of trying to predict branches, architects started to research the execution model. The superscalar model highlighted the drawback of multiple execution engines, which is operand register dependency. To lessen these dependencies, an execution model called Out-of-Order arrived. This means executing the instructions in machine order rather than program order. The instruction results, completed out of order, are reordered to the program order by the CPU, enabling the program to restart after an exception.

Use Figure 14: Execution model and the use cases below to understand the Out-of-Order model.

In-Order use case - a normal execution engine with a single execution unit

Step 01. Instruction fetch

Step 02. If input operands are available, the instruction is dispatched to the appropriate execution engine else the processor stalls until they are available

Step 03. The instruction is executed by the execution engine

Step 04. The execution engine writes the results to the register file

Out-of-Order use case - used with execution engines that have multiple execution units

Step 05. Instruction fetch

Step 06. Dispatch instruction to an instruction queue

Step 07. The instruction waits in the queue until its input operands are available. The instruction may then leave the queue before earlier, older instructions.

Step 08. The instruction is issued for execution to the appropriate execution unit

Step 09. The results are queued

Step 10. When all older results are stored to the register file, then this result is written to the register file. This is called the graduation or retire stage

As the use cases highlight, one of the differences created by the new execution model is the use of queues, which allows the dispatch step to be decoupled from the issue step, and the graduation stage to be decoupled from the execute stage. In the earlier In-Order processors, these stages operated in a lock-step, pipelined fashion.
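Steps 05-10 above can be sketched as a small issue-queue simulation. This is an illustrative sketch only: the three-instruction program, the register names, and the latency figures are assumptions, and graduation/retirement is omitted to keep the example short.

```python
# Sketch of Out-of-Order issue (Steps 05-10): instructions wait in a queue
# and issue as soon as their input operands are ready, not in program order.

def ooo_schedule(program, latency):
    """Return {name: issue cycle}. One instruction issues per cycle; a
    destination register becomes available `latency` cycles after issue.
    Registers never written (e.g. r3 below) are ready from cycle 0."""
    queue = list(program)            # Step 06: dispatch everything to the queue
    ready_at = {}                    # register -> cycle its value is available
    issued = {}
    cycle = 0
    while queue:
        for inst in queue:           # scan the queue in program order (Step 07)
            name, dest, srcs = inst
            if all(ready_at.get(r, 0) <= cycle for r in srcs):
                issued[name] = cycle             # Step 08: issue to a unit
                ready_at[dest] = cycle + latency[name]
                queue.remove(inst)
                break
        cycle += 1
    return issued

# Program order: i1 (slow load), i2 (needs i1's result), i3 (independent).
prog = [("i1", "r1", []),        # load r1   - 3-cycle latency
        ("i2", "r2", ["r1"]),    # add  r2 <- r1
        ("i3", "r4", ["r3"])]    # add  r4 <- r3 (r3 ready from the start)
LAT = {"i1": 3, "i2": 1, "i3": 1}

print(ooo_schedule(prog, LAT))   # the younger i3 issues before the older i2
```

The output shows the key property: i3 leaves the queue while i2 is still waiting on the load, exactly the decoupling of dispatch from issue described above.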

Out-of-Order circuitry is both large and complex, making the rate of development in the instruction engine far greater than in the execution engine. Nevertheless, observable data showed these CPUs still did not add benefit in the general-purpose application market.

Research into evaluation models influenced CPU performance, but again only benefited specific types of applications. This is because a solution designed by computer architects for system architects for a specific problem will only solve that specific problem.

The VLIW CPU architecture on the right in Figure 13: Trend of CPU development is the result of changing the execution model and evaluation chain of a CPU. Problems of superscalar architectures, such as instruction scheduling logic that is too simple and the ever-increasing complexity of the instruction engine needed to predict branches, resulted in the Very Long Instruction Word (VLIW) architecture, a significant innovation which displaced the coordination of the execution units from complex instruction circuitry into the compiler. This act of statically scheduling the instructions in the compiler has many practical advantages over doing it in the CPU hardware. The compiling occurs once on the developer's machine, and the control logic is then 'canned' into the final program. This means the VLIW CPU consumes no transistors and no power for this scheduling, and therefore it is free and generates no heat.

In VLIW CPU's, the compiler uses profile information to guess the direction of a branch. This allows it to move and preschedule operations speculatively before the branch is taken, favouring the most likely path it expects through the branch. If the branch goes the unexpected way, the compiler has already generated compensation code to discard speculative results to preserve program semantics.

Architects at Transmeta took the interesting step of placing a compiler in the central processing unit, and making the compiler translate from a reference byte code. Adapting IBM's idea, in Transmeta's case they chose the popular x86 instruction set as their reference ISA, translated to an internal VLIW instruction set. This approach combines the hardware simplicity, low power and speed of VLIW RISC with the compact main memory system and software backward compatibility provided by popular CISC.

Intel architects developed an execution model they called Explicitly Parallel Instruction Computing (EPIC). This innovation supposedly provides the VLIW advantage of increased instruction throughput. However, EPIC avoids some of the issues of scaling and complexity by explicitly providing, in each 'bundle' of instructions, information about their dependencies.

3.2 Specialised CPUs and Configurable Logic

The trends described so far are CPU designs based on the temporal execution model. There are three primary execution models in CPU design: Temporal, Spatial and Mixed. The following subtopics describe how the simple function y = Ax² + Bx + C is handled by the three primary execution models.
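To make the contrast concrete before the subtopics, the polynomial can be evaluated as a temporal sequence of single operations on one execution unit, or conceptually as a spatial datapath whose multiplier and adder stages work on many data items at once. This sketch is illustrative only; the function names and step breakdown are assumptions, not from the application.

```python
# y = A*x**2 + B*x + C evaluated under two execution models (illustrative).

def temporal_eval(A, B, C, x):
    """Temporal model: one operation per step on a single execution unit."""
    t1 = x * x        # step 1: multiply
    t2 = A * t1       # step 2: multiply
    t3 = B * x        # step 3: multiply
    t4 = t2 + t3      # step 4: add
    return t4 + C     # step 5: add

def spatial_eval(A, B, C, xs):
    """Spatial model: a fixed datapath; conceptually the multiplier bank and
    the adder stage each work on different data items at the same time."""
    stage1 = [(A * x * x, B * x) for x in xs]   # parallel multiplier bank
    return [p + q + C for p, q in stage1]       # adder stage

print(temporal_eval(2, 3, 1, 4))        # -> 45
print(spatial_eval(2, 3, 1, [4, 5]))    # -> [45, 66]
```

The temporal version spends five sequential steps per value; the spatial version fixes the datapath to this one function but streams values through it, which is the trade-off the following subtopics describe.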

3.2.1 Spatial Execution Model

Spatial solutions have a fixed and specific application nature. Spatial computing relies on working a specific problem domain such as encryption, imaging or audio. Application Specific Integrated Circuits (ASICs) are of the spatial type. The ASIC approach is to decide the required functional units, in what order they should be arranged (the data-path), and the interfaces they should present. After completing the design, the result is a set of functional units grouped to form a service. Thus, a spatial solution has a greater focus on the execution engine over the instruction engine, resulting in a more powerful data processor.

Architects that tried to make the shift from general-purpose CPUs to application-centric CPUs took another development track. Developing application-specific CPUs, called ASICs, highlighted the advantages of such CPUs: most of the available resource is centred on the application.

Spatial solutions need little application software and thus are more like data stream processors. Their advantages and disadvantages are summarised below.

The advantages of the spatial execution model are:

•The fastest application processor type over mixed and temporal solutions

•More efficient in data processing

•Spatial execution lends itself to parallelism

•High performance

•Does not need complex instruction engine

•Reliable data processing

•Application centric

The disadvantages of the spatial execution model are:

•Does not scale well to applications differing from its original functional design

•Complex and expensive production costs

•Rigid nature means a short shelf-life

•Hard to resolve any undesirable features or upgrade

3.2.2 Temporal Execution Model

By far the most flexible solution with current microprocessors is the temporal approach. Nevertheless, as described in the Computer Architecture issues section, the temporal execution model needs a large instruction engine.

The advantages of the temporal execution model are:

•Scales well to large and differing applications

•Financially more attractive than spatial

•Better yield and design process over a spatial solution

The disadvantages of the temporal execution model are:

•Efficiency penalties

•High Occupancy Quantisation Error

•Fixed ISA generating a code support problem

•Complex support circuitry

3.2.3 Mixed Execution Model

CPU designs that employ mixed execution models use configurable logic such as the FPGA. An FPGA solution offers greater flexibility over the ASIC, since in effect one can now configure functions in the hardware on the fly. Reconfigurable logic enables the adaptation of the functional units: the configuration code for a functional unit can be updated, an advantage over the ASIC. Both ASICs and FPGAs need little or no supplementary logic such as decoders, sequencers, predictors, queuing registers, and other issuing mechanisms.

Architects realised that utilising the best features of the spatial and temporal models would challenge the current crop of general-purpose CPUs. A move to a reconfigurable-logic development type that combines configurable logic with a general-purpose CPU has attracted many architects, as it promises a flexible, application-centric model. In this scheme, a special computer language compiles fast-running subroutines into a bit-mask used to configure the logic. One distinct advantage of this shift, apart from moving to an application-centric model, is independence from mainstream, costly operating systems.

One reconfigurable solution is to direct slower or less-critical parts of the program to run on a general-purpose CPU. This process enables devices such as software radios, using digital signal processing to perform functions usually performed by analogue electronics. Architects at Quicksilver Technologies took the idea further by developing a heterogeneous architecture. Their architecture has an arrangement of process elements that perform specific functions such as text and arithmetic. Like this application, they and others adopted an algorithmic approach instead of a static ISA. Quicksilver adopts a hierarchical cluster architecture and interconnect, which keeps power consumption low.

PicoCHIP is another design based on a heterogeneous architecture, but its architecture is optimised for a single application: wireless processing. FPGAs are another technology that blurs the boundaries between hardware and software. However, even so, these are all based on the standard CSM, which means no common framework and no standard reference model.

The advantages of the mixed execution model are:

•Efficient use of die space

•Faster than the temporal execution model

•Excellent development turnaround

•Greater flexibility and adaptability

The disadvantages of the mixed execution model are:

•Large datasets, vectoring and reuse are not easy features of FPGAs

•If a major context switch is required and the configuration is large, the FPGA needs to be reconfigured, a process which can take some time

•Resource management problems such as interconnect issues, RAM-to-configuration bits, and efficient non-fragmented dataflow

3.2.4 What does all this mean?

The CISC, RISC, Superscalar and VLIW CPUs described use the temporal execution model. All the topics discussing branch prediction, evaluation and complex instruction engines ended with a problem affecting execution efficiency. This efficiency relationship between the execution engine and the clock rate is called Occupancy Quantisation Error (OQE). In this application, OQE means the error induced by the execution model in the instruction and data stream with respect to the clock rate.

Figure 18: Occupancy Quantisation Error (OQE) shows that, for a given clock, the spatial solution has good clock-to-function utilisation. However, the temporal solution introduces 'gaps', named exchanges. These gaps are the result of all the instruction engine's prediction, evaluation, queuing and other control circuitry trying to preserve the stream feed.

OQE is the big issue with temporal-based CPUs. Computational Power Efficiency (CPE) is another indicator of the efficiency of a CPU. In this application, CPE means a measurement of work carried out within a given period or time-frame against the amount of power consumed.

The average CPE for each model is:

•Temporal

Microprocessors (Pentium, Athlon) & DSPs = 5% CPE (simulating the hardware)

•Spatial

ASICs = 60-70% CPE

FPGAs = 20-45% CPE

Knowing the CPE of a design enables determining the processor's OQE. It is the goal of the CLCSM to have a low OQE by having a high CPE.
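Using the figures above, the CPE/OQE relationship can be sketched as a simple calculation. The formula, the power figures, and the treatment of OQE as the unused complement of CPE are all illustrative assumptions of this sketch; the application itself defines CPE only informally.

```python
# Illustrative sketch (assumptions, not formulas from the application):
# CPE as the fraction of peak work-per-watt achieved in a time-frame,
# with OQE taken here as the unused complement of that fraction.

def cpe(ops_done, peak_ops, watts_used, peak_watts):
    """Achieved operations-per-watt divided by the theoretical peak."""
    return (ops_done / watts_used) / (peak_ops / peak_watts)

# Average CPE figures quoted in the text for each execution model.
designs = {"Microprocessor/DSP (temporal)": 0.05,
           "FPGA (mixed)": 0.30,
           "ASIC (spatial)": 0.65}

for name, eff in designs.items():
    print(f"{name}: CPE = {eff:.0%}, assumed OQE = {1 - eff:.0%}")

# e.g. a device completing 6.5e9 of a possible 1e10 ops at its full
# power budget achieves a CPE of 0.65, matching the ASIC average.
print(cpe(6.5e9, 1e10, 10.0, 10.0))  # -> 0.65
```

Under this reading, the CLCSM goal stated above (low OQE via high CPE) amounts to pushing the achieved work-per-watt toward the spatial end of the table.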

3.3 System Architecture issues

The first issue is compilers (shown in Figure 6: CSM - system architecture component); they create the contract between the system program, called the operating system, and the CPU's ISA. This means they must understand the ISA specification and the operating system model. VLIW showed the dissonant state between system architects and computer architects, as VLIW machines only show benefit in specific applications.

Compilers are about rules; perfect rules require knowing all use cases or creating the perfect algorithm that suits all cases. Since both are impossible, it is time to revisit the compiler concept and the way we create programs.

The difficult part of CPU development is creating a compiler that utilises the idiosyncrasies of the CPU instruction and execution engines. However, the compiler must also understand the rules of a programming language, and map the two to create efficient code. Given that system architects and computer architects operate on different paradigms, current practices will prevent a solution.

The operating system directs the operation of the computer system and the running of applications. Given the dissonant gap defined above, it is not surprising that it is hard for any system program to directly control the hardware when it is not finely tuned to that hardware.

All the innovations described show how much the computer architecture drives system architects and implementation architects. The CPU ISA drives the compiler; the language drives the compiler and the system architect's program model.

A fundamental dissonance issue is threads. Superscalar and VLIW machines still do not naturally cope with threads, even though they have multiple execution units to help the execution engine process an application task. Intel tried to solve this by creating hyper-threading in the Pentium. In a review of the technologies with impact over the last ten years, Intel's hyper-threading model came in as one of the top ten disasters in the computer trade. The reason cited was that, since its release, few software applications have been able to capitalise on the technology. This is evidence that system architects' program models do not translate well into the computer architecture world.

System architects' program models make use of stacks, plenty of registers and large memory, and need efficient movement from one thread of processing to another. As a temporal solution, this means the CPU will not be efficient, having a high OQE and low CPE. If an FPGA solution is used, then huge memory is required, which is beyond acceptable limits for a practical implementation architecture. An ASIC solution will not fare any better, since it can only be used for one application, not any.

System architects started the move to metadata. To system architects, metadata enables an application to translate another machine's data structures into structures it understands. The technology XML is about platform interoperability. XML's drawbacks are that it does not describe the semantics, the pragmatics, or the various rules the usage of an application will understand. XML is about how to validate and use data instances sent between two end-points.

The operating system has become the centric technology for current development, initially because the ISA was so unwieldy. An operating system tries to make application development easy. However, the operating system adds much bloat code for most applications, which impacts not only performance but also stability.

The problem with this model comes from:

•So far no general software architecture has made best use of the computer architecture underneath.

•The software perspective does not have a smooth abstraction exchange to the computer architecture and the underlying processor interfaces or elements.

•Bugs and security flaws in the System Architecture arise because of a lack of empathy with the computer architecture.

•The methods to enable developers to create business using the Computer System Model can be complex and are at the software abstraction level.

•Any development in the System Architecture abstraction level has a timeline of nearly 5 years or the production life of the CPU.

•Compiler rules define a contract between the programming language and the ISA. The compiler is mainly concerned only with the ISA specification of the CPU and knowledge of the CPU design's instruction engine. This means that during its optimisation the application architect's intent can be overridden by the rules of the compiler, introducing bugs or restrictions to the application.

3.4 Implementation architecture issues

Implementation architecture concerns itself with the layout of interfaces and the organisation of the system architecture, computer architecture or the computer system abstraction level. The corresponding figure shows the two clear areas of responsibility; what it hides is the depth and complexity of those areas.

Implementation architecture is more about 'how' to implement the CPU design and the system components. The problem with the implementation architecture is that the organisation model eventually has many forces acting on it. The CPU will have been designed with a memory architecture which is either von Neumann or Harvard, shown in Figure 19: von Neumann architecture and Figure 20: Harvard architecture respectively.

As stipulated, the implementation architecture is designed to make best use of the components' interfaces with respect to the product. It is concerned with how these components can best be implemented. Figure 21: Organisation of the physical device that makes a PC describes the typical interfaces, interconnectivity and relationships they deal with.

Given the example in Figure 21, the implementation architecture has not fundamentally changed, and once implemented it has no capacity to change. The issues with the current implementation architecture are:

•The core of intelligence is in one place

•The basic core model for a system has not changed from Harvard or von Neumann

•The organisation is static and cannot dynamically adapt from application to application.

•The restrictions are still prevalent in multi-processor designs.

The strange thing about system architecture is that, no matter at what scale or abstraction level, the engrained practices appear to be the biggest limiting factor. Circuit design might employ new technology and increase the number of transistors, but the organisation for temporal or FPGA designs has not changed.

Analysis of Figure 13: Trend of CPU development, Figure 21: Organisation of the physical device that makes a PC and Figure 22: Configurable logic organisation designs shows that in all domains the designs are similar. A CPU has an instruction engine and an execution engine. Even when there are multiple execution units within the execution engine, the only difference is in the instruction dispatch logic. PC, workstation, server and supercomputer architectures are similar. The number of processors may increase and the interconnect may vary in bandwidth and number, but they are still fundamentally the same.

Software patterns do not vary much either; new software models increase the number of interfaces and the methods of interaction, but they are still detached from the core CPU architectures.

A system model is based on either enhanced Harvard or von Neumann architecture; though interesting architectures like NUMA are under development, the processing core, memory block and I/O structure are still similar.

3.5 Summary of the problem domain

This section has described the trend of CPU development as driven by, or centred on, the CPU's ISA. A summary of the problem domain and the motivation for the application now follows.

Computer System Model issues

•No framework providing a common architecture and interfaces between all the subcomponents of the CSM

•Driven by the computer architecture component and not application centric, which means that developers cannot just focus on their invention or vision

•No modelling technique for the CSM; there are disparate methodologies for system architecture, computer architecture and implementation architecture

System architecture issues

•Operating systems' multi-threading technology is not directly supported by the CPU, creating an immediate discord between the system software and the CPU

•Application-centric CPUs mean the developer must have intimate knowledge of digital design, resulting in high-risk, complex CPU development

•Compilers, though they simplify certain CPU overheads, are not good mediators

•Compilers' contract between software and the CPU ISA uses rules that get lost in the translation of a high-level language to the machine

•Compilers do not fully capitalise on the CPU components, nor can they cater for the scenarios that break the stream feeds

•Compiler rules are tied to the current ISA; every new CPU ISA requires a new compiler. As architectures become more complex, getting compilers to be efficient in all use cases is as difficult as developing the perfect predictor

•Firmware easily becomes outdated and is too rigid for agile or adaptive machines

•Software languages are fine for human development but they get lost in translation with execution models, compiler rules and algorithm deduction.

•Operating systems have become so dominant that the shift from CPU-ISA centric to software-API centric, though slightly better, is still not application centric.

•True lack of harmony between hardware and software means efficiency problems with the system

Computer architecture issues

•Computer architecture relies on rigid ISA solutions

•General-purpose CPUs require complex instruction engine architectures that demand more CPU overhead than the data to be processed

•Each solution to improve general-purpose performance has increased CPU complexity and decreased efficiency

•No real innovation in the CPUs execution engine

•CPU design does not take into account software developer requirements, only the perceived functionality

•CPUs are not multi-task oriented

•CPU architectures are lop-sided: approximately 95% of a modern CPU is instruction engine and 5% execution engine, resulting in low computational efficiency

To summarise, current computer system development suffers from:

•Coordination of many levels of abstraction

•A rapidly changing set of forces

•Design and implementation costs affecting business ROI

4. Objects and Disclosure of the application

The present invention is a radical computer system model that addresses the shortfalls described in the previous sections:

•Computer System Model

•No common framework model that binds the architectures

•Be adaptive and application centric, without being application specific

•Provide a security model to meet the demands of modern computing

•An application provisioning model

•System Architecture

•Lack of common language for all to follow

•Overcome the limitations of compiler methods

•Do away with operating system dependency

•Provide an application centric environment for any programmer

•Computer architecture

•Overcome the application specifics of spatial execution models

•Overcome the OQE and CPE of advanced temporal execution models

•Fixed Computer System Models:

•Disparate method and perspective of the architectures to a computer solution

•Frozen-In-Time scenario the developers are entrenched in

•Poor application performance because of generic Temporal programming

•Dependencies on non-business technologies like the Operating System

•Configurable Computing CSM

•Requires skilled and experienced hardware engineers

•No software tools for the system architect

•No virtualisation model to make use of Temporal programming as well as spatial programming

•No common language for ease of up-take for developers

•Sporadic control flow dependencies induced through branch predictions, the evaluation model and the instruction engines

•Unpredictable Processor Interfaces

•Static and simple interfaces mean bottleneck problems and instruction-sequencing problems, which impose programming constraints within a distributed system

•Fixed routing and data & I/O paths

•Overcome the reactive nature of CPUs

4.1 Brief description of the drawings used within this application.

FIGURE 1: STANDARD COMPUTER SYSTEM MODEL

FIGURE 2: CONCEPTUAL LOGIC - COMPUTER SYSTEM MODEL

FIGURE 3: COMPONENT DIAGRAM OF A TURING MACHINE

FIGURE 4: SEQUENCE DIAGRAM OF THE TURING MACHINE

FIGURE 5: STATE DIAGRAM OF THE TURING MACHINE

FIGURE 6: CSM - SYSTEM ARCHITECTURE COMPONENT

FIGURE 7: GENERIC ARCHITECTURE OF AN OPERATING SYSTEM

FIGURE 8: COMPILER RELATIONSHIP WITH THE ISA

FIGURE 9: CSM - COMPUTER ARCHITECTURE COMPONENT

FIGURE 10: GENERIC CPU ARCHITECTURE

FIGURE 11: CSM - IMPLEMENTATION ARCHITECTURE COMPONENT

FIGURE 12: GENERIC ORGANISATION OF THE SA AND CA PARTS

FIGURE 13: TREND OF CPU DEVELOPMENT

FIGURE 14: EXECUTION MODEL

FIGURE 15: SPATIAL EXECUTION MODEL OF Y = Ax² + Bx + C

FIGURE 16: TEMPORAL EXECUTION MODEL OF Y = Ax² + Bx + C

FIGURE 17: MIXED EXECUTION MODEL OF Y = Ax² + Bx + C

FIGURE 18: OCCUPANCY QUANTISATION ERROR (OQE)

FIGURE 19: VON NEUMANN ARCHITECTURE

FIGURE 20: HARVARD ARCHITECTURE

FIGURE 21: ORGANISATION OF THE PHYSICAL DEVICE THAT MAKES A PC

FIGURE 22: CONFIGURABLE LOGIC ORGANISATION DESIGNS

FIGURE 23: CL-CSM MODEL COMPONENTS

FIGURE 24: META-MODEL FUNCTIONAL BLOCKS

FIGURE 25: CL-CSM SYSTEM ARCHITECTURE COMPONENT

FIGURE 26: APPLICATION DEVELOPMENT

FIGURE 27: EXAMPLE META-ASSEMBLY STRUCTURE

FIGURE 28: TRIMOS META-OPERATING SYSTEM

FIGURE 29: EXAMPLE OF THE SECURITY SEQUENCE CASE

FIGURE 30: CL-CSM COMPUTER ARCHITECTURE COMPONENT

FIGURE 31: ALGORITHM ARRANGEMENTS IN CL-CLBS

FIGURE 32: CL-CSM IMPLEMENTATION ARCHITECTURE COMPONENT

FIGURE 33: EXAMPLE ORGANISATION OF TRIUMVIRATE SYSTEM

FIGURE 34: EXAMPLE FABRIC ORGANISATION

FIGURE 35: CL-CSM ORGANISED INTO A TRIUMVIRATE PROCESSOR

FIGURE 36: CL-CSM TIMING MODEL PATTERN

FIGURE 37: EXAMPLE METACA FUNCTIONAL ORGANISATION OF A PC

FIGURE 38: EXAMPLE METAIA ORGANISATION OF A PC

FIGURE 39: NETWORK PROCESSOR MODEL

5. Details of the application

At the core of this application is a meta-model constantly corroborated by key configurable and programmable components of the system.

The details of this application will follow the same format as the previous sections. The format of the subtopics, which describe the details of CL-CSM in a top-down manner, is:

•What it is, described through its functional features

•An example implementation describing how it would work

5.1 Application functional features

The best way to describe what this application is would be to analyse the CL-CSM described in Figure 23: CL-CSM model components. The step change to the computer system model is the existence of a meta-model that integrates all the subcomponents.

A way to view this and its uniqueness is to compare the standard CSM with the CL-CSM. A standard CSM is a functional requirements model of what makes a computer system, without specifying a technical design or its implementation. For example, in Figure 1: Standard computer system model, the ISA component is a functional specification, not a specific ISA such as x86, MIPS or SPARC. In contrast, the Dynamic-ISA in Figure 2: Conceptual Logic - Computer System Model is a combination of a pre-configured instance extended by the metaCA schema. Thus, in CL-CSM, components are not just functional specifications; their meta-information has a technical effect on the functional component.

A CSM defines what a computer system is by its functional specification, and the functional distribution into a set of components, which are in turn grouped by similar roles.

Unlike the established XML metadata technologies, the CL meta-model used in this application defines more than data structures, as shown in Figure 27: Example meta-assembly structure. CL-CSM is a comprehensive set of ontologies that self-describe the system. CL-CSM describes how to build instances, route information, implement a mesh, organise tasks and manage all its resources. In addition, it can encode its current arrangement into a meta-model instance and template that can be cloned to a remote endpoint of similar type. Other differences are that XML is text based whereas the CL meta-model is machine-native code, and XML is used by the application component of the CSM whereas CL metadata is used throughout the computer system.
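The self-describing, cloneable behaviour described above can be illustrated with a small sketch. All names here (MetaModelInstance, encode_arrangement, clone_from, the example ontology contents) are hypothetical illustrations, not vocabulary defined by the application, and a plain dictionary stands in for the machine-native encoding:

```python
# Illustrative sketch only: a self-describing model instance that can
# encode its current arrangement and be cloned to an endpoint of the
# same type, as the CL meta-model is described as doing. All names are
# hypothetical, not taken from the application.
from dataclasses import dataclass, field

@dataclass
class MetaModelInstance:
    kind: str                      # endpoint type, e.g. "triumvirate-pc"
    ontologies: dict = field(default_factory=dict)

    def encode_arrangement(self) -> dict:
        # Encode the current arrangement as a template (a plain dict
        # here; the application describes machine-native code instead).
        return {"kind": self.kind, "ontologies": dict(self.ontologies)}

    @classmethod
    def clone_from(cls, template: dict, target_kind: str) -> "MetaModelInstance":
        # Cloning is only meaningful to an endpoint of similar type.
        if template["kind"] != target_kind:
            raise ValueError("endpoint type mismatch")
        return cls(kind=target_kind, ontologies=dict(template["ontologies"]))

source = MetaModelInstance("triumvirate-pc", {"routing": "mesh", "tasks": "stream"})
template = source.encode_arrangement()
remote = MetaModelInstance.clone_from(template, "triumvirate-pc")
```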

The following section breaks down the CSM models, providing a clear distinction between the two CSMs. Since the standard CSM is used to describe all current CPUs, applications and computing devices, a logical conclusion is that the CL-CSM is likely to be unique among them.

5.1.1 Meta-model

The meta-model is a distinct feature over a standard CSM; in Figure 24: Meta-model functional blocks, the meta-model is broken into its primary hierarchical architecture. The architecture has a metaCPU: unlike XML, which is parsed and processed by the CPU, in CL-CSM the meta-model processes the meta-stream with an arrangement of processing elements that understand the underlying meta-semiotics.

The functional distribution has a core CL-CSM metaCPU that manages and organises the meta-stream. The distribution of standard CSM functionality is reflected in the three meta-processing elements named:

•metaSA - CSM equivalent is system architecture

•metaCA - CSM equivalent is computer architecture

•metaIA - CSM equivalent is implementation architecture

A breakdown of the meta-model is in the following sections. To summarise, at the core of CL-CSM is a self-describing meta-model. CL-CSM uses meta-processing elements to sustain a meta-stream. The arrangement of meta-processing elements reflects the standard CSM. Unlike the CSM model, this meta-stream provides enough information on the system to enable a proactive and reactive deterministic system.

5.1.2 MetaSA

It is the metaSA agent that is concerned with processing the system architecture domain of the meta-model. The role of the metaSA is to run the applications and the metaware object system.

The point of any computer system is to run applications. How well it runs applications is down to the CPU capabilities, the framework for developing the application and the abilities of the operating system. All computer systems run two types of applications, each with clear responsibilities and roles. The primary application is called the system program, normally called the operating system; its role is to direct the hardware to the application demands. Another role is to maintain the efficient running of the system with respect to resource usage such as memory, I/O and peripherals. The second application is the user's application, which performs the roles the user wishes.

A large difference between this approach and the standard is the meta- information pertaining to the application. In standard computer systems the application developer creates an application using Integrated Development Environment (IDE) tools such as; UML, .NET and C++ Builder. In all these tools, the objective is to translate the application from a high-level programming language into executable machine code.

This executable code contains information on its immediate resource requirements and dependencies. In some frameworks, XML provides information that helps the operating system service the demands of the application. However, XML is not used to derive information on the running environment or the tasks.

This is where meta-model differs from the XML model, the application frame creates more information about the application. During development, a graphical notation environment based on UML is used. The abstraction levels create ontologies that contain information such as; timing, term relationship,

sequence, collaboration, semiotics, interface and operations. The arrangement of these ontologies make a meta-assembly, each ontology has a manifest that is embedded into the meta-model. The collection of manifests helps the metaCPU to direct the correct Agent for processing.

Its the extra information of timing pattern, collaboration pattern, sequence pattern, semiotic pattern preserved in the application that starts an innovative shift to current solutions.

The metaware information tells the metaCPU of the required execution units and the execution pattern, which in essence is the description of the spatial and temporal datapath. Metaware is unique to Conceptual Logic; it provides the metaCPU with information on what execution units are required. In addition, it describes the I/O arrangement, the protocols and the contract structure.

On top of all the computing requirements, which will be explained in the relevant section, the application structure, the operations, the data structures and collaborating applications are supplied.

This means the meta-assembly of an application has a technical effect on the CL CPU. The self-describing information shapes the physical configuration as well as the cyber-configuration of the environment.

5.1.3 Example: metaSA application development

Now that the term agent has been defined in this application, it becomes clear that Conceptual Logic has some predefined agents, of which the metaSA is one. An example of how to develop CL applications is visually described in Figure 26: Application development.

The analyser takes the UML-based model, called Mercurial, and extracts all the etiquette information. In this application, etiquette information means the collection of behavioural and semiotic data of the model. Once validated, the structural information of the interfaces and content (the operations) is defined and encoded. The MWmapper validates the timing, collaboration, sequence and term patterns.

An example of the meta-assembly schema is described in Figure 27: Example meta-assembly structure.

What this architecture of meta-information provides is a common language, based on the Conceptual Logic methodology, between the software and hardware schemas. The application design pattern is passed through the Meta-Polymer Application Generator, which is a standalone version of the metaCPU. This technique means the developer need not know the hardware idiosyncrasies or concern themselves with complicated application models such as CORBA, .NET and J2EE.

The benefit of preserving all the model data is that the system uses it to 'plan', or be proactive in, running the application. The polymer-metacode slice enables the application to join with other meta-assemblies, providing a dynamic and flexible application and ensuring cross-application usage models. The polymer-metacode contains information such as routing data that can help organise mesh networks, metaware to shape or adapt interfaces, and rules for exchange patterns.

Another benefit of preserving all this information and using meta-processing elements concerns the operating system. As stated earlier, current computer systems have two application types, one being the operating system. The role of the operating system, to direct the running of the system to application demands, is now redundant in such a system, as is the need for an operating system; the equivalent of this system program is the metaCPU's object system.

An example of such a meta-operating system is shown in Figure 28: TrIMOS meta-operating system. In the meta-model of TrIMOS (Triumvir Intelligent Metaware Object System), the Logic Abstraction layer hides the idiosyncrasies of the Component Processor; this layer exposes a Facilities API for the component processor service objects. The Facilities Loader manages the initialisation, loading and cycle management of the service objects. The scheduler co-ordinates the schema and instance streams to the relevant service objects. The resource manager ensures safe running of the objects and applications within the processor workspace. Interface/Routing manages the various interface and routing configurations. The CPB Object service is the general service object that offers a facilities API to process object instances and manage their requests. The Service ORB provides an IPC mechanism between the service objects. The Conceptual Logic (CL) subsystem offers an API to developers. The Generic CL Objects are the standard configurable but locked process blocks the component processor has: the Schema Process Block, Action Process Block, Delegation Process Blocks and the Security Process Blocks, to name some.
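The Facilities Loader and scheduler roles above can be sketched in miniature. This is an illustrative sketch only: the ServiceObject class, load_facilities and schedule functions are hypothetical stand-ins, and only the component names come from the TrIMOS description:

```python
# Illustrative sketch only: the Facilities Loader initialising service
# objects and the scheduler routing schema/instance stream items to the
# relevant service object, loosely following the TrIMOS description.
# All class and function names are hypothetical.
class ServiceObject:
    def __init__(self, name):
        self.name = name
        self.initialised = False
        self.handled = []

    def initialise(self):
        self.initialised = True

    def handle(self, stream_item):
        self.handled.append(stream_item)

def load_facilities(names):
    """Facilities Loader: create and initialise the service objects."""
    objects = {n: ServiceObject(n) for n in names}
    for obj in objects.values():
        obj.initialise()
    return objects

def schedule(objects, stream):
    """Scheduler: route each stream item to the relevant service object."""
    for target, item in stream:
        objects[target].handle(item)

services = load_facilities(["CPB Object service", "Security Process Block"])
schedule(services, [("Security Process Block", "certificate parcel"),
                    ("CPB Object service", "object instance")])
```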

This model means that any application passed through the application generator will run on any CL-CSM system. The limiting factor is only the resource demands that the application makes. Unlike developments on current systems, interoperability between CL systems from different manufacturers is guaranteed. By contrast, an application designed to run on an Intel Pentium PC is not guaranteed to run on any Intel PC, as the differing operating system affects its ability to run, whereas a CL-CSM application will run on any PC based on the CL-CSM processor.

Another feature is the security nature of the model. The certificate element is processed by the Security Process Block to check the rights and trust of the schema. Failure in this area prompts the execution architecture within the system to block further execution of the schema in case of malicious actions or integrity breaches.

The example sequence diagram in Figure 29: Example of the Security sequence case provides an example use of the security model. The Schema Process Block (SPB) has passed the Certificate/Security parcel to the Security Process Block for processing. Part of the process is to check the certificate, signature or token against the database within the storage area. Failure starts the exception process (step 1.4); success returns the 'go ahead' to the SPB.
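The sequence just described can be sketched as a simple check-and-block flow. This is an illustrative sketch only, assuming a plain set of stored credentials; the function names, exception class and store contents are hypothetical, and only the SPB/Security Process Block roles and the 'go ahead' outcome come from the text:

```python
# Illustrative sketch only: the Schema Process Block (SPB) passes a
# credential to the Security Process Block, which checks it against
# stored credentials. Failure raises an exception (the text's step 1.4)
# and the schema is blocked; success returns the 'go ahead'.
# Store contents and names are hypothetical.
TRUSTED_STORE = {"cert-001", "token-042"}   # certificates/tokens on record

class SecurityException(Exception):
    """Raised when a schema's credentials fail the trust check."""

def security_process_block(credential: str) -> str:
    if credential not in TRUSTED_STORE:
        raise SecurityException("schema blocked: untrusted credential")
    return "go ahead"

def schema_process_block(credential: str) -> bool:
    """SPB: only continue executing the schema if security approves."""
    try:
        return security_process_block(credential) == "go ahead"
    except SecurityException:
        return False   # further execution of the schema is blocked
```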

A hidden feature of this innovation is that the system is thread-aware, since it handles and sequences application tasks. In addition, the resource management means it runs a virtual model; neither feature is available in ASICs or FPGAs.

To summarise, the metaSA component makes radical changes in the system architecture model. Its innovations are:

•It changes the usual CPU-ISA-centric model to one that is application centric without being application specific

•It has no need for an independent operating system

•It has a built in adaptable thread model

•It has a built in virtual model

•Combining Mercurial and the metaCPU does away with the need for firmware and compilers; the metaCPU provides an integrated compiler environment

•The security system model prevents viruses or malicious code

•The meta-assembly provides a proactive and reactive agent environment

5.1.4 MetaCA

It is the metaCA agent that is concerned with processing the computer architecture domain of the meta-model. The role of the metaCA is to execute the metaware stream and the application instruction and data streams.

The core of any current computer system is the CPU, which is tasked to execute the instructions and data of an application. The previous sections described the various attempts to get CPUs to perform this task efficiently.

The innovation of CL-CSM in this domain is the metaware model. Unlike current general-purpose CPUs, CL-CSM CPU has an adaptable extendible ISA. This meta-programmable feature enables the CPU to be applied in both application specific and general-purpose environments.

The metaCA agent is the processor of the CSM; unlike current systems, CL-CSM lends itself to multiple processing elements. The standard CSM describes its CPU in terms of the rigid two-part instruction engine and execution engine model. CL-CSM's functionality is to process a plurality of meta-models, the application and the security model. The arrangement of processing elements is far more flexible and lends itself to more flexible multiple-processing arrangements within the wafer.

The metaCA executes what the metaSA generates, but is not limited to a top-down route. As the meta-stream is self-describing, the metaCA CPU can reverse engineer an application. In addition, the metaSA manifest contains information such as the timing, collaboration and sequence patterns. This is unique: because of the timing pattern, the need for complex branch prediction is removed. The collaboration pattern provides management features previously not available in CPUs. Lastly, the sequence pattern helps evaluation models and routing, and aids prediction.

This means the metaCA can manage and organise its stream, arrange its execution units and adapt to the demands of the application. The metaCA organises a loose federation of Conceptual Logic - Configurable Logic Blocks (CL-CLB) into application processors.

The metaCA is comprised of a sea of coarse-grained CL-CLBs; a CL-CLB is comprised of a set of processing elements, and a processing element can be either a general algorithmic processor or a specialist algorithmic processor. Analysis of real-world problems shows that they consist of a complex set of heterogeneous elements. These elements can in general be formulated into a set of algorithmic elements. By nature, any algorithmic element exhibits some form of structure, and it requires memory. Thus any application can be composed of a set of heterogeneous algorithmic elements, each requiring a set of memory elements within the locale of the algorithmic elements.

This means the metaCA's unique combination of algorithm-based processing elements enables it to be application centric whilst not application specific.

5.1.5 Example of metaCA arrangement

From now on, algorithmic elements are called PEs (Processor Elements) and IEs (Interface Elements), and memory MEs (Memory Elements). An example of an algorithm arrangement is:

•CI/O - This element enables an adaptable interface from the component to the outside; in short, the public interface.

•PEs - This processor element has a scalar component to enable the execution of complex sequences.

•PEt - This processor element has a text processor instead of a scalar enabling better meta processing and text processing.

•MRI - This processor element forms the hub of the routing and communication between node zones and the main interface.

Grouping the elements forms a coarse-grained processor node that can dynamically change its process sequence according to the application task.
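The grouping and per-task reconfiguration described above can be sketched as follows. This is an illustrative sketch only: the Node class and the list-of-names encoding of a process sequence are hypothetical; only the element names (CI/O, PEs, PEt, MRI) come from the example list above:

```python
# Illustrative sketch only: a coarse-grained processor node built from
# the element types listed above, whose process sequence can be changed
# dynamically from one application task to the next. The Node class and
# sequence encoding are hypothetical.
class Node:
    ELEMENTS = {"CI/O", "PEs", "PEt", "MRI"}

    def __init__(self):
        self.sequence = []

    def configure(self, task_sequence):
        # Dynamically adopt a new process sequence for the current task.
        unknown = [e for e in task_sequence if e not in self.ELEMENTS]
        if unknown:
            raise ValueError(f"unknown elements: {unknown}")
        self.sequence = list(task_sequence)

node = Node()
node.configure(["CI/O", "PEs", "MRI"])   # a scalar-heavy task
node.configure(["CI/O", "PEt", "MRI"])   # reconfigured for text/meta work
```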

A table of definitions describes the individual algorithm-processing element.

The arrangement of metaCA can be organised into a triumvirate architecture, which is used as an example in the metaIA section as well as this.

Triumvirate architecture is composed of triumvirs that collaborate to act as one. The base level for the Conceptual Logic triumvirate architecture is composed of these prime components:

•Semiotics: rules, behaviour, grammar, language and control

•Interface: connectivity, IPC, caste, rules and messages

•Metaware: hardware/software processes which own the interface

The Etiquette metaCA processor: has an initial set of rules that govern its view of the events it handles; depending on the Etiquette cast rendered, it can build on these perspective rules or start to change its perspective. It is the perspective process that dictates the behaviour of Quiddity and consequently how responsive and proactive it is as a whole.

The Delegation metaCA processor: Like the Etiquette processor, Delegation has some initial configurations. Its role is to understand the metaSA model and deal with mapping the application meta-stream into tasks. Delegation is comprised of two sub-processors called Judgment and Discovery.

The Judgment metaCA processor: Judgment decides what action set should be taken; Judgment does not execute the action itself but delegates what action will be taken, and the timing of the action. Governed by its perspective, and depending on the initial etiquette coding, a Judgment processor can enhance, extend or change the rules that bias its judgment.

The Discovery metaCA processor: Discovery helps enhance the system's abilities by discovering what resources are available outside the device. Discovery can detect and learn new meta-models for the system and, through its Perspective and Judgment processors, enhance the system or change its outlook.

More detail on this architecture is the example subject of the next section. To summarise, the metaCA component makes radical changes in the computer architecture model. Its innovations are:

•It is aware of the system architecture models

•The timing and sequence pattern processor does away with complex prediction and evaluation circuitry

•The collaboration pattern processor acts as a full resource management

•The timing, sequence and collaboration processors provide a thread model for the processor

•The pattern processors provide a virtual processing environment

•The heterogeneous architecture provides flexible algorithmic mesh computing model

•Full multi-tasking processor

•The security processor does away with virus and malicious code execution

•The polymer-metacode processor enables application and data chaining

5.2 MetaIA

It is the metaIA agent that is concerned with the organisation, building and routing of the implementation architecture domain of the meta-model. The role of the metaIA is to build, organise and configure the meta-processors into the required hardware and software system instance.

Current computer systems are composed of an arrangement of computing elements organised in a fashion to suit the application. The metaIA is the builder and thus must have all the rules on how to do the job. This completes the roles of CL-CSM: the metaSA refines the idea into a functional specification model contained in the meta-model; the metaCA decodes the meta-model and distributes the functionality into agent processors; the metaIA builds, organises and configures meta-processors into the required hardware and software system instance.

The metaIA is involved with making the various elements into a computer system; it orchestrates the varying architectures to a common goal. The metaIA uses the meta-information models to determine how it does this and the meta-semiotics to determine its behaviour, language and protocol.

The connectivity of the system is under the control of the metaIA architecture; it understands the forces applied on it at any instant. The metaCA concerns itself with the processing responsibility, not how the information delivery works or how to manage the streams. As the metaSA is input to the metaCA, so the metaCA is input to the metaIA. The routing of the information and the design of the interfaces are all under the metaIA architecture.

The CL-instance organiser manages the hardware logic instances, their tasks and their arrangement. The RFU-Instance is a configured CLB and the PE-instance is a configured process element instance. The metaIA uses the FE-routing to create the mesh networks and as an interface mechanism for the system.

Both the metaCA and metaIA use a Directed Acyclic Graph (DAG) pattern processor, for the scheduling, functional structure organising and grouping.

This mechanism provides the extensible ISA and the dynamic I/O for the metaCA. In addition, the metaIA uses the data for the mesh routing on large algorithms as well as to connect the tree of CL-CLBs.
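DAG-based scheduling of the kind the pattern processor is described as performing can be illustrated with a topological sort: an order that respects every dependency edge. This is an illustrative sketch only; the graph contents are hypothetical, and the standard-library TopologicalSorter stands in for whatever hardware mechanism the application intends:

```python
# Illustrative sketch only: scheduling with a Directed Acyclic Graph
# (DAG), as both the metaCA and metaIA are described as using a DAG
# pattern processor. A topological sort yields an execution order that
# respects dependencies; the graph contents here are hypothetical.
from graphlib import TopologicalSorter   # Python 3.9+ standard library

# Each key depends on the blocks in its value set (predecessors).
dag = {
    "route":   {"decode"},
    "execute": {"route", "fetch"},
    "decode":  {"fetch"},
    "fetch":   set(),
}

# static_order() emits every block after all of its predecessors.
order = list(TopologicalSorter(dag).static_order())
```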

5.2.1 Example metaIA design

To continue with the triumvirate architecture described in the metaCA example, this section shows how the metaCA acts as input to the metaIA.

The example configuration shown in Figure 31: Algorithm arrangements in CL-CLBs could be organised as described in Figure 33: Example organisation of triumvirate system. This is by no means a true representation of the full density achievable with a modern silicon wafer process.

At the core of this example is a Triumvirate-based architecture that provides smooth coupling between the various abstractions that compose the Component Functional Units or Processor. An abstract functional model of a Triumvir CL processor is shown in Figure 35: CL-CSM organised into a Triumvirate processor. This abstract meta-model places clear roles on the triumvirs; these roles describe their inter-relationships and dependencies on each other.

The role sequence of a triumvir system is:

•The etiquette triumvir works on conceptual schemas that describe what to make in the reconfigurable elements of the system.

•The Delegation provides a means to manage the instance data/instructions and validate them through the schema. Furthermore it enables a transactional model to coexist with the data instance, enabling tight co-ordination of the engines' execution efforts and a distributed execution model.

•It provides a detailed diagnostic and robust recovery system through the schema, the CAM-based architecture and the sequence model it runs on.

•The Action Triumvir provides an abstraction model for running application instances without main operating system efforts or system-bound events to interrupt them.

•The system offers 'self-repair' and an optimal-efficiency algorithm because of its build-factory feature.

•Both Conceptual & Instance streams have a security level processor to guard against malicious code and streams.

•The Action Triumvir provides dynamic configurability, extensibility and functionality to the system through the RISC CPU and the CPBs (Configurable Processing Blocks). The CPU offers script, function and vector execution; the CPBs offer greater application and function execution.

It is important to note that Conceptual Logic offers flexibility in application and processor-engine development not equalled so far because:

•Tighter harmonious coupling between the Logic level of the software/hardware domains

•Better hardware logic reuse and flexibility to the application

•Higher degree of logic utilisation to the application

•Simpler software model reducing application complexity

The following section goes into more detail on the triumvir blocks and the internal hardware/software model that makes up the current application. It should be stressed that the diagrams and architectures are examples, not the limiting and only possible means of implementing this application.

The Etiquette Triumvir (13.1) processes the conceptual schemas; this entails the following tasks:

•Schema Process block (13.1)

•Schema decoding, validation and security through interface (1.5).

•Schema compiling (tag processing) through interface (1.5).

•Schema instance (meta-information) construction through interface (1.4/5).

•Meta-instance optimisation and scheduling through interface (1.4)

•Instance (instruction and metadata) through interface (1.4)

•Etiquette CPU (13.2)

•Component Processor collaboration through CPU interface (1.4/5)

•Resource tracking and scheduling through interface (1.4)

•Runs the TrIMOS through interface (1.3)

Jointly the Etiquette system tasks are:

•CL Contract management

Provides a framework for managing the protocol, engagement and life-cycle of schema-based and connection-based process blocks & process services to interfaces.

•CL Policy management

Provides a language and syntax model to describe and communicate policy means to internal and external Process Blocks & Execution Engines.

•CL Coordination management

Provides protocols that coordinate the actions of distributed Processors and/or Process Execution Engines. In tandem with the Discovery Process Block, it provides coordination protocols to support those application types that need to reach consistent/coordinated agreement on the outcome of distributed transactions.

Section 2 of the schematic in Fig 13 covers the local storage and the Delegation Triumvir. The cache system (2.1) is similar but not identical to a Harvard architecture, in the sense that the store is divided by type. Unlike Harvard, the Instance cache contains the active data instance (see the definition segment of the Detail Triumvir for our definition of data). The Schema cache holds CPB and logic assembly information that is or will be needed during its work-cycle(s).
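The Instance/Schema cache division might be sketched as two independent least-recently-used stores. This is a simplified Python illustration; the `SplitCache` class, its capacity policy and the example keys are assumptions, not the specification's design:

```python
from collections import OrderedDict

class SplitCache:
    """Two independent stores, loosely analogous to the Harvard-like split:
    one for active data instances, one for CPB/logic-assembly schemas."""
    def __init__(self, capacity=4):
        self.instance = OrderedDict()  # active data instances
        self.schema = OrderedDict()    # CPB / logic assembly information
        self.capacity = capacity

    def _put(self, store, key, value):
        store[key] = value
        store.move_to_end(key)
        if len(store) > self.capacity:   # evict the least recently used
            store.popitem(last=False)

    def put_instance(self, key, value): self._put(self.instance, key, value)
    def put_schema(self, key, value):   self._put(self.schema, key, value)

    def get(self, kind, key):
        store = self.instance if kind == "instance" else self.schema
        if key in store:
            store.move_to_end(key)       # refresh recency on a hit
            return store[key]
        return None

# Hypothetical usage: schema material and live data never compete for slots.
cache = SplitCache(capacity=2)
cache.put_schema("adder", "CPB assembly v1")
cache.put_instance("job-42", b"active data")
print(cache.get("schema", "adder"))  # CPB assembly v1
```

Because the two stores are separate, schema traffic cannot evict active instance data (and vice versa), which is the point of dividing the store by type.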

The Discovery Process Block (2.2) manages the service tasks:

•CL-Inspection

Assists in the inspection of a CL Processor for available services. It is also an etiquette subset: a collection of rules on how the inspection-related information is made available for discovery.

•CL Transaction

Provides coordination protocol means to support those application types that need to reach consistent/coordinated agreement on the outcome of distributed transactions.

With the aid of the Community Process Block and Interface (2.2.1), the Discovery Process Block also covers:

•CL Referral

Provides a means to dynamically configure external discovery message paths to service directories or Discovery nodes of CL Processors, and to define how they should handle a discovery message request/response. Referral is a configuration protocol means that enables Discovery nodes within the network to delegate parts or all of their processing responsibility (in the case of terminal failure, for example) to other CL Component Processors.

•CL Routing

Describes the message path to take, the transport protocol to use and the MEP (Message Exchange Pattern), in either an asynchronous or synchronous manner. Routing supports various transport means that (like the other tasks) are described in a schema.

•CL Delivery

Describes a protocol means that allows reliable delivery of messages to the distributed Component Processor or Computer System Models in the presence of some terminal failure, be it of the network, the computer system or the execution engine.

The Judgment Process Block (13.2) provides similar services to the Discovery Process Block, but under these conditions:

•The services are constrained to the internals of that Component processor/Triumvir.

•Delivery is not actually supported as such, since the Routing and Referral services will do that task.

•Judgement takes precedence over the Discovery Process for internal demands.

The Security Process Block (2.4) covers the services:

•CL Security

Provides protection through message integrity, message confidentiality, message authentication and message accounting for any information instance. Furthermore, the Security Process Block can be dynamically configured through its respective schema ontology to any current, emerging or future standards.
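Message integrity and authentication, two of the protections listed, could be sketched with an HMAC-SHA256 seal in Python. The specification does not name any algorithm; this is simply one standard choice, and the function names are illustrative:

```python
import hashlib
import hmac

def seal(key: bytes, message: bytes) -> bytes:
    """Append an HMAC-SHA256 tag giving integrity + authentication."""
    return message + hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, sealed: bytes):
    """Return the message if the tag checks out, else None."""
    message, tag = sealed[:-32], sealed[-32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    # compare_digest avoids timing side channels on the comparison
    return message if hmac.compare_digest(tag, expected) else None

key = b"shared-secret"          # hypothetical pre-shared key
sealed = seal(key, b"schema stream")
print(verify(key, sealed))      # b'schema stream'
```

Any alteration to the sealed bytes makes verification fail, so a guarded stream either arrives intact and authenticated or is rejected.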

Section 3 of the schematic in Fig 13 covers the worker of the processor; the preceding sections were about management and control, whereas Action is about being a configurable execution engine. Again, it is worth noting the difference between the Processor Engine and Execution Engine terms, plus the Quiddity and Agent terms:

•Processor Engine

Is the complete physical entity of the Triumvir or Triumvirate; this does not include any fixed CPU cores that may be contained within.

•Execution Engine

Is either a fixed CPU core, a Locked Configurable Process Block or a Configurable Process Block (CPB) that is within a Processor or Triumvir/Triumvirate.

•Quiddity

Is a CL Component Processor with a running application system like ICOS but has no fixed role assigned to its CPB. Quiddities are said to be 'pending for service assignment'.

•Agent

Is a CL Component Processor with a running application system like ICOS that has its CPB assigned; this means the Agent has now taken on some role and responsibility. An Agent is said to be 'an Agent of some service'.

The Action Process Block (3.1) manages the activities of the triumvir. The Builder Process block (3.2) takes the schema instance stream and builds or shapes CPB to the schema specification. The Process Cache (3.3) stores the active CPB schema instances.

The Action CFB (Configurable Function Block, 3.4) is a configured execution engine. The number of Action CFBs is dependent on:

•The technology used to implement the Triumvir/Triumvirate processor

•The family type designed

•The desired functionality of the processor

The Action RISC CPU (3.5) is an extension to the Action CFBs, providing:

•Script engine: to process schema, language and other application idiom scripts or statements to aid efficient execution of the scenario parcel

•Vector processing to alleviate the complexities of branch exceptions

•Backup to Process execution if an internal or external CFB resource fails

•Prefetching of instance data to the CFBs

•Sequencing and scheduling prediction

Together with the Action Process Block, the Action CPU lessens the complexities of certain application service scenarios that FPGA-type processors are limited in. It also enables a reliable predictor of current and future activities, ensuring efficient instance feed to the execution engines.

The Action Interface (3.7) scopes the exchange types:

•Service Connection Paradigms

•Action control messages

The Action Interface (3.8) scopes the exchange types:

•Service request/response

•Any action-related service/facility/operation I/O

Virtualisation Model: the processor can execute these configurable virtual patterns:

On-the-Fly Reprogram

To avoid incurring a long latency by resetting the entire CPB/CFB set, we could just stop the clock going to some or all of the CFBs/CPBs, change the logic within that region, and restart the clock. That way there is not as much wasted time or configuration overhead. The more configuration overhead there is, the more likely the system performance will be unacceptably below that of a fixed-hardware version.

Partial Reprogrammability

The ability to leave most of the internal logic in place and change just one CFB/CPB. Any gate or set of gates within a CFB/CPB may be changed without affecting the state of the others.
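The two patterns above can be illustrated with a small Python sketch. The `Region` class and its behaviour are purely illustrative assumptions: the clock to one region is gated off, its logic is swapped, and the clock is restarted, while other regions keep running untouched:

```python
class Region:
    """One reconfigurable region (a CFB/CPB) with a gated clock."""
    def __init__(self, name, logic):
        self.name, self.logic, self.clocked = name, logic, True

    def tick(self, value):
        # While the clock is gated, the region holds state and produces nothing.
        return self.logic(value) if self.clocked else None

def reprogram(region, new_logic):
    """Partial reprogram: gate the clock, swap the logic, restart the clock.
    Only the targeted region is affected; others keep running throughout."""
    region.clocked = False      # stop the clock to this region only
    region.logic = new_logic    # change the logic within that region
    region.clocked = True       # restart the clock

a = Region("cfb-a", lambda v: v * 2)   # hypothetical doubler block
b = Region("cfb-b", lambda v: v + 1)   # hypothetical incrementer block
reprogram(a, lambda v: v * v)          # on-the-fly change of one region
print(a.tick(3), b.tick(3))            # 9 4
```

Region `b` is never stopped or reset during the reprogram of `a`, which is the latency saving the text describes relative to resetting the entire set.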

A feature of this device is:

•Externally-Visible Internal State

The internal state of the Triumvir/Triumvirate can be analysed, configured and managed at any time; it is also possible to capture that state and save it for later use. This allows the internal state of the CL Component Processor CFBs/CPBs to be read and written just like memory, which makes it possible to "swap" hardware designs in much the same way that pages of virtual memory are swapped into and out of physical memory.
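A minimal Python sketch of the externally visible state idea (the `CPB` class and its state fields are assumptions made for illustration): internal state is captured out like memory and restored later, the same way a virtual-memory page is swapped in and out:

```python
import copy

class CPB:
    """Configurable Process Block whose internal state is externally
    readable and writable, so designs can be 'swapped' like memory pages."""
    def __init__(self):
        # Hypothetical internal state: register file plus active configuration.
        self.state = {"registers": [0, 0, 0], "config": "adder"}

    def capture(self):
        """Read the internal state out (the 'page out')."""
        return copy.deepcopy(self.state)

    def restore(self, snapshot):
        """Write a previously saved state back in (the 'page in')."""
        self.state = copy.deepcopy(snapshot)

cpb = CPB()
snapshot = cpb.capture()            # save the current hardware design/state
cpb.state["config"] = "multiplier"  # reconfigure the block for another task
cpb.restore(snapshot)               # swap the original design back in
print(cpb.state["config"])          # adder
```

The deep copies matter: a snapshot must be independent of the live state, or later reconfiguration would silently corrupt the saved design.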

For all these features to work another model is incorporated into the CL Component Processor which is explained in Fig 14. Effective cycle management ensures stable changes and assignment occurs and overcomes the metastability issues that plague some high-speed systems.

Fig 14 describes in generic example terms how these cycles work; Fig 20 describes the generic stream paradigm. The system has two distinct cycles it goes through. An Assignment cycle occurs when the Etiquette is decoded and then encoded to the targeted entity. This cycle is a single-shot one, so in a scenario only one Assignment cycle is executed. However, Quiddities can execute initialisation cycles during a job assignment.

The engagement cycle is the method by which the parties involved in a project determine the means by which they can interact with each other. This cycle is governed by the Etiquette processor, uses Discovery and encodes Judgment. Furthermore, these processes initialise the system to an application or applications that it will run. Once completed the work cycle begins and continues to do so according to the etiquette.

The work cycle occurs only when a successful engagement cycle has produced a working contract. The Judgment processor manages the action processor(s) involved in the project. For any event that strays beyond the contract or known rules, Judgment will decide the best course of action.
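The cycles above can be sketched as a small state machine in Python (the class and method names are hypothetical): a single-shot Assignment, an Engagement that must produce a working contract, and a Work cycle gated on that contract:

```python
class Triumvir:
    """Cycle model sketch: Assignment is single-shot, Engagement must
    produce a working contract, and only then can the Work cycle begin."""
    def __init__(self):
        self.phase, self.contract = "idle", None

    def assignment(self):
        # Single-shot: only valid once, from the idle phase.
        assert self.phase == "idle", "assignment cycle is single-shot"
        self.phase = "assigned"

    def engagement(self, parties):
        # Parties determine how they interact; success yields a contract.
        assert self.phase == "assigned"
        self.contract = {"parties": parties}
        self.phase = "engaged"

    def work(self):
        # Work occurs only after a successful engagement produced a contract.
        if self.phase != "engaged" or self.contract is None:
            raise RuntimeError("work cycle requires a successful engagement")
        self.phase = "working"
        return "running per etiquette"

t = Triumvir()
t.assignment()
t.engagement(["etiquette", "delegation", "action"])
print(t.work())  # running per etiquette
```

Attempting `work()` on a fresh instance raises, mirroring the rule that the work cycle only follows a working contract.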

5.2.2 Implementation Architecture

Generally, the Implementation architect has to research many differing devices, risking differing response models and performance characteristics.

CL-CSM has a generic quad-model, the MCCP (Metaware Configurable Computing Processor):

The core is another step-change: usually it is the microprocessor, but here it is the Meta Memory Processor, which acts as the coordinator of the system. The memory system structures and organises the applications at either system, business or customer level.

The Memory MCCP feeds application infosets to the application accelerator bank, normally the role of a microprocessor. Here the Application bank organises its structure and workload free from operating services; the focus of the system is to run application code efficiently.

The Memory MCCP also feeds presentation infosets into the presentation processor, which will then coordinate with monitors and graphical devices.

The Connectivity MCCP contains the security model, thus providing a firewall between the system and the outside world. All forms of attack can be blocked here, and secure communication is also coordinated here.

The Storage MCCP takes care of all infoset flow between storage and memory, and can also provide direct large query feeds from the Application MCCP.

5.2.3 Computer System

The above diagram is an example implementation of a computer using the Connectivity Network card as a means of secure access to the Internet. The model is based on these facts:

The application processor executes the application.

Coordination is by the Memory MCCP.

5.2.4 Network Processor

One very important aspect of the system is the ability to execute live structure changes in real time; this means the capability to apply adaptive security models over the Internet, and to understand an XML instance from XML schemas, performing:

•Validation

•Variable instance encryption

•Variable instance polymer coding

1.2 Summary

Conceptual Logic - Computer System Model is a step change from the current system models because:

•It addresses and changes all aspects of the computer system model:

•System Architecture

•Operating environment, effective firmware and compilers for application development.

•Computer Architecture

•The lop-sided microprocessor approach

•The frozen-in-time scenario resulting in quick out-of-cycle devices

•Implementation Architecture

•Mixing of incongruent systems and the flaws they bring.