Title:
ARTIFICIAL INTELLIGENCE SYSTEMS AND AUTONOMOUS ENTITY APPARATUSES INCLUDING ARTIFICIAL INTELLIGENCE SYSTEMS
Document Type and Number:
WIPO Patent Application WO/2019/081891
Kind Code:
A1
Abstract:
Autonomous entity apparatuses and artificial intelligence systems comprising: an artificial neural network at least part of which is hardware implemented, the neural network being a multiple-layer neural network having input layer nodes, output layer nodes and at least one hidden layer of nodes, the neural network being configured to repeatedly propagate its inputs to its outputs with each propagation being termed a decision cycle, wherein the neural network has encoded within it at least one node which is a symbolic representation (symbol) for 'self' which has within at least part of its definition an association with said at least part of the neural network which is hardware implemented and wherein at least one of the output layer nodes is connected to at least one of the input layer nodes such that internal feedback data may be provided to said at least one of the input layer nodes and the neural network is configured to carry out successive decision cycles in response to said internal feedback data irrespective of whether any data external to the neural network is provided to the input layer nodes.

Inventors:
SLADE GLEN JONATHAN (GB)
Application Number:
PCT/GB2018/053001
Publication Date:
May 02, 2019
Filing Date:
October 17, 2018
Assignee:
ZYZZLE LTD (GB)
International Classes:
G06N3/00; G06N3/04
Foreign References:
US20070288407A12007-12-13
US6038556A2000-03-14
US20070239641A12007-10-11
Attorney, Agent or Firm:
FAULKNER, Thomas John (GB)
Claims:
CLAIMS:

1. An artificial intelligence system comprising:

an artificial neural network at least part of which is hardware implemented, the neural network being a multiple-layer neural network having input layer nodes, output layer nodes and at least one hidden layer of nodes, the neural network being configured to repeatedly propagate its inputs to its outputs with each propagation being termed a decision cycle,

wherein the neural network has encoded within it at least one node which is a symbolic representation (symbol) for 'self' which has within at least part of its definition an association with said at least part of the neural network which is hardware implemented and

wherein at least one of the output layer nodes is connected to at least one of the input layer nodes such that internal feedback data may be provided to said at least one of the input layer nodes and the neural network is configured to carry out successive decision cycles in response to said internal feedback data irrespective of whether any data external to the neural network is provided to the input layer nodes.

2. An artificial intelligence system according to claim 1 in which the output layer nodes comprise at least one actuator node, which is arranged to generate outputs for use in operation of actuators external to the neural network.

3. An artificial intelligence system according to claim 2 in which the at least one actuator node is connected to at least one of the input layer nodes such that actuator feedback data may be provided to said at least one of the input layer nodes.

4. An artificial intelligence system according to claim 3 in which the input layer nodes comprise at least one actuator feedback node for receiving feedback from an actuator node in the output layer.

5. An artificial intelligence system according to any preceding claim in which the input layer nodes comprise at least one sensory data node for receiving external information.

6. An artificial intelligence system according to any preceding claim in which the neural network comprises a first set of connections and neural weightings which are predefined and a second set of connections and neural weightings which are dynamically programmable.

7. An artificial intelligence system according to any preceding claim in which the neural network has encoded within it at least one respective node which is a symbolic representation (symbol) for a respective one of:

a direct external action associated with one of said actuator nodes,

a concept, for example at least one amongst the group of: goals, plans, objects, actions, vocabulary.

8. An artificial intelligence system according to any preceding claim in which there are a plurality of output layer internal feedback nodes and a respective plurality of input layer internal feedback nodes, and a respective one or more of the plurality of output layer internal feedback nodes is encoded as a symbolic representation (symbol) for a respective one of:

the entity's current worldview,

the entity's current selfview,

the entity's current train of thought,

nature and status of the entity's past actions,

nature and status of the entity's plans,

nature and status of the entity's goals.

9. An artificial intelligence system according to any preceding claim in which the at least one node which is a symbolic representation (symbol) for 'self' has within at least part of its definition an association with an entity to be controlled by the artificial intelligence system.

10. An artificial intelligence system according to any preceding claim in which the input layer of nodes comprises at least one tool input node for accepting data from a respective peripheral device external to the neural network.

11. An artificial intelligence system according to any preceding claim in which the output layer of nodes comprises at least one tool output node for supplying data to a respective peripheral device external to the neural network.

12. An artificial intelligence system according to any preceding claim which comprises at least one override control to allow a human operator to intervene and stop the system carrying through an action which has been decided upon at the end of a decision cycle.

13. An artificial intelligence system according to claim 12 in which the override control is arranged to block signals from one or more respective actuator node reaching the respective actuator.

14. An artificial intelligence system according to any preceding claim which is arranged to allow a plurality of parallel streams of thought.

15. An artificial intelligence system according to any preceding claim in which the system is arranged such that the neural network is capable of processing multipart objectives, and/or the neural network is arranged to have duplicated portions such that a first portion can handle a first issue whilst a second portion can handle a second issue and/or the system is configured to control the neural network to time share resources between two issues.

16. An artificial intelligence system according to claim 14 or 15 in which the neural network is arranged to have a plurality of replicated portions such that each portion can handle its own respective issue whilst the other portions handle their own respective issues and/or the system is configured to control the neural network to time share resources between a plurality of issues.

17. An artificial intelligence system according to any preceding claim in which the system is arranged to switch between a conscious mode of operation and a non-conscious mode of operation.

18. An autonomous entity apparatus comprising an artificial intelligence system according to any preceding claim and at least one actuator for accepting control signals/data from a respective actuator node of the neural network.

19. An autonomous entity apparatus according to claim 18 further comprising at least one sensor for feeding information to at least one respective sensory input node of the neural network.

20. An autonomous entity apparatus according to claim 19 wherein the at least one sensor is physically separated, say geographically separated, from the artificial intelligence system.

21. An autonomous entity apparatus according to any one of claims 18 to 20 wherein the at least one actuator is physically separated, say geographically separated, from the artificial intelligence system.

22. An autonomous entity apparatus according to any one of claims 18 to 21 which comprises at least one peripheral device.

Description:
ARTIFICIAL INTELLIGENCE SYSTEMS AND AUTONOMOUS ENTITY APPARATUSES INCLUDING ARTIFICIAL INTELLIGENCE SYSTEMS

This invention relates to artificial intelligence systems and autonomous entity apparatuses comprising artificial intelligence systems.

There is a desire to produce artificial intelligence systems and devices or apparatuses including such systems which can provide high levels of intelligence and in particular say human-like intelligence and behaviour.

One aspect of human-like intelligence and behaviour is that of consciousness. In at least some circumstances it may be of interest to be able to produce artificial intelligence systems, and hence apparatuses including such artificial intelligence systems, which are able to behave as though they are conscious or in fact experience consciousness. Commercial applications may include improved decision making in contexts requiring empathy, eg. nursing.

It is an aim of the present invention to provide artificial intelligence systems and apparatuses including artificial intelligence systems which may be able to exhibit high levels of autonomy, in particular self-directed behaviour over a protracted or indefinite time period. Furthermore it would be desirable if such autonomous entities could possess a form of consciousness.

There is currently no unequivocally accepted definition of 'consciousness' in the fields of artificial intelligence, neuroscience, philosophy, or elsewhere as far as the inventor knows. Given this, it is not possible, even in principle, to design a machine which is uncontroversially conscious. Thus we do not claim this, nor claim to have devised such a machine. What is described and claimed in this specification are artificial intelligence systems and autonomous entity apparatuses including such artificial intelligence systems which can facilitate the generation of a machine with a high level of autonomy, in particular self-directed behaviour over a protracted or indefinite time period. Moreover, these ideas may, in at least some circumstances, allow the generation of a machine with animal-like or human-like autonomous behaviours and may, in some circumstances, acknowledging the lack of certainty around the precise meaning of this term, allow the generation of a machine which can be considered to be conscious. Note that in saying this, it is recognised by the inventor of the present ideas that there might be different levels of consciousness. Thus what constitutes an adult human level of consciousness may not be a unique form of consciousness. There may be higher or lower forms of consciousness. Simple implementations of the systems described herein will likely not come close to human-like behaviour or consciousness; however, taking the perspective that human-like behaviour and consciousness have evolved, it is considered that simple implementations would, like certain animals, display simpler behaviours and potentially experience simpler forms of consciousness.

According to one aspect of the present invention there is provided an artificial intelligence system comprising:

an artificial neural network at least part of which is hardware implemented, the neural network being a multi-layer neural network having input layer nodes, output layer nodes and at least one hidden layer of nodes, the neural network being configured to repeatedly propagate its inputs to its outputs with each propagation being termed a decision cycle,

wherein the neural network has encoded within it at least one node which is a symbolic representation (symbol) for 'self' which has within at least part of its definition an association with said at least part of the neural network which is hardware implemented and

wherein at least one of the output layer nodes is connected to at least one of the input layer nodes such that internal feedback data may be provided to said at least one of the input layer nodes and the neural network is configured to carry out successive decision cycles in response to said internal feedback data irrespective of whether any data external to the neural network is provided to the input layer nodes.

Such an arrangement can lead to a system that can be considered conscious. The internal feedback data may be considered as thoughts of the system. The propagation of thoughts (irrespective of whether there is external stimulation) coupled with the symbolic representation for 'self' can be considered to lead to consciousness. One might then describe the artificial intelligence system as a conscious entity or conscious system. In this document the term 'thoughts' is considered to include 'feelings' and other elements of consciousness, except conscious actions which are treated separately.

In this document, the most important representation in the neural network is of the autonomous entity, or 'self', being the hardware neural network itself and, as appropriate, the apparatus associated with and controlled by the neural network. Such a representation is readily trained by those skilled in the art, for example by creating a training set where the neural network's inputs (eg. vision, sound, feedback), pre-processed into features as appropriate, are labelled as 'self' or not according to whether the inputs contain the entity, for example as some embodiment (eg. part of the physical apparatus in its field of vision) or manifestation (eg. the audio signal produced by its speaker) or internal feedback (eg. its thought). The resulting configuration will be referred to here as "at least one node which is a symbolic representation (symbol) for 'self' which has within at least part of its definition an association with said at least part of the neural network which is hardware implemented". Note that such a configuration could be hard-wired after learning in order to resist corruption by further learning processes.
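By way of illustration only, the following minimal software sketch shows one way such a labelled training set and 'self' classifier might be assembled. The feature size, network shape, framework (PyTorch) and randomly generated stand-in data are assumptions, not taken from this specification; a hardware implementation as described above would differ.

```python
# Minimal sketch (illustrative, not the patent's method): training a 'self'
# symbol node as a binary classifier over pre-processed input features.
# Feature vectors that contain the entity (its body in vision, its speaker
# output, its own internal feedback) are labelled 1 ('self'), all others 0.
import torch
import torch.nn as nn

FEATURES = 64  # assumed size of the pre-processed input vector

class SelfSymbolNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(FEATURES, 32)
        self.self_node = nn.Linear(32, 1)  # activation = the 'self' symbol

    def forward(self, x):
        return torch.sigmoid(self.self_node(torch.relu(self.hidden(x))))

# Hypothetical training set: random stand-ins for real labelled examples,
# where label 1 marks inputs containing some embodiment or manifestation
# of the entity itself.
x = torch.randn(256, FEATURES)
y = torch.randint(0, 2, (256, 1)).float()

net = SelfSymbolNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()
```

After training, the weights feeding the 'self' node could, as noted above, be hard-wired to resist corruption by further learning.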

The output layer nodes may comprise at least one actuator node, which may be arranged to generate outputs for use in operation of actuators external to the neural network.

Thus the output layer nodes may fall into at least two categories, namely: actuator node(s), internal feedback node(s).

The at least one actuator node may be connected to at least one of the input layer nodes such that actuator feedback data may be provided to said at least one of the input layer nodes. In such a case actuator feedback can represent an indication of the data/instructions provided to the respective actuator. Such feedback is distinct from the above mentioned internal feedback and may be provided along therewith where appropriate. The actuator feedback relates to external actions decided upon in the immediately preceding decision cycle, whereas the internal feedback relates to outputs of the immediately preceding decision cycle that did not result in external action being decided upon.

The input layer nodes may comprise at least one sensory data node for receiving external information. This information may comprise sensory information which corresponds to human-like senses and/or other forms of data. Thus, for example the at least one sensory data node might be arranged to receive say image data, sound data or a real-time data stream of specific information not (at least directly) related to human-like senses - say weather data. The data may be pre-processed before being provided to the input node. The system may comprise a device for carrying out said pre-processing or the pre-processing may occur outside of the system. In an example the device for pre-processing data may comprise another neural network.

The input layer nodes may comprise at least one actuator feedback node for receiving feedback from an actuator node in the output layer.

Thus the input layer nodes may fall into at least three categories, namely: sensory data node(s), actuator feedback node(s), internal feedback node(s).

The output layer of nodes can be considered to represent conscious actions and/or thoughts. The hidden layers can be considered to represent subconscious processes.

The neural network may comprise a first set of connections and neural weightings which are predefined and a second set of connections and neural weightings which are dynamically programmable. This allows the system to be provided with some predetermined symbols and behaviour whilst also allowing the system to be configured by training of the neural network, that is, allowing the neural network to learn (a code sketch of this split follows the list below). The neural network may have encoded within it at least one respective node which is a symbolic representation (symbol) for a respective one of:

a direct external action associated with one of said actuator nodes, or a concept, for example, at least one amongst the group of: goals, plans, objects, actions, vocabulary.
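As referenced above, the split between a predefined, fixed set of weightings and a dynamically programmable set might be illustrated as follows. This is a software sketch only; the layer sizes and the use of PyTorch's requires_grad flag to freeze parameters are assumptions for illustration.

```python
# Sketch (assumption, not the patent's mechanism): a 'first set' of
# predefined, fixed weights alongside a 'second set' that remains
# trainable, frozen via PyTorch's requires_grad flag.
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(64, 32),  # predefined symbols/behaviour: to be frozen
    nn.ReLU(),
    nn.Linear(32, 16),  # dynamically programmable: learns in the field
    nn.ReLU(),
    nn.Linear(16, 8),
)

for p in net[0].parameters():
    p.requires_grad = False  # the predefined set never changes

# an optimiser built over only the trainable parameters leaves the
# predefined weights intact during learning
trainable = [p for p in net.parameters() if p.requires_grad]
```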

There may be a plurality of output layer internal feedback nodes and a respective plurality of input layer internal feedback nodes.

A respective one or more of the plurality of output layer internal feedback nodes may be encoded as a symbolic representation (symbol) for a respective one of:

the entity's current worldview,

the entity's 'selfview' being the nature and status of 'self',

the entity's current train of thought,

nature and status of the entity's past actions,

nature and status of the entity's plans,

nature and status of the entity's goals.

The artificial intelligence system can be for controlling an autonomous entity apparatus.

The at least one node which is a symbolic representation (symbol) for 'self' may have within at least part of its definition an association with an autonomous entity apparatus to be controlled by the artificial intelligence system.

The input layer of nodes may comprise at least one tool input node for accepting data from a respective peripheral device external to the neural network. The output layer of nodes may comprise at least one tool output node for supplying data to a respective peripheral device external to the neural network.

The peripheral device may be for assisting with operation of the artificial intelligence system. The artificial intelligence system may comprise the peripheral device. The peripheral device may comprise a computer readable memory device. The peripheral device may comprise a computer, say one configured to carry out a particular task, calculation, or type of calculation.

In some circumstances the neural network may be configured to make decisions as to when to call for input/assistance from a respective peripheral device.

There may be feedback connections within the neural network, other than those between the output layer and the input layer. At least some of these may be considered to represent subconscious processes. Subconscious processes may include:

i. perceiving changes to the worldview (if any),

ii. perceiving changes to the selfview (if any),

iii. triggering memories and other associations,

iv. performing unconscious actions (eg. maintaining balance),

v. evolution of ideas including possible goals or plans.

In some embodiments, the system is arranged so that at the end of each decision cycle, the system will have:

i. activated zero, one or more of the actuators that correspond to the direct actions at its disposal (and these signals will be fed back as inputs to the next decision cycle),

ii. updated zero, one or more of its thoughts (which will be inputs to the next decision cycle), which may include the entity's worldview, selfview, actions, plans, goals and other concepts,

iii. updated inputs to any peripheral devices that are present.

In some cases the system is arranged so that at the end of each decision cycle, the system will have:

made zero, one or more changes to its 'memory' (which will also be an input to the next decision cycle) and/or

made zero, one or more changes to its 'questions' (which will be an input to other systems).

The frequency of the decision cycles may be fixed or variable. It may be desirable, although it is not mandatory, to permit each cycle to complete, including propagating feedback, before the next one starts. It may be desirable to 'gate' the input and/or output signals. It may be desirable to avoid cyclic graphs of neural connections. Notwithstanding this, the decision cycle frequency needs to be consistent with the entity's environment and goals; typically this will be at least once per second but may be much faster especially in simple entities.
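The following sketch illustrates, in software and under assumed names and sizes, the decision cycle described above: inputs are gated into a snapshot, propagated once, and the actuator (AO) and thought (TO) outputs are fed back to the corresponding actuator feedback and internal feedback input nodes, so that cycles continue even when no external sense data arrives. The patent targets a hardware-implemented network; this numpy analogue is illustrative only.

```python
# Illustrative software analogue of the decision cycle (all names and
# shapes are assumptions). One hidden layer; outputs split into actuator
# outputs (AO) and thought outputs (TO), both fed back as inputs.
import numpy as np

N_SENSE, N_ACT, N_THOUGHT, N_HIDDEN = 8, 4, 6, 32
rng = np.random.default_rng(0)
W_in = rng.normal(size=(N_HIDDEN, N_SENSE + N_ACT + N_THOUGHT))
W_out = rng.normal(size=(N_ACT + N_THOUGHT, N_HIDDEN))

act_fb = np.zeros(N_ACT)          # actuator feedback input nodes
thought_fb = np.zeros(N_THOUGHT)  # internal feedback (thought) input nodes

def decision_cycle(sense):
    global act_fb, thought_fb
    x = np.concatenate([sense, act_fb, thought_fb])  # gated input snapshot
    h = np.tanh(W_in @ x)
    out = np.tanh(W_out @ h)
    ao, to = out[:N_ACT], out[N_ACT:]
    act_fb, thought_fb = ao.copy(), to.copy()  # feedback for the next cycle
    return ao  # signals towards the actuators

for _ in range(5):                     # runs with zero external input:
    decision_cycle(np.zeros(N_SENSE))  # internal feedback alone drives it
```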

The system may comprise at least one override control to allow a human operator to intervene and stop the system carrying through an action which has been decided upon at the end of a decision cycle.

The override control may be arranged to block signals from one or more respective actuator node reaching the respective actuator.

The system may be arranged to allow a plurality of parallel conscious streams of thought.

Three potential approaches are considered, which may be used individually or in combination according to design. The first approach involves including sufficient abstraction capability in the system's problem definition set that it is effectively asking 'What shall I do now?' for two specific but discrete actions, such as 'drive a car' and 'write an email'. The second approach supports this by duplicating sections of the subconscious processing so that they can be directed to the different questions; for example, hand-eye coordination. The third approach is a type of interlacing of the two thoughts where a controller circuit enables the two parallel thoughts to alternately access the entity's senses, thinking and actions; feedback would also be interlaced, but also provide crossover between the trains of thought so the single entity is having both thoughts, rather than being two individuals time-sharing a brain. Thus the system may be arranged such that the neural network is capable of processing multipart objectives, and/or the neural network may be arranged to have duplicated portions such that a first portion can handle a first issue whilst a second portion can handle a second issue and/or the system may be configured to control the neural network to time share resources between two issues.
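A minimal sketch of the third, interlacing, approach follows. The crossover factor, stream representation and function names are assumptions intended only to show how a controller might alternate two thought streams over a shared network while mixing a little of each stream's feedback into the other.

```python
# Sketch of the interlacing approach (assumed names): a controller
# alternates two thought streams' access to the shared network, while a
# crossover term mixes each stream's feedback into the other so one entity
# has both thoughts rather than two entities time-sharing a brain.
import numpy as np

def interlaced_cycles(step_fn, n_cycles, crossover=0.1):
    streams = [np.zeros(6), np.zeros(6)]  # thought feedback per stream
    for cycle in range(n_cycles):
        active, other = cycle % 2, (cycle + 1) % 2
        mixed = (1 - crossover) * streams[active] + crossover * streams[other]
        streams[active] = step_fn(mixed)  # one decision cycle for this stream
    return streams

# step_fn would be one decision cycle of the shared network, for example
# a function like the decision_cycle sketch given earlier.
streams = interlaced_cycles(lambda t: np.tanh(t + 0.5), 10)
```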

The neural network may be arranged to have a plurality of, parallel, say replicated, portions such that each portion can handle its own respective issue whilst the other portions handle their own respective issues and/or the system may be configured to control the neural network to time share resources between a plurality of issues.

The system may be arranged to switch between a conscious mode of operation and a non-conscious mode of operation.

According to another aspect of the present invention there is provided an artificial intelligence system comprising:

a non-linear classifier system at least part of which is hardware implemented, the non-linear classifier system having inputs and outputs and being configured to repeatedly propagate its inputs to its outputs with each propagation being termed a decision cycle,

wherein the non-linear classifier system has encoded within it a symbolic representation (symbol) for 'self' which has within at least part of its definition an association with said at least part of the non-linear classifier system which is hardware implemented and

wherein at least one of the outputs is connected to at least one of the inputs such that internal feedback data may be provided to said at least one of the inputs and the non-linear classifier system is configured to carry out successive decision cycles in response to said internal feedback data irrespective of whether any data external to the non-linear classifier system is provided to the inputs.

According to another aspect of the present invention there is provided an autonomous entity apparatus comprising an artificial intelligence system and at least one actuator for accepting control signals from the artificial intelligence system,

the artificial intelligence system comprising:

an artificial neural network at least part of which is hardware implemented, the neural network being a multi-layer neural network having input layer nodes, output layer nodes and at least one hidden layer of nodes, the neural network being configured to repeatedly propagate its inputs to its outputs with each propagation being termed a decision cycle,

wherein the neural network has encoded within it at least one node which is a symbolic representation (symbol) for 'self' which has within at least part of its definition an association with at least one physical part of the autonomous entity apparatus,

wherein at least one of the output layer nodes is connected to at least one of the input layer nodes such that internal feedback data may be provided to said at least one of the input layer nodes and the neural network is configured to carry out successive decision cycles in response to said internal feedback data irrespective of whether any data external to the neural network is provided to the input layer nodes.

According to a further aspect of the present invention there is provided an autonomous entity apparatus comprising an artificial intelligence system as defined above and at least one actuator for accepting control signals/data from a respective actuator node of the neural network. The autonomous entity apparatus may further comprise at least one sensor for feeding information to a respective sensory input node of the neural network.

The autonomous entity apparatus may comprise at least one peripheral device of the type mentioned above.

Where the autonomous entity apparatus comprises at least one actuator, the at least one actuator may be physically separated, say geographically separated, from the artificial intelligence system.

Where the autonomous entity apparatus comprises at least one sensor, the at least one sensor may be physically separated, say geographically separated, from the artificial intelligence system.

The autonomous entity apparatus may for example be implemented as a robot, a drone, a vehicle or any other appropriate device. The at least one actuator may, for example, comprise one or more of:

sound generation apparatus;

light generation apparatus;

image generation apparatus;

a control system for controlling part of the autonomous entity or an external device;

a motion generation system, and so on.

Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings in which:

Figure 1 shows an autonomous entity apparatus including an artificial intelligence system and in the form of a robot;

Figure 2 shows a second autonomous entity apparatus including an artificial intelligence system and in the form of a vehicle;

Figure 3 shows a third autonomous entity apparatus including an artificial intelligence system and in the form of a drone;

Figure 4 shows in isolation and in more detail, but still at a schematic level, an artificial intelligence system of the type used in the autonomous entity apparatuses shown in Figures 1, 2 and 3;

Figure 5 shows a schematic high level architecture of a central chip of the artificial intelligence system shown in Figure 4; and

Figure 6 shows at a more detailed level the architecture of the central chip of the artificial intelligence system shown in Figure 4.

Figure 1 shows an autonomous entity apparatus in the form of a robot. The robot comprises a control system 1 which as a key component comprises an artificial intelligence system 2. The robot further comprises a number of sensor units 3 which can provide inputs into the artificial intelligence system 2 and a number of actuators 4 which can be controlled by the artificial intelligence system 2 as part of the overall control system 1. In the present embodiment the sensor units 3 include a camera and a temperature sensor for use by the robot. Of course it will be appreciated that many other sensor units might be provided in addition to, or as alternatives to, these in particular examples.

Similarly in the present embodiment the robot includes a number of different devices amongst its actuators 4. These range from the simple, such as a light that may be lit in certain circumstances, to more complex actuators such as a transport system or a robot arm. Further of course, in at least some circumstances, a particular part of the robot might include both sensor elements and actuator elements. In general terms, each of these will be connected back to the control system 1 for control under the overall control of the artificial intelligence system 2.

Figure 2 shows an autonomous entity apparatus in the form of a vehicle.

Again this comprises a control system 1 and an artificial intelligence system 2. In this instance the artificial intelligence system 2 is outside of the control system 1 but is in communication therewith. Again the autonomous entity apparatus comprises a plurality of sensor units 3 as well as a plurality of actuator units 4. In this case the plurality of sensor units include proximity sensors for use in sensing objects outside of the vehicle and the actuator units include a light, a steering actuator and a braking system. In at least some circumstances there may be a plurality of connections between the artificial intelligence system 2 and the control system 1. As such there may be a dedicated connection for each sensor or actuator unit, or for example for groups of sensor units and actuator units as appropriate. In other alternatives, data concerning each of these may be carried over a single connection between the artificial intelligence system 2 and the control system 1 and handled appropriately at each end of this connection.

Figure 3 shows an autonomous entity apparatus in the form of a drone. Here there is a distributed system with parts of the system in the drone unit D and parts of the system in the base station B.

Again the autonomous entity apparatus in Figure 3 comprises a control system 1 and an artificial intelligence system 2. In the present embodiment the artificial intelligence system 2 is provided in the base station B whereas the control system 1 is provided in the drone unit D. As will be appreciated this separation is not essential but is a possibility, and hence it is illustrated in the present autonomous entity. Here the idea is that although the artificial intelligence system 2 is provided in the base station B, the base station B and drone unit D together form an autonomous entity which can be considered as a whole. Thus as part of the definitions provided in the artificial intelligence system 2, the drone unit D and base station B are considered as part of the same entity. Here again the system includes sensor units 3 and actuator units 4. In this case the sensor units include a camera and the actuator units include a light. Furthermore a transceiver unit 5 is provided in the base station B and a corresponding transceiver unit 5 is provided in the drone unit D to allow communication therebetween.

As described above, each of the autonomous entity apparatuses shown in Figures 1, 2 and 3 includes an artificial intelligence system 2. Such an artificial intelligence system 2 is shown, still schematically, but in more detail in Figure 4. The artificial intelligence system 2 comprises a central chip 6 (which will be described in more detail below) as well as peripheral devices 7 connected to the central chip. The peripheral devices 7 may, for example, include: a memory system, a dedicated computer system for carrying out particular types of calculation or analysis, a communication unit for communicating outside of the artificial intelligence system 2. This communication may for example be via a computer network such as a local area network, a wide area network or the internet and so on. In principle any peripheral device which is useful to the artificial intelligence system 2 can be provided. These may be selected bearing in mind the type of autonomous entity apparatus with which the artificial intelligence system 2 is intended to be used. Thus, for example if the artificial intelligence system 2 is to be used in a drone system such as shown in Figure 3, the peripheral units might include a connection to a source of weather data and/or a system for processing such weather data and providing an output to the central chip 6.

In the present systems the central chip 6 carries a neural network and at least a part of that neural network is implemented in hardware directly on the central chip 6. In a preferred case, substantially the whole of the neural network may be implemented in hardware on the chip 6. In some cases however there may be part of the neural network which is implemented in software alongside the hardware portion. Note that the central chip 6 is a physical, hardware component carrying a hardware implemented neural network. That said, parts of the system, including say the peripherals and control systems, may be software implemented on general purpose computing devices including a processor, computer readable memory and so on; indeed part of the neural network may be implemented by software running on such general purpose computing devices.

Figure 5 shows schematically, at a high level, the architecture of the central chip 6 in the present embodiment. As mentioned above, this comprises a neural network 8, the nodes 81 and connections 82 of which are shown in Figure 5. The neural network 8 is a multilayer neural network having an input layer of nodes 81a, an output layer of nodes 81b and hidden layers of nodes 81c.

At this high level the input layer nodes can be considered in three groups: actuator nodes AI, sense nodes SI and thought nodes TI. Similarly the nodes in the output layer can be considered to fall into two groups: actuator nodes AO and thought nodes TO.

The sense nodes SI are arranged to accept inputs from the sensor units 3 provided in the autonomous entity of which the central chip forms a part (or alternatively to accept input data from sensors (or elsewhere) external to the autonomous entity where that is appropriate). Similarly the actuator nodes AO in the output layer are arranged for sending control signals and data to the actuator units 4 provided in the autonomous entity apparatus of which the central chip 6 forms a part.

Outputs from these output layer actuator nodes AO are also connected back into the input layer actuator nodes AI as feedback. Thus whilst the output of the output layer actuator nodes AO is fed towards the actuators themselves for causing actuation, information concerning these actuation instructions, states and so on is fed back into the input layer of the neural network.

Separately from this the outputs of the thought nodes TO in the output layer are connected back into the thought nodes TI in the input layer to provide internal feedback. That is to say these connections represent outputs of the neural network which are not intended to cause changes externally to the artificial intelligence system 2 but rather represent internal states or data which can be considered to be thoughts of the artificial intelligence system 2. One of the nodes in the hidden layers is a symbolic representation (symbol) for 'self'. As is well understood in neural networks, each node in a neural network is representative of some symbol in the system. In this case having a node which is representative of 'self', and which has as at least part of its definition an association with the entity of which the artificial intelligence system forms a part, including an association with the actual hardware implementation of the neural network, gives the neural network a concrete definition of 'self', or what it itself is. It is considered that such a definition of self within the neural network, which is tied to the physical entity apparatus/hardware of the neural network itself, can provide one aspect of what might be considered to constitute consciousness.

The neural network 8 is arranged to continually propagate its inputs to its outputs so as to perform continuous decision cycles. Furthermore, these decision cycles take place irrespective of when any new stimulus external to the artificial intelligence system is provided - that is, irrespective of whether any new inputs are received via the sense nodes SI in the input layer. Thus during these cycles internal feedback may continue to generate different outputs at the output layer which might subsequently lead to signals being sent to actuators. This continual processing irrespective of external input can be considered equivalent to thought processes.

Figure 6 shows a more detailed schematic architecture of the central chip 6.

In this more detailed architecture, as well as actuator nodes AI, sense nodes SI and thought nodes TI in the input layer, there are also input tool nodes TLI and similarly in the output layer of nodes there are output tool nodes TLO. These input and output nodes TLI, TLO provide connection to the types of peripheral devices 7 mentioned above in relation to Figure 4. Thus for example the tool nodes TLI, TLO can provide connections to memory and special purpose computing apparatus and so on.

Furthermore in this more detailed architecture the sense nodes SI are broken down into two separate groups, those which accept raw signal data and those which accept pre-processed data. Furthermore the thought nodes are separated into different sets based on the categories of thoughts to which the nodes relate. Thus in the present example there are groups of nodes regarding the entity's worldview, selfview, actions, plans, goals as well as other thoughts.

Particular symbols within the hidden layers are shown in this more detailed architecture. Again of course these are merely examples. Thus there may be nodes in the hidden layer which correspond to direct actions which may be carried out by the central chip 6 by virtue of sending appropriate symbols to respective actuator output nodes AO. These nodes are indicated in the diagram by 'DirAct1' and so on. Further in the diagram there are illustrated nodes which correspond specifically to the concepts of plans, goals, objects, actions and vocabulary. Again note these are merely indicative and the system may be set up with whichever symbols are appropriate for the use of the artificial intelligence system 2.

Although not shown in the diagram, a human override may be provided between the output of the output layer actuator nodes AO and the output of the artificial intelligence system 2. That is to say in certain circumstances by pressing a button or other appropriate action, a human can prevent actions being carried out which the artificial intelligence system has otherwise determined should be carried out. Note that by virtue of the sensor units which the autonomous entity associated with the artificial intelligence system 2 will likely possess, and/or the actuator feedback, the neural network 8 will often be able to sense that its actions have been overridden. This may in appropriate circumstances cause learning of new behaviour by the neural network over time so as to not continuously repeat decisions to carry out certain actions which are then overridden.

In considering these diagrams as well as the description it will of course be appreciated that these are only example implementations and only schematic representations of what in practice may be extraordinarily complex neural network systems with many layers and many nodes and connections. It will also be appreciated that characteristics of the artificial intelligence system 2, such as its input bandwidth, number and configuration of nodes and connections, processing speed and output bandwidth, may determine its level and nature of consciousness.

In general terms standard methods for the construction and training of the neural network 8 of the artificial intelligence system 2 may be used. Further, various nodes and connections in the neural network 8 may be predefined and non-configurable when the artificial intelligence system 2 is provided to a customer/installed in the autonomous entity apparatus, whereas others of the nodes and connections may be reconfigurable to allow learning.

As mentioned above various different peripheral devices 7 are provided as part of the artificial intelligence system 2. This is in part in recognition of the fact that whilst the main neural network 8 will be powerful for carrying out some processing and delivery of artificial intelligence capabilities, it will not provide the best solution for certain problems, such as complex maths problems. In such a case a dedicated calculator machine may be provided as a peripheral device to enhance this capability.

The provision of different peripheral devices may also be useful for providing different levels of capability, performance and perhaps different levels of consciousness.

The present systems may be arranged to enable parallel processing of multiple thoughts. This might be achieved by abstraction of the problem and/or selected partitioning of the neural network and/or enabling interlaced processing where the neural network is allowed to work on one problem in one set of blocks of time and another problem in another set of blocks of time.

The artificial intelligence system may be arranged to allow the selective activation and deactivation of any conscious-like behaviour that the neural network is able to provide. This might be achieved by switching the neural network out of operation altogether or altering its functioning.

It is desirable that the autonomous entity be able to receive upgrades and/or new peripherals without loss of continuity of consciousness, assuming that consciousness can be achieved. The use of peripheral devices outside of the main neural network is one way to help with this.

The artificial intelligence system may be arranged so as to allow control of whether the neural network can learn. The artificial intelligence system may be arranged so that learning is allowed in respect of some aspects of operation but not others. Alternatively or in addition, the artificial intelligence system 2 may be provided with means for a user to enable and disable learning on an ad-hoc basis or for a timed period or permanently, or so on.

Whilst the above description has been written in terms of the artificial intelligence system being implemented by virtue of a neural network as shown in Figures 5 and 6, in alternatives other architectures may be used; for example, another form of non-linear classifier, if available, may be used rather than a neural network to provide similar functionality.

It is considered by the inventor that implementing at least part of the central chip 6 in hardware is a requirement for obtaining consciousness. The inventor believes that a software implementation of the hardware would constitute only a simulation of the conscious entity. Thus in some circumstances a switch between conscious and nonconscious operation might be achieved by switching between processing being carried out by the hardware implementation and processing being carried out by a software based auxiliary system.

It can be considered that the core function of the artificial intelligence system 2 and in particular the central chip 6 is to decide repeatedly "What shall I do now?" based on a plurality of inputs and to enact directly its decision through a plurality of actuators.

It should be noted that whilst in general terms the operation of the neural network consists of a propagation of its inputs to form new outputs, with this propagation proceeding in a left to right direction as shown in Figures 5 and 6, there may be feedback loops within the neural network itself and also there may be connections between different layers rather than just between the adjacent layers as shown in Figures 5 and 6.

Note that in a preferred embodiment, the central chip 6 has a preprogrammed neural network configuration including neural weightings consistent with a plurality of predefined symbols that enable key features from its environment to be interpreted, as well as other predefined connections and neural weightings that result in desired initial 'behaviour'. This is complemented by a different plurality of nodes and connections that are dynamically programmable such that they can enable the entity's general purpose learning.

Examples of what could be considered conscious thoughts of a system, or what might also be termed internal feedback, may include without limitation:

(i) one or more symbols that represent the entity's worldview (and as appropriate the place of self within it)

(ii) one or more symbols that represent the entity's selfview (including the nature and status of self)

(iii) one or more symbols that represent the entity's current train of thought (which may include a verbal representation of the same)

(iv) one or more symbols that represent the nature and status of the entity's past actions (if any)

(v) one or more symbols that represent the nature and status of the entity's plans (if any)

(vi) one or more symbols that represent the nature and status of the entity's goals (if any)

The neural network will in practice likely include feedback connections of different types from layers other than the output layer to layers other than the input layer. These can be considered as the entity's subconscious processes including:

(i) perceiving changes to the worldview (if any)

(ii) perceiving changes to the selfview (if any)

(iii) triggering memories and other associations

(iv) performing unconscious actions (e.g. maintaining balance)

(v) evolution of ideas including possible goals or plans

As will be appreciated, by the end of each decision cycle the neural network will have:

(i) activated zero, one or more of the actuators that correspond to the direct actions at its disposal (and these signals will be fed back as inputs into the next decision cycle)

(ii) updated zero, one or more of its thoughts (which will be input to the next decision cycle) which may include the entity's worldview, selfview, actions, plans, goals or other concepts

(iii) updated inputs to any tools, that is peripheral devices, that are present,

(iv) made zero, one or more changes to its memory (which may also be an input into the next decision cycle),

(v) made zero, one or more changes to its questions, which will be an input into other systems, i.e. peripherals.

Note that it would be possible to provide an autonomous entity apparatus without necessarily all of the above described functionality, but such a device would generally have more limited commercial utility. For example a machine which was capable of the type of thought processes which could be considered to be conscious thoughts discussed above could be produced even if it were not able to receive new sensory information and/or even if it could not provide any output.

There follows some additional supporting and/or background information.

a) Symbolic representations ('symbols') are the accepted interpretation of intermediate ('hidden layer') and particularly final layer classification outcomes of neural networks. When being trained, a neural net can have target symbols pre-defined or can be left to decide for itself how to classify inputs. While some of the symbols within the central chip may be dynamically determined (and hence unknown a priori), it is mandatory for the symbol 'self' and potentially advantageous for certain other symbols to be known to be present and functional, ie. correspond well in practice to their intended purpose. Potentially, a central chip could be designed to cause these symbols to arise after an initial period of operation, resulting in limited consciousness and/or utility until then.

i. The symbol for 'self' is essential for the consciousness properties of self-awareness and self-reference. Depending on the context in which this symbol is associated (used) within the central chip, its natural language interpretation may be 'I', 'me', 'my' etc.

ii. The central chip's possible direct actions are those over which it has direct control because they are wired to the central chip, as opposed to the intended outcome. For example, if the entity has a robotic arm that can move up or down, the direct action is 'activate move up', not 'move up', because the action may not work for some reason, eg. the movement is blocked by a solid object. Direct actions can take diverse forms as long as the symbols are configured appropriately to correspond. In the simplest case, each action option will be binary (eg. activate or don't activate), but actions could also be multi-option (eg. move joystick in given direction or not at all) and additionally have a digital or analogue associated value (eg. power level 3 out of 10); a sketch of such an encoding follows item iii below.

iii. Other pre-programmed symbols are expected to be useful in many applications to reduce the learning time of the entity.

Goals are likely to be important in commercial applications.

Plans may be a map from goals to direct actions and it could be helpful to pre-program certain pre-trained sequences, for example to give a hand wave. Applications might also benefit from defining symbols for passive objects (eg. furniture, route map), active objects (eg. owner, friend), relevant actions (eg. ring doorbell, turn on lights) and other vocabulary (eg. start, stop, good, bad).
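As referenced in item ii above, direct actions of the kinds described (binary, multi-option, with an optional digital or analogue value) might be encoded as follows. The class, field names and example actions are illustrative assumptions, not taken from this specification.

```python
# Hypothetical encoding of direct actions: binary activation, multi-option
# selection, and an optional analogue value. Names are illustrative.
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class DirectAction:
    name: str                      # eg. 'activate move up'
    options: Sequence[str]         # eg. ('off', 'on') or joystick directions
    selected: int = 0              # option index chosen this decision cycle
    value: Optional[float] = None  # eg. power level 3 out of 10 -> 0.3

arm = DirectAction('activate move up', ('off', 'on'), selected=1)   # binary
stick = DirectAction('joystick', ('none', 'n', 's', 'e', 'w'),
                     selected=2, value=0.3)                # multi-option
```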

b) The frequency of the decision cycles may be fixed or variable. It may be desirable, although it is not mandatory, to permit each cycle to complete, including propagating feedback, before the next one starts. It may be desirable to 'gate' the input and/or output signals. It may be desirable to avoid cyclic graphs of neural connections. Notwithstanding this, the decision cycle frequency needs to be consistent with the entity's environment and goals; typically this will be at least once per second but may be much faster especially in simple entities.

c) The feedback of the actuators' status is useful for the self-awareness aspect of consciousness, as well as for obvious reasons of control.

d) Sensory inputs are not strictly necessary for consciousness, but are likely to be needed at least for certain periods in commercial applications. Basic consciousness can have very simple inputs, potentially a single bit channel. Commercial applications will include entities with at least human-level audio-visual capabilities as well as other sensory abilities. Given the high bandwidth of this data it may be desirable to perform a significant amount of pre-processing before submitting the data to the central chip. Such pre-processing may include established techniques for image analysis, including AI pattern recognition; this would enable the pre-processor to input directly in the form of symbols that are pre-defined in the central chip architecture, eg. human being, stop sign. The pre-processing will not form part of the consciousness of the central chip, regardless of whether it is done in hardware or software; as such it may be useful for some form of (relatively) raw data, eg. the image, to be provided to the central chip in addition to the pre-processed data so that the central chip does consciously perceive the full environment and also has available the information that it may use to derive other interpretations.
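The following sketch illustrates the pre-processing arrangement just described: an assumed external recogniser maps raw data onto pre-defined symbol input nodes, while a reduced raw channel is passed alongside. The symbol set, threshold and downsampling factor are assumptions for illustration.

```python
# Sketch (assumed interface): a pre-processor maps raw high-bandwidth data
# to symbols pre-defined in the central chip (eg. 'human being',
# 'stop sign'), plus a downsampled raw channel so the chip still perceives
# the environment directly.
import numpy as np

SYMBOLS = ('human being', 'stop sign')  # assumed pre-defined input symbols

def preprocess(image, detector):
    """detector: any recogniser returning one score per symbol (assumed)."""
    scores = detector(image)                      # eg. an off-chip neural net
    assert len(scores) == len(SYMBOLS)
    symbol_inputs = (scores > 0.5).astype(float)  # one input node per symbol
    raw_channel = image[::8, ::8].ravel() / 255.0 # (relatively) raw data
    return np.concatenate([symbol_inputs, raw_channel])
```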

e) Feedback of the entity's thoughts is useful for the entity to maintain a train of thought, ie. respond to its evolving inner state as well as to its changing sensory inputs. In principle, any or all of the central chip's symbols could have a feedback loop; however, there are several key symbols particularly relevant to conscious self-awareness:

i. To have a representation of the world including how the self relates to it is important to consciousness; this is called the worldview and relates only to the world of which the entity is aware which could be a virtual reality or a limited environment (eg. a room or a house) or an essentially unlimited context. Note that the self may be unseen, for example if the entity is connected to the controls of a video game but is not the entity in the game.

ii. To have a representation of the nature and status of self; this is called the selfview, which may include feelings such as sensations and emotions that are not crystallized into conceptual thoughts. For example a sensation of being stressed is different to, and in general more complex than, the captured thought 'I am stressed'.

iii. In the context of an entity that decides 'What shall I do now?', some immediate history of actions and/or summary of past actions will be desirable and this is readily implemented within the proposed architecture.

iv. In the context of an entity that decides 'What shall I do now?', it may be expedient to implement the concept of a plan as a potential sequence of actions; this could include 'macro' sequences, comprised of a plurality of individual direct actions, or other abstractions. In some implementations, it will be important for the entity to be able to evaluate potential plans before committing to put them into effect.

v. In the context of an entity that decides 'What shall I do now?', a goal is a purpose, reason or target for its actions. Whereas consciousness evolved to avoid danger and seek food and other rewards, artificial conscious entities may be in environments where they are directly cared for and their purpose is conceptual. It is possible that entities may be implemented with a pre-programmed goal or alternatively have to either decide autonomously or receive later instructions about what to do. In any of these cases, it is desirable that goal-related symbols and/or values are pre-programmed, either to encode the entity's purpose or to provide a 'vocabulary' for thinking and/or receiving instructions.

f) Sensory pre-processing is not the only type of additional computation that could be performed externally to the central chip to enhance its capabilities in an efficient manner. Any of these auxiliary systems could be triggered by conscious thought of the central chip (as shown in Figure 6), or could tap neural settings and provide inputs which would seem to be 'hunches' (not shown). For example:

a. Neural nets are known to be inefficient at certain types of memory function, so a complementary tool may make the central chip more efficient.

b. Neural nets are known to be inefficient at mathematical functions so a standard calculator could be made available. The interface for the central chip to specify a calculation could be a special set of neurones that code to characters of mathematical equations and calculator functions like AC and M+; both the full text and the computed answer could be available as subsequent inputs to the central chip. Other uses of this type of architecture may include to provide language translation or quantum computing capabilities.
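A software sketch of the character-coded calculator interface suggested in item b. follows. The class, the use of '=' as the trigger key and the error handling are assumptions for illustration; an actual peripheral would be a hardware unit addressed via the tool nodes TLI/TLO, one character code per decision cycle.

```python
# Sketch only (assumed names): tool output nodes (TLO) drive one character
# per cycle; the peripheral evaluates the expression and presents both the
# full text and the computed answer on tool input nodes (TLI) in a later
# decision cycle.
class CalculatorPeripheral:
    def __init__(self):
        self.buffer = ''

    def write(self, char):
        if char == '=':          # '=' plays the role of the enter/AC key
            try:
                result = str(eval(self.buffer, {'__builtins__': {}}))
            except Exception:
                result = 'ERR'
            text, self.buffer = self.buffer, ''
            return text, result  # both fed back via TLI nodes
        self.buffer += char      # accumulate the equation, char by char
        return None

calc = CalculatorPeripheral()
for c in '3*7':
    calc.write(c)
print(calc.write('='))  # ('3*7', '21')
```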

g) Subconscious processing is in one respect similar to conscious processing - it uses inputs to form outputs via a hardware neural network; the process is not conscious, however, because it does not have direct action capability - the intervening conscious process can accept, modify or reject any 'suggestion'. Furthermore, in terms of learning:

a. The subconscious is capable of learning aligned to automatic or subliminal goals, for example a conditioned response resulting from repeated association, such as feeling the urge to study in Spring, due to impending Summer exams in previous years.

b. The subconscious is additionally trainable by the conscious process via 'feedback' on the adequacy of its 'suggestions'. For example, a subconscious process may want to scream because it has perceived a spider, but the conscious process has to suppress this due to being in a social setting; after repeated events, the subconscious may learn not to suggest screaming just because a spider is perceived.
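The accept/modify/reject gating and feedback described in this item might be sketched as follows. The rule, labels and feedback values are purely illustrative assumptions, not the patent's design.

```python
# Illustrative sketch: a conscious gate accepts or rejects a subconscious
# 'suggestion' and returns a feedback score that could be used to retrain
# the subconscious process over repeated events.
def conscious_gate(suggestion, social_context):
    # hypothetical rule: suppress a 'scream' suggestion in company
    if suggestion == 'scream' and social_context == 'in company':
        return None, -1.0    # rejected; negative feedback to the subconscious
    return suggestion, +1.0  # accepted; positive feedback

action, feedback = conscious_gate('scream', 'in company')  # (None, -1.0)
```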

h) As the central chip is hardware based, the outcomes of each decision cycle may be asynchronous, notwithstanding the care needed to ensure that all relevant signals have propagated. One implementation option is to use a clock cycle to gate the following actions once the neural network is stable (a sketch of this settle-then-latch option follows item iii below).

i. As discussed in a) ii. above, the direct actions are those that are wired directly to the central chip, so what happens at the end of each decision cycle is that any selected actions are initiated; there may be delay in the action happening including for electromechanical reasons. The action could be a finite one such as toggle an LED or an indefinite one such as illuminate an LED; as with any computer system careful consideration is needed to engineer timings. Conversely the actions that are not selected to happen will have their deactivation initiated (if previously activated).

ii. The information bandwidth in the thought feedback loop will correlate with the overall quality of the entity's thought process. Note that while the suggested thought elements of the entity's worldview, etc. have a direct interpretation in reality, other symbols in the output layer may not.

iii. While neural networks are a general computing platform and can in principle compute anything that humans can, they are not optimized for certain types of calculation, such as floating-point arithmetic. Creating an interface to other computing systems may be a more efficient way to enhance the entity's capabilities than scaling the entire central chip and/or other system features. The external system could be passive, such as memory; have simple functionality, such as a counter or timer; or provide more complex services, including calculator functions, language translation and/or quantum computing algorithms. More advanced capabilities may need more than one decision cycle to deliver their result, and appropriate circuitry and flags can be used to address this.
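
The following sketch illustrates points i. and iii. above under simplifying assumptions (the class names, cycle counts and a dictionary standing in for real actuators are all hypothetical): action initiation and deactivation are gated at the end of each decision cycle, and a slower external service signals completion via a ready flag rather than within a single cycle:

    # Hypothetical sketch of points i. and iii.: actions are latched on a
    # clock edge once the net is stable, and a slow external service
    # signals completion via a ready flag.
    class SlowService:
        def __init__(self, cycles_needed=3):
            self.cycles_needed = cycles_needed
            self.countdown = 0
            self.result = None

        def request(self, job):
            self.countdown = self.cycles_needed
            self.result = job()           # computed now, released when ready

        def poll(self):
            # Returns (ready_flag, result); result is only valid when ready.
            if self.countdown > 0:
                self.countdown -= 1
                return False, None
            return True, self.result

    def end_of_cycle(selected_actions, all_actions, actuators):
        # On the gating clock edge: initiate the selected actions and
        # deactivate the rest (if previously activated).
        for action in all_actions:
            actuators[action] = action in selected_actions

    actuators = {}
    service = SlowService()
    service.request(lambda: 6 * 7)
    for cycle in range(5):
        ready, result = service.poll()
        selected = {"blink_led"} if ready else set()
        end_of_cycle(selected, {"blink_led", "beep"}, actuators)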

Human override of conscious entity's actions

Given that the signals leave the central chip to arrive at the actuator, it is straightforward to insert a break circuit that disables the signal when an override is pressed. This may be important if a new central chip specification is being tested or if an entity is being trained to use a critical system (in the same way as driving instructors use dual-control cars).
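
In its simplest form, the break circuit described above amounts to ANDing the chip's signal with the inverse of the override line; a short sketch (the function name is hypothetical):

    # Illustrative sketch: a human-operated break circuit on the
    # actuator path.
    def actuator_drive(chip_signal, override_pressed):
        # The central chip's signal passes through unless the override is
        # pressed; the chip itself is untouched and keeps 'deciding'.
        return chip_signal and not override_pressed

    assert actuator_drive(True, override_pressed=False) is True
    assert actuator_drive(True, override_pressed=True) is False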

As in the analogy of driving instruction, this override mechanism does not stop the entity from acting as it sees fit. However, feedback will relatively quickly inform the entity that it has 'lost control' of its actuators, and it may therefore change its behaviour, including stopping its activation of the actuators.

Consciousness of different levels and qualities

The central chip may provide its entity with an experience of consciousness that includes self-awareness, conscious perception, conscious thought and conscious action. Notwithstanding the likely need for some balance between these elements, it is clear that having a greater bandwidth of sensory inputs, a more extensive neural net for thought processing and a wider array of actions will provide a greater level or sense of consciousness. These parameters can in principle reach and exceed the values associated with adult humans.

Furthermore, it is anticipated that certain configurations may influence other characteristics of the entity's consciousness. For example, the processing speed of the central chip will be one of the factors that correlate with alertness, while having more symbols and connections related to others' emotions may enable the entity to possess greater empathy, subject to appropriate training.

Multiple parallel conscious streams of thought

The basic configuration of the central chip provides its entity with a single train of thought as it makes repeated decision cycles of 'What shall I do now?'. To achieve two parallel threads of thought, three potential approaches are considered, which may be used individually or in combination according to the design. The first approach involves including sufficient abstraction capability in the central chip's problem definition set that it is effectively asking 'What shall I do now?' for two specific but discrete activities, such as 'drive a car' and 'write an email'. The second approach supports this by duplicating sections of the subconscious processing so that they can be directed to the different questions; for example, hand-eye coordination. The third approach is a type of interlacing of the two thoughts in which a 'controller circuit' enables the two parallel thoughts to alternately access the entity's senses, thinking and actions; feedback would also be interlaced, but would additionally provide crossover between the trains of thought so that the single entity is having both thoughts, rather than being two individuals time-sharing a brain (a sketch of such a controller follows below). More than two parallel threads of thought might be facilitated; for example, the neural network may be arranged to have a plurality of parallel, say replicated, portions such that each portion can handle its own respective issue whilst the other portions handle theirs, and/or the system may be configured to control the neural network to time-share resources between a plurality of issues.
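
A minimal sketch of the third (interlacing) approach, assuming a simple round-robin controller and treating 'thoughts' as plain tuples (all names are illustrative):

    # Hypothetical sketch: a controller interlaces two trains of thought,
    # alternately granting each access to senses, thinking and actions,
    # while feedback from both streams is visible to both (crossover), so
    # a single entity has both thoughts rather than two entities
    # time-sharing a brain.
    class InterlacingController:
        def __init__(self, streams):
            self.streams = streams        # e.g. ['drive_car', 'write_email']
            self.turn = 0
            self.shared_feedback = []     # crossover between trains of thought

        def next_cycle(self, senses):
            stream = self.streams[self.turn]
            self.turn = (self.turn + 1) % len(self.streams)
            # The active stream sees the senses plus recent feedback from
            # ALL streams, not just its own.
            thought = (stream, senses, tuple(self.shared_feedback[-4:]))
            self.shared_feedback.append(thought)
            return thought

    controller = InterlacingController(["drive_car", "write_email"])
    for step in range(4):
        controller.next_cycle(senses={"cycle": step})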

Activation and deactivation of consciousness

In its simplest and most concrete form, this feature would be achieved by switching between hardware and software execution of the calculations embodied in the central chip; ideally all parameters would pass seamlessly between the two implementations. Depending on the implementation, it may be possible and more efficient to leave a significant part of the central chip hardware operational and re-route into software only enough of the decision cycle calculation to remove the consciousness. A software implementation of the central chip is just a simulation of its function; even when 100% accurate, it will not create a conscious experience any more than a weather simulation will create rain.

Geographical distance between central chip and entity

It will be readily seen that if the sensory inputs are received and the actuator signals are transmitted over a wireless network, then the physical entity can be located separately from its central chip. This would give continuity of consciousness even if the entity's physical form is lost or destroyed. There may be some reduction in the quality of consciousness due to the time lag between decision and action, and between sensing and perception.

Continuity of consciousness despite upgrade to or addition of new peripherals

The clear separation between the central chip and its sensory inputs on one hand and its actuators on the other means that in principle such 'peripherals' can be upgraded or added without losing continuity of consciousness, even though training is likely to be required for the central chip to adjust to the new situation. The same applies in principle to the upgrade or addition of 'tools'. For the central chip to accommodate improvements to peripherals may be no more daunting than a human looking through night-vision goggles or using an exoskeleton to carry more weight - the human brain adapts rapidly to the new situation, drawing on previous learning. By extension, the central chip can also adapt to the addition of new peripherals (senses, actuators or tools), although the training and accustomization may take longer. Furthermore, the entity may experience 'stress' if it is not adequately prepared for these changes and they interfere with decisions during the transition.

There is an underlying need for the central chip to have sufficient bandwidth for the upgraded or new peripheral, eg. a high-definition video camera as opposed to a lower-definition one. Moreover, to use the upgraded or new peripheral, the central chip needs to be capable of learning (see below), although this requirement may be offset by pre-processing of data inputs as shown in the detailed description.

Learning

Learning is not a mandatory feature. Furthermore, in some commercial applications it may be advantageous to prevent learning from occurring in order to improve the predictability of the entity's behaviour. However, in many applications it may be desirable for the entity to be able to learn.

Learning in this context means reconfiguring the central chip such that a given set of inputs and internal states will lead to a different output.

The field of artificial intelligence has developed a wide range of machine learning algorithms that could be applied to the central chip for it to learn, whether towards a directed goal or undirected goals. Such algorithms are likely to be configured to run automatically as part of the central chip's operation, in the feedback loop of each decision cycle and/or in a time allocated as 'sleeping/dreaming'. However, it would be possible to give the central chip some control over when and how learning occurs.

Also note that it is important to distinguish between learning and behaving in response to changing stimuli. Let us take the (unlikely) example of a conscious entity that has been trained to operate a grocery store checkout by taking items from a conveyor belt, locating the bar code and holding this in front of a bar code scanner until an acknowledgement beep is heard. The normal operation of this activity - perceiving items arrive on the belt, deciding how to handle them, finding the bar code and orienting it for scanning, then waiting for a success signal before placing the item in the collection area - does not constitute learning. Although the environment is changing, the entity is not 'learning' that there is a packet of biscuits to scan; it is observing it. If the same item arrives at a future point in time, it would, without learning, behave in exactly the same way. Even if the entity had previously been trained to know the names of all the items and a new item arrived, it can continue to operate as before so long as it has been trained to continue as normal with unrecognized items. An example of what would constitute learning for this entity is if it gradually observes that the packet of biscuits scans most reliably when tilted two degrees higher than it was initially trained to hold it, and so changes how it behaves when it sees packets of biscuits (see the sketch below). A learning capability could potentially also be implemented using external memory as described in h) iii. above, particularly where a neural pattern is remembered and so persists longer than the initial configuration of recurrence.
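
The distinction can be made concrete with a small sketch (the checkout entity and its tilt parameter are hypothetical stand-ins for a trained configuration): observing changes only the inputs, while learning changes the configuration itself:

    # Hypothetical sketch of the distinction drawn above: responding to a
    # changing stimulus leaves the configuration untouched; learning
    # changes it.
    class CheckoutEntity:
        def __init__(self):
            self.tilt_degrees = 0.0       # trained scanning angle

        def scan(self, item):
            # Observing: a new item changes the input, not the
            # configuration; the same item always gets the same behaviour.
            return (item, self.tilt_degrees)

        def learn_from_failures(self, extra_tilt_that_worked):
            # Learning: repeated experience reconfigures behaviour, e.g.
            # the biscuits scan more reliably tilted two degrees higher.
            self.tilt_degrees += extra_tilt_that_worked

    entity = CheckoutEntity()
    entity.scan("biscuits")               # same inputs -> same behaviour
    entity.learn_from_failures(2.0)       # behaviour changes for future scans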

Regardless of the specific implementation of learning, it will be seen that it would be straightforward to adapt the relevant circuitry to allow an external human operator to enable or disable the ability to learn for specific time periods, including indefinitely.
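
A minimal sketch of such an enable/disable control, assuming learning takes the form of weight updates computed in the feedback loop (the gate class and flat weight list are illustrative simplifications):

    # Hypothetical sketch: an external operator gates learning on or off.
    class LearningGate:
        def __init__(self):
            self.enabled = True

        def apply_update(self, weights, deltas):
            # Weight updates computed in the feedback loop are only
            # applied while the human-controlled enable line is high.
            if not self.enabled:
                return weights
            return [w + d for w, d in zip(weights, deltas)]

    gate = LearningGate()
    gate.enabled = False                  # operator disables learning
    assert gate.apply_update([0.5], [0.1]) == [0.5]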

Cloning

Where the hardware implementation of the central chip allows its configuration data, eg. neural net topography and connection weights, to be read and written, a central chip can be 'cloned' by reading its data and writing that data to a different central chip. If these two central chips are connected to equivalent peripherals, say a drone, then the second central chip will already have the same experience of and skill at flying as the first central chip.
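
A sketch of such cloning under the assumption that the configuration data can be represented as a simple topography list plus a weight mapping (the dictionary representation is illustrative only):

    # Hypothetical sketch: cloning by reading one chip's configuration
    # data (topography and connection weights) and writing it to another.
    def read_configuration(chip):
        return {"topography": chip["topography"][:],
                "weights": dict(chip["weights"])}

    def write_configuration(chip, config):
        chip["topography"] = config["topography"][:]
        chip["weights"] = dict(config["weights"])

    chip_a = {"topography": [4, 8, 4], "weights": {"w0": 0.7}}
    chip_b = {"topography": [], "weights": {}}
    write_configuration(chip_b, read_configuration(chip_a))
    # chip_b now has the flying skill that chip_a trained on the drone.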

Note that from the perspective of the two central chips, they would not know (unless reliably informed) which of them (if either) was the one that did the drone training. Conversely, from the moment that they exist independently, the differences in their experiences may cause divergence in their behaviour if they are still able to learn.

Blend pre-programmed characteristics with autonomous conscious thought

It is an implementation option to lock the weights of certain connections or regions of the neural net while leaving others to continue training, or evolving, with experience. A specific example of this is that the sensory inputs could be pre-processed. For example, an autonomous car's visual system could label cars, people and road signs in its view. This represents pre-programmed learning that could equally have been coded inside the central chip.

Nonetheless, the programmable part of the central chip can continue to learn on top of these definitions, so that, if it were appropriate, it could learn further behaviours using these inputs (a sketch of such selective weight locking follows below).
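
One way to realize this, sketched here under the simplifying assumption of a flat weight list and per-weight gradients (the frozen-mask approach is an illustrative choice, not a prescribed mechanism):

    # Hypothetical sketch: locking some connection weights while others
    # continue to train, implemented as a per-weight frozen mask.
    def train_step(weights, gradients, frozen, learning_rate=0.01):
        # frozen[i] is True for pre-programmed regions that must not
        # change; only the unfrozen (programmable) weights are updated.
        return [w if f else w - learning_rate * g
                for w, g, f in zip(weights, gradients, frozen)]

    weights = [0.9, 0.2, -0.4]
    new = train_step(weights, gradients=[1.0, 1.0, 1.0],
                     frozen=[True, False, True])
    # Only the middle (programmable) weight has moved.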

The same principle applies to other concepts in the central chip's neural net. For example, the potential to pre-program or train certain symbols is mentioned above. It is envisaged that this could be very important to a commercial application wanting a newly conscious entity to already understand the world, demonstrate certain behaviours and be able to express opinions with an extensive vocabulary. Conventional (non-conscious) artificial intelligence tools have already been developed that achieve this type of capability, and in principle, after training, the learned symbols could be encoded into a neural network with fixed connection weightings and then incorporated as part of the central chip.

Note that dynamic learning may refine or override the pre-programmed behaviours, giving the conscious entity a 'freedom' that is as great as the learning capacity of the configurable part of its neural network. The ability of the conscious entity to create a new thought and hold that as an input to subsequent thoughts will permit it to consider, and maintain, that any part of the pre-programmed information is false. Furthermore, in a preferred implementation, the programmable neural network will be complemented by hardwired memory that reduces the amount of hardware-based neural learning as an efficiency measure.

Central chip progressively upgraded

If the specification of central chips evolves as rapidly as the specification of CPUs has evolved, then there will be a clear commercial need to upgrade obsolete central chips. One option may be a process similar to cloning, where the status of an old-generation central chip is transferred to a sufficiently compatible next-generation central chip.

However, in such a process, the conscious experience of the first central chip will be terminated - even if it seems to 'wake up' in a new central chip. To address this potentially undesirable situation, the central chip could also be configured to accept modular upgrades to its neural network and other relevant components. This could permit a smoother transition to an upgraded state in which the same consciousness that is in the central chip is sustained. Ultimately this process may lead to the upgraded central chip having no physical part in common with the original central chip, but the net conscious effect, including memories and experiences, will have been preserved without discontinuity.

Some specialized control hardware may be needed to achieve the transition envisaged above. To take a simplified example, if a central chip has a 1 GB 'net' and is to be upgraded to 1 TB, then in the first place the additional 'net' must be connected to the central chip and become part of it. But some process is needed to ensure that, after a period of time, the 1 GB 'net' can be disconnected and the 1 TB 'net' accurately preserves what the original 'net' contained (a sketch of such a staged migration check follows below).
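
One possible form of that process, sketched with both 'nets' reduced to simple functions: run the old and new nets side by side and only permit disconnection once the new net reproduces the old one's behaviour on test inputs (the tolerance check and function stand-ins are illustrative assumptions):

    # Hypothetical sketch: staged migration from an old 'net' to a new
    # one. Both remain connected until the new net reproduces the old
    # net's outputs, after which the old net can safely be disconnected.
    def migration_complete(old_net, new_net, test_inputs, tolerance=1e-6):
        for x in test_inputs:
            if abs(old_net(x) - new_net(x)) > tolerance:
                return False              # keep both nets connected for now
        return True                       # new net preserves the old content

    old = lambda x: 2 * x
    new = lambda x: 2 * x                 # after a period of shadow training
    assert migration_complete(old, new, test_inputs=[0, 1, 5])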

Central chip transferred to new substrate without losing consciousness

The above approach to upgrading a central chip without losing consciousness can be extended to situations where the initial and final central chips are in different substrates, including without limitation where the first central chip is a human brain and the second central chip is implemented in a synthetic substrate. One approach will be to create an unprogrammed central chip using a synthetic substrate that can be interfaced to biological neural networks (brains). This new neural network area could be arranged as a 'peer' that progressively acquires the knowledge and behaviour of the host through the natural learning and self-organization characteristics of neural nets.

Alternatively, the new neural network area could additionally have a plurality of higher-level layers in its network, with the intention that, as the state of the biological brain migrates across to the synthetic substrate, the control centre of consciousness will automatically migrate to the higher-level layers; a comparable phenomenon is seen in human adolescence.

Such a transition may also be assisted by a virtual reality construct, guiding the centre of consciousness control from one location to another within a fused 'neural net'. This may also assist with the transition to new sets of input senses and motor actuators.

In any of these configurations, it would be possible to configure the connection between substrates to have a reversible 'trial disconnection' once the consciousness has migrated, so as to check that the process has completed satisfactorily before disconnecting the (defunct) biological entity.

Given the advantages of autonomous behaviour and conscious-like behaviour, the systems and apparatuses described here aim to reproduce these characteristics artificially. They do not necessarily aim to be the minimal implementation that would achieve such characteristics, but rather a sufficient specification that would do so in a commercially useful way.

The apparatus described is based on neural network architectures since these are well known and have certain properties that are fundamental to the artificial intelligence systems and autonomous entity apparatuses. In particular, neural networks can be used to learn conceptual representations of complex inputs. In simple systems, a single node may take a value that represents the probability that the system's current inputs correspond to a certain concept (eg. whether a picture is of a cat). In more complex systems, a target concept may have a more diffuse representation in the sense that a large number of nodes collectively evaluate and hence represent a concept (eg. the extent to which a drawn animal is cat-like); in such cases each individual component node considered in isolation may or may not represent a meaningful concept.

At least some of the arrangements described above might be considered, as a whole, to constitute a recurrent neural network (RNN). However, the configuration differs from the prior art relating to RNNs in that existing RNN architectures are used to enable the processing and prediction of ordered series, particularly time sequences, such as the sounds that make up speech. (Such an application of RNNs may be a subset of the present system.) In standard RNNs, the feedback loops create an aggregation of inputs (and partially processed inputs) to form additional inputs to the processing system, which is typically a classifier. In the present ideas, the recursive connectivity from the outputs of the system to its inputs creates continuity of processing ('train of thought') and the scope for complete autonomy, ie. self-directed behaviour over a protracted or indefinite time period, which includes continued processing in the absence of external input.

Furthermore, contemporary designs and applications of neural networks use programmatic control to determine when each decision cycle is invoked, and whether the resulting output is used to train the system by adjusting the neural network weights. For example, after training, an RNN may be presented with an audio sample to obtain its response of a text transcription. When there is no new audio sample, the RNN does nothing because it is not invoked. In contrast, the current system may at some point receive a number of audio samples through its inputs and may have behaviour that includes providing a text transcription via its output actuators, but after the audio samples stop arriving (and in between audio samples if time permits), the system will continue to process its thoughts, which may (or may not) include thoughts related to the audio samples it has been 'hearing'.

As a further example, even where a neural network in the prior art is used to repeatedly process inputs, such as the frames of a video stream, the iterations do not arise as a configuration of the network but as a result of programmatic control. In the present ideas, the neural network is depicted schematically as 'left to right' but in practice may represent an 'infinite loop' that has varying external input signals and produces varying output signals. In the case where the artificial intelligence system is implemented entirely in hardware, successive iterations of the decision cycle will be propagated as a signal from the output layer to the input layer; this arrangement may include gated signals and/or clock-driven signals. The arrangement does not preclude the use of interrupts or interlaced actions (for example to read the status of nodes for external analysis or to amend any dynamic characteristic of the network).
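
To make the feedback configuration concrete, the following minimal sketch reduces the network to a single weighted layer (an illustrative simplification, not the claimed architecture): outputs are fed back as inputs, so decision cycles continue after external input falls silent:

    # Hypothetical sketch of the core configuration: output-layer values
    # are fed back to input-layer nodes, so decision cycles continue
    # ('train of thought') whether or not any external input is present.
    import math

    def decision_cycle(external_input, feedback, weights):
        # One propagation from inputs (external plus internal feedback)
        # to outputs; a trivial one-layer stand-in for the full network.
        x = [e + f for e, f in zip(external_input, feedback)]
        return [math.tanh(sum(w * xi for w, xi in zip(row, x)))
                for row in weights]

    weights = [[0.5, -0.3], [0.2, 0.8]]
    feedback = [0.0, 0.0]
    for cycle in range(10):
        external = [1.0, 0.0] if cycle < 3 else [0.0, 0.0]  # input, then silence
        outputs = decision_cycle(external, feedback, weights)
        feedback = outputs                # recurrence: outputs become inputs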

The challenges of the present architecture relate to stability and control, since the behaviour is directed neither by programmatic control nor by an externally specified objective. In contrast, standard RNNs do not face these challenges but rather learn and predict time sequences. Hence, while documents such as US 2007/0265841 have diagrams, such as figures 4 and 5, that appear superficially similar to the present architecture, the design criteria and function are completely different. Standard RNNs are used to train and then predict; the present architecture is used to confer autonomy and, potentially, consciousness.