

Title:
SYSTEM AND PROCEDURE FOR PREPARATION OF INTERACTIVE ELECTRONIC OBJECTS BY VIDEO PROCESSING FOR USE IN SYSTEMS OF ELECTRONIC DEVICES AND/OR ELECTRONIC TOYS AND INTERACTIVE ELECTRONIC OBJECTS PREPARED THEREWITH
Document Type and Number:
WIPO Patent Application WO/2020/261157
Kind Code:
A1
Abstract:
The invention relates to a system and a procedure for the preparation of interactive electronic objects by video processing, which captures objects in the real world, and to interactive electronic objects thus prepared, which are used in systems of electronic devices and/or of electronic toys. The described interactive electronic objects are used on portable smartphones.

Inventors:
JURČIČ JANI (SI)
Application Number:
PCT/IB2020/055993
Publication Date:
December 30, 2020
Filing Date:
June 24, 2020
Assignee:
MORALOT STORITVENO PODJETJE D O O (SI)
International Classes:
A63F13/655; A63F13/213; A63F13/215; A63F13/63; A63F13/825
Domestic Patent References:
WO2018006071A1, 2018-01-04
WO2010002921A1, 2010-01-07
WO2010128329A2, 2010-11-11
Foreign References:
US8591329B2, 2013-11-26
EP3438796A1, 2019-02-06
Other References:
ANONYMOUS: "Interactive film - Wikipedia", 19 June 2019 (2019-06-19), XP055735080, retrieved from the Internet [retrieved on 2020-09-29]
MARK HACHMAN: "Tested: How Flash destroys your browser's performance", retrieved from the Internet
Attorney, Agent or Firm:
KETNER, LEGAL CONSULTANCY, REPRESENTATION AND PROTECTION, LTD. (SI)
Claims:
Patent claims

1. A system for preparation of interactive electronic objects by video processing for use in systems of electronic devices and/or electronic toys, comprising:

- at least one or more cameras for capturing a video clip and/or a film of an object in the real world, where said video clip and/or film is recorded and/or captured in a digital form, which comprises at least one or more electronic storage media of the camera,

- at least one or more peripheral units for transferring the recorded film and/or video from the electronic storage medium of the camera to at least one or more computers,

- at least one or more computers, which comprises:

- at least one or more processing units for performing the said processing or handling of video or film,

- at least one or more permanent memory units for storing the software and the said video or film in a digital form, which enables implementation of said preparation of at least one or more interactive electronic objects,

- at least one or more temporary memories for implementation and intermediate storage of said video or film and of said at least one or more interactive electronic objects during the preparation of the said at least one or more interactive electronic objects,

- at least one or more graphics units for implementation of graphics processing and of graphics displaying and playing of said video or film and of said at least one or more interactive electronic objects during the preparation of the said at least one or more interactive electronic objects,

- at least one or more peripheral computer units for implementation of handling and processing of the said video or film and of the said at least one or more interactive electronic objects during the preparation of the said at least one or more interactive electronic objects,

- at least one or more computer screens for visual playing or displaying of video and/or film in a digital form and for visual playing or displaying of said at least one or more interactive electronic objects during its preparation, for visual playing or displaying of the prepared said at least one or more interactive electronic objects in a final shape and visual displaying of its operation in the final shape,

- if necessary, at least one or more microphones for recording sound and sound effects and/or at least one or more speakers for playing sound and sound effects in said videos and/or films, when the sound is part of said video or film and/or of said at least one or more interactive electronic objects, and

- if necessary, at least one or more sound cards for implementation of post-production handling or processing of sound and/or of sound effects in said video and/or film in the preparation of the said at least one or more interactive electronic objects, when the sound is and/or when the sound effects are part of the said video and/or film and/or of the said at least one or more interactive electronic objects,

characterized in that said computer is configured in a way to enable processing of the said video and/or film and its post-production handling or processing and preparation of at least one or more of said interactive electronic objects and control of said procedures and processes of the said computer.

2. A system for preparation of interactive electronic objects according to claim 1, characterized in that

- with the said camera the said video and/or film of the object in the real world is recorded in a digital form in RAW format, and

- that with the said computer a procedure of extracting colours, a procedure of designing of motion graphics and animation of said object, a procedure of designing of visual effects in said video or film, a procedure of designing of video composition in the passage of time, a procedure of editing and designing of colours, colour tones, brightness and contrasts, a procedure of exporting of video into a compressible format, a procedure of segmentation of video by defining key characteristic video segments, a procedure of converting video into code by encoding process, and a procedure of programming of an interactive electronic object and of its operation are implemented.

3. A system for preparation of interactive electronic objects according to claim 2, characterized in that

- with the said computer, if necessary, steps or procedures for editing and designing of sound and/or sound effects in said video clip and/or film and in its segments and in the preparation of said at least one or more interactive electronic objects are implemented, when the sound is part of the said video or film and/or of said segments of the video clip and/or film and part of the said at least one or more interactive electronic objects, and

- that in the procedure of programming of an interactive electronic object and of its operation, sound and/or sound effects are also programmed as an integral part of said at least one or more interactive electronic objects.

4. A procedure for preparation of interactive electronic objects by video processing for use in systems of electronic devices and/or electronic toys, characterized in that it comprises the following steps or procedures:

- a procedure of recording or capturing a digital video in RAW format of an object in the real world,

- a procedure of extracting colours,

- a procedure of designing motion graphics and animation,

- a procedure of designing visual effects,

- a procedure of designing a video composition in the passage of time,

- a procedure of editing and designing of colours, colour tones, brightness and contrasts,

- a procedure of exporting video into a compressible format,

- a procedure of segmentation of video by defining key characteristic video segments,

- a procedure of converting video into code by encoding process, and

- a procedure of programming of an interactive electronic object and of its operation.

5. A procedure for preparation of interactive electronic objects by video processing for use in systems of electronic devices and/or electronic toys, characterized in that it comprises the following steps or procedures:

- a procedure of recording or capturing a digital video in RAW format of an object in the real world,

- a procedure of extracting colours,

- a procedure of designing a video composition in the passage of time,

- a procedure of designing motion graphics and animation,

- a procedure of designing visual effects,

- a procedure of editing and designing of colours, colour tones, brightness and contrasts,

- a procedure of exporting video into a compressible format,

- a procedure of segmentation of video by defining key characteristic video segments,

- a procedure of converting video into code by encoding process, and

- a procedure of programming of an interactive electronic object and of its operation.

6. A procedure for preparation of interactive electronic objects by video processing for use in systems of electronic devices and/or electronic toys according to claim 4 and/or 5, characterized in that it comprises, if necessary, steps or procedures for editing and designing of sound and/or of sound effects in the said video clip and/or film and in its segments and in the preparation of the said at least one or more interactive electronic objects.

7. A procedure for preparation of interactive electronic objects by video processing for use in systems of electronic devices and/or electronic toys according to claim 4, 5 and/or 6, characterized in that a system for preparation of interactive electronic objects according to claim 1, 2 and/or 3 is used for its implementation.

8. An interactive electronic object, characterized in that it is prepared by using a system for preparation of interactive electronic objects according to claim 1, 2 and/or 3.

9. An interactive electronic object, characterized in that it is prepared by a procedure for preparation of interactive electronic objects by video processing for use in systems of electronic devices and/or electronic toys according to claim 4, 5 and/or 6.

10. An interactive electronic object, characterized in that it is prepared by a procedure for preparation of interactive electronic objects by video processing for use in systems of electronic devices and/or electronic toys according to claim 4, 5 and/or 6 and by using a system for preparation of interactive electronic objects according to claim 1, 2 and/or 3.

11. An interactive electronic object according to claim 8, 9 and/or 10, characterized in that the said interactive electronic object is displayed and played, on the electronic screen of the user electronic device onto which the encoded record of the operation of the interactive electronic object is downloaded and with which this user electronic device is configured, in the form of individual characteristic segments of the video of the motion and response or reaction or conduct of the interactive electronic object.

12. An interactive electronic object according to claim 8, 9, 10 and/or 11, characterized in that the said interactive electronic object, in combination with an electronic device of a user which comprises an electronic screen that is sensitive and responsive to touch, detects or senses, recognizes and responds in real time to the recognized characteristic input stimulus actions of the user of the device, and responds to the said characteristic input stimulus actions of the user by displaying and playing on the electronic screen of the user electronic device, in real time after detecting and recognizing a characteristic input stimulus action of the user, the characteristic segment of the video of the output response action of the interactive electronic object that corresponds to each respective characteristic input action of the user.

13. An interactive electronic object according to claim 8, 9, 10, 11 and/or 12, characterized in that the said characteristic input stimulus actions of the user of the device are different types of touches on different areas or on different parts of the interactive electronic object that is displayed and played on the screen of the said device.

14. An interactive electronic object according to claim 8, 9, 10, 11, 12 and/or 13, characterized in that the said characteristic input stimulus actions of the user of the device are touches on areas or on parts of the interactive electronic object that is displayed and played on the screen of the said device.

15. An interactive electronic object according to claim 8, 9, 10, 11, 12, 13 and/or 14, characterized in that it is in a shape of an animal.

16. An interactive electronic object according to claim 8, 9, 10, 11, 12, 13, 14 and/or 15, characterized in that it is in a shape of a domestic animal and/or of a domesticated animal.

17. An interactive electronic object according to claim 8, 9, 10, 11, 12, 13, 14, 15 and/or 16, characterized in that it is in a shape of a puppy or a dog.

18. An interactive electronic object according to claim 8, 9, 10, 11, 12, 13 and/or 14, characterized in that the said user electronic device is a portable smartphone.

Description:
System and procedure for preparation of interactive electronic objects by video processing for use in systems of electronic devices and/or electronic toys and interactive electronic objects prepared therewith

BACKGROUND

[0001] Technical field

The present invention relates to a system and to a procedure for preparing interactive electronic objects by using video processing procedures, where thus prepared interactive electronic objects are used in systems of electronic devices and/or in systems of electronic toys, and to interactive electronic objects prepared by using this system and procedure.

BRIEF SUMMARY OF THE INVENTION AND RELATED ART

[0002] Description of a technical problem

The present invention aims to solve the technical problem of constructing and preparing one or more interactive electronic objects, which should respond in real time to real stimuli by the user on an electronic device in a system of electronic devices and/or in a system of electronic toys, in such a way that the electronic device will detect and recognize key characteristics of at least one or more input stimuli and/or commands by the user and, based on this, will respond in such a way that an individual interactive electronic object will, based on the input stimulus, perform an appropriate response action, which will be as similar as possible to the response action that such an object would perform, and to the manner in which it would perform it, in response to such a specific stimulus or command by the user in the real world and in real time. Said electronic device is preferably a portable user electronic device and most preferably a portable smartphone. It is crucial that on the said electronic device the response of an individual interactive electronic object should comprise a response action that is as similar as possible to the response action of such an object in the real world, that is, as similar as possible to the response action that such an object or figure or character, if it were a living being or a device, would perform in the real world and in real time as a response to a specific stimulus and/or command that a user would have previously performed in relation to it, where this user in the real world would actually be a caretaker and/or an owner and/or an operator of such an object, which is either a living being or a device or a subject, wherein in the case of a living being the user would breed, nurture and/or raise such a living being or, in the case of a device, would use and operate such a device.

[0003] Information about the state of the art

We have found some patent documents which describe solutions for the preparation and use of electronic toys that differ from the present invention.

Patent application WO 2010128329 (published on 11.11.2010) describes a device, a system and a method for entertainment. The entertainment device comprises an image receiver that receives images from a video camera capturing them, the logic of the display screen connected to the display screen displaying the images captured by the video camera together with one or more selection icons, where each selection icon represents a game object, an image processor that detects the presence and position of an augmented reality marker in images received from the video camera, a detector that detects the presence of an augmented reality marker at an image position corresponding to a selection icon for at least a predetermined selection time interval, and an associating logic that responds to such a detection by associating the game object corresponding to one of the selection icons with the augmented reality marker, where the logic of the display screen displays the game object so that it moves according to the detected position of the augmented reality marker. The described entertainment system comprises a video camera, a display screen displaying the images captured/recorded by the camera, and said entertainment device.

Patent application EP 3438796 (published on 6.2.2019) describes systems and methods for gaming haptics in casinos (haptics means communication via tactility or through touches and preferably detection with touch sensors using user interfaces).

The casino gaming haptic system includes a) a touch-sensitive input device configured to sense a touch by/contact from a user, b) an actuator coupled to the touch-sensitive input device and configured to perform a haptic effect on the touch-sensitive input device, c) a display screen configured to receive a display signal and to display an image associated with a casino game, and d) a processor that communicates with the touch-sensitive device and the actuator and is configured to implement the method for the casino gaming haptics. The described method includes a) generating by help of a processor a display screen signal configured to cause an image to be displayed on the screen, where this image is associated with a casino game and comprises an operational object associated with the casino game, where the operational object includes a virtual playing card, a virtual dice, a virtual roulette wheel or a lever on a virtual slot machine; b) receiving by help of the processor an input signal from a touch-sensitive input device configured to sense a touch by/contact from the user, where the input signal is associated with the touch; c) determining by help of the processor an interaction between the touch and the image; d) generating by help of the processor an actuator signal, which is associated with the interaction and wherein the actuator signal is configured to cause an actuator to generate a haptic effect which changes according to the interaction conditions in order to simulate the motion of the operational object associated with the casino game; and e) wherein the simulation of the motion of the operational object includes changing magnitudes and/or frequencies of a vibration based on the type of the operational object and on a mode of dragging, while the user is dragging the operational object.

The stated cases concern the so-called augmented reality technology (English "augmented reality", abbreviation "AR"), which solves the technical problem of interactive objects or gaming figures or an interactive gaming environment in a different way. Namely, by using a camera, we position the object in the real world in real time on the screen; that is, on the screen we display the real world in real time and then change the position of the added 3D object in real time.

Existing games in versions under the label Mortal Kombat® or Mortal Kombat™ or in versions of the technology under trademark Adobe Flash® or Adobe Flash™ are based on the processing of photographic images of an object in the real world, which are designed with sequential animation into a more or less moveable object that is then used as a gaming object in video games and/or electronic games on computers, portable electronic devices and other similar electronic devices. Due to such a procedure of preparation of the said gaming objects or characters/figures, it is noticed and perceived when playing games that these gaming figures or characters are cumbersome in movement and do not move completely continuously or smoothly like their original figures/characters in the real world and in real time.

The use of the so-called Flash technologies under the brand Adobe Flash®, which are being increasingly abandoned and will be supported only until the year 2020, brings additional deterioration of the continuity or smoothness of the movement or motion of game figures/characters. Animation technology using the Adobe Flash® software enables the storage of moving images or video in a compressed file format, wherein every point of an image, the so-called pixel of an image (hereinafter also: pixel), is not stored separately; instead, several pixels are stored together by merging mutually similar pixels within a certain time interval with the help of an algorithmic recording. According to the article titled "Tested: How Flash destroys your browser's performance" by Mark Hachman at https://www.pcworld.com/article/2960741/tested-how-flash-destroys-your-browsers-performance.html, the Adobe Flash® technology consumes a lot of memory and processor resources in web browsers if the web pages and/or their elements are prepared using this technology; depending on the web browser used, it amounts to 4.72 GB of memory and 84.1% of the cycles of the central processing unit (hereinafter: % CPU) for the Microsoft® Edge browser, 4.23 GB of memory and 71.4% of CPU for the Chrome browser, and 3.47 GB of memory and 81.2% of CPU for the Opera browser, all of which significantly slows down the performance of the electronic device on which Adobe Flash® software is downloaded, and web browsing on it. In addition, Flash technology is not the most compatible with other software, the files prepared with it are large, animations prepared with it are not completely smooth or continuous in displays of motion, and after the year 2020 the available support for it will be questionable. The biggest problem when using Flash technology is the high consumption of temporary memory, which is especially critical when displaying animations on mobile devices, something that is almost impossible with Flash technology.

In the prior art we have not found published solutions and patent documents that would solve the herein presented technical problem in the same way and with the same system and procedure for the preparation of an interactive electronic object as solved by the present invention.

DETAILED DESCRIPTION OF THE INVENTION AND BRIEF DESCRIPTION OF THE DRAWINGS

[0004] Description of drawings of the invention

Figure 1 schematically shows elements of a system for preparation of interactive electronic objects by video processing according to the present invention, wherein said interactive electronic object is intended for use in systems of electronic devices and/or electronic toys.

Figure 2 schematically shows one of the procedures for preparation of interactive electronic objects according to the present invention, that is, by processing of a video or a film of an object captured or recorded in reality, which comprises a sequence of individual steps or individual procedures in the preparation of an interactive electronic object according to this invention.

Figure 3 schematically shows one of the alternative procedures for preparing interactive electronic objects according to this invention by processing of a video or a film of an object captured or recorded in reality, which comprises a sequence of individual steps or individual procedures in the preparation of an interactive electronic object according to this invention.

[0005] Description of a solution to the technical problem

The present invention solves the set technical problem with a procedure and a system for preparing one or more interactive electronic objects by video processing, where such an interactive electronic object is then used on an electronic device in a system of electronic devices and/or in a system of electronic toys. An interactive electronic object prepared with this procedure and/or with this system is able to interactively respond, in real time, to input stimuli and/or input commands of the user of the said electronic device each time the user executes an input stimulus or a command or an input stimulus action on the interactive electronic object on the said electronic device.

In addition, said electronic device is assembled and configured in such a way as to receive, detect and recognize the key characteristics of at least one or more input stimuli or input commands or input stimulus actions of the user and to respond, on the basis of this, in such a way that an individual electronic object or figure or character performs an appropriate response action, which is as similar as possible to the response or response action of an individual object, figure or character, as if this object or figure/character were a living being and/or a device in the real world that would thus respond to a concrete stimulus in the real world and in real time.

The key technical problem in the preparation of said interactive electronic object according to the present invention is to capture an object or figure or character, which otherwise exists and is found in the real world, then to display this object or this figure or character on an electronic screen of an electronic device, and then to use the thus prepared interactive electronic object in a system of electronic devices and/or in a system of electronic toys, where such an interactive electronic object, in its responding, that is in its response reactions and/or response movements and/or motion and/or reactions, responds as continuously and smoothly as possible during the passage of time to input stimuli and/or input commands of the user of the electronic device.

To solve the technical problem of capturing an object or figure/character from the real world and displaying it on the screen of the said electronic device in the form of an interactive electronic object or figure/character that responds continuously or fluidly and smoothly to input stimuli and/or user commands, a technical solution was used in the past that consisted of photographing such an object in the real world and subsequently sequentially animating the thus captured photographic images in a sequence of different movements, gestures and/or positions of the captured object/figure/character, such as, e.g., in older games marked with the label Mortal Kombat® or Mortal Kombat™ or in other versions, which used the technology of trademark Adobe Flash® or Adobe Flash™.

However, such a solution with sequential animations of a photograph or captured image has huge limitations in the quality of the display of the object on the screen, because each image in a certain sequence of animation must first be loaded into the temporary memory. Because images based on photographs take up a lot of memory or electronic space in memory, such a temporary memory fills up quickly, most often in photographs of higher resolutions and bit depths, that is photos taking up more memory space, measured in bits.

At the same time, an additional problem of read and write speeds (English: read/write speeds) arises, which is clearly observed in the so-called smoothness of animations, that is, in the more or less smooth or continuous sequence of movement of the animated object, because so-called jamming in the movement of the animated object is observed, which is a visible result of non-constant or discontinuous decoding of the video. This happens especially if we want to achieve a certain minimum fluidity of the movement of the displayed object, which is in the range of 24 images of the object per second, as is often the case when showing classic films of the film industry. The human eye can, e.g., detect the display of a sequence of images at a rate of up to 150 frames per second, and in some cases of special training even up to 240 frames per second.

If we take an HD image with a resolution of 1280 x 720 in the standard format 24 BPP JPEG, then we have an image of a size of about 300 KB on the hard disk, which during decoding of the compressed format fills about 2.8 MB of memory or electronic space in the temporary memory (English: RAM), with a range of deviations depending on the quantity or number of different colours in the image, the brightness of the image, and similar other factors that affect image quality.

If we consider that, in order to perceive a fluid sequence of images of the displayed object, at least 24 images with an average size of 2.8 MB (depending on the selected image resolution) must be refreshed every second, then such image processing takes up a lot of electronic space in user electronic devices, preferably in the memories of user mobile devices, amounting to about 72 MB per second (hereinafter: MB/s). This means that on mobile devices, and preferably on mobile phones, which are usually in the range of medium and low-capacity mobile phones, the available electronic memory will in all probability fill up rapidly and the temporary memory or RAM will become too slow when displaying images and their sequence, because the main core of this memory will be overloaded. The mobile device will therefore be too slow to respond, including its user interfaces (hereinafter also referred to as UI), its operation speed and responsiveness (the so-called performance issues) and the speed of displaying the sequence of images and/or videos on it, which is significantly reduced in the case of memory congestion (the English term for this problem: frame drops).
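By way of a rough, non-limiting illustration of the figures above, the following sketch recomputes the decompressed size of one such frame and the resulting memory throughput; the values are taken from the example in the text and the rounding is approximate.

```python
# Rough check of the figures cited above: size of one decoded HD frame and the
# memory throughput needed to refresh 24 such frames per second.
width, height, bytes_per_pixel = 1280, 720, 3        # 24 BPP = 3 bytes per pixel

frame_bytes = width * height * bytes_per_pixel       # decompressed size of one frame
frame_mb = frame_bytes / 1_000_000                    # ~2.76 MB, close to the ~2.8 MB cited

fps = 24                                              # minimum rate for fluid motion
throughput = frame_mb * fps                           # ~66 MB/s; ~72 MB/s if a frame is rounded up to 3 MB

print(f"one decoded frame: {frame_mb:.2f} MB")
print(f"required throughput at {fps} fps: {throughput:.1f} MB/s")
```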

In the event of such an overload of the mobile device, one of the possible solutions is to diminish or to reduce the image resolution or the number of dotted image fields and the number of colours used or colour palettes. This, consequently, drastically reduces the quality of the consecutive or sequential animation of images of the displayed object.

Therefore, the present invention solves the set technical problem by capturing or recording in a form of a video, a video clip or a film of a movement, response, reaction and/or conduct of the said object in the real world and in the real time and preferably by capturing or recording of the movement, response, reaction and/or conduct of the said object in the real world and in real time to real and concrete stimulus actions or stimuli of the educator, caretaker, caregiver, foster parent, user and/or operator of the said object in the real world, where such a video or a video clip or a film is then handled or processed as described below, thereby preparing an interactive electronic object for use on a user electronic device and further on, using it in the systems of electronic devices and/or of electronic toys.

The user within the meaning of this invention is a person who uses an electronic device. In alternative implementation examples or embodiments of this invention, such a user is one of the users who, in the system of electronic devices, uses one or more user and/or other electronic devices and/or, in the system of electronic toys, uses one or more of such electronic devices. The user then uses such a system by using a portable electronic device. Preferably, such an electronic device is a portable electronic device as known and used in the state of the art, and most preferably a portable smartphone.

The user's portable electronic device or electronic device of the user or the user electronic device within the meaning of this invention comprises at least one or more processors, at least one or more temporary memories, at least one or more other memories (preferably permanent memories), at least one or more information and communication units (preferably a WIFI antenna), and at least one or more touch-sensitive display screens (preferably a screen on the front side of the portable electronic device and, if necessary, on both sides of the portable electronic device) for the purpose of graphics and, if necessary, audio and, in special implementation examples, motion displaying or playing of one or more interactive electronic objects according to this invention. Furthermore, the user's portable electronic device within the meaning of this invention comprises at least one or more microphones for detecting, capturing or recording sound, at least one or more speakers for playing sound, and at least one or more operable buttons and/or switches for operating the portable electronic device (e.g. a switch/button for on, off and other key controlling functions of the device).

Preferably, such a portable electronic device is selected from portable or mobile electronic devices, which have a display screen, which is sensitive to touch and movement on the screen, and have an antenna or other similar system for wireless connection to the system of mobile telephony and/or to the Internet. Most preferably, such a portable electronic device is selected from portable phones, portable smartphones, tablets, electronic readers, and similar electronic devices as known in the state of the art. One of such electronic devices of the user or user electronic devices is specified in more detail in Implementation example 2.

Said object within the meaning of this invention, which is captured or recorded in the real world, is any object from the real world and/or from a fictional world as known in the prior art, and is either a living being, e.g. a human, an animal or a plant, or a man-made product, which is either simple or complex and is, further on, either a device, including robots and automatons, a machine, a gadget, a tool, or any other object such as, e.g., a land vehicle, a watercraft, an aircraft and/or a spacecraft. Such an object from the real world can be an object as we know it from the present time, from the past, or as we anticipate it for the future. Such an object from the fictional world can be an object either from a futuristic world or from a fairy-tale, fantasy and/or otherwise imaginary world.

The real world within the meaning of this invention means the real environment, which is a set of subjects or actors and objects or various means, that subjects/actors use in relation to the objects or means and/or in combination with the operation of subjects/actors with each other, either independently or dependently and/or in connection with the use of facilities or funds or without such use. Furthermore, the real world within the meaning of this invention comprises, among others, the laws, conditions and situations of life, being, working, functioning, exposure to stimuli and acceptance of stimuli, and various responses to such stimuli by response actions.

Within the meaning of this invention, said stimuli are also referred to as input stimulus actions of the user and mean different types of input stimulus actions in a real environment, originating either from said subjects/actors and/or from said objects/means, as recognized in a real environment. A real environment within this meaning is the environment of the planet Earth itself and its location among other planets and formations of the universe as we know, get to know and specify them by physical, chemical, biological, medical, mathematical, geological and other similar laws.

Within the meaning of this invention, the said responses or response actions of the object or output response actions of the object are actions, reactions, reacting, movements, conducts and/or responses of the object to such stimuli with response actions and mean different types of response or output actions in the real environment or in the real world and in the real time.

Real time within the meaning of this invention means response time as expected in the real world and/or in the real environment under conditions without blockage of any cause. In individual alternative implementations of this invention, real time means the shortest possible time without delays, lateness and/or standstills.

Precisely to achieve the interactive response of the electronic object according to this invention to the input stimulus of the user, the herein described solution comprises the concept of capturing video instead of images. By capturing video, we avoid the limitations of the temporary memory or RAM (and of the size on the hard disk), because when decoding the video we refresh only the changed dots of the displayed image (English: pixels) according to certain time intervals of image playing (English: frame). This means, e.g., that when selecting a video in the form of 60 frames per second or 60 fps (English unit: fps or frames per second; hereinafter: fps), 60 images must be decoded and handled or processed in a linear time of 1 second. At a speed of 60 fps, therefore, one image of the video is displayed on the screen approximately every 0.0167 second or every 16.67 milliseconds (hereinafter unit: ms).
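The per-frame time budget mentioned above follows directly from the chosen frame rate; a minimal illustrative calculation:

```python
# Time budget per displayed frame at a given playback rate.
fps = 60
frame_interval_ms = 1000 / fps                 # ≈ 16.67 ms per frame at 60 fps
print(f"{frame_interval_ms:.2f} ms per frame")  # decoding and display must fit into this budget
```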

This creates a problem that differs from the existing solution using image formats such as, e.g., RAW, JPEG, PNG or TIFF, namely: a) the question of tracking the decoding process, that is, how many images the codec has decoded in a given time, that is, in the time interval from a particular start of decoding, and b) the question of good codec responsiveness, that is, a fast enough response or a response in real time, and a sufficiently accurate response of the object in relation to the stimulus or the stimulus action of the user or to the input stimulus action of the user and/or, in alternative implementations, to the input command.

This means that according to the user's input command, the processor or decoder unit of the user electronic device must process and decode the compressed video quickly enough, that is, the codec must contain additional information, which is entered during the encoding process, that is, in the video compression process, to achieve the desired responsiveness effect without delay. To address these issues, we select video decoders from hardware as known in the state of the art and as used on mobile devices to support many types of codecs.
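The following is a minimal, illustrative sketch of the two questions raised above, namely tracking how many frames have been decoded since a particular start of decoding and checking whether decoding keeps up with real time; the class and method names are assumptions for illustration only and do not correspond to any particular decoder API.

```python
import time

class DecodeTracker:
    """Tracks decoded frames against wall-clock time to detect lagging playback."""

    def __init__(self, target_fps: float):
        self.target_fps = target_fps
        self.start = time.monotonic()
        self.decoded_frames = 0

    def on_frame_decoded(self) -> None:
        # Called by the (hypothetical) decoder each time a frame is ready.
        self.decoded_frames += 1

    def expected_frames(self) -> int:
        # Frames that should have been decoded by now at the target rate.
        return int((time.monotonic() - self.start) * self.target_fps)

    def is_keeping_up(self, tolerance: int = 2) -> bool:
        # True if the decoder lags behind real time by at most `tolerance` frames.
        return self.expected_frames() - self.decoded_frames <= tolerance
```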

The codec, within the meaning of this invention, is a video codec, as known in the state of the art, and is an electronic circuit and/or a piece of software that enables compression or encoding of a digital video file, that is, reducing the electronic memory space occupied by the digital video file, and/or decompression or decoding or expansion of the digital video file, that is, increasing the electronic memory space occupied by the digital video file. Digital video, within the meaning of this invention, is either a digital video clip, a film in digital form or a similar other digital display in the form of a sequence of moving images in a certain time and of objects displayed in or on them. Simply put, the codec, within the meaning of this invention, implements a process of compression or compressing of the captured image or captured video with the purpose of reducing the size of the image or video on the hard disk and/or on any other memory of the electronic device on which such an image or video is saved. Useful codecs are, e.g., open-source codecs such as VP9, x264, Xvid and Lagarith.

The subject matter of this invention is a system for the preparation of interactive electronic objects by processing of a video for use in systems of electronic devices and/or of electronic toys, which comprises:

- at least one or more cameras for capturing of a video or a video clip and/or a film of an object in the real world, where said video clip and/or film is recorded and/or captured in a digital form, which comprises at least one or more electronic storage media or electronic memory media of the camera,

- at least one or more peripheral units for transferring the recorded film and/or video from the electronic storage medium or electronic memory medium of the camera to at least one or more computers,

- at least one or more computers, which comprises:

- at least one or more processing units for performing the said processing or handling of video or film,

- at least one or more permanent memory units for storing the software and the said video or film in a digital form, which enables implementation of said preparation of at least one or more interactive electronic objects,

- at least one or more temporary memories or temporary storage units for performing and intermediate storage of said video or film and of said at least one or more interactive electronic objects during the preparation of the said at least one or more interactive electronic objects,

- at least one or more graphics units for implementation of graphics processing and for graphics displaying and playing of said video or film and of said at least one or more interactive electronic objects during the preparation of the said at least one or more interactive electronic objects,

- at least one or more peripheral computer units for implementation of handling and processing of the said video or film and of the said at least one or more interactive electronic objects during the preparation of the said at least one or more interactive electronic objects,

- at least one or more computer screens for visual playing or displaying of video and/or film in a digital form and for visual playing or displaying of said at least one or more interactive electronic objects during its preparation, for visual playing or displaying of the prepared said at least one or more interactive electronic objects in a final shape and visual displaying of its operation in the final shape,

- if necessary, at least one or more microphones for recording sound and sound effects and/or at least one or more speakers for playing sound and sound effects in said videos and/or films, when the sound is part of said video or film and/or of said at least one or more interactive electronic objects, and

- if necessary, at least one or more sound cards for implementation of post-production handling or processing of sound and/or of sound effects in said video and/or film in the preparation of the said at least one or more interactive electronic objects, when the sound is and/or when the sound effects are part of the said video and/or film and/or of the said at least one or more interactive electronic objects,

wherein the said computer is configured in a way to enable processing of the said video and/or film and its post-production handling and processing and preparation of at least one or more interactive electronic objects and control of said procedures and processes of the said computer.

One of the implementation examples of the preparation system or for preparing the interactive electronic objects according to this invention by video processing is schematically shown in figure 1. This figure schematically shows elements and/or units of such a system, which comprises a camera K, which captures or records an object in the real world, that is the starting point for the preparation of an interactive electronic object according to this invention, in a video or a video clip and/or on film. Such a video and/or film is recorded/captured in a digital form and stored on the electronic storage medium or electronic memory medium of the camera SNK. The recorded video and/or film is then transferred from the electronic storage medium of the camera SNK by means of a peripheral unit for transferring the recorded film and/or video EPV to a computer R.

Such a computer R comprises a processing unit PRE performing, among others, functions and operations such as described in Implementation example 1, the processes, procedures and steps of processing or handling of video/video clip or film. Furthermore, the computer R comprises a permanent memory unit TPE for storing the software and the said video or film in the digital form and at the same time a temporary memory unit or temporary memory ZPE for implementation, that is, playing, reviewing, editing and handling or processing of the video, video clip and/or film, and simultaneously intermediate storing of the said video, video clip and/or film and the said at least one or more interactive electronic objects during the preparation of such one or more of such interactive electronic objects. Furthermore, the computer R comprises a graphics unit GE for implementation of graphics processing and graphics display, displaying and playing of said video, video clip and/or film and at least one or more interactive electronic objects during the preparation of at least one or more of such interactive electronic objects. In addition to this, the computer R comprises another peripheral computer unit, which is a computer mouse PEM, by help of which we perform handling or processing of the said video, video clip or film and at least one or more interactive electronic objects during the preparation of such an interactive electronic object or of several interactive electronic objects. Computer R comprises a computer screen Z, which is used for visual playing, displaying or display of video, video clip and/or film in the digital form and for visual playing or display or displaying of said at least one or more interactive electronic objects during its preparation, for visual displaying or display of a prepared at least one or more of such interactive electronic objects in a final shape and a visual representation of its or their operation in the final shape.

The computer R, shown in figure 1, of the preparation system or for the preparation of the interactive electronic objects comprises also a microphone MIK for capturing of sound and sound effects and a speaker ZV for playing the sound and sound effects in the said videos, video clips and/or films, as well as a sound card ZVK for post-production handling or processing of sound and/or sound effects in the said video, video clip and/or film in the preparation of at least one or more interactive electronic objects for cases where sound and/or when sound effects are part of said video and/or film and/or of said at least one or more interactive electronic objects prepared by using such a system. Such a computer R is configured so as to enable processing of the said video and/or film and its post-production handling or processing and preparation of at least one or more interactive electronic objects and at the same time control of said procedures and processes of said computer.

In figure 1, hollow arrows between the individual elements or the units of the system for preparation of the interactive electronic object according to this invention schematically show the key direction of the flow of the main processes, procedures and/or steps and, at the same time, the main connections between individual elements or units of the said system.

In such a system for preparing interactive electronic objects according to this invention, the said video and/or film of an object in the real world is recorded in a digital form in the RAW format by means of a camera, which is selected from cameras as used in the state of the art for capturing videos or video clips and films in the real world, and preferably, videos or video clips and/or films of objects, which hover or move in the real world and respond or react and/or act in relation to the surroundings in the real world, comprising various stimulus actions of other objects and/or actors in the real world, which these perform in relation to or on the respective object, which is captured or recorded with a video clip. Among suitable cameras are, e.g., digital cameras as known in the state of the art, namely, e.g. Blackmagic URSA Mini 4.6K EF by producer Blackmagic Design or Fujifilm X-T3 by producer Fujifilm Holdings Corporation.

On said cameras we record a video or a film on the electronic media or on the camera electronic storage media or camera electronic memory media as known and used in the state of the art, which are selected, e.g., from media such as CFast 2.0 cards from various manufacturers (e.g. cards with labels SanDisk Extreme Pro CFast 2.0 SDCFSP-128G-x46D from producer Western Digital Corporation or Sony CFast 2.0 G Series CAT-G64 from producer Sony Corporation). Alternatively, we can choose SD cards (e.g. cards SanDisk Extreme Pro UHS-II 300 MB/s SDXC from producer Western Digital Corporation) or the so-called memory drives with labels SSDs together with peripheral implements, where SSD drives are used in the case of longer video recordings (e.g., SSDs with labels SanDisk Ultra 3D or with labels WD Blue 3D NAND SATA SSDs, both from producer Western Digital Corporation).

Recorded or captured video clip and/or film of the object, which is in the form of a digital video or film, is transferred from an electronic storage medium of the camera to a computer, such as, e.g., is described in more detail in the Implementation example 1. The transfer of video from the electronic storage medium of the camera to the computer, preferably, to a computer hard disk, is implemented by procedures and by using devices and gadgets, which are at least one or more peripheral units for transferring the recorded video and/or film from the electronic storage medium of the camera to at least one or more computers. Said peripheral unit is, depending on the electronic storage medium of the camera used, selected from the card readers, as are known and used in the state of the art. In the case of using different types of cards, among which are, e.g., cards SD, we can choose for transferring the video or film from various camera memory cards to the computer's hard disk also one of the universal card readers, as are known and used in the state of the art, e.g. card reader with label Hama USB 3.0 Multi Card Reader, SD/microSD/Cf from producer Hama Ltd. When using a card CFast 2.0 as a camera electronic storage medium, e.g. labelled SanDisk from producer Western Digital Corporation, we select the card reader CFast 2.0 with the label SanDisk USB 3.0 CFast 2.0, also from producer Western Digital Corporation. By means of the said card reader we transfer the said video via the USB 3.0 interface (USB is the universal serial bus, used to connect the computer to its peripheral units, devices and/or components), which is the USB interface with the so-called super speed of data transfer, which amounts to 5 Gbit/s, to the computer's hard disk, on which we then handle or process the video or film according to the below described procedure in order to prepare an interactive electronic object according to this invention.
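As a rough, non-limiting aside, the nominal 5 Gbit/s SuperSpeed rate gives an upper bound on the transfer time of a full storage medium; the card size in the sketch below is an assumed example, not a figure from this description.

```python
# Theoretical best-case transfer time over USB 3.0 SuperSpeed (5 Gbit/s),
# ignoring protocol overhead; the 128 GB card size is an assumed example.
link_gbit_per_s = 5
card_gb = 128

seconds = card_gb * 8 / link_gbit_per_s        # ≈ 205 s
print(f"≈ {seconds:.0f} s (~{seconds / 60:.1f} min) at the nominal maximum rate")
```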

Then, we handle or process the video or video clip or the film, transferred to the said computer, with a program for video processing, which is selected from programs, as known and as used in the state of the art in the field of film industry, video production and television production. For handling or processing of video in this implementation example, we used the program Adobe® After Effects from producer Adobe Systems. This program is used in the state of the art in the field of film industry and television production for post-production processes and procedures in the production of films and television shows, and enables digital handling or processing of film or video clips in the digital form.

With the said computer, among others, we handle or process the said video, video clip and/or film so that we implement a procedure of extracting colours, a procedure of designing of motion graphics and animation of said object, a procedure of designing of visual effects in said video or film, a procedure of designing of video composition in the passage of time, a procedure of editing and designing of colours, colour tones, brightness and contrasts, a procedure of exporting of video into a compressible format, a procedure of segmentation of video by defining key characteristic video segments, a procedure of converting video into code by encoding process, and a procedure of programming of an interactive electronic object and of its operation.

In alternative implementation examples or embodiments according to this invention, we can implement the said procedures of extracting colours, of designing of motion graphics and animation of said object, of designing of visual effects in said video or film, of designing of video composition in the passage of time, of editing and designing of colours, colour tones, brightness and contrasts in any arbitrary sequential order.

If necessary, according to this invention, we implement in the system for preparing interactive electronic objects with the said computer, also steps or procedures for editing and designing of sound and/or sound effects in the said video or video clip and/or film and in its segments and in the preparation of said at least one or more interactive electronic objects, when the sound is part of said video or film and/or of said segments of video clip and/or film and part of said at least one or more interactive electronic objects. In this case, in the procedure of programming an interactive electronic object and its operation, we also program sound and/or sound effects as an integral part of said at least one or more interactive electronic objects.

Further on, the subject matter of this invention is a procedure for the preparation of interactive electronic objects by video processing for use in the systems of electronic devices and/or electronic toys, which comprises the following steps or procedures:

- a procedure of recording or capturing a digital video in RAW format of an object in the real world,

- a procedure of extracting colours,

- a procedure of designing motion graphics and animation,

- a procedure of designing of visual effects,

- a procedure of designing of a video composition in the passage of time,

- a procedure of editing and designing of colours, colour tones, brightness and contrasts,

- a procedure of exporting of video into a compressible format,

- a procedure of segmentation of a video by defining key characteristic video segments,

- a procedure of converting video into code by encoding process, and

- a procedure of programming an interactive electronic object and its operation.

Such an example of the implementation of the procedure for preparation of interactive electronic objects is schematically shown in figure 2.
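The sequence of steps shown in figure 2 can be summarised, purely for illustration, as a pipeline in which each procedure is a placeholder standing in for the post-production, export, segmentation, encoding and programming tools actually used; the function names below are assumptions and not part of the disclosed procedure itself.

```python
# Illustrative outline of the sequence of procedures shown in figure 2; every
# step is a printing placeholder so the sketch runs end to end.

def _step(name):
    def run(video):
        print(f"step: {name}")
        return video
    return run

steps = (
    _step("extract colours"),
    _step("design motion graphics and animation"),
    _step("design visual effects"),
    _step("design the video composition in the passage of time"),
    _step("edit and design colours, colour tones, brightness and contrasts"),
    _step("export into a compressible format (e.g. an open-source codec)"),
    _step("segment the video by defining key characteristic segments"),
    _step("convert the video into code by an encoding process"),
    _step("program the interactive electronic object and its operation"),
)

def prepare_interactive_object(raw_video):
    # raw_video stands for the digital video recorded in RAW format.
    for step in steps:
        raw_video = step(raw_video)
    return raw_video

prepare_interactive_object("object_capture.raw")   # hypothetical RAW capture file
```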

In an alternative implementation example of the procedure for preparation of interactive electronic objects by video processing according to this invention, we can arbitrarily combine the said procedures of extracting colours, of designing of motion graphics and animation of said object, of designing of visual effects in said video or film, of designing of video composition in the passage of time, of editing and designing of colours, colour tones, brightness and contrasts, and implement them in any arbitrary sequential order.

In an alternative implementation example, the procedure for preparation of interactive electronic objects by video processing according to this invention comprises the following steps or procedures:

- a procedure of recording or capturing a digital video in RAW format of an object in the real world,

- a procedure of extracting colours,

- a procedure of designing of video composition in the passage of time,

- a procedure of designing of motion graphics and animation,

- a procedure of designing of visual effects,

- a procedure of editing and designing of colours, colour tones, brightness and contrasts,

- a procedure of exporting of a video into a compressible format,

- a procedure of segmentation of video by defining key characteristic video segments,

- a procedure of converting video into code by encoding process, and

- a procedure of programming of an interactive electronic object and of its operation.

Such an alternative example of the implementation of the procedure for preparation of interactive electronic objects is schematically shown in figure 3.

Further on, the said procedure of preparing interactive electronic objects by video processing according to this invention comprises also steps or procedures for editing and designing of sound and/or sound effects in the said video or video clip and/or film and in its segments and in the preparation of said at least one or more interactive electronic objects.

Preferably, we implement the said procedure for preparation of interactive electronic objects by video processing for use in the systems of electronic devices and/or electronic toys by using the system for preparation of interactive electronic objects according to this invention.

One of the implementation examples of the procedure for preparation of interactive electronic objects according to this invention is described in more detail in the Implementation example 1.

In the process of preparing an interactive electronic object according to this invention, among others, we first record/capture a video with a camera in the RAW format, that is, in the so-called raw format (English: uncompressed footage) or in the form of non-compressed video or video clip or in the so-called RAW video format without procedure of converting into code or encoding. Then we handle or process this video or video clip with various techniques that are used in the state of the art in the film or the so-called cinematic industry and technology. After finishing the processing, we export the thus processed video with any arbitrary settings into the so-called RAW FOOTAGE format or RAW FOOTAGE project, where we, among others, arbitrarily set the speeds of movement or of stringing pictures in the video or video clip (English: frame rate) and the speed of video playing in the size of electronic memory (English: bitrate) and the distinctness or resolution of the image (English: resolution).

Raw format of video or video clip within the meaning of this invention means a video or video clip, which is not compacted or compressed or encoded and which therefore contains all the information captured during the recording of the object in a video form.

Speed of moving or stringing pictures in a video or video clip (English: frame rate) is measured in frames per second (English: frames per second or fps).

Speed of video playing in the size of electronic memory (English: bitrate) means the amount of data, which is handled or processed in a second and which thus affects the quality of the video or recording, the depth of its colours, and other parameters of the visual display of the video.

The term distinctness or partitioning or resolution of image (English: resolution) means the resolution of the video and the images displayed in it, and is indicated by the number of pixels or dots in the pixel display of images, where the label full HD means 1920x1080 pixels per image. The dot of the picture or the so-called pixel means a small dot on the screen in the dotted display of the screen and thus of the picture on it, where such a small dot changes colour and where such small dots or pixels in a set of many pixels or dots constitute the picture as shown on the screen. At resolution labelled with full HD, this is, e.g. 1920x1080 pixels or dots per picture.
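
Purely for illustration of how the above parameters relate to each other, the sketch below computes the uncompressed data rate implied by a given resolution, colour depth and frame rate and compares it with a target bitrate; the concrete figures (full HD, 24 bits per pixel, 25 frames per second, 8 Mbit/s) are assumptions chosen for this sketch and are not prescribed by the described procedure.

    // Rough, illustrative arithmetic only; the concrete values are assumptions,
    // not parameters prescribed by the described procedure.
    public class VideoDataRate {
        public static void main(String[] args) {
            int width = 1920, height = 1080;   // full HD resolution
            int bitsPerPixel = 24;             // e.g. 8 bits per colour channel
            int framesPerSecond = 25;          // frame rate

            // Uncompressed (RAW-like) data rate in bits per second
            long rawBitsPerSecond = (long) width * height * bitsPerPixel * framesPerSecond;

            // A hypothetical target bitrate after encoding
            long targetBitrate = 8_000_000L;   // 8 Mbit/s

            System.out.printf("Uncompressed: %.1f Mbit/s%n", rawBitsPerSecond / 1e6);
            System.out.printf("Encoded target: %.1f Mbit/s (about %.0fx smaller)%n",
                    targetBitrate / 1e6, (double) rawBitsPerSecond / targetBitrate);
        }
    }

For the assumed figures this gives roughly 1244 Mbit/s uncompressed against 8 Mbit/s encoded, which is why the later export into a compressible format is needed at all.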

Then we compress the video in the RAW format with any codec or we export it into a compressible format, wherein we use a codec selected from codecs, as known and as used in the state of the art. Most preferably, these are open-source codecs, such as, e.g., codecs with labels VP9, x264, Xvid, Lagarith, etc. In the said compression of the video we define to the codec the sequence and other similar information about the animation of the video and of the recorded object in it, including its positions, as described in more detail in the Implementation example 1.

After the encoding process is finished, we implement the procedure of video segmentation by defining the key video segments and then we convert these specific segments into code by the encoding process and further on, we programme all features of the interactive electronic object and its operation according to this invention into the program code while taking into account the interaction between the user of the said interactive electronic object and this object, as described in more detail in the Implementation examples 1 and 2.

Furthermore, the subject matter of this invention is also an interactive electronic object as prepared or made or constructed by using the system for the preparation of the interactive electronic objects according to this invention and/or as prepared or made or constructed by a procedure for preparing interactive electronic objects by video processing for use in the system of electronic devices and/or electronic toys according to the present invention.

The interactive electronic object, prepared/manufactured/constructed according to the said procedure for preparing the interactive electronic objects, is the final product of this procedure or of combinations of herein described processes, procedures and steps. This is an interactive electronic object in its final shape, which is written in the form of a computer program or computer program code in one of the programming languages, known in the state of the art, which are selected from, among others, the programming languages C #, JavaScript, Ruby, Java, Swift. The said record of the interactive electronic object in the form of a computer program comprises detection and recognition of individual characteristic input stimulus actions of the user and connection of these with the individual characteristic output response actions of the interactive electronic object, corresponding to them, which are displayed and played on the screen of the user's electronic device in as short time as possible or in the real time after detection and recognition of the input stimulus action, namely, in the form of a single characteristic video sequence showing a single characteristic output response action of the interactive electronic object. Such an end product, which comprises an encoded record of the operation of the interactive electronic object, is downloaded on said user electronic device, as is further specified in the Implementation example 2. This user electronic device is configured in such a way, so that the electronic screen of this user electronic device displays or is displaying and plays the said interactive electronic object in the form of individual characteristic video segments of the motion and response or reacting or conduct of the interactive electronic object. Most preferably, this user electronic device is configured so that its electronic screen displays or is displaying and plays the said interactive electronic object in the form of individual characteristic video segments of the motion and response or reacting or conduct of the interactive electronic object in real time after the detection and recognition, preferably in the real time, of the characteristic input actions of the user of this electronic device.

Such an interactive electronic object in combination with the electronic device of the user, which comprises an electronic screen, sensitive and responsive to a touch, as is described in the Implementation example 2, detects or sensors, recognizes, most preferably in the real time, and responds in the real time to the recognized characteristic input stimulus actions of the user of the device, among which are, most preferably, different types of touches in different areas or on various parts of the interactive electronic object, which is displayed and played on the screen of the said device, and responds to the said characteristic input stimulus actions of the user by displaying and playing on the electronic screen of the user electronic device a characteristic segment of the video of the output response action of the interactive electronic object, which is corresponding to each respective characteristic input action of the user, in the real time after detecting and recognizing the characteristic input stimulus action of the user.

The interactive electronic object thus manufactured, which is the final product or product of the procedure according to this invention, may be any object from the real world and/or the fictional world, as known in the state of the art, and is either a living being, such as, e.g., a human, animal or plant, or a man-made product, which is either simple or complex and is further either a device, including robots and automatons, a machine, a gadget, a tool, or any other object, such as, e.g., a land vehicle, a watercraft, an aircraft and/or a spacecraft. Such an object from the real world can be an object as we know it from the present time, from the past, or as we anticipate it for the future. Such an object from the fictional world can be an object either from a futuristic world or from a fairy-tale, fantasy and/or otherwise imaginary world.

In one of the implementation examples, the interactive electronic object, prepared according to this invention, is in the shape of a puppy or dog, and in alternative implementation examples, it is in the shape of a kitten or cat, in the shape of a horse, cow, bull, sheep, goat and/or other domestic, domesticated and/or wild animals. In further alternative implementation embodiments of this invention, the interactive electronic object within the meaning of this invention is in the shape of devices, machines, and/or any other objects from the real world and/or from the fictional world, as are known in the state of the art.

In a preferred implementation example according to this invention, the said characteristic input stimulus actions of the user of the device are different types of touches in different areas or on various parts of the interactive electronic object, which is displayed and played on the screen of the said user's electronic device. Furthermore, the said characteristic input stimulus actions of the user of the device are touches in the areas or on parts of the interactive electronic object, which is displayed and played on the screen of the said device.

In the most preferential implementation example, the said user electronic device is a portable smartphone.

In the case of the procedure for preparing the interactive electronic object according to this invention, we significantly relieve the main processing units of both the computer used in the system according to this invention and the user electronic device during the use of the interactive electronic object according to this invention. With this we achieve transitions in the shape of the object during its movement in the video, and effects of the object, that are as smooth, fluid and continuous as possible while using the interactive electronic object according to this invention.

By implementing the procedure and the video encoding process according to this invention, we can achieve fast and accurate decoding on already existing codecs and decoding units.

Crucial in the encoding process is to create a video, which is responsive/ capable of fast decoding according to the specific position of the object, while still being able to achieve a small file size of the video and of the interactive electronic object according to this invention.

By combining the encoding process and the tracking of decoding, we can achieve an interactive electronic object, which responds smoothly to the user's input commands without interruption, irrespective of the technical specifications of the user electronic devices, which are preferably mobile or portable electronic devices and most preferably portable smartphones. With the procedure and the system according to this invention, we thus improve the operation of the user electronic device and enable its smooth operation and performance with the majority of mobile or portable electronic devices, both less and more powerful ones.

Implementation example 1

Procedure of capturing video of the object in the real world and its processing to prepare an interactive electronic object

The procedure described herein, as schematically shown in figures 2 and 3, is otherwise time-consuming and demanding in terms of implementation, all with the aim, that with the recording and by its further handling or by further processing according to the procedure, as described hereafter, we get as close as possible to the preparation of an interactively responsive and moving object, which responds and moves as closely as possible to continuous and smooth response and movement of such an object in the real time and in the real world. The said object, also referred to as a figure or character within the meaning of this invention, may be any object from the real and/or the fictional world which is either a living being (e.g. a human, animal, plant) or a man-made product, which is either simple or complex (e.g. any gadget or tool or any device such as a vehicle either a land vehicle, watercraft and/or aircraft, machine, robot, etc.).

We preferably present the object, character or figure, as recorded, handled or processed and prepared according to this procedure, in a 2-dimensional form.

We record or capture the said object, figure or character with a camera in the length of the video clip or film, which enables the recognition of key or characteristic features in the movement and/or response of the object, figure or character in the real world and in real time.

In this implementation example, we record a video or a film of the puppy's movements in different situations and in its responses and reactions to stimulus actions, which its caretaker performs in communication with it.

In the case of video clip 1.1, the puppy's caretaker moves his palm gently over the puppy's body by stroking it. The cameraman records the movement and response of the puppy to the stroking on a digital camera. In its response, the puppy shakes its tail and head, or its head and tail at the same time. A video or film recorded in this way is the input video for preparation of a segment of the video with the puppy's response to the stroking and thus for the preparation of implementation example 2.1, as described in the Implementation example 2.

In the case of video clip 1.2, the puppy's caretaker touches or is touching the puppy's tail and/or pulls the puppy by the tail. The cameraman records on the digital camera the movement and response of the puppy to the caretaker's touching its tail and pulling by its tail. In its response, the puppy twists its tail upwards. The thus recorded video is the input video for the preparation of a video segment with the puppy's response to the touching and/or pulling the tail, and thus for the preparation of implementation example 2.2.

In the case of video clip 1.3, the puppy's caretaker touches or is touching the puppy's head. In doing so, the cameraman records on the digital camera the movement and response of the puppy to the caretaker's touching of the puppy's head. In its response, the puppy shakes its head left and right. The video recorded in this way is the input video for the preparation of a video segment with the puppy's response to the touching of the head, and thus for the preparation of implementation example 2.3.

In the case of video clip 1.4, the puppy's caretaker touches or is touching one of the legs or paws of the puppy, which means the caretaker's command, that the puppy offers to the caretaker the respective leg/paw for handshake, which the caretaker has touched. In its response, the puppy sits on its buttocks and then offers for handshake the respective leg or paw, which the caretaker has touched. In doing so, the cameraman records on the digital camera the movement and response of the puppy to the caretaker's touching of the puppy's paw and the command for handshaking with the paw. The video recorded in this way is the input video for the preparation of a video segment with the puppy's response to the touching of the paw/leg and to the command for handshaking, and thus for the preparation of implementation example 2.4.

In the case of video clip 1.5, the puppy's caretaker touches or is touching the back of the body or the buttocks of the puppy. This means the caretaker's command to the puppy to sit down on the floor on its buttocks and to sit on the floor. The cameraman records on the digital camera the movement and response of the puppy to the caretaker's touch on the puppy's buttocks. The video recorded in this way is the input video for the preparation of a video segment with the puppy's response to the touching of buttocks and the command to sit down on the ground, and thus for the preparation of implementation example 2.5.

In the case of video clip 1.6, the puppy's caretaker touches or is touching the upper part of the body or the back of the puppy. This means the caretaker's command to the puppy to lie down on the floor on its stomach and paws and thus to lie on the floor. In doing so, the cameraman records on the digital camera the movement and response of the puppy to the caretaker's touching the puppy's back. The video recorded in this way is the input video for the preparation of a segment of the video with the puppy's response to the touching of its back and the command to lie down on the ground with its paws and belly, and thus for the preparation of implementation example 2.6.

In the case of video clip 1.7, the puppy's caretaker touches or is touching the lower side of the body or the belly of the puppy. This means the caretaker's command to the puppy to lie down on the floor on its back and thus to lie on the floor. In doing so, the cameraman records on the digital video the puppy's movement and response to the caretaker's touch on the puppy's belly. The video recorded in this way is the input video for preparing a segment of the video with the puppy's response to the touching of its belly and the command to lie down on the floor with its back, and thus for the preparation of implementation example 2.7.

In the case of video clip 1.8, the puppy's caretaker gently slides with his palm or fingers anywhere on the puppy's body and/or in any direction, which means the caretaker's continuous caressing of the puppy. The puppy responds to this caressing by continuously shaking its tail and/or head. During this time, the cameraman records on a digital camera the movement and response of the puppy to the caretaker's continuous caressing. The video recorded in this way is the input video for the preparation of a segment of the video with the puppy's response to the caretaker's continuous caressing, and thus for the preparation of implementation example 2.8.

In alternative implementation examples we record the movement and responses of other animals to stimulus actions by caretaker, such as, e.g., movement and responses of a kitten, sheep, cow, horse and any other domestic, domesticated and/or wild animals. In special implementation examples according to this invention, we record the responses of various devices and/or machines to commands of their user or operator.

In a preferred implementation example, we capture a video recording in the form of a video clip or film, as we can record it directly with a camera without any handling or processing, which is also called by the English term RAW format of video (hereinafter: RAW format) or, in Slovene, raw video format, and means a recording that is stored on the camera without reduction or the so-called compaction or compressing of the size of the video file (English: compressing) and also without other handling/processing, as implemented in the state of the art, if necessary, on the video clip or film either already with the camera itself or in other similar way. For this, it is most appropriate to choose a suitable camera, which enables recording in the RAW format. Among suitable cameras are, e.g. digital cameras as known in the state of the art, namely, e.g. Blackmagic URSA Mini 4.6K EF by producer Blackmagic Design or Fujifilm X-T3 by producer Fujifilm Holdings Corporation. On the said cameras we record the video or film on the electronic media as known and used in the state of the art, which are selected e.g. from media, such as CFast 2.0 cards from various manufacturers (e.g. cards with label SanDisk Extreme Pro CFast 2.0 SDCFSP-128G-x46D from producer Western Digital Corporation or Sony CFast 2.0 G Series CAT-G64 from producer Sony Corporation). Alternatively, we can also select SD cards (e.g. cards with label SanDisk Extreme Pro UHS-II 300 MB/s SDXC from producer Western Digital Corporation) or the so-called memory drives with label SSDs together with peripheral implements, where drives SSDs are used in the case of longer video recordings (e.g., SSDs with label SanDisk Ultra 3D or with label WD Blue 3D NAND SATA SSDs, both from producer Western Digital Corporation).

In alternative implementation examples of this invention, we can choose cameras on the smartphones for capturing video, which enable video capturing as known in the state of the art, knowing that the video recorded in this way is not in the RAW format, which may consequently affect the quality of the recorded video and later on, the quality of displaying the interactive electronic object on the screen. In alternative implementation examples according to this invention, we capture the video also in other digital video formats.

When a sufficient amount of video is captured, we transfer the digital video from the electronic storage medium of the camera to a computer, as described in this implementation example, and then handle or process the video on the said computer by using a program for video processing, which is selected from programs as known in the state of the art.

The transfer of video from the electronic storage medium of the camera to the computer, preferably to the hard disk of the computer, is implemented with procedures and by using devices and gadgets, which are at least one or more peripheral units for transferring of the recorded video and/or film from the electronic storage medium of the camera to at least one or more computers. Said peripheral unit is selected, depending on the electronic storage medium of the camera used, from the card readers, as known and used in the state of the art. In the case of using different types of cards, among which are e.g. cards SD, we can choose for transfer of the video or film from various camera memory cards to the computer's hard disk also one of the universal card readers, as are known and used in the state of the art, e.g., card readers with label Hama USB 3.0 Multi Card Reader, SD/microSD/Cf from producer Hama Ltd. In the case of using a card CFast 2.0 as an electronic storage medium of camera, e.g. with label SanDisk from producer Western Digital Corporation, we select the card reader CFast 2.0 with label SanDisk USB 3.0 CFast 2.0, also from producer Western Digital Corporation. By means of the said card reader we transfer the said video via interface USB 3.0 (USB is the universal serial bus used to connect the computer to its peripheral units, devices and/or components), which is the USB interface with the so-called super speed of data transfer, which amounts to 5 Gbit/s, to the computer's hard disk, on which we then handle or process the video or film according to the below described procedure in order to prepare an interactive electronic object according to the present invention. For handling or processing of video in this implementation example we used the program Adobe® After Effects from the company Adobe Systems. This program is used in the state of the art in the field of film industry and television production for post-production processes and procedures in the production of films and television shows, and enables digital handling or processing of film or video clips in a digital form.
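
As a purely illustrative calculation of the above transfer rate (the card capacity is an assumption of this note, not a value taken from the example): the nominal super speed of 5 Gbit/s corresponds to roughly 625 MB/s, so transferring the full contents of a 64 GB card would take at least about 64,000 MB / 625 MB/s ≈ 102 seconds in theory; in practice the sustained read speed of the card and of the card reader is lower, so the transfer takes correspondingly longer.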

In this implementation example or embodiment, as shown schematically in figure 2, which schematically shows the steps of the entire procedure according to this implementation example, we implement the post-production handling or processing of said recorded video of the object in a digital form, where this processing comprises editing of the video clip or film, which comprises:

- procedure of extracting colours or colour keys,

- designing of motion graphics and animation,

- designing of visual effects,

- designing of composition of the video in the passage of time, and

- editing and designing of colours, colour tones, brightness and contrasts.

Here we use various techniques that are known in the state of the art and used in the field of film or cinematic industries and technologies. Most preferably, we use techniques from the field of post-production in the film industry and for this choose the above stated program Adobe® After Effects.

According to the captured video or film we use different steps, procedures and techniques. In this implementation example we first implement the procedure of extracting colours or the so-called procedure of chroma keying (English: chroma keying or chroma key or colour keying or green screen or blue screen), which is one of the post-production procedures and techniques as known in the state of the art, where we extract certain colours from the video and we thus exclude from the background of the video or from the surroundings of the object, which is shown in the video or in the film, all those elements, figures and/or characters that we do not want to show in the final video. Usually in this procedure, depending on the colour of the object itself, which we capture or record with the video or film, blue or green colour of the background is used. At the same time, if necessary, we eliminate with this procedure unclear or blurred elements in the surroundings of the recorded object, which arise due to different speeds of moving of the object and of the surrounding elements in the video. We then perform a procedure of designing of motion graphics and animation, a procedure of designing of visual effects, a procedure of designing of video composition in the passage of time, a procedure of editing and designing of colours, colour tones, brightness, and contrasts.
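
The chroma-keying step described above can be sketched, in a strongly simplified form, as a per-pixel test that makes a green background transparent; the dominance threshold, the method name and the use of Java's BufferedImage are assumptions of this sketch and not part of the described procedure, which in this example is carried out with Adobe® After Effects.

    import java.awt.image.BufferedImage;

    public class ChromaKey {
        // Makes pixels transparent where green clearly dominates red and blue,
        // which is the basic idea behind extracting a green background.
        public static BufferedImage keyOutGreen(BufferedImage src, int dominance) {
            BufferedImage out = new BufferedImage(
                    src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
            for (int y = 0; y < src.getHeight(); y++) {
                for (int x = 0; x < src.getWidth(); x++) {
                    int rgb = src.getRGB(x, y);
                    int r = (rgb >> 16) & 0xFF;
                    int g = (rgb >> 8) & 0xFF;
                    int b = rgb & 0xFF;
                    boolean background = g - Math.max(r, b) > dominance;
                    int alpha = background ? 0 : 0xFF;
                    out.setRGB(x, y, (alpha << 24) | (r << 16) | (g << 8) | b);
                }
            }
            return out;
        }
    }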

In an alternative implementation example, as shown schematically in figure 3, which schematically shows the steps of the entire procedure according to this alternative implementation example, we can change the sequence of said procedures so that, e.g., we implement the procedures in the order:

a procedure of extracting colours,

a procedure of designing of video composition in the passage of time,

a procedure of designing of motion graphics and animation,

a procedure of designing of visual effects and

a procedure of editing and designing of colours, colour tones, brightness and contrasts.

The process of post-production handling or processing of video or the film herein comprises, among others, also the following procedures and steps, if necessary:

- Editing of video, where we edit the shape of the object captured in the video and the way of its movement and/or movement in response to certain characteristic input stimulus actions (hereinafter: response movements) and, if necessary, other elements, captured in the video, which accompany the captured object. In a preferred embodiment of this invention, we exclude the other elements from the video;

- Editing of video by defining and designing of key characteristic movements and/or response movements of the object captured with the video or the film, which we define, determine, and design in the following herein described procedures for preparation of the interactive electronic object as key output characteristic response actions in the form of characteristic video segments of the object to characteristic input stimulus actions of the user and/or operator of such an object, as described herein in the following;

- Designing of special effects in the appearance and movement of the object captured in the video, comprising animation of the object, where we, if necessary, improve and enhance the display of key characteristic movements of the object captured in the video sequence of the characteristic output action of the interactive electronic object;

- Designing of colours, colour tones or shades, colour contrasts and brightness (English term: colour grading) in the video sequence of a characteristic output action of the interactive electronic object;

- And, if necessary, also other procedures as are known in the state of the art in the field of film and video post-production.

In special alternative implementation examples, we can arbitrarily combine the above stated steps depending on the respective goals of the post-production design of the video object and its visual appearance during playing.

After the completion of the procedure of colour extraction and of other post production procedures of handling or processing of video, as stated above, we make a plan for handling or processing of video, where we take into account the sequence of steps implemented in the said video processing and the algorithm of these, and then we enter them into the program code according to the below described procedure of programming. Before the programming procedure, we perform the procedure of video segmentation.

When we finish with the said procedures of post-production of the video or film, we begin the procedure of segmenting the video or film, in which we determine the key segments or sections of video or film according to the film parameters of the video and according to the digital and computer characteristics/parameters of the video. In short, this procedure is also specified as arranging or editing of video (English: editing), wherein we all the time take into account the limitations in programming of such video segments as characteristic output response actions of the interactive electronic object according to this invention and input signals, which are input characteristic stimulus actions, that are preferably performed by the user on the screen with a touch, where this screen is a touch sensitive and responsive screen on the user's electronic device, and of interconnections of individual predetermined characteristic input stimulus actions of the user and of individual characteristic output response actions of the interactive electronic object according to this invention, which are playing of individual predetermined key characteristic video sequences of the response movement of the object according to this invention.

In the said step of video segmentation, we define or we specify the key segments of the video, which during playing the video affect the perception or detection of continuity and smoothness in the motion and/or response of the object as shown in the video.
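
The result of this segmentation step can be thought of as a table of key video segments; the sketch below records such a table with assumed field names, segment labels and frame numbers, which are illustrative only and not taken from the description.

    import java.util.List;

    // Illustrative only: field names, segment labels and frame numbers are assumptions.
    public record KeySegment(String label, int startFrame, int endFrame) {

        // Example segment table for a video recorded at an assumed 25 fps.
        public static List<KeySegment> exampleTable() {
            return List.of(
                    new KeySegment("idle loop",          0,   124),
                    new KeySegment("response to touch",  125, 249),
                    new KeySegment("return to idle",     250, 299));
        }
    }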

After the implemented procedures of post-production and segmentation of video/film, we export the video clip or film into any arbitrary compressible format or codec, the so-called procedure of converting into code or encoding of video, wherein, among others, we optionally set the speed of animation or of motion of individual images (English: frame rate), distinctness or resolution of images of the recording (English: resolution), that is, accuracy of the display of images in relation to the number of displayed dots in the image (English: pixel), the size of the amount of data or the number of bits in time (English: bitrate). If necessary, we can optionally set also other parameters of a digital video, as are known in the state of the art. With the encoding procedure we encode the above described defined or specified segments of the video in a way, in which we enter or programme the specified video segments with the so-called encoding process, with which we reduce or compress the size that the video or the film takes up during storage in the digital form in the storage medium, which is usually the hard disk. With the help of a codec we convert into code/encode the video or film in the RAW format into another format, which takes up less memory space or electronic storage space in the storage medium, such as, e.g., on the hard disk. With the encoding process we thus reduce the video size to a more acceptable size. Said codec is selected from codecs as known and as used in the state of the art in procedures of converting video into code or the so-called video encoding. Preferably, we choose open source codecs (English: open source codec), which are, among others, selected from codecs known in the state of the art, such as VP9, Xvid, x264 (for encoding videos in the formats H.264/MPEG-4 AVC) and Lagarith.
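
One possible way of carrying out such an export with an open-source codec is to call an external encoder such as ffmpeg; ffmpeg is not named in the description, so the tool, the file names and the chosen settings below are assumptions of this sketch, which merely illustrates setting the frame rate, resolution and codec discussed above.

    import java.io.IOException;
    import java.util.List;

    public class ExportToCodec {
        // Encodes an exported RAW/intermediate clip with the open-source x264 codec
        // via ffmpeg. The tool, file names and settings are assumptions of this sketch,
        // not requirements of the described procedure.
        public static void main(String[] args) throws IOException, InterruptedException {
            List<String> cmd = List.of(
                    "ffmpeg",
                    "-i", "raw_export.mov",   // exported RAW FOOTAGE project (assumed name)
                    "-r", "25",               // frame rate (frames per second)
                    "-s", "1920x1080",        // resolution (full HD)
                    "-c:v", "libx264",        // open-source H.264 encoder
                    "-crf", "18",             // quality setting that influences the bitrate
                    "encoded_clip.mp4");
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            System.out.println("ffmpeg exited with code " + p.waitFor());
        }
    }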

The encoding procedure is followed by the procedure of encoding or programming or recording into the programming language of the interactive electronic object and of its operation on the user's electronic device, which comprises encoding of the recognition of each individual predefined input stimulus action of the user on the user's electronic device, its connection with each individual predetermined characteristic output response action of the interactive electronic object, corresponding to individual input stimulus action, in such a way that the predetermined characteristic video segments of this object are displayed and played on the screen of the user's electronic device, and that upon the recognition of input stimulus action of the user in the real time, the command of recalling of the corresponding characteristic video segment of this object as output response action of this interactive electronic object according to this invention is implemented, and following this, the command of playing this video segment in the real time is implemented.
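
The connection between the predetermined characteristic input stimulus actions and the characteristic video segments that answer them can be sketched as a simple lookup table; the enum constants and segment labels below are assumptions of this sketch, loosely based on the puppy examples 1.1 to 1.4 described in this Implementation example.

    import java.util.Map;

    public class ResponseDispatcher {
        // Predetermined characteristic input stimulus actions of the user (assumed names).
        enum InputAction { STROKE_BODY, TOUCH_TAIL, TOUCH_HEAD, TOUCH_PAW }

        // Which characteristic video segment answers which input action
        // (segment labels are assumptions of this sketch).
        private static final Map<InputAction, String> RESPONSES = Map.of(
                InputAction.STROKE_BODY, "segment_wag_tail_and_head",
                InputAction.TOUCH_TAIL,  "segment_twist_tail_upwards",
                InputAction.TOUCH_HEAD,  "segment_shake_head",
                InputAction.TOUCH_PAW,   "segment_sit_and_offer_paw");

        // Called once an input action has been detected and recognized;
        // returns the segment that must be recalled and played immediately.
        static String respondTo(InputAction action) {
            return RESPONSES.get(action);
        }
    }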

In the said process of programming into the program code according to this invention, we, among others, write the course of the sequence of images in the video or film in the passage of time, in which we specify, determine and define all visual parameters of the video/film in the passage of time, comprising the shape of the interactive electronic object that displays on the screen of the user's electronic device, and by changing the shape of this object on the said screen in the passage of time, which comprises the individual respective characteristic response actions of this object with a characteristic response movement, which comprises its response movements, gestures and/or reactions to a precisely determined individual characteristic input stimulus action of the user of the said electronic device. At the same time, we program predetermined characteristic or key characteristic movements and thus response movements of the interactive electronic object, prepared according to this invention, captured by video or film, which we, in herein described procedures of the preparation of the interactive electronic object, define, determine and design as key output characteristic response actions in the form of characteristic video segments of the object to the characteristic input stimulus actions of the user and/or operator of such an object. In doing so, we connect these actions with each other, that is, the input stimulus and the output response actions, in the process of recording into the program code in such a way, that we connect individual characteristic input stimulus actions of the user on the user electronic device with the herein described individual characteristic output response actions of the interactive electronic object, prepared according to the herein described procedure. With this we connect a precisely predetermined individual output response action of the said interactive electronic object, which is displayed and played on the screen of the user electronic device as a precisely predetermined characteristic video segment of the object in the passage of time, together with a precisely predetermined individual input stimulus action of the user, which the user performs on the user electronic device. In the said characteristic segment of the video, we, namely, connect together the sequence of recorded characteristic movements of the object, captured in the video, which we define as a sequence of movements, and we connect together and design into one individual and predetermined characteristic output action of the interactive electronic object or into a so-called video sequence of one characteristic output action of the interactive electronic object. In doing so, we design, define, and classify such a video sequence as one of the possible characteristic and predetermined output actions of the interactive electronic object. With this we also determine and define the initial or starting shape or position of the said object or the so-called position p0 and the final shape or position of the said object or the so-called position pK, which is a respective position of the object after performing the output response action of the object, that is, after playing a characteristic sequence of an individual response action. This position pK can be completely fluid, which means that it is the position of the object at each respective moment of the stoppage of the playing of a characteristic video segment of the response action of the object. In alternative implementation examples according to this invention, such an end position pK of the object can be a predetermined position, which is either the position of the object at the beginning of the use of the interactive electronic object on the user electronic device and thus always the same position p0, or it is a precisely predetermined end position of the object, which this object takes at the end of the playing of a precisely determined individual segment of the video of the characteristic output response action of the object marked with sign N or the so-called pN position, which is, further on, a position of the object pN+1 at the end of playing of a precisely determined further individual segment of the video of the characteristic response action of the object marked with sign N+1.
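
The relationship between the starting position p0, the end position pK of a played segment and the starting position of the next segment can be sketched as a simple consistency check; the idea of storing a named start and end position per segment, and the names used, are assumptions of this sketch.

    import java.util.List;

    public class PositionChain {
        // A segment starts with the object in startPosition and, after playing,
        // leaves it in endPosition (the position pK described above).
        record Segment(String name, String startPosition, String endPosition) {}

        // Verifies that segment N ends in the position in which segment N+1 starts,
        // so that successive response actions join without a visible jump.
        static boolean chainsSmoothly(List<Segment> ordered) {
            for (int n = 0; n + 1 < ordered.size(); n++) {
                if (!ordered.get(n).endPosition().equals(ordered.get(n + 1).startPosition())) {
                    return false;
                }
            }
            return true;
        }
    }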

For implementation of the described procedure or process of programming into the program code we use programming languages, selected from the programming languages, as are known in the state of the art. Most preferably, we use the programming languages C sharp (label: C #), JavaScript, Ruby, etc.

Depending on the specified video segments we select and determine the video algorithms by taking into consideration the intended contents and extent of interaction of the user of the video clip of the object with the said object, which is displayed on the electronic screen within such a video. For example, when touching a specific location of the interactive electronic object, displayed on the electronic screen, the program code recalls from memory of the user electronic device, on which the programmed record of the interactive electronic object according to this invention is downloaded, configured and stored in such a way, that the user's electronic device detects and recognizes the individual characteristic input stimulus action of the user, when this is equal to one of the predetermined and in the said program code recorded characteristic input actions of the user, such as, e.g. the actions of the user in the below in the Implementation example 2 described implementation examples 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7 and 2.8 and the outputs indicated in the above in this Implementation example 1 specified examples 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7 and 1.8, and connects it with the individual one, and while doing this recalls the correct predetermined, to this input stimulus action corresponding characteristic video segment of the object with the characteristic output response action of the said interactive electronic object or the correct position of this object, which we determined in the programming procedure, and wherein precisely that individual video segment of the object is displayed on the screen without delays, which are characteristic for the decoding process of an encoded video. Such delays, which are characteristic for the decoding process of the encoded video, are delays, which occur when the video played on the screen of the user device does not display the precisely recalled video segment or its individual scene or image (English: frame) of the object in the real time or in the said program code in a predetermined time interval, where such a scene or image is a consecutive part in a sequence of scenes or images of the object, which together display the smooth movement and response of the object in the passage of time in the video segment. In the case of the said delay, the said image/scene is displayed either with a delay or with a time lag and thus with a slow playing of the video segment.

Additionally, too fast video playing can occur, where individual scenes or images play too fast or overtake the playing of the sequence of scenes or images recorded in the program in a time interval. In this case, the said image/scene is displayed with overtaking in time and thus by playing the video segment too fast.
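
Both the delayed and the too fast playing described above can be detected by comparing the expected display time of each scene or image with its actual display time; the sketch below illustrates this comparison, with the sign convention and the tolerance check being assumptions of this sketch.

    public class FramePacing {
        // Expected display time of frame 'index' for a given frame rate, in milliseconds
        // relative to the start of the video segment.
        static double expectedMillis(int index, double framesPerSecond) {
            return index * 1000.0 / framesPerSecond;
        }

        // Negative drift: the frame is late (delayed playing);
        // positive drift: the frame is early (playing too fast).
        static double driftMillis(int index, double framesPerSecond,
                                  long segmentStartMillis, long actualDisplayMillis) {
            double expected = segmentStartMillis + expectedMillis(index, framesPerSecond);
            return expected - actualDisplayMillis;
        }

        // True if the frame was shown close enough to its programmed time.
        static boolean withinTolerance(double driftMillis, double toleranceMillis) {
            return Math.abs(driftMillis) <= toleranceMillis;
        }
    }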

With the herein described procedure we combine the said steps and techniques and obtain the final product, which is the interactive electronic object or the interactive electronic figure or character, that is displayed preferentially on the electronic screen of the user's electronic device, which is sensitive to a touch, wherein such an object/ figure/ character gives the impression of an object, which is captured in the real time and in the real world and with the ability to respond and react to the user's input stimuli or input stimulus actions or input commands. This is described in more detail in the Implementation example 2.

The above stated procedures of processing or handling of the video of the captured object, including its post-production processing, which comprises the procedure of colour extraction, designing of motion graphics and animation, designing of visual effects, designing of video composition in the passage of time, editing and designing of colours, colour tones, brightness and contrasts, and the above stated procedures for preparing the interactive electronic object according to this invention, which comprises the procedure of exporting video into a compressible format, procedure of video segmentation by defining key video segments, procedure of converting of video into code by encoding process, and procedure of programming of the interactive electronic object, are implemented in a system of electronic devices and units, which comprise, among others, at least:

a computer, which comprises:

o at least one or more processor units, as known and used in the state of the art and which are selected, among others, from processors, such as, e.g. Intel® Core™ i7-5930K from producer Intel Corporation (with a base processor frequency of 3.50 GHz and a maximum turbo frequency of 3.70 GHz and a convenient memory or electronic memory in the size of 15 MB or the so-called pre-memory (English: cache) for frequently used operational actions and/or data) or newer processors than this one,

o at least one or more permanent memory units or the so-called hard memory or a hard disk (English: hard disk) with the available size of electronic memory of at least 1 GB,

o at least one or more temporary memories or read-write memories (English: random-access memory; hereinafter abbreviation RAM) with a minimum recommended amount or size of electronic memory of 16 GB DDR4 (that is, the so-called fourth generation, where RAM is synchronous dynamic with double speed of data storage in the range of 200 to 400 MHz) or newer,

o at least one or more graphics units, which is selected from graphics units as known and as used in the state of the art on computers for processing or handling of videos and/or films in the digital form. Said graphics unit according to this invention is selected, among others, from graphics units or graphics cards such as, e.g., at least GeForce GTX 1080 Ti from producer NVIDIA Corporation or newer,

o at least one or more peripheral units of the said computer or the so-called peripheral computer units, which is selected from peripheral units, as known and used in the state of the art, among which are, inter alia, a computer mouse and keyboard, and, if necessary, an electronic pen or the so-called stylus for writing on a computer screen, sensitive to a touch, in combination with a touch sensitive screen,

o at least one or more computer screens for displaying and playing video and/or film in a digital form and for viewing and controlling the visual appearance of said video during the post-production process described herein.

If necessary, the said computer may in alternative implementations of this example also comprise:

o at least one or more microphones and/or loudspeakers for recording sound and sound effects and for playing sound and sound effects in videos and/or films, as known and used in the state of the art,

o and at the same time at least one or more sound cards, which enables also the post-production handling or processing of sound in the video clip or film. Such a sound card is selected from sound cards as known and used in the state of the art, such as, e.g., sound card Scarlett 2i2 from producer Focusrite PLC.

Said computer is selected from more powerful personal computers, as known and used in the state of the art, among which are, inter alia, e.g. computers MAC from producer Apple Inc. or computers from producer Microsoft, which enable the herein described post-production handling or processing of videos and/or films in the digital form.

The said processing unit serves, in addition to processing or handling of video or film, also as a control unit. It executes commands of computer programs, which are downloaded on the said computer, performs input operations and actions and output operations and actions of such computer programs, which comprises, inter alia, detection, recognition and processing of input commands from the processor of video or film in the process of downloading video/film on the computer, handling or processing and post-production handling or post-production processing of video/film and other commands, processing of computer output actions, comprising visual playing or displaying of video/film on the computer screen, which, if necessary, also comprises the playing of sound and sound effects, and also comprises all displays of changes in video/film during its post-production handling or processing. In addition, the processor unit controls the processes of the computer itself. Such a processor unit is selected from processors, as known and as used in the state of the art for more powerful personal computers, among which are, inter alia, e.g. at least the above stated processor Intel® Core™ i7-5930K from producer Intel Corporation or newer processors.

The said computer is equipped with software and configured to enable processing or handling of video or film according to this invention, that is, it enables the implementation of processes, procedures and steps of post-production of video or film according to this invention, comprising graphics processing of video and/or film, and preparation of the interactive electronic object according to this invention, monitoring and control of these processes. The software of the operating system of the said computer is, inter alia, selected from operating systems (hereinafter also: OS) as known and as used in the state of the art, among which are e.g. operating system or OS with label macOS from producer Apple Inc. or OS Windows from producer Microsoft.

For performing the post-production process, the said computer is equipped with at least one program for video handling or processing, which is selected from programs, as known in the state of the art, and which enable digital handling or processing of film or video clips in the digital form, including e.g. editing of video clips, designing of visual effects, motion graphics, animation and video composition, editing and designing of colour tones, brightness and contrasts, etc. In the present implementation example, we used the program Adobe® After Effects from producer Adobe Systems, which is used in the state of the art in the field of film industry and television production for post-production processes and procedures in the production of films and videos and television shows. The said program Adobe® After Effects is downloaded either on a computer with storage on the hard disk or the so-called hard memory or the so-called permanent memory unit, so that the video or the film is processed or handled post-productionally without the internet connection (English: offline) or in the so-called island manner, or by downloading it and using it via the Internet cloud, so that the video or film can be processed or handled post-productionally with the internet connection (English: online).

The said computer screen is selected from computer screens, as are known and as used in the state of the art in the range of more powerful computers. Most preferably, these are thin-film screens of LED technology, that is, technology of light emitting diodes.

According to this Implementation example 1, we prepare the final product of this procedure or of the combination of the herein described processes, procedures and steps, which is an interactive electronic object in its final shape, that is written in the form of a computer program or a computer program code in one of the programming languages, as known in the state of the art, which are selected, among others, from the programming languages C #, JavaScript, Ruby, Java, Swift. Such a record of the interactive electronic object in the form of a computer program comprises the detection and recognition of individual characteristic input stimulus actions of the user and the connection of these with the individual characteristic output response actions of the interactive electronic object, that are corresponding to them, which are displayed and played on the screen of the user's electronic device in as short time as possible or in the real-time after the detection and recognition of the input stimulus action, namely, in the form of an individual characteristic video sequence, which is displaying an individual characteristic output response action of the interactive electronic object.

In the Implementation example 1, the said interactive electronic object, according to the above described examples of recordings 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7 and 1.8, is in the final shape of a puppy or a dog, which is an interactive electronic puppy or a dog, that is displayed and played on the screen of a user electronic device, which is most preferably a portable smartphone, in the form of individual characteristic video segments of the motion and response or reacting or conduct of the interactive electronic puppy or dog.

Such an interactive electronic object in the shape of a puppy in combination with the electronic device of the user, which comprises a touch sensitive and responsive electronic screen, as described in the Implementation example 2, detects or sensors, recognizes and responds in the real time to the recognized characteristic input stimulus actions of the user of the device, among which are, most preferably, different types of touches in different areas or on different parts of the interactive electronic object in the shape of the puppy or the dog, that is displayed and played on the screen of the said device, and that responds to said input stimulus actions of the user in such a way, that a characteristic video segment of the output response action of the interactive electronic object in the shape of the puppy/ dog is displayed and played on the electronic screen of the user electronic device in the real time after detecting and recognizing the characteristic input stimulus action of the user, which is corresponding to each characteristic input action of the user.

In alternative implementation examples, the said interactive electronic object may be in the shape of a kitten or cat, in the shape of a horse, cow, bull, sheep, goat and/or other domestic, domesticated and/or wild animals. In the alternative embodiments according to this invention, the interactive electronic object within the meaning of this invention can be in the shape of devices, machines, and/or any other objects from the real world and/or from the fictional world, as are known in the state of the art.

Implementation example 2

An example of the operation and use of the interactive electronic object made according to this invention on the user's electronic device

In the operation and use of the interactive electronic object, made according to this invention, which is the final product obtained after implementation of the processes, procedures and steps according to the Implementation example 1, on the user's electronic device, this concerns, within the meaning of this invention, the interaction of the user with the interactive electronic object and the response of the electronic device of the user in such a way, that the response action of the user's electronic device comprises playing of a segment of video on the user's electronic device screen, where said screen is sensitive and responsive to various types of touches made by the user on such an electronic device screen.

In this case, the said electronic device of the user consists of at least the below stated units and elements and is equipped with software, that is, with computer programs or computer program codes, which are downloaded on it, and which is configured with them in such a way, so as to enable the implementation of below specified and described user interactions with the said electronic device. This comprises, among others, the input of the user's input stimulus actions via the screen, sensitive and responsive to the touch, of the said electronic device, the detection of these input stimulus actions and their recognition or categorization by the user's electronic device, and performing the below specified and described response actions of the said electronic device to the user's input stimulus actions, wherein these response actions are implemented on said user's electronic device screen in such a way, that corresponding to the predetermined recognized characteristic input stimulus actions of the user on the said user's electronic device, each time when such a characteristic input stimulus action is detected and recognized, the predetermined characteristic output response action of the user's electronic device is performed, appropriate for each said characteristic input stimulus action.

Said predetermined characteristic output response action of the user's electronic device most preferably comprises displaying or playing of a precisely predetermined segment of the video on said screen of the user's electronic device, where such a response action corresponds to the detected and recognized input stimulus action of the user. In alternative implementation examples according to this invention, such an output response action of the electronic device is selected also from the operations of the user's electronic device as are known in the state of the art.

The interactive electronic object prepared according to the above described Implementation example 1 is recorded in the form of a computer program or computer program code in one of the programming languages known in the state of the art, which are selected, among others, from the programming languages C #, JavaScript, Ruby, Java, Swift. Such a record of an interactive electronic object in the form of a computer program is downloaded on the user's electronic device via a web connection.

In this implementation example, such an electronic device is a portable smartphone, which is selected from the portable smartphones, which are known and used in the state of the art.

Such a smartphone must have at least the following units, elements and technical characteristics:

A processing unit, which is also a control unit and serves to execute commands of computer programs, which are downloaded on the said smartphone, to perform input operations and actions and output operations and actions of such computer programs, which, among others, comprises detection, recognition and processing of user input commands and other commands and processing the output actions of the user's smartphone, and to control the processes of the smartphone itself, to perform input commands and output actions and thus to interact with the user of the smartphone, where such a unit is selected from processors, as are known in the state of the art for use on smartphones, among which are e.g. ARM Cortex-A7 or ARM Cortex-A55, which are microprocessors manufactured by ARM Holdings, microprocessor Exynos Mongoose M3 or the so-called M3 microprocessor from producer Samsung or A12 Bionic from producer Apple Inc.,

Read-write memory or the so-called temporary memory (English: Random-access memory; hereinafter also abbreviated: RAM), which is of the recommended quantity or size of memory or electronic memory of at least 1 GB or more,

A graphics unit for decoding video, which is selected from graphics units, as are known and as are used in the state of the art for portable smartphones. In alternative implementation examples according to this invention, the video on the portable smartphone can also be decoded by the processor unit of such a phone, but then the fluidity of the video is poor or the fluidity of video playing is poor,

- A unit of permanent memory on the portable smartphone, which must have at least 50 MB or more of free memory,

- An operating system (hereinafter abbreviated: OS), which is selected from operating systems known and used for portable smartphones in the state of the art. Most preferably, in the implementation of this invention, the OS is selected from operating systems by producer Google, such as, e.g., at least OS Android 4.4 (also called Android 4.4 “KitKat”) or a newer OS, or from operating systems by producer Apple Inc., such as, e.g., at least iOS 10.3.3 or a newer OS.
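The two quantitative requirements above (at least 50 MB of free permanent memory and a minimum OS version) could be checked on the device with a minimal sketch such as the Swift below, under the assumption that the object runs on iOS; the helper name and thresholds are illustrative only. On Android, a comparable check against Android 4.4 (API level 19) would be performed instead.

import Foundation

// Minimal sketch (hypothetical helper, not the claimed program code): checks
// at least 50 MB of free permanent memory and an OS version of at least 10.3.3.
func meetsMinimumRequirements() -> Bool {
    // OS version check.
    let minimumOS = OperatingSystemVersion(majorVersion: 10, minorVersion: 3, patchVersion: 3)
    let osOK = ProcessInfo.processInfo.isOperatingSystemAtLeast(minimumOS)

    // Free permanent-memory (storage) check: at least 50 MB must be available.
    let attributes = try? FileManager.default.attributesOfFileSystem(forPath: NSHomeDirectory())
    let freeBytes = (attributes?[.systemFreeSize] as? NSNumber)?.int64Value ?? 0

    return osOK && freeBytes >= 50 * 1_000_000
}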

According to this invention, the user's interactions with the interactive electronic object on the user's portable smartphone screen and the output response actions of such a phone comprise at least playing of predetermined video segments on the screen of such a portable smartphone, which are predetermined as corresponding to predetermined characteristic input stimulus actions of the user, which comprise at least the below described examples 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7 and 2.8.

In the case where the interactive electronic object in the final shape within the meaning of this invention is a puppy, that is, the final product after the execution of the Implementation example 1, such an interactive electronic object in combination with the user's electronic device, which comprises an electronic screen sensitive and responsive to a touch, detects or senses, recognizes and responds in real time to the recognized characteristic input stimulus actions of the user of the device, among which are, most preferably, different types of touches on different areas or parts of the interactive electronic object in the shape of a puppy or a dog displayed and played on the screen of the said device, and responds to said input stimulus actions of the user by displaying and playing on the electronic screen of the user's electronic device a characteristic video segment of the output response action of the interactive electronic object in the shape of a puppy/dog, in real time after detecting and recognizing the characteristic input stimulus action of the user and corresponding to each respective characteristic input action of the user.

Thus, the interactive electronic object in the shape of a puppy/dog, which was captured as an object for the preparation of an interactive electronic object in the Implementation example 1, is displayed on the electronic screen of the smartphone. The user touches on the said electronic screen, e.g., the head, the body or torso, or the tail of the puppy displayed on the screen. The sensor of the screen recognizes this touch and, according to the input interaction, that is, depending on whether the head, the body or the tail was touched, the program code downloaded on the portable smartphone detects and recognizes the position or location of the touch and the type of the touch on the interactive electronic object, recalls the video segment corresponding to this position and type of touch, and triggers the playing of this video segment on the said screen.
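A minimal Swift sketch of this touch-to-segment dispatch is given below, continuing the hypothetical names introduced earlier; the region rectangles, the hit-testing helper and the segment filenames are illustrative assumptions only, not the claimed program code.

import CoreGraphics
import Foundation

// Hypothetical body regions of the displayed puppy.
enum PuppyRegion {
    case head, torso, tail, rightPaw, leftPaw
}

// Illustrative screen areas (in points) occupied by each region of the puppy.
let regionRects: [PuppyRegion: CGRect] = [
    .head:     CGRect(x: 40,  y: 80,  width: 120, height: 120),
    .torso:    CGRect(x: 160, y: 120, width: 200, height: 160),
    .tail:     CGRect(x: 360, y: 100, width: 60,  height: 80),
    .rightPaw: CGRect(x: 80,  y: 240, width: 60,  height: 60),
    .leftPaw:  CGRect(x: 160, y: 240, width: 60,  height: 60),
]

// Video segment predetermined for a touch on each region (cf. examples 2.2 to 2.5).
let segmentForRegion: [PuppyRegion: String] = [
    .head:     "puppy_shakes_head.mp4",
    .tail:     "puppy_twists_tail_up.mp4",
    .rightPaw: "puppy_sits_raises_right_paw.mp4",
    .leftPaw:  "puppy_sits_raises_left_paw.mp4",
    .torso:    "puppy_sits_down.mp4",
]

// Recognize the touched region from the touch location and return the segment
// that should be recalled and played on the said screen.
func segmentForTouch(at point: CGPoint) -> String? {
    guard let region = regionRects.first(where: { $0.value.contains(point) })?.key else {
        return nil   // touch outside the object: no response action
    }
    return segmentForRegion[region]
}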

In the implementation example 2.1 of the user's touch of the interactive electronic object on the screen of a portable smartphone, which is sensitive and responsive to a touch, where the user tenderly moves his finger over the puppy, which corresponds to caressing the puppy in the real world, a video segment is displayed on the said screen, in which the puppy shakes its tail and/or head.

In the implementation example 2.2 of the user's touch of the interactive electronic object on the said screen of a portable smartphone, in which the user touches the puppy's tail, which corresponds in the real world to touching and/or pulling the puppy's tail, a video segment is displayed on the said screen, in which the puppy twists its tail upwards.

In the implementation example 2.3, when the user touches the interactive electronic object on the said screen of the portable smartphone by touching the puppy's head, a video segment is displayed on the said screen, in which the puppy shakes its head left and right.

In the implementation example 2.4, when the user touches the interactive electronic object on the said screen of the portable smartphone by touching one of the puppy's front paws, which corresponds in the real world to a command that the puppy offer its paw to the user for a handshake and greeting, a video segment is displayed on the said screen, namely, when the user touches the puppy's right paw on the screen, the puppy sits down and raises its right paw, or when the user touches the puppy's left paw on the screen, the puppy sits down and raises its left paw.

In the implementation example 2.5, when the user touches the interactive electronic object on the said screen of the portable smartphone by touching the buttocks or the rear part of the puppy's torso, which corresponds in the real world to the user's command that the puppy sit down on the ground on its buttocks, a segment of the video is displayed on the said screen, in which the puppy sits down on its rear part and remains sitting.

In the implementation example 2.6, when the user touches the interactive electronic object by touching the upper side of the body or the back of the puppy, which corresponds in the real world to the user's command that the puppy lie down on the floor so that it rests on its paws and belly, a video segment is displayed on the said screen, in which the puppy lies down on the floor on its paws and belly and remains lying on them.

In the implementation example 2.7, when the user touches the interactive electronic object by touching the bottom side of the body or the puppy's belly, which corresponds in the real world to the user's command to the puppy to lie down on the floor with its body so that it lies on its back, a video segment is displayed on the said screen, in which the puppy lies down on the floor and lies on its back.

In the implementation example 2.8, when the user touches the interactive electronic object wherever the object is located on the screen and moves his finger on the object in such a way that he slides his finger over the object in one direction or in both directions, which corresponds in the real world to the user's caressing of the object, that is, the puppy, and thus to the user's command that the puppy react or respond to the caressing and move in a way that shows the puppy's reaction to caressing, a segment of video is displayed on the said screen, in which the puppy shakes its tail and/or head. This segment of the video is played in a so-called loop, repeating itself indefinitely for as long as the user continues moving or sliding his finger on the object on the screen, or until he stops touching the screen with his finger. At the moment the contact of the finger with the screen is interrupted, such repeated playing of the video segment stops and the puppy returns to each respective original position, that is, to the position the puppy had on the screen before the user began sliding his finger on the object.
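The looping behaviour of example 2.8 can be sketched as a small state machine. The Swift below is an illustrative assumption (the type names, the player protocol and its methods are hypothetical), showing only the begin-slide and end-slide logic, not the actual claimed program code.

import Foundation

// Hypothetical abstraction of the video player used by the interactive object.
protocol SegmentPlayer {
    func playLooping(_ fileName: String)       // repeat the segment indefinitely
    func stopAndShow(frameOf fileName: String) // stop and display a still frame
}

// State machine for example 2.8: while the user slides a finger over the puppy,
// the caressing segment loops; when the finger is lifted, playback stops and the
// object returns to the position it had before the sliding began.
final class CaressController {
    private let player: SegmentPlayer
    private let caressSegment = "puppy_shakes_tail_and_head.mp4"
    private let restingFrame = "puppy_original_position.mp4"   // position held before sliding began
    private var isCaressing = false

    init(player: SegmentPlayer) {
        self.player = player
    }

    // Called when a sliding touch over the object is detected and recognized.
    func fingerBeganSliding() {
        guard !isCaressing else { return }
        isCaressing = true
        player.playLooping(caressSegment)
    }

    // Called when the finger leaves the screen or stops sliding over the object.
    func fingerLifted() {
        guard isCaressing else { return }
        isCaressing = false
        // Return to the respective original position held before sliding began.
        player.stopAndShow(frameOf: restingFrame)
    }
}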

The positions of the object p0 and pK are as described in the Implementation example 1, that is, p0 is the initial or starting shape or position of the said object, the so-called initial position p0, and pK is the final shape or position of the said object, the so-called position pK. The said position pK can be completely fluid, that is, the so-called position of the object at each respective moment of the stoppage of the playing of the characteristic segment of the video of the object's response action.

In alternative implementation examples according to this invention, such an end position pK of the object can be a predetermined position, which is either the position of the object at the beginning of the use of the interactive electronic object on the user's electronic device, and thus always the same position p0, or it is a precisely predetermined end position of the object, which this object takes at the end of the playing of a precisely determined individual segment of the video of the characteristic output response action of the object marked with sign N, the so-called position pN, and, further on, the position pN+1 of the object at the end of the playing of a precisely determined further individual segment of the video of the characteristic response action of the object marked with sign N+1.
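A minimal Swift sketch of this position bookkeeping, using the same hypothetical style as the earlier sketches, might look as follows; it simply records which end position (p0, pN, pN+1, ...) the object holds after each segment finishes, so that the next segment can be chosen to start from it. The type and method names are illustrative assumptions, not the claimed program code.

import Foundation

// Hypothetical record of one video segment of a response action: the position
// of the object at its start and the position it leaves the object in at its end.
struct SegmentRecord {
    let fileName: String
    let startPosition: Int   // index of the start position (0 for p0, N for pN, ...)
    let endPosition: Int     // index of the end position (pN, pN+1, ...)
}

// Tracks the current position index of the object across chained segments.
final class PositionTracker {
    private(set) var currentPosition = 0   // the object starts in position p0

    // A segment may only be played if it starts from the object's current position;
    // after it finishes, the object is in the segment's end position.
    func play(_ segment: SegmentRecord) -> Bool {
        guard segment.startPosition == currentPosition else { return false }
        currentPosition = segment.endPosition
        return true
    }

    // Alternative implementation: return the object to the initial position p0.
    func resetToInitialPosition() {
        currentPosition = 0
    }
}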

The system described herein and the procedure for preparation of interactive electronic objects by video processing for use in the systems of electronic devices and/or electronic toys can alternatively be implemented, manufactured and constructed with appropriate adaptations according to each respectively captured video or film of the object, which is used and handled or processed according to the herein described system and procedure for the preparation of interactive electronic objects within the meaning of this invention.

Within the scope of the invention described and specified herein and as defined in the enclosed claims, other implementations of the system and procedures of said preparation of the interactive electronic objects within the meaning of this invention may be possible, which comprise the herein described and specified technical features of the system and procedure according to this invention, as well as alternatively implementable procedures, processes and their steps performed with the devices and elements of the system described herein or with their alternatives, as could be foreseen by the person skilled in the art, comprising their functionalities and/or their alternative equivalents. Other implementations of such systems and procedures and individual steps of their operation are likewise possible, with various modifications and variations in the combination of the herein described elements and/or devices of the system, in the combination of steps and procedures, and in their technical features. Based on the herein described and explained solutions to the technical problem of the system and procedure for preparation of interactive electronic objects by video processing for use in the systems of electronic devices and/or electronic toys within the meaning of this invention, the person skilled in the art can also develop other embodiments of such systems and procedures, which does not change the essence of the invention as described and specified herein and defined in the claims.