

Title:
A 3-D SCULPTING APPARATUS AND A METHOD THEREOF
Document Type and Number:
WIPO Patent Application WO/2017/158584
Kind Code:
A1
Abstract:
The present invention relates to an apparatus for modeling a 3D object during sculpting of said object, the apparatus comprises: (a) a worktop, used as a base for sculpting said 3D model; (b) at least three cameras, incorporated within said apparatus, for capturing images, from at least three different viewpoints, of said object, during the sculpting of said object on said worktop; (c) a processor, electronically connected to said cameras, for receiving and processing said images from said cameras; and (d) communication means, connected to said processor, for transmitting data, related to the processed images, from said processor to an external processing unit for modeling said 3D object, present on said worktop.

Inventors:
RUBINCHIK ALEXEY (IL)
Application Number:
PCT/IL2017/050158
Publication Date:
September 21, 2017
Filing Date:
February 08, 2017
Assignee:
FORMME 3D LTD (IL)
International Classes:
G01B11/25; G01B11/24
Foreign References:
US 9163938 B2 (2015-10-20)
US 6341016 B1 (2002-01-22)
US 7103212 B2 (2006-09-05)
US 8232990 B2 (2012-07-31)
US 2006/0227133 A1 (2006-10-12)
Other References:
T. OHDAKE ET AL.: "3D MODELING OF HIGH RELIEF SCULPTURE USING IMAGE BASED INTEGRATED MEASUREMENT SYSTEM", TOKYO DENKI UNIV., DEPT. OF CIVIL AND ENVIRONMENTAL ENGINEERING, 16 January 2017 (2017-01-16), XP055425098, Retrieved from the Internet [retrieved on 20170404]
Attorney, Agent or Firm:
WELLER, Hayim (IL)
Claims:

1. An apparatus for modeling a 3D object during sculpting of said object, comprising:

a worktop, used as a base for sculpting said 3D model;

at least three cameras, incorporated within said apparatus, for capturing images, from at least three different viewpoints, of said object, during the sculpting of said object on said worktop;

a processor, electronically connected to said cameras, for receiving and processing said images from said cameras; and

communication means, connected to said processor, for transmitting data, related to the processed images, from said processor to an external processing unit for modeling said 3D object, present on said worktop.

2. An apparatus according to claim 1, where the processing unit is one of the following: PC, Laptop, Tablet, TV or smartphone.

3. An apparatus according to claim 1, where the apparatus comprises five cameras.

4. An apparatus according to claim 1, further comprising at least one laser sensor for measuring the distance to said object.

5. An apparatus according to claim 1, wherein each of the cameras is positioned to hold an optical axis that has an angle of approximately 18 degrees vertically from the worktop.

6. An apparatus according to claim 1, wherein the cameras have a movable optical axis.

7. An apparatus according to claim 1, wherein the cameras are stationary.

8. An apparatus according to claim 1, where the apparatus has an internal light source.

9. An apparatus according to claim 1, where the 3D object is modeled into a 3D image of an STL format.

10. An apparatus according to claim 1, where the processor interpolates the images from the cameras into a 3D image.

11. An apparatus according to claim 1, where the processing unit interpolates the images from the cameras into a 3D image.

12. An apparatus according to claim 1, where the modeling of the object is done repeatedly during the progression of the sculpting.

13. An apparatus according to claim 1, where a representation of the modeled object is shown to the user in real-time.

14. A method for modeling a 3D object, during sculpting of said object, comprising:

providing a worktop, for using as a base for sculpting said 3D model;

capturing images, from at least three different viewpoints, of said object, present on said worktop;

processing said images from said cameras; and

communicating data, related to the processed images, to an external processing unit for modeling said 3D object, present on said worktop.

15. A method according to claim 14, where the processing unit is one of the following: PC, Laptop, Tablet, TV or smartphone.

16. A method according to claim 14, where the images are captured from five viewpoints.

17. A method according to claim 14, where the modeling of the object is done repeatedly during the progression of the sculpting.

18. A method according to claim 14, where a representation of the modeled object is shown to the user in real-time.

Description:
A 3-D SCULPTING APPARATUS AND A METHOD THEREOF

Technical Field

The present invention relates to 3D modeling systems. More particularly, the present invention relates to 3D scanning systems used for the creation of computer readable 3D models.

Background

The expanding usage of 3D printing has created a real demand for the creation of computer readable 3D models of objects, otherwise known as 3D object modeling or 3D scanning. Computerized 3D models of objects have useful applications in many fields, such as digital imaging, computer animation, special effects in film, prototype imaging, and so on.

A 3D object modeling system typically constructs an object model from 3D spatial data and then associates color or other data with the specific areas of the model. Spatial data includes the 3D X, Y, Z coordinates that describe the physical dimensions, contours and features of the object. Existing systems that collect 3D spatial and texture data include both scanning systems and photographic "silhouette" capturing systems. A scanning system uses a light source, such as a laser, to scan a real-world object and a data registration device, such as a video camera, to collect images of the scanning light as it reflects from the object. A silhouette capturing system typically places an object against a colored background and then, using a camera, captures images of the object from different viewpoints. For example, a silhouette capturing system typically uses those pixels within each image which form a boundary or outside edge for creating a silhouette contour of the object. The boundary point-based silhouette contours made from one image can be combined with the boundary point-based silhouette contours found in other images to determine a set of 3D X, Y, Z coordinates which describe the spatial dimensions of the object's surface. One typical approach begins with a cube of, for example, 1000*1000*1000 pixels. Using this approach, the shape of the object is "carved" from the cube using silhouette outlines that are obtained from each silhouette image. Silhouette capturing systems can gather enough raw data from the silhouette contours to generate several hundred thousand 3D X, Y, Z coordinates for a full wraparound view of an object.
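For the sake of illustration only, the "carving" of an object's shape from a voxel cube using silhouette outlines may be sketched as follows. This is a generic space-carving sketch in Python (using NumPy), not the method of any cited reference; it assumes idealized orthographic silhouettes and a small cube, whereas a practical system would use calibrated perspective projections per camera.

```python
import numpy as np

def carve(silhouettes, n=32):
    """Carve a voxel cube using orthographic silhouette masks.

    silhouettes: dict mapping a projection axis (0, 1 or 2) to a boolean
    (n, n) mask that is True where the object's silhouette covers that
    pixel. A voxel survives only if its projection lies inside every
    silhouette, mirroring the "carving from a cube" described above.
    """
    volume = np.ones((n, n, n), dtype=bool)   # start from a full cube
    for axis, mask in silhouettes.items():
        # Broadcast the 2D mask along the projection axis and intersect.
        volume &= np.expand_dims(mask, axis=axis)
    return volume

# Example: a sphere projects to a disc from any direction, so three
# orthogonal disc silhouettes carve an approximation that contains it.
n = 32
yy, xx = np.mgrid[0:n, 0:n]
disc = (xx - n / 2) ** 2 + (yy - n / 2) ** 2 <= (n / 3) ** 2
volume = carve({0: disc, 1: disc, 2: disc}, n)
```

With only three orthogonal views the carved solid is a loose superset of the object; additional viewpoints tighten the approximation, which is why silhouette systems gather many images.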

In general, some currently available 3D modeling systems, which use silhouette capture, place an object in a specially colored environment, such as an all green background, and then collect a series of images of the object's shape and texture by either moving the camera around the object or moving the object (e.g., in 360 degree circular direction) in front of a stationary camera. In each image, the system attempts to determine those pixels which form the boundary contours of the object's silhouette and create from the multiple silhouette images a 3D mesh model of the object. Such systems capture all needed data (both spatial mesh construction data and texture data) in a single series of images. The brightly-colored background used in this silhouette capturing approach enables the system to differentiate those pixels which describe the boundaries of the object (the pixels which are used to generate the 3D X, Y, Z coordinates of the 3D model). US 2006/0227133 discloses a system and method for constructing a 3D model of an object based on a series of silhouette and texture map images. As described, an object is placed on a rotating turntable and a camera, which is stationary, captures images of the object as it rotates on the turntable. In one pass, the system captures a number of photographic images that will be processed into image silhouettes. In a second pass, the system gathers texture data. After a calibration procedure (used to determine the camera's focal length and the turntable's axis of rotation), a silhouette processing module determines a set of two-dimensional polygon shapes (silhouette contour polygons) that describe the contours of the object. The described system uses the silhouette contour polygons to create a 3D polygonal mesh model of the object. The described system determines the shape of the 3D model analytically by finding the areas of intersection between the edges of the model faces and the edges of the silhouette contour polygons. 
The described system first creates an initial model of the 3D object, from one of the silhouette contour polygons, and then executes an overlaying procedure to process each of the remaining silhouette contour polygons. In the overlaying process, the system processes the silhouette contour polygons collected from each silhouette image, projecting each face of the 3D model onto the image plane of the silhouette contour polygons. The overlaying of each face of the 3D model onto the 2D plane of the silhouette contour polygons enables the described system to determine those areas that are extraneous and should be removed from the 3D model. Nevertheless, the described system is slow and costly.

It would therefore be desired to propose a system free of these deficiencies.

Summary

It is an object of the present invention to provide a rapid and efficient 3D modeling apparatus.

It is another object of the present invention to provide a simple 3D modeling apparatus that can repeatedly scan an object, in 3D, during the sculpting of the object.

It is still another object of the present invention to provide an easy-to-use 3D modeling apparatus for children, for producing a copy of their sculptures in a computer readable 3D medium for saving, playing or animating.

Other objects and advantages of the invention will become apparent as the description proceeds.

The present invention relates to an apparatus for modeling a 3D object during sculpting of said object, the apparatus comprises: (a) a worktop, used as a base for sculpting said 3D model; (b) at least three cameras, incorporated within said apparatus, for capturing images, from at least three different viewpoints, of said object, during the sculpting of said object on said worktop; (c) a processor, electronically connected to said cameras, for receiving and processing said images from said cameras; and (d) communication means, connected to said processor, for transmitting data, related to the processed images, from said processor to an external processing unit for modeling said 3D object, present on said worktop.

Preferably, the processing unit is one of the following: PC, Laptop, Tablet, TV or smartphone.

Preferably, the apparatus comprises five cameras.

In an embodiment, the apparatus further comprises at least one laser sensor for measuring the distance to said object.

Preferably, each of the cameras is positioned to hold an optical axis that has an angle of approximately 18 degrees vertically from the worktop.

In one embodiment, the cameras have a movable optical axis.

In one embodiment, the cameras are stationary.

In one embodiment, the apparatus has an internal light source.

Preferably, the 3D object is modeled into a 3D image of an STL format.

In one embodiment, the processor interpolates the images from the cameras into a 3D image.

In one embodiment, the processing unit interpolates the images from the cameras into a 3D image.

Preferably, the modeling of the object is done repeatedly during the progression of the sculpting.

Preferably, a representation of the modeled object is shown to the user in real-time.

The present invention also relates to a method for modeling a 3D object, during sculpting of said object, comprising: (a) providing a worktop, for using as a base for sculpting said 3D model; (b) capturing images, from at least three different viewpoints, of said object, present on said worktop; (c) processing said images; and (d) communicating data, related to the processed images, to an external processing unit for modeling said 3D object, present on said worktop.

Brief Description of the Drawings

The accompanying drawings, and specific references to their details, are herein used, by way of example only, to illustratively describe some of the embodiments of the invention.

In the drawings:

Fig. 1 is a diagram of an exemplified apparatus for scanning a 3D object.

Fig. 2 is a diagram of the exemplified apparatus for scanning a 3D object with an object placed on the worktop.

Fig. 3 is a schematic diagram of some of the possible electronic parts of the exemplified apparatus, for scanning a 3D object, according to an embodiment.

Detailed Description

The terms of "down", "up", "bottom", "upper", "horizontal", "vertical", "right", "left" or any reference to sides or directions are used throughout the description for the sake of brevity alone and are relative terms only and not intended to require a particular component orientation.

Hereinafter, parts, elements and components that are depicted in more than one figure are referenced by the same numerals.

Fig. 1 is a diagram of an exemplified apparatus for scanning a 3D object, according to an embodiment. The creation of a computer readable 3D model of an object is referred to hereinafter as modeling. The depicted exemplified apparatus 100 may be used for sculpting on and for modeling a 3D object, sculptured on said apparatus. The apparatus 100 may repeatedly scan and capture a 3D image of a 3D object, for modeling, while the object is being sculptured on said apparatus 100. Thus, for example, a user may sculpt an object on apparatus 100 and have a computer readable model representation of his currently formed object, at each stage, during sculpting.

The apparatus 100 may be mostly flat with protruding angles, such as protruding angle 210, for incorporating the cameras within, such as the depicted camera 200. The protruding angles' height and shape may be arranged based on the cameras' sizes, desired view angles, desired view planes, or any other consideration or combination thereof. In one embodiment the maximum height of the apparatus 100 is less than 5 cm. In another embodiment the maximum height of the apparatus 100 is less than 10 cm. In another embodiment the maximum height of the apparatus 100 is about 2 cm. In another embodiment the maximum height of the apparatus 100 is less than 2 cm. The essentially flat design of the apparatus 100 allows easy sculpting on the apparatus 100 while the sculptured object is simultaneously scanned and captured. Thus the apparatus 100 allows the user, e.g. a child, free access to sculpt the object, present on apparatus 100, while the object is being scanned and modeled repeatedly. Evidently, there is no need to move the object for modeling; the object may be scanned and modeled as sculptured on the apparatus 100. In one embodiment the refresh rate of the modeling may be selected by the user. In another embodiment the modeling refresh rate is preselected during manufacturing.

The apparatus 100 may have a rigid worktop 110 for sculpting. The worktop 110 may be flat and pressure resistant, for easy sculpting and modeling. In one embodiment, the worktop 110 may be used for cutting on with special sculpting instruments. The worktop 110 may be made of plastic or any other rigid material for serving as a base for sculpting. In one embodiment the worktop 110 may be flat and essentially even for placing an object for 3D modeling. In one embodiment, the worktop 110 may be flexible and made of silicone or any other flexible material. An object may be a clay sculpture, or any other pliable object of any other material, such as plasticine, or any other 3D item intended for modeling. In one embodiment the worktop may have a polygonal shape. In one embodiment the worktop may have a circular shape. In one embodiment the worktop may be rotatable. In one embodiment, the worktop may be rotated mechanically, or electrically, and may be used as a pottery banding wheel.

The exemplified apparatus 100, of Fig. 1, may have five cameras, such as camera 200, incorporated within the apparatus 100, according to an embodiment. The camera 200 may be a CMOS Web Camera X5tech from Centry International Co., Ltd., or any other camera which may have a 1600x1200 matrix, or any other matrix, and a snapshot function, or any other available digital camera. The cameras may be used for capturing images, from different viewpoints, of a 3D object present on worktop 110. The cameras may each be positioned at a different location, and at a different viewing angle, for capturing different images of the object from different angles. In one embodiment each camera's optical axis, which is the imaginary line that passes through the center of the lens and the center of the image sensor, may be adjusted to the center of the worktop 110. For example, the cameras' optical axes may all be adjusted to coincide 5 cm above the center of the worktop 110. Thus the cameras can be focused on an object, present on the worktop 110, for capturing images of the object from different angles.

In one embodiment, the apparatus 100 may have three cameras, which may be positioned at a horizontal angle of about 60 degrees between one another, as measured at the cameras. Meaning that, in relation to the object, the cameras may be positioned at a horizontal angle of about 120 degrees between one another. In one embodiment, the apparatus 100 may have four cameras, which may be positioned at a horizontal angle of about 90 degrees between one another, as measured at the cameras. Meaning that, in relation to the object, the cameras may be positioned at a horizontal angle of about 90 degrees between one another. In one embodiment, the apparatus 100 may have five cameras, which may be positioned at a horizontal angle of about 108 degrees between one another, as measured at the cameras. Meaning that, in relation to the object, the cameras may be positioned at a horizontal angle of about 72 degrees between one another. In one embodiment, the apparatus 100 may have six cameras, which may be positioned at a horizontal angle of about 120 degrees between one another, as measured at the cameras. Meaning that, in relation to the object, the cameras may be positioned at a horizontal angle of about 60 degrees between one another.
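The angular relationships above follow from placing n cameras evenly on a circle around the object: measured at the object, adjacent cameras are separated by the central angle 360/n, while measured at a camera the separation is the interior angle of the regular n-gon, (n - 2) * 180/n. A minimal illustrative calculation, by way of example only:

```python
def camera_angles(n):
    """Angles for n cameras evenly spaced on a circle around the object.

    Returns (at_object, at_cameras): the central angle between adjacent
    cameras as seen from the object, and the interior angle of the
    regular polygon the cameras form (the angle between neighbouring
    cameras as seen from a camera).
    """
    at_object = 360.0 / n
    at_cameras = (n - 2) * 180.0 / n
    return at_object, at_cameras

# Five cameras: 72 degrees at the object, 108 degrees at the cameras.
at_object, at_cameras = camera_angles(5)
```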

In one embodiment, each of the cameras is positioned to hold an optical axis that has an angle of approximately 18 degrees vertically from the worktop. In another embodiment, each of the cameras is positioned to hold an optical axis that has an angle of at least 15 degrees vertically from the worktop. In yet another embodiment, each of the cameras is positioned to hold an optical axis that is parallel to the worktop. In yet another embodiment, the cameras of the apparatus may not all have the same vertical angle in relation to the worktop.

In one embodiment, some, or all, of the cameras of the apparatus may be movable or have a movable optical axis. In one embodiment, the movement, e.g. of the optical axis of a camera, may be vertical. In another embodiment, the movement of the cameras may be horizontal. In yet another embodiment, the cameras may move in all directions. In yet another embodiment, the cameras may be stationary.

In one embodiment, the cameras' focus point may be changed. In one embodiment, apparatus 100 may have at least one laser sensor for measuring the distance to the object present on said worktop 110. In this embodiment the measured distance may be used for focusing the cameras. In one embodiment, each camera may have a laser sensor that can measure the distance to the object. Thus each camera may be focused separately on the object. Embodiments of the invention may use the laser to scan the object while the cameras are picturing the object, in order to collect images of the scanning light as it reflects from the object.

Fig. 2 is a diagram of an exemplified apparatus 100, of Fig. 1, with an object 300 placed on worktop 110 for 3D modeling, according to an embodiment. As depicted, object 300 may be scanned by the cameras, such as camera 200, and the images sent, using any kind of communication method 320, wired and/or wireless, to processing unit 310. In one embodiment the apparatus 100 is connected by a cable, such as a USB cable, to the processing unit 310 for communication and/or power. In another embodiment the apparatus 100 is connected wirelessly, such as by Wi-Fi, to the processing unit 310. The processing unit may be a PC, Laptop, server, tablet, TV, smartphone, or any processing unit capable of receiving processed images from the apparatus 100. In one embodiment, the processing unit receives the processed images and interpolates the images into a computer readable medium, i.e. a 3D image. In one embodiment the processing unit may also be capable of displaying an image 330 of the modeled object.

In one embodiment the apparatus 100 may be used for modeling an object that has a height of less than 10 cm. In another embodiment, the apparatus 100 may be used for modeling an object that has a height of less than 12 cm. In yet another embodiment, the apparatus 100 may be used for modelling an object that has a height of less than 15 cm.

In one embodiment, the apparatus 100 may be used for modelling an object that has a diameter of less than 7cm. In another embodiment, the apparatus 100 may be used for modelling an object that has a diameter of less than 10cm. In yet another embodiment, the apparatus 100 may be used for modelling an object that has a diameter of less than 5cm.

In other embodiments other diameters and heights of object may be possible when using cameras with wide-angle lenses, e.g. fish-eye cameras.

Fig. 3 is a schematic diagram of some of the possible electronic parts of the apparatus 100, according to an embodiment. The cameras, such as digital camera 200, may be electronically connected, i.e. linked, to the processor 430, where the processor may receive and process the images received from the cameras. The processor 430 may be an AR9331 Linux processor or any other processor capable of processing images. The processor 430 may be linked directly to the cameras, or linked through the USB host connector 420 and the USB Hub 470, or linked in any other way. In one embodiment the processor 430 controls the cameras 200 and dictates the imaging speed of the cameras. In one embodiment, the processor 430 is mounted on a PCB together with the USB host connector 420. A USB Hub 470 may be selected based on the number of cameras in the apparatus and may be connected to the USB host connector 420, internally or externally to the PCB. Thus the cameras may each be connected to the USB Hub 470, which may be connected to the USB host connector 420, thereby linking the cameras to the processor 430. Processor 430 may receive and process the images received from the cameras. In one embodiment the processor 430 may also compress the images received from the cameras using any known compression technique.

Processor 430 may also be linked to communication means such as Wi-Fi module 440 and/or Micro USB port 410, or any other communication port or means. The processor 430 may transmit the data related to the processed images to a processing unit using any known wired or wireless communication method such as USB, Bluetooth, Wi-Fi, cellular network, NFC, etc. In one embodiment, the processing unit receives the processed images and interpolates the images into a computer readable medium, i.e. a 3D image. The 3D image may be in any of the known 3D file formats, such as the STL format, or in any other known 3D file format for representing 3D objects. In another embodiment the processing and interpolating of the images from the cameras into a 3D image is done within the apparatus, such as by the processor of the apparatus. Thus the 3D object may be modeled into a 3D image of STL format or of any other known 3D file format. In one embodiment, the apparatus 100 may also have an internal power source 460. The internal power source 460 may be a battery or any other known power source. In another embodiment the apparatus may receive its power externally, such as receiving the power directly from the connected processing unit, e.g. using the connected USB cord. In one embodiment, the apparatus 100 may also have a microcontroller 450 for optimizing the power consumption and processing speed of the apparatus 100. The microcontroller 450 may be an ATmega32U4 microcontroller or any other controller capable of optimizing the power consumption.
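For the sake of illustration, the STL format mentioned above can be produced with a few lines of code. The following is a minimal sketch of an ASCII STL writer in Python; it is not the firmware of the apparatus, and it writes zero normal vectors, which most STL readers tolerate and recompute from the vertex winding order.

```python
def write_ascii_stl(triangles, name="model"):
    """Serialize a triangle mesh into the ASCII STL format.

    triangles: iterable of ((x, y, z), (x, y, z), (x, y, z)) vertex
    triples. Returns the STL text as a string; a caller would typically
    write it to a .stl file for saving, animating or 3D printing.
    """
    lines = [f"solid {name}"]
    for v0, v1, v2 in triangles:
        lines.append("  facet normal 0 0 0")   # zero normal, recomputed by readers
        lines.append("    outer loop")
        for v in (v0, v1, v2):
            lines.append("      vertex {:g} {:g} {:g}".format(*v))
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# A one-triangle mesh, just to show the file structure.
stl = write_ascii_stl([((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```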

In one embodiment the apparatus 100 may have an internal light source for lighting the worktop 110 from beneath. The worktop 110 may be transparent or semitransparent in some embodiments. The light source may be LEDs or any other illuminating means. Thus the object present on the apparatus 100 may be illuminated from beneath by lighting the light source of the apparatus 100. In one embodiment, the illuminating of the object from beneath is done in relation to the 3D scanning, where the illumination may provide a better contrast to the object's contour, improving the quality of the images taken by the cameras of the apparatus 100, and thus improving the scanning precision.

For the sake of enablement, a method is set forth for interpolating a number of 2D images, taken from different angles, of the same object, into a 3D model. Other methods, which are within the scope of the present invention, may be used as well with the apparatus described above for 3D modelling.

In one embodiment a minimum of five 2D images is used for 3D modelling. Preferably these five images are taken from five different angles. The images may be numbered, by the processor for example, before processing. In one embodiment the difference between the angles is about 72 degrees horizontally. For example, the images may be taken at the horizontal angles of 0, 72, 144, 216 and 288 degrees in relation to the center of the object. In one embodiment these 2D images are in JPEG format. Preferably, these 2D images have similar height and width. Thus two adjacent images should have parts that intersect between them, otherwise known as a common area. In any case the images of the object should cover at least 80% of the total surface of the object. In one embodiment the apparatus may approximate the parts of the object that have not been covered by the images.

In this embodiment it is preferable that the surface of the object not be shiny, reflective or transparent. The photo analysis may be performed by automatically searching for matching points between images (e.g., a point near the upper-right corner of one camera's image needs to correspond to a point near the upper-left corner of the adjacent camera's image, etc.). Once the matching points are found they are tagged as a circular panorama. In other words, an analysis is carried out using the optical system's features; points, lines and planes are calculated in three-dimensional form. The analysis may be calculated by generating a polygon model, otherwise known as a spline model, using a software algorithm that creates a 3D reconstruction. Such algorithms may also be found in Autodesk 123D Catch, Pix4D or PhotoModeler.

The modeling method may construct an object model from 3D spatial data where the spatial data includes the 3D X, Y, Z coordinates that describe the physical dimensions, contours and features of the object.

In this embodiment the processing system processes each captured image to obtain a set of "silhouettes" which describe the contours of the object. Each digitized image from the cameras contains a set of pixel assignments which describe the captured image. The processing unit can then identify those pixels within each captured image which make up the contours of the object.

For example, the processing unit may use those pixels within each image which form a boundary or outside edge for creating a silhouette contour of the object. The boundary point-based silhouette contours made from one image can be combined with the boundary point-based silhouette contours found in other images to determine a set of 3D X, Y, Z coordinates which describe the spatial dimensions of the object's surface. In one embodiment the processing unit can begin with a cube of, for example, 1000*1000*1000 pixels. Using this approach, the shape of the object is "carved" from the cube using silhouette outlines that are obtained from each silhouette image. The processing unit can then gather enough raw data from the silhouette contours to generate several hundred thousand 3D X, Y, Z coordinates for a full wraparound view of an object. The full wraparound view of an object can be digitally stored using one of the known 3D formats such as STL.
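The identification of the boundary pixels that form a silhouette contour, described above, can be sketched minimally: a pixel belongs to the contour if it is part of the object but at least one of its 4-neighbours is not. This Python/NumPy sketch is a simplified stand-in for the processing unit's actual contour extraction, given an already-segmented binary mask:

```python
import numpy as np

def silhouette_contour(mask):
    """Boundary pixels of a binary object mask.

    A pixel is on the contour if it belongs to the object but at least
    one of its 4-neighbours does not (or it touches the image border).
    """
    padded = np.pad(mask, 1, constant_values=False)
    # A pixel is "interior" when all four neighbours are also object pixels.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

# A 3x3 square object: its 8 ring pixels are contour, the center is not.
m = np.zeros((7, 7), dtype=bool)
m[2:5, 2:5] = True
contour = silhouette_contour(m)
```

A real pipeline would first segment the object from the background (e.g. using the brightly colored backdrop described above) before extracting the contour.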

In one embodiment the described modeling apparatus may be used for saving a 3D computer readable copy of a sculpture. In one embodiment the described modeling apparatus may be used for saving a 3D copy of a sculpture for computer animation. In one embodiment the described modeling apparatus may be used for printing a 3D copy of a sculpture.

In one embodiment the resolution and/or accuracy of the 3D scanning may be lowered intentionally for fast and easy scanning of the object. In one embodiment the modeling is a 90% approximation of the real object. In one embodiment, during the modelling of the object, curved surfaces may be modelled as approximately planar surfaces. In one embodiment, the modeling of the image may be done during the sculpting of the object itself. In this embodiment, the modeling may be done repeatedly and changed in accordance with the progression of the sculpting, where the modeling represents in real-time the appearance of the object. Thus, during the sculpting of an object, a real-time model of the object may be shown to the user, on the screen of the processing unit for example. In other words, a representation of the modeled object may be shown to the user in real-time. In one embodiment, the progression of the modeling may be saved as well, providing the user with an "undo" function, where he may decide to return to an older version of his sculpture. Other functions relating to the representation of the modeled object may also be available, such as saving different stages, zoom in, zoom out, rotate, etc.
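The "undo" behaviour described above amounts to keeping a stack of model snapshots saved during the sculpting. A minimal illustrative sketch follows; the class name and the snapshot type (here plain strings) are hypothetical choices for the example, not part of the invention:

```python
class ModelHistory:
    """Keep successive model snapshots so the user can 'undo' back to
    an earlier stage of the sculpture. A snapshot could be, e.g., an
    STL string or a mesh object."""

    def __init__(self):
        self._stages = []

    def save(self, snapshot):
        """Record the current stage of the model."""
        self._stages.append(snapshot)

    def undo(self):
        """Discard the latest stage and return the previous one.

        The first saved stage is never discarded, so repeated undo
        calls bottom out at the earliest snapshot.
        """
        if len(self._stages) > 1:
            self._stages.pop()
        return self._stages[-1] if self._stages else None

history = ModelHistory()
history.save("stage-1")
history.save("stage-2")
previous = history.undo()   # back to "stage-1"
```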

In one embodiment, the apparatus 100, or the software on the processing unit, is capable of differentiating between the object and the user's hands sculpting the object. Thus the modeling of the object may continue while the user's hands are sculpting the object and partially obstructing the object's view for some of the cameras. In an embodiment, the differentiation between the object and the user's hands is done based on color, where the object's colors fall outside the statistical variance of possible hand colors. This feature may also be available when sculpting with certain tools, where the apparatus 100, or the software on the processing unit, is capable of differentiating between the object and the sculpting tools while sculpting the object.
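The color-based differentiation described above may be illustrated, under strong simplifying assumptions, by excluding pixels whose color falls inside a broad skin-tone box. The RGB thresholds below are illustrative placeholders, not calibrated values from the invention; a practical system would learn the statistical variance of hand colors rather than hard-code a range:

```python
import numpy as np

def object_mask(rgb, skin_lo=(90, 40, 20), skin_hi=(255, 200, 170)):
    """Crude color-based separation of the object from sculpting hands.

    rgb: (H, W, 3) uint8 image. Pixels whose color falls inside a broad
    'skin' RGB box are treated as the user's hands and excluded; all
    other pixels are kept as candidate object pixels.
    """
    lo = np.array(skin_lo)
    hi = np.array(skin_hi)
    is_skin = np.all((rgb >= lo) & (rgb <= hi), axis=-1)
    return ~is_skin

# Tiny example: one skin-like pixel, one green clay pixel.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 120, 90)    # skin-like pixel -> excluded
img[0, 1] = (30, 200, 30)     # green clay pixel -> kept
mask = object_mask(img)
```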

According to some embodiments, the apparatus 100 may be used as a new input device for instant feedback, similarly to a graphic tablet or joystick, where new inputs and changes are instantly processed and shown to the user. Thus the user may view his sculpture on a screen during sculpting. Furthermore, any time the user modifies the sculpture the user can see the modification on screen in real-time.

In one embodiment the sculpting of the object may be done in relation to an existing 3D model, where the processing unit, or any other processing means, may compare the modeling of the sculpture during sculpting to the existing 3D model. In one embodiment the processing unit may grade the sculpture according to the comparison to the existing 3D model.

In one embodiment it is possible to join two users of the described apparatus for interactive play or work. For example, a user using the described apparatus may communicate, using the Internet, with another user using another apparatus, and together they may play a game where each sculpts a part of the same sculpture.

While the above description discloses many embodiments and specifications of the invention, these were described by way of illustration and should not be construed as limitations on the scope of the invention. The described invention may be carried into practice with many modifications which are within the scope of the appended claims.