

Title:
UNWRAPPED UV PRINT FILES FROM CAMERA PROJECTION
Document Type and Number:
WIPO Patent Application WO/2018/163042
Kind Code:
A1
Abstract:
The present disclosure relates to a method for customers to use a 3D visualization of real-world products to create customized surface designs using projection-texture alone. The method comprises projecting one or more 2D images consecutively onto a virtual 3D object, dividing the surface of said object into projection-texture mesh surfaces, and unwrapping the 3D object on each projection-texture mesh surface to obtain a UV-map for generating a 2D print file to be used in creating a real-world replica of the 3D visualization. For products which require stitching to be constructed in the real world, the present disclosure also provides a solution for adding seam-allowance to the print files in a manner which resolves the apparent discontinuity caused by the stitching process, even when normal seam-allowance is added; this method ensures an illusion of continuity where a noticeable discontinuity at the seam would otherwise have existed.

Inventors:
MCCRANN JAKE (AU)
Application Number:
PCT/IB2018/051396
Publication Date:
September 13, 2018
Filing Date:
March 05, 2018
Assignee:
MCCRANN JAKE (AU)
International Classes:
A41H3/04; G06F17/50; G06T15/04
Foreign References:
US20150351477A1 (2015-12-10)
US20040049309A1 (2004-03-11)
US20090144173A1 (2009-06-04)
CLAIMS:

What is claimed is:

1. A computer implemented method for transforming a real-world three-dimensional (3D) object into an unwrapped two-dimensional (2D) print file, the method comprising:

receiving, by one or more processors, at least one virtual 3D object from at least one source and a user input from a user;

projecting, by the one or more processors, at least one 2D image onto a surface of the at least one virtual 3D object based on the user input, wherein the at least one 2D image is selected by the user;

dividing, by the one or more processors, the surface of the virtual 3D object into a plurality of projection-texture mesh surfaces, wherein the plurality of projection-texture mesh surfaces includes at least one part of the 2D image projected on the virtual 3D object;

unwrapping, by the one or more processors, the plurality of projection-texture mesh surfaces along with the at least one part of the 2D image located on the virtual 3D object;

generating, by the one or more processors, at least one UV-map for each of the projection-texture mesh surfaces; and

generating, by the one or more processors, at least one unwrapped 2D print file of the at least one UV-map for each of the projection-texture mesh surfaces.

2. The computer implemented method of claim 1, wherein each of the plurality of projection-texture mesh surfaces is a UV-island/segment.

3. The computer implemented method as claimed in claim 2, wherein the addition of pixels at the edge of the UV-map results in edge padding.

4. The computer implemented method of claim 1, wherein the method further comprises: identifying at least one edge from the plurality of projection-texture mesh surfaces, wherein the identification of the at least one edge is done with the help of alpha channel information; and adding pixels at the edge of the plurality of projection-texture mesh surfaces, wherein a color of the added pixels is identical to a color of adjacent pixels in the adjacent projection-texture mesh surfaces.

5. The computer implemented method as claimed in claim 4, wherein the method further comprises:

increasing pixels around the edge of the at least one UV-map for each projection-texture mesh surface, such that when the projection-texture mesh surfaces are stitched together a seam allowance is achieved on the 3D object.

6. The computer implemented method of claim 1, wherein the at least one source is a memory which stores a plurality of 3D objects.

7. The computer implemented method of claim 1, wherein the at least one source is a 3D object generating device operating in real time.

8. The computer implemented method of claim 1, wherein the at least one source is a remote server.

9. The computer implemented method of claim 1, wherein the user input includes selection of the surface onto which the 2D image is projected.

10. The computer implemented method of claim 1, wherein the virtual 3D object is a replica of a real-world 3D object.

11. The computer implemented method of claim 1, wherein each of the projection-texture mesh surfaces comprises at least one part of the projected 2D image.

12. The computer implemented method as claimed in claim 1, wherein the real-world object is a physical object.

13. The computer implemented method as claimed in claim 1, wherein the at least one virtual 3D object is a 3D image.

14. The computer implemented method as claimed in claim 1, wherein the at least one virtual 3D object is an animated 3D avatar.

15. The computer implemented method as claimed in claim 1, wherein the at least one virtual 3D object includes all sides and all angles of the at least one virtual 3D object.

16. The computer implemented method as claimed in claim 4, wherein the unwrapped 2D print file, after printing, is attached to the real-world 3D object by using an adhesive.

17. The computer implemented method as claimed in claim 4, wherein the unwrapped 2D print file includes implementation instructions for the user, wherein the implementation instructions include a starting point, a starting angle, an exact location on the real-world 3D object, and combinations thereof.

18. A non-transitory program storage device on which instructions are stored, the stored instructions comprising instructions for transforming a real-world three-dimensional (3D) object into an unwrapped two-dimensional (2D) print file, which when executed by one or more processors cause the one or more processors to perform operations comprising:

receiving, by the one or more processors, at least one virtual 3D object and at least one user input from at least one source;

projecting, by the one or more processors, at least one 2D image onto the at least one virtual 3D object based on the user input, wherein the at least one 2D image is selected by a user;

dividing, by the one or more processors, the surface of the virtual 3D object into a plurality of projection-texture mesh surfaces, wherein the plurality of projection-texture mesh surfaces includes at least one part of the 2D image projected on the virtual 3D object;

unwrapping, by the one or more processors, the plurality of projection-texture mesh surfaces along with the at least one part of the 2D image located on the virtual 3D object;

generating, by the one or more processors, at least one UV-map for each of the projection-texture mesh surfaces; and

generating, by the one or more processors, at least one unwrapped 2D print file of the at least one UV-map for each of the projection-texture mesh surfaces.

19. A computer system configured to process a three-dimensional (3D) representation of an object, which includes identification of disparate locations in a memory that correspond to spatially similar locations in the object's 3D representation, the computer system comprising:

a memory configured to store data, the stored data comprising program instructions;

a display coupled to the memory;

a graphical user interface; and

one or more processors coupled to the memory, the display and the graphical user interface, the one or more processors being configured to execute the program instructions to:

receive, by the one or more processors, at least one virtual 3D object and at least one user input from at least one source;

project, by the one or more processors, at least one 2D image onto the at least one virtual 3D object based on the user input, wherein the at least one 2D image is selected by a user;

divide, by the one or more processors, the surface of the virtual 3D object into a plurality of projection-texture mesh surfaces, wherein the plurality of projection-texture mesh surfaces includes at least one part of the 2D image projected on the virtual 3D object;

unwrap, by the one or more processors, the plurality of projection-texture mesh surfaces along with the at least one part of the 2D image located on the virtual 3D object;

generate, by the one or more processors, at least one UV-map for each of the projection-texture mesh surfaces; and

generate, by the one or more processors, at least one unwrapped 2D print file of the at least one UV-map for each of the projection-texture mesh surfaces.

Description:
UNWRAPPED UV PRINT FILES FROM CAMERA PROJECTION

CROSS-REFERENCE TO RELATED CO-PENDING APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Application No. 62/467,089, filed on March 04, 2017, and U.S. Provisional Application No. 62/578,527, filed on October 30, 2017.

FIELD OF DISCLOSURE

[0002] The present disclosure relates to the field of print-on-demand customizable apparel. More specifically, the present disclosure provides a system and method for generating unwrapped UV print-files for printing on real-world objects, as well as for generating manufacturing patterns for objects which require stitching to be created in the real world.

BACKGROUND

[0003] Before explaining the claims of inventiveness/originality of the methods disclosed in this patent application, claimed here in combination with the provisional patent applications cited above, and how they solve an ongoing struggle in the print-on-demand manufacturing of customer-designed clothing and apparel, as well as in the vehicle-wrap and interior-design industries with their tedious need for back-and-forth interaction with the customer during the design process, it is necessary to explain any similarity of the techniques used in this patent to those used in 3D engineering and 3D animation technology. It is also necessary to explain the current state of the print-on-demand manufacturing industry, how it operates, and how it operates in this manner because it lacks the solutions to be disclosed in this patent application.

[0004] All of the technology used in this application exists in different parts of the world, with pieces of it being used for different purposes, but it has never been combined into what is disclosed in this invention. What will be disclosed removes a barrier between customer and manufacturer that exists today and impedes the growth of this print-on-demand market's potential. The technologies employed in this invention have only recently converged to the point where creating this invention is viably possible; it is only in the last two to three years that some components of this invention have become viably usable for this innovation.

[0005] Today, even as of the writing of this patent application, various websites and applications exist online to assist customers in submitting artwork they would like to have created on the surface of a customizable product on offer, but all of these websites and their applications are limited compared to what will be disclosed.

[0006] Some of these websites offer a 3D visualization of what the results will look like. The visualization is often disclaimed to be just a general impression of the result to expect. All of these websites essentially depend on the User creating the artwork on a 2D map, a 2D image, that will be used to produce the surface design on the 3D product. Where an App for visualization has also been provided, it is invariably limited to showing the customer the result generated from putting the customer's image over the UV-map of the 3D model, and none of these websites has yet been seen offering 3D model visualizations that can be zoomed in on to check the quality of the resolution up close. All of these Apps, unable to provide a zoomed-in, retina-resolution 3D visualization of what the product will look like, need built-in warning messages about the resolution of the image the customer has uploaded, warning that the resolution quality of the product will not look very good if they use this image.

[0007] Evidently, they need such warning systems and guides because they have been unable to find a viable way in which the customer can visualize the product online realistically, as it would look in the real world. These websites also require the user to have image-editing software that is not readily available to the ordinary customer; even if the user finds a gallery of high-resolution images to select from, they are still limited in how they can edit those images, and it is also difficult to find high-resolution images for free online. For this reason, to combat this disinterest of customers in using the Apps they provide, these websites include galleries of suggested designs for the customer.

[0008] Because of this tedious arrangement offered to customers, customers have been harder to find than manufacturers may have hoped. Many manufacturers, in response to this lack of demand, have turned to business models that depend on an army of resellers presenting themselves as designers or owners of online retail shops which sell their 'custom-designed' apparel. This activity is essentially drop-shipping. Drop-shipping is banned on many marketplace websites such as eBay and Amazon, but because these 'designers' have customized the item to be sold they are able to masquerade as otherwise. Drop-shipping is simply the activity of finding products on one website for a given price, renaming them in a different way, and offering them for sale on another website, for profit. The activity of this army of resellers that the custom-manufacturing industry has come to depend on (due to being unable to provide an application which customers are attracted enough to use to purchase their own created designs) is essentially drop-shipping: once the designer has received a sample copy of their designed product (since they are unable to determine in any other reliable way what the final design will really look like), they offer this product online for sale, and when a customer purchases this product the manufacturer makes the product and handles the purchase, delivery and defect-return of the product.

[0009] These manufacturers, discussed generally here, have thus become dependent on these drop-shipping designers because customers are discouraged by the tediousness of the methods required to create their own designs for the customizable apparel on offer.

[0010] In most examples of these websites, the customer is expected to edit their artwork on a 2D UV-map of the object and go back and forth between the two until they have a result they are satisfied with.

[0011] In fact, all of these websites and applications are essentially using the same method as claimed in U.S. Patent No. 9,345,280 B2, "USING UV UNWRAPPING TO CREATE MANUFACTURING PATTERNS FOR PRINTS".

[0012] Regardless of any variations in how they represent and provide their service to the customer, it has been observed that every company in the market is using the same methods claimed in U.S. Patent No. 9,345,280 B2. Therefore, as background to the current invention to be disclosed, it is most appropriate to discuss the methods claimed in said U.S. Patent No. 9,345,280 B2; the methods used in this prior art are flawed and fail to serve the market potential that the print-on-demand customizable apparel industry deserves.

[0013] A major problem of the prior art is that it is simply not possible to get continuity across the seams when using UV-map texture. You cannot rearrange the UV-islands or superimpose them to achieve this. Only with certain items, where lines can be made collinear, would such continuity occur.

[0014] The way the prior art, US 9,345,280 B2, provides customers with a way to custom design on apparel is simply to take the UV-map of a 3D model of the real-world object, cut in on the edges of the UV-map to account for seam-allowance, and then stitch it together. The UV-map is used either by providing it directly to the customer to design on themselves, or by providing them with an online 2D image-editing tool with an attached gallery of images to select from and a text-editing tool to create some text on the apparel, or by showing the customer a simple 3D visualization of the model, invariably built using extremely limiting HTML5/Flash platforms, and then allowing them to upload images which are placed on the hidden UV-map and displayed as results on the 3D model. The User can then move the image using arrow keys on the user interface, or change the size of the image, and they can click different parts of the 3D model to select that part as the next part they will upload or choose an image for. Those parts they can click are invariably the UV-islands of the hidden UV-map.

[0015] The prior art, as disclosed, is extremely limiting in what it facilitates, and the reason is that it uses the UV-map of the t-shirt to place the customer's designs, which it then renders to the 3D model for a low-quality visualization; the model may be rotated but not zoomed in on.

[0016] The first problem, as with all the others in the same market who have an App, is that it is built with Adobe Flash, a plug-in most customers will have to install the first time they visit the website before even using the App. It is a plug-in that regularly requires updates and has been shunned by the 3D gaming industry, which today uses platforms such as Unity3D that do not require a plug-in to load in the browser and are easily migrated cross-platform, meaning that a built App can easily be rolled out to function on multiple devices and operating systems. The current marketplace for customer-designed apparel, as far as is apparent, has not employed this more powerful and robust technology for 3D visualization. What is achieved in the invention to be disclosed is absolutely impossible in Flash.

[0017] Another problem of the prior art occurs in the organization of the polygon mesh. During the unwrapping of a 3D model, it is important to choose where the mesh of the model will be cut into pieces to be unwrapped. If this is not done with care, or is performed automatically, then polygon distortion will cause the UV-map texture not to join correctly at the seams on the 3D model, along with other distortions across the entire mesh, though these are less noticeable because they are continuous. At the edges of the UV-islands the UV-texture will most often not connect smoothly to the other side when rendered onto the 3D model. Since the UV-map is divided into the same divisions as a real-world t-shirt, and the edges of the UV-islands are the same seams as a real-world t-shirt, the discontinuities caused by applying texture to the UV-map will, when rendered on the model, appear most noticeably at the places which in the real world are actually the seams. Therefore 3D artists prefer this embodiment because, after applying texture to the UV-map, the discontinuities in the model will be apparent mainly at the seams and therefore less noticeable as 'imperfection' to the casual observer.

[0018] It is difficult to know how it became such a mess, but the guess would be that the real model of the t-shirt was scanned and then the mesh was cut at the seams, matching how the t-shirt is normally cut in 3D modelling and also matching how the sewing pattern is normally cut in the real world. Judging by the mess of the polygons, a UV unwrap would then have been generated automatically. It is not disclosed in the prior art what process was used to achieve this.

[0019] To accomplish seam-continuity on the 3D model, one would still need to offer an adequate solution for dealing with seam-allowance. If you cut in to the UV-map, using prior art methods, this would result in cutting in to the pattern across the seam, destroying the continuity if there was any to begin with.

[0020] Secondly, even if you added seam-allowance after printing, such that it would then be white fabric as is typically printed on, or fabric of any other colour, you are still going to have the problem of lines of those colours being seen at the seams, thereby breaking the continuity of the image.

[0021] There is therefore a need for great improvement in serving customer interests and demands in this type of marketplace: an application that is robust and easy to use, that can provide the customer with a realistic 3D visualization of exactly what their product will look like when delivered, and that provides the customer with a means to design patterns that can pass across the seams of stitched apparel.

[0022] There is also a need for the same invention in the customization of all products; even the car-wrap industry is still stifled by back-and-forth in the customer-client design process until a design is agreed on. There needs to be an App that simply puts all the creative power of vehicle wrapping in the hands of the customer, enabling them to design on the car in 3D. At present, the vehicle-wrap industry is still stuck in 2D design for cars, using blueprints as its models for design.

SUMMARY OF THE DISCLOSURE

[0023] It should be understood that this disclosure is not limited to the particular systems, and methodologies described herein, as there can be multiple possible embodiments of the present disclosure which are not expressly illustrated in the present disclosure. It is also to be understood that the terminology used in the description is for the purpose of describing the particular versions or embodiments only, and is not intended to limit the scope of the present disclosure.

[0024] In an aspect, the present disclosure relates to a computer implemented method for transforming a real-world 3D object into an unwrapped 2D print file. The method comprises projecting a 2D image onto a virtual 3D object, dividing the surface of the virtual 3D object into a plurality of projection-texture mesh surfaces, unwrapping the virtual 3D object on each of the plurality of projection-texture mesh surfaces to obtain a UV-map for each projection-texture mesh, and generating an unwrapped 2D print file for printing on the real-world 3D object.

[0025] In an aspect, the present disclosure relates to a computer implemented method for transforming a real-world 3D object into an unwrapped 2D print file. The method comprises projecting a 2D image onto a virtual 3D object, dividing the surface of the virtual 3D object into a plurality of projection-texture mesh surfaces, unwrapping the virtual 3D object on each of the plurality of projection-texture mesh surfaces to obtain a UV-map for each projection-texture mesh, adding pixels at the edge of the UV-map, and generating an unwrapped 2D print file for printing on the real-world 3D object.

[0026] The foregoing has outlined rather broadly the features and technical advantages of the present disclosure in order that the detailed description of the disclosure that follows may be better understood. Additional features and advantages of the disclosure will be described hereinafter which form the subject of the claims of the disclosure. It should be appreciated that the conception and specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized that such equivalent constructions do not depart from the disclosure as set forth in the appended claims. The novel features which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[0027] For a complete understanding of the present disclosure, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:

[0028] Fig. 1A illustrates prior art, showing the model/avatar on the right and, on the left, how the mesh has been divided up;

[0029] Fig. 1B illustrates an environment of the working of the present disclosure in a communication network, according to various embodiments of the present disclosure;

[0030] Fig. 2 illustrates a method for transforming a real-world 3D object into an unwrapped 2D print file, according to various embodiments of the present disclosure;

[0031] Fig. 3A illustrates an exemplary embodiment of generation of a print file from projection of an image on a replica of a real-world object, according to various embodiments of the present disclosure;

[0032] Figs. 3B-3E illustrate an exemplary embodiment of various views of the virtual 3D objects, according to various embodiments of the present disclosure;

[0033] Fig. 4 illustrates a method for transforming a real-world 3D object into an unwrapped 2D print file, according to various embodiments of the present disclosure;

[0034] Figs. 5A to 5J2 illustrate an exemplary embodiment of printing an image on a real-world object, using edge bleeding, according to various embodiments of the present disclosure; and

[0035] Figs. 6A to 9B illustrate an exemplary embodiment of seam allowance and printing an image on a portion of a real-world object, according to various embodiments of the present disclosure.

[0036] Like numerals refer to like elements throughout the present disclosure.

DETAILED DESCRIPTION OF THE DISCLOSURE

[0037] Embodiments of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings, in which like numerals represent like elements throughout the several figures, and in which example embodiments are shown. Embodiments of the claims may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. The examples set forth herein are non-limiting and merely illustrative of other possible examples.

[0038] Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail with respect to Figs. 1 to 9.

[0039] Referring now to Fig. 1A, which illustrates prior art, the model/avatar is shown on the right and, on the left, how the mesh has been divided up. This choice of division is a very complex subject in gaming and animation; it determines how the UV-map looks when it is unwrapped, as you have decided this is how the model will be segmented. Understand that when the model is made in the first place, it is not made in pieces as seen on the left. These have been cut like this to prepare the model for the unwrapping that will give the best result. In gaming, the best result is considered to be the one easiest to relate to the 3D model, so that you can guess where to apply texture to it. This is the only objective when dividing the model up like this into separate meshes.

[0040] The present disclosure relates generally, as indicated, to mobile applications/computer software stored in a memory and executed by one or more processors, capable of generating UV print files for printing on real-world 3D objects. More particularly, the graphics software is digital imaging software; and according to various embodiments described below, such graphics software may be construed as graphics software generally and, more particularly, as digital imaging software.

[0041] Additionally, the disclosure relates to a method and system of converting 2D graphic images to 3D stereoscopic images using such plug-ins in combination with existing or commercially available graphics software. Reference to existing or commercially available graphics software means software that can be used to develop, to make, to modify, to view, etc., images. Those images may be displayed, printed, transmitted, projected or otherwise used in various modes as is well known. For example, the Adobe Photoshop software can be used for many purposes, as is well known. Reference to graphics software as being existing or commercially available is intended to mean not only at present but also that which has been existing or commercially available in the past and that which may exist or become commercially available in the future. For brevity, all of these are referred to below collectively and severally as "existing" graphics software.

[0042] It is to be understood that the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. Rather, the phrases and terms used herein are to be given their broadest interpretation and meaning. The use of "including" and "comprising" and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items and equivalents thereof. The use of the terms "mounted," "connected," "coupled," "positioned," "engaged" and similar terms, is meant to include both direct and indirect mounting, connecting, coupling, positioning and engaging.

[0043] Computer software, hardware, and networks may be utilized in a variety of different system environments, including standalone, networked, remote-access (aka remote desktop), virtualized, and/or cloud-based environments, among others. FIG. 1B illustrates one example of a system architecture and data processing device that may be used to implement one or more illustrative aspects described herein in a standalone and/or networked environment. Various network nodes 105, 136, and 132 may be interconnected via a wide area network (WAN), such as the Internet. Other networks may also or alternatively be used, including private intranets, corporate networks, LANs, metropolitan area networks (MANs), wireless networks, personal area networks (PANs), and the like. Network 101 is for illustration purposes and may be replaced with fewer or additional computer networks. A local area network (LAN) may have one or more of any known LAN topology and may use one or more of a variety of different protocols, such as Ethernet. These and other devices (not shown) may be connected to one or more of the networks via twisted pair wires, coaxial cable, fiber optics, radio waves or other communication media.

[0044] The term "network" as used herein and depicted in the drawings refers not only to systems in which remote storage devices are coupled together via one or more communication paths, but also to stand-alone devices that may be coupled, from time to time, to such systems that have storage capability. Consequently, the term "network" includes not only a "physical network" but also a "content network," which is comprised of the data-attributable to a single entity-which resides across all physical networks. [0045] The components may include data server 136 or web server 136, and client computers 105. Data server 103 provides overall access, control and administration of databases and control software for performing one or more illustrative aspects describe herein. Data server 136 may be connected to a web server through which users interact with and obtain data as requested. Alternatively, data server 136 may act as a web server itself and be directly connected to the Internet. Data server 136 may be connected to web server 136 through the network 101 (e.g., the Internet), via direct or indirect connection, or via some other network. Users may interact with the data server 136 using remote computers 105, e.g., using a web browser to connect to the data server 136 via one or more externally exposed web sites hosted by web server. Client computers 105 may be used in concert with data server 136 to access data stored therein, or may be used for other purposes. For example, from client device 105 a user may access web server 136 using an Internet browser, as is known in the art, or by executing a software application that communicates with web server 136 and/or data server 136 over a computer network (such as the Internet).

[0046] Servers and applications may be combined on the same physical machines, and retain separate virtual or logical addresses, or may reside on separate physical machines. FIG. 1B illustrates just one example of a network architecture that may be used, and those of skill in the art will appreciate that the specific network architecture and data processing devices used may vary, and are secondary to the functionality that they provide, as further described herein. For example, services provided by web server 136 and data server 136 may be combined on a single server.

[0047] Each component 105, 136, 132 may be any type of known computer, server, data processing device, or computing device, such as camera 132. Each component 105, 136, 132 may include one or more processors 111. Each component 105, 136, 132 may further include random access memory (RAM), read only memory (ROM), a network interface, input/output interfaces (e.g., keyboard, mouse, display, printer, etc.), and memory. Input/output (I/O) may include a variety of interface units and drives for reading, writing, displaying, and/or printing data or files. Memory 130 may further store operating system software for controlling overall operation of the data processing device, control logic for instructing the data server to perform aspects described herein, and other application software providing secondary, support, and/or other functionality which may or might not be used in conjunction with aspects described herein. The control logic may also be referred to herein as the data server software. Functionality of the data server software may refer to operations or decisions made automatically based on rules coded into the control logic, made manually by a user providing input into the system, and/or a combination of automatic processing based on user input (e.g., queries, data updates, etc.).

[0048] Memory 130 may also store data used in performance of one or more aspects described herein, including a first database and a second database. In some embodiments, the first database may include the second database (e.g., as a separate table, report, etc.). That is, the information can be stored in a single database, or separated into different logical, virtual, or physical databases, depending on system design. The memory can be a remote storage.

[0049] One or more aspects may be embodied in computer-usable or readable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices as described herein. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by one or more processors in a computer or other device. The modules may be written in a source code programming language that is subsequently compiled for execution or may be written in a scripting language such as (but not limited to) Hypertext Mark-up Language (HTML) or Extensible Mark-up Language (XML). The computer executable instructions may be stored on a computer readable medium such as a non-volatile storage device. Any suitable computer readable storage media may be utilized, including hard disks, CD-ROMs, optical storage devices, magnetic storage devices, and/or any combination thereof. In addition, various transmission (non-storage) media representing data or events as described herein may be transferred between a source and a destination in the form of electromagnetic waves traveling through signal-conducting media such as metal wires, optical fibers, and/or wireless transmission media (e.g., air and/or space). Various aspects described herein may be embodied as a method, a data processing system, or a computer program product. Therefore, various functionalities may be embodied in whole or in part in software, firmware and/or hardware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects described herein, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein.

[0050] FIG. 1B illustrates a block diagram of a system 100 for describing the environment of the functioning of the present disclosure for creating seam allowance of design patterns on UV-mapped objects. The system 100 includes a user device 105, a graphical user interface 110, a processor 115, a display 120, sensors 124, an input unit 122, a communication interface 123, a mobile application 126, a database 128, a memory unit 130, a 2D/3D/360-degree image capturing camera 132, a real-world object 134 and a remote server 136. The said remote server 136 is communicably coupled to the user device 105, which is being operated by a user. The said remote server includes a database for storing virtual replicas of a number of real-world objects, varying from hats, mugs and T-shirts to many more.

[0051] The memory unit 130 stores instructions, user data and other related items. Examples of user data and items include, but are not limited to, first name, last name, phone number, email ID, object preferences, design preferences, etc. The database 128 is coupled to the memory unit 130 for storing a plurality of UV-mapped 3D objects. Examples of objects include, but are not limited to, caps, T-shirts, trousers, bags, garments, soccer balls, basketballs, and other similar items that may be stitched together.

[0052] The graphical user interface 110 displays the processed instructions. Examples of the graphical user interface 110 include, but are not limited to, LED, LCD, OLED and touch-based display units, etc. The processor 115 is coupled with the graphical user interface 110 and the memory unit 130 to process the stored instructions.

[0053] The instructions initiate with receiving an object selected by a user. The objects are explained in detail in conjunction with the database stored at the remote server 136. The user selects a design pattern that the user desires to be printed on the object. The said design pattern can be selected from the memory unit 130 in the user device 105, or could be fetched from the internet, from the remote server 136, or from the camera 132. The user is able to communicate through the communication network to bring in various design patterns for printing on the objects. Examples of the design patterns include, but are not limited to, flowers, animals, plants, characters, cartoons, self-portraits, geometrical designs, etc.

[0054] It would be readily apparent to those skilled in the art that various types of objects and design patterns may be envisioned without deviating from the scope of the present disclosure. The above step is then followed by a step of allowing the user to move the object in order to have the desired pattern at the desired location on the object. The user then aligns the object with respect to the design pattern.

[0055] The processor 115 is configured to execute instructions for creating a plurality of segments of the selected object. The system then divides the object into a plurality of segments as required by the user.

[0056] In an embodiment, the software application is adapted to identify or detect edges of the image with the help of alpha channel information. Alpha changes from zero to non-zero, and vice versa, at edges. Further, the method is adapted to check the condition: if ((rgbImg[3] != rgbImg_right[3]) || (rgbImg[3] != rgbImg_bottom[3])) // edge, wherein rgbImg is the current pixel (index 3 being its alpha value), rgbImg_right is the pixel to its right, and rgbImg_bottom is the pixel in the row below. A number of pixels is added around the edge of each segment of the selected object, such that when the segments are stitched together there is seam allowance on the complete object. The pixels are increased so that when the design is printed on the real-world object there is no obscured or erroneous pattern on the object, especially around the seam.
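As a minimal, illustrative sketch only (not part of the original disclosure), the alpha-channel edge test and the pixel-padding step described above could look roughly like the following Python/NumPy code; the function names and the RGBA array layout are assumptions.

```python
import numpy as np

def find_island_edges(rgba):
    """Detect UV-island edges from the alpha channel: an edge is wherever
    alpha differs from the pixel to the right or the pixel below
    (the rgbImg / rgbImg_right / rgbImg_bottom test above)."""
    alpha = rgba[..., 3]
    edge = np.zeros(alpha.shape, dtype=bool)
    edge[:, :-1] |= alpha[:, :-1] != alpha[:, 1:]   # current vs right neighbour
    edge[:-1, :] |= alpha[:-1, :] != alpha[1:, :]   # current vs bottom neighbour
    return edge

def pad_edges(rgba, passes=26):
    """Grow each UV-island outward one pixel ring per pass, copying the
    colour of the nearest filled neighbour (e.g. 26 passes for a 0.25 inch
    seam-allowance at roughly 100 dpi).  Wrap-around at the image border
    is ignored for simplicity."""
    out = rgba.copy()
    for _ in range(passes):
        filled = out[..., 3] > 0
        empty = ~filled
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            shifted = np.roll(out, (dy, dx), axis=(0, 1))
            shifted_filled = np.roll(filled, (dy, dx), axis=(0, 1))
            take = empty & shifted_filled      # empty pixel next to a filled one
            out[take] = shifted[take]          # copy the neighbouring colour
            empty &= ~take
    return out
```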

[0057] In another preferred embodiment of the present disclosure, the instructions include a step of allowing the user to modify the design pattern. The user is able to modify the design pattern by using mobile application features such as an X-ray feature for projection, zooming in, zooming out, rotating, changing colors, adding text to the design, saving the UV-map, sharing the UV-map with the manufacturer, using ortho features for orthogonal projection, adjusting the angle, adjusting the size, freezing the image, resetting the modifications, saving the modifications for future projects, navigation buttons for moving the image left, right, up and down and rotating 360 degrees, and searching for a 3D object or model, UV-map, UV-segments, 3D objects, 2D/3D images, etc.

[0058] Referring now to Fig. 2 of the accompanying drawings, there is shown a method 200. The said method 200 is a computer implemented method capable of being installed as application software on a computer or smart phone. The said method 200 starts at step 205, where a 2D image is selected by a user. It will be apparent to persons skilled in the art that computer graphics have gained popularity among people. Pictures having high resolution with a large number of pixels are much clearer and more appealing to the human eye. It is further important to have a clear print of an image on a real-world 3-dimensional object. The texture is exactly the same as a real-life projector projecting an image onto any 3D object, appearing on the surface where the projection hits, and this disclosure enables such projection.

[0059] At step 205 of the method 200, the user selects an image and projects it onto a virtual 3D object. Such an object could be any real-world object upon which the user wants the image to be printed, such as a hat, a coffee mug, apparel and many more; the list is endless. The present disclosure enables such projection on virtual 3D forms of any real-world objects.
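Purely as an illustrative sketch (not part of the original disclosure), a simple planar projection of a 2D image onto a mesh can be expressed by assigning each vertex a coordinate in the projector's image plane; the orthographic projector, its parameters and the function name below are assumptions.

```python
import numpy as np

def planar_projection_uvs(vertices, proj_pos, proj_forward, proj_up, width, height):
    """Assign each mesh vertex a (u, v) coordinate in the plane of a virtual
    'projector' that shines the 2D image along proj_forward.
    vertices: (N, 3) positions; width/height: size of the projected image in
    world units.  Returns (N, 2) UVs in [0, 1] for vertices inside the frustum."""
    forward = proj_forward / np.linalg.norm(proj_forward)
    right = np.cross(forward, proj_up)
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)

    rel = vertices - proj_pos                  # positions relative to the projector
    u = (rel @ right) / width + 0.5            # 0..1 across the projected image
    v = (rel @ up) / height + 0.5
    return np.stack([u, v], axis=1)

# Example: project an image covering 2x2 world units onto a mesh from the front.
# verts = np.array([[0.0, 0.0, 0.0], [0.5, 0.3, 0.1]])
# uvs = planar_projection_uvs(verts, np.array([0.0, 0.0, 5.0]),
#                             np.array([0.0, 0.0, -1.0]),
#                             np.array([0.0, 1.0, 0.0]), 2.0, 2.0)
```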

[0060] At step 210, the surface of the virtual 3D object is divided into a plurality of projection-texture mesh surfaces. Such division ensures a better distribution and alignment of the 2D image over the surfaces of the virtual 3D object.

[0061] At step 215, unwrapping of the mesh is carried out. Each of the plurality of projection-texture mesh surfaces is unwrapped to obtain a UV-map for each projection-texture mesh. It is the UV-map which will be able to generate a print file that can be applied to the real-world replica of the 3D model. Since a UV-map is generated from each of the mesh parts used to construct the 3D model, and the same 3D model can be constructed from many different combinations of meshes, it is always practical to make sure that the construction of the 3D model from its mesh components was done in such a way as to generate a UV-map that is suitable for printing and application on the real-world 3D model.

[0062] In an embodiment, the unwrapped 2D print file includes implementation instructions for the user, wherein the implementation instructions include a starting point, a starting angle, an exact location on the real-world 3D object, and combinations thereof.

[0063] In an exemplary embodiment, in the case of a car, a UV-map is required having at least five separate pieces, such as the front, back, left, right and top of the car. An efficient approach would be dividing the car mesh up into every part of the car where a print-out has to be cut: dividing the mesh of the car up into its doors, windows, and every place where the printed adhesive would have to be cut if it had been printed out continuously as one whole, for example the 'left side of the car'. If it is printed out as a whole left side of the car, then the manual process of applying this print-out to the side of the car requires the person to first stick the adhesive over the whole side of the car, and then manually cut, very carefully with a razor, all the parts which need to be separated, for example all the edges around the door. The little gap seen around the door when it is in the car, the little amount of material left after the cut, is then wrapped around the edge of the door.

[0064] Once again, at this point, the image that was passing over that seam has been broken in its continuity. With the existing technologies in the car industry, this limitation is dealt with by making sure that the designs avoid or account for the reality that these seams of the car are going to be broken, and so they make sure that nothing important passes over those seams. In the present disclosure, edge padding has been incorporated to resolve this problem, as will be explained later in the specification.

[0065] Referring again to the method 200 as shown in Fig. 2, at step 220, using the UV-map, a 2D print file is generated which can be printed on real-world 3D objects.
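As another illustrative sketch only (again, not taken from the disclosure), the projected imagery can be "baked" into the unwrapped print file by rasterizing each triangle in UV space and sampling the projected source image; the helper names, the square output size and the reuse of the earlier planar_projection_uvs sketch are all assumptions.

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric weights of 2D point p with respect to 2D triangle tri (3x2)."""
    a, b, c = tri
    m = np.array([b - a, c - a]).T
    try:
        w1, w2 = np.linalg.solve(m, p - a)
    except np.linalg.LinAlgError:
        return np.array([-1.0, -1.0, -1.0])          # degenerate triangle: skip
    return np.array([1.0 - w1 - w2, w1, w2])

def bake_projection_to_uv(mesh_uvs, proj_uvs, faces, src_img, out_size=1024):
    """Produce the unwrapped print file: for every pixel of the output UV map
    that lies inside a triangle, look up the corresponding pixel of the
    camera-projected source image.
    mesh_uvs: (N, 2) unwrap UVs per vertex; proj_uvs: (N, 2) projector UVs per
    vertex (e.g. from planar_projection_uvs); faces: (F, 3) vertex indices."""
    h, w = src_img.shape[:2]
    out = np.zeros((out_size, out_size, 3), dtype=src_img.dtype)
    for tri in faces:
        uv = mesh_uvs[tri] * (out_size - 1)           # triangle in print-file pixels
        puv = proj_uvs[tri]                           # matching projector coords
        x0, y0 = np.floor(uv.min(axis=0)).astype(int)
        x1, y1 = np.ceil(uv.max(axis=0)).astype(int)
        for y in range(max(y0, 0), min(y1 + 1, out_size)):
            for x in range(max(x0, 0), min(x1 + 1, out_size)):
                wgt = barycentric(np.array([x, y], float), uv)
                if (wgt >= 0).all():                  # pixel lies inside the triangle
                    p = wgt @ puv                     # interpolated projector UV
                    sx = int(np.clip(p[0] * (w - 1), 0, w - 1))
                    sy = int(np.clip((1.0 - p[1]) * (h - 1), 0, h - 1))
                    out[y, x] = src_img[sy, sx]
    return out
```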

[0066] Fig. 3A illustrates an exemplary embodiment of generation of a print file from projection of an image on a replica of a real-world object. There is shown an image which is required to be printed on a real-world building. Further, there is shown a virtual replica of the building, which is shown in the application and can be viewed by users to understand and feel the exact look of the building when the image is printed on it. Further, there is shown a UV print file which is generated from the said image and is ready for printing.

[0067] Referring now to Figs. 3B-3E, there are shown different views of a set of virtual 3D mugs when an image is projected on the said set. The front view of the said mugs, when the handles are behind them, is shown. Also, rear views of the virtual mugs are shown when the handles are visible to the user. In such cases, it is easier for the user to select the proper positioning of the image for printing on real-world 3D mugs. At 300a, an open scene for the mug set, or any set scene, is shown in Fig. 3B. The diagram shows each section of the mobile application user interface 618, wherein each section marked in the 300a user interface is the same in all of the images showing the user interface. The PUBLISH button 325 is used when the User is finally happy with the result; it pops up a window asking the User to name the project, and this name is used for the URL address under the User's account 335. At 300b, the User has clicked the IMAGE button 310 and loaded an image. The User has also clicked all of the top row of the image cells 355 which contain the UV-map for the mug object 630, so the mobile application receives all the images of the mug 630 in user interface 300b. The User has used the SIZE control to enlarge the image at 300c. The User has used the ANGLE control 345, which rotates the object up to 360 degrees, at 300d. The User has used the arrow controls 330 to move the image around left/right/up/down at 300e, as shown in Fig. 3C. The User has used the Zoom control 340 at user interface 300f; this is one of the most brilliant features, as it allows the user to check the resolution against real-world size by zooming in. Also, the arrow keys on the keyboard can be used to move the whole scene up/down/left/right, which is needed when zoomed in close. The user has enlarged the image to a full scene across the mugs at user interface 300g. The user has frozen the image by using the freeze button 315 and is now checking the back of the mugs at user interface 300h; notice some ugly areas. The User has projected a last image on the ugly part of one mug found in user interface 300i. This shows all the images added on the back of the scene to cover the ugly areas (L. Cohen R.I.P.). Fig. 3E shows the UV-maps used on the mug, as shown in the image cells in the user interface. Figs. 3B to 3E show how a user can use the same interface and perform various functions; they also show sequentially how a user can project images onto a 3D object, which is a real-world example.

[0068] In one embodiment, printing on apparel has been the most popular among graphics utilities. There are situations when a printed image on an unstitched clothing material either gets cut in the manufacturing process or does not spread sufficiently to fit the final products of varying sizes. In such cases, there is a practical consideration: if a manufacturer asks for 0.25 inches of seam-allowance, a seam allowance of 0.27 inches, or some determinable extra amount, is given to account for the inconsistency of the manual labour of stitching, which differs between operators, as they are humans and will vary in how they push the fabric against the sewing machine.

[0069] The objective of giving a little more than the seam-allowance asked for is refined through experiential communication with the manufacturer until the safe amount is determined. The concept is that over-stitching (i.e. where the manual operator has pushed too hard) is undesirable because it will definitely break the continuity of the image across the seam, whereas under-stitching (which is also an ailment of wear-and-tear over time due to the natural stretching of stitches) is covered up by the illusion that edge-padding provides: the casual observer does not notice that the image has been stretched a few pixels at that point of the seam.

[0070] In the above context, moving now to Fig. 4, there is described a method 400 for generating UV-maps with the technique of edge padding.

[0071] The said method 400 starts at step 405, where a 2D image is selected for projection by a user. The texture is exactly the same as a real-life projector projecting an image onto any 3D object, appearing on the surface where the projection hits, and this disclosure enables such projection.

[0072] At step 405 of the method 400, the user selects and projects a 2D image onto a virtual 3D object. Such an object could be any real-world object upon which the user wants the image to be printed, such as a hat, a coffee mug, apparel, a building, a car and the like. The present disclosure enables such projection on virtual 3D forms of any real-world objects.

[0073] At step 410, the surface of the virtual 3D object is divided into a plurality of projection-texture mesh surfaces. Such division ensures a better distribution and alignment of the 2D image over the surfaces of the virtual 3D object.

[0074] At step 415, unwrapping of the mesh is carried out. Each of the plurality of projection-texture mesh surfaces is unwrapped to obtain a UV-map for each projection-texture mesh. It is the UV-map which will be able to generate a print file that can be applied to the real-world replica of the 3D model. Since a UV-map is generated from each of the mesh parts used to construct the 3D model, and the same 3D model can be constructed from many different combinations of meshes, it is always practical to make sure that the construction of the 3D model from its mesh components was done in such a way as to generate a UV-map that is suitable for printing and application on the real-world 3D model.

[0075] At step 420, pixels are added at the edges of the UV-map, thereby carrying out edge padding of the UV-maps.

[0076] In an embodiment, the colours of the said pixels are the same as those of their adjacent pixels. The said pixels appear as printed dots, at a resolution usually between 150 and 300 dpi. However, this should not be construed as a limitation on the resolution; other resolutions could also be used.
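For illustration only (the helper below is an assumption, not part of the disclosure), the number of padding pixels corresponding to a given seam-allowance follows directly from the print resolution:

```python
def seam_allowance_pixels(allowance_inches, dpi):
    """Convert a seam-allowance in inches to the number of padding pixels
    to add around each UV-island at a given print resolution."""
    return round(allowance_inches * dpi)

# A 0.25 inch seam-allowance, for example:
#   seam_allowance_pixels(0.25, 150)  ->  38 pixels
#   seam_allowance_pixels(0.25, 300)  ->  75 pixels
```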

[0077] In an exemplary embodiment, using the current method of edge-padding, UV-maps can be generated for clothing and stitchable articles which require seam-allowance. Such a process of edge-padding is also capable of being applied to UV-maps for printing on cars. The mesh that creates the UV-map in sections is made so that the whole door is itself an individual mesh; this is then printed alone, with edge-padding applied to it to account for the part which will be wrapped around to the inside of the door. An ample amount of edge-padding for each section is provided to ensure efficient and perfect manual application on the physical article.

[0078] Moving further to step 425, the step includes generating unwrapped 2D print files for printing on the real-world 3D objects.

[0079] In an embodiment, the present disclosure is capable of providing virtual replicas of 100 different sizes of apparel, and the user can select the one which fits the customer. It is observed that, in the market, there is a limited number of sizes available in stores, whereas there are many different body sizes and shapes among humans. It therefore becomes difficult for people to select the best size of garment.

[0080] The edge padding of the UV-maps enables such generation of garments of varying sizes.

[0081] In yet another embodiment, the present systems and methods require input from the user and then select the best fit from among the 100 existing sizes for the said user.

[0082] A virtual replica of the hat is shown at 605. At 610, there is shown the surface of the virtual hat divided into a plurality of projection-texture mesh surfaces. Such division ensures a better distribution and alignment of the 2D image over the surfaces of the virtual 3D object.

[0083] At 615, there is shown the projection of an image on the virtual 3D hat. At 620, there is shown a user interface of the application where the projected image is shown on the hat and the various divided textures are also shown.

[0084] At 625, in each of the divided textures, a few pixels are added to the edges of the texture so that, when they are printed on a real-world object, the joining lines of the divided textures do not appear as joins but as a flawless continuous print of the image on the hat.

[0085] In an exemplary embodiment, Figs. 5A and 5H provide an example of a hat when an image is required to be printed on the hat. Figs. 5A and 5H show the planar projection-texture of an image onto a hat. This gives the hat a continuity of image over the entire hat. The method, which is implemented by the software, is adapted to receive the colour information of the 2D image projection 615 and export this information as a 2D UV-texture map depending on the UV-space that includes the UV-segments 610/625, which are exactly the same as the patterns to be used in manufacturing except that they are 0.25 inches smaller around the edge than the manufacturing pattern, which includes the seam-allowance. Shown in Figs. 6A and 6B is that 2D UV-texture map of segments 610/625, which was specifically designed for a specific manufacturer's specifications. It shows 8 UV-islands/segments 610/625: 6 of them are from the head of the hat and 2 of them are for the brim of the hat, the top and bottom of the brim.

[0086] In one embodiment of the present invention, Figs. 5A-5H disclose edge-bleeding. The plurality of 8 meshes or UV-segments 610/625 that make up the 3D hat are connected in space simply by their coordinates. They are not connected by stitching such as is needed in real-world construction of a hat. Therefore, when the UV-map is produced (by unwrapping the 8 meshes or UV-segments 610/625 of the hat), it is still not the manufacturing pattern. But by adding the amount of seam-allowance that was specified, 0.25 inches around all the edges of the UV-islands in the UV-map, we now have a manufacturing pattern that can be sewn together so that the hat will look exactly the same as the hat visualised in 3D. There is one problem though. If the hat has seam-allowance added to it simply by printing any colour around it to extend its edges by 0.25 inches (which in the case of this hat was determined to be 26 pixels), then when the seams are sewn together it is more than likely, especially over time, that the white seam edges, or whatever colour they were, will become visible against the backdrop of the original imagery on the hat. The present invention provides edge-bleeding as a way of automating the manner in which this seam-allowance is dealt with.
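As a final illustrative sketch (an assumption using SciPy, not code from the disclosure), the manufacturing pattern can be derived from a UV-island's alpha mask by growing it outward by the seam-allowance in pixels, e.g. the 26 pixels mentioned above, after which the bled colours produced by edge-padding fill the grown band:

```python
import numpy as np
from scipy import ndimage

def manufacturing_pattern_mask(alpha, allowance_px=26):
    """Grow a UV-island mask outward by the seam-allowance so that the
    print file can be cut along the outer contour of the returned mask.
    alpha: 2D array where alpha > 0 marks the UV-island."""
    island = alpha > 0
    # For every empty pixel, distance (in pixels) to the nearest island pixel.
    dist = ndimage.distance_transform_edt(~island)
    return dist <= allowance_px            # island plus its seam-allowance band
```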

[0087] In an exemplary embodiment, using the current system and method: there are countless styles of hats over the ages. Some are more popular today, such as baseball hats 630, and amongst them there are even more styles of baseball hats. There are 5-panel 640, or 6-panel 635, or (find the other name). All of them fit on the head and function the same as any other baseball hat. The choice is one of preference and taste. One person might say a 5-panel 640 hat 630 looks the same as a 6-panel 635 hat 630, or indeed would not remember which of the two a person had been wearing, only that they had been wearing a baseball hat.

[0088] These different styles of hats are stitched together in different ways and have slightly different shapes, some would say a different style rather than a different shape. They all also involve certain tried-and-tested reinforcing paddings and folds internally stitched to the outer layer, both to assist the hat in holding its shape and for greater comfort.

[0089] The relationship between the real world and the virtual world today is that all manufacturing designs and decisions are first made in the virtual world. Once the designs have been made, the engineer knows they will work in the real world. Making them in the real world serves only as proof of concept and to determine more specific details of manufacturing elements, costs and procedure while working towards a mass-manufacturing model; it is not needed to know that the design will work in the real world, only to check whether anything unseen in the virtual world shows up in the real world.

[0090] The knowledge of what can and cannot be made in the real world today comes from virtual-world computation; the 6-panel 635 baseball hat 630, the 5-panel 640, the "old man" hat, or whatever model is out there in mass production either came from such computation or from a hundred years of trial and error in some workshop. The two hats 630 here were both manually unwrapped, so as to know that they were both exact copies of real-world reality. Both versions could be made in the real world with the print-files shown in Figs. 5I to 5J2, where edge-bleeding has finally been applied and they are ready for printing.

[0091] It is worth noting that, in order to streamline the output of 3D games, engineers in the gaming industry are constantly pursuing faster methods of creating dynamic models that are to be animated and realistic. The greatest impediment to their rate of production is the mesh-segmentation and unwrapping process, and it is this area that many academics have focused on in trying to automate the process better and faster.

[0092] The present invention provides continuity of the image across a seam when the apparel is made in the real world, by providing continuity across the corresponding seam of the apparel's 3D model when using projection-texture in the virtual world. When projection-texture is used, seam-continuity in the virtual world is not a problem, because the UV-map is not the reference for the texture on the model; instead, projectors are locked to the model and permanently and continuously project the image onto it, as shown in Fig. 5A at 615. But when models are to be animated this cannot be done: UV-map textures are needed, and these UV-maps are so complex that designing directly on them is difficult. One workaround is a method that allows a user to paint directly onto a 3D model, called polypainting, which uses projection-texture. In fact, projection-texture cannot be used in most gaming circumstances because it is not an adequate way to handle an avatar and its skin and clothing textures, which need to be animated; here, by contrast, a frozen model is used, and how it animates in the real world is up to the person wearing it. Fig. 1 summarises the problems encountered in the gaming industry and the solutions sought. The abstract of that paper states: "3D paint systems opened the door to new texturing tools, directly operating on 3D objects. However, although time and effort was devoted to mesh parameterization, UV unwrapping is still known to be a tedious and time-consuming process in Computer Graphics production. We think that this is mainly due to the lack of well-adapted segmentation method."

And with regard to the creative process of making the textures of avatars, the author writes in a paper ("Online Submission ID: 0270, What you seam is what you get: automatic and interactive UV unwrapping"):

"The requirements of a U,V mapping are dictated by the way artists texture-map models. They use a mixture of 3D paint systems and 2D image editing. For instance, working in 2D makes it possible to use regular patterns and cutting/pasting existing 2D images.

Therefore, during the editing process, artists continuously go back and forth between 3D and 2D space, using different software's. To "know where they are" in 2D, they superimpose a stencil with a 2D projection of the mesh. For this reason, it is important to have a segmentation that yields a 2D space that is meaningful for the user"

[0093] What this means is that the more complex the objects get (such as the range of fashion dresses available for women to choose from, with all their shapes and folds and frills), the harder it becomes for a customer to create a design for the dress. Fig. 1, however, is discussing the issue of mass production of avatars; it takes a professional 3D modeller a week to finally work out the UV-map for a dress model to be added to the catalogue. That dress will sell in its millions over the ensuing years, whereas an avatar created for a game release will sell strongly for a few months and then die off. Gaming-industry business models are entirely different from the market for which the App of this patent is created. The present invention uses an orthographic x-ray projection-texture as the starting point for the base design of any apparel, which gives the most fantastic results, as shown in Fig. 5A at 615. Consecutive projections are then used only for stamping logos or smaller images onto the apparel.

[0094] In an exemplary embodiment, using the current system and method, the same shaped hat (as far as the outer layer is concerned) can be constructed in two different ways, the only difference being where the seam-lines and stitching are, a difference which is therefore only aesthetic. This shows how any UV-map of a 3D model of any real-world object which requires pieces/parts (UV-islands or segments) to be stitched or seamed together in order to construct it can be converted into a plurality of manufacturing patterns, no matter how the 3D model was itself built up from its meshes. The meshes themselves are constructed of polygons, conveniently organised into sections called meshes so that each may be defined as an entire section which can then be computationally (automatically) unwrapped by a computing device and flattened out into what is referred to as a UV-map. The rationale behind carefully choosing the mesh sets 635, 640 is often due to the appearance of seams on the model that one would rather not be noticeable in a 3D visualisation; edge-padding, as it is called, is a way of adding pixels to the edges of the UV-islands or segments so as to create the illusion of the textured model being seamless in that location. Another reason for choosing meshes carefully is to avoid the distortion of polygons that can occur when trying to force them flat. It is shown here that a plurality of mesh sets for the same model can be determined, which can then be stitched together in the real world to make the same object.

[0095] In this case there are two hats 630 which appear in 3D to be exactly the same, but whose two UV-maps 635, 640, as shown in Fig. 6G, differ; these were deliberately illustrated in the mobile application user interface 617 and are not normally seen by the customer (this is normally the Object Options 618 pop-up window in the user interface 617 of the mobile/software application; see Figs. 6B-6F). These two UV-maps were meticulously measured to be precise for manufacturing. We have cut the hat up in two different ways, both of which can produce UV-maps with insignificant polygonal distortion. By insignificant is meant that the distortion is imperceptible to the human eye, as this is the objective of our targeted market; in other engineering applications requiring UV-unwrapping it might be deemed significant. But consider that even to this day most baseball hats in the world are stitched by humans on a sewing machine and have significant differences in how they were made; the boss of a factory could probably tell you which machine operator made a given hat, from differences so slight that a customer would not notice them.

[0096] In the two hats 630 shown side by side through the process to the end, it can be seen that by projection they both appear to be the same. The resulting UV-maps cannot be shown at their full resolution in this patent but are 4096x4096 px. A 1/2 inch is equal to 46 pixels, so here has been achieved an output resolution of 184 dpi. To achieve a higher output resolution would require computing power that only 'gamer-boys' and 'gamer-girls' would have at home; the next step up is a canvas of 8192x8192 px, as shown in Fig. 6H.

[0097] The seam-allowance was determined with the manufacturer, and the sizes of the 6-panel 635 hat UV-map are known from the manufacturer. The 3D hat 630 was constructed on that basis, and a new hat was then made as an exact replica of the other but with the mesh cut up into a different plurality of meshes, one that could still be unwrapped without causing significant polygonal distortion. The present invention provides a method embodying the knowledge of which kinds of mesh segmentations will work well with unwrapping and which will cause problems.

[0098] The UV-maps 610, 635, 640 with the edge-bleeding are created using the present invention; the technique is not so simple to discover but, like most things, is simple once discovered, hence this disclosure.

[0099] The present invention provides a method for converting UV-maps into manufacturing patterns by using projection-texture for customer-designed apparel, and it is the use of projection-texture that benefits most from solving the seam-allowance problem in this manner, so that whatever pattern the User created, the seams will preserve its continuity after the stitching process. As for the stitches themselves, they are an unavoidable part of the manufacturing process and are considered an aesthetic quality. As shown in Fig. 5B, the Object Control Panel 618 offers a choice of colours for the User to decide on. The stitches, like the white rivets that can be seen, are themselves mesh objects, but they are by default excluded from the projection-texture, i.e. they are invisible to it. The mobile application using the present invention allows customers to choose and simultaneously visualise the stitching and rivet-colour or stitching-colour choices while they play with the projection-texture tools to discover a design they wish to purchase.

[00100] Figs. 5A-5J2 illustrate edge-bleeding and its effect on a 2D image. Figs. 5I-5J2 show a sunflower 630, which is well suited to this illustration, at step 1. The software application or method is adapted to separate the sunflower 630 into two parts (635, 640), which is identical to separating two parts (635, 640) on a UV-map, at step 2. The only difference is that in Figs. 5I and 5J2 this sunflower image 630 is separated as two UV-segments (635, 640) of the UV-map, and edge-bleeding 645, 646 is then applied to the two UV-segments (635, 640) respectively at step 3. It is necessary that these are large images 635, 640 in order to see that the edge-bleeding is present yet invisible to the eye. At step 4, the two edge-bled UV-segments (635, 640) are stitched or joined by identifying the edges of the two UV-segments (635, 640) and matching the RGB values along those edges. Both the flower at step 1 and the final flower at step 4 look exactly the same at a distance. Numbers have been placed over the top of each of them so as to identify them as the same flower when zoomed in, as has been done.
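A minimal sketch of this idea, assuming a simple RGBA raster and a vertical cut (the type and function names are illustrative only, not taken from the App): each half of the image is extended across the cut by repeating its own boundary column, so that the strip later consumed as seam-allowance continues the existing imagery rather than introducing a new colour.

#include <cstdint>
#include <vector>

// Simple RGBA raster, row-major, 4 bytes per pixel (illustrative type).
struct Image {
    int width = 0, height = 0;
    std::vector<std::uint8_t> px;
};

// Append 'allowance' columns to the right edge of 'half', each copying the
// last real column, so the added seam-allowance strip continues the imagery.
Image addRightSeamAllowance(const Image& half, int allowance) {
    Image out;
    out.width = half.width + allowance;
    out.height = half.height;
    out.px.assign(static_cast<std::size_t>(out.width) * out.height * 4, 0);
    for (int y = 0; y < half.height; ++y) {
        for (int x = 0; x < out.width; ++x) {
            int srcX = (x < half.width) ? x : half.width - 1;  // clamp to the cut edge
            for (int c = 0; c < 4; ++c)
                out.px[(static_cast<std::size_t>(y) * out.width + x) * 4 + c] =
                    half.px[(static_cast<std::size_t>(y) * half.width + srcX) * 4 + c];
        }
    }
    return out;
}

A mirrored call on the other half (extending its left edge) gives the matching strip, so that the pixels on either side of the sewn seam carry the same RGB values, as described at step 4 above.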

[00101] At steps 3 and 4 a dotted box has been drawn covering the same border-width at which the edge-bleeding exists. It is not even filled in with solid white, and yet it is immediately visible; this is what would happen if seam-allowance were added to the UV-map simply by adding white.

[00102] The UV-map generated by the App, shown in Figs. 5I-5J2, has UV-islands/segments sitting on an alpha canvas; outside the islands all alpha values are zero. The App may include an algorithm/instructions which process the image data to produce the desired result, and it is this alpha information that the algorithm uses to know where to operate:

[00103] Instructions for achieving seam allowance

[00104] Input:

1. High resolution PNG file with 4 channels (Blue, Green, Red & Alpha)

2. Seam allowance in pixels

[00105] Output:

[00106] PNG file with seam allowance

[00107] Instructions:

1. edge detection: Detect edges of the image with the help of alpha channel information. Alpha changes from zero to non-zero and vice versa at edges.

Condition to check: if ((rgbImg[3] != rgbImg_right[3]) || (rgbImg[3] != rgbImg_bottom[3])) // edge

where rgbImg is the current pixel, rgbImg_right is the pixel to its right, and rgbImg_bottom is the pixel in the row below it.

[00108] 2. edge type 1 (pixel alpha value becoming zero)

If right value is zero, Copy the current pixel value to right pixel

If bottom value is zero, Copy the current pixel value to bottom pixel

Also update the alpha of current pixel to 255 to make the edge smooth.

[00109] 3. edge type 2 (pixel alpha value becoming non-zero)

If right value is non-zero, Copy the right pixel to current pixel

If bottom value is non-zero, Copy the bottom pixel value to current pixel

Also update the alpha of right/bottom pixel to 255 to make the edge smooth.

[00110] 4. Save the output file as <input_png_file_name>_extended_x_pixels.png, where x is the number of seam-allowance pixels.

[00111] Example input image (pixels with non-zero alpha marked as 1):

00000000000

00000000000

00011110000

00111111000

01111111100

00111111000

00011110000

00000000000

00000000000

[00112] Output image with 1 pixel extension (extended pixels are marked as E):

00000000000

000EEEE0000

00E1111E000

0E111111E00

E11111111E0

0E111111E00

00E1111E000

000EEEE0000

00000000000

[00113] Execution: extend_image.exe <input_png_file_name> <number of pixels to extend>

[00114] These instructions are then written in C++ and compiled as an executable that takes the UV-map as input and transforms it in the manner described above.
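What follows is a minimal, self-contained C++ sketch of those instructions, offered as one illustrative reading of them rather than the actual compiled executable. It operates on an in-memory BGRA buffer; PNG decoding/encoding and the <input_png_file_name>_extended_x_pixels.png naming are omitted. Each pass reads from a copy of the previous result so the islands grow by exactly one pixel per pass, and the four-neighbour bleed below folds the right/bottom rules of edge types 1 and 2 into a single symmetric step; both are implementation choices assumed here. The small main() reproduces the 11x9 example of paragraphs [00111] and [00112].

#include <cstdint>
#include <iostream>
#include <vector>

// Illustrative raster type: 4 bytes per pixel in B, G, R, A order, row-major.
struct Raster {
    int w = 0, h = 0;
    std::vector<std::uint8_t> px;
    std::uint8_t* at(int x, int y) { return &px[(static_cast<std::size_t>(y) * w + x) * 4]; }
    const std::uint8_t* at(int x, int y) const { return &px[(static_cast<std::size_t>(y) * w + x) * 4]; }
};

// One pass: every island pixel (alpha != 0) bleeds its colour into any
// transparent neighbour (left, right, up, down) and the bled pixel gets alpha 255.
static Raster bleedOnePixel(const Raster& src) {
    Raster dst = src;
    for (int y = 0; y < src.h; ++y) {
        for (int x = 0; x < src.w; ++x) {
            const std::uint8_t* cur = src.at(x, y);
            if (cur[3] == 0) continue;                       // only island pixels bleed
            const int nx[4] = { x + 1, x - 1, x, x };
            const int ny[4] = { y, y, y + 1, y - 1 };
            for (int k = 0; k < 4; ++k) {
                if (nx[k] < 0 || nx[k] >= src.w || ny[k] < 0 || ny[k] >= src.h) continue;
                if (src.at(nx[k], ny[k])[3] != 0) continue;  // neighbour is already inside an island
                std::uint8_t* out = dst.at(nx[k], ny[k]);
                out[0] = cur[0]; out[1] = cur[1]; out[2] = cur[2];
                out[3] = 255;                                // make the new edge opaque
            }
        }
    }
    return dst;
}

// Extend by the requested seam allowance in pixels: one bleed pass per pixel.
Raster addSeamAllowance(Raster img, int allowancePx) {
    for (int i = 0; i < allowancePx; ++i) img = bleedOnePixel(img);
    return img;
}

int main() {
    // The 11x9 input of paragraph [00111]; '1' marks pixels with non-zero alpha.
    const char* rows[9] = {
        "00000000000", "00000000000", "00011110000", "00111111000", "01111111100",
        "00111111000", "00011110000", "00000000000", "00000000000" };
    Raster img;
    img.w = 11; img.h = 9; img.px.assign(11 * 9 * 4, 0);
    for (int y = 0; y < 9; ++y)
        for (int x = 0; x < 11; ++x)
            if (rows[y][x] == '1') { std::uint8_t* p = img.at(x, y); p[0] = p[1] = p[2] = p[3] = 255; }

    Raster out = addSeamAllowance(img, 1);
    for (int y = 0; y < out.h; ++y) {                        // prints the pattern of [00112]
        for (int x = 0; x < out.w; ++x) {
            bool was = img.at(x, y)[3] != 0, now = out.at(x, y)[3] != 0;
            std::cout << (was ? '1' : (now ? 'E' : '0'));
        }
        std::cout << '\n';
    }
    return 0;
}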

[00115] In a specific scenario, a user might want an image to be printed on a portion of an object rather than on the whole object. The embodiments of the present disclosure enable such a selection by the user. For example, referring to Figs. 6A and 6B, there is shown a virtual replica of a hat. As discussed earlier, the virtual hat is divided into a plurality of projection-texture mesh surfaces, and each projection-texture mesh surface is unwrapped along with the at least one part of the 2D image located on the virtual 3D object. The user may opt to select only the top portion of the hat from among the various portions and project the image selectively onto that top portion, whereby an unwrapped 2D print file is generated according to the present systems and methods. Fig. 6A illustrates the section of the hat onto which the image is projected, which consists of two meshes, and these are used to make 3D examples of each version of the solution. Fig. 6A shows the two meshes removed from the hat, and then shows edge-bleeding applied to the two meshes. When the edge-bleeding is applied in Fig. 6A, it extends the border or edges of the UV-islands or segments and provides edge-bleeding over the seam allowance; after the edge-bleeding over the seam allowance is complete, the result after stitching is shown in Fig. 6A, where the edge-bleeding over the seam allowance provides continuity of the image. Without it, the hat would still be the right size, but the colour of the seam edge added for the seam-allowance would be visible, in this case white. If the user simply adds seam allowance to the UV-map after printing, or even prints a colour for it, the colour of the seam allowance will be visible to the user or observer. However, as shown in Fig. 6B, if the user instead cuts into the UV-map to account for seam allowance, as described in the prior art, the UV-maps end up overlapping and the image looks ugly.

[00116] Now referring to Figs. 7A-7C, which show an exemplary embodiment of the present invention, the figures show an avatar with a full scene of clothing and the ability to swap different clothes. If the User clicks on the Hat object, the Object-Options Window opens, where the User can search for another hat to replace the current model they are using. It is possible to use kitchen sets, bathroom sets, apparel or any real-world 3D object with this Application because of what it makes possible using projection-texture alone.

[00117] Now referring to Figs. 8A-8D, showing an exemplary embodiment of the present invention, this example demonstrates the benefit of using projection-texture: one object can partially (or wholly) block the view of another and still be hit by the projection image. Notice in Fig. 1 that the female partner has been removed from the scene. This is done by clicking on the Object in the Object Box; double-clicking it will open the Object Options Window. A projection can be removed from an object by double-clicking on the image-cell for that projection on that object.

[00118] For that matter, Figs. 9A-B below demonstrate the technique and its effect when seen by the observer from the viewpoint of the projector-camera and from a viewpoint away from the camera.

[00119] Figs. 9A-B show an example of the result attainable using perspective projection-texture. There are countless types of projection-texture employed in 3D animation software to solve particular objectives, beginning with the simplest, perspective, then planar, then cylindrical, then spherical, after which the complexity becomes limitless. It has been determined through experiment and experience in the development of this App, and its envisioned future usage, that only perspective-projection and planar-projection are useful, with the option of XRAY ON/OFF and the option for the User to determine which objects will or will not be involved in a given projection.

[00120] Perspective-projection, as seen in Figs. 9A-B, is the preferred type of projection when a targeted audience in a known location of observation is intended to see the 2D photo illusion on the 3D object onto which it is projected, for example as an advertisement. Sunlight or any light source would partially give the illusion away; however, with a matte (non-reflective) surface finish the illusion can be upheld. Planar projection, as mentioned above, is the other of the two types of projection-texturing used in the present invention.

[00121] As illustrated in Figs. 9A-B, planar-projection works best when XRAY projection of a pattern through an item of apparel is wanted, so as to keep the symmetry of the pattern consistent; otherwise, when using perspective-projection, the projected image/artwork/pattern will come out larger on the other side. In most cases continuity of the image pattern through the whole object to its other side is desired, such as with clothes.
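As a rough sketch of the underlying mapping (an assumption for illustration, not the App's actual code): a projector locked to the scene derives the texture coordinate it paints onto any point by transforming that point with the projector's view-projection matrix and remapping the result from clip space to the [0,1] UV range. With a perspective projector the divide by w produces the enlargement on the far side noted above; with a planar (orthographic) projector w stays at 1, so the pattern keeps the same size right through the object, which is what the XRAY option relies on.

#include <array>
#include <iostream>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<float, 16>;            // row-major 4x4 matrix

Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[row * 4 + col] * v[col];
    return r;
}

// The (u, v) that the projector would paint onto a point at 'worldPos'.
std::array<float, 2> projectorUV(const Mat4& projectorViewProj, const Vec4& worldPos) {
    Vec4 clip = mul(projectorViewProj, worldPos);
    float u = 0.5f * (clip[0] / clip[3]) + 0.5f;   // perspective divide, then remap to [0,1]
    float v = 0.5f * (clip[1] / clip[3]) + 0.5f;
    return { u, v };
}

int main() {
    // An identity matrix stands in for a planar projector looking along -Z;
    // a real projector matrix would come from the camera the user positions.
    Mat4 planar{ 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
    auto uv = projectorUV(planar, { 0.2f, -0.4f, 1.0f, 1.0f });
    std::cout << uv[0] << ", " << uv[1] << "\n";   // prints 0.6, 0.3
}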

[00122] Figs. 9A-B also show an example of an application of perspective-projection that can be used in this App, as disclosed below. This photo was taken whilst standing up. An aircraft passenger typically spends 99% of their time sitting down, and when standing up they are either looking for their seat or going to the bathroom; the other time they are standing up is when they are opening these cabin doors. From the seated position it was not possible to capture the image well enough, thus it was taken standing up. The point made in this diagram, when comparing it to Fig. 9B of the Mona Lisa on the Kombi, is that it is possible to achieve a projection-texture 2D image-plane that stares down at the passenger from the seated observation position they are in for the majority of the flight, as shown in Fig. 9B. Aeroplane perspective projection-texture rendering is needed on planes.

[00123] It is foreseeable that in the near future the actual virtual head of the user will be placed on the avatar, as this is becoming an increasingly common feature. Avatars could be exact copies of the user, but this is not envisaged: a user's body changes over time faster than the clothes. For this reason it is more practicable and foreseeable that there will be a finite number of avatars, and the one that most closely matches the body measurements provided by the User will be selected for them. The reason for this, again, is not that it is unforeseeable that future 3D technology will solve the complexities of automatically generating perfectly fitting clothes for an exact replica of a User, but rather that a User does not remain an exact replica of themselves for a duration of time comparable to the lifetime of the clothes. It is also more attractive for marketing value to have all avatars 'beautified'. For example, two different people could both fit exactly the same clothes, but one may be deemed to have an unattractive body and the other a beautiful body. For this reason it is envisaged that there will be a finite set of avatars, with the User matched to the nearest-fitting one according to the set of measurements it is determined necessary or practical to require. Even if a perfect scan of the User were available, an avatar that best fits the user would still be selected.

[00124] The terms "a," "an," "the" and similar referents used in the context of describing the disclosure (especially in the context of the following claims) are to be construed to cover both the singular and the plural unless otherwise indicated herein or clearly contradicted by context. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., "such as") provided herein is intended merely to better illuminate the disclosure and does not pose a limitation on the scope of the disclosure otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the disclosure.

[00125] It is contemplated that numerical values, as well as other values that are recited herein, are modified by the term "about", whether expressly stated or inherently derived by the discussion of the present disclosure. As used herein, the term "about" defines the numerical boundaries of the modified values so as to include, but not be limited to, tolerances and values up to, and including the numerical value so modified. That is, numerical values can include the actual value that is expressly stated, as well as other values that are, or can be, the decimal, fractional, or another multiple of the actual value indicated, and/or described in the disclosure.

[00126] Groupings of alternative elements or embodiments of the disclosure disclosed herein are not to be construed as limitations. Each group member may be referred to and claimed individually or in any combination with other members of the group or other elements found herein. It is anticipated that one or more members of a group may be included in, or deleted from, a group for reasons of convenience and/or patentability. When any such inclusion or deletion occurs, the specification is deemed to contain the group as modified, thus fulfilling the written description of all Markush groups used in the appended claims.

[00127] Certain embodiments of this disclosure are described herein, including the best mode known to the inventors for carrying out the disclosure. Of course, variations on these described embodiments will become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventor expects skilled artisans to employ such variations as appropriate, and the inventors intend for the disclosure to be practiced otherwise than specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein or otherwise clearly contradicted by context.

[00128] Although specific embodiments and certain structural arrangements have been illustrated and described herein, it will be clear to those skilled in the art that various other modifications and embodiments may be made incorporating the spirit and scope of the underlying inventive concepts and that the same is not limited to the particular methods and structure herein shown and described except insofar as determined by the scope of the appended claims.




 