

Title:
SYSTEM AND METHODS FOR PROVIDING PRODUCT INFORMATION TO A QUERYING SHOPPER
Document Type and Number:
WIPO Patent Application WO/2015/025320
Kind Code:
A1
Abstract:
A system and method for optimized identification of objects in a familiar retailing environment scene, such as a department store. The method includes the steps of providing a products visual DB and identifying the retailing environment scene, thereby delimiting the scope of the product search in the products visual DB to products offered in that retailing environment scene and thereby achieving a quicker search. The method further includes the step of retrieving an in-store location, thereby further delimiting the scope of the product search in the products visual DB to products offered in the zone surrounding that retrieved in-store location. The method further includes the steps of the user acquiring an image frame containing a selected product using a camera of his/her personal computerized device, identifying the selected product by the system of the present invention, and sending information data associated with the identified product to the user.

Inventors:
DEVORA GIL (IL)
FRIDMAN TAMIR (IL)
Application Number:
PCT/IL2014/050742
Publication Date:
February 26, 2015
Filing Date:
August 18, 2014
Assignee:
SHOP CLOUD LTD (IL)
International Classes:
G06F7/00; G06Q30/02; G06F17/30; G06Q20/12; G06Q20/32
Foreign References:
US20120158482A12012-06-21
US20080279481A12008-11-13
US8165407B12012-04-24
Other References:
DOUG GROSS: "The growing push to track your location indoors", 26 March 2013 (2013-03-26)
Attorney, Agent or Firm:
M. FIRON & CO. ADVOCATES (16 Abba Hillel Silver Road, Ramat Gan, IL)
Claims:
WHAT IS CLAIMED IS:

1. A product-information-providing method for optimized identification of objects in a familiar retailing environment scene and providing a user with information data about a selected product, the method comprising the steps of: a) providing a products visual DB;

b) identifying the retailing environment scene, to thereby delimit the scope of the product search in said products visual DB;

c) retrieving in-store location to thereby further delimit the scope of the product search in said products visual DB;

d) acquiring an image frame containing a selected product using a camera of a personal computerized device;

e) identifying said selected product; and

f) sending information data associated with said identified product to the user.

2. A product-information-providing method as in claim 1 further comprising the step of: a) retrieving personal data to thereby further delimit the scope of the product search in said products visual DB, wherein said retrieving of said personal data is performed before said identification of said selected product.

3. A product-information-providing method as in claim 1, wherein said identifying of said selected product comprises the steps of: a) searching for identifiers in said image frame; and

b) if an identifier was found, identifying said selected product using said found identifier.

4. A product-information-providing method as in claim 3, wherein said identifier is selected from the group of features that can be used to uniquely identify said product, including a barcode, a QR code, a model number and a part number.

5. A product-information-providing method as in claim 1, wherein said identifying of said selected product comprises the steps of: a) detecting visual patterns in said image frame; b) matching detected visual patterns with visual patterns of said delimited scope of products in said products visual DB; and

c) if a match was found, identifying said selected product using said found match.

6. A product-information-providing method as in claim 5, wherein said visual patterns are selected from or obtained using image processing techniques for filtering and/or collecting visual patterns, including key points, shape matching, blob finding, contour detection, color filtering (such as HSL/HSV/YUV based) and shape detection.

7. A product-information-providing method as in claim 1, wherein said identifying of said selected product comprises the steps of: a) searching for identifiers in said image frame; and

b) if an identifier was found, identifying said selected product using said found identifier; and

c) if an identifier was not found, said identifying of said selected product further comprises the steps of:

i. detecting visual patterns in said image frame;

ii. matching detected visual patterns with visual patterns of said delimited scope of products in said products visual DB; and

iii. if a match was found, identifying said selected product using said found match.

8. A product-information-providing method as in claim 1 further comprising the step of: a) detecting the image of said selected product in the acquired image frame, wherein said detection is performed before said identification of said selected product.

9. A product-information-providing method as in claim 8 further comprising the step of: a) removing background surrounding the image of an object that may represent said image of said selected product in the acquired image frame, wherein said background removal is performed before said detection of said selected product.

10. A product-information-providing system for optimized identification of objects in familiar retailing environment scenes and providing a user with information data about a selected product, the system comprising: a) a products-information server comprising a main-processor and a database unit, wherein said database unit comprises a products visual Database (DB), a users DB and a retailers DB; and

b) a smart-mobile-device application activated on a personal computerized device associated with the user, wherein said personal computerized device includes a camera, wherein said personal computerized device is adapted to obtain a global position of said personal computerized device, to thereby identify the retailing environment scene and delimit the scope of the product search in said products visual DB; wherein said personal computerized device is adapted to obtain an in-store location of said personal computerized device and further delimit the scope of the product search in said products visual DB; and wherein upon acquiring at least one image frame of a selected product at said in-store location, said smart-mobile-device application is activated to identify said selected product and provide information data associated with said identified product to the user.

11. A product-information-providing system as in claim 10, wherein said users DB includes personal data of the user, to thereby further delimit the scope of the product search in said products visual DB.

12. A product-information-providing system as in claim 10 further comprising a scene-learning server configured to add identity related information of existing products and/or new products.

13. A scene-recognition method for recognizing the scenery of a specific product in a specific retailing environment scene, the method comprising the steps of: a) acquiring an image frame containing a selected product using a camera of a personal computerized device;

b) retrieving metadata related to said acquired image frame and to the image acquisition conditions;

c) searching for other images acquired from similar in-store locations and similar directions;

d) comparing said acquired image frame with other images fetched in said search; and

e) checking if a match was found, the match being substantially similar by preconfigured criteria:

i. if a substantially similar match was found, exit; else

ii. storing said acquired image frame in a products visual DB.

14. A scene-recognition method as in claim 13, wherein metadata relating to said acquired image frame includes: a) a global geographical location obtained from the GPS of a smart-mobile-device having the camera that acquired said acquired image frame;

b) an in-store location of said camera, retrieved from one or more location finder means, selected from the group including GPS, Wi-Fi triangulation, sound frequency detection, infrared code detection, in-store sign/feature/identifier detection; and

c) an azimuth of the optical axis of said camera obtained from a magnetometer.

Description:
SYSTEM AND METHODS FOR PROVIDING PRODUCT INFORMATION TO A QUERYING SHOPPER

CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit under 35 USC 119(e) from US provisional application 61/867,164, filed on August 19th, 2013, which is hereby incorporated by reference in its entirety.

FIELD OF THE INVENTION

The present invention generally relates to the field of smart shopping and, more particularly, to a system and methods that facilitate a shopper obtaining quick information on his/her smart mobile device about one or more products on display in a store, while visiting that store.

BACKGROUND OF THE INVENTION

Often, when a shopper enters a department store or any other store, he/she desires to get information regarding one or more products on display therein. The information may include technical information, nutritional information, prices and sales information, opinions of friends and family members, etc. The options available today include approaching a salesman and asking questions.

Another option is to use a smartphone and do a general search on the Web.

There is therefore a need, and it would be advantageous to have, a personal device which the shopper can use to get information about one or more products on display in a specific store.

SUMMARY OF THE INVENTION

The principal intentions of the present invention include providing a system and methods for providing product information to a querying shopper, using a personal smart device, such as a tablet or other like computerized device, having or coupled to operate with a camera (hereinafter referred to, with no limitation, as a "smart mobile device"). A dedicated smart-mobile-device application, running on the shopper's smart mobile device, facilitates the shopper obtaining quick information about one or more products on display in a store, while visiting that store.

To obtain the desired information, the shopper acquires at least one image frame of a selected product. The smart-mobile-device application, running on the shopper's smart mobile device, analyzes the at least one image frame of the selected product, identifies the selected product and provides the information to the shopper, for example on the smart mobile device's display.

The process of identifying the product may include obtaining some information from one or more remote servers, such as an in-store server and/or a server of a services provider.

The process of identifying the product may include using metadata of the product, the location and orientation of the product inside the specific store, the scene surrounding the product, and the location of the shopper. The metadata of the product may also include product identifiers such as a barcode, a QR code, a model number, a part number, etc. Using the available metadata of a specific product may substantially reduce the scope of the search and thereby substantially reduce the response time to the shopper's query.

According to the teachings of the present invention, there is provided a product-information-providing method for optimized identification of objects in a familiar retailing environment scene, and providing a user with information data about a selected product situated within that retailing environment scene. A familiar retailing environment scene may be a store, a department store, a chain of stores, an exhibition, a museum and the like. The method includes the steps of providing a products visual DB and identifying the retailing environment scene, thereby delimiting the scope of the product search in the products visual DB to products offered in that retailing environment scene and thereby achieving a quicker search. The method further includes the step of retrieving an in-store location, thereby further delimiting the scope of the product search in the products visual DB to products offered in the zone surrounding that retrieved in-store location. Optionally, the product-information-providing method further includes the step of retrieving personal data associated with the user, to thereby further delimit the scope of the product search in the products visual DB, wherein the retrieving of the personal data is performed before the identification of the selected product. The method further includes the steps of the user acquiring an image frame containing a selected product using a camera of his/her personal computerized device, identifying the selected product by the system of the present invention, and sending information data associated with the identified product to the user.
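As a concrete illustration of this progressive delimitation of the search scope, the following minimal Python sketch filters a candidate product list first by retailing environment scene and then by in-store zone. The Product fields and function names are hypothetical stand-ins, not part of the disclosed implementation.

```python
from dataclasses import dataclass

@dataclass
class Product:
    sku: str
    store_id: str  # the retailing environment scene offering the product
    zone_id: str   # the in-store zone (aisle/department) holding the product

def delimit_search_scope(products, store_id, zone_id=None):
    """Narrow the products visual DB to one scene, and optionally one zone."""
    candidates = [p for p in products if p.store_id == store_id]
    if zone_id is not None:  # in-store location known: narrow further
        candidates = [p for p in candidates if p.zone_id == zone_id]
    return candidates
```

Each narrowing step shrinks the set of candidate visual records to compare against, which is what yields the quicker search described above.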

The identifying of the selected product may include the steps of searching for identifiers in the image frame, and if an identifier was found, identifying the selected product using the found identifier. The identifier may be selected from the group of features that can be used to uniquely identify the product, including a barcode, a QR code, a model number and a part number.
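For the identifier path, the sketch below shows how the QR-code case might be handled with OpenCV's built-in detector; decoding one-dimensional barcodes or reading model/part numbers would require additional tooling (such as a barcode library or OCR) and is omitted. The products_by_code lookup table is a hypothetical stand-in for the products visual DB.

```python
import cv2

def identify_by_qr(frame, products_by_code):
    """Return the product named by a QR identifier in the frame, if any."""
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(frame)
    if data:  # an identifier was found: it uniquely names the product
        return products_by_code.get(data)
    return None  # no identifier: fall back to visual-pattern matching
```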

The identifying of the selected product may include the steps of detecting visual patterns in the image frame, matching detected visual patterns with visual patterns of the delimited scope of products in the products visual DB, and if a match was found, identifying the selected product using the found match. The visual patterns are selected from or obtained using image processing techniques for filtering and/or collecting visual patterns, including key points, shape matching, blob finding, contour detection, color filtering (such as HSL/HSV/YUV based) and shape detection.
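One hedged realization of such visual-pattern matching, using ORB key points and brute-force descriptor matching from OpenCV as a single concrete choice among the techniques listed above; the distance and match-count thresholds are illustrative assumptions.

```python
import cv2

def match_by_keypoints(query_img, candidate_imgs, min_good_matches=25):
    """Match ORB key points of the acquired frame against candidate products.

    candidate_imgs: dict mapping product name -> reference image, drawn from
    the delimited scope of the products visual DB.
    """
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, query_desc = orb.detectAndCompute(query_img, None)
    best_name, best_count = None, 0
    for name, img in candidate_imgs.items():
        _, desc = orb.detectAndCompute(img, None)
        if query_desc is None or desc is None:
            continue
        matches = matcher.match(query_desc, desc)
        good = [m for m in matches if m.distance < 40]  # assumed threshold
        if len(good) > best_count:
            best_name, best_count = name, len(good)
    return best_name if best_count >= min_good_matches else None
```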

The identifying of the selected product may include the steps of searching for identifiers in the image frame, and if an identifier was found, identifying the selected product using the found identifier. If an identifier was not found, the identifying of the selected product further includes the steps of detecting visual patterns in the image frame, matching detected visual patterns with visual patterns of the delimited scope of products in the products visual DB, and if a match was found, identifying the selected product using the found match.

The product-information-providing method may further include the step of detecting the image of the selected product in the acquired image frame, wherein the detection is performed before the identification of the selected product.

The product-information-providing method may further include the step of removing background surrounding the image of an object that may represent the image of the selected product in the acquired image frame, wherein the background removal is performed before the detection of the selected product.

An aspect of the present invention is to provide a product-information-providing system for optimized identification of objects in familiar retailing environment scenes and providing a user with information data about a selected product. The system includes a products-information server including a main-processor and a database unit, wherein the database unit includes a products visual DB, a users DB and a retailers DB; and a smart-mobile-device application activated on a personal computerized device associated with the user, wherein the personal computerized device includes a camera.

The personal computerized device is adapted to obtain a global position of the personal computerized device, to thereby identify the retailing environment scene and delimit the scope of the product search in the products visual DB. The personal computerized device is further adapted to obtain an in-store location of the personal computerized device and further delimit the scope of the product search in the products visual DB. Upon acquiring at least one image frame of a selected product at the in-store location, the smart-mobile-device application is activated to identify the selected product and provide information data associated with the identified product to the user. Optionally, the users DB includes personal data of the user, to thereby further delimit the scope of the product search in the products visual DB.
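Purely for illustration, the database unit described above could be modeled as a simple container holding the three DB regions; the class and field names below are hypothetical, not taken from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class DatabaseUnit:
    """Illustrative stand-in for the system's database unit."""
    products_visual_db: dict = field(default_factory=dict)  # sku -> visual record
    users_db: dict = field(default_factory=dict)            # user id -> personal data
    retailers_db: dict = field(default_factory=dict)        # retailer id -> company data
```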

Optionally, the system further includes a scene-learning server configured to add identity related information of existing products and/or new products.

Another aspect of the present invention is to provide a scene-recognition method for recognizing the scenery of a specific product in a specific retailing environment scene. The scene-recognition method includes the steps of acquiring an image frame containing a selected product using a camera of a personal computerized device; retrieving metadata related to the acquired image frame and to the image acquisition conditions; searching for other images acquired from similar in-store locations and similar directions; comparing the acquired image frame with other images fetched in the search; and checking if a match was found, the match being substantially similar by preconfigured criteria. If a substantially similar match was found, exit; else, storing the acquired image frame in a products visual DB.

The metadata relating to the acquired image frame may include a global geographical location obtained from the GPS of a smart-mobile-device having the camera that acquired the image frame; an in-store location of the camera, retrieved from one or more location finder means, selected from the group including GPS, Wi-Fi triangulation, sound frequency detection, infrared code detection, in-store sign/feature/identifier detection; and an azimuth of the optical axis of the camera obtained from a magnetometer.
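Of the listed location finder means, Wi-Fi triangulation lends itself to a short worked example. The sketch below estimates an in-store (x, y) position by least squares from ranges to access points at known positions; it assumes the ranges have already been derived (e.g. from signal strength), a step the text does not detail.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position from three or more Wi-Fi access points.

    anchors:   (n, 2) known access point positions in store coordinates (m)
    distances: (n,)   estimated ranges to each access point (m)
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtract the last circle equation from the others to linearize.
    A = 2.0 * (anchors[:-1] - anchors[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1)
         - np.sum(anchors[-1] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos  # estimated (x, y) in-store location
```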

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will become fully understood from the detailed description given herein below and the accompanying drawings, which are given by way of illustration and example only and thus not limitative of the present invention, and wherein:

Fig. 1 is a general schematic block diagram illustration of the components of a product-information-providing system, according to an embodiment of the present invention.

Fig. 2 is a schematic block diagram illustration of the components of the product-information-providing system shown in Fig. 1, showing the major system components as used by a registered retailer, on the one hand, and a registered shopper, on the other hand.

Fig. 3 shows a schematic flowchart diagram of a method for recognizing a specific product, according to an embodiment of the present invention.

Fig. 4 shows a schematic flowchart diagram of a method for recognizing the scenery of a specific product, according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

The present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided, so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.

An embodiment is an example or implementation of the invention. The various appearances of "one embodiment," "an embodiment" or "some embodiments" do not necessarily all refer to the same embodiments. Although various features of the invention may be described in the context of a single embodiment, the features may also be provided separately or in any suitable combination. Conversely, although the invention may be described herein in the context of separate embodiments for clarity, the invention may also be implemented in a single embodiment.

Reference in the specification to "one embodiment", "an embodiment", "some embodiments" or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment, but not necessarily all embodiments, of the invention. It is understood that the phraseology and terminology employed herein is not to be construed as limiting and is for descriptive purposes only.

Methods of the present invention may be implemented by performing or completing, manually, automatically, or a combination thereof, selected steps or tasks. The order of performing some method steps may vary. The descriptions, examples, methods and materials presented in the claims and the specification are not to be construed as limiting but rather as illustrative only.

Unless otherwise defined, the meanings of technical and scientific terms used herein are those commonly understood by one of ordinary skill in the art to which the invention belongs. The present invention can be implemented in testing or practice with methods and materials equivalent or similar to those described herein.

Reference is now made to the drawings. Fig. 1 is a general schematic block diagram illustration of the components of a product-information-providing system 100, according to an embodiment of the present invention. Product-information-providing system 100 includes a server that may be a products-information server 102 of a provider of services for providing product information. The server of product-information-providing system 100 may be an in-store server 104 that may be a standalone server, or operatively coupled with products-information server 102. In-store server 104 may be integrated into the organizational computerized system of the store (or a store-chain). The present invention will be described, by way of example, with no limitations, in terms of the server of product-information-providing system 100 being products-information server 102.

Products-information server 102 includes a main-processor 110 and a database unit 130. Processor 110 includes a retailers-management unit 112, a users-management unit 114 and a main-product server 116. Database unit 130 includes a retailers DB region 132, a users DB region 134 and, optionally, a products visual DB 136. Users-management unit 114 may include a personalization engine 115 for collecting shopping related personal data of users 20. In particular, users-management unit 114 uses a "user account" 135i (in users DB 134) to try and eliminate certain search result candidates and thus facilitate a quicker response in identifying the product that the user has currently selected.

A retailer 10j that wishes to use product-information-providing system 100 either owns the system in the form of in-store server 104, or logs into products-information server 102 over a network 50, such as an internet network, a cellular network or any other network, operatively connected to products-information server 102.

Reference is also made to Fig. 2, a schematic block diagram illustration of the components of product-information-providing system 100, showing the major system components as used by a registered retailer 10 (shown as Retailers related space 192), on the one hand, and a registered shopper 20 (shown as Users related space 194), on the other hand.

Retailer 10j may upload and/or update his/her/its personal/company data 133j stored in retailers DB 132, using retailers-management unit 112. The uploading/updating may be performed on-line or off-line. Preferably, retailer 10j also utilizes retailers-management unit 112 to build and update his/her/its database 137j of products, residing in products visual DB 136, while optionally using main-product server 116.

Main-product server 116 includes a scene-learning server 118, which is configured to control the accumulation of already identified products, and of products being identified by other system components while product-information-providing system 100 is operational and being used by shoppers 20. Scene-learning server 118 may reside within products-information server 102 or in-store server 104.

To use product-information-providing system 100, a pre-registered shopper 20i logs into products-information server 102 over a network 50, such as an internet network, a cellular network or any other network, by activating a dedicated smart-mobile-device application 120i, running on his/her smart mobile device 22i, while visiting a store of a particular retailer 10j. Shopper 20i acquires at least one image frame of a selected product. Smart-mobile-device application 120i analyzes the at least one image frame of the selected product, identifies the selected product and provides the information to shopper 20i, for example on the display of smart mobile device 22i.

Reference is now made to Fig. 3, showing an example product-recognition method 200 of a product selected by a user 20i, according to embodiments of the present invention. Once shopper 20i has activated dedicated smart-mobile-device application 120i, product-recognition method 200 proceeds as follows:

Step 210: retrieve GPS location.

Smart-mobile-device application 120i retrieves the global geographical location of smart mobile device 22i, using the GPS of smart mobile device 22i.

Step 212: retrieve in-store location.

An in-store location may be retrieved from one or more location finder means, selected from the group including GPS, Wi-Fi triangulation, sound frequency detection, infrared code detection, light frequency detection, in-store sign/feature/identifier detection and azimuth obtained from a magnetometer integrated into smart mobile device 22i.

Step 214: retrieve personal data associated with the user.

Image processing engine 122 of smart-mobile-device application 120i searches for data associated with user 20i, such as product purchasing habits of user 20i.

Step 220: acquiring at least one image frame of the desired product.

Shopper 20i acquires at least one image frame of a selected product, using a camera 24i integrated into smart-mobile-device 22i.

Step 240: removing background.

Optionally, image processing engine 122 of smart-mobile-device application 120i is configured to remove the background surrounding the image of an object that may represent the selected product. The removing of the background may be performed by image processing tools such as, with no limitation, blurring, histogram analysis, c-mixing and others.
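One possible realization of this step is OpenCV's GrabCut segmentation, shown below as a non-authoritative sketch; the assumption that the selected product sits roughly in the central region of the frame is an illustrative choice, not the patent's.

```python
import cv2
import numpy as np

def remove_background(frame):
    """Suppress the background around a roughly centered object via GrabCut."""
    h, w = frame.shape[:2]
    rect = (w // 8, h // 8, 3 * w // 4, 3 * h // 4)  # assumed product region
    mask = np.zeros((h, w), np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
    return frame * fg.astype(np.uint8)[:, :, None]  # zero out the background
```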

Step 250: detecting the object in the one or more image frames.

Optionally, image processing engine 122 of smart-mobile-device application 120i uses image processing tools to detect the image of the selected object in the acquired image frame and isolate it from other objects. The external contours, as well as other features of the object, are determined.
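A minimal sketch of this detection step using edge and external-contour analysis; treating the largest contour as the selected object is an illustrative assumption.

```python
import cv2

def detect_selected_object(frame):
    """Return the external contour of the most prominent object in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # assumed edge thresholds
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)  # largest object wins
```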

Step 260: searching for product identifiers of the object.

Image processing engine 122 of smart-mobile-device application 120i searches for product identifiers among the features detected in step 250. Product identifiers are features that can be used to uniquely identify the product, such as, with no limitation, a barcode, a QR code, a model number, a part number, etc.

Step 265: found an identifier?

If one or more identifiers have been found among the features of the detected object, detected in step 250, then:

a) the object is identified as the unique product identified by the one or more identifiers.

b) go to step 350.

Step 270: detecting visual patterns in the detected object.

No match was found in step 260. Image processing engine 122 of smart-mobile-device application 120i uses image processing tools to detect potential visual patterns in the object. A visual pattern may be selected from or obtained by image processing techniques such as key points, shape matching, blob finding, contour detection, color filtering (HSL/HSV/YUV), shape detection (polygon, circle, etc.) and/or other methods to filter and collect visual patterns.

Step 280: loading location based products.

Visual search engine 124 loads location based products. Knowing the segment of the store which is captured by the FOV of camera 24i, and knowing the products that are situated within that segment of the store, visual search engine 124 can substantially reduce the number of candidate DB products to which the object is compared in order to find the best match.
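The FOV-based reduction might be sketched as follows: given the in-store camera position and the azimuth from the magnetometer, keep only products whose shelf position falls inside the camera's viewing cone. The field-of-view angle, range and coordinate conventions are all assumed values.

```python
import math

def products_in_fov(products, cam_xy, azimuth_deg, fov_deg=60.0, max_range=8.0):
    """Keep candidate products inside the camera's assumed viewing cone.

    products: iterable of (sku, (x, y)) shelf positions in store coordinates.
    """
    visible = []
    for sku, (x, y) in products:
        dx, dy = x - cam_xy[0], y - cam_xy[1]
        dist = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dx, dy)) % 360  # compass-style bearing
        delta = (bearing - azimuth_deg + 180) % 360 - 180  # signed angle diff
        if dist <= max_range and abs(delta) <= fov_deg / 2:
            visible.append(sku)
    return visible
```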

Step 290: comparing visual patterns.

In this step, visual search engine 124 matches the potential visual patterns detected in step 270 with known visual patterns of the fetched candidate DB products.

Step 295: found a match to the visual patterns?

If visual search engine 124 found a match to the potential visual patterns of the object, as detected in step 270, then:

a) the object is identified as the matching product.

b) go to step 350.

Step 300: sending the image frame to products-information server 102 for object detection.

Not being able to detect the product represented by the detected object, smart-mobile-device application 120i sends the one or more image frames to main-product server 116 to uniquely identify the object.

Step 310: loading stored objects.

Main-product server 116 loads all the known products of retailer 10j.

Step 320: load account data.

Main-product server 116 loads the account data of shopper 20i, including all the products known to be associated with shopper 20i.

Step 330: comparing visual patterns.

In this step, main-product server 116 matches the potential visual patterns detected in step 270 with known visual patterns of all the products fetched in step 310.

Step 335: found a match to the visual patterns?

If main-product server 116 found a match to the potential visual patterns of the object, as detected in step 270, then:

a) the object is identified as the matching product.

b) go to step 350.

Step 340: sending the image frame to manual object detection.

Not being able to detect the product represented by the detected object, main-product server 116 sends the one or more image frames to an off-line, manual identification of the detected object.

Step 345: updating the products visual DB 136.

Manually updating the products visual DB 136.

Go to step 350.

Step 350: sending the image frame to scene-learning server 118.

The results of the object identification procedure are sent to scene-learning server 118. Scene-learning server 118 updates the appropriate databases with new features, points of view, etc., to facilitate future product searches.

Step 390: sending the results to shopper 20i.

The results of the object identification procedure are presented to the user. If the identification is performed by a server, the results are first sent to smart-mobile-device application 120i.

(end of product-recognition method 200)

Reference is now also made to Fig. 4, a schematic flowchart diagram of an example scene-recognition method 400 for recognizing the scenery of a specific product by scene-learning server 118, according to an embodiment of the present invention. Method 400 may be performed on-line or off-line.

Method 400 is subdivided into two sub-processes. In a first process 401, scene-recognition method 400 compares the whole scenery of a new image frame, including the product, with the scenery in the images in the whole products visual DB 136. If a match is not found, the new image frame is stored in products visual DB 136. In a second process 402, scene-recognition method 400 analyzes the new image frame to find all the objects in the scenery presented in the new image frame. Then, for each object found, process 402 extracts visual patterns and compares the set of visual patterns with visual patterns of all objects/products 137j stored in products visual DB 136. If a match is found, the object is identified as the matched product and the new image frame record is updated accordingly.
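The two sub-processes might be outlined as in the following sketch, where visual_db and the helper callables (scenes_match, detect_objects, extract_visual_patterns) are hypothetical stand-ins for the components described in the text; a possible scenes_match is sketched after step 440 below.

```python
def scene_learning_pass(new_frame, metadata, visual_db,
                        scenes_match, detect_objects, extract_visual_patterns):
    """Illustrative outline of the two sub-processes of method 400."""
    # Process 1: compare the whole scenery with images acquired from a
    # similar in-store location and direction; store the frame only if new.
    similar = visual_db.fetch_by_location(metadata["in_store_xy"],
                                          metadata["azimuth"])
    if any(scenes_match(new_frame, img) for img in similar):
        return  # a substantially similar scene already exists: exit
    visual_db.store_frame(new_frame, metadata)

    # Process 2: identify every object found in the new scenery and
    # update the stored frame's record with each matched product.
    for obj in detect_objects(new_frame):
        patterns = extract_visual_patterns(obj)
        product = visual_db.match_patterns(patterns)
        if product is not None:
            visual_db.update_frame_record(new_frame, obj, product)
```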

Method 400 proceeds as follows:

Step 410: getting a new image of a particular product.

A new image frame of a selected product is acquired and provided to scene-learning server 118 as input to method 400.

Step 420: getting image related metadata.

Metadata related to the new image frame and to the image acquisition conditions, is retrieved, for example by smart-mobile-device application 120i, and provided as additional input to method 400.

Following the example in which the new image frame is acquired by smart-mobile-device application 120i, then, among other things, the global geographical location of smart-mobile-device 22i is obtained from the GPS of smart-mobile-device 22i. The in-store location of smart-mobile-device 22i is retrieved from one or more location finder means, selected from the group including GPS, Wi-Fi triangulation, sound frequency detection, infrared code detection, in-store sign/feature/identifier detection. The azimuth of the optical axis of camera 24i is obtained from a magnetometer integrated into smart-mobile-device 22i. Other metadata information, related to the new image and image acquisition conditions, may also be collected.

Optionally, metadata may also be retrieved from data stored in the profile of shopper 20i.

Step 430: searching the other images of the product by location & angle.

Scene-learning server 118 searches products visual DB 136 and fetches images that were acquired, generally, from similar in-store locations and towards a similar direction.

Step 440: compare to other images.

Scene-learning server 118 compares the new image with each of the fetched images.
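One coarse way to realize such a whole-scene comparison is histogram correlation over the HSV color space, as in this illustrative sketch; the 0.9 threshold is an assumed stand-in for the preconfigured similarity criteria mentioned in the claims.

```python
import cv2

def scenes_match(img_a, img_b, threshold=0.9):
    """Coarse scene similarity via HSV histogram correlation."""
    def hist(img):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        return cv2.normalize(h, h).flatten()
    score = cv2.compareHist(hist(img_a), hist(img_b), cv2.HISTCMP_CORREL)
    return score >= threshold
```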

Step 442: found a match?

If a match is found between the new image and any of the fetched images, go to step 499.

Step 444: storing the new image.

No match was found in step 440.

Scene-learning server 118 stores the new image frame in products visual DB 136.

Step 446: processing the new image frame to detect all objects in the image frame.

Scene-learning server 118 analyzes the new image frame to find all the objects in the scenery presented in that new image frame.

Step 448: check if there are more found objects to be analyzed.

If there are no more objects that were found in step 446 and need to be analyzed, go to step 499.

Step 450: extracting visual patterns.

Scene-learning server 118 extracts visual patterns from the currently analyzed object.

Step 460: comparing with all objects.

Scene-learning server 118 compares the extracted set of visual patterns with visual patterns of products in products visual DB 136.

Step 465: found a match to the set of extracted visual patterns?

If scene-learning server 118 found a match to the set of extracted visual patterns, go to step 480.

Step 470: sending the object to manual identification.

No match was found in step 460. Scene-learning server 118 sends the object to an off-line, manual identification of that object.

Step 480: updating the scene.

Scene-learning server 118 updates the scene data of the new image frame.

Go to step 448.

Step 499: Exit.

(end of scene-recognition method 400)

Although the present invention has been described with reference to the preferred embodiment and examples thereof, it will be understood that the invention is not limited to the details thereof. Various substitutions and modifications have been suggested in the foregoing description, and others will occur to those of ordinary skill in the art. Therefore, all such substitutions and modifications are intended to be embraced within the scope of the invention as defined in the following claims.




 