

Title:
COLLAPSIBLE WALL DEVICE AND METHOD FOR DETERMINING BODY MEASUREMENTS OF AN INDIVIDUAL
Document Type and Number:
WIPO Patent Application WO/2020/032843
Kind Code:
A1
Abstract:
The present disclosure generally relates to the field of body measurements, and in particular to methods and devices for determining body measurements of an individual in order to, e.g., remotely try out clothes. According to a first aspect, the disclosure relates to a collapsible wall device (1) designed to enable determination of body measurements of an individual positioned on the collapsible wall device. According to a second aspect, the disclosure relates to a computer implemented method for determining body measurements of an individual. The method comprises obtaining (S1) at least two images of the individual and analyzing (S2) a predetermined pattern pictured in the obtained images to detect parts of the images that comprise the body of the individual and to determine reference points of the predetermined pattern. The computer implemented method further comprises constructing (S3) a model of the body of the individual and determining (S4) the body measurements based on the constructed model, the detected body parts and the determined reference points.

Inventors:
SÖDERSTRÖM PETER (SE)
Application Number:
PCT/SE2019/000012
Publication Date:
February 13, 2020
Filing Date:
August 07, 2019
Assignee:
SIZEWALL AB, 556920-7878 (SE)
International Classes:
A41H1/02; A61B5/103; G06Q30/06; G06Q30/0601; G06T7/60
Foreign References:
US 5956525 A (1999-09-21)
US 3902182 A (1975-08-26)
US 2013/0108121 A1 (2013-05-02)
US 2009/0062693 A1 (2009-03-05)
GB 855101 A (1960-11-30)
US 2007/0083384 A1 (2007-04-12)
GB 2518931 A (2015-04-08)
US 2012/0165648 A1 (2012-06-28)
Claims:
Claims

1. A collapsible wall device (1) designed to enable determination of body measurements of an individual positioned on the collapsible wall device (1), the collapsible wall device (1) comprising:

- a floor part (11) comprising a front side with at least one marking (14) thereon, the at least one marking (14) indicating where the individual shall stand,

- a wall part (12) comprising a lower edge (121) foldably attached to one edge (111) of the floor part (11) and a front side with a predetermined pattern (15) thereon, the predetermined pattern (15) being recognizable by an image processing algorithm, and

- an erecting device (13) configured to hold the wall part (12) in an upright position in relation to the floor part, whereby the predetermined pattern (15) on the front side of the wall part (12) faces the individual (100) standing on the at least one marking (14) on the front side of the floor part (11).

2. The collapsible wall device (1) according to claim 1, wherein the erecting device (13) is configured to raise the wall part (12) to the upright position.

3. The collapsible wall device (1) according to claim 1 or 2, wherein the erecting device (13) comprises springs arranged at the outer edges of the wall part (12) and/or the floor part (11).

4. The collapsible wall device (1) according to any of the preceding claims, wherein the wall part (12) and the floor part (11) are formed by one single piece of material.

5. The collapsible wall device (1) according to any of the preceding claims, wherein the wall part (12) and the floor part (11) are made of nylon, plastics, textile, paper, etc.

6. The collapsible wall device (1) according to any of the preceding claims, wherein the at least one marking (14) comprises a pair of footprints facing away from the wall part (12).

7. The collapsible wall device (1) according to any of the preceding claims, wherein the at least one marking (14) comprises markings for different foot sizes.

8. The collapsible wall device (1) according to any of the preceding claims, wherein the at least one marking indicates a plurality of directions for the individual to face when capturing images for use in the determination of body measurements.

9. The collapsible wall device (1) according to any of the preceding claims, wherein the predetermined pattern comprises regularly positioned and/or sized objects.

10. A computer implemented method for determining body measurements of an individual, the method comprising:

• obtaining (S1) at least two images of the individual, the images picturing the individual facing in different directions while standing in front of a wall having a predetermined pattern thereon,

• analyzing (S2) the predetermined pattern pictured in the obtained images to detect parts of the images that comprise the body of the individual and to determine reference points of the predetermined pattern,

• constructing (S3) a model of the body of the individual based on the detected parts of the image comprising the body of the individual,

• detecting (S4) body parts in the constructed model and/or in the obtained images,

• determining (S6) the body measurements based on the constructed model, the detected body parts and the determined reference points, and

• recognizing the collapsible wall through an imprinted code on the collapsible wall/reference, in order to initiate detection of the object and start measuring the body.

11. The computer implemented method according to claim 10, comprising:

- providing (S0) user output instructing an individual to stand in front of a wall having a predetermined pattern thereon.

12. The computer implemented method according to claim 10 or 11, comprising:

- providing (S5) user output indicating that one or more of the obtained images has insufficient quality.

13. The computer implemented method according to any one of claims 10 to 12, comprising:

- determining (S7) a size of a garment by comparing the estimated body measurements with a size table.

14. The computer implemented method according to any one of claims 10 to 13, wherein the constructed model comprises a set of binary two-dimensional images of the body of the individual, wherein each binary two-dimensional image corresponds to one of the obtained images.

15. The computer implemented method according to claim 14, wherein the determining (S6) comprises extracting (S6a) metrics from the individual binary two-dimensional images and combining (S6b) the metrics to obtain the body measurements.

16. The computer implemented method according to claim 15, wherein the combining (S6b) comprises averaging distance measures from the individual binary two-dimensional images to produce a distance measurement associated with the body.

17. The computer implemented method according to claim 15 or 16, wherein the combining (S6b) comprises combining individual cross-section measurements from the individual binary two-dimensional images to produce a circumference measurement of a body part.

18. The computer implemented method according to any one of claims 10 to 13, wherein the constructed model is a three-dimensional model of the body of the individual.

19. The computer implemented method according to claim 18, wherein the three-dimensional model of the body of the individual is constructed using a multiple-view 3D reconstruction algorithm or a structure-from-motion algorithm.

20. The computer implemented method according to claim 18 or 19, wherein the determining (S6) body measurements comprises extracting the body measurements from the three-dimensional model based on the detected body parts.

21. The computer implemented method according to claim 20, wherein the extracted measurements comprise at least one circumference measurement of a body part and/or a distance measurement associated with the body.

22. The computer implemented method according to any one of claims 18 to 21, further comprising using the predetermined pattern to resolve projective ambiguity resulting from constructing the three-dimensional model using an uncalibrated or only approximately calibrated camera.

23. The computer implemented method according to any one of claims 10 to 22, wherein the detecting (S4) comprises using a machine learning model trained to detect the body parts.

24. The computer implemented method according to any one of claims 10 to 23, wherein the obtained images picture the individual from predetermined angles and/or in predetermined poses.

25. The computer implemented method according to any one of claims 10 to 24, wherein the predetermined pattern comprises regularly spaced shapes with predetermined dimensions.

26. The computer implemented method according to any one of claims 10 to 25, wherein the constructing (S3) comprises using the predetermined pattern to compensate for optical or perspective distortion in the obtained images.

27. A computer program comprising instructions which, when the program is executed by a control unit, cause the control unit to carry out the method of any one of claims 10 to 26.

28. A computer-readable storage medium comprising instructions which, when executed by a control unit, cause the control unit to carry out the method of any one of the claims 10 to 26.

29. An electronic device (2) comprising:

- a camera assembly (21) configured to capture images,

- a control unit (22) configured to:

• capture, using the camera assembly, at least two images of the individual, the images picturing the individual facing in different directions while standing in front of a wall having a predetermined pattern thereon,

• analyze the predetermined pattern pictured in the obtained images to detect parts of the images that comprise the body of the individual and to determine reference points of the predetermined pattern,

• construct a model of the body of the individual based on the detected parts of the image comprising the body of the individual,

• detect body parts in the constructed model and/or in the obtained images,

• determine the body measurements based on the constructed model, the detected body parts and the determined reference points, and

• recognize the collapsible wall through an imprinted code on the collapsible wall/reference, in order to initiate detection of the object and start measuring the body.

30. The electronic device (2) of claim 29, wherein the control unit (22) is configured to execute the method according to any one of claims 11 to 26.

31. A system comprising the collapsible wall device (1) according to any one of claims 1 to 9 and the electronic device (2) according to claim 29 or 30.

Description:
Collapsible wall device and method for determining body measurements of an individual

Technical field

The present disclosure generally relates to the field of body measurements, and in particular to methods and devices for determining body measurements of an individual in order to, e.g., remotely try out clothes.

Background

As technology makes fast progress, daily lifestyle is changing. For example, online shopping markets are growing strongly all over the world.

When a customer wants to purchase a garment from an online store, he or she contacts the store through an internet application using an electronic device (e.g. a computer or a smartphone). The online store then presents its products on the electronic device, and the consumer searches for a product considering e.g. its specification, function, price and conditions of sale in a database maintained by the online store.

Online stores selling clothes may provide an easy way to purchase. However, the information regarding sizes of clothes and shoes available in online shopping stores may be insufficient. One problem is that a certain size (e.g. 10) does not correspond to the same body size for all manufacturers. Hence, one individual might need size 8 in some clothes, size 10 in others and sometimes even size 12. This is in most cases impossible for the customer to know without trying the clothes on. Hence, a person shopping from online stores will in the worst case have to return more than 50 percent of the purchases. Consequently, companies selling clothes over the internet spend a lot of money on handling returned clothes. In the worst case, the goods then need to be shipped back and replaced several times before the customer finds the size and model that suits, if ever.

Furthermore, the difficulty in finding the right size and model that the customer experiences may prevent the customer from making more purchases in the future.

Hence, there is a need for a way to select clothes that suit the customer without requiring the customer's presence in the store. Thus, there is a need for a simple and user-friendly way of determining body measurements of an individual. A plurality of methods for solving this have been proposed. However, they are typically either not accurate enough or very complicated.

Summary

It is an object of the disclosure to alleviate at least some of the drawbacks with the prior art. Thus, it is an object to provide a simple and robust method for determining body measurements. It is a further object to provide a method that is easy to perform, such that a customer can determine the body measurements on his or her own at home, without too much effort. It is a further object to provide a method that can be used for different manufacturers.

According to a first aspect, the disclosure relates to a collapsible wall device designed to enable determination of body measurements of an individual positioned on the collapsible wall device. The collapsible wall device comprises a floor part comprising a front side with at least one marking thereon, the at least one marking indicating where the individual shall stand. Furthermore, the collapsible wall device comprises a wall part comprising a lower edge foldably attached to one edge of the floor part and a front side with a predetermined pattern thereon, the predetermined pattern being recognizable by an image processing algorithm. The collapsible wall device also comprises an erecting device configured to hold the wall part in an upright position in relation to the floor part, whereby the predetermined pattern on the front side of the wall part faces the individual standing on the at least one marking on the front side of the floor part. The collapsible wall device makes it easy for a user to capture images that can be used for determining body measurements. The collapsible wall device provides a background in the images that makes the image processing needed to determine the body measurements less complex and more accurate. This makes the determination of body measurements user friendly and more correct than when performed on images captured with a random background.

Furthermore, the collapsible wall device will be small when in a collapsed state. Hence, it is easy to deliver and to store. In addition, one single collapsible wall device can then be used by family and friends. It is also not dependent on the size of the body. Thus, an individual that gains (or loses) weight or grows can still use the same collapsible wall device.

In some embodiments, the erecting device is configured to raise the wall part to the upright position. Thus, the collapsible wall device is easy to erect and does not require any hooks or similar to stand up.

In some embodiments, the erecting device comprises springs arranged at the outer edges of the wall part and/or the floor part. Thereby, a solid and user-friendly construction is provided in a simple way.

In some embodiments, the wall part and the floor part are formed by one single piece of material. Hence, the collapsible wall device is easy to manufacture. In some embodiments, the wall part and the floor part are made of nylon, plastics, textile, paper, etc.

In some embodiments, the at least one marking comprises a pair of footprints facing away from the wall part. In some embodiments, the at least one marking comprises markings for different foot sizes. In some embodiments, the at least one marking indicates a plurality of directions for the individual to face when capturing images for use in the determination of body measurements. Thus, it is easy for a user to understand where to stand when capturing images for use when determining body measurements. Hence, the likelihood that the individual will stand in a way suitable for determining measurements is increased.

In some embodiments, the predetermined pattern comprises regularly positioned and/or sized objects. Thus, the distance between the camera and the body to be measured does not need to be fixed or known, as the predetermined pattern can be used to estimate the distance.

According to a second aspect, the disclosure relates to a computer implemented method for determining body measurements of an individual. The method comprises obtaining at least two images of the individual, the images picturing the individual facing in different directions while standing in front of a wall having a predetermined pattern thereon, and analyzing the predetermined pattern pictured in the obtained images to detect parts of the images that comprise the body of the individual and to determine reference points of the predetermined pattern. The computer implemented method further comprises constructing a model of the body of the individual based on the detected parts of the image comprising the body of the individual, detecting body parts in the constructed model and/or in the obtained images, and determining the body measurements based on the constructed model, the detected body parts and the determined reference points. The proposed method enables determining body measurements in an accurate and user-friendly way. The body measurements may be determined with limited processing effort, as the predetermined pattern may be used to filter out the individual from the background. The pattern also makes the distance between the camera and the individual less important.

In some embodiments, the computer implemented method comprises providing user output instructing an individual to stand in front of a wall having a predetermined pattern thereon. Thereby, the individual is more likely to be correctly positioned.

In some embodiments, the computer implemented method comprises providing user output indicating that one or more of the obtained images has insufficient quality. Hence, images with insufficient quality may be re-captured.

In some embodiments, the computer implemented method comprises determining a size of a garment by comparing the estimated body measurements with a size table. Thus, the correct size of a garment may be selected without the individual trying it on.

In some embodiments, the constructed model comprises a set of binary two-dimensional images of the body of the individual, wherein each binary two-dimensional image corresponds to one of the obtained images.
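The size-table comparison described above can be sketched as follows. This is a minimal illustration, not part of the disclosure: the table values, the measurement names and the selection rule (smallest size whose limits cover both measurements) are hypothetical.

```python
# Hypothetical size table: (size label, max chest in cm, max waist in cm).
# Real values would come from a specific manufacturer's size chart.
SIZE_TABLE = [
    ("S", 92, 78),
    ("M", 100, 86),
    ("L", 108, 94),
    ("XL", 116, 102),
]

def select_size(chest_cm, waist_cm, table=SIZE_TABLE):
    """Return the smallest size whose limits cover both estimated
    body measurements, or None if no tabulated size fits."""
    for label, max_chest, max_waist in table:
        if chest_cm <= max_chest and waist_cm <= max_waist:
            return label
    return None
```

For example, estimated measurements of 95 cm chest and 80 cm waist would map to size M in this illustrative table.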

In some embodiments, the determining comprises extracting metrics from the individual binary two-dimensional images and combining the metrics to obtain the body measurements. In some embodiments, the combining comprises averaging distance measures from the individual binary two-dimensional images to produce a distance measurement associated with the body. In some embodiments, the combining comprises combining individual cross-section measurements from the individual binary two-dimensional images to produce a circumference measurement of a body part.
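One way to combine two orthogonal cross-section widths into a circumference, in the spirit of the embodiment above, is to model the cross-section as an ellipse. The ellipse model and the use of Ramanujan's approximation are assumptions for illustration; the disclosure does not prescribe a particular formula.

```python
import math

def ellipse_circumference(a, b):
    """Ramanujan's approximation to the circumference of an ellipse
    with semi-axes a and b."""
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

def circumference_from_widths(frontal_width, lateral_width):
    """Combine a frontal and a lateral cross-section width (e.g. of a
    limb, taken from two binary two-dimensional images) into one
    circumference estimate, modelling the cross-section as an ellipse."""
    return ellipse_circumference(frontal_width / 2.0, lateral_width / 2.0)
```

For a circular cross-section (equal frontal and lateral widths) this reduces to the exact circle circumference.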

In some embodiments, the constructed model is a three-dimensional model of the body of the individual. In some embodiments, the three-dimensional model of the body of the individual is constructed using a multiple-view 3D reconstruction algorithm or a structure-from-motion algorithm. In some embodiments, the determining body measurements comprises extracting the body measurements from the three-dimensional model based on the detected body parts. In some embodiments, the extracted measurements comprise at least one circumference measurement of a body part and/or a distance measurement associated with the body.

In some embodiments, the computer implemented method comprises using the predetermined pattern to resolve projective ambiguity resulting from constructing the three-dimensional model using an uncalibrated camera. Hence, the method may be performed even if the camera is imperfect or uncalibrated as the predetermined pattern may be used to correct deficiencies.
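As a concrete illustration of one part of this idea: if the physical size of one pattern square is known, the ratio between that size and the square's size in the reconstructed model fixes the unknown metric scale of an uncalibrated reconstruction. The function names and the scale-only simplification are assumptions for illustration; resolving the full projective ambiguity would use several such known pattern distances, not just one.

```python
def metric_scale(known_square_cm, reconstructed_square_units):
    """Scale factor (cm per model unit) obtained from one pattern
    square of known physical size measured in the reconstructed model."""
    return known_square_cm / reconstructed_square_units

def to_centimetres(model_length, scale):
    """Convert a length measured in model units to centimetres."""
    return model_length * scale
```

For example, a 2 cm pattern square that spans 0.5 model units gives a scale of 4 cm per unit, so a limb length of 16 model units corresponds to 64 cm.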

In some embodiments, the detecting comprises using a machine learning model trained to detect the body parts.

In some embodiments, the obtained images picture the individual from predetermined angles and/or in predetermined poses. In some embodiments, the predetermined pattern comprises regularly spaced shapes with predetermined dimensions. Thus, the distance between the camera and the body to be measured does not need to be fixed or known, as the predetermined pattern can be used to estimate the distance.

According to a third aspect, the disclosure relates to an electronic device comprising a camera assembly configured to capture images and a control unit. The control unit is configured to capture, using the camera assembly, at least two images of the individual, the images picturing the individual facing in different directions while standing in front of a wall having a predetermined pattern thereon, to analyze the predetermined pattern pictured in the obtained images to detect parts of the images that comprise the body of the individual and to determine reference points of the pre-determined pattern. The control unit is further configured to construct a model of the body of the individual based on the detected parts of the image comprising the body of the individual, to detect body parts in the constructed model and/or in the obtained images and to determine the body measurements based on the constructed model, the detected body parts and the determined reference points.

According to a fourth aspect, the disclosure relates to a system comprising the collapsible wall according to the first aspect and a mobile device configured to execute the method according to the second aspect.

According to a fifth aspect, the disclosure relates to a control unit configured to perform the method according to the second aspect.

According to a sixth aspect, the disclosure relates to a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method according to the second aspect.

According to a seventh aspect, the disclosure relates to a computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the method according to the second aspect.

Brief description of the drawings

Fig. 1 illustrates an individual standing on a collapsible wall device.

Fig. 2 illustrates the collapsible wall device when rolled.

Fig. 3a-3c illustrate the collapsible wall device seen from the front side.

Fig. 4a-4c illustrate examples of rods that can be used in an erecting device of the collapsible wall device.

Fig. 5a-5f illustrate an erecting device according to a first embodiment.

Fig. 6a-6e illustrate an erecting device according to a second embodiment.

Fig. 7 illustrates an erecting device according to a third embodiment.

Fig. 8a-8d illustrate examples of markings indicating where the individual shall stand.

Fig. 9 is a flowchart of the proposed method for determining body measurements of an individual.

Fig. 10 illustrates step S6 of the method of Fig. 9 according to one example embodiment.

Fig. 11 and Fig. 12 illustrate cross-section and circumference measurements according to the example embodiment of Fig. 10.

Fig. 13a and 13b illustrate circumference measurements on a limb cross-section according to the second example embodiment.

Fig. 14 illustrates an electronic device according to some embodiments.

Fig. 15 illustrates a control unit of the electronic device according to some embodiments.

Detailed description

This disclosure proposes a simple way of determining body measurements using a background with a graphical pattern thereon and image processing. More specifically, this disclosure proposes a system comprising a collapsible wall device and a computer implemented method performed by an electronic device, e.g. a smartphone.

To determine body measurements, a user first erects the collapsible wall device. The collapsible wall device is typically so small (when in a collapsed state) that it can be sent to the customer e.g. by post. In the erected state, one part of the collapsible wall device is placed on the floor and one part stands upright in relation to the floor. The upright part, and possibly also the floor part, will serve as a background when capturing images for use when determining body measurements. The user is then instructed about where on the collapsible wall device the individual to be measured shall stand, either by the software application or by an instruction printed on the collapsible wall device. The individual to be measured (e.g. the user) then stands directly on the part of the collapsible wall device that lies on the floor, on a marked position. Because of the predetermined pattern on the wall part of the collapsible wall device, the distance between the individual to be measured and the camera is not so important.

Images or video of the body are then captured from different angles, for example by the individual slowly rotating or by moving the camera around the body. The software application then determines the individual's specific measurements by performing image processing.

The proposed technique can be used in different situations where body measurements are needed, e.g. to try out clothes or other wearable equipment. The method might also be used in medical applications or for any other application where body measurements are needed.

Fig. 1 illustrates an individual 100 standing on a collapsible wall device 1 designed to enable determination of body measurements of an individual positioned on the collapsible wall device 1. In Fig. 1 the collapsible wall device 1 is raised. In other words, the collapsible wall device 1 is in an erected state, in which it can be used for determining body measurements. An electronic device 2, here a camera, is held in front of the individual 100 in order to capture images (for use when determining body measurements of the individual) with the collapsible wall device 1 serving as a background.

The collapsible wall device 1 comprises a floor part 11, a wall part 12 and an erecting device 13. The wall part 12 and the floor part 11 are for example made of nylon, plastics, textile, paper, etc. The floor part 11 and the wall part 12 may be designed as one foldable sheet. In other words, according to some embodiments the wall part 12 and the floor part 11 are formed by one single piece of material. The collapsible wall device 1 needs to be big enough to be usable for an individual of any size. For example, the wall part is about 2x2 metres and the floor part is about 1x2 metres.

The collapsible wall device 1 can be collapsed, for example rolled or folded. Fig. 2 illustrates the collapsible wall device 1 when rolled. Fig. 3 illustrates a collapsible wall device seen from the front side, i.e. from above when unrolled on a surface (but not raised).

The floor part 11 comprises a front side 112, i.e. a side that will face the individual 100 when using the collapsible wall device 1. On the front side 112 there is at least one marking 14 that indicates where the individual shall stand.

The wall part 12 comprises a lower edge 121 and an upper edge 123. When in use, the lower edge 121 of the wall part 12 is foldably (or articulately) attached to one edge of the floor part 11, herein referred to as the back edge 111. The edge opposite the back edge 111 is herein referred to as a front edge 113. In other words, the floor part 11 and the wall part 12 may be assembled in a way such that it is possible to raise the wall part 12 in relation to the floor part 11 to an erected state. In the erected state the angle between the floor part 11 and the wall part 12 is approximately 90 degrees, e.g. between 85 and 95 degrees.

The wall part 12 also comprises a front side 122. The front side 122 has a predetermined pattern thereon. The predetermined pattern covers the entire front side 122 of the wall part 12 or a major part of the front side 122. The predetermined pattern comprises e.g. squares, dots or other geometric shapes. The predetermined pattern should typically have a colour that contrasts with the colour of the individual, i.e. it should differ from the individual's skin and clothes. For example, a squared pattern in blue and green (like blue-screen/green-screen technology) may be used. The size of the squares or dots is typically around 1-2 cm in cross-section. In other words, the predetermined pattern comprises regularly positioned and/or sized objects. The predetermined pattern 15 is recognizable by an image processing algorithm. In some embodiments, the floor part 11 also has a predetermined pattern thereon. The predetermined pattern then covers the entire front side 112 of the floor part 11 or a major part of the front side 112. For example, the same predetermined pattern covers the floor part 11 and the wall part 12. The predetermined patterns may then be aligned at the transition between the floor part 11 and the wall part 12.
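A minimal sketch of how an image processing algorithm might separate the body from such a contrasting pattern, assuming the pattern colours (e.g. the blue and green of the squares) are known in advance. The per-pixel colour-distance test and its threshold are illustrative assumptions, not the claimed method.

```python
import numpy as np

def body_mask(image, pattern_colors, tol=40):
    """Classify each pixel as pattern (background) or body (foreground).

    image: H x W x 3 uint8 RGB array.
    pattern_colors: RGB triples used in the predetermined pattern,
    e.g. [(0, 0, 255), (0, 255, 0)] for blue/green squares.
    Returns a boolean H x W array that is True where the body
    occludes the pattern.
    """
    img = image.astype(np.int32)
    is_pattern = np.zeros(image.shape[:2], dtype=bool)
    for color in pattern_colors:
        # L1 colour distance from each pixel to this pattern colour.
        dist = np.abs(img - np.array(color, dtype=np.int32)).sum(axis=2)
        is_pattern |= dist < tol
    return ~is_pattern
```

In practice such a mask would be cleaned up (e.g. with morphological filtering) before being used as the binary two-dimensional body image.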

The erecting device 13 is configured to hold the wall part 12 in an upright position in relation to the floor part. Thereby, the predetermined pattern 15 on the front side of the wall part 12 faces the individual 100 standing on the at least one marking 14 on the front side of the floor part 11, as illustrated in Fig. 1.

The collapsible wall device 1 facilitates determination of body measurements in different ways. Firstly, it properly shields the body of the individual 100 from disturbing objects in the background. Thus, it facilitates detection of different parts of the body. More specifically, when picturing the individual 100 standing on the collapsible wall device 1, it is possible to determine how many objects are fully or partially covered by the body. An image processing algorithm can then be used to determine body measurements of the individual 100, as will be further explained in relation to Fig. 9.
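The covered-objects idea above can be sketched as counting occluded pattern squares row by row: the longest contiguous run of covered squares, multiplied by the known square size, gives the silhouette width at that height. The contiguity assumption and the function names are illustrative only.

```python
def covered_run(row):
    """Longest contiguous run of occluded squares in one row of the
    pattern grid; the body silhouette is assumed to be contiguous."""
    best = current = 0
    for occluded in row:
        current = current + 1 if occluded else 0
        best = max(best, current)
    return best

def row_width_cm(row, square_size_cm):
    """Silhouette width at this row, from occluded squares of known size."""
    return covered_run(row) * square_size_cm
```

With 2 cm squares, a row in which three adjacent squares are occluded thus indicates a silhouette width of about 6 cm at that height.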

The at least one marking 14 on the floor part 11 will also guide the individual to face in different directions that are beneficial for determining body parts. Hence, by using the collapsible wall device, the user is guided to capture images that are well suited for body part determination.

Furthermore, because the body will be in a fixed position against markings on the floor part 11 of the collapsible wall device 1 (which an application software or other readable instruction clearly describes), the distance between the individual and the front side 122 is optimized and known. Typically, it is desirable if the individual is positioned as close as possible to the wall. The known distance between the individual 100 and the wall part 12 may also be used by an image processing algorithm, e.g. to calibrate extracted measurements. For example, objects closer to the camera may appear to be larger in the images due to a perspective effect. This means that if the individual is standing a certain distance from the wall part 12, the covered parts of the predetermined pattern on the wall part 12 will be larger than the actual individual, due to this perspective effect. Knowing the distance between the individual and the wall, and a set of known distances between points on the predetermined pattern on both the floor part 11 and the wall part, it is possible to compute a correction factor that translates measurements extracted from the covered parts of the predetermined patterns to measurements on the individual.
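The perspective correction described above can be expressed with similar triangles: under a pinhole-camera assumption, a body of true width w standing d centimetres in front of the wall covers a width of w * D / (D - d) of the pattern, where D is the camera-to-wall distance, so the inverse ratio corrects a width measured on the wall. This simplified model and its variable names are illustrative assumptions, not the exact correction used by the disclosure.

```python
def perspective_correction(camera_to_wall_cm, body_to_wall_cm):
    """Factor translating a width measured on the covered pattern back
    to the width of the body itself (similar triangles, pinhole model)."""
    D = camera_to_wall_cm
    d = body_to_wall_cm
    return (D - d) / D

def corrected_width_cm(covered_pattern_width_cm, factor):
    """Apply the correction factor to a width measured on the wall."""
    return covered_pattern_width_cm * factor
```

E.g. with the camera 3 m from the wall and the body 30 cm in front of it, a 50 cm silhouette on the wall pattern corresponds to a 45 cm body width.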

Hence, there are several advantages achieved by using the collapsible wall device in comparison to other types of reference objects, such as a reference object (e.g. a ruler or disc) positioned in front of the body or in other parts of the image. Such a reference object may in addition cause the arms of the individual to disappear or parts of the body to be occluded, e.g. due to the individual holding the reference object. This problem is typically even more significant when picturing the individual 100 from the side or from similar angles. The benefits of using the collapsible wall device 1 for determining body measurements will be described in further detail in relation to Fig. 9, which describes a method for determining body measurements that can be used together with the collapsible wall device 1.

Hence, the collapsible wall device 1 may have different forms and be designed and constructed in different ways. In some embodiments, the floor part 11 and the wall part 12 are two (or more) separate pieces, as illustrated in Fig. 3b and 3c. Then, the floor part 11 and the wall part 12 may be disconnectable from each other. Alternatively, the floor part 11 and the wall part 12 are two pieces that are non-releasably fixated to each other. The collapsible wall device 1 or the separate pieces may have different shapes. For example, they may be rectangular (Fig. 3a), rounded (Fig. 3b) or elliptic (Fig. 3c). In some embodiments, a wedge part 17 is arranged between the parts to assure that the predetermined pattern covers the entire background of the individual 100 standing on the collapsible wall device 1. In some embodiments, the predetermined pattern is aligned at the transitions between the wall part 12, the floor part 11 and (if present) the wedge part 17.

The erecting device 13 typically comprises rods configured to erect the wall part 12 in an upright position and possibly also to tense the floor part 11. In some embodiments, the collapsible wall device 1 is self-standing, i.e. independent of walls and hooks.

Fig. 4a-4c illustrate examples of rods 32 that can be used in the erecting device 13. The rods 32 can be made of different materials, e.g. plastic, fiberglass, metal, wood or another suitable material. In some embodiments the rods 32 are bendable or flexible. In some embodiments the rods are made from rod parts. If the rods 32 are arranged in channels, the channels may have openings to facilitate assembling the parts of the rods. Fig. 4a shows a spring rod 32’, e.g. a steel spring strip. Fig. 4b illustrates a telescopic rod 32”. Fig. 4c illustrates a rod 32”’ formed by a plurality of hollow parts that are held together by an elastic cord. The hollow parts are e.g. assembled using sleeves, e.g. metallic sleeves.

The erecting device 13 may be constructed in different ways. For example, the rods 32 may be arranged as a frame or as a cross. Some possible designs will now be described with reference to Fig. 5 to 7. For simplicity, the predetermined pattern 15 and the at least one marking 14 are not shown in Fig. 5-7.

Fig. 5a-5e illustrate an erecting device 13 according to a first embodiment. In this embodiment the wall part 12 and the floor part 11 are designed as one sheet. In Fig. 5a-5e the sheet is rectangular. However, it must be appreciated that it might also have other shapes, e.g. it may be elliptic or have rounded corners.

Fig. 5a illustrates the collapsible wall device 1 in an erected state. The erecting device 13 comprises springs arranged at the outer edges of the wall part 12 and the floor part 11. More specifically, in this embodiment the erecting device comprises a rod pocket and at least one rod 532, e.g. a spring steel strip. The rod pocket defines a channel 531 that extends along the edges of the rectangular sheet. The at least one rod 532 is positioned in the channel 531 along the edge of the rectangular sheet, as illustrated in Fig. 5b, which shows the channel 531 seen from the side along the channel 531. In some embodiments the at least one rod 532 is bendable or flexible. Thus, the collapsible wall device 1 may be rolled without removing the at least one rod 532. In the erected state, the wall part 12 is raised to an upright position in relation to the floor part 11. This is achieved by folding the rectangular sheet such that the angle α between the floor part 11 and the wall part 12 is about 90°. An angle of 90° is usually desirable. However, in reality the angle α might deviate a few degrees.

The at least one rod 532 is angled in the transition 533 between the floor part 11 and the wall part 12. In other words, the at least one rod 532 is shaped like a corner in the transition between the wall part 12 and the floor part 11. In some embodiments, the at least one rod 532 is also angled in the corners 534 of the rectangular sheet. Another possibility is that the corners of the rectangular sheet are rounded and that the at least one rod is simply bent in the corners.

If the at least one rod 532 is stiff enough, the corners can be formed by bending the steel spring, as illustrated in Fig. 5c. Another possibility is to use a sleeve 537 made of a solid material to create the corner (Fig. 5d).

In some embodiments, illustrated in Fig. 5e, the erecting device 13 comprises a plurality of rods 532 that can be assembled by coupling means. For example, the rod parts are connected by angle coupling means 535 e.g. at the transitions 533 between the floor part 11 and the wall part 12. In some embodiments, openings 536 are formed in the channels 531 at the transitions, such that a user can easily connect the parts to each other when erecting the collapsible wall device 1. The angle coupling means 535 are e.g. plastic blocks with holes on two adjacent sides where the springs may be inserted and fixated, such that the angle of about 90° is formed between the rods 532 of the wall part 12 and the floor part 11 when the collapsible wall device 1 is erected.

In some embodiments, the rods 532 are also split into rod parts along the sides of the collapsible wall device 1. For example, the rods 532 are split in the middle of the upper edge 123 (Fig. 3a) of the wall part 12 and in the middle of the front edge 113 of the floor part 11. In this way, the collapsible wall device 1 may be folded along a middle 51 before rolling it, which means that the collapsible wall device 1 will be shorter (about half length) in the collapsed state. Coupling means 535 are then arranged to fasten the rod parts to each other when erecting the collapsible wall device 1. The coupling means 535 are e.g. plastic blocks or sleeves with holes on opposite sides where the rods can be inserted and fixated. The coupling means 535 and the angle coupling means 535 may be fixed, e.g. stitched, to the collapsible wall device 1. In some embodiments, openings 536 are formed in the channels 531 at the couplings, such that a user can connect the parts to each other. Alternatively, the coupling means 535 are hinges (Fig. 5f) that can be locked in a desired angle with e.g. a “click” mechanism.

Fig. 6a illustrates a collapsible wall device 1 with an erecting device 13 according to a second embodiment. In this embodiment the floor part 11 and the wall part 12 have rounded corners. However, they might as well be elliptic. The erecting device 13 comprises springs 632 arranged at the outer edges of the wall part 12 and the floor part 11. In other words, a spring 632, e.g. a steel spring strip, is arranged in a channel 631 along the wall part 12 (Fig. 6b). In the same way another steel spring strip is arranged in a channel along the floor part 11. In the erected state the floor part 11 is attached to the wall part 12 by a coupling device 16, which holds the wall part 12 in an upright position in relation to the floor part 11, see Fig. 6c, which pictures the coupling device (and the collapsible wall device 1) seen from the side. The coupling device 16 is e.g. a hinge that can be locked at an angle of 90°, or it is a fixed coupling. In some embodiments, the wall part 12 may be detached from the floor part 11 when the collapsible wall device 1 is in a collapsed state.

The collapsible wall device 1 can then be folded by twisting the springs 632 (Fig. 6d), such that the flexible rods form two or more loops (Fig. 6e) instead of one.

The collapsible wall device 1 can then fit in a flat case or box. When taking the collapsible wall device 1 out of the case or box, the steel spring strip will force the collapsible wall device 1 to automatically “pop up”. In other words, in some embodiments, the erecting device 13 is configured to raise the wall part 12 to the upright position.

Fig. 7 illustrates an erecting device 13 according to a third embodiment. In this embodiment the erecting device 13 comprises (stiff) rods 732 that are arranged as a cross at the back of the wall part and attached to the wall part 12 in the corners 734. Click joints 733 are arranged along the rods 732 such that the rods can be collapsed (leftmost drawing). The click joints make it possible to unfold the collapsible wall device 1 (like an umbrella). The drawing second closest to the left illustrates the collapsible wall device 1 when erected, and the two right figures illustrate the click joint 733 in further detail, when open (second rightmost) and closed (rightmost). In Fig. 7, the floor part is only supported by the floor. However, it may alternatively be supported by at least one rod.

It must be appreciated that the collapsible wall device 1 and the erecting device 13 are not limited to the examples above. In principle, any suitable construction may be used as long as the wall is erectable to a state where the at least one marking 14 and the predetermined pattern 15 are arranged as described above, and as long as it is also collapsible and portable. Even though a stand-alone construction is generally desirable, a wall-mounted design is not to be excluded.

In other words, in some embodiments, the floor part is only supported by the surface. In some embodiments the floor part is also supported by rods, in order to assure that it is correctly unfolded.

Fig. 8a-8d illustrate examples of markings indicating where the individual shall stand. The markings indicate to a user an approximate position for his or her feet when capturing images for use when determining body measurements.

The markings are for example footprints facing away from the wall part 12, as illustrated in Fig. 8a. The footprints show where the individual shall put his or her feet. The markings are typically matched to an image processing algorithm, such that the images of the individual are captured from angles beneficial for determining body measurements.

For simplicity, Fig. 8a to 8d only illustrate two pairs of markings: one marking (here denoted 14a) positioned close to the wall part 12 that indicates a direction facing straight out from the wall part 12, and one marking (here denoted 14b) a bit further away from the wall part 12 that indicates an angle facing along the wall. However, the markings may indicate further directions, such that the individual may be pictured from different angles. For example, the markings may comprise 6 pairs of footprints with 15° in between, such that the individual may be pictured from 6 different angles. In other words, in some embodiments, the at least one marking indicates a plurality of directions for the individual to face when capturing images for use in the determination of body measurements.

In some embodiments the markings comprise lines 141 with different styles (e.g. colour or pattern) for different foot sizes. In other words, the at least one marking 14 may also comprise markings for different foot sizes, for example different colours or styles that correspond to different shoe sizes (for instance size 8, 12, 14 etc.). Then it will be even clearer to the user where to put his or her feet.

The markings may be designed in other ways, as illustrated in Fig. 8b to 8d. In some embodiments, the markings are shaped as arrows (Fig. 8b, Fig. 8c) indicating where the individual shall place his or her feet and in which direction.

In some embodiments, the markings comprise circles (Fig. 8d) showing a centre and an area where the user shall place his or her feet.

All the different types of markings can be numbered, such that a user may be instructed (e.g. via a software application used for determining the body measurements) about the position and direction in which he or she should place his or her feet for one particular image.

Fig. 9 is a flowchart of a computer implemented method for determining body measurements of an individual 100. The method may be implemented as a computer program comprising instructions which, when the program is executed by a computer (e.g. a processor in an electronic device 2), cause the computer to carry out the method. According to some embodiments the computer program is stored in a computer-readable medium (e.g. a memory or a compact disc) that comprises instructions which, when executed by a computer, cause the computer to carry out the method.

The proposed method may be performed by an individual 100 being a customer who is shopping in an internet store. The individual 100 may be at home or in principle anywhere else. The computer implemented method may be executed locally, i.e. where the individual is. For example, the computer implemented method may be a software application installed on the customer's smartphone. Alternatively, the computer implemented method may be executed remotely in a server. In some embodiments the method is performed jointly by a local device and a remote device that communicate with each other.

The method is typically performed using the collapsible wall device 1 described in Fig. 1 to Fig. 8. However, another background may also be used, as long as it has a predetermined pattern thereon. In other words, in some embodiments, the predetermined pattern comprises regularly spaced shapes with predetermined dimensions.

The user, e.g. the individual, initiates the method for example by starting a software application installed on a smartphone. The user may then be instructed to fold out the collapsible wall device 1 and to take a series of images while standing in different directions, indicated e.g. by the at least one marking 14. In other words, in some embodiments, the method comprises providing S0 user output instructing an individual to stand in front of a wall having a predetermined pattern thereon. The instruction may also instruct the user how to pose. For example, the individual may be instructed to hold his or her hands on the hips or straight out from the body, while standing on the at least one marking 14 on the collapsible wall device 1.

The proposed method comprises obtaining S1 at least two images of the individual, picturing the individual facing in different directions while standing in front of a wall having a predetermined pattern thereon. In other words, the individual is pictured from different angles or directions in the at least two images. If the method is performed using a mobile device comprising a camera (e.g. a smartphone), the camera may be used to capture the images. In other words, in some embodiments the obtaining S1 comprises capturing the images or reading the captured images from a memory of the electronic device performing the method. If the method is implemented in a server, the obtaining S1 typically comprises receiving the images from a camera, e.g. via a web interface or via email. It is typically important that the individual 100 stands in a desirable way in relation to the wall. For example, the individual shall stand close to the wall and face different predetermined directions in the different images. This might easily be achieved using the collapsible wall device 1, as it will guide the user to the right positions. For example, 3 to 5 images are captured. Between the capturing of the images the individual typically turns slightly, such that the next image pictures the user from a different angle. The images are typically captured straight from the front in relation to the wall. In some embodiments, the individual has a predefined pose in the images. For example, the user holds his or her hands at the hips or straight out from the body. In other words, in some embodiments, the obtained images picture the individual from predetermined angles and/or in predetermined poses.

The shooting can be done as follows. The individual 100 to be measured stands on the at least one marking 14 on the floor part 11 of the collapsible wall device 1, with his or her back facing the front side 122 of the wall part 12. Another person may need to assist in capturing the images, using e.g. a smartphone comprising a camera. Alternatively, a self-timer is used. A first image is captured straight from the front. The individual then turns e.g. 20 degrees, whereby a second image is captured, and so on. This continues until a certain number of images have been captured. The images are then stored in the smartphone for further processing. Alternatively, the images are sent to a server for storage.

In some embodiments, a video is captured and then the obtaining comprises selecting the images from the captured video. The video may picture the individual when rotating, while standing on the collapsible wall device 1.

In some embodiments, the individual stands still during the shooting, while the camera is moved around the individual to picture the individual from different directions. In some embodiments, there are further markings (e.g. arrows) on the floor part indicating such angles. Then it can be assured that the individual stands at the same position and with the same pose in all images. This may be beneficial for some types of image processing. A combination of rotating the individual and moving the camera is also possible. For example, images from 3 different angles are captured where the individual faces the camera (i.e. standing with his or her feet at a 90° angle from the wall). Then the individual turns 180 degrees, and three images are captured from the same angles, to picture the individual from the back. In some embodiments, the floor part comprises a rotating part, e.g. a pedestal, such that the individual can be automatically rotated.

In some embodiments, at least one image is taken from above to picture the individual's feet. Such an image enables determining measurements of the individual's feet and may consequently be used to try out shoes or similar.

In some embodiments, at least one image of the background pattern without the individual present is also obtained. Such an image can be used to calibrate optical parameters of the camera such as lens distortion, focal length and principal point or to compensate for angular distortion or imperfection in the installation of the collapsible wall device 1.

The method further comprises analysing S2 the predetermined pattern pictured in the obtained images to detect parts of the images that comprise the body of the individual and to determine reference points of the predetermined pattern. More specifically, the predetermined pattern is analysed to (i) find which parts of the images contain the background and the individual respectively, and (ii) determine the correspondence between pixel measurements and physical metric measurements. This is done by analysing the predetermined pattern 15, which is basically a background reference pattern. In some embodiments, the pattern comprises a plurality of regularly spaced squares with predetermined dimensions. The system then first detects these squares. This can be done in a number of ways. One possible approach is (i) to apply an image segmentation algorithm to detect connected regions, (ii) to analyse each region to determine whether it is a square of the expected colour, (iii) to create a list of the sizes of each detected square, and (iv) to determine the square size that fits best with a majority of the detected squares. The determination of whether a segmented region is a square can be done in a number of ways, e.g. by fitting a polygon to the square boundary. The comparison with the expected reference colour should typically be done in a liberal way, to allow for photographic differences due to lighting conditions and individual differences between cameras. The determination of a majority square size can be done e.g. by (i) taking the median of all square sizes or (ii) selecting the square size S where the number of squares with a size within [S − ε, S + ε] is as large as possible, using a predetermined parameter ε. Once the majority square size has been decided, detected squares with a size that differs too much from the majority size are rejected, and a list of accepted squares is kept. The precise colours of the predetermined pattern, as manifested in the current photographic setting, can then be determined by computing a mean value of colours within and in between the reference squares.
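Step (iv) above, i.e. selecting the square size that the largest number of detected squares agree on within a tolerance ε and rejecting outliers, can be sketched as follows. The function names and the tolerance value are illustrative assumptions:

```python
def majority_square_size(sizes, eps=2.0):
    """Pick the square side length S (in pixels) such that the number of
    detected squares with a size within [S - eps, S + eps] is as large
    as possible (candidate values of S are taken from the detections)."""
    best_s, best_count = None, -1
    for s in sizes:
        count = sum(1 for x in sizes if abs(x - s) <= eps)
        if count > best_count:
            best_s, best_count = s, count
    return best_s

def accepted_squares(sizes, eps=2.0):
    """Reject detections whose size differs too much from the majority size."""
    s = majority_square_size(sizes, eps)
    return [x for x in sizes if abs(x - s) <= eps]

sizes = [40, 41, 39, 40, 12, 40, 78]
# majority size is 40 px; the 12 px and 78 px detections are rejected
kept = accepted_squares(sizes)
```

The segmentation and polygon-fitting steps that produce the size list are omitted here; in practice they could be implemented with standard connected-component and contour-approximation routines.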

The detected predetermined pattern 15 is then used to determine known reference points of the predetermined pattern. If the pattern comprises squares, the reference points are for example corners of one or more of the squares. In this embodiment, the determined reference points are typically used to compute a pixels-to-meter ratio for each image, which is used in later processing to convert body measurements to physical, metric measurements.
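The pixels-to-meter ratio mentioned above can be sketched from two reference points with a known physical separation, e.g. two corners of one pattern square. The function name and example values are illustrative assumptions:

```python
def pixels_per_meter(corner_a_px, corner_b_px, known_distance_m):
    """Pixels-to-meter ratio from two reference points (pixel coordinates)
    whose physical separation in metres is known from the pattern design."""
    dx = corner_b_px[0] - corner_a_px[0]
    dy = corner_b_px[1] - corner_a_px[1]
    pixel_distance = (dx * dx + dy * dy) ** 0.5
    return pixel_distance / known_distance_m

# a 0.05 m square edge spanning 40 px gives 800 px per metre
ratio = pixels_per_meter((100, 200), (140, 200), 0.05)
shoulder_width_m = 320 / ratio   # a 320 px measurement -> 0.4 m
```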

The method further comprises constructing S3 a model of the body of the individual based on the detected parts of the image comprising the body of the individual. There are different ways of implementing this.

In some embodiments, the constructed model comprises a set of binary two-dimensional (also referred to as 2D) images of the body of the individual, wherein each binary two-dimensional image corresponds to one of the obtained images. More specifically, for each image, a foreground/background mask is obtained. The rest of the method will now be explained with reference to this embodiment.

The foreground/background mask is for example obtained using a “green-screen-style” analysis, which comprises comparing all pixel values with the predetermined pattern and applying a threshold on the colour difference. The output from this step is a set of binary images with a predetermined value for pixels belonging to the predetermined pattern and another predetermined value for pixels belonging to the body of the individual.
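The colour-difference thresholding can be sketched as below. The threshold value and function name are illustrative assumptions; a practical implementation would use the pattern colours estimated in the analysing step:

```python
import numpy as np

def foreground_mask(image, pattern_colours, threshold=40.0):
    """Binary foreground/background mask: a pixel is classified as
    background if it is close (in RGB distance) to any of the reference
    pattern colours, otherwise it is assumed to belong to the individual.
    image: H x W x 3 array; pattern_colours: list of RGB triples."""
    img = image.astype(np.float64)
    background = np.zeros(image.shape[:2], dtype=bool)
    for colour in pattern_colours:
        diff = np.linalg.norm(img - np.asarray(colour, dtype=np.float64), axis=2)
        background |= diff < threshold
    return ~background  # True where the body is
```

A liberal threshold tolerates lighting variations, at the cost of occasionally misclassifying body pixels whose colour happens to resemble the pattern.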

The predetermined pattern brings about further advantages that may be used e.g. when constructing S3 the model. For example, the predetermined pattern can be used to compensate for optical or perspective distortion in the obtained images. If the predetermined pattern continues on the floor part, then it may be used for further correction and calibration of the images. Furthermore, the predetermined pattern on the floor part may be used to verify the camera's position in relation to the wall.

The method further comprises detecting S4 body parts in the constructed model and/or in the obtained images. In other words, the set of binary images (i.e. the two-dimensional model) are analysed to detect key body parts such as head, torso, legs, feet, arms, knees, shoulders, etc. The output is e.g. a list of key body part positions, expressed as image pixel coordinates. There are several published methods for human pose estimation in the scientific computer vision literature that can be used, and this invention is not restricted to any specific such method. One possibility is to use deep Convolutional Neural Networks (deep CNNs). In other words, in some embodiments, the detecting S4 comprises using a machine learning model trained to detect the body parts.

As an example, the method described in [Newell, Yang, Deng. “Stacked hourglass networks for human pose estimation”. ECCV 2016] can be used. In this approach, a deep CNN is trained on images of humans in different environments. At runtime, the CNN produces one heatmap for each key point of the human body (knee, elbow, ankle, etc.). By finding maxima in these heatmaps, the location of each such key point in the image can be obtained. In the context of this disclosure, better results can be achieved by constructing a deep CNN that is trained only on images representative of the intended use case. A training dataset can be constructed containing images of different individuals standing at the designated position in front of the predetermined pattern. Ground truth can be provided by manual annotation, where the precise location of each key body part is annotated by a human. A neural network can then be trained on this dataset, or on a combination of this dataset and publicly available larger and more general datasets. The neural network layout can be constructed in a number of ways, e.g. according to [Newell, Yang, Deng] as mentioned above. The binary foreground/background mask can also be used as a separate input to the neural network, included in both training and runtime. At runtime, the foreground/background mask can be used to cut out an image patch, in which the individual is well-centred, to feed to the network.
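The step of finding maxima in the heatmaps to obtain key point locations can be sketched as follows (the network itself is out of scope here; the function name is an illustrative assumption):

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps):
    """Extract one (x, y) pixel location per body key point by taking the
    maximum of each heatmap channel, as in heatmap-based pose estimation.
    heatmaps: K x H x W array, one channel per key point (knee, elbow, ...)."""
    points = []
    for hm in heatmaps:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        points.append((int(x), int(y)))
    return points
```

Sub-pixel refinement (e.g. fitting a paraboloid around each maximum) is a common extension but is omitted in this sketch.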

If the analysing, constructing S3 or detecting S4 fails in one or more of the images, then the user might need to recapture one or more of the images. Hence, in some embodiments, the method comprises providing S5 user output indicating that one or more of the obtained images has insufficient quality. The instruction is e.g. provided via a user interface in the smartphone.

The method further comprises determining S6 the body measurements based on the constructed model, the detected body parts and the determined reference points. More specifically, the body measurements are determined based on the constructed model and a relationship between image coordinates of the reference points in the constructed model and known metric distances between the determined reference points of the predetermined pattern. For the two-dimensional model this is done by extracting S6a metrics from the individual binary two-dimensional images and combining S6b the metrics to obtain the body measurements, see Fig. 10. In other words, the measured key body part positions are used to guide the extraction of the desired output body measurements.

There are different types of measurements: distance measurements and cross-section measurements. Distance measurements are for example body height or the distance between shoulders. Cross-section measurements are e.g. the circumference of an arm, a leg or a head.

In one example embodiment, to extract S6a a measurement expressing the circumference of a certain limb 104, two key points 101 representing the beginning and end of that limb 104 are used to construct a limb axis 103. Across this limb axis 103, a number of cross-section measurements 102 are made, excluding cross-sections that are closer to the key points 101 than a predetermined margin. The largest cross-section measurement is then used as the cross-section measurement. This procedure is illustrated for the upper arm cross-section measurement in Fig. 11.
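The sampling of widths perpendicular to the limb axis can be sketched as below. This is a simple pixel-walking illustration, not the disclosed implementation; the function names, sample count and margin value are assumptions:

```python
import numpy as np

def _run_length(mask, centre, normal, sign, start):
    """Count consecutive foreground pixels walking from `centre` along
    `sign * normal`, beginning `start` steps out from the centre."""
    h, w = mask.shape
    count = 0
    step = start
    while True:
        q = centre + sign * step * normal
        yq, xq = int(round(q[1])), int(round(q[0]))
        if not (0 <= yq < h and 0 <= xq < w) or not mask[yq, xq]:
            return count
        count += 1
        step += 1

def limb_cross_section(mask, p_start, p_end, n_samples=20, margin=0.15):
    """Largest limb width (in pixels) sampled perpendicular to the limb
    axis p_start -> p_end in a binary foreground mask; samples closer to
    either key point than `margin` (as a fraction of the axis length)
    are excluded, following the extraction procedure described above."""
    p0 = np.asarray(p_start, dtype=np.float64)  # (x, y) pixel coordinates
    p1 = np.asarray(p_end, dtype=np.float64)
    axis = p1 - p0
    direction = axis / np.linalg.norm(axis)
    normal = np.array([-direction[1], direction[0]])  # perpendicular to axis

    widths = []
    for t in np.linspace(margin, 1.0 - margin, n_samples):
        centre = p0 + t * axis
        # walk outwards on both sides of the axis; the centre pixel is
        # counted once (the second walk starts one step out)
        width = (_run_length(mask, centre, normal, 1.0, 0) +
                 _run_length(mask, centre, normal, -1.0, 1))
        widths.append(width)
    return max(widths)
```

The resulting pixel width would then be converted to metres with the pixels-to-meter ratio before being combined across images.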

For distance measurements expressing lengths (e.g. shoulder-to-shoulder), the measurements can be extracted S6a by measuring the distance between key pose points directly.

Then the measurements from individual images are combined S6b. First, all pixel measurements are converted to metric measurements using the reference points (or rather the pixels-to-meter ratio) computed in the analysing S2 step. In some embodiments the distance between the wall and the individual 100 is known.

The measurements may then also be calibrated to compensate for this distance, i.e. the perspective effect described above. To determine this compensation, the camera focal length can be used. The focal length can either be preconfigured to a nominal value related to the specific camera model or be computed in a camera calibration with the predetermined pattern.

The distance measurements from the different input images are simply combined S6b by averaging the distance measurements, to minimize the measurement error. In other words, in some embodiments the combining S6b comprises averaging distance measurements from the individual binary two-dimensional images to produce a distance measurement associated with the body. In some embodiments, any individual measurement that stands out from the other measurements is disregarded.

For circumference measurements, the combining S6b comprises combining individual cross-section measurements from the individual binary two-dimensional images to produce a circumference measurement of a body part. This can be done in a number of ways, e.g. by fitting a smooth curve consistent with the cross-section measurements and measuring the length of this curve. Prior knowledge about the typical shape of different body part cross-sections can be built in by assuming an initial outline curve and finding a new curve that is consistent with the measurements while being as close as possible to the original initial curve. An alternative, simpler method is to compute the circumference c as

c = k · (d1 + d2)

where d1 and d2 are orthogonal cross-section measurements, and k is a predetermined constant.

Note that by selecting k differently, this formula can be used for computing the circumference of ellipses, rectangles, and everything in between. A constant can be selected that best captures the typical shape of each body part, and different constants can be used for different body parts. This is illustrated in Fig. 12.
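A formula of the form c = k · (d1 + d2) matches the statement that one constant choice yields ellipses and another yields rectangles: k = π/2 gives the (approximate) circumference of an ellipse with diameters d1 and d2, and k = 2 gives the perimeter of a rectangle with those side lengths. A sketch under that assumption, with illustrative constants:

```python
import math

# k = pi/2 covers circles/ellipses, k = 2 covers rectangles; values in
# between approximate intermediate cross-section shapes. The per-body-part
# constant below is an assumed illustrative value, not from the source.
K_ELLIPSE = math.pi / 2
K_RECTANGLE = 2.0
K_UPPER_ARM = 1.65

def circumference(d1: float, d2: float, k: float) -> float:
    """c = k * (d1 + d2) for orthogonal cross-section measurements d1, d2."""
    return k * (d1 + d2)

# sanity checks against known shapes:
circle_c = circumference(0.1, 0.1, K_ELLIPSE)    # circle, diameter 0.1 m -> pi * 0.1
square_c = circumference(0.1, 0.1, K_RECTANGLE)  # square, side 0.1 m -> 0.4
```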

If the individual stands on a surface that also has a predetermined pattern thereon, then measurements may also be performed on the individual’s feet. This may be done in a similar manner as for the rest of the body. For example, the feet’s contour may be determined by analysing the part of the predetermined pattern 15 that is covered by the feet. However, this may require that further images are captured, e.g. from above.

In some embodiments, the method comprises determining S7 a size of a garment by comparing the estimated body measurements with a size table. Manufacturers typically provide tables with arm length, forearm circumference, size of the collar, etc. for specific sizes. When the body measurements have been determined, the body measurements may be compared to the tables to find the right size for the individual. It is also possible to determine which body measurements differ most from the standard sizes, e.g. if an individual has relatively large forearms.

A plurality of size tables provided by different manufacturers may be stored in a database, e.g. in a server. Each manufacturer may provide one or more size tables. If a customer has selected a particular garment, the corresponding size table is retrieved from the server and the matching S7 is made for that particular size table. Hence, one set of body measurements may typically result in different sizes for different garments. Thus, a person that would normally buy size 10 for all garments would in some cases get an 8 or 12.
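The matching against a size table, i.e. selecting the smallest size whose table values accommodate all body measurements, can be sketched as follows. The data layout, measurement names and slack parameter are illustrative assumptions:

```python
def match_size(body, size_table, slack_cm=0.0):
    """Pick the smallest garment size whose table values are all at least
    as large as the body measurements (minus an optional slack, e.g. for
    loose-fitting garments). Measurements in cm; size_table maps size
    label -> dict of the same measurement names, ordered small to large."""
    for label, limits in size_table.items():
        if all(body[name] <= limits[name] + slack_cm for name in body):
            return label
    return None  # no size in the table fits

sizes = {
    "S": {"chest": 92, "sleeve": 60},
    "M": {"chest": 100, "sleeve": 63},
    "L": {"chest": 108, "sleeve": 66},
}
best = match_size({"chest": 98, "sleeve": 61}, sizes)  # -> "M"
```

Customer preferences such as a tight or loose fit could be expressed through the slack parameter or by deliberately selecting the size below or above the match.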

In general, it is desirable to select the garment such that all body measurements are smaller than the numbers in the table. However, in particular for garments with a loose fit, a small deviation (<1-2 cm) above the value in the size table may be acceptable.

The matching may also take other parameters into account, such as preferences expressed by the customer. The customer might e.g. want a tight fit.

The proposed method has now been described with reference to a model comprising two-dimensional images. Another approach is to instead use a full three-dimensional (3D) reconstruction of the human body to extract the measurements. In other words, in some embodiments, the constructed model is a three-dimensional model of the body of the individual. The steps of obtaining S1 and analysing S2 images work basically in the same way as in the embodiment using 2D imaging, and the output from the analysing S2 is thus a plurality of reference points of the predetermined pattern and a binary mask defining which parts of the obtained images contain the individual. However, in this embodiment it may be beneficial to let the individual stand still and instead rotate the camera around the individual while capturing the images. Then it can be assured that the individual has the same pose in all images. In some embodiments, the analysing S2 step is performed for only one of the images, e.g. an image comprising a frontal view. However, it is generally good to perform the analysing for several images, to increase robustness and average out errors.

The following steps (S3 to S6) are slightly different when using a three-dimensional reconstruction. In this embodiment the constructing step takes multiple images picturing the individual as input and produces a 3D model of the body of the individual as output. The binary mask is used to determine where in the images the individual is located, so that the 3D reconstruction can focus on reconstructing the individual rather than irrelevant surroundings (furniture, etc.). The pre-determined pattern makes the 3D reconstruction more robust, as it ensures a high contrast between the individual and the background, since no details of the background have a colour or pattern similar to the individual.

In some embodiments, the three-dimensional model of the body of the individual is constructed using a multiple-view 3D reconstruction algorithm or a structure-from-motion algorithm. The 3D model can be represented as (i) a depth map defined on a pixel grid, (ii) a point cloud, (iii) a polygonal mesh, or (iv) a voxel grid. Such methods are very well studied in the scientific computer vision literature, and any combination of algorithms found in the literature can be used. One example is described in [Vu, Labatut, Pons, Keriven. "High Accuracy and Visibility-Consistent Dense Multiview Stereo". PAMI 2012]. The basic theory is well described in the book [Hartley, Zisserman. "Multiple View Geometry in Computer Vision". Cambridge University Press 2000].

If the individual stands close to the wall, the 3D reconstruction will only be able to accurately reconstruct the frontal side of the individual (i.e. the side facing the camera). To get accurate measurements, two sets of images can be obtained; one set where the individual is facing the camera, and one set where the individual faces the wall. For each of these image sets, a reconstruction can be produced that accurately reconstructs half of the body. The two halves can then be combined, either by producing a joint 3D model of the entire body, or by producing half-circumference measurements from each partial body model and combining these measurements. The reference points of the pre-determined pattern obtained by the reference pattern analysis are used to get an absolute metric scale of the 3D body model.
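The use of reference points to fix the absolute metric scale can be sketched as follows. This is a minimal illustration only: the `scale_to_metric` helper, the 10 cm marker separation and the toy point cloud are assumptions for the example, not values from the actual pattern.

```python
import numpy as np

# Sketch of fixing the absolute metric scale of a reconstructed point
# cloud. Assumption (for illustration): two reference points of the
# pre-determined pattern have been identified in the reconstruction,
# and their true physical separation (here 10 cm) is known.

def scale_to_metric(points, ref_a, ref_b, true_distance_cm):
    """Uniformly rescale a point cloud (N x 3 array) so that the
    distance between the two reference points equals true_distance_cm."""
    reconstructed = np.linalg.norm(ref_a - ref_b)
    scale = true_distance_cm / reconstructed
    return points * scale


# Toy reconstruction in arbitrary (scale-free) units
cloud = np.array([[0.0, 0.0, 0.0],
                  [0.5, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
a, b = cloud[0], cloud[1]  # reconstructed positions of two pattern markers
metric_cloud = scale_to_metric(cloud, a, b, true_distance_cm=10.0)
# The two markers are now 10 cm apart, so all other distances in the
# model are in centimetres as well.
```

A uniform scale is sufficient here because multiple-view reconstruction recovers shape up to an unknown global scale; the pattern supplies the one missing metric constraint.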

In some embodiments the distance between the wall and the individual 100 is known. Then this distance can be included as a metric constraint in the 3D reconstruction, contributing to resolving projective reconstruction errors and thereby increasing the accuracy of the metric reconstruction.

In the detecting S4 step, key body parts are then localized, in a manner similar to the embodiment using 2D imaging. However, in contrast to the embodiment using 2D imaging, in this embodiment 3D information can be included in the body part localization. For example, the 3D information can be represented as a depth map which is fed to a deep learning model as an additional input channel. The output from the detecting S4 is a list of positions of key body parts, relative to the 3D body model.
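As a sketch of this idea, the depth map can be stacked onto the colour image as a fourth input channel before the combined array is fed to the detector. The resolution and the zero-filled arrays below are arbitrary placeholders; the deep learning model itself is not shown.

```python
import numpy as np

# Sketch: representing 3D information as a depth map and feeding it to
# a body-part detector as an additional input channel. The image size
# is an arbitrary placeholder for the example.

height, width = 256, 192
rgb = np.zeros((height, width, 3), dtype=np.float32)    # normalised colour image
depth = np.zeros((height, width, 1), dtype=np.float32)  # depth map from the 3D model

# Four-channel input tensor (R, G, B, depth) for a keypoint network
rgbd = np.concatenate([rgb, depth], axis=-1)
print(rgbd.shape)  # prints: (256, 192, 4)
```

The detector then sees geometry as well as appearance, which can make the localization of key body parts more robust than with colour alone.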

In the determining S5 step, the actual body measurements are then extracted from the 3D model. In other words, in some embodiments, the determining S5 of body measurements comprises extracting the body measurements from the three-dimensional model based on the detected body parts. As in the embodiment using 2D imaging, the extracted measurements may comprise circumference measurements of a body part and/or distance measurements associated with the body.

For circumference measurements, the procedure is illustrated in Fig. 13a and Fig. 13b. The 3D model is analysed along an axis 1301 going through two key body parts, e.g. shoulder 1302 and elbow 1303. A number of cross-sections 1304 of the model orthogonal to this axis are analysed. For each such cross-section 1304, the limb (e.g. arm) contour 1305 is extracted by computing the intersection between the 3D model and the cross-section plane. The circumference of each such contour is measured, and the largest circumference over a certain number of cross-sections is taken as the output circumference measurement. If only a partial body contour is available, e.g. due to occlusion of the back-side of the limb, the contour can be approximated using a minimum-energy continuation of the contour curve 1306, using an optimization that minimizes an energy function expressed by an integral over the (first and n-th order) curve derivatives.
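The circumference procedure can be sketched as follows, assuming that each cross-section contour has already been extracted as an ordered list of 3D points. The `perimeter` and `circumference` helpers and the toy square contours are illustrative only; extracting the contours from the 3D model and the minimum-energy completion of partial contours are not shown.

```python
import numpy as np

# Sketch of the circumference measurement of Fig. 13a/13b: the model is
# sliced orthogonally to the axis through two key body parts, the
# perimeter of each cross-section contour is computed, and the largest
# one is taken as the output measurement.

def perimeter(contour):
    """Length of a closed polygonal contour (ordered N x 3 array)."""
    closed = np.vstack([contour, contour[:1]])  # close the loop
    return float(np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1)))

def circumference(cross_sections):
    """Largest contour perimeter over all analysed cross-sections."""
    return max(perimeter(c) for c in cross_sections)


# Toy example: two square "limb" cross-sections with side 3 and side 4
small = np.array([[0, 0, 0], [3, 0, 0], [3, 3, 0], [0, 3, 0]], dtype=float)
large = np.array([[0, 0, 1], [4, 0, 1], [4, 4, 1], [0, 4, 1]], dtype=float)
print(circumference([small, large]))  # prints: 16.0
```

Taking the maximum over a number of cross-sections makes the measurement insensitive to the exact positions of the detected key body parts along the limb axis.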

For distance measurements, i.e. measurements representing lengths (e.g. shoulder-to-shoulder), the distance measurements can be made by directly measuring the distance between points on the 3D model that are closest to the key pose positions.
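A minimal sketch of such a distance measurement follows. The toy point-cloud model and the detected key pose positions near the shoulders are hypothetical values chosen only for the example.

```python
import numpy as np

# Sketch of a length measurement such as shoulder-to-shoulder: find the
# model points closest to the detected key pose positions and measure
# the Euclidean distance between them.

def closest_model_point(model_points, key_position):
    """Point of the 3D model (N x 3 array) closest to a key position."""
    distances = np.linalg.norm(model_points - key_position, axis=1)
    return model_points[np.argmin(distances)]

def distance_measurement(model_points, key_a, key_b):
    a = closest_model_point(model_points, np.asarray(key_a, dtype=float))
    b = closest_model_point(model_points, np.asarray(key_b, dtype=float))
    return float(np.linalg.norm(a - b))


# Toy model (cm): two shoulder points and one chest point
model = np.array([[-20.0, 150.0, 0.0],
                  [ 20.0, 150.0, 0.0],
                  [  0.0, 100.0, 0.0]])
# Detected key pose positions land near, but not exactly on, the model
print(distance_measurement(model, [-19.0, 151.0, 0.0], [21.0, 149.0, 0.0]))  # prints: 40.0
```

Snapping to the nearest model point makes the measurement robust to small localization errors in the detected key pose positions.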

The pre-determined pattern may also be used in other ways. For example, if the intrinsic calibration of the camera is not known or only known approximately, the background pattern can be used to resolve the projective ambiguity resulting from a 3D reconstruction using an uncalibrated camera. A reference coordinate system can be constructed using the pattern, and the reconstruction can be constrained to respect the known coordinates of points on the background pattern. Methods for handling projective ambiguities using additional constraints are well known in the computer vision literature, see e.g. [Hartley, Zisserman] as referenced above. In other words, in some embodiments, the computer implemented method comprises using the pre-determined pattern to resolve projective ambiguity resulting from constructing the three-dimensional model using an uncalibrated camera.

Fig. 14 illustrates an electronic device 2 configured to implement the proposed method. The illustrated electronic device 2 is a mobile telephone. The electronic device 2 includes a camera assembly 21 for capturing digital still pictures and/or digital video clips. It is emphasized that the electronic device 2 need not be a mobile telephone but could alternatively be a dedicated camera or some other device. Other exemplary types of electronic devices 2 include, but are not limited to, a camera, a tablet computing device, a PDA and a personal computer. In some embodiments, the electronic device is a server arrangement.

In some embodiments, the electronic device 2 comprises a communication interface, e.g. wireless communication interface, configured for communicating with a backend server.

The electronic device 2 also comprises a control unit 22 configured to implement the method described in connection with Fig. 9 to 13. Fig. 15 illustrates the control unit 22 in more detail. The control unit 22 comprises hardware and software. The hardware is, for example, various electronic components on a Printed Circuit Board, PCB. The most important of those components is typically a processor 221, e.g. a microprocessor, along with a memory 222, e.g. an EPROM or a Flash memory chip. The software (also called firmware) is typically lower-level software code that runs in the processor.

The control unit 22, or more specifically a processor 221 of the control unit 22, is configured to cause the control unit 22 to perform any or all of the aspects of the method illustrated in Fig. 9 and described in connection thereto. For example, the determination of body measurements, i.e. steps S1 to S5, is performed in the electronic device 2. The obtained images are then stored in the memory 222 and need not be exposed outside the electronic device 2, which protects the privacy of the individual. Size tables are typically stored in a backend server. Hence, it might be desirable to perform the size matching (step S7) in the backend server. Alternatively, relevant size tables could be downloaded to the electronic device 2. Then the entire method may be performed in the electronic device 2.

In particular, the control unit 22 is configured to capture, using the camera assembly 21, at least two images of the individual, the images picturing the individual facing in different directions while standing in front of a wall having a predetermined pattern thereon, and to analyse the predetermined pattern pictured in the obtained images to detect parts of the images that comprise the body of the individual and to determine reference points of the pre-determined pattern.

The control unit 22 is also configured to construct a model of the body of the individual based on the detected parts of the image comprising the body of the individual, to detect body parts in the constructed model and/or in the obtained images, and to determine the body measurements based on the constructed model, the detected body parts and the determined reference points.

The disclosure also relates to a system comprising the collapsible wall device described in relation to Fig. 1 to 9 and the electronic device 2 described in relation to Fig. 14 and 15.

The terminology used in the description of the embodiments as illustrated in the accompanying drawings is not intended to be limiting of the described method, control arrangement or computer program. Various changes, substitutions and/or alterations may be made without departing from the embodiments of the disclosure as defined by the appended claims.

The term "or" as used herein is to be interpreted as a mathematical OR, i.e., as an inclusive disjunction, not as a mathematical exclusive OR (XOR), unless expressly stated otherwise. In addition, the singular forms "a", "an" and "the" are to be interpreted as "at least one", thus also possibly comprising a plurality of entities of the same kind, unless expressly stated otherwise. It will be further understood that the terms "includes", "comprises", "including" and/or "comprising" specify the presence of stated features, actions, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, actions, integers, steps, operations, elements, components and/or groups thereof. A single unit such as e.g. a processor may fulfil the functions of several items recited in the claims.