Title:
METHODS, STORAGE MEDIA, AND SYSTEMS FOR AUGMENTING DATA OR MODELS
Document Type and Number:
WIPO Patent Application WO/2023/283377
Kind Code:
A1
Abstract:
Methods, storage media, and systems for augmenting two-dimensional (2D) data, three-dimensional (3D) data, 2D models, or 3D models are disclosed. Exemplary implementations may: receive a first plurality of images; generate a first 3D model based on the first plurality of images; receive a second plurality of images; generate a second 3D model based on the second plurality of images; and augment the first 3D model with the second 3D model.

Inventors:
THOMAS MATTHEW (US)
BURKHART JACOB (US)
DZITSIUK YEVHENIIA (US)
SOMMERS JEFFREY (US)
BARBHAIYA HARSH (US)
UPENDRAN MANISH (US)
GOULD KERRY (US)
CLIFTON ANNA MARIE (US)
Application Number:
PCT/US2022/036416
Publication Date:
January 12, 2023
Filing Date:
July 07, 2022
Assignee:
HOVER INC (US)
International Classes:
G06T17/00; G06T15/20
Foreign References:
US10621779B1 (2020-04-14)
US8818768B1 (2014-08-26)
US20200372708A1 (2020-11-26)
US20110181589A1 (2011-07-28)
US20120314924A1 (2012-12-13)
US20160378887A1 (2016-12-29)
Attorney, Agent or Firm:
BARBHAIYA, Harsh (US)
Claims:
CLAIMS

What is claimed is:

1. A method of augmenting 3D models, the method comprising: receiving a first plurality of images; generating a first 3D model based on the first plurality of images; receiving a second plurality of images; generating a second 3D model based on the second plurality of images; and augmenting the first 3D model with the second 3D model.

2. The method of claim 1, wherein the first plurality of images and the second plurality of images comprise at least one of visual data or depth data.

3. The method of claim 2, wherein the visual data comprises at least one of image data or video data; and wherein the depth data comprises at least one of point clouds, line clouds, meshes, or points.

4. The method of claim 1, wherein the first plurality of images and second plurality of images are captured by one or more of a smartphone, a tablet computer, an augmented reality headset, a virtual reality headset, a drone, and an aerial platform.

5. The method of claim 1, wherein each image of the first plurality of images and the second plurality of images comprises a building object.

6. The method of claim 5, wherein each image of the first plurality of images comprises an interior of the building object; and wherein each image of the second plurality of images comprises an exterior of the building object.

7. The method of claim 1, wherein the first 3D model and the second 3D model comprise at least one of a polygon-based model or a primitive-based model.

8. The method of claim 1, wherein the first 3D model and the second 3D model correspond to a building object.

9. The method of claim 8, wherein the first 3D model corresponds to an interior of the building object; and wherein the second 3D model corresponds to an exterior of the building object.

10. The method of claim 1, wherein augmenting the first 3D model with the second 3D model is relative to a common coordinate system.

11. The method of claim 10, further comprising generating the common coordinate system.

12. The method of claim 11, wherein the first 3D model is associated with a first coordinate system; wherein the second 3D model is associated with a second coordinate system; and wherein generating the common coordinate system is based on the first coordinate system and the second coordinate system.

13. The method of claim 12, wherein generating the common coordinate system comprises matching the first coordinate system with the second coordinate system.

14. The method of claim 1, wherein augmenting the first 3D model with the second 3D model is based on location information associated with the first 3D model and the second 3D model.

15. The method of claim 14, wherein the location information comprises latitude and longitude information.

16. The method of claim 1, further comprising: identifying a first plurality of sides of the first 3D model; and identifying a second plurality of sides of the second 3D model.

17. The method of claim 16, wherein each side of the first plurality of sides and the second plurality of sides corresponds to a side of a building object.

18. The method of claim 16, wherein augmenting the first 3D model with the second 3D model comprises substantially aligning the first plurality of sides with the second plurality of sides in a common coordinate system.

19. The method of claim 1, wherein augmenting the first 3D model with the second 3D model is based on a first outline of the first 3D model and a second outline of the second 3D model.

20. The method of claim 19, further comprising: generating the first outline of the first 3D model; and generating the second outline of the second 3D model.

21. The method of claim 20, wherein generating the first outline of the first 3D model is based on a top-down view of the first 3D model; and wherein generating the second outline of the second 3D model is based on a top-down view of the second 3D model.

22. The method of claim 19, wherein augmenting the first 3D model with the second 3D model comprises substantially aligning the first outline of the first 3D model with the second outline of the second 3D model.

23. The method of claim 22, wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on one or more architectural elements.

24. The method of claim 23, further comprising: matching an architectural element of the first 3D model with a corresponding architectural element of the second 3D model; wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on the matched architectural element.

25. The method of claim 23, further comprising: matching an architectural element of the first plurality of images with a corresponding architectural element of the second plurality of images; wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on the matched architectural element.

26. The method of claim 22, wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on one or more values derived from one or more architectural elements.

27. The method of claim 26, further comprising: matching an architectural element of the first 3D model with a corresponding architectural element of the second 3D model; substantially aligning the first 3D model with the second 3D model based on the matched architectural element; deriving a value based on the substantial alignment of the first 3D model with the second 3D model; wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on the derived value.

28. The method of claim 26, further comprising: matching an architectural element of the first plurality of images with a corresponding architectural element of the second plurality of images; substantially aligning the first 3D model with the second 3D model based on the matched architectural element; deriving a value based on the substantial alignment of the first 3D model with the second 3D model; wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on the derived value.

29. The method of claim 1, further comprising: identifying a first plurality of elements of the first 3D model; and identifying a second plurality of elements of the second 3D model.

30. The method of claim 29, wherein identifying the first plurality of elements of the first 3D model comprises semantically segmenting the first 3D model; and wherein identifying the second plurality of elements of the second 3D model comprises semantically segmenting the second 3D model.

31. The method of claim 30, wherein identifying the first plurality of elements of the first 3D model further comprises labeling the semantically segmented first 3D model, and wherein identifying the second plurality of elements of the second 3D model further comprises labeling the semantically segmented second 3D model.

32. The method of claim 29, wherein the first plurality of elements and the second plurality of elements are associated with a building object.

33. The method of claim 32, wherein the first plurality of elements and the second plurality of elements are associated with a structure of interest of the building object.

34. The method of claim 29, further comprising: correlating the first plurality of elements with the second plurality of elements; wherein augmenting the first 3D model with the second 3D model is based on the correlated plurality of elements.

35. The method of claim 29, further comprising: identifying a third plurality of elements, wherein the third plurality of elements comprises elements common to the first plurality of elements and the second plurality of elements; wherein augmenting the first 3D model with the second 3D model is based on the third plurality of elements.

36. The method of claim 29, further comprising: identifying an aspect of an element of the first plurality of elements; identifying a corresponding aspect of a corresponding element of the second plurality of elements; wherein augmenting the first 3D model with the second 3D model comprises substantially aligning the aspect of the element of the first plurality of elements with the corresponding aspect of the corresponding element of the second plurality of elements.

37. The method of claim 36, wherein the aspect of the element of the first plurality of elements and the corresponding aspect of the corresponding element of the second plurality of elements is a plane.

38. The method of claim 1, further comprising: identifying a first plurality of elements of the first plurality of images; and identifying a second plurality of elements of the second plurality of images.

39. The method of claim 38, wherein identifying the first plurality of elements of the first plurality of images comprises semantically segmenting each image of the first plurality of images; and wherein identifying the second plurality of elements of the second plurality of images comprises semantically segmenting each image of the second plurality of images.

40. The method of claim 39, wherein identifying the first plurality of elements of the first plurality of images further comprises labeling the semantically segmented first plurality of images, and wherein identifying the second plurality of elements of the second plurality of images further comprises labeling the semantically segmented second plurality of images.

41. The method of claim 38, wherein the first plurality of elements and the second plurality of elements are associated with a building object.

42. The method of claim 41, wherein the first plurality of elements and the second plurality of elements are associated with a structure of interest of the building object.

43. The method of claim 38, wherein the first plurality of elements and the second plurality of elements are not associated with a building object.

44. The method of claim 38, further comprising: correlating the first plurality of elements with the second plurality of elements; wherein augmenting the first 3D model with the second 3D model is based on the correlated plurality of elements.

45. The method of claim 38, further comprising: identifying a third plurality of elements, wherein the third plurality of elements comprises elements common to the first plurality of elements and the second plurality of elements; wherein augmenting the first 3D model with the second 3D model is based on the third plurality of elements.

46. The method of claim 38, further comprising: identifying an aspect of an element of the first plurality of elements; identifying a corresponding aspect of a corresponding element of the second plurality of elements; wherein augmenting the first 3D model with the second 3D model comprises substantially aligning the aspect of the element of the first plurality of elements with the corresponding aspect of the corresponding element of the second plurality of elements.

47. The method of claim 46, wherein the aspect of the element of the first plurality of elements and the corresponding aspect of the corresponding element of the second plurality of elements is a plane.

48. The method of claim 1, wherein augmenting the first 3D model with the second 3D model comprises correlating the first 3D model with the second 3D model.

49. The method of claim 48, further comprising: identifying a first plurality of elements of the first 3D model; and identifying a second plurality of elements of the second 3D model; wherein correlating the first 3D model with the second 3D model comprises assigning a confidence value to each element of the first plurality of elements and the second plurality of elements.

50. The method of claim 49, wherein assigning the confidence value to each element of the first plurality of elements and the second plurality of elements is based on co-visibility of the first plurality of elements and the second plurality of elements.

51. The method of claim 49, wherein assigning the confidence value to each element of the first plurality of elements and the second plurality of elements is based on commonality of the first plurality of elements and the second plurality of elements.

52. The method of claim 48, further comprising: identifying a first plurality of elements of the first plurality of images; and identifying a second plurality of elements of the second plurality of images; wherein correlating the first 3D model with the second 3D model comprises assigning a confidence value to each element of the first plurality of elements and the second plurality of elements.

53. The method of claim 52, wherein assigning the confidence value to each element of the first plurality of elements and the second plurality of elements is based on co-visibility of the first plurality of elements and the second plurality of elements.

54. The method of claim 52, wherein assigning the confidence value to each element of the first plurality of elements and the second plurality of elements is based on commonality of the first plurality of elements and the second plurality of elements.

55. The method of claim 1, wherein augmenting the first 3D model with the second 3D model comprises offsetting the first 3D model from the second 3D model.

56. The method of claim 55, wherein offsetting the first 3D model from the second 3D model is based on one or more architectural elements.

57. The method of claim 56, further comprising: matching an architectural element of the first 3D model with a corresponding architectural element of the second 3D model; wherein offsetting the first 3D model from the second 3D model is based on the matched architectural element.

58. The method of claim 56, further comprising: matching an architectural element of the first plurality of images with a corresponding architectural element of the second plurality of images; wherein offsetting the first 3D model from the second 3D model is based on the matched architectural element.

59. The method of claim 55, wherein offsetting the first 3D model from the second 3D model is based on one or more values derived from one or more architectural elements.

60. The method of claim 59, further comprising: matching an architectural element of the first 3D model with a corresponding architectural element of the second 3D model; substantially aligning the first 3D model with the second 3D model based on the matched architectural element; deriving a value based on the substantial alignment of the first 3D model with the second 3D model; wherein offsetting the first 3D model from the second 3D model is based on the derived value.

61. The method of claim 59, further comprising: matching an architectural element of the first plurality of images with a corresponding architectural element of the second plurality of images; substantially aligning the first 3D model with the second 3D model based on the matched architectural element; deriving a value based on the substantial alignment of the first 3D model with the second 3D model; wherein offsetting the first 3D model from the second 3D model is based on the derived value.

62. The method of claim 1, wherein augmenting the first 3D model with the second 3D model comprises dilating the first 3D model based on the second 3D model.

63. The method of claim 1, further comprising: deriving a scaling factor based on at least one of the first plurality of images, the second plurality of images, the first 3D model, a first coordinate system of the first 3D model, the second 3D model, or a second coordinate system of the second 3D model; and scaling at least one of the first plurality of images, the second plurality of images, the first 3D model, a first coordinate system of the first 3D model, the second 3D model, or a second coordinate system of the second 3D model based on the derived scaling factor.

64. The method of claim 1, wherein the first plurality of images comprises a first plurality of anchor poses, wherein the second plurality of images comprises a second plurality of anchor poses, and wherein augmenting the first 3D model with the second 3D model is based on anchor poses common to the first plurality of anchor poses and the second plurality of anchor poses.

65. The method of claim 1, further comprising: selecting a first subset of images of the first plurality of images based on at least one of translation data associated with the first plurality of images or rotation data associated with the first plurality of images; selecting a second subset of images of the second plurality of images based on at least one of translation data associated with the second plurality of images or rotation data associated with the second plurality of images; wherein generating the first 3D model is based on the first subset of images; and wherein generating the second 3D model is based on the second subset of images.

66. The method of claim 1, further comprising: calculating a first angular error of a first capture path associated with the first plurality of images; determining a suggested rotation based on the first angular error of the first capture path; and displaying the suggested rotation.

67. A non-transient computer-readable storage medium having instructions embodied thereon, the instructions being executable by one or more processors to perform a method for augmenting 3D models, the method comprising: receiving a first plurality of images; generating a first 3D model based on the first plurality of images; receiving a second plurality of images; generating a second 3D model based on the second plurality of images; and augmenting the first 3D model with the second 3D model.

68. The computer-readable storage medium of claim 67, wherein the first plurality of images and the second plurality of images comprise at least one of visual data or depth data.

69. The computer-readable storage medium of claim 68, wherein the visual data comprises at least one of image data or video data; and wherein the depth data comprises at least one of point clouds, line clouds, meshes, or points.

70. The computer-readable storage medium of claim 67, wherein the first plurality of images and second plurality of images are captured by one or more of a smartphone, a tablet computer, an augmented reality headset, a virtual reality headset, a drone, and an aerial platform.

71. The computer-readable storage medium of claim 67, wherein each image of the first plurality of images and the second plurality of images comprises a building object.

72. The computer-readable storage medium of claim 71, wherein each image of the first plurality of images comprises an interior of the building object; and wherein each image of the second plurality of images comprises an exterior of the building object.

73. The computer-readable storage medium of claim 67, wherein the first 3D model and the second 3D model comprise at least one of a polygon-based model or a primitive-based model.

74. The computer-readable storage medium of claim 67, wherein the first 3D model and the second 3D model correspond to a building object.

75. The computer-readable storage medium of claim 74, wherein the first 3D model corresponds to an interior of the building object; and wherein the second 3D model corresponds to an exterior of the building object.

76. The computer-readable storage medium of claim 67, wherein augmenting the first 3D model with the second 3D model is relative to a common coordinate system.

77. The computer-readable storage medium of claim 76, wherein the method further comprises generating the common coordinate system.

78. The computer-readable storage medium of claim 77, wherein the first 3D model is associated with a first coordinate system; wherein the second 3D model is associated with a second coordinate system; and wherein generating the common coordinate system is based on the first coordinate system and the second coordinate system.

79. The computer-readable storage medium of claim 78, wherein generating the common coordinate system comprises matching the first coordinate system with the second coordinate system.

80. The computer-readable storage medium of claim 67, wherein augmenting the first 3D model with the second 3D model is based on location information associated with the first 3D model and the second 3D model.

81. The computer-readable storage medium of claim 80, wherein the location information comprises latitude and longitude information.

82. The computer-readable storage medium of claim 67, wherein the method further comprises: identifying a first plurality of sides of the first 3D model; and identifying a second plurality of sides of the second 3D model.

83. The computer-readable storage medium of claim 82, wherein each side of the first plurality of sides and the second plurality of sides corresponds to a side of a building object.

84. The computer-readable storage medium of claim 82, wherein augmenting the first 3D model with the second 3D model comprises substantially aligning the first plurality of sides with the second plurality of sides in a common coordinate system.

85. The computer-readable storage medium of claim 67, wherein augmenting the first 3D model with the second 3D model is based on a first outline of the first 3D model and a second outline of the second 3D model.

86. The computer-readable storage medium of claim 85, wherein the method further comprises: generating the first outline of the first 3D model; and generating the second outline of the second 3D model.

87. The computer-readable storage medium of claim 86, wherein generating the first outline of the first 3D model is based on a top-down view of the first 3D model; and wherein generating the second outline of the second 3D model is based on a top-down view of the second 3D model.

88. The computer-readable storage medium of claim 85, wherein augmenting the first 3D model with the second 3D model comprises substantially aligning the first outline of the first 3D model with the second outline of the second 3D model.

89. The computer-readable storage medium of claim 88, wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on one or more architectural elements.

90. The computer-readable storage medium of claim 89, wherein the method further comprises: matching an architectural element of the first 3D model with a corresponding architectural element of the second 3D model; wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on the matched architectural element.

91. The computer-readable storage medium of claim 89, wherein the method further comprises: matching an architectural element of the first plurality of images with a corresponding architectural element of the second plurality of images; wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on the matched architectural element.

92. The computer-readable storage medium of claim 88, wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on one or more values derived from one or more architectural elements.

93. The computer-readable storage medium of claim 92, wherein the method further comprises: matching an architectural element of the first 3D model with a corresponding architectural element of the second 3D model; substantially aligning the first 3D model with the second 3D model based on the matched architectural element; deriving a value based on the substantial alignment of the first 3D model with the second 3D model; wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on the derived value.

94. The computer-readable storage medium of claim 92, wherein the method further comprises: matching an architectural element of the first plurality of images with a corresponding architectural element of the second plurality of images; substantially aligning the first 3D model with the second 3D model based on the matched architectural element; deriving a value based on the substantial alignment of the first 3D model with the second 3D model; wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on the derived value.

95. The computer-readable storage medium of claim 67, wherein the method further comprises: identifying a first plurality of elements of the first 3D model; and identifying a second plurality of elements of the second 3D model.

96. The computer-readable storage medium of claim 95, wherein identifying the first plurality of elements of the first 3D model comprises semantically segmenting the first 3D model; and wherein identifying the second plurality of elements of the second 3D model comprises semantically segmenting the second 3D model.

97. The computer-readable storage medium of claim 96, wherein identifying the first plurality of elements of the first 3D model further comprises labeling the semantically segmented first 3D model, and wherein identifying the second plurality of elements of the second 3D model further comprises labeling the semantically segmented second 3D model.

98. The computer-readable storage medium of claim 95, wherein the first plurality of elements and the second plurality of elements are associated with a building object.

99. The computer-readable storage medium of claim 98, wherein the first plurality of elements and the second plurality of elements are associated with a structure of interest of the building object.

100. The computer-readable storage medium of claim 95, wherein the method further comprises: correlating the first plurality of elements with the second plurality of elements; wherein augmenting the first 3D model with the second 3D model is based on the correlated plurality of elements.

101. The computer-readable storage medium of claim 95, wherein the method further comprises: identifying a third plurality of elements, wherein the third plurality of elements comprises elements common to the first plurality of elements and the second plurality of elements; wherein augmenting the first 3D model with the second 3D model is based on the third plurality of elements.

102. The computer-readable storage medium of claim 95, wherein the method further comprises: identifying an aspect of an element of the first plurality of elements; identifying a corresponding aspect of a corresponding element of the second plurality of elements; wherein augmenting the first 3D model with the second 3D model comprises substantially aligning the aspect of the element of the first plurality of elements with the corresponding aspect of the corresponding element of the second plurality of elements.

103. The computer-readable storage medium of claim 102, wherein the aspect of the element of the first plurality of elements and the corresponding aspect of the corresponding element of the second plurality of elements is a plane.

104. The computer-readable storage medium of claim 67, wherein the method further comprises: identifying a first plurality of elements of the first plurality of images; and identifying a second plurality of elements of the second plurality of images.

105. The computer-readable storage medium of claim 104, wherein identifying the first plurality of elements of the first plurality of images comprises semantically segmenting each image of the first plurality of images; and wherein identifying the second plurality of elements of the second plurality of images comprises semantically segmenting each image of the second plurality of images.

106. The computer-readable storage medium of claim 105, wherein identifying the first plurality of elements of the first plurality of images further comprises labeling the semantically segmented first plurality of images, and wherein identifying the second plurality of elements of the second plurality of images further comprises labeling the semantically segmented second plurality of images.

107. The computer-readable storage medium of claim 104, wherein the first plurality of elements and the second plurality of elements are associated with a building object.

108. The computer-readable storage medium of claim 107, wherein the first plurality of elements and the second plurality of elements are associated with a structure of interest of the building object.

109. The computer-readable storage medium of claim 104, wherein the first plurality of elements and the second plurality of elements are not associated with a building object.

110. The computer-readable storage medium of claim 104, wherein the method further comprises: correlating the first plurality of elements with the second plurality of elements; wherein augmenting the first 3D model with the second 3D model is based on the correlated plurality of elements.

111. The computer-readable storage medium of claim 104, wherein the method further comprises: identifying a third plurality of elements, wherein the third plurality of elements comprises elements common to the first plurality of elements and the second plurality of elements; wherein augmenting the first 3D model with the second 3D model is based on the third plurality of elements.

112. The computer-readable storage medium of claim 104, wherein the method further comprises: identifying an aspect of an element of the first plurality of elements; identifying a corresponding aspect of a corresponding element of the second plurality of elements; wherein augmenting the first 3D model with the second 3D model comprises substantially aligning the aspect of the element of the first plurality of elements with the corresponding aspect of the corresponding element of the second plurality of elements.

113. The computer-readable storage medium of claim 112, wherein the aspect of the element of the first plurality of elements and the corresponding aspect of the corresponding element of the second plurality of elements is a plane.

114. The computer-readable storage medium of claim 67, wherein augmenting the first 3D model with the second 3D model comprises correlating the first 3D model with the second 3D model.

115. The computer-readable storage medium of claim 114, wherein the method further comprises: identifying a first plurality of elements of the first 3D model; and identifying a second plurality of elements of the second 3D model; wherein correlating the first 3D model with the second 3D model comprises assigning a confidence value to each element of the first plurality of elements and the second plurality of elements.

116. The computer-readable storage medium of claim 115, wherein assigning the confidence value to each element of the first plurality of elements and the second plurality of elements is based on co-visibility of the first plurality of elements and the second plurality of elements.

117. The computer-readable storage medium of claim 115, wherein assigning the confidence value to each element of the first plurality of elements and the second plurality of elements is based on commonality of the first plurality of elements and the second plurality of elements.

118. The computer-readable storage medium of claim 114, wherein the method further comprises: identifying a first plurality of elements of the first plurality of images; and identifying a second plurality of elements of the second plurality of images; wherein correlating the first 3D model with the second 3D model comprises assigning a confidence value to each element of the first plurality of elements and the second plurality of elements.

119. The computer-readable storage medium of claim 118, wherein assigning the confidence value to each element of the first plurality of elements and the second plurality of elements is based on co-visibility of the first plurality of elements and the second plurality of elements.

120. The computer-readable storage medium of claim 118, wherein assigning the confidence value to each element of the first plurality of elements and the second plurality of elements is based on commonality of the first plurality of elements and the second plurality of elements.

121. The computer-readable storage medium of claim 67, wherein augmenting the first 3D model with the second 3D model comprises offsetting the first 3D model from the second 3D model.

122. The computer-readable storage medium of claim 121, wherein offsetting the first 3D model from the second 3D model is based on one or more architectural elements.

123. The computer-readable storage medium of claim 122, wherein the method further comprises: matching an architectural element of the first 3D model with a corresponding architectural element of the second 3D model; wherein offsetting the first 3D model from the second 3D model is based on the matched architectural element.

124. The computer-readable storage medium of claim 122, wherein the method further comprises: matching an architectural element of the first plurality of images with a corresponding architectural element of the second plurality of images; wherein offsetting the first 3D model from the second 3D model is based on the matched architectural element.

125. The computer-readable storage medium of claim 121, wherein offsetting the first 3D model from the second 3D model is based on one or more values derived from one or more architectural elements.

126. The computer-readable storage medium of claim 125, wherein the method further comprises: matching an architectural element of the first 3D model with a corresponding architectural element of the second 3D model; substantially aligning the first 3D model with the second 3D model based on the matched architectural element; deriving a value based on the substantial alignment of the first 3D model with the second 3D model; wherein offsetting the first 3D model from the second 3D model is based on the derived value.

127. The computer-readable storage medium of claim 125, wherein the method further comprises: matching an architectural element of the first plurality of images with a corresponding architectural element of the second plurality of images; substantially aligning the first 3D model with the second 3D model based on the matched architectural element; deriving a value based on the substantial alignment of the first 3D model with the second 3D model; wherein offsetting the first 3D model from the second 3D model is based on the derived value.

128. The computer-readable storage medium of claim 67, wherein augmenting the first 3D model with the second 3D model comprises dilating the first 3D model based on the second 3D model.

129. The computer-readable storage medium of claim 67, wherein the method further comprises: deriving a scaling factor based on at least one of the first plurality of images, the second plurality of images, the first 3D model, a first coordinate system of the first 3D model, the second 3D model, or a second coordinate system of the second 3D model; and scaling at least one of the first plurality of images, the second plurality of images, the first 3D model, a first coordinate system of the first 3D model, the second 3D model, or a second coordinate system of the second 3D model based on the derived scaling factor.

130. The computer-readable storage medium of claim 67, wherein the first plurality of images comprises a first plurality of anchor poses, wherein the second plurality of images comprises a second plurality of anchor poses, and wherein augmenting the first 3D model with the second 3D model is based on anchor poses common to the first plurality of anchor poses and the second plurality of anchor poses.

131. The computer-readable storage medium of claim 67, wherein the method further comprises: selecting a first subset of images of the first plurality of images based on at least one of translation data associated with the first plurality of images or rotation data associated with the first plurality of images; selecting a second subset of images of the second plurality of images based on at least one of translation data associated with the second plurality of images or rotation data associated with the second plurality of images; wherein generating the first 3D model is based on the first subset of images; and wherein generating the second 3D model is based on the second subset of images.

132. The computer-readable storage medium of claim 67, wherein the method further comprises: calculating a first angular error of a first capture path associated with the first plurality of images; determining a suggested rotation based on the first angular error of the first capture path; and displaying the suggested rotation.

133. A system configured for augmenting 3D models, the system comprising: one or more hardware processors configured by machine-readable instructions to: receive a first plurality of images; generate a first 3D model based on the first plurality of images; receive a second plurality of images; generate a second 3D model based on the second plurality of images; and augment the first 3D model with the second 3D model.

134. The system of claim 133, wherein the first plurality of images and the second plurality of images comprise at least one of visual data or depth data.

135. The system of claim 134, wherein the visual data comprises at least one of image data or video data; and wherein the depth data comprises at least one of point clouds, line clouds, meshes, or points.

136. The system of claim 133, wherein the first plurality of images and second plurality of images are captured by one or more of a smartphone, a tablet computer, an augmented reality headset, a virtual reality headset, a drone, and an aerial platform.

137. The system of claim 133, wherein each image of the first plurality of images and the second plurality of images comprises a building object.

138. The system of claim 137, wherein each image of the first plurality of images comprises an interior of the building object; and wherein each image of the second plurality of images comprises an exterior of the building object.

139. The system of claim 133, wherein the first 3D model and the second 3D model comprise at least one of a polygon-based model or a primitive-based model.

140. The system of claim 133, wherein the first 3D model and the second 3D model correspond to a building object.

141. The system of claim 140, wherein the first 3D model corresponds to an interior of the building object; and wherein the second 3D model corresponds to an exterior of the building object.

142. The system of claim 133, wherein augmenting the first 3D model with the second 3D model is relative to a common coordinate system.

143. The system of claim 142, wherein the one or more hardware processors are further configured by machine-readable instructions to generate the common coordinate system.

144. The system of claim 143, wherein the first 3D model is associated with a first coordinate system; wherein the second 3D model is associated with a second coordinate system; and wherein generating the common coordinate system is based on the first coordinate system and the second coordinate system.

145. The system of claim 144, wherein generating the common coordinate system comprises matching the first coordinate system with the second coordinate system.

146. The system of claim 133, wherein augmenting the first 3D model with the second 3D model is based on location information associated with the first 3D model and the second 3D model.

147. The system of claim 146, wherein the location information comprises latitude and longitude information.

148. The system of claim 133, wherein the one or more hardware processors are further configured by machine-readable instructions to: identify a first plurality of sides of the first 3D model; identify a second plurality of sides of the second 3D model.

149. The system of claim 148, wherein each side of the first plurality of sides and the second plurality of sides corresponds to a side of a building object.

150. The system of claim 148, wherein augmenting the first 3D model with the second 3D model comprises substantially aligning the first plurality of sides with the second plurality of sides in a common coordinate system.

151. The system of claim 133, wherein augmenting the first 3D model with the second 3D model is based on a first outline of the first 3D model and a second outline of the second 3D model.

152. The system of claim 151, wherein the one or more hardware processors are further configured by machine-readable instructions to: generate the first outline of the first 3D model; generate the second outline of the second 3D model.

153. The system of claim 152, wherein generating the first outline of the first 3D model is based on a top-down view of the first 3D model; and wherein generating the second outline of the second 3D model is based on a top-down view of the second 3D model.

154. The system of claim 151, wherein augmenting the first 3D model with the second 3D model comprises substantially aligning the first outline of the first 3D model with the second outline of the second 3D model.

155. The system of claim 154, wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on one or more architectural elements.

156. The system of claim 155, wherein the one or more hardware processors are further configured by machine-readable instructions to: match an architectural element of the first 3D model with a corresponding architectural element of the second 3D model; wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on the matched architectural element.

157. The system of claim 155, wherein the one or more hardware processors are further configured by machine-readable instructions to: match an architectural element of the first plurality of images with a corresponding architectural element of the second plurality of images; wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on the matched architectural element.

158. The system of claim 154, wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on one or more values derived from one or more architectural elements.

159. The system of claim 158, wherein the one or more hardware processors are further configured by machine-readable instructions to: match an architectural element of the first 3D model with a corresponding architectural element of the second 3D model; substantially align the first 3D model with the second 3D model based on the matched architectural element; derive a value based on the substantial alignment of the first 3D model with the second 3D model; wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on the derived value.

160. The system of claim 158, wherein the one or more hardware processors are further configured by machine-readable instructions to: match an architectural element of the first plurality of images with a corresponding architectural element of the second plurality of images; substantially align the first 3D model with the second 3D model based on the matched architectural element; derive a value based on the substantial alignment of the first 3D model with the second 3D model; wherein substantially aligning the first outline of the first 3D model with the second outline of the second 3D model is based on the derived value.

161. The system of claim 133, wherein the one or more hardware processors are further configured by machine-readable instructions to: identify a first plurality of elements of the first 3D model; identify a second plurality of elements of the second 3D model.

162. The system of claim 161, wherein identifying the first plurality of elements of the first 3D model comprises semantically segmenting the first 3D model; and wherein identifying the second plurality of elements of the second 3D model comprises semantically segmenting the second 3D model.

163. The system of claim 162, wherein identifying the first plurality of elements of the first 3D model further comprises labeling the semantically segmented first 3D model, and wherein identifying the second plurality of elements of the second 3D model further comprises labeling the semantically segmented second 3D model.

164. The system of claim 161, wherein the first plurality of elements and the second plurality of elements are associated with a building object.

165. The system of claim 164, wherein the first plurality of elements and the second plurality of elements are associated with a structure of interest of the building object.

166. The system of claim 161, wherein the one or more hardware processors are further configured by machine-readable instructions to: correlate the first plurality of elements with the second plurality of elements; wherein augmenting the first 3D model with the second 3D model is based on the correlated plurality of elements.

167. The system of claim 161, wherein the one or more hardware processors are further configured by machine-readable instructions to: identify a third plurality of elements, wherein the third plurality of elements comprises elements common to the first plurality of elements and the second plurality of elements; wherein augmenting the first 3D model with the second 3D model is based on the third plurality of elements.

168. The system of claim 161, wherein the one or more hardware processors are further configured by machine-readable instructions to: identify an aspect of an element of the first plurality of elements; identify a corresponding aspect of a corresponding element of the second plurality of elements; wherein augmenting the first 3D model with the second 3D model comprises substantially aligning the aspect of the element of the first plurality of elements with the corresponding aspect of the corresponding element of the second plurality of elements.

169. The system of claim 168, wherein the aspect of the element of the first plurality of elements and the corresponding aspect of the corresponding element of the second plurality of elements is a plane.

170. The system of claim 133, wherein the one or more hardware processors are further configured by machine-readable instructions to: identify a first plurality of elements of the first plurality of images; identify a second plurality of elements of the second plurality of images.

171. The system of claim 170, wherein identifying the first plurality of elements of the first plurality of images comprises semantically segmenting each image of the first plurality of images; and wherein identifying the second plurality of elements of the second plurality of images comprises semantically segmenting each image of the second plurality of images.

172. The system of claim 171, wherein identifying the first plurality of elements of the first plurality of images further comprises labeling the semantically segmented first plurality of images, and wherein identifying the second plurality of elements of the second plurality of images further comprises labeling the semantically segmented second plurality of images.

173. The system of claim 170, wherein the first plurality of elements and the second plurality of elements are associated with a building object.

174. The system of claim 173, wherein the first plurality of elements and the second plurality of elements are associated with a structure of interest of the building object.

175. The system of claim 170, wherein the first plurality of elements and the second plurality of elements are not associated with a building object.

176. The system of claim 170, wherein the one or more hardware processors are further configured by machine-readable instructions to: correlate the first plurality of elements with the second plurality of elements; wherein augmenting the first 3D model with the second 3D model is based on the correlated plurality of elements.

177. The system of claim 170, wherein the one or more hardware processors are further configured by machine-readable instructions to: identify a third plurality of elements, wherein the third plurality of elements comprises elements common to the first plurality of elements and the second plurality of elements; wherein augmenting the first 3D model with the second 3D model is based on the third plurality of elements.

178. The system of claim 170, wherein the one or more hardware processors are further configured by machine-readable instructions to: identify an aspect of an element of the first plurality of elements; identify a corresponding aspect of a corresponding element of the second plurality of elements; wherein augmenting the first 3D model with the second 3D model comprises substantially aligning the aspect of the element of the first plurality of elements with the corresponding aspect of the corresponding element of the second plurality of elements.

179. The system of claim 178, wherein the aspect of the element of the first plurality of elements and the corresponding aspect of the corresponding element of the second plurality of elements is a plane.

180. The system of claim 133, wherein augmenting the first 3D model with the second 3D model comprises correlating the first 3D model with the second 3D model.

181. The system of claim 180, wherein the one or more hardware processors are further configured by machine-readable instructions to: identify a first plurality of elements of the first 3D model; identify a second plurality of elements of the second 3D model; wherein correlating the first 3D model with the second 3D model comprises assigning a confidence value to each element of the first plurality of elements and the second plurality of elements.

182. The system of claim 181, wherein assigning the confidence value to each element of the first plurality of elements and the second plurality of elements is based on co-visibility of the first plurality of elements and the second plurality of elements.

183. The system of claim 181, wherein assigning the confidence value to each element of the first plurality of elements and the second plurality of elements is based on commonality of the first plurality of elements and the second plurality of elements.

184. The system of claim 180, wherein the one or more hardware processors are further configured by machine-readable instructions to: identify a first plurality of elements of the first plurality of images; identify a second plurality of elements of the second plurality of images; wherein correlating the first 3D model with the second 3D model comprises assigning a confidence value to each element of the first plurality of elements and the second plurality of elements.

185. The system of claim 184, wherein assigning the confidence value to each element of the first plurality of elements and the second plurality of elements is based on co-visibility of the first plurality of elements and the second plurality of elements.

186. The system of claim 184, wherein assigning the confidence value to each element of the first plurality of elements and the second plurality of elements is based on commonality of the first plurality of elements and the second plurality of elements.

187. The system of claim 133, wherein augmenting the first 3D model with the second 3D model comprises offsetting the first 3D model from the second 3D model.

188. The system of claim 187, wherein offsetting the first 3D model from the second 3D model is based on one or more architectural elements.

189. The system of claim 188, wherein the one or more hardware processors are further configured by machine-readable instructions to: match an architectural element of the first 3D model with a corresponding architectural element of the second 3D model; wherein offsetting the first 3D model from the second 3D model is based on the matched architectural element.

190. The system of claim 188, wherein the one or more hardware processors are further configured by machine-readable instructions to: match an architectural element of the first plurality of images with a corresponding architectural element of the second plurality of images; wherein offsetting the first 3D model from the second 3D model is based on the matched architectural element.

191. The system of claim 187, wherein offsetting the first 3D model from the second 3D model is based on one or more values derived from one or more architectural elements.

192. The system of claim 191, wherein the one or more hardware processors are further configured by machine-readable instructions to: match an architectural element of the first 3D model with a corresponding architectural element of the second 3D model; substantially align the first 3D model with the second 3D model based on the matched architectural element; derive a value based on the substantial alignment of the first 3D model with the second 3D model; wherein offsetting the first 3D model from the second 3D model is based on the derived value.

193. The system of claim 191, wherein the one or more hardware processors are further configured by machine-readable instructions to: match an architectural element of the first plurality of images with a corresponding architectural element of the second plurality of images; substantially align the first 3D model with the second 3D model based on the matched architectural element; derive a value based on the substantial alignment of the first 3D model with the second 3D model; wherein offsetting the first 3D model from the second 3D model is based on the derived value.

194. The system of claim 133, wherein augmenting the first 3D model with the second 3D model comprises dilating the first 3D model based on the second 3D model.

195. The system of claim 133, wherein the one or more hardware processors are further configured by machine-readable instructions to: derive a scaling factor based on at least one of the first plurality of images, the second plurality of images, the first 3D model, a first coordinate system of the first 3D model, the second 3D model, or a second coordinate system of the second 3D model; and scale at least one of the first plurality of images, the second plurality of images, the first 3D model, a first coordinate system of the first 3D model, the second 3D model, or a second coordinate system of the second 3D model based on the derived scaling factor.

196. The system of claim 133, wherein the first plurality of images comprises a first plurality of anchor poses, wherein the second plurality of images comprises a second plurality of anchor poses, and wherein augmenting the first 3D model with the second 3D model is based on anchor poses common to the first plurality of anchor poses and the second plurality of anchor poses.

197. The system of claim 133, wherein the one or more hardware processors are further configured by machine-readable instructions to: select a first subset of images of the first plurality of images based on at least one of translation data associated with the first plurality of images or rotation data associated with the first plurality of images; select a second subset of images of the second plurality of images based on at least one of translation data associated with the second plurality of images or rotation data associated with the second plurality of images; wherein generating the first 3D model is based on the first subset of images; and wherein generating the second 3D model is based on the second subset of images.

198. The system of claim 133, wherein the one or more hardware processors are further configured by machine-readable instructions to: calculate a first angular error of a first capture path associated with the first plurality of images; determine a suggested rotation based on the first angular error of the first capture path; display the suggested rotation.

Description:
METHODS, STORAGE MEDIA, AND SYSTEMS FOR AUGMENTING DATA OR MODELS

CROSS-REFERENCE TO RELATED APPLICATIONS [0001] The present application claims priority to U.S. Provisional Application No. 63/219,804 filed on July 8, 2021 entitled “INTERIORS”, and U.S. Provisional Application No. 63/358,716 filed on July 6, 2022 entitled “METHODS, STORAGE MEDIA, AND SYSTEMS FOR AUGMENTING DATA OR MODELS”, which are hereby incorporated by reference in their entirety.

BACKGROUND

FIELD OF THE INVENTION

[0002] The present disclosure relates to methods, storage media, and systems for augmenting two-dimensional and/or three-dimensional data or models.

DESCRIPTION OF RELATED ART

[0003] Data, such as two-dimensional (2D) data (e.g., visual data), three-dimensional (3D) data (e.g., depth data), or both, can be captured. Models, such as 2D models (e.g., digital representations in 2D space), 3D models (e.g., digital representations in 3D space), or both, can be generated. Different data capture techniques and reconstruction techniques can result in varying degrees of inaccuracies in the data, and different modeling techniques can result in varying degrees of inaccuracies in the models. In embodiments where the models are generated based on the data, inaccuracies in the data can propagate and result in inaccuracies in the models. While data capture or scanning techniques, reconstruction techniques, and modeling techniques continue to improve, these various techniques can result in inaccuracies which limit the scope of any one data set or model.

BRIEF SUMMARY

[0004] Described herein are various methods, storage media, and systems for augmenting data, such as two-dimensional (2D) data (e.g., visual data), three-dimensional (3D) data (e.g., depth data), or both, and models, such as 2D models (e.g., digital representations in 2D space), 3D models (e.g., digital representations in 3D space), or both.

[0005] Augmenting one set of data or one model with another set of data or another model can address issues related to different capture techniques, reconstruction techniques, and modeling techniques. Augmenting one set of data or one model with another set of data or another model can be used to revise, refine, or complete the data or the models. In some embodiments, one set of data or one model can be leveraged to improve another set of data or another model.

[0006] One aspect of the present disclosure relates to a method for augmenting 3D models. The method may include receiving a first plurality of images. The method may include generating a first 3D model based on the first plurality of images. The method may include receiving a second plurality of images. The method may include generating a second 3D model based on the second plurality of images. The method may include augmenting the first 3D model with the second 3D model.

[0007] Another aspect of the present disclosure relates to a non-transient computer-readable storage medium having instructions embodied thereon, the instructions being executable by one or more processors to perform a method for augmenting 3D models. The method may include receiving a first plurality of images. The method may include generating a first 3D model based on the first plurality of images. The method may include receiving a second plurality of images. The method may include generating a second 3D model based on the second plurality of images. The method may include augmenting the first 3D model with the second 3D model.

[0008] Yet another aspect of the present disclosure relates to a system configured for augmenting 3D models. The system may include one or more hardware processors configured by machine-readable instructions. The processor(s) may be configured to receive a first plurality of images. The processor(s) may be configured to generate a first 3D model based on the first plurality of images. The processor(s) may be configured to receive a second plurality of images. The processor(s) may be configured to generate a second 3D model based on the second plurality of images. The processor(s) may be configured to augment the first 3D model with the second 3D model.

[0009] These and other features, and characteristics of the present technology, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of 'a', 'an', and 'the' include plural referents unless the context clearly dictates otherwise.

[0010] These and other embodiments, and the benefits they provide, are described more fully with reference to the figures and detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] Figure (FIG.) 1 illustrates 3D data of an interior environment, according to some embodiments.

[0012] Figures (FIGS.) 2A-2D illustrate 3D data of an interior environment, according to some embodiments.

[0013] FIG. 3A illustrates 3D data of an interior environment, according to some embodiments.

[0014] FIG. 3B illustrates visual data of a portion of the interior environment of FIG. 3A, according to some embodiments.

[0015] FIG. 4A illustrates a top-down view of a capture (e.g., scan) subprocess of a 3D reconstruction process of an exterior, according to some embodiments.

[0016] FIGS. 4B-4E illustrate images captured by a capture device at poses illustrated in FIG. 4A, according to some embodiments.

[0017] FIG. 4F illustrates a front-left view of a model, according to some embodiments.

[0018] FIG. 4G illustrates a back-right view of a model, according to some embodiments.

[0019] FIG. 5A illustrates interior 3D data augmented with an exterior 3D model, according to some embodiments.

[0020] FIG. 5B illustrates a magnified view of a portion of FIG. 5A, according to some embodiments.

[0021] FIG. 6A illustrates a top-down view of a floorplan representation generated based on 3D data, according to some embodiments.

[0022] FIG. 6B illustrates a top-down view of a floorplan representation and augmented 3D data, according to some embodiments.

[0023] FIG. 6C illustrates a perspective view of a floorplan representation and augmented 3D data, according to some embodiments.

[0024] FIG. 7A illustrates a floorplan with a capture path, according to some embodiments.

[0025] FIG. 7B illustrates a floorplan with a capture path, according to some embodiments.

[0026] FIG. 8 illustrates a block diagram of a computer system that may be used to implement the techniques described herein, according to some embodiments.

[0027] FIG. 9 illustrates a system configured for augmenting 3D models, according to some embodiments.

[0028] FIG. 10 illustrates a method for augmenting 3D models, according to some embodiments.

[0029] In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be appreciated, however, that the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present disclosure. Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

[0030] A 3D reconstruction process can use 3D capturing or scanning techniques implemented on a 3D scanner to capture or receive 3D data (sometimes referred to as “images” generally, including depth data) of an environment that is used to generate a 2D or 3D model that can be displayed. The 2D or the 3D model can be a polygon-based model (e.g., a mesh model), a primitive-based model, and the like. In some embodiments, the 3D reconstruction process can use 2D capturing or scanning techniques implemented on a 2D scanner to capture or receive 2D data (sometimes referred to as “images” generally, including visual data) of an environment that is used to generate a 2D or 3D model that can be displayed. The 2D or 3D model can be a polygon-based model (e.g., a mesh model), a primitive-based model, and the like. The 3D reconstruction process can include one or more subprocesses such as, for example, a capture subprocess, a reconstruction subprocess, a display subprocess, and the like.

[0031] Examples of 3D capturing or scanning techniques include time-of-flight, triangulation, structured light, modulated light, stereoscopic, photometric, photogrammetry, and the like. Examples of 3D data include depth data such as, for example, 3D point clouds, 3D line clouds, 3D meshes, 3D points, and the like. Examples of 3D models include mesh models (e.g., polygon models), surface models, wire-frame models, computer-aided-design (CAD) models, and the like.

[0032] Examples of 2D capturing or scanning techniques include global shutter capture, rolling shutter capture, panoramic capture, wide-angle capture (e.g., 180 degree camera capture, 360 degree camera capture, etc.), image capture, video capture, and the like. Examples of 2D data include visual data such as, for example, image data, video data, and the like.

[0033] In some embodiments, the 3D reconstruction process can capture the 3D data and the 2D data synchronously or asynchronously. In some embodiments, the 3D reconstruction process can capture data (e.g., the 3D data or the 2D data) at a fixed interval or as a function of movement of a scanner (e.g., the 3D scanner or the 2D scanner). The 3D reconstruction process can capture data based on translation thresholds, rotation thresholds, or both. For example, the 3D reconstruction process can capture data if the scanner translates more than a translation threshold, rotates more than a rotation threshold, or both.
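
By way of illustration only, and not as part of the disclosed subject matter, the following Python sketch shows one possible way to implement the translation-threshold and rotation-threshold capture decision described above; the threshold values, pose representation, and function name are assumptions introduced here for illustration.

    import numpy as np

    def should_capture(last_pose, current_pose,
                       translation_threshold=0.25,   # meters (assumed value)
                       rotation_threshold=15.0):     # degrees (assumed value)
        """Return True if the scanner has moved enough to warrant a new capture.

        Each pose is assumed to be a dict with a 3-vector 'position' and a
        3x3 rotation matrix 'rotation' expressed in a shared world frame.
        """
        # Translation since the last captured frame.
        translation = np.linalg.norm(current_pose["position"] - last_pose["position"])

        # Relative rotation angle between the two orientations.
        relative = last_pose["rotation"].T @ current_pose["rotation"]
        angle = np.degrees(np.arccos(np.clip((np.trace(relative) - 1.0) / 2.0, -1.0, 1.0)))

        return translation > translation_threshold or angle > rotation_threshold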

[0034] The 2D data, the 3D data, or both, can be captured by a smartphone, a tablet computer, an augmented reality headset, a virtual reality headset, a drone, an aerial platform, and the like, or a combination thereof. The 2D data, the 3D data, or both, can include a building object, for example an interior of the building object, an exterior of the building object, or both.

[0035] In some embodiments, the 2D data can be used to augment the 3D data. For example, the 3D data can be textured based on the 2D data.
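
As a minimal sketch of texturing 3D data based on 2D data, the following Python fragment samples a per-point color by projecting 3D points into an image with a pinhole camera model; the intrinsic matrix K, the world-to-camera pose (R, t), and the default color are assumptions introduced for illustration and are not taken from the disclosure.

    import numpy as np

    def colorize_points(points_world, image, K, R, t):
        """Sample a color for each 3D point from a 2D image (a minimal sketch).

        points_world: (N, 3) array of 3D points.
        image: (H, W, 3) RGB image.
        K: (3, 3) camera intrinsics; R, t: assumed world-to-camera rotation and
        translation. Points that fall outside the image keep a default gray.
        """
        h, w = image.shape[:2]
        colors = np.full((points_world.shape[0], 3), 128, dtype=image.dtype)

        # Transform points into the camera frame.
        points_cam = points_world @ R.T + t
        in_front = points_cam[:, 2] > 0

        # Pinhole projection to pixel coordinates (guarding the divide).
        proj = points_cam @ K.T
        z = np.where(in_front, proj[:, 2], 1.0)
        u = np.round(proj[:, 0] / z).astype(int)
        v = np.round(proj[:, 1] / z).astype(int)

        visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        colors[visible] = image[v[visible], u[visible]]
        return colors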

[0036] A model (e.g., the 3D model or the 2D model) can be a floorplan representation of the environment. The floorplan can be an envelope representation including an outline of the environment, or a detailed representation including an outline of the environment and elements such as portals (e.g., doors, windows, space-to-space openings, and the like), interior walls, fixed furniture / appliances, and the like. The floorplan representation can include measurements, labels for the different spaces and elements within the environment, and the like. Examples of labels for the different spaces include entryway, reception, foyer, living room, family room, kitchen, bedroom, bathroom, closet, hallway, corridor, staircase, balcony, terrace, and the like. Examples of labels for the different elements include refrigerator, washer/dryer, dishwasher, range, microwave, range hood, wall oven, cooktop, toilet, sink, bath, exhaust fans, countertops, cabinets, and the like.

[0037] Limitations of 3D capturing or scanning techniques can cause lines that are straight in the environment to appear distorted in the 3D data. As a distance between the 3D scanner and a surface in the environment increases, the likelihood of distortion artifacts, such as wavy, broken, or disjointed geometry, in the 3D data increases which can lead to an inaccurate 3D model. The presence and magnitude of the distortion artifacts can depend on the 3D capturing or scanning techniques implemented on the 3D scanner.

[0038] FIG. 1 illustrates 3D data of an interior environment, according to some embodiments. Window frame artifact 1002 and fridge artifact 1004 are examples of wavy, broken, or disjointed geometry due to sensor drift, false positives in feature matching, or noisy scene information.

[0039] FIGS. 2A-2D illustrate 3D data of an interior environment, according to some embodiments. Wall artifact 2002 and wall artifact 2014 are examples of wavy, broken, or disjointed geometry due to sensor drift, false positives in feature matching, or noisy scene information.

[0040] The likelihood of wavy, broken, or disjointed geometry in the 3D data can be mitigated by decreasing the distance between the 3D scanner and a surface in the environment. Decreasing the distance between the 3D scanner and a surface in the environment may be difficult in certain circumstances. For example, in an environment with a high vaulted ceiling, it may not be possible to decrease the distance between the 3D scanner and the ceiling as the 3D scanner may not be able to get close to the ceiling.

[0041] The environment can include potentially problematic surfaces, such as reflective surfaces, dark surfaces, and clear or transparent surfaces, which can lead to artifacts, such as duplicative elements, missing data referred to as holes, or additional data, in the 3D data. Examples of reflective surfaces include mirrors, and the like. Examples of dark surfaces include television screens, dark tabletops or countertops, and the like. Examples of clear surfaces include glass, clear plastics, glass tabletops or countertops, and the like.

[0042] Referring briefly to FIGS. 2A-2D, first mirror 2008 and second mirror 2012 are examples of reflective surfaces that lead to duplicative elements in the 3D data referred to as first mirror artifact 2006 and second mirror artifact 2010, respectively.

[0043] Manually adjusting settings of the 3D scanner to take into account the potentially problematic surfaces before or during 3D data capture, manually adjusting the 3D scanner’s pose (e.g., position and orientation) relative to the potentially problematic surfaces during 3D data capture, manually identifying the potentially problematic surfaces in the environment or in the 3D data, or manually identifying the artifacts in the 3D data caused by the potentially problematic surfaces can be an indirect, cumbersome, or resource-intensive process.

[0044] 3D capturing or scanning techniques that do not observe all portions of all surfaces of an environment can lead to artifacts, such as missing data referred to as holes, in the 3D data at the surfaces or portions thereof that are not observed.

[0045] Referring briefly to FIG. 1, hole 1006 is an example of an area where there is no 3D data from capturing or scanning. Referring briefly to FIGS. 2A-2D, hole 2004 is an example of an area where there is no 3D data from capturing or scanning.

[0046] Known hole filling techniques may work well for holes that are mostly flat but may not work as well for holes that have an irregular shape or curvature. Regardless of situations in which they may or may not work well, known hole filling techniques can be resource intensive or computationally expensive.
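
For context, the following Python sketch shows the kind of simple, flat-hole fill alluded to above: a triangle fan anchored at the centroid of an ordered hole boundary. It is illustrative only; the assumption of an ordered, roughly planar boundary is exactly why such techniques work less well for irregular or curved holes.

    import numpy as np

    def fill_planar_hole(boundary):
        """Fill a roughly planar hole with a triangle fan (a minimal sketch).

        boundary: (N, 3) array of hole-boundary vertices, assumed to be ordered
        around the hole. Returns (new_vertex, triangles), where triangle indices
        0..N-1 refer to the boundary vertices and index N refers to the new
        centroid vertex.
        """
        n = len(boundary)
        centroid = boundary.mean(axis=0)

        # For a mostly flat hole the centroid stays close to the surface, which
        # is why this kind of fill handles irregular or curved holes poorly.
        triangles = np.array([(i, (i + 1) % n, n) for i in range(n)])
        return centroid, triangles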

[0047] FIG. 3A illustrates 3D data of an interior environment, according to some embodiments. FIG. 3B illustrates visual data of a portion of the interior environment of FIG. 3A, according to some embodiments. Visual data in FIG. 3B indicates that 3004 should be depicted as a solid wall; however, the reconstructed 3D data in FIG. 3A indicates that 3002, which is the same portion as 3004 of FIG. 3B, is a void. 3D reconstruction of the interior environment based on the 3D data of the interior environment illustrated in FIG. 3A would result in a model including a void, whereas 3D reconstruction of the interior environment based on the visual data in FIG. 3B would result in a wall. This is an example of different inputs producing different outputs.

[0048] FIG. 4A illustrates a top-down view of a capture (e.g., scan) process or phase of a 3D reconstruction process of an exterior, according to some embodiments. Structure 4000 is captured by a capture device at poses 4002-4008. FIGS. 4B-4E illustrate images captured by the capture device at poses 4002-4008, respectively. As illustrated in FIG. 4A, the capture device captures structure 4000 from the left and the front.

[0049] In some embodiments, there may be no images captured by the capture device from the right and the back of structure 4000 because those portions are simply not captured, inaccessible, occluded by elements such as foliage or vehicles, adjacent to other structures, and the like.

[0050] The images illustrated in FIGS. 4B-4E captured by the capture device are used to generate 3D model 4010 illustrated in FIGS. 4F-4G. Since the capture device captured images of structure 4000 from the left and the front, model 4010 constructed from the captured images will be complete from the left and the front. Since the capture device did not capture images of structure 4000 from the right and the back, model 4010 constructed from the captured images will be incomplete from the right and the back.

[0051] FIG. 4F illustrates a front-left view of model 4010 and FIG. 4G illustrates a back-right view of model 4010. Modeled portions 4012 are portions of 3D model 4010 that are modeled based on the images captured by the capture device from the left and the front of structure 4000. Unmodeled portions 4014 are portions of 3D model 4010 that are unmodeled as there are no images captured by the capture device from the right and the back of structure 4000. In some embodiments, for example as illustrated in FIG. 4G, unmodeled portions 4014 are predicted geometries of surfaces.

[0052] 3D capturing or scanning techniques can be prone to errors such as tracking error and drift. Tracking error can manifest when a scanner (e.g., 3D scanner or 2D scanner) implementing capturing or scanning techniques (e.g., 3D scanning techniques or 2D scanning techniques) loses track of its location in the environment. For example, the scanner can lose track of its location in an environment that lacks features, such as in a hallway. All sensors produce measurement errors. The measurement errors can be amplified in capturing or scanning techniques that rely on previous sensor values to determine current sensor values. Drift is the deviation of sensor values over time due to accumulated measurement errors.

[0053] Errors such as tracking error and drift can be minimized by capturing or scanning the environment one space at a time and combining the scans of each space into an aggregate scan that represents the environment. One space at a time capturing or scanning may minimize errors such as tracking error and drift, but may not maintain the relationship between the spaces and thus lead to an inaccurate aggregate scan.

[0054] Augmenting one set of data (e.g., 3D data or 2D data) or one model (e.g., 3D model or 2D model) with another set of data (e.g., 3D data or 2D data) or another model (e.g., 3D model or 2D model) can address some of the aforementioned issues. Augmenting one set of data or one model with another set of data or another model can be used to revise, refine, or complete, the data or the models. The disclosure primarily relates to augmenting 3D data with a 3D model. One of ordinary skill in the art will appreciate that the principles disclosed herein can apply to various other combinations of augmentations between 2D data, 2D models, 3D data, and 3D models.

[0055] In some examples, 3D data of an interior environment can be augmented with 3D data of an exterior environment, 3D data of an interior environment can be augmented with a 3D model of an exterior environment, a 3D model of an interior environment can be augmented with 3D data of an exterior environment, a 3D model of an interior environment can be augmented with a 3D model of an exterior environment, and the like. One of ordinary skill in the art will appreciate various other combinations of augmentations between 2D data, 2D models, 3D data, and 3D models.

[0056] Augmenting one set of data or one model with another set of data or another model can include correlating or mapping, aligning, deforming, scaling, cropping, hole filling (e.g., completing), and the like. In some embodiments, augmenting one set of data or model with another set of data or another model includes solving an optimization problem that includes finding the optimal solution from all feasible or possible solutions, for example given one or more constraints.

[0057] FIG. 5A illustrates interior 3D data 5002 augmented with exterior 3D model 5004, according to some embodiments. FIG. 5B illustrates a magnified view of a portion of FIG. 5A, according to some embodiments. In some embodiments, interior 3D data 5002 and exterior 3D model 5004 can be captured or generated using a single 3D reconstruction process. In some embodiments, interior 3D data 5002 and exterior 3D model 5004 can be captured or generated using multiple, separate 3D reconstruction processes. In one example, interior 3D data 5002 can be captured using one 3D reconstruction process and exterior 3D model 5004 can be generated using another 3D reconstruction process.

[0058] Although the example illustrated in FIGS. 5A-5B and the disclosure herein is in relation to interior 3D data and an exterior 3D model, one of ordinary skill in the art will appreciate the principles described herein apply to other configurations (e.g., 3D data and 3D data, 3D data and 3D model, 3D model to 3D model, etc.).

[0059] Interior 3D data 5002 and exterior 3D model 5004 include one or more elements. In some embodiments, the elements are associated with a building object. In some embodiments, the elements are associated with a structure of interest, for example of the building object. Examples of elements associated with a structure of interest include portals (e.g., doors, windows, openings), interior walls, exterior walls, surfaces of the structure, and the like. In some embodiments, the elements are not associated with a structure of interest, for example of the building object. Examples of elements not associated with a structure of interest include vehicles, utility poles, trees, foliage, other structures, and the like, that are not associated with the building object.

[0060] Elements of interior 3D data 5002, exterior 3D model 5004, or both can be identified. Identifying the elements can be a manual, semi-automatic, or fully automatic process. Identifying the elements can include semantic segmentation and labeling or object recognition.

[0061] Interior 3D data 5002, or portions thereof, can be augmented with exterior 3D model 5004, or portions thereof. Augmenting interior 3D data 5002 with exterior 3D model 5004 can include correlating or mapping, aligning, deforming, scaling, cropping, hole filling (e.g., completing), and the like.

[0062] In some embodiments, the augmenting can include generating a common coordinate system for interior 3D data 5002 and exterior 3D model 5004. Interior 3D data 5002 can have an associated coordinate system (e.g., an interior coordinate system) and exterior 3D model 5004 can have an associated coordinate system (e.g., an exterior coordinate system). The common coordinate system can be generated based on the interior coordinate system and the exterior coordinate system. In some embodiments, the common coordinate system can be generated by matching the interior coordinate system with the exterior coordinate system, or vice versa.
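
One standard way to relate two coordinate systems, sketched below in Python for illustration only, is to estimate a rigid transform from corresponding points (for example, corners of an element identified in both interior 3D data 5002 and exterior 3D model 5004) using a Kabsch-style least-squares fit; the function name and point layout are assumptions, and this is not necessarily the technique used in any particular embodiment.

    import numpy as np

    def rigid_transform(interior_pts, exterior_pts):
        """Estimate R, t mapping interior coordinates onto exterior coordinates.

        interior_pts, exterior_pts: (N, 3) arrays of corresponding points, e.g.
        corners of the same door or window identified in both data sets.
        Uses the Kabsch least-squares solution.
        """
        ci = interior_pts.mean(axis=0)
        ce = exterior_pts.mean(axis=0)

        # Cross-covariance of the centered correspondences.
        H = (interior_pts - ci).T @ (exterior_pts - ce)
        U, _, Vt = np.linalg.svd(H)

        # Guard against a reflection in the least-squares solution.
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = ce - R @ ci
        return R, t

    # A point p_interior expressed in the interior frame can then be mapped into
    # the common (exterior) frame as: p_common = R @ p_interior + t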

[0063] In some embodiments, the augmenting can be based on location information associated with interior 3D data 5002 and exterior 3D model 5004. Examples of location information include latitude, longitude, elevation, and the like. Interior 3D data 5002 can be augmented with exterior 3D model 5004 relative to a common coordinate system based at least in part on location information.

[0064] In some embodiments, the augmenting can be based on one or more sides associated with interior 3D data 5002 and exterior 3D model 5004, where the sides correspond to the sides of the underlying building structure. For example, interior 3D data 5002 can have a front side and exterior 3D model 5004 can have a front side. In these embodiments, interior 3D data 5002 and exterior 3D model 5004 can be augmented by substantially aligning the front side of interior 3D data 5002 and the front side of exterior 3D model 5004 in a common coordinate system. In some embodiments, the sides can be established or identified based on identified elements and their generally associated sides. In some examples, a building structure may have several exterior doors, where a hinged door may be associated with a front side, and where a sliding door may be associated with a back side. In some examples, a building structure may have several exterior windows, where a bay window may be associated with a front side.

[0065] In some embodiments, the augmenting can be based on an outline of interior 3D data 5002 and an outline of exterior 3D model 5004. In these embodiments, the outline of interior 3D data 5002 is substantially aligned with the outline of exterior 3D model 5004, for example, based on one or more common architectural elements such as windows, and preferably those with industry-standard attributes, such as doors, or based on one or more values derived from the architectural elements. In some examples, a door of the outline of interior 3D data 5002 can be matched to a corresponding door of the outline of exterior 3D model 5004, and the outline of interior 3D data 5002 can be substantially aligned with the outline of exterior 3D model 5004 based on the matched door. In some examples, a door of interior 3D data 5002 can be matched to a corresponding door of exterior 3D model 5004, interior 3D data 5002 can be substantially aligned with exterior 3D model 5004 based on the matched doors, an exterior wall thickness (i.e., the thickness of the wall between interior 3D data 5002 and exterior 3D model 5004) can be derived based on the substantial alignment of interior 3D data 5002 with exterior 3D model 5004, and the outline of interior 3D data 5002 can be substantially aligned with the outline of exterior 3D model 5004 based on the derived exterior wall thickness.
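
A minimal Python sketch of the outline alignment described above follows, assuming the matched door is represented by its two endpoints in each outline's 2D frame; the names and data layout are illustrative assumptions rather than part of the disclosure.

    import numpy as np

    def align_outline_by_door(interior_outline, interior_door, exterior_door):
        """Align an interior outline to an exterior outline using a matched door.

        interior_outline: (N, 2) outline vertices in the interior frame.
        interior_door, exterior_door: (2, 2) arrays holding the two endpoints of
        the same door in the interior and exterior frames, respectively.
        Returns the interior outline expressed in the exterior frame.
        """
        def direction(segment):
            d = segment[1] - segment[0]
            return d / np.linalg.norm(d)

        di, de = direction(interior_door), direction(exterior_door)

        # 2D rotation taking the interior door direction onto the exterior one.
        angle = np.arctan2(de[1], de[0]) - np.arctan2(di[1], di[0])
        c, s = np.cos(angle), np.sin(angle)
        R = np.array([[c, -s], [s, c]])

        # Translate so the door midpoints coincide.
        t = exterior_door.mean(axis=0) - R @ interior_door.mean(axis=0)
        return interior_outline @ R.T + t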

[0066] In some embodiments, one or more architectural elements are substantially aligned according to axis alignment between the architectural elements of the two data sources. In some embodiments, this occurs after generating a common coordinate system. In some embodiments, axis alignment of matching architectural elements, or features thereof, generates the common coordinate system. For example, a window of 3D data 5002 having a planar orientation in an x-y plane is substantially aligned, along its borders, with a window of 3D model 5004 having a matching planar orientation according to the axes' orientations. For planar architectural elements, in some examples this means two axes of the matching architectural elements are at least parallel to each other, with corresponding points or features of the architectural element falling on the third orthogonal axis. For example, the x-axis of a window in 3D data 5002 is parallel to the x-axis of a window in 3D model 5004, and the y-axis of a window in 3D data 5002 is parallel to the y-axis of a window in 3D model 5004, with the corners of the window each falling on the z-axis. Though architectural elements may substantially align with one another in this way, they are unlikely to perfectly overlay one another due to distal surface separation. While axis alignment is discussed, point alignment or generation of lines between points may follow similar steps. In some embodiments, the one or more substantially aligned architectural elements are orthogonal to one another.

[0067] An outline of interior 3D data 5002 can be generated based on a top-down view of interior 3D data 5002. Referring briefly to FIG. 6A, it illustrates a top-down view of interior 3D data, according to some embodiments. Interior 3D data 5002 of FIGS. 5A-5B is of a different interior than the interior 3D data of FIGS. 6A-6C. An outline of exterior 3D model 5004 can be generated based on a top-down view of exterior 3D model 5004.
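
As a simple illustration of generating an outline from a top-down view, the following Python sketch projects 3D points onto the ground plane and returns their convex hull; treating the vertical axis as Z and using a convex hull (rather than a concave footprint) are simplifying assumptions made here for illustration.

    import numpy as np

    def topdown_outline(points_3d):
        """Approximate an outline from a top-down view of 3D data (a sketch).

        Drops the vertical (assumed Z) axis and returns the convex hull of the
        projected points, ordered counterclockwise, via the monotone chain
        algorithm. A concave footprint would need an alpha shape or similar,
        which is omitted here.
        """
        pts = np.unique(points_3d[:, :2], axis=0)  # unique rows, lexicographically sorted

        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def half_hull(seq):
            hull = []
            for p in seq:
                # Pop points that would make a clockwise (or collinear) turn.
                while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                    hull.pop()
                hull.append(p)
            return hull

        lower = half_hull(pts)
        upper = half_hull(pts[::-1])
        return np.array(lower[:-1] + upper[:-1])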

[0068] In some embodiments, the augmenting can be based on one or more elements common to interior 3D data 5002 and exterior 3D model 5004. In some embodiments, the elements are associated with a building object. In some embodiments, the elements are associated with a structure of interest, for example of the building object. For example, interior 3D data 5002 can be augmented with exterior 3D model 5004 based on doors or windows that are common to the building object. In some embodiments, the elements are not associated with a structure of interest, for example of the building object. For example, interior 3D data 5002 can be augmented with exterior 3D model 5004 based on vehicles, utility poles, trees, foliage, other structures, and the like, that are not associated with the building object.

[0069] In some embodiments, the augmenting based on elements common to interior 3D data 5002 and exterior 3D model 5004 can include identifying an aspect (e.g., plane) of an element of interior 3D data 5002 (such as by semantic segmentation or object recognition), identifying corresponding aspect (e.g., plane) of a corresponding element of exterior 3D model 5004 (such as by semantic segmentation or object recognition), and substantially aligning the aspect of the element of interior 3D data 5002 with the corresponding aspect of the corresponding element of exterior 3D model 5004. In some embodiments, substantially aligning the aspect of the element of interior 3D data 5002 with the corresponding aspect of the corresponding element of exterior 3D data 5004 can be based on one or more assumptions. Examples of assumptions include door thickness, window thickness, wall thickness, and other anchoring aspects common to interior 3D data 5002 and exterior 3D model 5004.

[0070] In some embodiments, the augmenting can be used to compensate for limitations of different capturing or scanning techniques, different reconstruction techniques, different modeling techniques, or a combination thereof. In some embodiments, the capturing or scanning technique, the reconstruction technique, the modeling technique, or a combination thereof, related to a first data capture may cause lines that are straight in the environment to appear wavy, broken, or disjointed in the resultant reconstructed output (such as interior 3D data 5002); however, the capturing or scanning technique, the reconstruction technique, the modeling technique, or a combination thereof, related to a second data capture may cause the corresponding lines in the resultant modeled output (such as exterior 3D model 5004) to appear straight. For example, an interior 3D model generated from interior 3D data 5002, which can be a mesh input, may include lines that are wavy, broken, or disjointed; however, exterior 3D model 5004, which can be generated from primitive-based modeling, may include straight lines. In some embodiments, the augmenting can be used to compensate for potentially problematic surfaces in the environment. For example, potentially problematic surfaces in an interior portion of the environment can lead to duplicative elements, missing data, or additional data in interior 3D data 5002; however, an exterior portion of the environment may not include the same potentially problematic surfaces and therefore exterior 3D model 5004 may not include the same duplicative elements, missing data, or additional data. Augmenting interior 3D data 5002 with exterior 3D model 5004 can correct the distortions in interior 3D data 5002 based on exterior 3D model 5004.

[0071] In some embodiments, the correlating or mapping can include assigning confidence values to elements in interior 3D data 5002, exterior 3D model 5004, or both. Elements can include points, surfaces, and the like. In some embodiments, high confidence values can be assigned to elements that are common to both interior 3D data 5002 and exterior 3D model 5004, and, in some embodiments, low confidence values can be assigned to all other elements. In some embodiments, the confidence values are based on co-visibility of the elements in interior 3D data 5002 and exterior 3D model 5004. For example, high confidence values can be assigned to doors and windows that are visible in both interior 3D data 5002 and exterior 3D model 5004. In some embodiments, the confidence values are based on commonality of the elements in interior 3D data 5002 and exterior 3D model 5004, which may not necessarily be co-visible in interior 3D data 5002 and exterior 3D model 5004. For example, high confidence values can be assigned to peripheral surfaces (e.g., interior walls) of interior 3D data 5002 that are common to surfaces (e.g., exterior walls) of exterior 3D model 5004.
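
A minimal sketch of such confidence assignment is shown below in Python; the element identifiers, label sets, and the high/low values are assumptions introduced for illustration only.

    def assign_confidence(elements, interior_labels, exterior_labels,
                          high=1.0, low=0.1):
        """Assign confidence values based on co-visibility (a minimal sketch).

        elements: iterable of element identifiers (e.g., 'door_1', 'window_2').
        interior_labels / exterior_labels: sets of element identifiers recognized
        in the interior 3D data and the exterior 3D model, respectively.
        Elements visible in both sources receive the high confidence value; all
        other elements receive the low confidence value.
        """
        return {
            element: high if element in interior_labels and element in exterior_labels
            else low
            for element in elements
        }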

[0072] In some embodiments, the augmenting can include identifying, correlating, and substantially aligning common elements in interior 3D data 5002 and exterior 3D model 5004, and revising interior 3D data 5002, exterior 3D model 5004, or both, based on the correlation or alignment.

[0073] In one example referring to FIGS. 5A and 5B, interior 3D data 5002 can include a portion of window 5008 and headboard 5010 directly below the portion of window 5008, and exterior 3D model 5004 can include all of window 5008. Window 5008 in interior 3D data 5002 and window 5008 in exterior 3D model 5004 can be correlated and aligned, and the portion of window 5008 in interior 3D data 5002 can be revised (e.g., filled in) based on window 5008 in exterior 3D model 5004.

[0074] Referring briefly to FIGS. 4A-4G, as described herein, unmodeled portions 4014 are predicted geometries of surfaces. In this example, unmodeled portions 4014 are predicted geometries based on the images illustrated in FIGS. 4B-4E. In some embodiments, unmodeled portions 4014 are predicted geometries based on the images illustrated in FIGS. 4B-4E in further view of interior 3D data, an interior 3D model, or both, of structure 4000. For example, elements such as windows and doors that are common to both (exterior) model 4010 and the interior model can be used to correlate and align model 4010 and the interior model. The common elements may be those that are at the front and the left of structure 4000. Model 4010 can be revised based on the correlation or alignment of the common elements. For example, unmodeled portions 4014 can be generated or filled in, for example with windows, doors, and the like, that are in the interior model.

[0075] Interior 3D data 5002 can be offset from exterior 3D model 5004, for example, based on one or more common architectural elements such as windows, and preferably those with industry-standard attributes, such as doors, or based on one or more values derived from the architectural elements. In some examples, a door of interior 3D data 5002 can be matched to a corresponding door of exterior 3D model 5004, and interior 3D data 5002 can be offset from exterior 3D model 5004 based on an assumed thickness of the matched door (as doors are typically set to manufacturer and industry standards for consistency). In some examples, a door of interior 3D data 5002 can be matched to a corresponding door of exterior 3D model 5004, interior 3D data 5002 can be substantially aligned with exterior 3D model 5004 based on the matched doors, an exterior wall thickness (i.e., the thickness of the wall between interior 3D data 5002 and exterior 3D model 5004) can be derived based on the substantial alignment of interior 3D data 5002 with exterior 3D model 5004, and interior 3D data 5002 can be offset from exterior 3D model 5004 based on the derived exterior wall thickness. Wall offset 5006 is an example of an offset of interior 3D data 5002 relative to exterior 3D model 5004.
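
The following Python sketch illustrates one plausible formulation of deriving and applying such an offset from a matched, substantially aligned door; the point layout, the outward wall normal, and the function names are assumptions introduced for illustration.

    import numpy as np

    def derive_wall_offset(interior_door_pts, exterior_door_pts, exterior_normal):
        """Derive an exterior wall thickness from a matched, aligned door.

        interior_door_pts / exterior_door_pts: (N, 3) points on the same door as
        observed in the (already substantially aligned) interior data and the
        exterior model. exterior_normal: unit outward normal of the exterior wall.
        Returns the signed offset along the normal (the derived wall thickness).
        """
        gap = exterior_door_pts.mean(axis=0) - interior_door_pts.mean(axis=0)
        return float(gap @ exterior_normal)

    def apply_offset(interior_points, exterior_normal, thickness):
        """Offset interior 3D data inward from the exterior model by the derived
        wall thickness (compare wall offset 5006 in FIG. 5B)."""
        return interior_points - thickness * exterior_normal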

[0076] In some embodiments, interior 3D data 5002 can be captured following a one space at a time approach in which each space (e.g., room) is captured or scanned one at a time and the scans of each space are combined into an aggregate scan that represents the environment. As mentioned above, capturing or scanning in this way may not maintain the relationship between the spaces. One way to reintroduce the relationship between the spaces in interior 3D data 5002 can be by leveraging exterior 3D model 5004. In some embodiments, the spaces in interior 3D data 5002 can be pulled apart or dilated based on exterior 3D model 5004 and, for example, aligning one or more common architectural elements. Pulling apart or dilating interior 3D data 5002 based on exterior 3D model 5004 in this manner can introduce interior wall offsets that are not present in interior 3D data 5002 at the time of capture/aggregation.

[0077] In some embodiments, interior 3D data 5002, exterior 3D model 5004, both, or portions thereof, a coordinate system associated with interior 3D data 5002 (e.g., an interior coordinate system), a coordinate system associated with exterior 3D model 5004 (e.g., exterior coordinate system), or both, or a combination thereof, can be scaled based on one or more derived scaling factors. In some embodiments, an interior scaling factor can be derived from interior 3D data 5002, interior coordinate system, or both, and interior 3D data 5002, interior coordinate system, exterior 3D model 5004, exterior coordinate system, or a combination thereof can be scaled based on the derived interior scaling factor. Similarly, in some embodiments, an exterior scaling factor can be derived from exterior 3D model 5004, exterior coordinate system, or both, and exterior 3D model 5004, exterior coordinate system, interior 3D data 5002, interior coordinate system, or a combination thereof can be scaled based on the derived exterior scaling factor. In some embodiments, an interior scaling factor can be derived from interior 3D data 5002, interior coordinate system, or both, interior 3D data 5002, interior coordinate system, or both can be scaled based on the derived interior scaling factor, an exterior scaling factor can be derived based on the interior scaling factor, the scaled interior 3D data 5002, the scaled interior coordinate system, or a combination thereof, and exterior 3D model, exterior coordinate system, or both can be scaled based on the derived exterior scaling factor. Similarly, in some embodiments, an exterior scaling factor can be derived from exterior 3D model 5004, exterior coordinate system, or both, exterior 3D model 5004, exterior coordinate system, or both can be scaled based on the derived exterior scaling factor, an interior scaling factor can be derived from the exterior scaling factor, the scaled exterior 3D model 5004, the scaled exterior coordinate system, or both, and interior 3D data 5002, interior coordinate system, or both can be scaled based on the derived interior scaling factor.

[0078] In some embodiments, a quality metric, a confidence value, or both, can be derived for and associated with interior 3D data 5002 and exterior 3D model 5004. The quality metric or confidence value can be based on the capturing or scanning technique, the reconstruction technique, the modeling technique, or a combination thereof. Different capturing or scanning techniques, reconstruction techniques, modeling techniques, or combinations thereof, can introduce different artifacts which can contribute to the quality metric or confidence value. In some examples, a visual data based capturing or scanning technique may have a low quality metric or confidence value if the visual data is blurry, for example due to motion blur. In some examples, depth data based capturing or scanning techniques may have a low quality metric or confidence value if the depth data includes artifacts, for example due to reflective surfaces, dark surfaces, or clear or transparent surfaces. In some examples, visual data, depth data, or both from a ground-level imager may have a relatively high quality metric or high confidence value, and visual data, depth data, or both, from an aerial imager may have a relatively low quality metric or a low confidence value. In these examples, the quality metric or the confidence value may be a function of distance from imager to subject.
In some examples, a 2D model such as an architectural plan may have a relatively high quality metric or high confidence value and a 2D model such as a floor plan generated from visual data, depth data, or both, may have a relatively low quality metric or a low confidence value. In some examples, a 3D model generated from an architectural plan may have a relatively high quality metric or high confidence value and a 3D model generated from visual data, depth data, or both, may have a relatively low quality metric or low confidence value. In these embodiments, the data/model with a higher quality metric or confidence value can be used as the base data/model. For example, if interior 3D data 5002 has a higher quality metric or confidence value, interior 3D data 5002 can be the base data/model and in this example, an interior scaling factor can be derived from interior 3D data 5002, interior 3D data 5002 can be scaled based on the derived interior scaling factor, an exterior scaling factor can be derived based on the interior scaling factor, and exterior 3D model can be scaled based on the derived exterior scaling factor. Similarly, if exterior 3D model 5004 has a higher quality metric or confidence value, exterior 3D model 5004 can be the base data/model and in this example, an exterior scaling factor can be derived from exterior 3D model 5004, exterior 3D model 5004 can be scaled based on the derived exterior scaling factor, an interior scaling factor can be derived from the exterior scaling factor, and interior 3D data 5002 can be scaled based on the derived interior scaling factor.

[0079] In some embodiments, deriving one scaling factor from another (e.g., deriving an exterior scaling factor from an interior scaling factor or deriving an interior scaling factor from an exterior scaling factor) can include calculating a conversion factor to be applied to one scaling factor to derive another. In one example, deriving an exterior scaling factor from an interior scaling factor can include calculating a conversion factor to be applied to the interior scaling factor to derive the exterior scaling factor. Similarly, in one example, deriving an interior scaling factor from an exterior scaling factor can include calculating a conversion factor to be applied to the exterior scaling factor to derive the interior scaling factor.

[0080] In some embodiments, deriving one scaling factor from another can be based on common elements. For example, an interior scaling factor can be derived for interior 3D data 5002, interior 3D data 5002 can be scaled based on the interior scaling factor, an exterior scaling factor can be derived from the interior scaling factor based on window 5008 which is common to both interior 3D data 5002 and exterior 3D model 5004, and exterior 3D model 5004 can be scaled based on the exterior scaling factor. Deriving the exterior scaling factor from the interior scaling factor based on window 5008 which is common to both interior 3D data 5002 and exterior 3D model 5004 can include scaling window 5008 of exterior 3D model 5004 until its dimensions match that of window 5008 of the scaled interior 3D data 5002.
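
By way of illustration, the following Python sketch derives an exterior scaling factor from an interior scaling factor using a window common to both data sets, as described above; the corner ordering and the averaging of the width and height ratios are assumptions made for this sketch only.

    import numpy as np

    def window_dimensions(window_corners):
        """Width and height of a window from its four corners, assumed to be
        ordered (e.g., clockwise starting at the top-left corner)."""
        corners = np.asarray(window_corners, dtype=float)
        width = np.linalg.norm(corners[1] - corners[0])
        height = np.linalg.norm(corners[2] - corners[1])
        return np.array([width, height])

    def exterior_scaling_factor(interior_scale, interior_window, exterior_window):
        """Derive an exterior scaling factor from an interior scaling factor using
        a window common to both data sets (a sketch of the approach above).

        The exterior model is scaled so that its copy of the window matches the
        dimensions of the window in the already-scaled interior data.
        """
        scaled_interior_dims = interior_scale * window_dimensions(interior_window)
        exterior_dims = window_dimensions(exterior_window)
        # Average the per-axis ratios; width and height should agree closely.
        return float(np.mean(scaled_interior_dims / exterior_dims))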

[0081] In some embodiments, deriving one scaling factor from another can be based on one or more industry standards. For example, an interior scaling factor can be derived from interior 3D data 5002, interior 3D data 5002 can be scaled based on the interior scaling factor, an exterior scaling factor can be derived from the interior scaling factor such that exterior 3D model 5004 scaled based on the exterior scaling factor satisfies an industry standard exterior wall width/depth, and exterior 3D model 5004 can be scaled based on the exterior scaling factor.

[0082] In some embodiments, interior anchor poses for interior 3D data 5002 and exterior anchor poses for exterior 3D model 5004 are determined. A set of common anchor poses including anchor poses that are common to the interior anchor poses and the exterior anchor poses is determined.

[0083] As described herein, the 3D reconstruction process can include one or more subprocesses such as, for example, a reconstruction subprocess. The reconstruction subprocess can be manual, semi-automatic, or fully automatic. One or more tools may be used, for example by a human, in the reconstruction subprocess.

[0084] One example tool is an illuminating cursor. The illuminating cursor can be used to identify rooms or areas in 3D data. FIG. 6A illustrates a top-down view of a floorplan representation generated based on 3D data (sometimes referred to as raw, unprocessed, or unstructured 3D data), according to some embodiments. The floorplan representation includes first bedroom 6002A, second bedroom 6002B, first bathroom 6004A, second bathroom 6004B, kitchen 6006, dining area 6008, living room 6010, and home office 6012.

[0085] FIG. 6B illustrates a top-down view of a floorplan representation and augmented 3D data, according to some embodiments. FIG. 6C illustrates a perspective view of a floorplan representation and augmented 3D data, according to some embodiments. In some embodiments, augmentation of the 3D data illustrated in FIG. 6A is a function of distance from cursor 6020. In some embodiments, cursor 6020 has a 3D position (X, Y, Z). Rays can be cast in all directions from a position of cursor 6020. In some embodiments, the rays that are cast from the position of cursor 6020 can be of a predetermined length. In some embodiments, the first 3D data that a ray intersects can be augmented. In other words, if rays cast from cursor 6020 intersect 3D data, then that 3D data can be augmented. In some embodiments, the 3D data that is not the first 3D data can also be augmented. The augmentation can include, for example, brightness, opacity, and the like. Augmenting the 3D data in this manner can be a useful tool in assisting a human to identify and label the 3D data.
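
A simplified Python sketch of this kind of cursor-driven augmentation follows; rather than explicitly casting rays, it scales per-point brightness by proximity to the cursor, which reflects the distance-based behavior described above. The radius and color conventions are assumptions introduced for illustration.

    import numpy as np

    def augment_brightness(points, colors, cursor, radius=2.0):
        """Brighten 3D data near an illuminating cursor (a simplified sketch).

        points: (N, 3) point positions; colors: (N, 3) RGB values in [0, 255];
        cursor: 3-vector (X, Y, Z) cursor position; radius: assumed maximum
        illumination distance.
        """
        distances = np.linalg.norm(points - cursor, axis=1)
        # 1.0 at the cursor, fading to 0.0 at the radius and beyond.
        weights = np.clip(1.0 - distances / radius, 0.0, 1.0)

        # Blend each color toward white in proportion to its weight.
        brightened = colors + weights[:, None] * (255 - colors)
        return brightened.astype(np.uint8)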

[0086] An IMU can include, among other components, one or more gyroscopes. A gyroscope measures rotation about a known point. Gyroscope measurements can drift over time due to integration of imperfections and noise within the gyroscope or, more generally, the IMU. Of the roll axis, the pitch axis, and the yaw axis, it is the yaw axis that is most sensitive to drift. The drift can cause angular error. The angular error can be measured in degrees of rotation per unit of time.

[0087] FIGS. 7A and 7B illustrate floorplan 700 with capture path 702 and capture path 752, respectively, according to some embodiments. Each capture path can include one or more rotations (sometimes referred to as scan directions).

[0088] An angular error of a capture path can be related to or a function of an angular error of each rotation of the capture path. For example, the angular error of a capture path can be an accumulation of the angular errors of the rotations of the capture path. The angular error of each rotation can have an associated magnitude and direction. Similarly, the angular error of the capture path can also have an associated magnitude and direction.

[0089] The capture path can include clockwise rotations and counterclockwise rotations. Each clockwise rotation can result in a positive angular error and each counterclockwise rotation can result in a negative angular error.

[0090] Capture path 702 includes clockwise rotations 704, 706, 708, 712, and 714, and counterclockwise rotations 710 and 716. For the sake of simplicity, assuming the angular error of each rotation is of equal magnitude, the angular error of capture path 702 can be very positive.

[0091] Capture path 752 includes clockwise rotations 754, 756, and 764, and counterclockwise rotations 758, 760, 762, and 766. For the sake of simplicity, assuming the angular error of each rotation is of equal magnitude, the angular error of capture path 752 can be slightly negative.

[0092] In some embodiments, the 3D reconstruction process can include determining and displaying a recommended or suggested rotation during 3D scanning, for example, in an effort to minimize drift or angular error. In some embodiments, determining a recommended or suggested rotation can be based on one or more previous rotations. For example, determining a recommended or suggested rotation can be based on the magnitude, the direction, or both, of one or more previous rotations. In some embodiments, determining recommended or suggested rotations can be based on an angular error of one or more previous rotations. For example, determining a recommended or suggested rotation can be based on the magnitude, the direction, or both, of the angular error of one or more previous rotations.
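
The following Python sketch illustrates one way a suggested rotation could be determined from the sign of the accumulated angular error of previous rotations, consistent with the sign convention above; the drift rate and return values are assumptions introduced for illustration.

    def suggest_rotation(rotations, drift_per_degree=0.001):
        """Suggest a next scan rotation that counteracts accumulated angular error.

        rotations: sequence of signed rotation magnitudes in degrees, positive for
        clockwise and negative for counterclockwise (matching the sign convention
        above). drift_per_degree is an assumed per-degree drift rate; in practice
        it would depend on the IMU.
        Returns 'counterclockwise', 'clockwise', or 'either'.
        """
        # Each clockwise rotation adds positive error and each counterclockwise
        # rotation adds negative error, so the path error is their signed sum.
        angular_error = sum(drift_per_degree * r for r in rotations)

        if angular_error > 0:
            return "counterclockwise"   # rotate opposite to the accumulated error
        if angular_error < 0:
            return "clockwise"
        return "either"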

[0093] For example, with reference to capture paths 702 and 752, at the start of capture paths 702 and 752, the angular error is zero. At clockwise rotation 704 of capture path 702 and clockwise rotation 754 of capture path 752, the angular error is slightly positive. At this point, a counterclockwise recommended or suggested rotation can be determined and displayed. The counterclockwise recommended or suggested rotation is in the opposite direction of the clockwise rotation 704 of capture path 702 and the clockwise rotation 754 of capture path 752 in an effort to lower the angular error from slightly positive to closer to zero. At clockwise rotation 706 of capture path 702 and clockwise rotation 756 of capture path 752, the angular error is slightly more positive. The counterclockwise recommended or suggested rotation is not followed. At this point, a counterclockwise recommended or suggested rotation can be determined and displayed. The counterclockwise recommended or suggested rotation is in the opposite direction of the clockwise rotations 704 and 706 of capture path 702 and clockwise rotations 754 and 756 of capture path 752 in an effort to lower the angular error from slightly more positive to closer to zero. If the counterclockwise recommended or suggested rotation is not followed, the next rotation is a clockwise rotation as illustrated by clockwise rotation 708 of capture path 702. If the counterclockwise recommended or suggested rotation is followed, the next rotation is a counterclockwise rotation as illustrated by counterclockwise rotation 758 of capture path 752.

[0094] The 3D data can include private information in the environment. Examples of private information include personally identifiable information, pictures, medications, assistive devices or equipment, and the like. The 3D data can be filtered to obfuscate the private information in the environment. Filtering can include identifying, blurring, distorting, pixelating, and the like.

[0095] FIG. 8 illustrates a computer system 800 configured to perform any of the steps described herein. The computer system 800 includes an input/output (I/O) Subsystem 802 or other communication mechanism for communicating information, and a hardware processor, or multiple processors, 804 coupled with the I/O Subsystem 802 for processing information. The processor(s) 804 may be, for example, one or more general purpose microprocessors.

[0096] The computer system 800 also includes a main memory 806, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to the I/O Subsystem 802 for storing information and instructions to be executed by processor 804. The main memory 806 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by the processor 804. Such instructions, when stored in storage media accessible to the processor 804, render the computer system 800 into a special purpose machine that is customized to perform the operations specified in the instructions.

[0097] The computer system 800 further includes a read only memory (ROM) 808 or other static storage device coupled to the I/O Subsystem 802 for storing static information and instructions for the processor 804. A storage device 810, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to the I/O Subsystem 802 for storing information and instructions.

[0098] The computer system 800 may be coupled via the I/O Subsystem 802 to an output device 812, such as a cathode ray tube (CRT) or LCD display (or touch screen), for displaying information to a user. An input device 814, including alphanumeric and other keys, is coupled to the I/O Subsystem 802 for communicating information and command selections to the processor 804. Another type of user input device is control device 816, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to the processor 804 and for controlling cursor movement on the output device 812. This input/control device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.

[0099] The computing system 800 may include a user interface module to implement a GUI that may be stored in a mass storage device as computer executable program instructions that are executed by the computing device(s). The computer system 800 may further, as described below, implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs the computer system 800 to be a special-purpose machine. According to some embodiments, the techniques herein are performed by the computer system 800 in response to the processor(s) 804 executing one or more sequences of one or more computer readable program instructions contained in the main memory 806. Such instructions may be read into the main memory 806 from another storage medium, such as storage device 810. Execution of the sequences of instructions contained in the main memory 806 causes the processor(s) 804 to perform the process steps described herein. In some embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.

[0100] Various forms of computer readable storage media may be involved in carrying one or more sequences of one or more computer readable program instructions to the processor 804 for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line or cable, using a modem (or optical network unit with respect to fiber). A modem local to the computer system 800 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on the I/O Subsystem 802. The I/O Subsystem 802 carries the data to the main memory 806, from which the processor 804 retrieves and executes the instructions. The instructions received by the main memory 806 may optionally be stored on the storage device 810 either before or after execution by the processor 804.

[0101] The computer system 800 also includes a communication interface 818 coupled to the I/O Subsystem 802. The communication interface 818 provides a two-way data communication coupling to a network link 820 that is connected to a local network 822. For example, the communication interface 818 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface 818 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, the communication interface 818 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.

[0102] The network link 820 typically provides data communication through one or more networks to other data devices. For example, the network link 820 may provide a connection through the local network 822 to a host computer 824 or to data equipment operated by an Internet Service Provider (ISP) 826. The ISP 826 in turn provides data communication services through the world-wide packet data communication network now commonly referred to as the "Internet" 828. The local network 822 and the Internet 828 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on the network link 820 and through the communication interface 818, which carry the digital data to and from the computer system 800, are example forms of transmission media.

[0103] The computer system 800 can send messages and receive data, including program code, through the network(s), the network link 820 and the communication interface 818. In the Internet example, a server 830 might transmit a requested code for an application program through the Internet 828, the ISP 826, the local network 822 and communication interface 818.

[0104] The received code may be executed by the processor 804 as it is received, and/or stored in the storage device 810, or other non-volatile storage for later execution.

[0105] FIG. 9 illustrates a system 900 configured for augmenting 3D models, in accordance with one or more implementations. In some implementations, system 900 may include one or more computing platforms 902. Computing platform(s) 902 may be configured to communicate with one or more remote platforms 904 according to a client/server architecture, a peer-to-peer architecture, and/or other architectures. Remote platform(s) 904 may be configured to communicate with other remote platforms via computing platform(s) 902 and/or according to a client/server architecture, a peer-to-peer architecture, and/or other architectures. Users may access system 900 via remote platform(s) 904.

[0106] Computing platform(s) 902 may be configured by machine-readable instructions 906. Machine-readable instructions 906 may include one or more instruction modules. The instruction modules may include computer program modules. The instruction modules may include one or more of image receiving module 908, model generating module 910, model augmentation module 912, system generating module 914, side identifying module 916, outline generating module 918, element match module 920, model alignment module 922, value derivation module 924, element identifying module 926, element correlation module 928, aspect identifying module 930, factor derivation module 932, image scaling module 934, subset selection module 936, angular error calculation module 938, rotation determination module 940, rotation display module 942, and/or other instruction modules.

[0107] Image receiving module 908 may be configured to receive a first plurality of images. Image receiving module 908 may be configured to receive a second plurality of images. The first plurality of images and the second plurality of images may include at least one of visual data or depth data. The visual data may include at least one of image data or video data. By way of non-limiting example, the depth data may include at least one of point clouds, line clouds, meshes, or points. By way of non-limiting example, the first plurality of images and second plurality of images may be captured by one or more of a smartphone, a tablet computer, an augmented reality headset, a virtual reality headset, a drone, and an aerial platform.

[0108] Each image of the first plurality of images and the second plurality of images may include a building object. Each image of the first plurality of images may include an interior of the building object. Each image of the second plurality of images may include an exterior of the building object.

[0109] Model generating module 910 may be configured to generate a first 3D model based on the first plurality of images. Model generating module 910 may be configured to generate a second 3D model based on the second plurality of images. The first 3D model and the second 3D model may include at least one of a polygon-based model or a primitive-based model. The first 3D model and the second 3D model correspond to a building object. The first 3D model may correspond to an interior of the building object. The second 3D model may correspond to an exterior of the building object.

[0110] Model augmentation module 912 may be configured to augment the first 3D model with the second 3D model.

[0111] Augmenting the first 3D model with the second 3D model may be based on location information associated with the first 3D model or the first plurality of images and the second 3D model or the second plurality of images. The location information may include latitude and longitude information.
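
By way of non-limiting illustration, location-based placement might use each model's geotagged latitude/longitude origin to compute a plan-view offset. The sketch below is only an assumption-laden example: the function names, the plan-view vertex layout (x=east, y=north), and the local equirectangular approximation are illustrative choices, not part of any particular implementation.

```python
import numpy as np

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres

def latlon_offset_m(lat_ref, lon_ref, lat, lon):
    """Approximate east/north offset in metres of (lat, lon) from a reference
    point, using a local equirectangular (flat-earth) approximation."""
    d_east = np.radians(lon - lon_ref) * EARTH_RADIUS_M * np.cos(np.radians(lat_ref))
    d_north = np.radians(lat - lat_ref) * EARTH_RADIUS_M
    return np.array([d_east, d_north])

def place_model_by_location(vertices_xy, model_latlon, reference_latlon):
    """Translate a model's plan-view vertices (N x 2) so its geotagged origin
    sits at the correct offset from the reference model's origin."""
    offset = latlon_offset_m(reference_latlon[0], reference_latlon[1],
                             model_latlon[0], model_latlon[1])
    return np.asarray(vertices_xy, dtype=float) + offset
```

Such a coarse placement could precede finer geometric alignment of the two models.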

[0112] System generating module 914 may be configured to generate the common coordinate system. Augmenting the first 3D model with the second 3D model may be relative to the common coordinate system. The first 3D model may be associated with a first coordinate system. The second 3D model may be associated with a second coordinate system. Generating the common coordinate system may be based on the first coordinate system and the second coordinate system. Generating the common coordinate system may include matching the first coordinate system with the second coordinate system.
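
A common coordinate system may, for example, be derived from a handful of corresponding points expressed in each model's coordinate system. The following sketch assumes such correspondences are available and uses the Kabsch algorithm as one illustrative choice for estimating the rigid rotation and translation that maps the first coordinate system onto the second.

```python
import numpy as np

def rigid_transform(points_a, points_b):
    """Least-squares rotation R and translation t mapping points_a onto
    points_b (Kabsch algorithm). Both inputs are N x 3 arrays of
    corresponding points in the first and second coordinate systems."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    H = (a - ca).T @ (b - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# The first model could then be expressed in the common (second) coordinate
# system with: aligned = (R @ vertices_1.T).T + t
```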

[0113] Side identifying module 916 may be configured to identify a first plurality of sides of the first 3D model. Side identifying module 916 may be configured to identify a second plurality of sides of the second 3D model. Each side of the first plurality of sides and the second plurality of sides may correspond to a side of a building object. Augmenting the first 3D model with the second 3D model may include substantially aligning the first plurality of sides with the second plurality of sides in a common coordinate system.

[0114] Outline generating module 918 may be configured to generate the first outline of the first 3D model. Outline generating module 918 may be configured to generate the second outline of the second 3D model. Augmenting the first 3D model with the second 3D model may be based on a first outline of the first 3D model and a second outline of the second 3D model. Generating the first outline of the first 3D model may be based on a top-down view of the first 3D model. Generating the second outline of the second 3D model may be based on a top-down view of the second 3D model.
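
By way of non-limiting illustration, a top-down outline may be approximated by projecting model vertices onto the ground plane and tracing their footprint. The sketch below assumes z-up vertices and uses a convex hull purely for brevity; concave footprints would require a different boundary construction, such as an alpha shape.

```python
import numpy as np
from scipy.spatial import ConvexHull

def top_down_outline(vertices):
    """Project 3D model vertices (N x 3, z-up assumed) onto the ground plane
    and return the footprint outline as an ordered loop of 2D points."""
    plan = np.asarray(vertices, dtype=float)[:, :2]   # drop the vertical axis
    hull = ConvexHull(plan)
    return plan[hull.vertices]                        # counter-clockwise loop
```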

[0115] Augmenting the first 3D model with the second 3D model may include substantially aligning the first outline of the first 3D model with the second outline of the second 3D model. Model alignment module 922 may be configured to substantially align the first outline of the first 3D model with the second outline of the second 3D model. Substantially aligning the first outline of the first 3D model with the second outline of the second 3D model may be based on one or more architectural elements. Substantially aligning the first outline of the first 3D model with the second outline of the second 3D model may be based on the matched architectural element. Substantially aligning the first outline of the first 3D model with the second outline of the second 3D model may be based on one or more values derived from one or more architectural elements.

[0116] Element match module 920 may be configured to match an architectural element of the first 3D model with a corresponding architectural element of the second 3D model. Model alignment module 922 may be configured to substantially align the first 3D model with the second 3D model based on the matched architectural element. Value derivation module 924 may be configured to derive a value based on the substantial alignment of the first 3D model with the second 3D model. Substantially aligning the first outline of the first 3D model with the second outline of the second 3D model may be based on the derived value.

[0117] Element match module 920 may be configured to match an architectural element of the first plurality of images with a corresponding architectural element of the second plurality of images. Model alignment module 922 may be configured to substantially align the first 3D model with the second 3D model based on the matched architectural element. Value derivation module 924 may be configured to derive a value based on the substantial alignment of the first 3D model with the second 3D model. Substantially aligning the first outline of the first 3D model with the second outline of the second 3D model may be based on the derived value.
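
One illustrative way to realize such alignment, assuming a single matched architectural element (for example, a front door) with a known plan position and facing direction in each model, is to derive the rotation and translation values directly from that correspondence. The function name and inputs below are assumptions made only for the sake of the sketch.

```python
import numpy as np

def align_outline_by_matched_element(outline_a, elem_a_pos, elem_a_dir,
                                     elem_b_pos, elem_b_dir):
    """Rotate and translate 2D outline_a so that a matched architectural
    element (plan position and facing direction in each model) coincides
    with its counterpart in the other model's outline."""
    theta = (np.arctan2(elem_b_dir[1], elem_b_dir[0]) -
             np.arctan2(elem_a_dir[1], elem_a_dir[0]))   # derived rotation value
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    t = np.asarray(elem_b_pos, dtype=float) - R @ np.asarray(elem_a_pos, dtype=float)
    return (R @ np.asarray(outline_a, dtype=float).T).T + t
```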

[0118] Element identifying module 926 may be configured to identify a first plurality of elements of the first 3D model. Element identifying module 926 may be configured to identify a second plurality of elements of the second 3D model. Identifying the first plurality of elements of the first 3D model may include semantically segmenting the first 3D model. Identifying the second plurality of elements of the second 3D model may include semantically segmenting the second 3D model. Identifying the first plurality of elements of the first 3D model may further include labeling the semantically segmented first 3D model. Identifying the second plurality of elements of the second 3D model may further include labeling the semantically segmented second 3D model. The first plurality of elements and the second plurality of elements may be associated with a building object. The first plurality of elements and the second plurality of elements may be associated with a structure of interest of the building object. Alternatively, the first plurality of elements and the second plurality of elements may not be associated with a building object. Element correlation module 928 may be configured to correlate the first plurality of elements with the second plurality of elements. Augmenting the first 3D model with the second 3D model may be based on the correlated plurality of elements.
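
Correlating the pluralities of elements may, for instance, pair elements that share a semantic label and lie close together once both models are expressed in a common coordinate system. The sketch below assumes each element is represented by a label and a centroid; the dictionary layout and the distance threshold are illustrative assumptions only.

```python
import numpy as np

def correlate_elements(elements_a, elements_b, max_dist=1.0):
    """Correlate semantically labeled elements of two models already expressed
    in a common coordinate system. Each element is a dict with a 'label'
    (e.g. 'window', 'door') and a 'centroid' (3-vector); returns (i, j)
    index pairs of correlated elements."""
    pairs = []
    for i, ea in enumerate(elements_a):
        candidates = [
            (np.linalg.norm(np.asarray(ea['centroid']) - np.asarray(eb['centroid'])), j)
            for j, eb in enumerate(elements_b) if eb['label'] == ea['label']
        ]
        if candidates:
            dist, j = min(candidates)
            if dist <= max_dist:
                pairs.append((i, j))
    return pairs
```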

[0119] Element identifying module 926 may be configured to identify a third plurality of elements. The third plurality of elements may include elements common to the first plurality of elements and the second plurality of elements. Augmenting the first 3D model with the second 3D model may be based on the third plurality of elements.

[0120] Aspect identifying module 930 may be configured to identify an aspect of an element of the first plurality of elements. Aspect identifying module 930 may be configured to identify a corresponding aspect of a corresponding element of the second plurality of elements. Augmenting the first 3D model with the second 3D model may include substantially aligning the aspect of the element of the first plurality of elements with the corresponding aspect of the corresponding element of the second plurality of elements. The aspect of the element of the first plurality of elements and the corresponding aspect of the corresponding element of the second plurality of elements may be a plane.
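
Where the aspect being aligned is a plane, alignment may reduce to rotating one model so that the matched plane normals agree. A minimal sketch using Rodrigues' rotation formula is given below; the unit-normal inputs and function name are assumptions of the illustration.

```python
import numpy as np

def rotation_between_normals(n_a, n_b):
    """Rotation matrix turning unit plane normal n_a into unit plane normal
    n_b (Rodrigues' formula), e.g. to bring a matched wall plane of one model
    parallel to the corresponding plane of the other model."""
    n_a = np.asarray(n_a, dtype=float); n_a = n_a / np.linalg.norm(n_a)
    n_b = np.asarray(n_b, dtype=float); n_b = n_b / np.linalg.norm(n_b)
    v = np.cross(n_a, n_b)
    c = float(np.dot(n_a, n_b))
    if np.isclose(c, -1.0):                 # opposite normals: rotate 180 degrees
        axis = np.cross(n_a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(n_a, [0.0, 1.0, 0.0])
        axis = axis / np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + (K @ K) / (1.0 + c)
```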

[0121] Element identifying module 926 may be configured to identify a first plurality of elements of the first plurality of images. Element identifying module 926 may be configured to identify a second plurality of elements of the second plurality of images. Identifying the first plurality of elements of the first plurality of images may include semantically segmenting each image of the first plurality of images. Identifying the first plurality of elements of the first plurality of images may further include labeling the semantically segmented first plurality of images. Identifying the second plurality of elements of the second plurality of images may include semantically segmenting each image of the second plurality of images. Identifying the second plurality of elements of the second plurality of images may further include labeling the semantically segmented second plurality of images. The first plurality of elements and the second plurality of elements may be associated with a building object. The first plurality of elements and the second plurality of elements may be associated with a structure of interest of the building object. Alternatively, the first plurality of elements and the second plurality of elements may not be associated with a building object. Element correlation module 928 may be configured to correlate the first plurality of elements with the second plurality of elements. Augmenting the first 3D model with the second 3D model may be based on the correlated plurality of elements.

[0122] Element identifying module 926 may be configured to identify a third plurality of elements. The third plurality of elements may include elements common to the first plurality of elements and the second plurality of elements. Augmenting the first 3D model with the second 3D model may be based on the third plurality of elements.

[0123] Aspect identifying module 930 may be configured to identify an aspect of an element of the first plurality of elements. Aspect identifying module 930 may be configured to identify a corresponding aspect of a corresponding element of the second plurality of elements. Augmenting the first 3D model with the second 3D model may include substantially aligning the aspect of the element of the first plurality of elements with the corresponding aspect of the corresponding element of the second plurality of elements. The aspect of the element of the first plurality of elements and the corresponding aspect of the corresponding element of the second plurality of elements may be a plane.

[0124] Augmenting the first 3D model with the second 3D model may include correlating the first 3D model with the second 3D model. Correlating the first 3D model with the second 3D model may include assigning a confidence value to each element of the first plurality of elements and the second plurality of elements. Assigning the confidence value to each element of the first plurality of elements and the second plurality of elements may be based on co-visibility of the first plurality of elements and the second plurality of elements. Assigning the confidence value to each element of the first plurality of elements and the second plurality of elements may be based on commonality of the first plurality of elements and the second plurality of elements.
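
A confidence value may, for example, combine how often an element is co-visible across the capture views with whether the element is common to both reconstructions. The following is a deliberately simple sketch; the specific weighting (a 1.5x boost for common elements) is an illustrative assumption rather than a prescribed formula.

```python
def element_confidence(covisible_views, total_views, common_to_both):
    """Toy confidence score for a correlated element: the fraction of capture
    views in which the element is co-visible, boosted when the element is
    common to both the interior and exterior reconstructions, clamped to 1."""
    visibility = covisible_views / max(total_views, 1)
    boost = 1.5 if common_to_both else 1.0    # illustrative weighting only
    return min(1.0, visibility * boost)
```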

[0125] Augmenting the first 3D model with the second 3D model may include offsetting the first 3D model from the second 3D model. Offsetting the first 3D model from the second 3D model may be based on one or more architectural elements. Offsetting the first 3D model from the second 3D model may be based on the matched architectural element. Offsetting the first 3D model from the second 3D model may be based on one or more values derived from one or more architectural elements. Offsetting the first 3D model from the second 3D model may be based on the derived value. Augmenting the first 3D model with the second 3D model may include dilating the first 3D model based on the second 3D model.
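
By way of non-limiting illustration, dilating an interior footprint toward an exterior footprint might push each outline vertex outward by a wall-thickness value derived from the matched elements. The sketch below uses a crude radial push from the centroid; a production implementation would more likely offset each edge along its outward normal.

```python
import numpy as np

def dilate_outline(outline_xy, wall_thickness):
    """Push each vertex of an interior footprint (N x 2) outward from the
    outline centroid by an assumed wall thickness, as a crude dilation
    toward the exterior footprint."""
    outline = np.asarray(outline_xy, dtype=float)
    centroid = outline.mean(axis=0)
    directions = outline - centroid
    norms = np.linalg.norm(directions, axis=1, keepdims=True)
    return outline + directions / np.maximum(norms, 1e-9) * wall_thickness
```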

[0126] The first plurality of images may include a first plurality of anchor poses, and the second plurality of images may include a second plurality of anchor poses. Augmenting the first 3D model with the second 3D model may be based on anchor poses common to the first plurality of anchor poses and the second plurality of anchor poses.

[0127] Factor derivation module 932 may be configured to derive a scaling factor based on at least one of the first plurality of images, the second plurality of images, the first 3D model, a first coordinate system of the first 3D model, the second 3D model, or a second coordinate system of the second 3D model. Image scaling module 934 may be configured to scale at least one of the first plurality of images, the second plurality of images, the first 3D model, a first coordinate system of the first 3D model, the second 3D model, or a second coordinate system of the second 3D model based on the derived scaling factor.
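
A scaling factor may, for instance, be derived from the measured extent of a single matched element in each model and then applied to one model's vertices. The function names and the choice of scaling about the centroid are assumptions made for this illustration only.

```python
import numpy as np

def derive_scaling_factor(length_in_first, length_in_second):
    """Scaling factor bringing the first model's units into the second model's
    units, derived from the length of one matched element (e.g. a door width)
    as measured in each model."""
    return length_in_second / length_in_first

def scale_model(vertices, factor, origin=None):
    """Scale model vertices (N x 3) about an origin (centroid by default)."""
    vertices = np.asarray(vertices, dtype=float)
    if origin is None:
        origin = vertices.mean(axis=0)
    return (vertices - origin) * factor + origin
```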

[0128] Subset selection module 936 may be configured to select a first subset of images of the first plurality of images based on at least one of translation data associated with the first plurality of images or rotation data associated with the first plurality of images. Generating the first 3D model may be based on the first subset of images. Subset selection module 936 may be configured to select a second subset of images of the second plurality of images based on at least one of translation data associated with the second plurality of images or rotation data associated with the second plurality of images. Generating the second 3D model may be based on the second subset of images.
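
Selecting such a subset may, for example, amount to keyframe selection: keeping only images whose camera pose has translated or rotated sufficiently since the last kept image. The thresholds and the yaw-only rotation measure in the sketch below are illustrative assumptions.

```python
import numpy as np

def select_keyframes(positions, yaw_deg, min_translation=0.25, min_rotation_deg=10.0):
    """Select indices of images whose camera pose has translated (metres) or
    rotated (degrees, yaw only in this sketch) sufficiently since the last
    kept image, reducing redundancy before 3D reconstruction."""
    if len(positions) == 0:
        return []
    keep = [0]
    for i in range(1, len(positions)):
        last = keep[-1]
        dt = np.linalg.norm(np.asarray(positions[i]) - np.asarray(positions[last]))
        dr = abs(yaw_deg[i] - yaw_deg[last]) % 360.0
        dr = min(dr, 360.0 - dr)                       # wrap-around difference
        if dt >= min_translation or dr >= min_rotation_deg:
            keep.append(i)
    return keep
```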

[0129] Angular error calculation module 938 may be configured to calculate a first angular error of a first capture path associated with the first plurality of images. Rotation determination module 940 may be configured to determine a suggested rotation based on the first angular error of the first capture path. Rotation display module 942 may be configured to display the suggested rotation.
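
For a capture path that is expected to orbit the structure, one illustrative measure of angular error is the shortfall between the accumulated camera heading change and a full 360-degree sweep, with the suggested rotation being that shortfall. The sketch below assumes a sequence of camera headings in degrees; the function names are illustrative.

```python
import numpy as np

def capture_path_angular_error(headings_deg):
    """Shortfall (degrees) between the accumulated camera heading change along
    a capture path and a full 360-degree sweep around the structure."""
    unwrapped = np.unwrap(np.radians(np.asarray(headings_deg, dtype=float)))
    swept = abs(np.degrees(unwrapped[-1] - unwrapped[0]))
    return 360.0 - swept

def suggested_rotation(headings_deg):
    """Additional rotation (degrees) to suggest to the user so the capture
    path closes the loop; zero means the loop is already complete."""
    return max(0.0, capture_path_angular_error(headings_deg))
```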

[0130] In some implementations, computing platform(s) 902, remote platform(s) 904, and/or external resources 944 may be operatively linked via one or more electronic communication links. For example, such electronic communication links may be established, at least in part, via a network such as the Internet and/or other networks. It will be appreciated that this is not intended to be limiting, and that the scope of this disclosure includes implementations in which computing platform(s) 902, remote platform(s) 904, and/or external resources 944 may be operatively linked via some other communication media.

[0131] A given remote platform 904 may include one or more processors configured to execute computer program modules. The computer program modules may be configured to enable an expert or user associated with the given remote platform 904 to interface with system 900 and/or external resources 944, and/or provide other functionality attributed herein to remote platform(s) 904. By way of non-limiting example, a given remote platform 904 and/or a given computing platform 902 may include one or more of a server, a desktop computer, a laptop computer, a handheld computer, a tablet computing platform, a NetBook, a Smartphone, a gaming console, and/or other computing platforms.

[0132] External resources 944 may include sources of information outside of system 900, external entities participating with system 900, and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources 944 may be provided by resources included in system 900.

[0133] Computing platform(s) 902 may include electronic storage 946, one or more processors 948, and/or other components. Computing platform(s) 902 may include communication lines, or ports to enable the exchange of information with a network and/or other computing platforms. Illustration of computing platform(s) 902 in FIG. 9 is not intended to be limiting. Computing platform(s) 902 may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to computing platform(s) 902. For example, computing platform(s) 902 may be implemented by a cloud of computing platforms operating together as computing platform(s) 902.

[0134] Electronic storage 946 may comprise non-transitory storage media that electronically stores information. The electronic storage media of electronic storage 946 may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with computing platform(s) 902 and/or removable storage that is removably connectable to computing platform(s) 902 via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage 946 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. Electronic storage 946 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). Electronic storage 946 may store software algorithms, information determined by processor(s) 948, information received from computing platform(s) 902, information received from remote platform(s) 904, and/or other information that enables computing platform(s) 902 to function as described herein.

[0135] Processor(s) 948 may be configured to provide information processing capabilities in computing platform(s) 902. As such, processor(s) 948 may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Although processor(s) 948 is shown in FIG. 9 as a single entity, this is for illustrative purposes only. In some implementations, processor(s) 948 may include a plurality of processing units. These processing units may be physically located within the same device, or processor(s) 948 may represent processing functionality of a plurality of devices operating in coordination. Processor(s) 948 may be configured to execute modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942, and/or other modules. Processor(s) 948 may be configured to execute modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942, and/or other modules by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor(s) 948. As used herein, the term "module" may refer to any component or set of components that perform the functionality attributed to the module. This may include one or more physical processors during execution of processor readable instructions, the processor readable instructions, circuitry, hardware, storage media, or any other components.

[0136] It should be appreciated that although modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942 are illustrated in FIG. 9 as being implemented within a single processing unit, in implementations in which processor(s) 948 includes multiple processing units, one or more of modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942 may be implemented remotely from the other modules. The description of the functionality provided by the different modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942 described below is for illustrative purposes, and is not intended to be limiting, as any of modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942 may provide more or less functionality than is described. For example, one or more of modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942 may be eliminated, and some or all of its functionality may be provided by other ones of modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942. As another example, processor(s) 948 may be configured to execute one or more additional modules that may perform some or all of the functionality attributed below to one of modules 908, 910, 912, 914, 916, 918, 920, 922, 924, 926, 928, 930, 932, 934, 936, 938, 940, and/or 942.

[0137] FIG. 10 illustrates a method 1000 for augmenting 3D models, in accordance with one or more implementations. The operations of method 1000 presented below are intended to be illustrative. In some implementations, method 1000 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 1000 are illustrated in FIG. 10 and described below is not intended to be limiting.

[0138] In some implementations, method 1000 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 1000 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 1000.

[0139] An operation 1002 may include receiving a first plurality of images. Operation 1002 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to image receiving module 908, in accordance with one or more implementations.

[0140] An operation 1004 may include generating a first 3D model based on the first plurality of images. Operation 1004 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to model generating module 910, in accordance with one or more implementations.

[0141] An operation 1006 may include receiving a second plurality of images. Operation 1006 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to image receiving module 908, in accordance with one or more implementations.

[0142] An operation 1008 may include generating a second 3D model based on the second plurality of images. Operation 1008 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to model generating module 910, in accordance with one or more implementations.

[0143] An operation 1010 may include augmenting the first 3D model with the second 3D model. Operation 1010 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to model augmentation module 912, in accordance with one or more implementations.

[0144] All of the processes described herein may be embodied in, and fully automated via, software code modules executed by a computing system that includes one or more computers or processors. The code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all of the methods may be embodied in specialized computer hardware.

[0145] Many other variations than those described herein will be apparent from this disclosure. For example, depending on the embodiment, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence or can be added, merged, or left out altogether (for example, not all described acts or events are necessary for the practice of the algorithms). Moreover, in certain embodiments, acts or events can be performed concurrently, for example, through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. In addition, different tasks or processes can be performed by different machines and/or computing systems that can function together.

[0146] The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a processing unit or processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can include electrical circuitry configured to process computer-executable instructions. In some embodiments, a processor includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor can also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, one or more microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. For example, some or all of the signal processing algorithms described herein may be implemented in analog circuitry or mixed analog and digital circuitry. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.

[0147] Conditional language such as, among others, "can," "could," "might" or "may," unless specifically stated otherwise, is understood within the context as used in general to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.

[0148] Disjunctive language such as the phrase "at least one of X, Y, or Z," unless specifically stated otherwise, is understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (for example, X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.

[0149] Any process descriptions, elements or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or elements in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.

[0150] Unless otherwise explicitly stated, articles such as "a" or "an" should generally be interpreted to include one or more described items. Accordingly, phrases such as "a device configured to" are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, "a processor configured to carry out recitations A, B and C" can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.

[0151] The technology as described herein may have also been described, at least in part, in terms of one or more embodiments, none of which is deemed exclusive to the others. Various configurations may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods may be performed in an order different from that described, or combined with other steps, or omitted altogether. This disclosure is further non-limiting, and the examples and embodiments described herein do not limit the scope of the invention.

[0152] It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure.