

Title:
IMAGE TRANSFORMATIONS TO ENHANCE EDGE AND SURFACE CLARITY IN ADDITIVE MANUFACTURING
Document Type and Number:
WIPO Patent Application WO/2023/193006
Kind Code:
A2
Abstract:
Systems and methods for producing more accurate, intricate structures via digital light processing additive manufacturing are provided. One or more digital transformations, or filters, are applied to 2D-images or 3D matrices comprised of 2D images used to print the part to help eliminate the effects of light and matter interaction, as well as limitations imposed by the printing resolution of an additive manufacturing printer. The digital transformations can also be used in a diagnostic manner to help provide feedback on performance.

Inventors:
KAUFMAN TYLER ALEXANDER (US)
ERB RANDALL M (US)
SHORES DANIEL T (US)
CRAMER ALAN CHARLES (US)
BYRNE CHRISTOPHER (US)
Application Number:
PCT/US2023/065236
Publication Date:
October 05, 2023
Filing Date:
March 31, 2023
Assignee:
3DFORTIFY INC (US)
International Classes:
B29C64/386
Attorney, Agent or Firm:
PHEIFFER, RORY P. (US)
Claims:
What is claimed is:

1. A method for building a printed part by additive manufacturing, comprising: applying one or more digital transformations to a build file by: defining a first layer of an image of the build file; defining a second layer of the image of the build file, wherein the second layer is to be printed after the first layer, and at least some pixels in the second layer correspond to pixels in the first layer; comparing the first layer to the second layer to establish a location of an unsupported region in the second layer; and removing the unsupported region in the second layer.

2. The method of claim 1, wherein comparing the first layer to the second layer to establish a location of an unsupported region in the second layer further comprises: eroding the image by an image kernel.

3. The method of claim 2, wherein eroding the image by an image kernel further comprises: shrinking all edges of the image.

4. The method of claim 2, wherein eroding the image by an image kernel further comprises: applying a matrix of a pre-determined size to the image; checking that a designated grid pattern of pixels exists in a designated area; and if the designated grid pattern of pixels does not exist in the designated area, marking all pixels in the designated area.

5. The method of claim 2, wherein removing the unsupported region in the second layer further comprises the image having at least one of one or more borders or one or more islands removed where there had been thin spans before having been removed by eroding the image.
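The erosion of claims 2-5 is a standard binary morphological erosion: a matrix (kernel) of pre-determined size is applied at each pixel, and a pixel survives only if the designated grid pattern of white pixels exists in the designated area around it. The minimal NumPy sketch below assumes a square all-ones kernel and 0/1 images; the function name and parameter values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def erode(image, k=3):
    """Binary erosion: a pixel stays white only if every pixel in its
    k x k neighborhood is white (the grid-pattern check of claim 4);
    otherwise the area is marked as removed (0)."""
    pad = k // 2
    padded = np.pad(image, pad, mode="constant", constant_values=0)
    out = np.zeros_like(image)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            # designated area: the k x k window centered on (y, x)
            if padded[y:y + k, x:x + k].all():
                out[y, x] = 1
    return out

# a solid 4x4 white block shrinks to its 2x2 core after one erosion pass
layer = np.zeros((8, 8), dtype=np.uint8)
layer[2:6, 2:6] = 1
eroded = erode(layer)
```

This also illustrates the effect recited in claim 5: spans thinner than the kernel disappear entirely under erosion, taking unsupported borders and islands with them.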

6. The method of claim 1, further comprising: comparing the first layer to the second layer to establish a location of an overhang in the second layer; and removing the overhang in the second layer.


7. The method of claim 6, wherein comparing the first layer to the second layer to establish a location of an overhang in the second layer further comprises: flooding the build file to identify at least two distinct bodies of pixels, the at least two distinct bodies of pixels comprising one or more pixels present in the first layer and one or more pixels present in the second layer, wherein the one or more pixels in the first layer includes at least some pixels that are aligned with akin pixels of the one or more pixels in the second layer.

8. The method of claim 7, wherein flooding the build file to identify at least two distinct bodies of pixels further comprises: defining each white pixel on the first layer as a seed point; and operating a flood-fill tool to flood both the first layer and the second layer with pixels that identify supported pixels.

9. The method of claim 7, further comprising: identifying each white pixel of the second layer that is within a designated distance of at least one supporting white pixel of the first layer; and removing any white pixel of the second layer that is not identified as being within the designated distance.

10. The method of claim 7, further comprising: comparing a third layer of the image of the build file that precedes the first layer and the second layer in an overlay to confirm the location of the overhang.
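One way to read the flood-fill of claims 7-9: every white pixel of the first (already printed) layer that aligns with a white pixel of the second layer seeds a fill that spreads through connected white pixels of the second layer, and anything the fill never reaches is unsupported and removed. The sketch below assumes 4-connectivity and a designated distance of zero; the names are illustrative, not from the patent.

```python
import numpy as np

def remove_unsupported(first, second):
    """Seed from white pixels aligned across both layers (claim 8), flood
    through the second layer, and drop white pixels the flood never
    reaches (claim 9 with a designated distance of zero)."""
    supported = (first == 1) & (second == 1)   # seed points
    h, w = second.shape
    changed = True
    while changed:                             # iterative flood fill
        changed = False
        ys, xs = np.where(supported)
        for y, x in zip(ys, xs):
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and second[ny, nx] == 1
                        and not supported[ny, nx]):
                    supported[ny, nx] = True
                    changed = True
    return supported.astype(np.uint8)

first = np.zeros((5, 5), dtype=np.uint8)
first[:, 0:2] = 1                  # a solid wall in the printed layer
second = np.zeros((5, 5), dtype=np.uint8)
second[:, 0:3] = 1                 # the wall, one pixel wider (connected, kept)
second[2, 4] = 1                   # a floating island (unreachable, removed)
cleaned = remove_unsupported(first, second)
```

Claim 10's overlay check would extend this by comparing a third, earlier layer to confirm the overhang location before removal.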

11. The method of claim 1, wherein applying one or more digital transformations to a build file further comprises: defining a compression parameter and at least one dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first dimension of an image to be printed multiplied by the compression parameter; sampling the first scaled image into a plurality of evenly-distributed segments, the total number of segments being equal to the dimension of an image to be printed; calculating an average intensity value across each evenly-distributed segment of the plurality of evenly-distributed segments; and rendering a first sampled image, having an image dimension equivalent to the first dimension of an image to be printed, the first sampled image being comprised of the average intensity values.

12. The method of claim 11, wherein the at least one dimension of an image to be printed comprises a sum of at least one voxel in a Z direction.

13. The method of claim 11, wherein the at least one dimension of an image to be printed comprises a sum of the total number of pixels in an array defined by an X direction and a Y direction.

14. The method of claim 11, wherein the compression parameter comprises a multiple of two.

15. The method of claim 14, wherein the compression parameter comprises a multiple of four.
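Claims 11-15 describe rendering at a multiple of the target resolution and averaging each evenly-distributed segment back down, that is, supersampled anti-aliasing. A minimal sketch with an assumed compression parameter of 2 (the function and variable names are illustrative):

```python
import numpy as np

def average_downsample(scaled, factor):
    """Split the scaled render into evenly-distributed factor x factor
    segments and emit each segment's average intensity (claim 11)."""
    h, w = scaled.shape
    segments = scaled.reshape(h // factor, factor, w // factor, factor)
    return segments.mean(axis=(1, 3))

# a hard edge rendered at 2x resolution lands on a half-intensity grey
# pixel in the sampled image instead of an aliased jump
hi_res = np.zeros((4, 4))
hi_res[:, :3] = 1.0        # the edge falls mid-pixel at target resolution
lo_res = average_downsample(hi_res, 2)
```

The intermediate grey values let the projected light dose trace curves more smoothly than the fixed pixel array otherwise allows.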

16. The method of claim 1, wherein applying one or more digital transformations to a build file further comprises: defining a banding parameter and at least one dimension of an image to be printed; rendering a first original image in a first dimension of an image to be printed; identifying edge pixels within the banding parameter of the first original image; and dyeing the identified band pixels a shade of grey to create a dyed band image.

17. The method of claim 16, wherein the banding parameter comprises one pixel.

18. The method of claim 17, wherein the banding parameter comprises at least two pixels.
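Claims 16-18 dye a band of edge pixels grey rather than leaving a hard binary boundary. A sketch assuming a banding parameter of one pixel, with pixels beyond the image border treated as void; the grey value of 0.5 is an assumption, not a value from the patent:

```python
import numpy as np

def dye_band(image, band=1, grey=0.5):
    """Dye every white pixel within `band` pixels of a void pixel (or of
    the image border) a grey intensity, creating the dyed band image."""
    padded = np.pad(image, band, mode="constant", constant_values=0)
    out = image.astype(float).copy()
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * band + 1, x:x + 2 * band + 1]
            if image[y, x] == 1 and (window == 0).any():
                out[y, x] = grey   # identified band pixel
    return out

# on a solid 4x4 square the outer ring is dyed; the 2x2 core stays white
layer = np.ones((4, 4))
dyed = dye_band(layer, band=1, grey=0.5)
```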

19. The method of claim 1, wherein applying one or more digital transformations to a build file further comprises: defining an expansion parameter and at least one dimension of an image to be printed; rendering a first original image in the first dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first original image plus the expansion parameter; subtracting the original image from the scaled image to create an image sheath; dyeing the sheath a subcritical shade of grey to create a dyed sheath; and adding the dyed sheath to the original image in the first dimension of an image to be printed to create a first projected image.

20. The method of claim 19, wherein the expansion parameter comprises one pixel.
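Claims 19-20 build an image sheath: the original is expanded by the expansion parameter, the original is subtracted out, and the remaining shell is dyed a subcritical grey before being added back. The sketch below implements the expansion as a morphological dilation; the grey value of 0.4 is an assumed subcritical (below-curing-threshold) intensity, not a value from the patent.

```python
import numpy as np

def add_grey_sheath(image, expand=1, grey=0.4):
    """Dilate by `expand` pixels, subtract the original to isolate the
    sheath, dye it `grey`, and add it back to form the projected image."""
    padded = np.pad(image, expand, mode="constant", constant_values=0)
    h, w = image.shape
    dilated = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            # a pixel joins the scaled image if any white pixel lies
            # within `expand` of it
            if padded[y:y + 2 * expand + 1, x:x + 2 * expand + 1].any():
                dilated[y, x] = 1
    sheath = dilated - image          # pixels gained by the expansion
    return image.astype(float) + grey * sheath

# a 2x2 white square gains a one-pixel subcritical grey ring around it
layer = np.zeros((6, 6), dtype=np.uint8)
layer[2:4, 2:4] = 1
projected = add_grey_sheath(layer)
```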

21. The method of claim 1, wherein applying one or more digital transformations to a build file further comprises: defining a first collection of pixels, the first collection of pixels defined as within a first distance of void pixels of a build image; defining a second collection of pixels, the second collection of pixels defined as those pixels that are neither void pixels nor part of the first collection of pixels; and performing anti-density operations on the first collection of pixels.

22. The method of claim 21, wherein the applying one or more digital transformations to a build file further comprise performing a grey-scale on the second collection of pixels.

23. The method of claim 22, wherein the grey-scale is substantially uniform across all pixels in the second collection of pixels.

24. The method of claim 21, wherein the members of the first collection of pixels are defined by a low-pass filter.
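The claims do not define the "anti-density operations" of claim 21, so the sketch below only illustrates the partition of claims 21-24: solid pixels are split into a near-void first collection (here via a simple distance check, standing in for the low-pass filter of claim 24) and a bulk second collection, and an assumed grey value is applied to each. Both grey values and the interpretation of anti-density as dimming are assumptions for illustration.

```python
import numpy as np

def split_collections(image, distance=1):
    """Partition solid pixels: the first collection lies within
    `distance` of a void pixel (claim 21); the second is the rest."""
    void_padded = np.pad(image == 0, distance, mode="constant",
                         constant_values=True)
    h, w = image.shape
    near_void = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            window = void_padded[y:y + 2 * distance + 1,
                                 x:x + 2 * distance + 1]
            if image[y, x] == 1 and window.any():
                near_void[y, x] = True
    return near_void, (image == 1) & ~near_void

def apply_greys(image, near_grey=0.6, bulk_grey=0.9):
    """Dim the near-void collection (a stand-in for anti-density
    operations) and apply a substantially uniform grey-scale to the
    bulk (claims 22-23)."""
    first, second = split_collections(image)
    out = np.zeros(image.shape, dtype=float)
    out[first] = near_grey
    out[second] = bulk_grey
    return out

body = np.zeros((6, 6), dtype=np.uint8)
body[1:5, 1:5] = 1                 # a solid block surrounded by void
shaded = apply_greys(body)
```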

25. A method for building a printed part by additive manufacturing, comprising: applying one or more digital transformations to a build file by: defining a first layer of an image of the build file; defining a second layer of the image of the build file, wherein the second layer is to be printed after the first layer, and at least some pixels in the second layer correspond to pixels in the first layer; comparing the first layer to the second layer to establish a location of an overhang in the second layer; and removing the overhang in the second layer.

26. The method of claim 25, wherein comparing the first layer to the second layer to establish a location of an overhang in the second layer further comprises: flooding the build file to identify at least two distinct bodies of pixels, the at least two distinct bodies of pixels comprising one or more pixels present in the first layer and one or more pixels present in the second layer, wherein the one or more pixels in the first layer includes at least some pixels that are aligned with akin pixels of the one or more pixels in the second layer.

27. The method of claim 26, wherein flooding the build file to identify at least two distinct bodies of pixels further comprises: defining each white pixel on the first layer as a seed point; and operating a flood-fill tool to flood both the first layer and the second layer with pixels that identify supported pixels.

28. The method of claim 26, further comprising: identifying each white pixel of the second layer that is within a designated distance of at least one supporting white pixel of the first layer; and removing any white pixel of the second layer that is not identified as being within the designated distance.

29. The method of claim 26, further comprising: comparing a third layer of the image of the build file that precedes the first layer and the second layer in an overlay to confirm the location of the overhang.

30. The method of claim 25, wherein applying one or more digital transformations to a build file further comprises: defining a compression parameter and at least one dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first dimension of an image to be printed multiplied by the compression parameter; sampling the first scaled image into a plurality of evenly-distributed segments, the total number of segments being equal to the dimension of an image to be printed; calculating an average intensity value across each evenly-distributed segment of the plurality of evenly-distributed segments; and rendering a first sampled image, having an image dimension equivalent to the first dimension of an image to be printed, the first sampled image being comprised of the average intensity values.

31. The method of claim 30, wherein the at least one dimension of an image to be printed comprises a sum of at least one voxel in a Z direction.

32. The method of claim 30, wherein the at least one dimension of an image to be printed comprises a sum of the total number of pixels in an array defined by an X direction and a Y direction.

33. The method of claim 30, wherein the compression parameter comprises a multiple of two.

34. The method of claim 33, wherein the compression parameter comprises a multiple of four.

35. The method of claim 25, wherein applying one or more digital transformations to a build file further comprises: defining a banding parameter and at least one dimension of an image to be printed; rendering a first original image in a first dimension of an image to be printed; identifying edge pixels within the banding parameter of the first original image; and dyeing the identified band pixels a shade of grey to create a dyed band image.

36. The method of claim 35, wherein the banding parameter comprises one pixel.

37. The method of claim 36, wherein the banding parameter comprises at least two pixels.

38. The method of claim 25, wherein applying one or more digital transformations to a build file further comprises: defining an expansion parameter and at least one dimension of an image to be printed; rendering a first original image in the first dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first original image plus the expansion parameter; subtracting the original image from the scaled image to create an image sheath; dyeing the sheath a subcritical shade of grey to create a dyed sheath; and adding the dyed sheath to the original image in the first dimension of an image to be printed to create a first projected image.

39. The method of claim 38, wherein the expansion parameter comprises one pixel.

40. The method of claim 25, wherein applying one or more digital transformations to a build file further comprises: defining a first collection of pixels, the first collection of pixels defined as within a first distance of void pixels of a build image; defining a second collection of pixels, the second collection of pixels defined as those pixels that are neither void pixels nor part of the first collection of pixels; and performing anti-density operations on the first collection of pixels.

41. The method of claim 40, wherein the applying one or more digital transformations to a build file further comprise performing a grey-scale on the second collection of pixels.

42. The method of claim 41, wherein the grey-scale is substantially uniform across all pixels in the second collection of pixels.

43. The method of claim 40, wherein the members of the first collection of pixels are defined by a low-pass filter.

44. A method for building a printed part by additive manufacturing, the method comprising: applying one or more digital transformations to a build file by: defining a compression parameter and at least one dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first dimension of an image to be printed multiplied by the compression parameter; sampling the first scaled image into a plurality of evenly-distributed segments, the total number of segments being equal to the dimension of an image to be printed; calculating an average intensity value across each evenly-distributed segment of the plurality of evenly-distributed segments; and rendering a first sampled image, having an image dimension equivalent to the first dimension of an image to be printed, the first sampled image being comprised of the average intensity values.

45. The method of claim 44, wherein the at least one dimension of an image to be printed comprises a sum of at least one voxel in a Z direction.

46. The method of claim 44, wherein the at least one dimension of an image to be printed comprises a sum of the total number of pixels in an array defined by an X direction and a Y direction.

47. The method of claim 44, wherein the compression parameter comprises a multiple of two.

48. The method of claim 47, wherein the compression parameter comprises a multiple of four.

49. A method for building a printed part by additive manufacturing, comprising: applying one or more digital transformations to a build file by: defining a banding parameter and at least one dimension of an image to be printed; rendering a first original image in a first dimension of an image to be printed; identifying edge pixels within the banding parameter of the first original image; and dyeing the identified band pixels a shade of grey to create a dyed band image.

50. The method of claim 49, wherein the banding parameter comprises one pixel.

51. The method of claim 50, wherein the banding parameter comprises at least two pixels.

52. A method for building a printed part by additive manufacturing, comprising: applying one or more digital transformations to a build file by: defining an expansion parameter and at least one dimension of an image to be printed; rendering a first original image in the first dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first original image plus the expansion parameter; subtracting the original image from the scaled image to create an image sheath; dyeing the sheath a subcritical shade of grey to create a dyed sheath; and adding the dyed sheath to the original image in the first dimension of an image to be printed to create a first projected image.

53. The method of claim 52, wherein the expansion parameter comprises one pixel.

54. A method for building a printed part by additive manufacturing, comprising: applying one or more digital transformations to a build file by: defining a first collection of pixels, the first collection of pixels defined as within a first distance of void pixels of a build image; defining a second collection of pixels, the second collection of pixels defined as those pixels that are neither void pixels nor part of the first collection of pixels; and performing anti-density operations on the first collection of pixels.

55. The method of claim 54, wherein the applying one or more digital transformations to a build file further comprise performing a grey-scale on the second collection of pixels.

56. The method of claim 55, wherein the grey-scale is substantially uniform across all pixels in the second collection of pixels.

57. The method of claim 54, wherein the members of the first collection of pixels are defined by a low-pass filter.

58. An additive manufacturing printer, comprising: a tank configured to have a photopolymer resin material disposed therein; a build plate disposed above the tank and configured to at least move along a vertical axis, away from the tank; a light source configured to project an image of a part to be printed towards the tank; and a processor, configured to: apply one or more digital transformations to a build file, the one or more digital transformations comprising: defining a first layer; defining a second layer, wherein the second layer is to be printed after the first layer, and at least some pixels in the second layer correspond to pixels in the first layer; comparing the first layer to the second layer to establish a location of an unsupported region in the second layer; removing the unsupported region in the second layer.

59. The additive manufacturing printer of claim 58, wherein comparing the first layer to the second layer to establish a location of an unsupported region in the second layer further comprises: eroding the image by an image kernel.

60. The additive manufacturing printer of claim 59, wherein eroding the image by an image kernel further comprises: shrinking all edges of the image.

61. The additive manufacturing printer of claim 59, wherein eroding the image by an image kernel further comprises: applying a matrix of a pre-determined size to the image; checking that a designated grid pattern of pixels exists in a designated area; and if the designated grid pattern of pixels does not exist in the designated area, marking all pixels in the designated area.

62. The additive manufacturing printer of claim 59, wherein removing the unsupported region in the second layer further comprises the image having at least one of one or more borders or one or more islands removed where there had been thin spans before having been removed by eroding the image.

63. The additive manufacturing printer of claim 58, further comprising: comparing the first layer to the second layer to establish a location of an overhang in the second layer; and removing the overhang in the second layer.

64. The additive manufacturing printer of claim 63, wherein comparing the first layer to the second layer to establish a location of an overhang in the second layer further comprises: flooding the build file to identify at least two distinct bodies of pixels, the at least two distinct bodies of pixels comprising one or more pixels present in the first layer and one or more pixels present in the second layer, wherein the one or more pixels in the first layer includes at least some pixels that are aligned with akin pixels of the one or more pixels in the second layer.

65. The additive manufacturing printer of claim 64, wherein flooding the build file to identify at least two distinct bodies of pixels further comprises: defining each white pixel on the first layer as a seed point; and operating a flood-fill tool to flood both the first layer and the second layer with pixels that identify supported pixels.

66. The additive manufacturing printer of claim 64, further comprising: identifying each white pixel of the second layer that is within a designated distance of at least one supporting white pixel of the first layer; and removing any white pixel of the second layer that is not identified as being within the designated distance.

67. The additive manufacturing printer of claim 64, further comprising: comparing a third layer of the image of the build file that precedes the first layer and the second layer in an overlay to confirm the location of the overhang.

68. The additive manufacturing printer of claim 58, wherein the light source comprises a digital light projector.

69. The additive manufacturing printer of claim 58, wherein the light source comprises a laser.

70. An additive manufacturing printer, comprising: a tank configured to have a photopolymer resin material disposed therein; a build plate disposed above the tank and configured to at least move along a vertical axis, away from the tank; a light source configured to project an image of a part to be printed towards the tank; and a processor, configured to: apply one or more digital transformations to a build file, the one or more digital transformations comprising: defining a first layer of an image of the build file; defining a second layer of an image of the build file, wherein the second layer is to be printed after the first layer, and at least some pixels in the second layer correspond to pixels in the first layer; comparing the first layer to the second layer to establish a location of an overhang in the second layer; and removing the overhang in the second layer.

71. The additive manufacturing printer of claim 70, wherein comparing the first layer to the second layer to establish a location of an overhang in the second layer further comprises: flooding the build file to identify at least two distinct bodies of pixels, the at least two distinct bodies of pixels comprising one or more pixels present in the first layer and one or more pixels present in the second layer, wherein the one or more pixels in the first layer includes at least some pixels that are aligned with akin pixels of the one or more pixels in the second layer.

72. The additive manufacturing printer of claim 71, wherein flooding the build file to identify at least two distinct bodies of pixels further comprises: defining each white pixel on the first layer as a seed point; and operating a flood-fill tool to flood both the first layer and the second layer with pixels that identify supported pixels.

73. The additive manufacturing printer of claim 71, further comprising: identifying each white pixel of the second layer that is within a designated distance of at least one supporting white pixel of the first layer; and removing any white pixel of the second layer that is not identified as being within the designated distance.

74. The additive manufacturing printer of claim 70, wherein the one or more digital transformations further comprises: comparing a third layer of the image of the build file that precedes the first layer and the second layer in an overlay to confirm the location of the overhang.

75. The additive manufacturing printer of claim 70, wherein the light source comprises a digital light projector.

76. The additive manufacturing printer of claim 70, wherein the light source comprises a laser.

77. An additive manufacturing printer, comprising: a tank configured to have a photopolymer resin material disposed therein; a build plate disposed above the tank and configured to at least move along a vertical axis, away from the tank; a light source configured to project an image of a part to be printed towards the tank; and a processor, configured to: apply one or more digital transformations to a build file, the one or more digital transformations comprising: defining a compression parameter and at least one dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first dimension of an image to be printed multiplied by the compression parameter; sampling the first scaled image into a plurality of evenly-distributed segments, the total number of segments being equal to the dimension of an image to be printed; calculating an average intensity value across each evenly-distributed segment of the plurality of evenly-distributed segments; and rendering a first sampled image, having an image dimension equivalent to the first dimension of an image to be printed, the first sampled image being comprised of the average intensity values.

78. The additive manufacturing printer of claim 77, wherein the at least one dimension of an image to be printed comprises a sum of at least one voxel in a Z direction.

79. The additive manufacturing printer of claim 77, wherein the at least one dimension of an image to be printed comprises a sum of the total number of pixels in an array defined by an X direction and a Y direction.

80. The additive manufacturing printer of claim 77, wherein the compression parameter comprises a multiple of two.

81. The additive manufacturing printer of claim 80, wherein the compression parameter comprises a multiple of four.

82. The additive manufacturing printer of claim 77, wherein the light source comprises a digital light projector.

83. The additive manufacturing printer of claim 77, wherein the light source comprises at least one laser.

84. An additive manufacturing printer, comprising: a tank configured to have a photopolymer resin material disposed therein; a build plate disposed above the tank and configured to at least move along a vertical axis, away from the tank; a light source configured to project an image of a part to be printed towards the tank; and a processor, configured to: apply one or more digital transformations to a build file, the one or more digital transformations comprising: defining a banding parameter and at least one dimension of an image to be printed; rendering a first original image in a first dimension of an image to be printed; identifying edge pixels within the banding parameter of the first original image; and dyeing the identified band pixels a shade of grey to create a dyed band image.

85. The additive manufacturing printer of claim 84, wherein the banding parameter comprises one pixel.

86. The additive manufacturing printer of claim 85, wherein the banding parameter comprises at least two pixels.

87. The additive manufacturing printer of claim 84, wherein the light source comprises a digital light projector.

88. The additive manufacturing printer of claim 84, wherein the light source comprises a laser.

89. An additive manufacturing printer, comprising: a tank configured to have a photopolymer resin material disposed therein; a build plate disposed above the tank and configured to at least move along a vertical axis, away from the tank; a light source configured to project an image of a part to be printed towards the tank; and a processor, configured to: apply one or more digital transformations to a build file, the one or more digital transformations comprising: defining an expansion parameter and at least one dimension of an image to be printed; rendering a first original image in the first dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first original image plus the expansion parameter; subtracting the original image from the scaled image to create an image sheath; dyeing the sheath a subcritical shade of grey to create a dyed sheath; and adding the dyed sheath to the original image in the first dimension of an image to be printed to create a first projected image.

90. The additive manufacturing printer of claim 89, wherein the expansion parameter comprises one pixel.

91. The additive manufacturing printer of claim 89, wherein the light source comprises a digital light projector.

92. The additive manufacturing printer of claim 89, wherein the light source comprises at least one laser.

93. An additive manufacturing printer, comprising: a tank configured to have a photopolymer resin material disposed therein; a build plate disposed above the tank and configured to at least move along a vertical axis, away from the tank; a light source configured to project an image of a part to be printed towards the tank; and a processor, configured to: apply one or more digital transformations to a build file, the one or more digital transformations comprising: defining a first collection of pixels, the first collection of pixels defined as within a first distance of void pixels of a build image; defining a second collection of pixels, the second collection of pixels defined as those pixels that are neither void pixels nor part of the first collection of pixels; and performing anti-density operations on the first collection of pixels.

94. The additive manufacturing printer of claim 93, wherein the digital transformations further comprise performing a grey-scale on the second collection of pixels.

95. The additive manufacturing printer of claim 94, wherein the grey-scale is substantially uniform across all pixels in the second collection of pixels.

96. The additive manufacturing printer of claim 93, wherein the members of the first collection of pixels are defined by a low-pass filter.

97. The additive manufacturing printer of claim 93, wherein the light source comprises a digital light projector.

98. The additive manufacturing printer of claim 93, wherein the light source comprises a laser.

Description:
IMAGE TRANSFORMATIONS TO ENHANCE EDGE AND SURFACE CLARITY IN ADDITIVE MANUFACTURING

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of and priority from U.S. Provisional Application No. 63/326,272, filed March 31, 2022, and U.S. Provisional Application No. 63/415,650, filed October 12, 2022, the disclosures of each of which are hereby incorporated by reference herein in their entireties.

FIELD

[0002] The present disclosure relates to systems and methods for improving edge and surface clarity in additive manufacturing through photopolymerization, and more particularly relates to transforming 2D images used in conjunction with such manufacturing to provide stratified dosing of light or photons when printing a slice of the final 3D part characterized by such 2D image.

BACKGROUND

[0003] In additive manufacturing processes, including digital light processing (DLP) printing methods, 3D models are sliced into stacks of images that have an aliased pattern. An image rendering process, known as a slicer, cuts a tessellated geometry into layers that are then rendered as an image using a ray tracing function in a graphics processing unit (GPU). In order to print a part from this stack of images, DLP photopolymer additive manufacturing printers project 2D images with a selected slice thickness that represent slices of a 3D model. Light then cures the photopolymer resin in the region that the image is in, changing the state of the photopolymer resin from liquid to solid. As this process involves interactions between light and matter in the form of a fiber filled resin, effects such as scattering and interference can impact smaller features, edges, and thinner layers of printed parts.

[0004] While attempts have been made to address the adverse effects of scatter and interference, current practices in digital light processing (DLP) printing do not completely address these issues, which arise when projecting light into at least certain photoresins, including particle-filled resins used at least with respect to tooling and radio frequency (RF) applications. Scatter can, for example, significantly reduce the degree of accuracy for curing photoresins across high-precision components of printed features, most notably edge boundaries and small features. Similarly, interference patterns can result in rougher edges of parts. These and other effects result in the inability of a printer to project a perfectly perpendicular column of light to the resin as would be optimal, and accordingly there is a final roughness associated with 3D-printed parts using known printing techniques.

[0005] In addition to practical effects caused by the properties of interactions between light and matter, even an ideal case where these effects could be completely mitigated will still result in an imperfect part. This is at least because the conversion of organic lines to positions and intensity values of pixels and associated voxels of a rectilinear matrix is inherently limited by the resolution of that pixel array. A pixel array having higher resolution is able to more accurately trace curves. Practical concerns such as the precision lighting capabilities of a printing device, sizes of image files, and complexity of calculations necessary to transform such files, result in limited sizes of pixel arrays.

[0006] Overhangs and spans represent challenges for additive manufacturing systems, especially fused filament fabrication (FFF), DLP, and SLA style 3D printers. An overhang is an area where unsupported material needs to be deposited as part of the additive process. Additive manufacturing processes, such as 3D printing, often involve building a printed part in a series of layers. However, because each layer builds upon the previously deposited layer, this can mean having to cure resin in a layer that is not attached to any cured resin in the previous layer. In a first example, an overhang could be the flat ceiling of a model house, where the house is largely empty until the flat ceiling layer is printed, and thus is unsupported by cured resin below. In another example, such an overhang could be the outstretched arm of an action figure. A span is a thin connecting region of a part that connects two bulkier regions of the same part, like a small bridge connecting two land masses. The result of printing an overhang or span without proper support is, at minimum, a malformed part, but further implications include clogging or damage to the printer itself.

[0007] In the art, a known solution to this problem involves adding support structures. This treatment can be complicated and impractical in cases such as lattices, where there are many (often hundreds) of small, tooth-like overhangs on the boundary edge of a part. These overhangs are often not properly supported. Further, spans connect sections of parts with precariously thin elements to aid in the printing process. In instances in which these thin elements are, for example, small, tooth-like overhangs and/or thin spans, they can be challenging to remove from the part design itself.

[0008] Accordingly, there is a need for systems and methods to regulate light application during a photopolymerization additive manufacturing process to reduce and/or eliminate the detrimental effects of scatter and interference, and similarly for related systems and methods that improve the smoothness of curves in printers with fixed pixel array densities. Likewise, there is a need for systems and methods that can help remove or eliminate overhangs and spans (e.g., thin spans) from a geometry prior to printing.

SUMMARY

[0009] The example embodiments disclosed herein relate to the mitigation of scattering effects in additive manufacturing systems and methods utilizing photopolymerization. In at least one example embodiment, an additive manufacturing printer includes a tank, a build plate, a light source, and a processor. The tank is configured to house a photopolymer resin material. The build plate is disposed above the tank and is configured to at least move along a vertical axis, away from the tank. The light source, which can be a digital light projector and/or a laser, is configured to project an image of a part to be printed towards the tank. The processor is configured to apply one or more digital transformations to a build file. A number of these digital transformations are disclosed herein and they can be used as standalone transformations and/or they can be combined with other digital transformations.

[0010] The digital transformations can provide an adjusted light intensity at one or more designated pixels of the image projected by the light source. The adjusted light intensity is based, at least in part, on an intended light intensity, i.e., the light intensity that would be supplied to one or more nearby pixels of the one or more designated pixels prior to application of the one or more digital transformations. The adjusted light intensity for the one or more designated pixels can be inversely proportional to the intended light intensity for the one or more nearby pixels.

[0011] One example of a digital transformation applied to a build file by the processor is sometimes referred to herein as XY sampling or an XY sampling technique. The transformation includes defining a compression parameter and at least one dimension of an image to be printed, and rendering a first scaled image having an image dimension equal to the first dimension of an image to be printed multiplied by the compression parameter. The transformation also includes sampling the first scaled image into a plurality of evenly-distributed segments, with the total number of segments being equal to the dimension of an image to be printed. Still further, the transformation includes calculating an average intensity value across each evenly-distributed segment of the plurality of evenly-distributed segments, and rendering a first sampled image, the first sampled image having an image dimension equivalent to the first dimension of an image to be printed, and the first sampled image including the average intensity values.
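As a simple illustration of the averaging at the heart of this technique, the following Python sketch downsamples a single row of a supersampled image by averaging each block of `compression` adjacent pixels. The function name `xy_sample` and the 8-bit grey values are illustrative only, not drawn from the disclosure:

```python
def xy_sample(scaled_row, compression):
    """Downsample one row of a scaled (supersampled) image: each block of
    `compression` adjacent pixels is averaged into one output pixel, so the
    output has the original (printable) dimension."""
    assert len(scaled_row) % compression == 0
    out = []
    for i in range(0, len(scaled_row), compression):
        segment = scaled_row[i:i + compression]
        out.append(sum(segment) / len(segment))  # average intensity of segment
    return out

# A feature rendered at 4x resolution, sampled down to printer resolution;
# the partially covered edge segments come out grey rather than white:
hi_res = [0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0]
print(xy_sample(hi_res, 4))  # [127.5, 255.0, 127.5]
```

The grey edge values are what let a lower-resolution projector approximate a higher-resolution curve.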

[0012] In at least some instances in which the processor employs the XY sampling technique, the at least one dimension of an image to be printed can include a sum of at least one voxel in a Z direction. Additionally, or alternatively, the at least one dimension of an image to be printed can include a sum of the total number of pixels in an array defined by an X direction and a Y direction. The compression parameter can include a multiple of two, and in some such embodiments, the compression parameter can include a multiple of four.

[0013] Another example of a digital transformation applied to a build file by the processor is sometimes referred to herein as sheathing or a sheathing technique. The transformation includes defining an expansion parameter and at least one dimension of an image to be printed, and rendering a first original image in the first dimension of an image to be printed. The transformation also includes rendering a first scaled image having an image dimension equal to the first original image plus the expansion parameter, and subtracting the original image from the scaled image to create an image sheath. Still further, the transformation includes dyeing the sheath a subcritical shade of grey to create a dyed sheath, and adding the dyed sheath to the original image in the first dimension of an image to be printed to create a first projected image.
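One plausible way to realize this sheath digitally is sketched below. Rather than rendering and subtracting a scaled image, this illustrative `sheathe` function computes the equivalent result directly: the ring of void pixels 4-connected to the part (i.e., the expanded image minus the original) is dyed a grey value. The grey value 96 is an arbitrary stand-in for a subcritical shade:

```python
def sheathe(image, grey=96):
    """Add a one-pixel grey 'sheath' around every white (255) region of a
    binary image: void (0) pixels that touch a white pixel (4-connectivity)
    receive a subcritical grey value; the part itself is unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if image[y][x] != 0:
                continue  # only void pixels can become sheath pixels
            neighbors = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(0 <= ny < h and 0 <= nx < w and image[ny][nx] == 255
                   for ny, nx in neighbors):
                out[y][x] = grey
    return out

# A single white pixel gains a grey cross of sheath pixels around it:
img = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
print(sheathe(img))  # [[0, 96, 0], [96, 255, 96], [0, 96, 0]]
```

An expansion parameter larger than one pixel would repeat the pass or widen the neighborhood test accordingly.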

[0014] Still another example of a digital transformation applied to a build file by the processor is sometimes referred to herein as banding or a banding technique. The transformation includes defining a banding parameter and at least one dimension of an image to be printed, and rendering a first original image in a first dimension of an image to be printed. The transformation also includes identifying edge pixels within the banding parameter of the first original image, and dyeing the identified band pixels a shade of grey to create a dyed band image.
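A minimal sketch of the banding step follows, assuming a binary image where 255 is part and 0 is void. White pixels within the banding parameter of a void pixel (or of the image border) are dyed grey, retreating full intensity inward from the part's edges. The function name `band` and the grey value 128 are illustrative:

```python
def band(image, band_width=1, grey=128):
    """Dye white (255) pixels lying within `band_width` pixels of a void (0)
    pixel, or of the image border, to a grey value; interior pixels keep
    full intensity."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if image[y][x] != 255:
                continue
            # Chebyshev-distance check against void pixels in a local window
            near_void = any(
                image[ny][nx] == 0
                for ny in range(max(0, y - band_width), min(h, y + band_width + 1))
                for nx in range(max(0, x - band_width), min(w, x + band_width + 1)))
            on_border = (y < band_width or x < band_width or
                         y >= h - band_width or x >= w - band_width)
            if near_void or on_border:
                out[y][x] = grey
    return out

# A solid 4x4 white square: the one-pixel edge band turns grey, the
# interior 2x2 block stays full white.
square = [[255] * 4 for _ in range(4)]
banded = band(square)
```

Stacked bands (per paragraph [0038]) could be produced by repeated passes with increasing band widths and progressively lighter greys.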

[0015] In at least some instances in which the processor employs the sheathing technique, the expansion parameter can include one pixel. In at least some instances in which the processor employs the banding technique, the banding parameter can include one pixel, and in at least some such embodiments, the banding parameter can include at least two pixels.

[0016] Another example of a digital transformation applied to a build file by the processor is sometimes referred to as skin and bones or a skin and bones technique. The transformation includes defining a first collection of pixels, with the first collection of pixels being defined as within a first distance of void pixels of a build image. The transformation further includes defining a second collection of pixels, with the second collection of pixels defined as those pixels that are neither void pixels nor part of the first collection of pixels. Still further, the transformation includes performing anti-density operations on the first collection of pixels.
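The classification at the core of this technique can be sketched as follows. The disclosure leaves the anti-density operations themselves open, so this illustrative `classify_skin_and_bones` function only partitions the white pixels into the two collections, using a simple Chebyshev-distance test as an assumed stand-in for the low-pass filter mentioned later:

```python
def classify_skin_and_bones(image, skin_distance=1):
    """Partition the white (255) pixels of a binary build image into 'skin'
    (the first collection: within `skin_distance` of a void pixel) and
    'bones' (the second collection: all remaining white pixels)."""
    h, w = len(image), len(image[0])
    skin, bones = set(), set()
    for y in range(h):
        for x in range(w):
            if image[y][x] == 0:
                continue  # void pixels belong to neither collection
            near_void = any(
                image[ny][nx] == 0
                for ny in range(max(0, y - skin_distance), min(h, y + skin_distance + 1))
                for nx in range(max(0, x - skin_distance), min(w, x + skin_distance + 1)))
            (skin if near_void else bones).add((y, x))
    return skin, bones

# A 3x3 white block inside a void border: the ring is skin, the center is bone.
img = [[0] * 5,
       [0, 255, 255, 255, 0],
       [0, 255, 255, 255, 0],
       [0, 255, 255, 255, 0],
       [0] * 5]
skin, bones = classify_skin_and_bones(img)
```

Anti-density operations would then be applied only to the skin set, e.g., dimming pixel intensities there, with an optional uniform grey-scale applied to the bones set.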

[0017] In at least some instances in which the processor employs the skin and bones technique, the digital transformations can include performing a grey-scale on the second collection of pixels, and in at least some such embodiments, the grey-scale can be substantially uniform across all pixels in the second collection of pixels. The members of the first collection of pixels can be defined, for example, by a low-pass filter.

[0018] Yet another example of a digital transformation applied to a build file by the processor is sometimes referred to as an island or an island technique, which can be used, for example, to address overhangs. The transformation includes defining a first layer of an image of a build file and defining a second layer of an image of the build file. The second layer is to be printed after the first layer, and at least some pixels in the second layer correspond to pixels in the first layer. The transformation further includes comparing the first layer to the second layer to establish a location of an overhang in the second layer, and removing the overhang in the second layer.

[0019] In at least some instances in which the processor employs the island technique, comparing the first layer to the second layer to establish a location of an overhang in the second layer can include flooding the build file to identify at least two distinct bodies of pixels. The at least two distinct bodies of pixels can include one or more pixels present in the first layer and one or more pixels present in the second layer. Further, the one or more pixels in the first layer can include at least some pixels that are aligned with akin pixels of the one or more pixels in the second layer. Flooding the build file to identify at least two distinct bodies of pixels can include defining each white pixel on the first layer as a seed point and operating a flood-fill tool to flood both the first layer and the second layer with pixels that identify supported pixels. Alternatively, or additionally, flooding the build file to identify at least two distinct bodies of pixels can include identifying each white pixel of the second layer that is within a designated distance of at least one supporting white pixel of the first layer, and removing any white pixel of the second layer that is not identified as being within the designated distance. The one or more digital transformations can include comparing a third layer of the image of the build file that precedes the first layer and the second layer in an overlay to confirm the location of the overhang.
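A hedged sketch of the flood-fill reading of this analysis follows: seed the flood with every second-layer white pixel that sits directly above a first-layer white pixel, spread through 4-connected white neighbors to mark supported pixels, and discard any white pixel the flood never reaches (an island). Function and variable names are illustrative:

```python
def remove_islands(first_layer, second_layer):
    """Remove white (255) pixels in `second_layer` whose connected region
    contains no pixel directly above a white pixel of `first_layer`."""
    h, w = len(second_layer), len(second_layer[0])
    supported = [[False] * w for _ in range(h)]
    # Seed points: second-layer white pixels resting on first-layer support.
    stack = [(y, x) for y in range(h) for x in range(w)
             if second_layer[y][x] == 255 and first_layer[y][x] == 255]
    for y, x in stack:
        supported[y][x] = True
    # Flood outward through 4-connected white pixels of the second layer.
    while stack:
        y, x = stack.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and
                    second_layer[ny][nx] == 255 and not supported[ny][nx]):
                supported[ny][nx] = True
                stack.append((ny, nx))
    # Any white pixel the flood never reached is an unsupported island.
    return [[255 if supported[y][x] else 0 for x in range(w)] for y in range(h)]

# The rightmost white pixel is not connected to any supported pixel, so it
# is identified as an island and removed:
first = [[255, 0, 0, 0]]
second = [[255, 255, 0, 255]]
print(remove_islands(first, second))  # [[255, 255, 0, 0]]
```

The designated-distance variant described above would relax the seed test from exact vertical alignment to a small neighborhood.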

[0020] Another example of a digital transformation applied to a build file by the processor is sometimes referred to as a bridge or a bridge technique. The transformation includes defining a first layer of an image of a build file and defining a second layer of an image of the build file. The second layer is to be printed after the first layer, and at least some pixels in the second layer correspond to pixels in the first layer. The transformation further includes comparing the first layer to the second layer to establish a location of an unsupported region in the second layer, and removing the unsupported region in the second layer.

[0021] In at least some instances in which the processor employs the bridge technique, comparing the first layer to the second layer to establish a location of an unsupported region in the second layer can include eroding the image by an image kernel. In at least some such embodiments, eroding the image by an image kernel can include shrinking all edges of the image. Alternatively, or additionally, eroding the image by an image kernel can include applying a matrix of a pre-determined size to the image and checking that a designated grid pattern of pixels exists in a designated area. If the designated grid pattern of pixels does not exist in the designated area, eroding the image by an image kernel can further include marking all pixels in the designated area. Removing the unsupported region in the second layer can further include removing from the image at least one of one or more borders or one or more islands located where thin spans had existed prior to being removed by eroding the image.
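A minimal sketch of the erosion step under one common reading: a white pixel survives only when its full (2k+1) x (2k+1) neighborhood is entirely white, so single-pixel-wide spans disappear and all edges of the image shrink. The function name `erode` and the kernel size are illustrative assumptions:

```python
def erode(image, k=1):
    """Binary erosion with a (2k+1)x(2k+1) square kernel: a white (255)
    pixel survives only if every pixel in its window (including pixels
    beyond the image border, treated as void) is also white. Thin spans
    vanish and all edges shrink."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if image[y][x] != 255:
                continue
            window_all_white = all(
                0 <= ny < h and 0 <= nx < w and image[ny][nx] == 255
                for ny in range(y - k, y + k + 1)
                for nx in range(x - k, x + k + 1))
            if window_all_white:
                out[y][x] = 255
    return out

# Two 3x3 blocks joined by a one-pixel-wide span: after erosion only the
# block centers remain, and the span is gone.
img = [[255, 255, 255, 0, 255, 255, 255],
       [255, 255, 255, 255, 255, 255, 255],
       [255, 255, 255, 0, 255, 255, 255]]
eroded = erode(img)
```

Comparing the eroded image against the original would then reveal where thin spans existed, marking candidate unsupported regions for removal.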

[0022] The various digital transformations provided for herein (e.g., sampling, sheathing, banding, skin and bones, island, bridge) can be mixed and matched as desired in any combination. That is, the processor can employ any one or any combination of the digital transformations disclosed.

[0023] Methods for implementing the provided techniques are also disclosed. In various embodiments, the methods are for building a printed part by additive manufacturing, and the methods include the action of applying one or more digital transformations to a build file.

[0024] In instances in which the applied digital transformation(s) employs the XY sampling technique, the digital transformation(s) applying action includes defining a compression parameter and at least one dimension of an image to be printed, and rendering a first scaled image having an image dimension equal to the first dimension of an image to be printed multiplied by the compression parameter. The applying action also includes sampling the first scaled image into a plurality of evenly-distributed segments, with the total number of segments being equal to the dimension of an image to be printed, and calculating an average intensity value across each evenly-distributed segment of the plurality of evenly-distributed segments. Still further, the applying action includes rendering a first sampled image, the first sampled image having an image dimension equivalent to the first dimension of an image to be printed, and the first sampled image being comprised of the average intensity values.

[0025] In at least some instances in which the digital transformation(s) applying action employs the XY sampling technique, the at least one dimension of an image to be printed can include a sum of at least one voxel in a Z direction. Additionally, or alternatively, the at least one dimension of an image to be printed can include a sum of the total number of pixels in an array defined by an X direction and a Y direction. The compression parameter can include a multiple of two, and in some such embodiments, the compression parameter can include a multiple of four.

[0026] In instances in which the applied digital transformation(s) employs the sheathing technique, the digital transformation(s) applying action includes defining an expansion parameter and at least one dimension of an image to be printed and rendering a first original image in the first dimension of an image to be printed. The applying action also includes rendering a first scaled image having an image dimension equal to the first original image plus the expansion parameter, and subtracting the original image from the scaled image to create an image sheath. Still further, the applying action includes dyeing the sheath a subcritical shade of grey to create a dyed sheath, and adding the dyed sheath to the original image in the first dimension of an image to be printed to create a first projected image.

[0027] In instances in which the applied digital transformation(s) employs the banding technique, the digital transformation(s) applying action includes defining a banding parameter and at least one dimension of an image to be printed, and rendering a first original image in a first dimension of an image to be printed. The transformation also includes identifying edge pixels within the banding parameter of the first original image, and dyeing the identified band pixels a shade of grey to create a dyed band image.

[0028] In at least some instances in which the applied digital transformation(s) employs the sheathing technique, the expansion parameter can include one pixel. In at least some instances in which the applied digital transformation(s) employs the banding technique, the banding parameter can include one pixel, and in at least some such embodiments, the banding parameter can include at least two pixels.

[0029] In instances in which the applied digital transformation(s) employs the skin and bones technique, the digital transformation(s) applying action includes defining a first collection of pixels, with the first collection of pixels being defined as within a first distance of void pixels of a build image. The applying action further includes defining a second collection of pixels, with the second collection of pixels defined as those pixels that are neither void pixels nor part of the first collection of pixels. Still further, the applying action includes performing anti-density operations on the first collection of pixels.

[0030] In at least some instances in which the applied digital transformation(s) employs the skin and bones technique, the digital transformation(s) applying action can include performing a grey-scale on the second collection of pixels, and in at least some such embodiments, the grey-scale can be substantially uniform across all pixels in the second collection of pixels. The members of the first collection of pixels can be defined, for example, by a low-pass filter.

[0031] In instances in which the applied digital transformation(s) employs the island technique, the digital transformation(s) applying action includes defining a first layer of an image of a build file and defining a second layer of the image of the build file. The second layer is to be printed after the first layer, and at least some pixels in the second layer correspond to pixels in the first layer. The digital transformation(s) applying action further includes comparing the first layer to the second layer to establish a location of an overhang in the second layer, and removing the overhang in the second layer.

[0032] In at least some instances in which the applied digital transformation(s) employs the island technique, comparing the first layer to the second layer to establish a location of an overhang in the second layer can include flooding the build file to identify at least two distinct bodies of pixels. The at least two distinct bodies of pixels can include one or more pixels present in the first layer and one or more pixels present in the second layer. Further, the one or more pixels in the first layer can include at least some pixels that are aligned with akin pixels of the one or more pixels in the second layer. The action of flooding the build file to identify at least two distinct bodies of pixels can include defining each white pixel on the first layer as a seed point and operating a flood-fill tool to flood both the first layer and the second layer with pixels that identify supported pixels. The method in which the island technique is employed can further include identifying each white pixel of the second layer that is within a designated distance of at least one supporting white pixel of the first layer and removing any white pixel of the second layer that is not identified as being within the designated distance.
In at least some embodiments in which the island technique is employed, the method can further include comparing a third layer of the image of the build file that precedes the first layer and the second layer in an overlay to confirm the location of the overhang.

[0033] In instances in which the applied digital transformation(s) employs the bridge technique, the digital transformation(s) applying action includes defining a first layer of an image of a build file and defining a second layer of the image of the build file. The second layer is to be printed after the first layer, and at least some pixels in the second layer correspond to pixels in the first layer. The digital transformation(s) applying action further includes comparing the first layer to the second layer to establish a location of an unsupported region in the second layer, and removing the unsupported region in the second layer.

[0034] In at least some instances in which the applied digital transformation(s) employs the bridge technique, comparing the first layer to the second layer to establish a location of an unsupported region in the second layer can further include eroding the image by an image kernel. Eroding the image by an image kernel can include shrinking all edges of the image. Alternatively, or additionally, eroding the image by an image kernel can include applying a matrix of a pre-determined size to the image and checking that a designated grid pattern of pixels exists in a designated area. If the designated grid pattern of pixels does not exist in the designated area, the action of eroding the image by an image kernel can further include marking all pixels in the designated area. The action of removing the unsupported region in the second layer can further include removing from the image at least one of one or more borders or one or more islands located where thin spans had existed prior to being removed by eroding the image.

[0035] The various digital transformations provided for herein (e.g., sampling, sheathing, banding, skin and bones, island, bridge) can be mixed and matched as desired in any combination. That is, the action of applying one or more digital transformations can entail employing any one or any combination of the digital transformations disclosed.

[0036] In at least one example embodiment, a method of printing includes applying one or more digital transformations to a build file to provide an adjusted light intensity at one or more designated pixels of an image projected by a digital light projector. In such example embodiments, this transformation can include defining a compression parameter and at least one dimension of an image to be printed, and rendering a first original image in the first dimension of an image to be printed. The image dimension can be any of the X, Y, or Z dimensions of the pixel or voxel array. In some embodiments where curves are smoothed by a sampling method, a first scaled image having an image dimension equal to the first dimension of an image to be printed multiplied by a compression parameter is rendered. The scaled image is then sampled into a plurality of evenly-distributed segments, wherein the total number of segments is equal to the dimension of an image to be printed. The average intensity value across each evenly-distributed segment of the plurality of evenly-distributed segments is then calculated. Finally, a first sampled image is rendered, having an image dimension equivalent to the dimensions needed of an image to be printed, with individual pixel values equal to the corresponding average intensity values.

[0037] In at least one example embodiment a method of printing includes applying one or more digital transformations to a build file for a part to be printed to adjust a projected dosage of light at one or more designated pixels of an image to be projected in conjunction with printing the part to yield a desired dosage of light at the one or more designated pixels during printing. In some embodiments wherein a sheath method is used to counteract the negative effects of scatter and interference, a first original image is rendered with a corresponding expansion parameter. A first scaled image is then rendered with the scaled image moving the perimeter of the original image according to the value of the expansion parameter, or some related additive or multiplicative effect. The original image is then subtracted from the scaled image to create an image sheath. The sheath is dyed grey value(s), and then the dyed sheath is added to the original image to create a first projected image.

[0038] According to other or the same embodiments, a method of printing includes applying one or more digital transformations to a build file for a part to be printed to adjust a projected dosage of light at one or more designated pixels of an image to be projected in conjunction with printing the part to yield a desired dosage of light at the one or more designated pixels during printing. In at least some of such embodiments, a banding parameter is defined in the first image. Band pixels within that banding parameter are then identified and dyed a shade of grey to create a dyed band image. In some embodiments, multiple layers of bands and stacks of banding parameters are used, retreating inwards from the defined edges of the printed part in that layer being printed.

[0039] Individuals will appreciate the scope of the disclosure and realize additional aspects thereof after reading the following detailed description of the examples in association with the accompanying drawing figures.

BRIEF DESCRIPTION OF DRAWINGS

[0040] The accompanying figures incorporated in and forming a part of this specification illustrate several aspects of the disclosure and, together with the description, serve to illustrate at least some principles of the disclosure:

[0041] FIG. 1A is a perspective view of one embodiment of a printing apparatus;

[0042] FIG. 1B is a side view of the printing apparatus of FIG. 1A having a side panel of a housing removed to illustrate components of the printing apparatus disposed within the housing;

[0043] FIG. 2A is a 2D top schematic view of a pixel array showing a sample print of an original image prior to application of a banding technique;

[0044] FIG. 2B is a 2D top schematic view of a pixel array showing a dyed band image;

[0045] FIG. 3A is a representation of a first sample layer of a printed part, the first sample layer having a grey band;

[0046] FIG. 3B is a representation of a second sample layer of the same printed part as FIG. 3A, the second sample layer also having a grey band;

[0047] FIG. 4A is a 2D top schematic view of a pixel array showing a sample print of an original image prior to application of a sheathing technique;

[0048] FIG. 4B is a 2D top schematic view of a scaled image representing a transformation of the original image of FIG. 4A;

[0049] FIG. 4C is a 2D top schematic view of an image sheath formed by subtracting the original image of FIG. 4A from the scaled image of FIG. 4B;

[0050] FIG. 4D is a 2D top schematic view of a projected image formed by dyeing the image sheath of FIG. 4C and adding the dyed image sheath to the original image of FIG. 4A;

[0051] FIG. 5A is a 2D top schematic view of a first pass at using a machine learning or neural network algorithm to even more accurately define energy values and shows the unmanipulated result;

[0052] FIG. 5B is a 3D elevated schematic view of a pixel array showing an attempt at a machine learning or neural network algorithm that would even more accurately define energy values, with the unmanipulated case shown;

[0053] FIG. 6A is a 2D top schematic view of a scaled image prior to application of an XY plane sampling technique;

[0054] FIG. 6B is a 2D top schematic view of a sampled image representing the scaled image of FIG. 6A following the application of a XY plane sampling technique;

[0055] FIG. 7A is a 2D schematic view of a feature to be printed using a Z sampling technique;

[0056] FIG. 7B is a 2D schematic view of a feature to be printed using a Z sampling technique;

[0057] FIG. 7C is a 2D top schematic view in the XZ plane of a feature to be printed after applying a Z sampling technique;

[0058] FIG. 7D is a 2D top schematic view in the XZ plane of a feature to be printed prior to applying a Z sampling technique;

[0059] FIG. 8A is an image of a top view of a printed part made using techniques known in the art;

[0060] FIG. 8B is an image of a top view of a printed part akin to the printed part of FIG. 8A, but made using a Z sampling technique according to at least some of the embodiments disclosed herein;

[0061] FIG. 9A is a perspective view of a relief map of the printed part of FIG. 8A;

[0062] FIG. 9B is a perspective view of a relief map of the printed part of FIG. 8B;

[0063] FIG. 10A is a top view of a print made using a skin and bones technique;

[0064] FIG. 10B is a magnified view of a portion of the print of FIG. 10A;

[0065] FIG. 11A is a representation of an island analysis showing an original first layer;

[0066] FIG. 11B is a representation of an island analysis showing an original second layer;

[0067] FIG. 11C is a representation of an island analysis showing an analyzed second layer;

[0068] FIG. 11D is a representation of an island analysis showing a finalized second layer;

[0069] FIG. 12A is a representation of a bridge analysis showing an original first layer;

[0070] FIG. 12B is a representation of a bridge analysis showing an original second layer;

[0071] FIG. 12C is a representation of a bridge analysis showing an analyzed second layer;

[0072] FIG. 12D is a representation of a bridge analysis showing a finalized second layer;

[0073] FIG. 13 is a flowchart depicting an overview of a process for work flow for methods of 3D printing according to at least some embodiments of the disclosure herein;

[0074] FIG. 14 is a flowchart depicting a portion of a work flow in which a build file undergoes a banding parameter transformation;

[0075] FIG. 15 is a flowchart depicting a portion of a work flow in which a build file undergoes a sheath parameter transformation;

[0076] FIG. 16 is a flowchart depicting a portion of a work flow in which a build file undergoes an XY sampling transformation;

[0077] FIG. 17 is a flowchart depicting a portion of a work flow in which a build file undergoes a Z sampling transformation;

[0078] FIG. 18 is a flowchart depicting a portion of a work flow in which a build file undergoes a skin and bones transformation;

[0079] FIG. 19 is a flowchart depicting a portion of a work flow in which a build file undergoes an island analysis;

[0080] FIG. 20 is a flowchart depicting a portion of a work flow in which a build file undergoes a bridge analysis; and

[0081] FIG. 21 is a schematic block diagram of one embodiment of a computer system for use in conjunction with the present disclosures.

DETAILED DESCRIPTION

[0082] Certain exemplary embodiments will now be described to provide an overall understanding of the principles of the structure, function, manufacture, and use of the devices and methods disclosed herein. One or more examples of these embodiments are illustrated in the accompanying drawings. Those skilled in the art will understand that the devices and methods specifically described herein and illustrated in the accompanying drawings are nonlimiting exemplary embodiments and that the scope of the present disclosure is defined solely by the claims. The features illustrated or described in connection with one exemplary embodiment may be combined with the features of other embodiments. Such modifications and variations are intended to be included within the scope of the present disclosure. Terms commonly known to those skilled in the art may be used interchangeably herein.

[0083] The present disclosure provides for systems and methods that combat two distinct sources of undesirable printing effects in photopolymer-based additive manufacturing processes, including digital light processing (DLP), stereolithography (SLA), and liquid crystal display (LCD) techniques. In particular, a first class of effects stems from interactions of light and matter. These interactions include scattering and interference, and cause inconsistent light application at surface features of a printed part. At least some of the digital transformations provided for herein are designed to adjust the light intensity near geometric edges and throughout smaller features to offset this phenomenon. In another class of effects, limitations on the projection resolution of additive manufacturing devices limit the capabilities of those devices to print curvatures. As an example, many printing devices are only capable of printing in a 1080p or 2K resolution across two-dimensional slices of a 3D part to print, even when drawing programs are capable of drafting images at higher resolutions in a 2D XY plane, such as 4K resolution or 8K resolution. Systems and methods disclosed herein detail improvements to current printing methods, both across the XY plane and in the Z direction, allowing for a lower resolution projector to print a more visually appealing approximation of a higher resolution curvature.

[0084] The systems and methods include applying one or more digital transformations, sometimes referred to as filters, to projected images to deliver near-optimal dosage to a majority, up to an entirety, of a printed part, including its edges, small features, and large features concurrently in complex geometries. In at least some instances, this includes convolving the input image with an appropriate kernel that acts on a series of image slices to mitigate or invert the effects of scattering and interference, resulting in a more precise geometric representation of the model. In other embodiments, rather than by convolution with a kernel, one or more digital transformations can be performed, and in other or the same embodiments, one or more compression methods can be performed. By way of contrast, the intensity of the light that ends up being directed to a voxel in prior art techniques is typically equalized across all voxels to be printed, independent of the state of nearby voxels, sometimes referred to as “nearest neighbor” voxels.
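By way of a non-limiting illustration, the kernel convolution described above can be sketched as follows. The pure-Python function, the sample slice, and the kernels used are illustrative assumptions for clarity; actual correction kernels are resin-specific and are not specified by this sketch.

```python
# Illustrative sketch: convolve a 2D slice image with a kernel,
# zero-padding at the borders. Pure Python for clarity; the kernels
# shown are hypothetical examples, not calibrated correction kernels.

def convolve2d(image, kernel):
    """Convolve `image` (a list of rows of intensities) with `kernel`."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    pr, pc = kh // 2, kw // 2  # padding radius in each direction
    out = [[0.0] * iw for _ in range(ih)]
    for r in range(ih):
        for c in range(iw):
            acc = 0.0
            for kr in range(kh):
                for kc in range(kw):
                    rr, cc = r + kr - pr, c + kc - pc
                    if 0 <= rr < ih and 0 <= cc < iw:  # zero padding
                        acc += image[rr][cc] * kernel[kr][kc]
            out[r][c] = acc
    return out

# A 4x4 slice with a single "ON" pixel; an identity kernel leaves it unchanged
slice_img = [[0, 0, 0, 0],
             [0, 255, 0, 0],
             [0, 0, 0, 0],
             [0, 0, 0, 0]]
identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
assert convolve2d(slice_img, identity) == slice_img
```

In practice the same convolution would be looped over every slice image of the build file, with a kernel chosen to offset the scatter behavior of the resin in question.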

[0085] As described herein, the present disclosures provide for new methodologies to apply specific types of digital transformations, and in some cases kernels, to projected image files in photopolymer printing to fight the detrimental effects of scatter that occur in many photopolymer resins. According to at least some of such embodiments, including some described in detail below, a photopolymer printing technique includes DLP additive manufacturing processes. As a result, certain geometries (e.g., such as RF lenses) can be printed successfully (e.g., without aspects of the part being under- or over-cured to an undesirable level) using DLP additive manufacturing where such parts previously could not be printed successfully using previously known DLP techniques.

[0086] Without use of the disclosed systems and methods, interactions between light and matter, including scattering and interference, will typically cause inconsistent light application at surface features of a printed part. The digital transformations provided for herein are designed to adjust the light intensity near interior positions of geometric edges and throughout smaller features to offset this phenomenon.

[0087] Without the use of the disclosed systems and methods, interference patterns inhibit the ability of additive manufacturing devices to print organic curves. The digital transformations provided for herein are designed to adjust the light intensity near geometric edges and throughout smaller features to offset this phenomenon.

[0088] Without the use of the disclosed systems and methods, inherent properties of pixelated images inhibit the ability of additive manufacturing devices to print organic curves. The digital transformations provided for herein are designed to use higher-order renderings of organic curves to better approximate such curves at lower resolutions for printing.

[0089] In some embodiments, the digital transformations employed include one or more kernels. Convolving a kernel with the input image produces a new image that, when projected and scattered in the material, results in a more precise representation of the desired geometry. References to a nearby pixel or voxel, as used herein, can be one that is within one (1) pixel/voxel, five (5) pixels/voxels, 10 pixels/voxels, 20 pixels/voxels, 40 pixels/voxels, or 80 pixels/voxels, or any number in-between, depending on a variety of factors understood by a person skilled in the art in view of the present disclosures.

[0090] Several types of kernels have been tested and proven for use in one or more of the methodologies provided for herein. In combination, the kernels, as well as other digital transformations provided for herein or otherwise derivable from the present disclosures, can provide a toolbox or kit of digital transformations that can be used to transform images in a desired manner prior to, or in conjunction with, delivering light for curing. This approach can also be used to “characterize” the light and matter interaction characteristics of the resin system in question.

[0091] At least some embodiments of the systems and methods disclosed herein manipulate the projected image in DLP printing to exhibit a different “Projected Dosage” at the edges, interior to the edges, and/or along curves of a printed part. In other or the same embodiments, the disclosed systems and methods combat the adverse effects of light and matter interaction within many photoresins, especially particle-filled photoresins. According to at least some embodiments, the photoresins used in the present disclosure can include one or more functional additives (e.g., ceramic particles, magnetic particles), which can increase scatter and interference. Without the digital transformations provided for herein, complex and high-precision printed parts could not be produced with a desired mechanical efficacy. At least some embodiments of the systems and methods disclosed herein can be applied to any printed part that includes edges and/or curvatures.

[0092] Before describing the various digital image transformations enabled by the present disclosure, a brief description of the types of additive manufacturing printers with which the present disclosures can be implemented is appropriate. Generally, the digital image transformation techniques disclosed herein can be implemented on any additive manufacturing printer that utilizes digital images as part of a build file to construct a three-dimensional object. That is, references to a build file provided for herein can typically include one or more digital images, with each image typically including a plurality of layers. One non-limiting example of such a printer is a vat polymerization printing apparatus. Such printers or printing apparatuses include, but are not limited to, printers that utilize digital light processing (DLP). Because a person skilled in the art will generally understand how DLP additive manufacturing works, the present disclosure does not provide all details related to the same. A person skilled in the art will understand how to apply the principles, techniques, and the like disclosed herein to DLP processes and DLP printers. Some non-limiting examples of DLP printers and techniques with which the present disclosure can be used include those provided for in U.S. Patent No. 10,703,052, entitled “Additive Manufacturing of Discontinuous Fiber Composites Using Magnetic Fields,” U.S. Patent No. 10,732,521, entitled “Systems and Methods for Alignment of Anisotropic Inclusions in Additive Manufacturing Processes,” and the FLUX 3D printer series, including the FLUX ONE 3D printer, manufactured by 3DFortify Inc. of Boston, MA (further details provided for at http://3dfortify.com/ and related web pages), the contents of all being incorporated by reference herein in their entireties.

[0093] FIGS. 1A and 1B illustrate one exemplary embodiment of a FLUX ONE 3D printer 10.
The printer 10 includes an outer casing or housing 20 in which various components of the printer 10 are disposed. The FLUX ONE 3D printer is designed to use a bottom-up printing technique, and thus includes a build plate 30 that can be advanced vertically, substantially parallel to a longitudinal axis L of the printer 10, such that the build plate 30 can be moved vertically away from a print reservoir 50 in which resin to be cured to form a desired part is disposed. Generally, the build plate 30 can be advanced up and down with respect to a linear rail 32 as desired, the linear rail 32 being substantially colinear with the longitudinal axis L. As a result, the rail 32 can be considered a vertical rail. The build plate 30 can be associated with the linear rail 32 by way of one or more coupling components, such as arms or armatures 34, guides 36, 38, and/or other structures known to those skilled in the art for creating mechanical links that allow one component to move with respect to another.

[0094] As described herein, as the build plate 30 moves away from the print reservoir 50, the resin is cured to the build plate 30 and/or to already cured resin to form the printed part in a layer-by-layer manner as the build plate 30 advances away from the reservoir 50. The resin is cured, for example, by a light source and/or a radiation source, as shown a digital light projector 60. One or more lasers can be used in lieu of or in addition to a digital light projector. The reservoir 50 can include a glass base 52 to allow the digital light projector 60 to pass light into the reservoir 50 to cure the resin. The glass base 52 can more generally be a transparent platform through which light and/or radiation can pass to selectively cure the resin. Resin can be introduced to the printer 10 by way of a materials dock 54 that can be accessible, for example via a drawer 22, formed as part of the housing 20. In other embodiments, a transparent membrane can be provided in addition to a glass or other base 52.

[0095] One or more mixers can be included to help keep the resin viscous and homogeneous. More particularly, at least one mixer, as shown an external mixer 80, can be in fluid communication with the print reservoir 50 to allow resin to flow out of the reservoir 50, into the mixer 80 to be mixed, and then flow back into the reservoir 50 after it has been mixed by the mixer 80. The mixer 80 can be accessible, for example, via a front panel door 24 provided as part of the housing 20. At least one heating element 82 can be included for use in conjunction with the mixer 80 such that the treated (i.e., mixed) resin is also heated. In the illustrated embodiment the heating element 82 is disposed proximate to the print reservoir 50, heating the resin after it has been mixed by the mixer 80, although other locations are possible, including but not limited to being incorporated with the mixer 80 to heat and mix simultaneously and/or consecutively. The resin can be heated more than once by additional heating elements as well. Resin that travels from the reservoir 50, to the mixer 80, and back to the reservoir 50 can flow through any number of conduits or tubes configured to allow resin to travel therethrough, such as the conduits 84 illustrated in FIG. 3B.

[0096] The resin can also flow through a reservoir manifold 56, which can be disposed above the print reservoir 50. The manifold 56 can serve a variety of purposes, including but not limited to helping to maintain the position of the reservoir 50 during operation, and helping to facilitate mechanical, electrical, and fluid connections between the reservoir and other components of the printer 10. For example, the manifold 56 can be designed to allow resin that is to be mixed and/or heated to flow out of the reservoir 50, as well as allow mixed and/or heated resin to flow into the reservoir 50 via ports formed therein. Electrical connections to help operate various features associated with the reservoir 50, such as monitoring of a level of resin and/or monitoring an orientation of one or more components disposed and/or otherwise situated with respect to the reservoir 50, can be passed through the manifold 56. The electrical connections may be associated with various electronics and the like housed within the printer 10, for example in an electronics panel 90. Additional details about a reservoir manifold are provided for in International Patent Application No. WO 2021/217102, entitled “Manifold and Related Methods for Use with a Reservoir for Additive Manufacturing,” the contents of which are incorporated by reference herein in their entirety.

[0097] In some embodiments, a magnetic fiber alignment system 92 can be provided for as part of the printer 10. Such a system 92 can help to control aspects of a print job when magnetic functional additives, such as magnetic particles, are associated with the resin being printed. More specifically, the system 92 can include one or more magnets and/or magnetic field generators that enable the system 92 to control the location of the resin that includes the magnetic particles. Other functional additives that are not necessarily magnetic can also be incorporated with the resin.

[0098] A touch screen 26 or other user interface can be included as part of the housing 20 to allow a user to input various parameters for a print job and/or for instructions, signals, warnings, or other information to be passed along by any systems of the printer 10 to a user. Still further, the housing 20 can include an openable and/or removable hood 28 that enables a printed part, as well as components of the printer 10, to be accessed. The hood 28 can also include a viewing portion, such as a window 29, that allows a user to view a print job being performed. As shown, the build plate 30, and thus a part being printed that will be attached to the build plate 30, can be seen through the window 29. Further, the reservoir 50, manifold 56, and other components of the printer 10 can also be visible through the window 29.

[0099] Non-limiting examples of processors and the like for implementing the various techniques disclosed herein are described in greater detail below with respect to FIG. 21. Further, a person skilled in the art will understand how to implement the various digital transformation techniques described herein into software employed by an additive manufacturing printer. Implementation of the disclosed techniques in conjunction with one or more build files is within the capabilities of a person skilled in the art in view of the present disclosures. More generally, a person skilled in the art will understand how to apply the systems, methods, and the like disclosed herein to various additive manufacturing processes and printers. Some non-limiting examples of DLP printers and techniques with which the present disclosure can be used include those provided for in U.S. Patent No. 10,703,052, entitled “Additive Manufacturing of Discontinuous Fiber Composites Using Magnetic Fields,” U.S. Patent No. 10,732,521, entitled “Systems and Methods for Alignment of Anisotropic Inclusions in Additive Manufacturing Processes,” and the FLUX 3D printer series, including the aforementioned FLUX ONE 3D printer (further details about the FLUX 3D printer series provided for at http://3dfortify.com/ and https://3dfortify.com/tech-talks/, and related web pages), the contents of all, including any videos accessible at such web pages and related web pages, being incorporated by reference herein in their entireties.
The videos incorporated by reference at the second provided web page include videos entitled “Fortify’s Product and Services Ecosystem” (length 17 minutes), “CKM: Enabling the Printing of the Highest Performing DLP Materials” (length 9 minutes), “How Fluxprint Enables High Performance Materials for 3D Printing” (length 12 minutes), “Applications Highlight: Fortifying Mold Tools” (length 11 minutes), “Applications Highlight: 3D Printing Low Loss RF Devices” (length 13 minutes), “Tailoring Conductivity in Filled Photopolymers Using Fluxprint and CKM” (length 9 minutes), and “Innovation Through Collaboration: Fortify’s Material Partnerships” (length 8 minutes). The terms “3D” and “additive manufacturing” may be used interchangeably herein. In addition to DLP-style additive manufacturing printers, the methods and systems herein can be implemented at least on SLA additive manufacturing printers, LCD additive manufacturing printers, any other vat photopolymerization process printer, and with any printer that utilizes images in conjunction with its additive manufacturing process.

[0100] Correcting for Effects Resulting From the Physical Interaction of Light and Matter

[0101] Turning first to systems and methods that correct for physical interaction properties of light and matter generally, surface finish can be enhanced through various implementations and embodiments of a banding technique(s), while enhanced dimensionality can be provided by a sheathing technique(s).

[0102] Banding Techniques

[0103] In a first banding technique, energy sinks can be placed in slices of a 3D part to alter the distribution of doses delivered to target voxels. In additive manufacturing techniques, a 3D model is typically “sliced” along the Z-axis to form a set of “slices.” While the slices themselves are represented as 2D images formed from sets of pixels, each slice acquires a depth through the additive manufacturing process, that depth being the Z-height of the part divided by the number of slices, such that the 2D pixels of an XY plane schematic ultimately correspond to 3D voxel counterparts. Parts printed in resins can have a roughness associated with them due, at least in part, to physical limitations of projecting a column of light that is perfectly perpendicular, or at least substantially perpendicular (within approximately ± 3 degrees of perpendicular) to the plane of the printed part. A banding technique can be used to correct for at least some of these practical errors.

[0104] According to at least some embodiments, a projected light distribution can be altered to counteract moire patterns and/or other interference patterns with respect to the critical exposure threshold of a resin. According to at least some embodiments, a method of printing includes applying one or more digital transformations to a build file for a part to be printed to adjust a projected dosage of light at one or more designated pixels of an image to be projected in conjunction with printing the part to yield a desired dosage of light at the one or more designated pixels during printing. In at least some of such embodiments, a banding parameter, or banding parameter set, can be defined in an original image. A banding parameter, or banding parameter set, can include, by way of non-limiting examples, a border thickness in n-pixels, a band thickness in n-pixels, a dye value, and/or a choice to cascade inward or to not cascade inward. In some embodiments, an original image can be rendered according to a first image dimension, which may define the scale of the original image in relation to a pixel array, while in some other embodiments, an image can be rendered at its final values initially.
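By way of a non-limiting illustration, a banding parameter set such as the one described above can be sketched as a simple configuration object; the field names and sample values below are illustrative assumptions, not the actual build-file schema.

```python
# Illustrative sketch of a banding parameter set; field names and
# values are hypothetical, not the actual build-file schema.
from dataclasses import dataclass

@dataclass
class BandingParameters:
    border_thickness_px: int  # white border thickness, in n-pixels
    band_thickness_px: int    # grey band thickness, in n-pixels
    dye_value: int            # grey intensity (0-255) for band pixels
    cascade_inward: bool      # repeat the border/band pattern toward the interior

params = BandingParameters(border_thickness_px=1,
                           band_thickness_px=2,
                           dye_value=128,
                           cascade_inward=False)
assert 0 <= params.dye_value <= 255
```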

[0105] A sample original image to be transformed according to a banding technique of at least one embodiment disclosed herein is shown in FIG. 2A. For the sake of simplicity, original image 100 represents a 9 by 10 image comprised of pixels having two intensity values, the first being “OFF” pixels 101 (unlabeled), and the second being “ON” pixels 110 (labeled “X”). According to at least some banding methods, edge pixels within a banding parameter can be identified and dyed a shade of grey to create a dyed band image, a sample of which is shown as the dyed band image of FIG. 2B. The banding parameter can be, for example, a distance, whether Euclidean, taxi-cab, or a combination thereof, to void or OFF pixels 101. In some embodiments, multiple layers of bands and stacks of banding parameters can be used, retreating inwards from the defined edges of the printed part in that layer being printed. One such embodiment is shown in FIG. 2B, with the dyed band image showing four classes of pixels: “OFF” pixels 101; “ON” pixels 110; first grey dyed pixels 103 (labeled “A”); and second grey dyed pixels 104 (labeled “B”). According to some embodiments in which multiple layers of bands are utilized, the bands may cascade from outside inward in a pattern of X, AB, X, AB, X, AB . . . until termination of the band. In at least some embodiments, the values of A and B can be equal, though that is by no means required. A band can be A-B-C- . . . Z, and/or other combinations understood by a person skilled in the art in view of the present disclosures.

[0106] In the juxtaposed FIGs. 2A and 2B, grey dyed pixels 103 and 104 can be assigned different values depending on the value of the set grey dye parameter in some embodiments of the banding process. According to at least some embodiments, only a single layer of banding “A” 103 is applied, while other embodiments allow for a variation of the banding grey values within the same band such as a value “B” 104. According to some embodiments, multiple bands can be stacked inward, which may be, by way of a non-limiting example, in such formations as X-AAA-XX-AAA-XXXXXXXXXXX. In other or the same embodiments, stacks of bands can be generally provided with at least one white “X” border between the layers, such as a banding pattern of XX-AAA-XX-BBB-XX-CCC. According to at least some embodiments, the level of grey value in a given band can be calculated, for example, based on the energy threshold needed to properly cure each voxel, which in such embodiments can be referenced as a critical dose. According to some embodiments, the models for these calculations can include, for example, the anticipated effects of scattering, and in other or the same embodiments, these calculations can include, by way of further non-limiting example, the anticipated effects of interference.
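A minimal sketch of the single-band case described above follows, assuming a taxi-cab distance to OFF pixels and illustrative border, band, and dye values; the brute-force distance search is for clarity on small grids, not a production implementation.

```python
# Illustrative sketch: dye a single grey band interior to the part edge,
# using a taxi-cab distance to OFF pixels as the banding parameter.

def taxicab_dist_to_off(image, r, c):
    """Brute-force taxi-cab distance from (r, c) to the nearest OFF pixel."""
    best = None
    for rr, row in enumerate(image):
        for cc, v in enumerate(row):
            if v == 0:
                d = abs(rr - r) + abs(cc - c)
                best = d if best is None else min(best, d)
    # If there are no OFF pixels at all, return a distance beyond any band
    return best if best is not None else len(image) + len(image[0])

def apply_band(image, border_px=1, band_px=1, dye=128):
    """Keep a full-intensity border `border_px` deep at the edge, then dye a
    grey band `band_px` deep just inside it; deeper interior pixels stay ON."""
    out = [row[:] for row in image]
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            if v == 0:
                continue
            d = taxicab_dist_to_off(image, r, c)
            if border_px < d <= border_px + band_px:
                out[r][c] = dye  # energy-sink band interior to the edge
    return out
```

For a multi-band cascade (X, AB, X, AB, . . .), the same distance value would be mapped through a repeating pattern rather than a single threshold test.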

[0107] In at least some instances, the grey shift includes convolving the input image with an appropriate kernel that acts on a series of image slices to mitigate or invert the effects of scattering, resulting in a more precise geometric representation of the model. The inversion can be such that the light delivered from each pixel location of the projector in the digitally transformed projection image has an intensity that is inversely proportional to the effective intensity of nearby pixels in the original projection image. This inverse proportionality can operate according to some embodiments in relation to the density of “ON” voxels within a given space, referred to herein as an “antidensity” transformation. In embodiments in which photoresins have additives, the effects of scatter can be particularly pronounced, due at least in part to the size and/or density of fiber additives. Different additives can have different impacts, with magnetic fibers being one non-limiting example of an additive that exacerbates the effects of scatter. By way of contrast to the inverse proportionality, the intensity of the light that ends up being directed to a voxel in prior art techniques is typically equalized across all voxels to be printed, independent of the state of nearby voxels, sometimes referred to as “nearest neighbor” voxels. Voxels interior to edges are targeted with a projected dosage that is less than or equal to the desired dosage. This antidensity and inverse proportionality carries across several embodiments of this disclosure. In such embodiments, a delivered dosage within the resin can closely match a desired dosage. This methodology allows for edges, small features, and large features to receive close-to-desired dosages simultaneously despite the natural scatter of light in many photoresins. Additional disclosure of antidensity principles in 3D printing systems and methods can be found in U.S. Patent Application No. 17/717,019, entitled “Digital Image Transformation to Reduce Effects of Scatter During Digital Light Processing-Style Manufacturing,” which is incorporated herein by reference in its entirety.
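The antidensity idea described above can be sketched as follows; the neighborhood radius and the scaling factor `alpha` are illustrative assumptions introduced for this sketch, not calibrated material constants.

```python
# Illustrative sketch of an "antidensity" transformation: each ON pixel
# is dimmed in proportion to the local density of ON pixels, so dense
# interiors receive less projected dosage than sparse edges.

def antidensity(image, radius=1, max_val=255, alpha=0.5):
    """Dim each ON pixel by the fraction of ON pixels in its square
    neighborhood of half-width `radius`; `alpha` is an illustrative
    scaling factor, not a calibrated material constant."""
    ih, iw = len(image), len(image[0])
    out = [[0] * iw for _ in range(ih)]
    for r in range(ih):
        for c in range(iw):
            if image[r][c] == 0:
                continue  # OFF pixels stay off
            on = total = 0
            for rr in range(max(0, r - radius), min(ih, r + radius + 1)):
                for cc in range(max(0, c - radius), min(iw, c + radius + 1)):
                    total += 1
                    if image[rr][cc] > 0:
                        on += 1
            density = on / total
            out[r][c] = round(max_val * (1 - alpha * density))
    return out
```

A resin-specific model would replace the linear `alpha` term with a calibrated relationship between local density, scatter, and the critical dose.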

[0108] In operation of a banding method, the interior bands, such as bands “A” and “B” of FIGs. 2A and 2B, act as energy sinks. These sinks operate to absorb projected light that is scattered away from the edge of a printed part and toward the interior of that part, thus resulting in proper curing of the edges without over-curing the interior sections marked by band “A-B” and the surrounding interior material of a printed part. According to at least one embodiment, moire patterns can bias the light distribution of a projected beam in a printing process, which can result in a corrected curing of the energy sinks applied in the banding process without over-curing the edges.

[0109] According to at least some embodiments of a banding technique, including those embodiments utilizing DLP printers, light does not perfectly fit in a single pixel during a printing process, resulting in an Airy distribution of intensity across a pixel well when light travels through a caustic medium. This can result in a lateral scatter as the Airy-distributed wave packets defining the photons across a given pixel can overlap and amplify in vacant pixels. As a result of this distribution, vacant pixels can begin an unintentional photopolymerization process, which can result in curing of portions of a resin that were otherwise meant not to be cured. One can model a material's behavior at, for example, a given irradiance, exposure time, fiber loading, and/or dye to create an Airy Function model that accurately shows how energy is distributed on the voxel grid for a layer. According to some embodiments taking these effects into account, an additive manufacturing system can utilize a feedback loop in which the light distribution (based, for example, on an Airy model) can be used to assign grey values in the grey band and grey values in a sheath (discussed below in multiple embodiments) such that the desired geometry can be more accurately cured in the photoresin. According to some embodiments, this model can factor in aspects including but not limited to irradiance, exposure time, fiber PSD, fiber loading, and/or the presence of dyes in the photoresin.

[0110] According to at least some embodiments that involve performing a banding process over an entire printed part, the process can be repeated for every layer in a build. At least some embodiments of this process provide a custom contoured band to each part that is within the build for each layer of the part. In such embodiments, the bands are contoured to the individual layer and not the part itself. In other embodiments, a banding process can be performed for only some layers in a build.

[0111] FIG. 3A depicts a sample layer or slice 1100 of a printed part having a conforming grey band to be printed, such that this image will be ultimately cast into the build plane. FIG. 3A is scaled to illustrate a grey band 1120. Similarly, FIG. 3B shows a later layer or slice 1150 in the same build as FIG. 3A. Layer 1150 has a different geometry from that of layer 1100, yet the same parameters for grey band 1170, such that the grey band 1170, like grey band 1120, conforms to the part.

[0112] Sheathing Techniques

[0113] Turning next to embodiments depicting sheathing techniques, such methods apply sub-critical doses at selected voxels at the edges of a printed part. In an illustrative embodiment given by FIGs. 4A-4D, FIG. 4A shows an original image 400 having two classes of pixels: “ON” pixels 410 marked with an “X;” and “OFF” pixels 401 that are unlabeled. FIG. 4B depicts a schematic top view of a scaled image 450, representing a transformation of the original image 400 of FIG. 4A. In this scaled image 450, there are two classes of pixels: “ON” pixels 410 and 420, which are again marked with an “X;” and “OFF” pixels 401, which again are unlabeled. “ON” pixels 420 can represent an expansion of the original image 400 that has been expanded at the perimeter by an expansion parameter. In the embodiment illustrated by FIGs. 4A-4B, the expansion parameter is one pixel. However, according to other embodiments, the expansion parameter can be selected according to an appropriate percentage of pixels to constitute a sheath. A sheath in this context represents a collection of voxels within the expansion parameter to receive an altered dosage of light.

[0114] Over the course of an entire print job, as with other methods provided for herein, the manufacturing technique can entail repeating these operations on a layer-by-layer basis for some, and up to all, of the layers of the object to be printed. The layer-by-layer operation can be a looped operation for all images that make up an image stack for a desired full print.

[0115] According to at least some embodiments, the original image 400 of FIG. 4A can be subtracted from the scaled image 450 of FIG. 4B to obtain an image sheath 550, as shown in FIG. 4C. In this sheath 550, sheath pixels 520 can be located at the nonzero values of the subtraction of the original image 400 of FIG. 4A from the scaled image 450 of FIG. 4B. Sheathed pixels 520 can be marked with an “S,” as they have been dyed a shade of grey according to at least some embodiments of this implementation. According to at least some embodiments, the level of grey shift in a given sheath can be calculated based, at least in part, on the energy threshold needed to properly cure each voxel of an image to be projected such that the image to be projected ultimately cures to match the original image 400 of FIG. 4A. FIG. 4D represents a combination of the original image 400 of FIG. 4A and the sheath 550 of FIG. 4C, showing a projected image 650. In the projected image 650, pixels 610, marked with an “X,” represent the pixels of original image 400, while pixels 620, marked with an “S,” represent the pixels of the greyed sheath 550.
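The scale-subtract-combine sequence described above can be sketched as follows, assuming a taxi-cab dilation for the expansion and an illustrative grey dye value for the sheath pixels; the actual expansion shape and grey levels would be chosen per resin and geometry.

```python
# Illustrative sketch of sheathing: expand the original image by an
# expansion parameter, subtract the original to find the sheath pixels,
# dye them grey, and combine with the original for projection.

def dilate(image, px=1):
    """Expand the ON region outward by `px` pixels (taxi-cab dilation)."""
    ih, iw = len(image), len(image[0])
    out = [[0] * iw for _ in range(ih)]
    for r in range(ih):
        for c in range(iw):
            if image[r][c] == 0:
                continue
            for rr in range(max(0, r - px), min(ih, r + px + 1)):
                for cc in range(max(0, c - px), min(iw, c + px + 1)):
                    if abs(rr - r) + abs(cc - c) <= px:
                        out[rr][cc] = 255
    return out

def sheathe(image, expansion_px=1, sheath_dye=100):
    """Return the projected image: original ON pixels plus grey 'S' pixels
    where the dilated (scaled) image extends beyond the original."""
    scaled = dilate(image, expansion_px)
    out = [row[:] for row in image]
    for r in range(len(image)):
        for c in range(len(image[0])):
            if scaled[r][c] and not image[r][c]:  # sheath = scaled - original
                out[r][c] = sheath_dye
    return out
```

The `sheath_dye` value stands in for a sub-critical dose: high enough to bias the edge dosage, low enough that the sheath pixels themselves do not cure.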

[0116] In such embodiments, the energy threshold in which the projected image 650 prints as the original image 400 can be referred to as a sub-critical dose. According to some embodiments, the models for these calculations can include, by way of example, the anticipated effects of scattering, and in other or the same embodiments these calculations can include, by way of further example, the anticipated effects of interference. According to at least one embodiment, moire patterns bias the light distribution of a projected beam in a printing process and can result in a corrected curing of the energy sources applied in the sheathing process without over-curing the edges.

[0117] Techniques for calculating the desired grey values for the sheath 550 can include, by way of non-limiting examples, energy values determined experimentally or empirically, and may factor in Gaussian or Airy wave-packet interference across pixel space, as well as scattering effects disclosed herein. This process may also be further optimized by way of a machine learning algorithm or neural network as described with respect to FIGs. 5A and 5B. FIG. 5A depicts sample energy percentages taking the form of a grey-shift to be utilized in a corner of a 3D printed part, while FIG. 5B depicts a scheme showing these energy sinks across three dimensions of a printed part, rather than the top view of a corner presented in FIG. 5A. The representations of FIG. 5A are the result of a machine learning process 200 in which a sheath of multiple layers 203, 204, and 205 increase in projected intensity as the layers progress toward an edge of object 210. The sheath layers terminate into null space 201. As illustrated in FIG. 5B, this null space can be reflected as null space 301 in a 3D representation 300 of this machine learning process, showing intensity values of the sheaths in terms of Z-axis height. According to at least some embodiments of the present disclosure, a machine learning process utilizing antidensity and sheathing principles is utilized to print more precise edges of object 210.

[0118] Correcting for Effects Resulting From the Physical Limitations of Additive Manufacturing Hardware

[0119] Turning next to systems and methods that can correct for inherent defects in the process of pixelating an image having curves, the accuracy and overall presentation of a printed part can be improved through sampling techniques across one or both of the XY plane of the face of the printed part and in the Z direction.

[0120] XY Sampling Techniques

[0121] According to at least some embodiments of an additive manufacturing system configured for use with these techniques, a compression parameter can be defined in relation to a dimension of an image to be printed in the XY plane. The image dimension corresponds to an image resolution higher than what is desired for printing. In at least some embodiments in which the technique is implemented across the XY plane, the image dimension can be an expression of the resolution of an XY pixel array. According to at least some embodiments, these resolutions can include 1080p, 2K, 4K, or 8K resolutions. In some embodiments in which the technique is implemented in the Z direction, the image dimension can be an expression of the number of vertical slices of the 3D model that define the individualized layers to be printed in an additive manufacturing process. These processes are shown in greater detail with respect to the figures described below.

[0122] According to some embodiments, an imaging technique renders a printed part as a scaled image, with the scaled image being larger than the image to be printed by a multiplicative compression parameter. In such embodiments, a first scaled image can then be rendered, with the first scaled image having a scaled image dimension equal to the image dimension of the image to be printed multiplied by the compression parameter. In an example XY embodiment, with a dimension of an image to be printed of 4K (3840 pixels x 2160 pixels) and a compression parameter of 4, the image dimension of the scaled image can be an 8K (7680 pixels x 4320 pixels) image.

[0123] The first scaled image can then be sampled into an evenly-distributed set of segments (or substantially evenly-distributed set of segments), each of the evenly-distributed segments corresponding to a voxel in a part to be printed. In an XY plane example with an 8K to 4K resolution change between the scaled image and the image to be printed, thus defining a compression parameter of four (4) because there are four (4) times as many pixels in the 8K image as in the 4K image, each of these distributed segments can contain four (4) corresponding 8K pixels. As such, each individual pixel of the 4K image to be printed effectively corresponds to four pixels of the corresponding 8K image. An average can then be taken of the intensity values for each of the four (4) 8K pixels within the evenly distributed segment. This average can then be set as the value for a corresponding 4K pixel in an image to be printed.

[0124] An illustrative example of this technique is shown in FIGs. 6A-6B. FIG. 6A depicts a scaled image 800. In scaled image 800, there are two pixel values: “ON” pixels 810 marked with “X” and having a value of 255; and “OFF” pixels 801, which are unmarked and have a null value. This scaled image 800 is four times the resolution of an image that a printer can ultimately print. For instance, in a printer capable of printing at 4K resolution, scaled image 800 would have an 8K resolution, defining a compression parameter of four. A person skilled in the art, in view of the present disclosures, will appreciate other scaling factors are possible beyond four, both lower and higher than four, with lower typically entailing only sampling in either an X dimension or a Y dimension.

[0125] As the scaled image 800 of FIG. 6A cannot typically be printed at its current resolution due to hardware restrictions in 3D printing and other additive manufacturing techniques, the scaled image 800 of FIG. 6A is typically reduced to a sampled image that can be printed by the additive manufacturing device. To accomplish this task, the pixels of the scaled image 800 of FIG. 6A can be grouped together in buckets of four (4), creating a sampled image 850 as shown in FIG. 6B. The sampled image can project intensity values based, at least in part, on the average of each bucket. Accordingly, sampled image 850 has five (5) different classes of pixels: null or "OFF" pixels 801 in which all four of the pixels in the bucket of scaled image 800 had a null value; 64 value pixels 802 in which one of the four pixels in the bucket of scaled image 800 has an "ON" value; 128 value pixels 804 in which two of the four pixels in the bucket of scaled image 800 have an "ON" value; 192 value pixels 806 in which three of the four pixels in the bucket of scaled image 800 have an "ON" value; and 255 value pixels 604 in which four of the four pixels in the bucket of scaled image 800 have an "ON" value. According to at least some embodiments utilizing a DLP printer, the values of the classes of pixels can be set based on duty cycles of mirrors within the projector of a DLP printer, such as mirrors associated with a digital micromirror device (DMD) of a DLP printer.
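
The bucket averaging described above can be sketched in a few lines of Python. The function name, the pure-list image representation, and the 0/255 binary value convention are illustrative assumptions rather than the disclosed implementation:

```python
def downsample_xy(scaled, factor=2):
    """Average each `factor` x `factor` bucket of a high-resolution
    binary image (pixel values 0 or 255) into one greyscale pixel.
    `factor` is the per-axis scaling, so a compression parameter of
    four corresponds to 2x2 buckets."""
    rows, cols = len(scaled) // factor, len(scaled[0]) // factor
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            bucket = [scaled[r * factor + i][c * factor + j]
                      for i in range(factor) for j in range(factor)]
            out[r][c] = round(sum(bucket) / len(bucket))
    return out

# A 4x4 "scaled" patch reduced to 2x2: one bucket fully ON (255),
# one with two ON pixels (128), one with one ON pixel (64), one empty (0).
patch = [[255, 255, 255, 0],
         [255, 255, 255, 0],
         [255, 0,   0,   0],
         [0,   0,   0,   0]]
print(downsample_xy(patch))  # [[255, 128], [64, 0]]
```

The rounded averages land on the 0/64/128/192/255 pixel classes described for the sampled image 850 of FIG. 6B.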

[0126] Z Sampling Techniques

[0127] According to other or the same embodiments in which sampling occurs across the Z dimension, slicing of the 3D model into 2D images to be printed as layers can occur at an integer multiple (the compression parameter) of a layer height (the dimension of an image to be printed). As in the XY plane, groups of sub-slices can be averaged in value across the vertical dimension, in groupings according to the compression parameter, resulting in increased accuracy of sub-voxel geometries in the Z direction. According to a demonstrative embodiment, a segment of a part in the XZ plane printed using systems and methods disclosed herein is depicted in FIG. 7A. In FIG. 7A, a multitude of voxels are depicted with intensity values. In the illustrated embodiment, voxels having a value of 100 will receive the maximum value of intensity from a printing apparatus, while those voxels having a value of 50 will be targeted with only half the energy of the voxels having a value of 100 in the application of a printing method. According to the illustrated embodiment, FIG. 7B shows voxels of a printed part that are "ON" or "OFF" according to a sampling method. In particular, voxels that are on are labeled 720 and/or are shaded in a manner akin to the voxel labeled 720, voxels that are off are labeled 710 and/or are shaded in a manner akin to the voxel labeled 710, and those voxels not yet determined to be "ON" or "OFF" are labeled 715 and 716 and/or are shaded in a manner akin to the respective voxels labeled 715 and 716.
The voxels of this third group, labeled and/or akin to the voxels labeled 715 and 716, can often be multiple voxels tall in the Z direction, which is represented by the two voxels 715 and 716 having multiple shades, i.e., the shaded voxel labeled 715, and/or shaded in a manner akin to the voxel labeled 715, being one shade, and the shaded voxel labeled 716, and/or shaded in a manner akin to the voxel labeled 716, being a different shade, within the group to distinguish unique voxels.

[0128] Turning next to FIGs. 7C and 7D, a magnified area 799 of the embodiment of FIG. 7B is shown. In particular, FIG. 7D shows the magnified area of FIG. 7C. FIG. 7D shows several voxels of FIG. 7C that are subdivided, in this case into five (5) subdivisions due to a compression parameter of five (5). In methods understood in the art, which average the "ON" and "OFF" values of FIG. 7B, a printer will print voxels according to the intensity value averages of FIG. 7C. Using an improved sampling method consistent with the present disclosure, each voxel of FIG. 7A receives a projected intensity based on the average values of the sub-voxels represented in FIG. 7D, which have been subdivided according to the compression parameter.
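
The Z-direction grouping can be sketched analogously to the XY case. The helper name, the list-of-2D-slices representation, and the 0/100 intensity convention are illustrative assumptions:

```python
def average_sub_slices(sub_slices, compression=5):
    """Collapse vertical sub-slices into printable layers by averaging
    the intensity of each pixel across groups of `compression`
    consecutive sub-slices, capturing sub-voxel geometry in Z."""
    layers = []
    for start in range(0, len(sub_slices), compression):
        group = sub_slices[start:start + compression]
        rows, cols = len(group[0]), len(group[0][0])
        layers.append([[round(sum(s[r][c] for s in group) / len(group))
                        for c in range(cols)] for r in range(rows)])
    return layers

# Five sub-slices of a single 1x2-pixel layer: the left pixel is solid
# in all five sub-slices, the right pixel in only two of the five.
sub_slices = [[[100, 100]], [[100, 100]],
              [[100, 0]], [[100, 0]], [[100, 0]]]
print(average_sub_slices(sub_slices))  # [[[100, 40]]]
```

The right pixel receives a partial intensity (40 of 100) proportional to how much of the layer height its geometry actually occupies, analogous to the partially filled voxels of FIG. 7A.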

[0129] Turning to FIGs. 8A-8B and 9A-9B, a part 760 printed according to known methods in the art is shown in FIG. 8A, in contrast with a part 770 printed using a sampling method as disclosed herein, shown in FIG. 8B. FIGs. 9A and 9B depict relief maps 860 and 870 corresponding to the printed parts 760 and 770. As shown in FIG. 8A, the part 760 has a plurality of artifact textures 762, 764, 766, and 768. These features are further included in the relief map 860 as artifact textures 862, 864, 866, and 868. By contrast, these same surface artifact textures are not included in the printed part 770, and are not visible in the relief map 870. The elimination and/or minimization of these artifact textures is a consequence of a super sampling method according to the embodiments disclosed herein. When combined with other techniques disclosed herein, this sampling technique can enable edges and curvatures to be printed with significantly greater precision than methods known in the art. This is evident both by comparing the relief maps 860 and 870 and by comparing the printed parts 760 and 770, with the printed part 770 being more crisp and better defined, among other visually evident differences, as compared to the printed part 760.

[0130] Input images can be in a variety of formats, including but not limited to PNG images and/or SVG images. Images such as SVG images can have geometric precision enabled, which can result in an amplified smoothing effect. This can create an anti-aliased PNG image that is true to the percent of the voxel that should actually be filled, thus providing even more accurate data to compress, which results in an even more accurate effect. The same number of slices can be used (e.g., five images per slice layer) and there can be a conversion process for turning these SVG images into PNG images that can then be used with the same algorithms provided for herein.

[0131] “Skin and Bones ” Techniques

[0132] According to at least some embodiments of the techniques disclosed herein, antidensity grey-scaling can combat the detrimental effects of photoscatter by compensating for the lower polymerization conversion rates along edges and within smaller features of printed parts derived from photoresins that scatter light. In a “skin and bones” technique, a user can create embedded skeletal structures of higher photoconversion in a part, which ultimately can reduce part shrink, reduce part warp, and/or increase part survivability. The embedded skeletal structures or “bones” experience greater, or in some embodiments full, photoconversion and work to mechanically stabilize the geometry of a printed part, protecting against warp during a curing process, and in turn ensuring part longevity in operation. The skin and bones technique can be used in concert with antidensity according to some embodiments, and can also be used in concert with other methods disclosed herein to improve part quality and/or performance. Non-limiting alternative terminology that can be used in lieu of “skin and bones” is “creating embedded skeletal structures of higher photoconversion” and/or “skeletal structures for improved photoconversion.”

[0133] According to some embodiments, these anti-density approaches can result in grey-scaled slices that have darker interior regions and brighter edges and small features. In general, these approaches can lead to homogenous polymerization conversion across a part, from large features to smaller features and edges. However, some embodiments of filled photoresin systems can demonstrate that higher levels of polymerization conversion lead to increased part stability, and can reduce warping of final parts. Achieving sufficient polymerization conversion to prevent these undesirable effects can require large doses that offset the benefits of anti-density greyscaling. In such embodiments, if anti-density greyscaling slices are used to print at too large of a dose, the ability to resolve fine features diminishes.

[0134] Disclosed herein is an embodiment that allows anti-density greyscale to drive the dose around the edges of parts while achieving a higher dose in the core of features, which has the mutual benefit of better resolving edges and small features with anti-density and achieving higher polymerization conversion in the core of a part during printing. According to at least some of such embodiments, a binary image stack can be converted into a 3D matrix. Once this 3D matrix is completed, any of, for example, a taxi-cab, Euclidean, or hybrid three-dimensional distance transform can be performed on that matrix. This approach can assign to each element of the matrix, which can represent a pixel in the final part, a value that contains its distance to the nearest empty pixel (or pixel with a value of 0) that it sees across three dimensions. According to at least some such embodiments, a distance transform can then be iterated on for each 2D array or slice, representing each image that can be projected onto the resin during additive manufacturing. Such embodiments can apply a "skin" variable to dictate what distance from the void pixels defines the "skin;" in contrast, other values can be designated as the "bone" pixels. These "skin" and "bone" pixels can then be split into two separate matrices, with the "skin" being run through an anti-density process while the "bones" pixels can be either left white or dyed a grey color, for example. The two images can then be combined using matrix addition and the final matrix can be converted into an encoded image that can be read by a projector.
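
A minimal sketch of the taxi-cab distance transform and the skin/bone split follows. For brevity it operates on a single 2D slice rather than the full 3D matrix, uses a breadth-first search rather than an optimized transform, and the function names and 0/1 encoding are assumptions:

```python
from collections import deque

def taxicab_distance(matrix):
    """Multi-source BFS assigning each solid (1) element its taxi-cab
    distance to the nearest empty (0) element; empty elements stay 0.
    A 2D slice is used here for brevity, though the disclosure applies
    the transform across the full 3D matrix."""
    rows, cols = len(matrix), len(matrix[0])
    dist = [[0 if matrix[r][c] == 0 else None for c in range(cols)]
            for r in range(rows)]
    queue = deque((r, c) for r in range(rows) for c in range(cols)
                  if matrix[r][c] == 0)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

def split_skin_and_bones(matrix, skin=1):
    """Split solid pixels into "skin" (within `skin` of a void, to be
    anti-density greyscaled) and "bone" (deeper pixels, left white or
    dyed grey); the two matrices recombine via matrix addition."""
    d = taxicab_distance(matrix)
    rows, cols = len(matrix), len(matrix[0])
    skin_px = [[int(bool(matrix[r][c]) and d[r][c] <= skin)
                for c in range(cols)] for r in range(rows)]
    bone_px = [[int(bool(matrix[r][c]) and d[r][c] > skin)
                for c in range(cols)] for r in range(rows)]
    return skin_px, bone_px

part = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
skin_px, bone_px = split_skin_and_bones(part)
print(skin_px[2], bone_px[2])  # [0, 1, 0, 1, 0] [0, 0, 1, 0, 0]
```

In this toy slice only the center pixel sits more than one step from a void, so it alone is designated "bone" while the surrounding ring becomes "skin."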

[0135] These "skin" and "bones" technique(s) can be used in combination with any of the techniques disclosed herein to further enhance the quality of an additively manufactured part. Likewise, unless otherwise understood by a person skilled in the art to not be feasible, a person skilled in the art will appreciate that the various techniques disclosed herein can be used in conjunction with other disclosed techniques or other techniques known to those skilled in the art. According to some embodiments, "skin" and "bones" processes can be optimized utilizing a low-pass filter to reduce computing load. An embodiment of a "skin" and "bones" print is depicted in FIGs. 10A-10B. FIG. 10A depicts a schematic view of a slice of a part to be printed 900, identifying skin segments 910 and bone segments 920. FIG. 10B depicts a magnified portion of the slice 900 of FIG. 10A, showing these designations in greater detail. A distance 930 between the bone segment 920 and the edge of the skin segment 910 represents the "skin" thickness. This parameter is selected such that the skin is thick enough that scatter from the "bone" does not bleed out and blur the part geometry. This parameter can vary from material to material and can be based, at least in part, on more than photokinetics. A design of experiments can be set up to establish this value for a given material, for example, by attempting multiple values until the desired visual and mechanical results are achieved. These attempts are within the skill of one in the art and, in view of the present disclosures, would not require undue experimentation to achieve.

[0136] “Overhang” Techniques

[0137] As previously noted, it can be advantageous to remove small overhangs associated with print jobs. One method for achieving this small overhang removal is by analyzing each layer slice of the part to identify “islands” of voxels that do not have corresponding voxels in the previous layer slice that would properly support the material created during the printing of that layer. DLP slice images often use white pixels to identify printed voxels, and black pixels to identify non-printed empty space. A pixel position on one slice image can be printed vertically adjacent to the same pixel position on the slice image of the following layer. Therefore, the case of a region with white pixels on a second layer in the same position as a region of black pixels on a first layer can be analyzed as a potential case of an unsupported region in the print. At least some embodiments as disclosed herein address thin spans by altering an image to temporarily remove a set number of pixels from a part edge, slightly shrinking the size of the pixel array, and accordingly creating “islands” where there had previously existed bridges smaller than the edge subtractions. The “island” analysis can then be performed and the edge removal undone.

[0138] According to at least one implementation of an "island" analysis, a first layer and a second layer can be defined such that the first layer is the layer slice image printed directly prior to the second layer. The first layer and the second layer can be part of an image, or alternatively, each can be considered an image. The first layer and the second layer, and thus the image(s) associated with the same, can be part of a build file. Regions in the second layer that have at least one corresponding white pixel below them in the first layer can then be analyzed. Different implementations of an island analysis can have different definitions of what a region means in this context. In one implementation, a region can be any contiguous cluster of white pixels (or a singular color or hue of pixels) and can be identified, for example, using a flood-fill tool provided by an image analysis tool. At least one embodiment of this method can work by defining each white pixel on the first layer as a seed point to "flood" both itself and the second layer with pixels identifying "supported pixels." A flood fill method in this context refers to a common image manipulation algorithm that works by selecting a starting pixel and then filling all neighboring pixels that have an identical color identification. This works well for this use case at least because the pixels on the first layer act as support for pixels on the second layer provided they are connected in groups of pixels, identified by the result of the flood fill. At least some embodiments of this process can iterate through each white pixel on the first layer until each white pixel has been used or self-flooded. The flooded image representing the second layer can subsequently have all isolated "island" regions of non-supported pixels/voxels removed, and this image can be saved as a filtered layer slice image.
According to other embodiments, the definition of a region can be expanded by stating that every white pixel of the second layer must be within a set distance (e.g., number of pixels length) from at least one supporting white pixel in the first layer.
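
The seed-and-flood pass described above can be sketched as follows; the 0/1 pixel encoding and the helper name are illustrative assumptions:

```python
from collections import deque

def remove_islands(first, second):
    """Flood-fill support analysis: keep only those white pixels of
    `second` (the layer printed next) that connect, within `second`,
    to at least one white pixel sitting directly above a white pixel
    of `first` (the layer printed directly prior). Layers are 2D
    lists of 0 (black / empty) and 1 (white / printed)."""
    rows, cols = len(second), len(second[0])
    supported = [[0] * cols for _ in range(rows)]
    # Seed the flood with every second-layer pixel that has support below.
    queue = deque((r, c) for r in range(rows) for c in range(cols)
                  if second[r][c] and first[r][c])
    for r, c in queue:
        supported[r][c] = 1
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and second[nr][nc] and not supported[nr][nc]):
                supported[nr][nc] = 1
                queue.append((nr, nc))
    return supported  # isolated "island" regions have been removed

first = [[1, 1, 0, 0],
         [1, 1, 0, 0]]
second = [[1, 1, 0, 1],  # right-hand pixels form an unsupported island
          [1, 1, 0, 1]]
print(remove_islands(first, second))  # [[1, 1, 0, 0], [1, 1, 0, 0]]
```

The returned filtered layer can then serve as the "first layer" when analyzing the next slice, as described below.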

[0139] At least some other embodiments can utilize a "bridge" analysis. This analysis can be configured similarly to the above "island" analysis, looking below in the first layer for a supporting pixel for the second layer. Prior to identifying and "flooding" the image, the image can be eroded (e.g., shrunk on all edges) by an image kernel, for example in the form of a matrix of pre-determined size, checking that a certain or designated grid pattern of pixels exists in an area or a designated area and, if it does not, marking all pixels in the area. This kernel can be scaled to target bridges of specific sizes that remain non-critical, removing them in this erosion step. The resultant layer image can have its border removed and islands where there had previously been thin spans, the spans having been removed by the erosion. Flooding of the islands can then take place as described in the first implementation example above, effectively removing both islands and bridges. After this "flood" analysis is complete, the image can be dilated (e.g., grown on all edges) by, for example, the same kernel used to erode, bringing any slightly altered edge pixels in the second layer back to their original shape.
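
The erosion and dilation steps can be sketched with a square kernel as below. The intermediate flood step is omitted here because, for this simple example, the erosion alone already eliminates the thin span; function names, the 0/1 encoding, and the kernel size are illustrative assumptions:

```python
def erode(img, k=1):
    """Binary erosion by a (2k+1) x (2k+1) square kernel: a pixel
    survives only if its entire kernel footprint is white and in
    bounds, so spans thinner than the kernel disappear."""
    rows, cols = len(img), len(img[0])
    return [[int(all(0 <= r + dr < rows and 0 <= c + dc < cols
                     and img[r + dr][c + dc]
                     for dr in range(-k, k + 1) for dc in range(-k, k + 1)))
             for c in range(cols)] for r in range(rows)]

def dilate(img, k=1):
    """Binary dilation by the same kernel, restoring the border
    pixels removed during erosion."""
    rows, cols = len(img), len(img[0])
    return [[int(any(0 <= r + dr < rows and 0 <= c + dc < cols
                     and img[r + dr][c + dc]
                     for dr in range(-k, k + 1) for dc in range(-k, k + 1)))
             for c in range(cols)] for r in range(rows)]

# Two 3x3 blocks joined by a one-pixel-thick bridge (middle row).
layer = [[1, 1, 1, 0, 0, 0, 1, 1, 1],
         [1, 1, 1, 1, 1, 1, 1, 1, 1],
         [1, 1, 1, 0, 0, 0, 1, 1, 1]]
print(dilate(erode(layer)))  # thin bridge removed; blocks restored
```

Eroding shrinks each block to its center pixel and deletes the one-pixel bridge entirely; dilating by the same kernel grows the blocks back to their original footprint without reintroducing the span.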

[0140] This process can repeat with all layers of the print, using the filtered result from the previous layer as the “first layer” supporting layer for the analysis of the next layer.

[0141] FIGs. 11A-11D depict an example of an "island" analysis with an original first layer 1210 in FIG. 11A, an original second layer 1220 in FIG. 11B, an analyzed second layer 1230 in FIG. 11C, and a finalized second layer 1240 in FIG. 11D. In the illustrated embodiment, the first layer 1210 is compared with the second layer 1220 to establish that there is an overhang 1235 as identified on the analyzed second layer 1230. This can be identified through "flooding" the image, thereby identifying two (or at least two in some instances) distinct bodies of pixels, both those that were present in a previous layer and the newly presented "island" 1235. This can be further confirmed by comparing with the previous image in an overlay, asserting that there are no supporting pixels from the previous layer 1210. If there is no supporting pixel group, this can be confirmed as an island and colored as such, as shown with the island 1235. The finalized second layer 1240 takes the overhang 1235 as identified and removes all associated pixels, resulting in the new image region 1245. This is the result of the second layer introducing an unsupported "island" region in the top right in the form of the overhang 1235.

[0142] FIGs. 12A-12D depict an example of “bridge” filtering with two layer slices, a first layer 1310 at FIG. 12A, a second layer 1320 at FIG. 12B, an analyzed second layer 1330 in FIG. 12C, and a finalized second layer 1340 in FIG. 12D. The analyzed second layer 1330 shows a “bridge” region 1335 as a result of a comparison of the first layer 1310 and the second layer 1320. The bridge region 1335 can then be removed from the finalized second layer 1340. In this analysis, the second layer introduces an unsupported “bridge” region 1335 in the bottom left that would be unaffected by an “island” analysis as it is not a distinct body that can be identified by flooding the image. The “bridge” region 1335 is then processed to identify the region, implementing erosion using the provided image kernel. This removes a fixed matrix array of pixels from the black and white pixel border around the entirety of the layer image. In the case of the “bridge” region, a separation can be created at the thinnest connections as it is within the provided kernel shape. This separation can then create two distinct bodies within the image that can be identified and removed by employing the flooding technique as with “island” analysis. After flooding and removing the artificial islands created by the “bridge” analysis, changes from the kernel can be undone in a dilation operation, adding back the border pixels removed during erosion to ensure original dimensions are maintained. This results in removal of region 1335, as accounted for in the new image region 1345 in FIG. 12D.

[0143] The present disclosures address the aforementioned deficiencies of current methodologies used in DLP additive manufacturing. More particularly, the systems and methods provided can apply a transform(s) on the input image that can compensate for the physical scattering of light, resulting in a better approximation to the desired dose and hence to the desired geometry. Several different approaches for this digital filter methodology (e.g., projected image transformations) have been reduced to practice, including, but not limited to, using anti-gaussian kernels, modified Sobel kernels, unsharp masking kernels, and many other possibilities not necessarily limited to kernels, such as an iterative approach and/or a machine-learning-based approach. According to at least some embodiments of the systems and methods disclosed herein, an iterative approach for addressing printing scatter can include the steps of: (1) making an educated determination or guess about what the transformed image should be to offset the detrimental effects due to physical properties of matter and light interaction; (2) projecting that transformed image during a print and/or a simulation of a print; (3) characterizing the outcome of the print and/or the simulation of the print; and then (4) iterating back to (1) with a more educated determination or guess and continuing through this iterative process until a satisfactory result is achieved.
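
Steps (1) through (4) can be sketched as a simple fixed-point loop. The three-tap blur standing in for the print simulation, the step size, and the one-dimensional dose profile below are all illustrative assumptions; a real implementation would characterize an actual print or a physically grounded simulation:

```python
import numpy as np

def simulate_scatter(projected):
    """Toy stand-in for characterizing a print (steps (2)-(3)):
    a 1D three-tap blur smearing dose into neighboring pixels."""
    padded = np.pad(projected, 1, mode="edge")
    return 0.1 * padded[:-2] + 0.8 * padded[1:-1] + 0.1 * padded[2:]

def iterate_projection(desired, steps=100, rate=1.0):
    """Refine the projected image until the simulated outcome matches
    the desired dose (step (4) looping back to step (1))."""
    guess = desired.astype(float).copy()            # step (1): first guess
    for _ in range(steps):
        outcome = simulate_scatter(guess)           # steps (2)-(3)
        guess = guess + rate * (desired - outcome)  # step (4): refine
    return guess

desired = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
projected = iterate_projection(desired)
# The converged projection over-drives the feature's edge pixels
# relative to its interior, mirroring the edge-brightening behavior
# of the filters described herein.
```

With this mild blur the update converges quickly; note that the solution the loop finds is itself edge-brightened, which is consistent with the general character of the disclosed transformations.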

[0144] According to at least some embodiments of the systems and methods disclosed herein, a machine-learning approach can compare large datasets of transformed images and associated outcomes and make predictions for transformed images that can result in satisfactory printing outcomes. There can be many algorithms for machine-learning, including but not limited to random forest, neural networks, and others known to those skilled in the art. By way of further non-limiting example of the scope of digital transformations provided for herein, while the present descriptions related to “kernels” can include calculating the transformation at a pixel by using information about its nearby pixels in a two-dimensional context, i.e., based on each slice, the present disclosure also contemplates the ability to utilize digital transformations in a three-dimensional context. That is, kernels and other digital transformations can be implemented based on nearby pixels in layers above and below the slice.

[0145] According to at least some embodiments of the systems and methods disclosed herein, each unique kernel exists as a tool in a toolbox of kernels that can be employed to counter the different possible scattering schema unique to each resin system. In other or the same embodiments, a general feature of these digital transformations, or filters, is that the resultant projected images have brighter edges and effectively deliver higher “projected” dosages to edges and across small features. In at least some of such embodiments, this approach can also be used to “characterize” the scatter characteristics of the resin system in question. Thus, the present disclosure not only provides for the implementation of the digital transformations for printing components, but also allows for the usage of the digital transformations as a diagnostic tool.
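
As one concrete, purely illustrative instance of such an edge-brightening filter, an unsharp-masking style kernel raises the "projected" dosage along edges and small features while passing large uniform regions through unchanged. The kernel taps, the `amount` parameter, and the function name are assumptions, not the disclosed kernels:

```python
import numpy as np

def brighten_edges(slice_img, amount=1.0):
    """Unsharp-mask style filter: identity + amount * (identity - box
    blur). Edges and small features receive a higher "projected"
    dosage; uniform interiors are unchanged."""
    blur = np.full((3, 3), 1.0 / 9.0)
    identity = np.zeros((3, 3))
    identity[1, 1] = 1.0
    kernel = identity + amount * (identity - blur)
    padded = np.pad(slice_img.astype(float), 1, mode="edge")
    out = np.empty(slice_img.shape)
    for r in range(slice_img.shape[0]):
        for c in range(slice_img.shape[1]):
            out[r, c] = np.sum(padded[r:r + 3, c:c + 3] * kernel)
    return np.clip(out, 0.0, 255.0)

slice_img = np.zeros((5, 5))
slice_img[1:4, 1:4] = 100.0  # a small square feature
out = brighten_edges(slice_img)
# Interior stays ~100, edge pixels rise above 100, background stays 0.
```

The small 3x3 footprint also illustrates why compact kernels process quickly relative to wide anti-gaussian kernels, as discussed below.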

[0146] In some embodiments utilizing a modified Sobel kernel, a modified Sobel kernel may offer advantages over an anti-gaussian kernel. In at least some of such embodiments, Sobel kernels require fewer parameters to be determined for successful printing outcomes. In addition, because the modified Sobel kernel relies on a smaller kernel size, its processing time can be faster than that of an anti-gaussian kernel.

[0147] According to at least some embodiments, the use of machine learning can be implemented to best predict a projected dosage by an additive manufacturing printer that can result in a received dosage by a printed part that most closely represents the desired dosage in an ideal scenario free of the interactive effects of light and matter.

[0148] According to at least some embodiments, several different approaches for a digital transformation methodology use anti-gaussian kernels, modified Sobel kernels, unsharp masking kernels, and many other possibilities (e.g., the application of kernels in a three-dimensional context), and such embodiments are not necessarily limited to kernels, extending, for example, to an iterative approach or a machine-learning-based approach. Each unique kernel, or other transformation(s)/filter(s), can exist as a tool in a toolbox of kernels, or other transformations/filters, that can be employed, for example, to counter the different possible scattering schema unique to each resin system. In at least some embodiments, a feature of these transformations/filters can be that the resultant projected images have brighter edges and effectively deliver higher "projected" dosages to edges and across small features. This approach can also be used to "characterize" the scatter characteristics of the resin system in question.

[0149] According to at least some embodiments, techniques described herein are implemented as part of a workflow. In an implementation of a workflow 1400 of FIG. 13, the workflow 1400 can incorporate the application of the digital transformations to fight detrimental effects of scatter directly into the printing configuration application, and the build file can be completed without the need for user intervention. The workflow 1400 depicts a part designed in CAD at step 1410 imported into a printing configuration application at step 1420, and the build can be designed at step 1430. The file generated by the design build action at step 1430 can be imported onto a 3D printer at step 1440, using any technique known to those skilled in the art for such importation. The material configuration action of step 1450 can subsequently be performed in view, at least in part, of the imported build file and/or user preferences. Further, a processing of the build file step, step 1460, can be performed at the level of the 3D printer. The actions performed in conjunction with step 1460 can include the generation of slice images and/or instructions (e.g., code, software, computer product, etc.) for driving an additive manufacturing device and the application of digital filter(s) to fight detrimental effects of scatter. As shown, that action can occur after selecting a material configuration, although in other embodiments it can occur before, simultaneously, and/or in conjunction with the material configuration selection step 1450. The processing of the build step 1460 can occur in manners disclosed herein or otherwise known to those skilled in the art in view of the present disclosures. The build can then be initiated at step 1470. In such embodiments, part of the material configuration step 1450 can include determining or otherwise factoring in the digital transformation parameters required to apply the digital transformations for scatter.
The software can apply the transformation of the slice images at the printer. In at least some exemplary embodiments of the system, the user can select the material that he or she wants to print at the printer. The same build file can be used for different materials. The binary images that come from the slicing process can be transformed using parameters that can be optimized for the selected material. According to some embodiments, the processing of the build file at step 1460 may occur at any point in the workflow prior to the initiation of the build at step 1470 and following the designing of the part in CAD at step 1410.

[0150] According to at least some embodiments, the action of processing the build file can include performing one or more of the various techniques provided for herein. FIGS. 14-20 provide for various sub-workflows (e.g., sub-workflows 1500, 1600, 1700, 1800, 1900, 2000, 2100) that can be implemented in conjunction with the workflow 1400, most often as part of the process build file action 1460. FIG. 14 provides for applying a banding technique as a sub-workflow 1500. This example sub-workflow 1500 includes selecting an original image at step 1510. A banding parameter is then defined at step 1520, and pixels located within that banding parameter are identified at step 1530. The band pixels are then dyed a shade of grey at step 1540. Further details about how banding technique(s) can be performed are discussed above with respect to FIGS. 2A-3B, and related descriptions.
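
The four banding steps can be sketched as below; the grey level, the band width, and the function name are illustrative assumptions, and in practice these would be tuned per material:

```python
def apply_banding(img, band=1, grey=128):
    """Dye every white (255) pixel lying within `band` pixels of a
    black (0) pixel to a grey value, i.e., identify the pixels within
    the banding parameter and dye the band a shade of grey."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(rows):
        for c in range(cols):
            if img[r][c] != 255:
                continue
            near_void = any(
                img[nr][nc] == 0
                for nr in range(max(0, r - band), min(rows, r + band + 1))
                for nc in range(max(0, c - band), min(cols, c + band + 1)))
            if near_void:
                out[r][c] = grey
    return out

img = [[0, 0, 0, 0, 0],
       [0, 255, 255, 255, 0],
       [0, 255, 255, 255, 0],
       [0, 255, 255, 255, 0],
       [0, 0, 0, 0, 0]]
print(apply_banding(img)[2])  # [0, 128, 255, 128, 0]
```

Only the outer ring of the white feature is dyed; the interior pixel, which lies outside the banding parameter, keeps its full value.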

[0151] FIG. 15 provides for applying a sheathing technique as a sub-workflow 1600. As with the sub-workflow 1500, the use of the sheathing technique(s) associated with the sub-workflow 1600 can often occur as part of the process build file action 1460 illustrated in FIG. 13. This example sub-workflow 1600 includes selecting an original image at step 1610. A scaled image is defined at step 1620 in view of the original image and an expansion parameter, and the original image is subtracted from the scaled image to define a sheath at step 1630. The sheath is then dyed a shade of grey at step 1640, and the sheath is combined with the original image at step 1650. Further details about how sheathing technique(s) can be performed are discussed above with respect to FIGS. 4A-5B, and related descriptions.
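
The five sheathing steps can be sketched in one pass: growing the white region by the expansion parameter, keeping only the grown ring (the scaled image minus the original) as the sheath, dyeing it grey, and recombining. The grey level, expansion parameter, and function name are illustrative assumptions:

```python
def add_sheath(img, expand=1, grey=64):
    """Grow the white (255) region by `expand` pixels, dye the grown
    ring (scaled image minus original) grey, and recombine it with
    the original image."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(rows):
        for c in range(cols):
            if img[r][c] != 0:
                continue  # sheath pixels come only from empty space
            near_part = any(
                img[nr][nc] == 255
                for nr in range(max(0, r - expand), min(rows, r + expand + 1))
                for nc in range(max(0, c - expand), min(cols, c + expand + 1)))
            if near_part:
                out[r][c] = grey  # dyed sheath terminating into null space
    return out

img = [[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 255, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]
print(add_sheath(img)[1])  # [0, 64, 64, 64, 0]
```

A single fully-on pixel acquires a one-pixel grey sheath on all sides, which terminates into the surrounding null space as described for FIGS. 4A-5B.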

[0152] FIG. 16 provides for applying an XY sampling technique as a sub-workflow 1700. As with the sub-workflows 1500 and 1600, the use of the XY sampling technique(s) associated with the sub-workflow 1700 can often occur as part of the process build file action 1460 illustrated in FIG. 13. This example sub-workflow 1700 includes defining a compression parameter at step 1710. A first scaled image is defined at step 1720, and then the first scaled image is sampled into segments at step 1730. The average intensity of each segment is calculated at step 1740, and then a sample image is rendered at step 1750. Further details about how XY sampling technique(s) can be performed are discussed above with respect to FIGS. 6A-6B, and related descriptions.

[0153] FIG. 17 provides for applying a Z sampling technique as a sub-workflow 1800. As with the sub-workflows 1500, 1600, and 1700, the use of the Z sampling technique(s) associated with the sub-workflow 1800 can often occur as part of the process build file action 1460 illustrated in FIG. 13. This example sub-workflow 1800 includes defining a compression parameter at step 1810, and slicing the image to be printed at step 1820. These slices are then collected into groups according to the compression parameter at step 1830, and the average intensity of each group is calculated at step 1840. The sampled image is then rendered at step 1850. Further details about how Z sampling technique(s) can be performed are discussed above with respect to FIGS. 7A-9B, and related descriptions.

[0154] FIG. 18 provides for applying a skin and bones technique as a sub-workflow 1900. As with the sub-workflows 1500, 1600, 1700, and 1800, the use of the skin and bones technique(s) associated with the sub-workflow 1900 can often occur as part of the process build file action 1460 illustrated in FIG. 13. This example sub-workflow 1900 includes converting a binary image stack to a 3D matrix at step 1910, defining skin pixels at step 1920 and bone pixels at step 1930. Antidensity transformations are then performed on the skin pixels at step 1940. Either simultaneously or separately, the bone pixels are dyed grey at step 1950. Further details about how skin and bones technique(s) can be performed are discussed above with respect to FIGS. 10A-10B, and related descriptions.
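A 2D per-layer sketch of the skin and bones classification can look like the following. Pixels within a chosen depth of a void pixel are treated as skin and interior pixels as bones; the antidensity transformation of step 1940 is application-specific and is left as an identity placeholder here, so the depth, grey value, and function name are all assumptions rather than disclosed specifics:

```python
def skin_and_bones(img, skin_depth=1, bone_grey=0.7):
    """Steps 1920-1950 (2D sketch): classify white pixels as skin (within
    `skin_depth` of a void pixel) or bone (interior), then dye the bones
    grey. The antidensity step on the skin is left as identity."""
    h, w = len(img), len(img[0])

    def near_void(y, x):
        for ny in range(max(0, y - skin_depth), min(h, y + skin_depth + 1)):
            for nx in range(max(0, x - skin_depth), min(w, x + skin_depth + 1)):
                if img[ny][nx] == 0:
                    return True
        return False

    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                # skin stays at full intensity (placeholder for antidensity);
                # bones are dyed grey (step 1950)
                out[y][x] = 1.0 if near_void(y, x) else bone_grey
    return out
```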

[0155] FIG. 19 provides for applying an overhang technique, in this instance one involving an island analysis, as a sub-workflow 2000. As with the sub-workflows 1500, 1600, 1700, 1800, and 1900, the use of the overhang island analysis technique(s) associated with the sub-workflow 2000 can often occur as part of the process build file action 1460 illustrated in FIG. 13. This example sub-workflow 2000 includes defining a first layer and a second layer at step 2010. Regions in the second layer that have at least one white pixel in the first layer below can be identified at step 2020. The first layer can then be modified to cure the at least one white pixel at step 2030. Further details about how overhang technique(s) can be performed are discussed above with respect to FIGS. 11A-11D, and related descriptions.
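One plausible reading of the island analysis is sketched below: connected islands of white pixels in the second layer are found by flood fill, and only islands supported by at least one white pixel directly below in the first layer are kept. The 4-connectivity, list-based images, and function name are illustrative assumptions:

```python
def remove_unsupported_islands(layer1, layer2):
    """Steps 2010-2030 sketch: flood-fill each 4-connected island of white
    (1) pixels in the second layer and keep only islands that sit above
    at least one white pixel of the first layer."""
    h, w = len(layer2), len(layer2[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if layer2[sy][sx] and not seen[sy][sx]:
                stack, island = [(sy, sx)], []
                seen[sy][sx] = True
                while stack:  # flood-fill one island
                    y, x = stack.pop()
                    island.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and layer2[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if any(layer1[y][x] for y, x in island):  # supported island
                    for y, x in island:
                        out[y][x] = 1
    return out
```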

[0156] FIG. 20 provides for applying another overhang technique, in this instance one involving a bridge analysis, as a sub-workflow 2100. As with the sub-workflows 1500, 1600, 1700, 1800, 1900, and 2000, the use of the overhang bridge analysis technique(s) associated with the sub-workflow 2100 can often occur as part of the process build file action 1460 illustrated in FIG. 13. This example sub-workflow 2100 includes defining a first layer and a second layer at step 2110. Regions in the second layer that have at least one shaded pixel below in the first layer can be identified at step 2120. The first layer can then be modified to not cure the at least one shaded pixel at step 2130. Further details about how overhang technique(s) can be performed are discussed above with respect to FIGS. 12A-12D, and related descriptions.
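A minimal sketch of one reading of the bridge analysis follows: where a white pixel of the second layer sits directly above a shaded (grey) pixel of the first layer, the shaded pixel is zeroed so it is not cured. Intensities in [0, 1], the in-place-copy approach, and the function name are assumptions for illustration:

```python
def bridge_adjust(layer1, layer2):
    """Steps 2110-2130 sketch: identify white (1) second-layer pixels above
    shaded (0 < v < 1) first-layer pixels, and modify the first layer to
    not cure those shaded pixels."""
    h, w = len(layer1), len(layer1[0])
    out = [row[:] for row in layer1]
    for y in range(h):
        for x in range(w):
            if layer2[y][x] == 1 and 0 < layer1[y][x] < 1:
                out[y][x] = 0  # step 2130: do not cure the shaded pixel
    return out
```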

[0157] A person skilled in the art will appreciate that any number of the sub-workflows 1500, 1600, 1700, 1800, 1900, 2000, and 2100 can be utilized in the same print job. Further, variations of the variously disclosed sub-workflows are possible in view of the other teachings provided for herein, as well as the knowledge of the skilled person(s) in the art.

[0158] The present disclosure introduces not only the implementation of digital transformations for printing components, but also the usage of the digital transformations as a diagnostic tool according to at least some embodiments. One or more digital transformations can be used to show how well various resins cure with respect to the digital transformation being used and the amount of light exposure. By examining these prints, various diagnostic information regarding the behavior of scattering in a particular print medium can be determined. In some instances, the diagnostics can be done in real-time, or near real-time, to allow for adjustments to the print job to be made in response to the same using some combination of controllers and/or sensors in a feedback loop(s). By analyzing how the digital transformations disclosed herein impact the resulting prints, one can effectively model how scattering impacts a printing medium.

[0159] According to at least some embodiments, the transformation can be implemented using image convolution, an alternative image processing technique. Implementation of the present disclosures on a computer readable medium can include a central processing unit (CPU), memory, and/or support circuits (or I/O), among other features. In embodiments having a memory, that memory can be connected to the CPU, and may be one or more of a readily available memory, such as a read-only memory (ROM), a random access memory (RAM), floppy disk, hard disk, cloud-based storage, or any other form of digital storage, local or remote. Software instructions, algorithms, and data can be coded and stored within the memory for instructing the CPU. Support circuits can also be connected to the CPU for supporting the processor in a conventional manner. The support circuits may include conventional cache, power supplies, clock circuits, input/output circuitry, and/or subsystems, and the like. Output circuitry can include circuitry allowing the processor to control a magnetic field generator, light source, and/or other components of an additive photopolymerization printer. In some embodiments, a user can selectively employ the methods described herein, or otherwise derivable from the present disclosure, within image slices produced in the computer readable medium. Convolution can be performed efficiently, but it can be further optimized by leveraging the graphics processing unit (GPU).
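A generic 2D image convolution of the kind referenced above can be sketched as follows; this is a plain valid-mode convolution over nested-list images (the kernel is flipped, as convolution proper requires), not a specific kernel from the disclosure:

```python
def convolve2d(img, kernel):
    """Valid-mode 2D convolution of a greyscale image with a small kernel.
    The kernel is flipped in both axes, distinguishing convolution from
    correlation."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img) - kh + 1, len(img[0]) - kw + 1
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = sum(img[y + i][x + j] * kernel[kh - 1 - i][kw - 1 - j]
                            for i in range(kh) for j in range(kw))
    return out
```

A 3x3 averaging kernel applied to a uniform white region, for instance, returns the same uniform intensity.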

[0160] FIG. 21 provides for one non-limiting example of a computer system 1000 upon which actions, provided for in the present disclosure, including but not limited to instructions for driving an additive manufacturing device, can be built, performed, trained, etc. The system 1000 can include a processor 1010, a memory 1020, a storage device 1030, and an input/output device 1040. Each of the components 1010, 1020, 1030, and 1040 can be interconnected, for example, using a system bus 1050. The processor 1010 can be capable of processing instructions for execution within the system 1000. The processor 1010 can be a single-threaded processor, a multi-threaded processor, or similar device. The processor 1010 can be capable of processing instructions stored in the memory 1020 or on the storage device 1030. The processor 1010 may execute operations such as generating build instructions and/or applying antidensity transformations, among other features described in conjunction with the present disclosure.

[0161] The memory 1020 can store information within the system 1000. In some implementations, the memory 1020 can be a computer-readable medium. The memory 1020 can, for example, be a volatile memory unit or a non-volatile memory unit. In some implementations, the memory 1020 can store information related to the instructions for manufacturing sensing arrays, among other information.

[0162] The storage device 1030 can be capable of providing mass storage for the system 1000. In some implementations, the storage device 1030 can be a non-transitory computer-readable medium. The storage device 1030 can include, for example, a hard disk device, an optical disk device, a solid-state drive, a flash drive, magnetic tape, or some other large capacity storage device. The storage device 1030 may alternatively be a cloud storage device, e.g., a logical storage device including multiple physical storage devices distributed on a network and accessed using a network. In some implementations, the information stored on the memory 1020 can also or instead be stored on the storage device 1030.

[0163] The input/output device 1040 can provide input/output operations for the system 1000. In some implementations, the input/output device 1040 can include one or more of network interface devices (e.g., an Ethernet card), a serial communication device (e.g., an RS-232 port), and/or a wireless interface device (e.g., a short-range wireless communication device, an 802.11 card, a 3G wireless modem, a 4G wireless modem, or a 5G wireless modem). In some implementations, the input/output device 1040 can include driver devices configured to receive input data and send output data to other input/output devices, e.g., a keyboard, a printer, and display devices. In some implementations, mobile computing devices, mobile communication devices, and other devices can be used.

[0164] In some implementations, the system 1000 can be a microcontroller. A microcontroller is a device that contains multiple elements of a computer system in a single electronics package. For example, the single electronics package could contain the processor 1010, the memory 1020, the storage device 1030, and input/output devices 1040.

[0165] The present disclosure also accounts for providing a non-transient computer readable medium capable of storing instructions. The instructions, when executed by a computer system like the system 1000, can cause the system 1000 to perform the various functions and methods described herein for printing, forming build files, etc.

[0166] Examples of the above-described embodiments can include the following:

1. A method for building a printed part by additive manufacturing, comprising: applying one or more digital transformations to a build file by: defining a first layer of an image of the build file; defining a second layer of the image of the build file, wherein the second layer is to be printed after the first layer, and at least some pixels in the second layer correspond to pixels in the first layer; comparing the first layer to the second layer to establish a location of an unsupported region in the second layer; and removing the unsupported region in the second layer.

2. The method of example 1, wherein comparing the first layer to the second layer to establish a location of an unsupported region in the second layer further comprises: eroding the image by an image kernel.

3. The method of example 2, wherein eroding the image by an image kernel further comprises: shrinking all edges of the image.

4. The method of example 2 or 3, wherein eroding the image by an image kernel further comprises: applying a matrix of a pre-determined size to the image; checking that a designated grid pattern of pixels exists in a designated area; and if the designated grid pattern of pixels does not exist in the designated area, marking all pixels in the designated area.

5. The method of any of examples 2 to 4, wherein removing the unsupported region in the second layer further comprises the image having at least one of one or more borders or one or more islands removed where there had been thin spans before having been removed by eroding the image.
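As a non-limiting illustration of the erosion of examples 2 to 4, the following Python sketch applies a k-by-k kernel and marks (clears) any area where the full grid pattern of white pixels is not present, shrinking all edges of the image; the kernel size and nested-list representation are assumptions:

```python
def erode(img, k=3):
    """Examples 2-4 sketch: a pixel remains white (1) only when the full
    k-by-k neighbourhood around it is white and in-bounds; otherwise the
    pixel is marked as 0, shrinking all edges of the image."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(all(
                0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)))
    return out
```

On a solid 5x5 block, a 3x3 kernel leaves only the interior 3x3 region white, which is how thin spans and borders disappear per example 5.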

6. The method of any of examples 1 to 5, further comprising: comparing the first layer to the second layer to establish a location of an overhang in the second layer; and removing the overhang in the second layer.

7. The method of example 6, wherein comparing the first layer to the second layer to establish a location of an overhang in the second layer further comprises: flooding the build file to identify at least two distinct bodies of pixels, the at least two distinct bodies of pixels comprising one or more pixels present in the first layer and one or more pixels present in the second layer, wherein the one or more pixels in the first layer includes at least some pixels that are aligned with akin pixels of the one or more pixels in the second layer.

8. The method of example 7, wherein flooding the build file to identify at least two distinct bodies of pixels further comprises: defining each white pixel on the first layer as a seed point; and operating a flood-fill tool to flood both the first layer and the second layer with pixels that identify supported pixels.
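The flood-fill of example 8 can be sketched as below: every white pixel of the first layer is a seed, the flood flows laterally within each layer and vertically between the two layers, and the reachable white pixels of the second layer are reported as supported. The connectivity choice and function name are illustrative assumptions:

```python
from collections import deque

def flooded_support(layer1, layer2):
    """Example 8 sketch: seed a flood fill at each white pixel of the first
    layer and flood both layers to identify supported second-layer pixels."""
    h, w = len(layer1), len(layer1[0])
    layers = [layer1, layer2]
    q = deque((0, y, x) for y in range(h) for x in range(w) if layer1[y][x])
    seen = set(q)
    while q:
        z, y, x = q.popleft()
        # lateral neighbours in the same layer, plus the pixel directly
        # above/below in the other layer
        for nz, ny, nx in ((z, y - 1, x), (z, y + 1, x),
                           (z, y, x - 1), (z, y, x + 1), (1 - z, y, x)):
            if (0 <= ny < h and 0 <= nx < w
                    and layers[nz][ny][nx] and (nz, ny, nx) not in seen):
                seen.add((nz, ny, nx))
                q.append((nz, ny, nx))
    return {(y, x) for z, y, x in seen if z == 1}  # supported pixels, layer 2
```

Per example 9, any white second-layer pixel not in the returned set would then be removed.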

9. The method of example 7 or 8, further comprising: identifying each white pixel of the second layer that is within a designated distance of at least one supporting white pixel of the first layer; and removing any white pixel of the second layer that is not identified as being within the designated distance.

10. The method of any of examples 7 to 9, further comprising: comparing a third layer of the image of the build file that precedes the first layer and the second layer in an overlay to confirm the location of the overhang.

11. The method of any of examples 1 to 10, wherein applying one or more digital transformations to a build file further comprises: defining a compression parameter and at least one dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first dimension of an image to be printed multiplied by the compression parameter; sampling the first scaled image into a plurality of evenly-distributed segments, the total number of segments being equal to the dimension of an image to be printed; calculating an average intensity value across each evenly-distributed segment of the plurality of evenly-distributed segments; and rendering a first sampled image, having an image dimension equivalent to the first dimension of an image to be printed, the first sampled image being comprised of the average intensity values.

12. The method of example 11, wherein the at least one dimension of an image to be printed comprises a sum of at least one voxel in a Z direction.

13. The method of example 11 or 12, wherein the at least one dimension of an image to be printed comprises a sum of the total number of pixels in an array defined by an X direction and a Y direction.

14. The method of any of examples 11 to 13, wherein the compression parameter comprises a multiple of two.

15. The method of example 14, wherein the compression parameter comprises a multiple of four.

16. The method of any of examples 1 to 15, wherein applying one or more digital transformations to a build file further comprises: defining a banding parameter and at least one dimension of an image to be printed; rendering a first original image in a first dimension of an image to be printed; identifying edge pixels within the banding parameter of the first original image; and dyeing the identified band pixels a shade of grey to create a dyed band image.

17. The method of example 16, wherein the banding parameter comprises one pixel.

18. The method of example 17, wherein the banding parameter comprises at least two pixels.

19. The method of any of examples 1 to 18, wherein applying one or more digital transformations to a build file further comprises: defining an expansion parameter and at least one dimension of an image to be printed; rendering a first original image in the first dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first original image plus the expansion parameter; subtracting the original image from the scaled image to create an image sheath; dyeing the sheath a subcritical shade of grey to create a dyed sheath; and adding the dyed sheath to the original image in the first dimension of an image to be printed to create a first projected image.

20. The method of example 19, wherein the expansion parameter comprises one pixel.

21. The method of any of examples 1 to 20, wherein applying one or more digital transformations to a build file further comprises: defining a first collection of pixels, the first collection of pixels defined as within a first distance of void pixels of a build image; defining a second collection of pixels, the second collection of pixels defined as those pixels that are neither void pixels nor part of the first collection of pixels; and performing anti-density operations on the first collection of pixels.

22. The method of example 21, wherein the applying one or more digital transformations to a build file further comprise performing a grey-scale on the second collection of pixels.

23. The method of example 22, wherein the grey-scale is substantially uniform across all pixels in the second collection of pixels.

24. The method of any of examples 21 to 23, wherein the members of the first collection of pixels are defined by a low-pass filter.

25. A method for building a printed part by additive manufacturing, comprising: applying one or more digital transformations to a build file by: defining a first layer of an image of the build file; defining a second layer of the image of the build file, wherein the second layer is to be printed after the first layer, and at least some pixels in the second layer correspond to pixels in the first layer; comparing the first layer to the second layer to establish a location of an overhang in the second layer; and removing the overhang in the second layer.

26. The method of example 25, wherein comparing the first layer to the second layer to establish a location of an overhang in the second layer further comprises: flooding the build file to identify at least two distinct bodies of pixels, the at least two distinct bodies of pixels comprising one or more pixels present in the first layer and one or more pixels present in the second layer, wherein the one or more pixels in the first layer includes at least some pixels that are aligned with akin pixels of the one or more pixels in the second layer.

27. The method of example 26, wherein flooding the build file to identify at least two distinct bodies of pixels further comprises: defining each white pixel on the first layer as a seed point; and operating a flood-fill tool to flood both the first layer and the second layer with pixels that identify supported pixels.

28. The method of example 26 or 27, further comprising: identifying each white pixel of the second layer that is within a designated distance of at least one supporting white pixel of the first layer; and removing any white pixel of the second layer that is not identified as being within the designated distance.

29. The method of any of examples 26 to 28, further comprising: comparing a third layer of the image of the build file that precedes the first layer and the second layer in an overlay to confirm the location of the overhang.

30. The method of any of examples 25 to 29, wherein applying one or more digital transformations to a build file further comprises: defining a compression parameter and at least one dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first dimension of an image to be printed multiplied by the compression parameter; sampling the first scaled image into a plurality of evenly-distributed segments, the total number of segments being equal to the dimension of an image to be printed; calculating an average intensity value across each evenly-distributed segment of the plurality of evenly-distributed segments; and rendering a first sampled image, having an image dimension equivalent to the first dimension of an image to be printed, the first sampled image being comprised of the average intensity values.

31. The method of example 30, wherein the at least one dimension of an image to be printed comprises a sum of at least one voxel in a Z direction.

32. The method of example 30 or 31, wherein the at least one dimension of an image to be printed comprises a sum of the total number of pixels in an array defined by an X direction and a Y direction.

33. The method of any of examples 30 to 32, wherein the compression parameter comprises a multiple of two.

34. The method of example 33, wherein the compression parameter comprises a multiple of four.

35. The method of any of examples 25 to 34, wherein applying one or more digital transformations to a build file further comprises: defining a banding parameter and at least one dimension of an image to be printed; rendering a first original image in a first dimension of an image to be printed; identifying edge pixels within the banding parameter of the first original image; and dyeing the identified band pixels a shade of grey to create a dyed band image.

36. The method of example 35, wherein the banding parameter comprises one pixel.

37. The method of example 36, wherein the banding parameter comprises at least two pixels.

38. The method of any of examples 25 to 37, wherein applying one or more digital transformations to a build file further comprises: defining an expansion parameter and at least one dimension of an image to be printed; rendering a first original image in the first dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first original image plus the expansion parameter; subtracting the original image from the scaled image to create an image sheath; dyeing the sheath a subcritical shade of grey to create a dyed sheath; and adding the dyed sheath to the original image in the first dimension of an image to be printed to create a first projected image.

39. The method of example 38, wherein the expansion parameter comprises one pixel.

40. The method of any of examples 25 to 39, wherein applying one or more digital transformations to a build file further comprises: defining a first collection of pixels, the first collection of pixels defined as within a first distance of void pixels of a build image; defining a second collection of pixels, the second collection of pixels defined as those pixels that are neither void pixels nor part of the first collection of pixels; and performing anti-density operations on the first collection of pixels.

41. The method of example 40, wherein the applying one or more digital transformations to a build file further comprise performing a grey-scale on the second collection of pixels.

42. The method of example 41, wherein the grey-scale is substantially uniform across all pixels in the second collection of pixels.

43. The method of any of examples 40 to 42, wherein the members of the first collection of pixels are defined by a low-pass filter.

44. A method for building a printed part by additive manufacturing, the method comprising: applying one or more digital transformations to a build file by: defining a compression parameter and at least one dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first dimension of an image to be printed multiplied by the compression parameter; sampling the first scaled image into a plurality of evenly-distributed segments, the total number of segments being equal to the dimension of an image to be printed; calculating an average intensity value across each evenly-distributed segment of the plurality of evenly-distributed segments; and rendering a first sampled image, having an image dimension equivalent to the first dimension of an image to be printed, the first sampled image being comprised of the average intensity values.

45. The method of example 44, wherein the at least one dimension of an image to be printed comprises a sum of at least one voxel in a Z direction.

46. The method of example 44 or 45, wherein the at least one dimension of an image to be printed comprises a sum of the total number of pixels in an array defined by an X direction and a Y direction.

47. The method of any of examples 44 to 46, wherein the compression parameter comprises a multiple of two.

48. The method of example 47, wherein the compression parameter comprises a multiple of four.

49. A method for building a printed part by additive manufacturing, comprising: applying one or more digital transformations to a build file by: defining a banding parameter and at least one dimension of an image to be printed; rendering a first original image in a first dimension of an image to be printed; identifying edge pixels within the banding parameter of the first original image; and dyeing the identified band pixels a shade of grey to create a dyed band image.

50. The method of example 49, wherein the banding parameter comprises one pixel.

51. The method of example 50, wherein the banding parameter comprises at least two pixels.

52. A method for building a printed part by additive manufacturing, comprising: applying one or more digital transformations to a build file by: defining an expansion parameter and at least one dimension of an image to be printed; rendering a first original image in the first dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first original image plus the expansion parameter; subtracting the original image from the scaled image to create an image sheath; dyeing the sheath a subcritical shade of grey to create a dyed sheath; and adding the dyed sheath to the original image in the first dimension of an image to be printed to create a first projected image.

53. The method of example 52, wherein the expansion parameter comprises one pixel.

54. A method for building a printed part by additive manufacturing, comprising: applying one or more digital transformations to a build file by: defining a first collection of pixels, the first collection of pixels defined as within a first distance of void pixels of a build image; defining a second collection of pixels, the second collection of pixels defined as those pixels that are neither void pixels nor part of the first collection of pixels; and performing anti-density operations on the first collection of pixels.

55. The method of example 54, wherein the applying one or more digital transformations to a build file further comprise performing a grey-scale on the second collection of pixels.

56. The method of example 55, wherein the grey-scale is substantially uniform across all pixels in the second collection of pixels.

57. The method of any of examples 54 to 56, wherein the members of the first collection of pixels are defined by a low-pass filter.

58. An additive manufacturing printer, comprising: a tank configured to have a photopolymer resin material disposed therein; a build plate disposed above the tank and configured to at least move along a vertical axis, away from the tank; a light source configured to project an image of a part to be printed towards the tank; and a processor, configured to: apply one or more digital transformations to a build file, the one or more digital transformations comprising: defining a first layer of an image of the build file; defining a second layer of the image of the build file, wherein the second layer is to be printed after the first layer, and at least some pixels in the second layer correspond to pixels in the first layer; comparing the first layer to the second layer to establish a location of an unsupported region in the second layer; removing the unsupported region in the second layer.

59. The additive manufacturing printer of example 58, wherein comparing the first layer to the second layer to establish a location of an unsupported region in the second layer further comprises: eroding the image by an image kernel.

60. The additive manufacturing printer of example 59, wherein eroding the image by an image kernel further comprises: shrinking all edges of the image.

61. The additive manufacturing printer of example 59 or 60, wherein eroding the image by an image kernel further comprises: applying a matrix of a pre-determined size to the image; checking that a designated grid pattern of pixels exists in a designated area; and if the designated grid pattern of pixels does not exist in the designated area, marking all pixels in the designated area.

62. The additive manufacturing printer of any of examples 59 to 61, wherein removing the unsupported region in the second layer further comprises the image having at least one of one or more borders or one or more islands removed where there had been thin spans before having been removed by eroding the image.

63. The additive manufacturing printer of any of examples 58 to 62, further comprising: comparing the first layer to the second layer to establish a location of an overhang in the second layer; and removing the overhang in the second layer.

64. The additive manufacturing printer of example 63, wherein comparing the first layer to the second layer to establish a location of an overhang in the second layer further comprises: flooding the build file to identify at least two distinct bodies of pixels, the at least two distinct bodies of pixels comprising one or more pixels present in the first layer and one or more pixels present in the second layer, wherein the one or more pixels in the first layer includes at least some pixels that are aligned with akin pixels of the one or more pixels in the second layer.

65. The additive manufacturing printer of example 64, wherein flooding the build file to identify at least two distinct bodies of pixels further comprises: defining each white pixel on the first layer as a seed point; and operating a flood-fill tool to flood both the first layer and the second layer with pixels that identify supported pixels.

66. The additive manufacturing printer of example 64 or 65, further comprising: identifying each white pixel of the second layer that is within a designated distance of at least one supporting white pixel of the first layer; and removing any white pixel of the second layer that is not identified as being within the designated distance.

67. The additive manufacturing printer of any of examples 64 to 66, further comprising: comparing a third layer of the image of the build file that precedes the first layer and the second layer in an overlay to confirm the location of the overhang.

68. The additive manufacturing printer of any of examples 58 to 67, wherein the light source comprises a digital light projector.

69. The additive manufacturing printer of any of examples 58 to 68, wherein the light source comprises a laser.

70. An additive manufacturing printer, comprising: a tank configured to have a photopolymer resin material disposed therein; a build plate disposed above the tank and configured to at least move along a vertical axis, away from the tank; a light source configured to project an image of a part to be printed towards the tank; and a processor, configured to: apply one or more digital transformations to a build file, the one or more digital transformations comprising: defining a first layer of an image of the build file; defining a second layer of an image of the build file, wherein the second layer is to be printed after the first layer, and at least some pixels in the second layer correspond to pixels in the first layer; comparing the first layer to the second layer to establish a location of an overhang in the second layer; and removing the overhang in the second layer.

71. The additive manufacturing printer of example 70, wherein comparing the first layer to the second layer to establish a location of an overhang in the second layer further comprises: flooding the build file to identify at least two distinct bodies of pixels, the at least two distinct bodies of pixels comprising one or more pixels present in the first layer and one or more pixels present in the second layer, wherein the one or more pixels in the first layer includes at least some pixels that are aligned with akin pixels of the one or more pixels in the second layer.

72. The additive manufacturing printer of example 71, wherein flooding the build file to identify at least two distinct bodies of pixels further comprises: defining each white pixel on the first layer as a seed point; and operating a flood-fill tool to flood both the first layer and the second layer with pixels that identify supported pixels.

73. The additive manufacturing printer of example 71 or 72, wherein flooding the build file to identify at least two distinct bodies of pixels further comprises: identifying each white pixel of the second layer that is within a designated distance of at least one supporting white pixel of the first layer; and removing any white pixel of the second layer that is not identified as being within the designated distance.

74. The additive manufacturing printer of any of examples 70 to 73, wherein the one or more digital transformations further comprises: comparing a third layer of the image of the build file that precedes the first layer and the second layer in an overlay to confirm the location of the overhang.

75. The additive manufacturing printer of any of examples 70 to 74, wherein the light source comprises a digital light projector.

76. The additive manufacturing printer of any of examples 70 to 75, wherein the light source comprises a laser.

77. An additive manufacturing printer, comprising: a tank configured to have a photopolymer resin material disposed therein; a build plate disposed above the tank and configured to at least move along a vertical axis, away from the tank; a light source configured to project an image of a part to be printed towards the tank; and a processor, configured to: apply one or more digital transformations to a build file, the one or more digital transformations comprising: defining a compression parameter and at least one dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first dimension of an image to be printed multiplied by the compression parameter; sampling the first scaled image into a plurality of evenly-distributed segments, the total number of segments being equal to the dimension of an image to be printed; calculating an average intensity value across each evenly-distributed segment of the plurality of evenly-distributed segments; and rendering a first sampled image, having an image dimension equivalent to the first dimension of an image to be printed, the first sampled image being comprised of the average intensity values.
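The supersample-and-average transformation of example 77 can be shown in one dimension: render a row at `compression` times the target width, split it into evenly distributed segments, and emit each segment's average intensity. This is a minimal sketch under those assumptions; the function name and the sample values are illustrative only.

```python
import numpy as np

def downsample_by_averaging(scaled_row: np.ndarray, compression: int) -> np.ndarray:
    """Collapse each group of `compression` sub-pixels of the scaled
    image into one output pixel holding the group's average intensity."""
    n_out = scaled_row.size // compression
    # Reshape into (segments, sub-pixels) and average along each segment.
    return scaled_row[:n_out * compression].reshape(n_out, compression).mean(axis=1)

# Target row width of 4 pixels with a compression parameter of 2:
width, k = 4, 2
# The "first scaled image": a row rendered at width * k sub-pixels,
# with an edge falling between two output pixels.
scaled = np.array([255, 255, 255, 0, 0, 0, 0, 0], dtype=float)
sampled = downsample_by_averaging(scaled, k)
# Each pair of sub-pixels collapses to its average: [255, 127.5, 0, 0],
# so the edge pixel carries an intermediate grey level.
```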

78. The additive manufacturing printer of example 77, wherein the at least one dimension of an image to be printed comprises a sum of at least one voxel in a Z direction.

79. The additive manufacturing printer of example 77 or 78, wherein the at least one dimension of an image to be printed comprises a sum of the total number of pixels in an array defined by an X direction and a Y direction.

80. The additive manufacturing printer of any of examples 77 to 79, wherein the compression parameter comprises a multiple of two.

81. The additive manufacturing printer of example 80, wherein the compression parameter comprises a multiple of four.

82. The additive manufacturing printer of any of examples 77 to 81, wherein the light source comprises a digital light projector.

83. The additive manufacturing printer of any of examples 77 to 82, wherein the light source comprises at least one laser.

84. An additive manufacturing printer, comprising: a tank configured to have a photopolymer resin material disposed therein; a build plate disposed above the tank and configured to at least move along a vertical axis, away from the tank; a light source configured to project an image of a part to be printed towards the tank; and a processor, configured to: apply one or more digital transformations to a build file, the one or more digital transformations comprising: defining a banding parameter and at least one dimension of an image to be printed; rendering a first original image in a first dimension of an image to be printed; identifying edge pixels within the banding parameter of the first original image; and dyeing the identified band pixels a shade of grey to create a dyed band image.
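The banding transformation of example 84 can be sketched by eroding the solid region by the banding parameter and dyeing the pixels stripped away. This assumes a binary interpretation of the rendered image, a 4-connected erosion with pixels outside the image treated as void, and an arbitrary illustrative grey level; none of these specifics come from the application.

```python
import numpy as np

def erode_once(mask: np.ndarray) -> np.ndarray:
    """4-connected erosion; pixels outside the image count as void."""
    padded = np.pad(mask, 1)  # zero (void) border
    return (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
            & padded[1:-1, :-2] & padded[1:-1, 2:])

def dye_band(image: np.ndarray, band_width: int = 1, grey: int = 128) -> np.ndarray:
    """Dye the outer `band_width` pixels of every white region grey."""
    solid = image > 0
    interior = solid
    for _ in range(band_width):
        interior = erode_once(interior)
    # The band is whatever erosion removed: the edge pixels within
    # the banding parameter of the boundary.
    band = solid & ~interior
    dyed = image.copy()
    dyed[band] = grey
    return dyed

img = np.full((4, 4), 255, dtype=np.uint8)
dyed = dye_band(img, band_width=1, grey=128)
# The one-pixel outer ring is dyed grey; the 2x2 core stays white.
```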

85. The additive manufacturing printer of example 84, wherein the banding parameter comprises one pixel.

86. The additive manufacturing printer of example 85, wherein the banding parameter comprises at least two pixels.

87. The additive manufacturing printer of any of examples 84 to 86, wherein the light source comprises a digital light projector.

88. The additive manufacturing printer of any of examples 84 to 87, wherein the light source comprises a laser.

89. An additive manufacturing printer, comprising: a tank configured to have a photopolymer resin material disposed therein; a build plate disposed above the tank and configured to at least move along a vertical axis, away from the tank; a light source configured to project an image of a part to be printed towards the tank; and a processor, configured to: apply one or more digital transformations to a build file, the one or more digital transformations comprising: defining an expansion parameter and at least one dimension of an image to be printed; rendering a first original image in the first dimension of an image to be printed; rendering a first scaled image having an image dimension equal to the first original image plus the expansion parameter; subtracting the original image from the scaled image to create an image sheath; dyeing the sheath a subcritical shade of grey to create a dyed sheath; and adding the dyed sheath to the original image in the first dimension of an image to be printed to create a first projected image.
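The sheath transformation of example 89 can be sketched morphologically: dilate the solid region by the expansion parameter, subtract the original to isolate the sheath, dye it a subcritical (below the curing threshold) grey, and add it back. The 4-connected dilation, the specific grey value, and the function names are illustrative assumptions, not details from the application.

```python
import numpy as np

def dilate_once(mask: np.ndarray) -> np.ndarray:
    """Grow a boolean mask by one pixel in the four cardinal directions."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def add_grey_sheath(image: np.ndarray, expansion: int = 1,
                    subcritical_grey: int = 60) -> np.ndarray:
    """Surround each white region with a grey sheath that is too dim
    to cure on its own but conditions the light field at the edge."""
    solid = image > 0
    grown = solid
    for _ in range(expansion):
        grown = dilate_once(grown)
    # Scaled image minus original image = the sheath.
    sheath = grown & ~solid
    projected = image.copy()
    projected[sheath] = subcritical_grey
    return projected

img = np.zeros((3, 3), dtype=np.uint8)
img[1, 1] = 255
projected = add_grey_sheath(img, expansion=1, subcritical_grey=60)
# The four neighbours of the white pixel become subcritical grey;
# the corners stay void.
```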

90. The additive manufacturing printer of example 89, wherein the expansion parameter comprises one pixel.

91. The additive manufacturing printer of example 89 or 90, wherein the light source comprises a digital light projector.

92. The additive manufacturing printer of any of examples 89 to 91, wherein the light source comprises at least one laser.

93. An additive manufacturing printer, comprising: a tank configured to have a photopolymer resin material disposed therein; a build plate disposed above the tank and configured to at least move along a vertical axis, away from the tank; a light source configured to project an image of a part to be printed towards the tank; and a processor, configured to: apply one or more digital transformations to a build file, the one or more digital transformations comprising: defining a first collection of pixels, the first collection of pixels defined as within a first distance of void pixels of a build image; defining a second collection of pixels, the second collection of pixels defined as those pixels that are neither void pixels nor part of the first collection of pixels; and performing anti-density operations on the first collection of pixels.
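The pixel partitioning of examples 93 to 95 can be sketched by growing the void region outward by the first distance: solid pixels inside the grown region form the first collection, and the remaining solid pixels form the second. The "anti-density operation" is represented here simply as an intensity reduction, and both intensity values are hypothetical placeholders; the application does not specify these details.

```python
import numpy as np

def split_and_treat(image: np.ndarray, first_distance: int = 1,
                    edge_intensity: int = 180, core_grey: int = 230) -> np.ndarray:
    """Partition solid pixels by proximity to void pixels, then treat
    each collection differently (per examples 93 to 95)."""
    void = image == 0
    near_void = void
    for _ in range(first_distance):
        # Grow the void region outward by one pixel (4-connected).
        grown = near_void.copy()
        grown[1:, :] |= near_void[:-1, :]
        grown[:-1, :] |= near_void[1:, :]
        grown[:, 1:] |= near_void[:, :-1]
        grown[:, :-1] |= near_void[:, 1:]
        near_void = grown
    first = ~void & near_void    # solid pixels within reach of a void
    second = ~void & ~near_void  # all remaining solid pixels
    out = image.copy()
    out[first] = edge_intensity  # stand-in for the anti-density operation
    out[second] = core_grey      # substantially uniform grey-scale (example 95)
    return out

img = np.full((3, 3), 255, dtype=np.uint8)
img[0, 0] = 0                    # a single void pixel
out = split_and_treat(img, first_distance=1)
# The two solid pixels adjacent to the void get the edge treatment;
# all other solid pixels get the uniform core grey.
```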

94. The additive manufacturing printer of example 93, wherein the digital transformations further comprise performing a grey-scale on the second collection of pixels.

95. The additive manufacturing printer of example 94, wherein the grey-scale is substantially uniform across all pixels in the second collection of pixels.

96. The additive manufacturing printer of any of examples 93 to 95, wherein the members of the first collection of pixels are defined by a low-pass filter.

97. The additive manufacturing printer of any of examples 93 to 96, wherein the light source comprises a digital light projector.

98. The additive manufacturing printer of any of examples 93 to 97, wherein the light source comprises a laser.

[0167] One skilled in the art will appreciate further features and advantages of the present disclosure based on the above-described embodiments. Accordingly, the disclosure is not to be limited by what has been particularly shown and described, except as indicated by the appended claims. Further, a person skilled in the art, in view of the present disclosures, will understand how to implement the disclosed systems and methods provided for herein in conjunction with DLP-style additive manufacturing printers. All publications and references cited herein are expressly incorporated herein by reference in their entireties.

[0168] In the foregoing detailed description, numerous specific details are set forth by way of examples to provide a thorough understanding of the relevant teachings. However, it should be apparent to those skilled in the art that the present teachings may be practiced without such details. In other instances, well-known methods, procedures, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present disclosure. While this disclosure includes a number of embodiments in many different forms, there is shown in the drawings and will herein be described in detail particular embodiments with the understanding that the present disclosure is to be considered as an exemplification of the principles of the disclosed methods and systems, and is not intended to limit the broad aspects of the disclosed concepts to the embodiments illustrated. As will be realized, the subject technology is capable of other and different configurations, several details are capable of modification in various respects, embodiments may be combined, steps in the flow charts may be omitted or performed in a different order, all without departing from the scope of the subject technology. Accordingly, the drawings, flow charts, and detailed description are to be regarded as illustrative in nature and not as restrictive.