

Title:
VARIABLE RESOLUTION IMAGE CAPTURE
Document Type and Number:
WIPO Patent Application WO/2016/071566
Kind Code:
A1
Abstract:
A method, apparatus and computer program product are provided for variable resolution image capture. The method includes facilitating capture of at least one first image associated with a first pixel density, based on receiving light from at least one light modulating element (330) onto at least one image sensor, when the at least one light modulating element (330) is positioned in a first alignment position (336). Texture data associated with the at least one first image is determined. A second alignment position (344) of the at least one light modulating element (330) is determined based on the texture data associated with the at least one first image. Capture of at least one second image of the scene is facilitated with the at least one light modulating element (330) being positioned in the second alignment position (344). The at least one second image corresponds to the at least one first image and is associated with a second pixel density. The light modulating element (330) can be a micro-mirror or can be embodied on a liquid crystal spatial light modulator.

Inventors:
ULIYAR MITHUN (IN)
PUTRAYA GURURAJ GOPAL (IN)
S V BASAVARAJA (IN)
PATWARDHAN PUSHKAR PRASAD (IN)
Application Number:
PCT/FI2015/050755
Publication Date:
May 12, 2016
Filing Date:
November 03, 2015
Assignee:
NOKIA CORP (FI)
International Classes:
H04N5/232; G02B26/08; G06V10/24; B81B7/04; G02B17/00; G06T5/00
Foreign References:
US20080129857A1 (2008-06-05)
US20070188883A1 (2007-08-16)
US20090128644A1 (2009-05-21)
US20080049291A1 (2008-02-28)
US6977777B1 (2005-12-20)
Other References:
PARENT, J. ET AL.: "Active imaging lens with real-time variable resolution and constant field of view", INTERNATIONAL OPTICAL DESIGN CONFERENCE, 13 June 2010 (2010-06-13), Retrieved from the Internet [retrieved on 2016-02-18]
LIU, Y. ET AL.: "Automatic Texture Segmentation for Texture-based Image Retrieval", 10TH INTERNATIONAL MULTIMEDIA MODELLING CONFERENCE (MMM'04), 5 January 2004 (2004-01-05), Retrieved from the Internet [retrieved on 2016-02-05]
Attorney, Agent or Firm:
NOKIA TECHNOLOGIES OY et al. (IPR Department, Karakaari 7, Espoo, FI)
Claims:
CLAIMS

1. A method comprising:

facilitating capture of at least one first image of a scene, the at least one first image being associated with a first pixel density and being captured based on receiving light from at least one light modulating element, being positioned in a first alignment position, onto at least one image sensor;

determining texture data associated with the at least one first image, the texture data being indicative of one or more textured regions and one or more non-textured regions in the at least one first image;

determining a second alignment position of the at least one light modulating element based on the texture data associated with the at least one first image; and

facilitating capture of at least one second image, corresponding to the at least one first image and associated with a second pixel density, with the at least one light modulating element being positioned in the second alignment position.

2. The method as claimed in claim 1, wherein the texture data associated with the at least one first image comprises a texture level associated with a plurality of pixels of the at least one first image.

3. The method as claimed in claim 2, further comprising determining the one or more textured regions and the one or more non-textured regions in the at least one first image based on a comparison of the texture level of the plurality of pixels with at least one threshold value of the texture level.

4. The method as claimed in claim 1 or 3, further comprising:

computing, based on the determination of the second alignment position of the at least one light modulating element, at least one transformation parameter associated with the at least one light modulating element; and

facilitating positioning of the at least one light modulating element in the second alignment position by applying the at least one transformation parameter to the first alignment position of the at least one light modulating element.

5. The method as claimed in claim 4, wherein the second alignment position of the at least one light modulating element comprises the first alignment position.

6. The method as claimed in claim 4, wherein the second alignment position of the at least one light modulating element comprises a re-aligned position.

7. The method as claimed in claim 6, further comprising applying a reverse transformation to one or more non-textured regions of the at least one second image, the one or more non-textured regions of the at least one second image corresponding to the one or more non-textured regions of the at least one first image.

8. The method as claimed in any of claims 1 to 7, wherein the at least one light modulating element is embodied on a panel of one of a reflective digital micro-mirror device (DMD) and a liquid crystal spatial light modulator (SLM) device, and wherein the at least one light modulating element is associated with a first resolution.

9. The method as claimed in claim 8, wherein the first resolution associated with the at least one light modulating element is greater than a second resolution associated with the at least one image sensor.

10. The method as claimed in claim 8, wherein the first resolution associated with the at least one light modulating element is lower than a second resolution associated with the at least one image sensor.

11. The method as claimed in claim 10, wherein the at least one first image comprises a first plurality of images of the scene corresponding to a plurality of distinct regions of the scene.

12. The method as claimed in claim 11, wherein facilitating capture of the first plurality of images of the scene comprises imaging the first plurality of images onto a plurality of regions of the at least one image sensor.

13. The method as claimed in claim 12, wherein facilitating capture of the at least one second image of the scene comprises facilitating capture of a second plurality of images of the scene, the second plurality of images corresponding to the first plurality of images.

14. The method as claimed in claim 13, further comprising temporally multiplexing the second plurality of images being imaged onto the plurality of regions of the at least one image sensor.

15. An apparatus comprising:

at least one processor; and

at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to:

facilitate capture of at least one first image of a scene, the at least one first image being associated with a first pixel density and captured based on receiving light from at least one light modulating element, being positioned in a first alignment position, onto at least one image sensor,

determine texture data associated with the at least one first image, the texture data being indicative of one or more textured regions and one or more non-textured regions in the at least one first image,

determine a second alignment position of the at least one light modulating element based on the texture data associated with the at least one first image, and

facilitate capture of at least one second image, corresponding to the at least one first image and associated with a second pixel density, with the at least one light modulating element being positioned in the second alignment position.

16. The apparatus as claimed in claim 15, wherein the texture data associated with the at least one first image comprises a texture level associated with a plurality of pixels of the at least one first image.

17. The apparatus as claimed in claim 16, wherein the apparatus is further caused at least in part to determine the one or more textured regions and the one or more non-textured regions in the at least one first image based on a comparison of the texture level of the plurality of pixels with at least one threshold value of the texture level.

18. The apparatus as claimed in claim 15 or 17, wherein the apparatus is further caused at least in part to:

compute, based on the determination of the second alignment position of the at least one light modulating element, at least one transformation parameter associated with the at least one light modulating element; and

facilitate positioning of the at least one light modulating element in the second alignment position by applying the at least one transformation parameter to the first alignment position of the at least one light modulating element.

19. The apparatus as claimed in claim 18, wherein the second alignment position of the at least one light modulating element comprises the first alignment position.

20. The apparatus as claimed in claim 18, wherein the second alignment position of the at least one light modulating element comprises a re-aligned position.

21. The apparatus as claimed in claim 20, wherein the apparatus is further caused at least in part to apply a reverse transformation to one or more non-textured regions of the at least one second image, the one or more non-textured regions of the at least one second image corresponding to the one or more non-textured regions of the at least one first image.

22. The apparatus as claimed in any of claims 15 to 21, wherein the at least one light modulating element is embodied on a panel of one of a reflective digital micro-mirror device (DMD) and a reflective liquid crystal spatial light modulator (SLM), and wherein the at least one light modulating element is associated with a first resolution.

23. The apparatus as claimed in claim 22, wherein the first resolution associated with the at least one light modulating element is greater than a second resolution associated with the at least one image sensor.

24. The apparatus as claimed in claim 22, wherein the first resolution associated with the at least one light modulating element is lower than a second resolution associated with the at least one image sensor.

25. The apparatus as claimed in claim 24, wherein the at least one first image comprises a first plurality of images of the scene corresponding to a plurality of distinct regions of the scene.

26. The apparatus as claimed in claim 25, wherein facilitating capture of the first plurality of images of the scene comprises imaging the first plurality of images onto a plurality of regions of the at least one image sensor.

27. The apparatus as claimed in claim 26, wherein facilitating capture of the at least one second image of the scene comprises facilitating capture of a second plurality of images of the scene, the second plurality of images corresponding to the first plurality of images.

28. The apparatus as claimed in claim 27, wherein the apparatus is further caused at least in part to temporally multiplex the second plurality of images being imaged onto the plurality of regions of the at least one image sensor.

29. The apparatus as claimed in claim 15, wherein the apparatus comprises an electronic device comprising:

user interface circuitry and user interface software configured to facilitate a user to control at least one function of the electronic device through use of a display and further configured to respond to user inputs; and

display circuitry configured to display at least a portion of a user interface of the electronic device, the display and the display circuitry configured to facilitate the user to control at least one function of the electronic device.

30. The apparatus as claimed in claim 29, wherein the electronic device comprises the at least one image sensor configured to capture the at least one first image and the at least one second image.

31. The apparatus as claimed in claim 29 or 30, wherein the electronic device comprises a mobile phone.

32. An apparatus comprising:

at least one light modulating element configured to assume a plurality of alignment positions;

at least one processor; and

at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to:

facilitate capture of at least one first image of a scene, the at least one first image being associated with a first pixel density and captured based on receiving light from the at least one light modulating element, being positioned in a first alignment position from among the plurality of alignment positions, onto at least one image sensor,

determine texture data associated with the at least one first image, the texture data being indicative of one or more textured regions and one or more non-textured regions in the at least one first image,

determine a second alignment position from among the plurality of alignment positions of the at least one light modulating element based on the texture data associated with the at least one first image, and

facilitate capture of at least one second image, corresponding to the at least one first image and associated with a second pixel density, with the at least one light modulating element being positioned in the second alignment position.

33. The apparatus as claimed in claim 32, wherein the at least one light modulating element is embodied on a panel of a reflective digital micro-mirror device (DMD).

34. The apparatus as claimed in claim 32, wherein the at least one light modulating element is embodied on a panel of a reflective liquid crystal spatial light modulator (SLM).

35. An apparatus comprising:

at least one light modulating element configured to assume a plurality of alignment positions;

at least one image sensor capable of receiving light from the at least one light modulating element;

at least one processor; and

at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to:

facilitate capture of at least one first image of a scene, the at least one first image being associated with a first pixel density and captured based on receiving light from the at least one light modulating element, being positioned in a first alignment position of the plurality of alignment positions, onto the at least one image sensor,

determine texture data associated with the at least one first image, the texture data being indicative of one or more textured regions and one or more non-textured regions in the at least one first image,

determine a second alignment position from among the plurality of alignment positions of the at least one light modulating element based on the texture data associated with the at least one first image, and

facilitate capture of at least one second image, corresponding to the at least one first image and associated with a second pixel density, with the at least one light modulating element being positioned in the second alignment position.

36. The apparatus as claimed in claim 35, wherein the at least one light modulating element is embodied on a panel of a reflective digital micro-mirror device (DMD).

37. The apparatus as claimed in claim 35, wherein the at least one light modulating element is embodied on a panel of a reflective liquid crystal spatial light modulator (SLM).

38. A computer program product comprising at least one computer-readable storage medium, the computer-readable storage medium comprising a set of instructions, which, when executed by one or more processors, cause an apparatus to:

facilitate capture of at least one first image of a scene, the at least one first image being associated with a first pixel density and captured based on receiving light from at least one light modulating element, being positioned in a first alignment position, onto at least one image sensor;

determine texture data associated with the at least one first image, the texture data being indicative of one or more textured regions and one or more non-textured regions in the at least one first image;

determine a second alignment position of the at least one light modulating element based on the texture data associated with the at least one first image; and

facilitate capture of at least one second image, corresponding to the at least one first image and associated with a second pixel density, with the at least one light modulating element being positioned in the second alignment position.

39. The computer program product as claimed in claim 38, wherein the texture data associated with the at least one first image comprises a texture level associated with a plurality of pixels of the at least one first image.

40. The computer program product as claimed in claim 39, wherein the apparatus is further caused at least in part to determine the one or more textured regions and the one or more non-textured regions in the at least one first image based on a comparison of the texture level of the plurality of pixels with at least one threshold value of the texture level.

41. The computer program product as claimed in claim 38 or 40, wherein the apparatus is further caused at least in part to:

compute, based on the determination of the second alignment position of the at least one light modulating element, at least one transformation parameter associated with the at least one light modulating element; and

facilitate positioning of the at least one light modulating element in the second alignment position by applying the at least one transformation parameter to the first alignment position of the at least one light modulating element.

42. The computer program product as claimed in claim 41, wherein the second alignment position of the at least one light modulating element comprises the first alignment position.

43. The computer program product as claimed in claim 41, wherein the second alignment position of the at least one light modulating element comprises a re-aligned position.

44. The computer program product as claimed in claim 43, wherein the apparatus is further caused at least in part to apply a reverse transformation to one or more non-textured regions of the at least one second image, the one or more non-textured regions of the at least one second image corresponding to the one or more non-textured regions of the at least one first image.

45. The computer program product as claimed in any of claims 38 to 44, wherein the at least one light modulating element is embodied on a panel of one of a reflective digital micro-mirror device (DMD) and a spatial light modulator (SLM) device, and wherein the at least one light modulating element is associated with a first resolution.

46. The computer program product as claimed in claim 45, wherein the first resolution associated with the at least one light modulating element is greater than a second resolution associated with the at least one image sensor.

47. The computer program product as claimed in claim 45, wherein the first resolution associated with the at least one light modulating element is lower than a second resolution associated with the at least one image sensor.

48. The computer program product as claimed in claim 47, wherein the at least one first image comprises a first plurality of images of the scene, the first plurality of images corresponding to a plurality of distinct regions of the scene.

49. The computer program product as claimed in claim 48, wherein facilitating capture of the first plurality of images of the scene comprises imaging the first plurality of images onto a plurality of regions of the at least one image sensor.

50. The computer program product as claimed in claim 49, wherein facilitating capture of the at least one second image of the scene comprises facilitating capture of a second plurality of images of the scene, the second plurality of images corresponding to the first plurality of images.

51. The computer program product as claimed in claim 50, wherein the apparatus is further caused at least in part to temporally multiplex the second plurality of images being imaged onto the plurality of regions of the at least one image sensor.

52. An apparatus comprising:

means for facilitating capture of at least one first image of a scene, the at least one first image being associated with a first pixel density and captured based on receiving light from at least one light modulating element, being positioned in a first alignment position, onto at least one image sensor;

means for determining texture data associated with the at least one first image, the texture data being indicative of one or more textured regions and one or more non-textured regions in the at least one first image;

means for determining a second alignment position of the at least one light modulating element based on the texture data associated with the at least one first image; and

means for facilitating capture of at least one second image, corresponding to the at least one first image and associated with a second pixel density, with the at least one light modulating element being positioned in the second alignment position.

53. The apparatus as claimed in claim 52, wherein the texture data associated with the at least one first image comprises a texture level associated with a plurality of pixels of the at least one first image.

54. The apparatus as claimed in claim 53, further comprising means for determining the one or more textured regions and the one or more non-textured regions in the at least one first image based on a comparison of the texture level of the plurality of pixels with at least one threshold value of the texture level.

55. The apparatus as claimed in claim 52 or 54, further comprising:

means for computing, based on the determination of the second alignment position of the at least one light modulating element, at least one transformation parameter associated with the at least one light modulating element; and

means for facilitating positioning of the at least one light modulating element in the second alignment position by applying the at least one transformation parameter to the first alignment position of the at least one light modulating element.

56. The apparatus as claimed in claim 55, wherein the second alignment position of the at least one light modulating element comprises the first alignment position.

57. The apparatus as claimed in claim 55, wherein the second alignment position of the at least one light modulating element comprises a re-aligned position.

58. The apparatus as claimed in claim 57, further comprising means for applying a reverse transformation to one or more non-textured regions of the at least one second image, the one or more non-textured regions of the at least one second image corresponding to the one or more non-textured regions of the at least one first image.

59. The apparatus as claimed in any of claims 52 to 58, wherein the at least one light modulating element is embodied on a panel of one of a reflective digital micro-mirror device (DMD) and a spatial light modulator (SLM) device, and wherein the at least one light modulating element is associated with a first resolution.

60. The apparatus as claimed in claim 59, wherein the first resolution associated with the at least one light modulating element is greater than a second resolution associated with the at least one image sensor.

61. The apparatus as claimed in claim 59, wherein the first resolution associated with the at least one light modulating element is lower than a second resolution associated with the at least one image sensor.

62. The apparatus as claimed in claim 61, wherein the at least one first image comprises a first plurality of images of the scene corresponding to a plurality of distinct regions of the scene.

63. The apparatus as claimed in claim 62, wherein facilitating capture of the first plurality of images of the scene comprises means for imaging the first plurality of images onto a plurality of regions of the at least one image sensor.

64. The apparatus as claimed in claim 63, wherein facilitating capture of the at least one second image of the scene comprises means for facilitating capture of a second plurality of images of the scene, the second plurality of images corresponding to the first plurality of images.

65. The apparatus as claimed in claim 64, further comprising means for temporally multiplexing the second plurality of images being imaged onto the plurality of regions of the at least one image sensor.

66. A computer program comprising program instructions which, when executed by an apparatus, cause the apparatus to:

facilitate capture of at least one first image of a scene, the at least one first image being associated with a first pixel density and captured based on receiving light from at least one light modulating element being positioned in a first alignment position onto at least one image sensor;

determine texture data associated with the at least one first image, the texture data being indicative of one or more textured regions and one or more non-textured regions in the at least one first image;

determine a second alignment position of the at least one light modulating element based on the texture data associated with the at least one first image; and

facilitate capture of at least one second image, corresponding to the at least one first image and associated with a second pixel density, with the at least one light modulating element being positioned in the second alignment position.

Description:
VARIABLE RESOLUTION IMAGE CAPTURE

TECHNICAL FIELD

Various implementations relate generally to a method, apparatus, and computer program product for capturing images.

BACKGROUND

Various electronic devices, for example, cameras, mobile phones, and other multimedia devices are widely used for capturing images and/or videos of a scene. These electronic devices feature a camera that includes an image sensor for capturing images. The image sensor is associated with a fixed number of pixels for capturing an image of the scene, and thus the captured image of the scene assumes a fixed resolution. An image having a fixed resolution may have the same resolution across image regions that are associated with varying levels of detail or texture. For example, in a fixed resolution image of a scene having a monumental building with the sky in the background, the portions of the image containing the building and the sky may have the same resolution. Thus, even though the building may in reality contain more detail (or texture) than the sky, both may be represented at the same resolution in the captured image. Fixed resolution images may therefore fail to convey the details of different objects or regions in the scene.

SUMMARY OF SOME EMBODIMENTS

Various example embodiments are set out in the claims.

In a first embodiment, there is provided a method comprising: facilitating capture of at least one first image of a scene, the at least one first image being associated with a first pixel density, the at least one first image being captured based on receiving light from at least one light modulating element onto at least one image sensor, the at least one light modulating element being positioned in a first alignment position; determining texture data associated with the at least one first image, the texture data being indicative of one or more textured regions and one or more non-textured regions in the at least one first image; determining a second alignment position of the at least one light modulating element based on the texture data associated with the at least one first image; and facilitating capture of at least one second image of the scene with the at least one light modulating element being positioned in the second alignment position, the at least one second image corresponding to the at least one first image and associated with a second pixel density.
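For illustration only, the claimed sequence of operations can be summarized in the following minimal Python sketch. Every name in it (the modulator and sensor objects and the three callables) is a hypothetical stand-in for hardware- and implementation-specific routines, not an API defined by this application.

```python
# Minimal sketch of the claimed capture sequence. The callables and the
# modulator/sensor objects are hypothetical stand-ins for hardware- and
# implementation-specific routines, not a real camera API.

def capture_variable_resolution_image(modulator, sensor, capture_image,
                                      compute_texture_map,
                                      determine_alignment):
    # 1. Capture a first image with the light modulating element(s) in
    #    the first alignment position (uniform first pixel density).
    first_image = capture_image(modulator, sensor)

    # 2. Determine texture data indicating the textured and
    #    non-textured regions of the first image.
    texture_map = compute_texture_map(first_image)

    # 3. Determine a second alignment position from the texture data
    #    and reposition the light modulating element(s).
    modulator.set_alignment(determine_alignment(texture_map))

    # 4. Capture a second image corresponding to the first, now with a
    #    second, spatially varying pixel density.
    return capture_image(modulator, sensor)
```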

In a second embodiment, there is provided an apparatus comprising: at least one processor; and at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least perform: facilitate capture of at least one first image of a scene, the at least one first image being associated with a first pixel density, the at least one first image being captured based on receiving light from at least one light modulating element onto an image sensor, the at least one light modulating element being positioned in a first alignment position; determine texture data associated with the at least one first image, the texture data being indicative of one or more textured regions and one or more non-textured regions in the at least one first image; determine a second alignment position of the at least one light modulating element based on the texture data associated with the at least one first image; and facilitate capture of at least one second image of the scene with the at least one light modulating element being positioned in the second alignment position, the at least one second image corresponding to the at least one first image and associated with a second pixel density.

In a third embodiment, there is provided an apparatus comprising: at least one light modulating element configured to assume a plurality of alignment positions; at least one processor; and at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least perform: facilitate capture of at least one first image of a scene, the at least one first image being associated with a first pixel density, the at least one first image being captured based on receiving light from the at least one light modulating element onto an image sensor, the at least one light modulating element being positioned in a first alignment position from among the plurality of alignment positions; determine texture data associated with the at least one first image, the texture data being indicative of one or more textured regions and one or more non-textured regions in the at least one first image; determine a second alignment position of the at least one light modulating element based on the texture data associated with the at least one first image; and facilitate capture of at least one second image of the scene with the at least one light modulating element being positioned in the second alignment position from among the plurality of alignment positions, the at least one second image corresponding to the at least one first image and associated with a second pixel density.

In a fourth embodiment, there is provided an apparatus comprising: at least one light modulating element configured to assume a plurality of alignment positions; at least one image sensor capable of receiving light from the at least one light modulating element; at least one processor; and at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least perform: facilitate capture of at least one first image of a scene, the at least one first image being associated with a first pixel density, the at least one first image being captured based on receiving light from the at least one light modulating element onto the at least one image sensor, the at least one light modulating element being positioned in a first alignment position from among the plurality of alignment positions; determine texture data associated with the at least one first image, the texture data being indicative of one or more textured regions and one or more non-textured regions in the at least one first image; determine a second alignment position of the at least one light modulating element based on the texture data associated with the at least one first image; and facilitate capture of at least one second image of the scene with the at least one light modulating element being positioned in the second alignment position from among the plurality of alignment positions, the at least one second image corresponding to the at least one first image and associated with a second pixel density.

In a fifth embodiment, there is provided a computer program product comprising at least one computer-readable storage medium, the computer-readable storage medium comprising a set of instructions, which, when executed by one or more processors, cause an apparatus to at least perform: facilitate capture of at least one first image of a scene, the at least one first image being associated with a first pixel density, the at least one first image being captured based on receiving light from at least one light modulating element onto an image sensor, the at least one light modulating element being positioned in a first alignment position; determine texture data associated with the at least one first image, the texture data being indicative of one or more textured regions and one or more non-textured regions in the at least one first image; determine a second alignment position of the at least one light modulating element based on the texture data associated with the at least one first image; and facilitate capture of at least one second image of the scene with the at least one light modulating element being positioned in the second alignment position, the at least one second image corresponding to the at least one first image and associated with a second pixel density.

In a sixth embodiment, there is provided an apparatus comprising: means for facilitating capture of at least one first image of a scene, the at least one first image being associated with a first pixel density, the at least one first image being captured based on receiving light from at least one light modulating element onto an image sensor, the at least one light modulating element being positioned in a first alignment position; means for determining texture data associated with the at least one first image, the texture data being indicative of one or more textured regions and one or more non-textured regions in the at least one first image; means for determining a second alignment position of the at least one light modulating element based on the texture data associated with the at least one first image; and means for facilitating capture of at least one second image of the scene with the at least one light modulating element being positioned in the second alignment position, the at least one second image corresponding to the at least one first image and associated with a second pixel density.

In a seventh embodiment, there is provided a computer program comprising program instructions which, when executed by an apparatus, cause the apparatus to: facilitate capture of at least one first image of a scene, the at least one first image being associated with a first pixel density, the at least one first image being captured based on receiving light from at least one light modulating element onto an image sensor, the at least one light modulating element being positioned in a first alignment position; determine texture data associated with the at least one first image, the texture data being indicative of one or more textured regions and one or more non-textured regions in the at least one first image; determine a second alignment position of the at least one light modulating element based on the texture data associated with the at least one first image; and facilitate capture of at least one second image of the scene with the at least one light modulating element being positioned in the second alignment position, the at least one second image corresponding to the at least one first image and associated with a second pixel density.

BRIEF DESCRIPTION OF THE FIGURES

Various embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:

FIGURE 1 illustrates a device, in accordance with an example embodiment;

FIGURE 2 illustrates an apparatus for capturing images, in accordance with an example embodiment;

FIGURE 3A illustrates an example representation of an optical arrangement for capturing an image, in accordance with an example embodiment;

FIGURE 3B illustrates an example representation of an optical arrangement for applying transformation parameters to a light modulating element, in accordance with an example embodiment;

FIGURES 4A, 4B and 4C illustrate an example representation of an optical arrangement for capturing an image, in accordance with another example embodiment;

FIGURES 5A and 5B illustrate an example representation of an optical arrangement for capturing an image, in accordance with an example embodiment;

FIGURES 6A, 6B and 6C illustrate an example representation of an image captured, in accordance with an example embodiment;

FIGURE 7 is a flowchart depicting an example method for capturing an image, in accordance with an example embodiment; and

FIGURE 8 is a flowchart depicting an example method for capturing an image, in accordance with another example embodiment.

DETAILED DESCRIPTION

Example embodiments and their potential effects are understood by referring to FIGURES 1 through 8 of the drawings.

FIGURE 1 illustrates a device 100 in accordance with an example embodiment. It should be understood, however, that the device 100 as illustrated and hereinafter described is merely illustrative of one type of device that may benefit from various embodiments and, therefore, should not be taken to limit the scope of the embodiments. As such, it should be appreciated that at least some of the components described below in connection with the device 100 may be optional, and thus an example embodiment may include more, fewer or different components than those described in connection with the example embodiment of FIGURE 1. The device 100 could be any of a number of types of mobile electronic devices, for example, portable digital assistants (PDAs), pagers, mobile televisions, gaming devices, cellular phones, all types of computers (for example, laptops, mobile computers or desktops), cameras, audio/video players, radios, global positioning system (GPS) devices, media players, mobile digital assistants, or any combination of the aforementioned, and other types of communications devices.

The device 100 may include an antenna 102 (or multiple antennas) in operable communication with a transmitter 104 and a receiver 106. The device 100 may further include an apparatus, such as a controller 108 or other processing device that provides signals to and receives signals from the transmitter 104 and receiver 106, respectively. The signals may include signaling information in accordance with the air interface standard of the applicable cellular system, and/or may also include data corresponding to user speech, received data and/or user generated data. In this regard, the device 100 may be capable of operating with one or more air interface standards, communication protocols, modulation types, and access types. By way of illustration, the device 100 may be capable of operating in accordance with any of a number of first, second, third and/or fourth-generation communication protocols or the like. For example, the device 100 may be capable of operating in accordance with second-generation (2G) wireless communication protocols IS-136 (time division multiple access (TDMA)), GSM (global system for mobile communication), and IS-95 (code division multiple access (CDMA)); with third-generation (3G) wireless communication protocols, such as Universal Mobile Telecommunications System (UMTS), CDMA2000, wideband CDMA (WCDMA) and time division-synchronous CDMA (TD-SCDMA); with a 3.9G wireless communication protocol such as evolved universal terrestrial radio access network (E-UTRAN); with fourth-generation (4G) wireless communication protocols; or the like. As an alternative (or additionally), the device 100 may be capable of operating in accordance with non-cellular communication mechanisms, for example, computer networks such as the Internet, local area networks and wide area networks; short range wireless communication networks such as Bluetooth® networks, Zigbee® networks and Institute of Electrical and Electronics Engineers (IEEE) 802.11x networks; and wireline telecommunication networks such as the public switched telephone network (PSTN).

The controller 108 may include circuitry implementing, among others, audio and logic functions of the device 100. For example, the controller 108 may include, but is not limited to, one or more digital signal processor devices, one or more microprocessor devices, one or more processor(s) with accompanying digital signal processor(s), one or more processor(s) without accompanying digital signal processor(s), one or more special-purpose computer chips, one or more field-programmable gate arrays (FPGAs), one or more controllers, one or more application-specific integrated circuits (ASICs), one or more computer(s), various analog-to-digital converters, digital-to-analog converters, and/or other support circuits. Control and signal processing functions of the device 100 are allocated between these devices according to their respective capabilities. The controller 108 thus may also include the functionality to convolutionally encode and interleave messages and data prior to modulation and transmission. The controller 108 may additionally include an internal voice coder, and may include an internal data modem. Further, the controller 108 may include functionality to operate one or more software programs, which may be stored in a memory. For example, the controller 108 may be capable of operating a connectivity program, such as a conventional Web browser. The connectivity program may then allow the device 100 to transmit and receive Web content, such as location-based content and/or other web page content, according to a Wireless Application Protocol (WAP), Hypertext Transfer Protocol (HTTP) and/or the like. In an example embodiment, the controller 108 may be embodied as a multi-core processor such as a dual or quad core processor. However, any number of processors may be included in the controller 108.

The device 100 may also comprise a user interface including an output device such as a ringer 110, an earphone or speaker 112, a microphone 114, a display 116, and a user input interface, which may be coupled to the controller 108. The user input interface, which allows the device 100 to receive data, may include any of a number of devices allowing the device 100 to receive data, such as a keypad 118, a touch display, a microphone or other input device. In embodiments including the keypad 118, the keypad 118 may include numeric (0-9) and related keys (#, *), and other hard and soft keys used for operating the device 100. Alternatively or additionally, the keypad 118 may include a conventional QWERTY keypad arrangement. The keypad 118 may also include various soft keys with associated functions. In addition, or alternatively, the device 100 may include an interface device such as a joystick or other user input interface. The device 100 further includes a battery 120, such as a vibrating battery pack, for powering various circuits that are used to operate the device 100, as well as optionally providing mechanical vibration as a detectable output.

In an example embodiment, the device 100 includes a media capturing element, such as a camera, video and/or audio module, in communication with the controller 108. The media capturing element may be any means for capturing an image, video and/or audio for storage, display or transmission. In an example embodiment in which the media capturing element is a camera module 122, the camera module 122 may include a digital camera capable of forming a digital image file from a captured image. As such, the camera module 122 includes all hardware, such as a lens or other optical component(s), and software for creating a digital image file from a captured image. Alternatively, the camera module 122 may include the hardware needed to view an image, while a memory device of the device 100 stores instructions for execution by the controller 108 in the form of software to create a digital image file from a captured image. In an example embodiment, the camera module 122 may further include a processing element such as a co-processor, which assists the controller 108 in processing image data, and an encoder and/or decoder for compressing and/or decompressing image data. The encoder and/or decoder may encode and/or decode according to a JPEG standard format or another like format. For video, the encoder and/or decoder may employ any of a plurality of standard formats such as, for example, standards associated with H.261, H.262/MPEG-2, H.263, H.264, H.264/MPEG-4, MPEG-4, and the like. In some cases, the camera module 122 may provide live image data to the display 116. Moreover, in an example embodiment, the display 116 may be located on one side of the device 100 and the camera module 122 may include a lens positioned on the opposite side of the device 100 with respect to the display 116 to enable the camera module 122 to capture images on one side of the device 100 and present a view of such images to the user positioned on the other side of the device 100.

The device 100 may further include a user identity module (UIM) 124. The UIM 124 may be a memory device having a processor built in. The UIM 124 may include, for example, a subscriber identity module (SIM), a universal integrated circuit card (UICC), a universal subscriber identity module (USIM), a removable user identity module (R-UIM), or any other smart card. The UIM 124 typically stores information elements related to a mobile subscriber. In addition to the UIM 124, the device 100 may be equipped with memory. For example, the device 100 may include volatile memory 126, such as volatile random access memory (RAM) including a cache area for the temporary storage of data. The device 100 may also include other non-volatile memory 128, which may be embedded and/or may be removable. The non-volatile memory 128 may additionally or alternatively comprise an electrically erasable programmable read only memory (EEPROM), flash memory, hard drive, or the like. The memories may store any number of pieces of information and data used by the device 100 to implement the functions of the device 100.

FIGURE 2 illustrates an apparatus 200 for capturing images, in accordance with an example embodiment. The apparatus 200 may be employed, for example, in the device 100 of FIGURE 1. However, it should be noted that the apparatus 200 may also be employed on a variety of other devices, both mobile and fixed, and therefore, embodiments should not be limited to application on devices such as the device 100 of FIGURE 1. Alternatively, embodiments may be employed on a combination of devices including, for example, those listed above. Accordingly, various embodiments may be embodied wholly at a single device, for example, the device 100, or in a combination of devices. Furthermore, it should be noted that the devices or elements described below may not be mandatory and thus some may be omitted in certain embodiments.

The apparatus 200 includes or otherwise is in communication with at least one processor 202 and at least one memory 204. Examples of the at least one memory 204 include, but are not limited to, volatile and/or non-volatile memories. Some examples of the volatile memory include, but are not limited to, random access memory, dynamic random access memory, static random access memory, and the like. Some examples of the non-volatile memory include, but are not limited to, hard disks, magnetic tapes, optical disks, programmable read only memory, erasable programmable read only memory, electrically erasable programmable read only memory, flash memory, and the like. The memory 204 may be configured to store information, data, applications, instructions or the like for enabling the apparatus 200 to carry out various functions in accordance with various example embodiments. For example, the memory 204 may be configured to buffer input data comprising media content for processing by the processor 202. Additionally or alternatively, the memory 204 may be configured to store instructions for execution by the processor 202.

An example of the processor 202 may include the controller 108. The processor 202 may be embodied in a number of different ways. The processor 202 may be embodied as a multi-core processor, a single-core processor, or a combination of multi-core and single-core processors. For example, the processor 202 may be embodied as one or more of various processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. In an example embodiment, the multi-core processor may be configured to execute instructions stored in the memory 204 or otherwise accessible to the processor 202. Alternatively or additionally, the processor 202 may be configured to execute hard coded functionality. As such, whether configured by hardware or software methods, or by a combination thereof, the processor 202 may represent an entity, for example, physically embodied in circuitry, capable of performing operations according to various embodiments while configured accordingly. For example, if the processor 202 is embodied as two or more of an ASIC, FPGA or the like, the processor 202 may be specifically configured hardware for conducting the operations described herein. Alternatively, if the processor 202 is embodied as an executor of software instructions, the instructions may specifically configure the processor 202 to perform the algorithms and/or operations described herein when the instructions are executed. However, in some cases, the processor 202 may be a processor of a specific device, for example, a mobile terminal or network device adapted for employing embodiments by further configuration of the processor 202 by instructions for performing the algorithms and/or operations described herein. The processor 202 may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processor 202.

A user interface 206 may be in communication with the processor 202. Examples of the user interface 206 include, but are not limited to, input interface and/or output interface. The input interface is configured to receive an indication of a user input. The output user interface provides an audible, visual, mechanical or other output and/or feedback to the user. Examples of the input interface may include, but are not limited to, a keyboard, a mouse, a joystick, a keypad, a touch screen, soft keys, and the like. Examples of the output interface may include, but are not limited to, a display such as light emitting diode display, thin-film transistor (TFT) display, liquid crystal displays, active-matrix organic light-emitting diode (AMOLED) display, a microphone, a speaker, ringers, vibrators, and the like. In an example embodiment, the user interface 206 may include, among other devices or elements, any or all of a speaker, a microphone, a display, and a keyboard, touch screen, or the like. In this regard, for example, the processor 202 may comprise user interface circuitry configured to control at least some functions of one or more elements of the user interface 206, such as, for example, a speaker, ringer, microphone, display, and/or the like. The processor 202 and/or user interface circuitry comprising the processor 202 may be configured to control one or more functions of one or more elements of the user interface 206 through computer program instructions, for example, software and/or firmware, stored on a memory, for example, the at least one memory 204, and/or the like, accessible to the processor 202.

In an example embodiment, the apparatus 200 may include an electronic device. Some examples of the electronic device include a communication device, a media capturing device with or without communication capabilities, a computing device, and the like. Some examples of the electronic device may include a mobile phone, a personal digital assistant (PDA), and the like. Some examples of computing devices may include a laptop, a personal computer, and the like. In an example embodiment, the electronic device may include a user interface, for example, the user interface 206, having user interface circuitry and user interface software configured to facilitate a user to control at least one function of the electronic device through use of a display and further configured to respond to user inputs. In an example embodiment, the electronic device may include display circuitry configured to display at least a portion of the user interface 206 of the electronic device. The display and display circuitry may be configured to facilitate the user to control at least one function of the electronic device.

In an example embodiment, the electronic device may be embodied so as to include a transceiver. The transceiver may be any device or circuitry operating in accordance with software or otherwise embodied in hardware or a combination of hardware and software. For example, the processor 202 operating under software control, or the processor 202 embodied as an ASIC or FPGA specifically configured to perform the operations described herein, or a combination thereof, thereby configures the apparatus or circuitry to perform the functions of the transceiver. The transceiver may be configured to receive media content. Examples of the media content may include audio content, video content, data, and a combination thereof. In an example embodiment, the electronic device may be embodied to include a camera 208 for capturing at least one first image of a scene and a corresponding at least one second image of the scene. In various example embodiments, the camera 208 may include a spatial light modulator, at least one optical element, and at least one image sensor. The spatial light modulator is capable of reflecting and/or transmitting light coming from the scene onto the at least one optical element (for example, a camera lens) to be further imaged on the at least one image sensor. In an example embodiment, the spatial light modulator may include a plurality of light modulating elements embodied on a panel. In an example embodiment, the spatial light modulator may include a reflective digital micro-mirror device (DMD). In an example embodiment, the DMD may include a plurality of light modulating elements, such as reflective micro-mirrors embodied on a panel. In another example embodiment, the spatial light modulator may include a liquid crystal spatial light modulator (SLM) device. In an example embodiment, the at least one optical element may include a first lens and a second lens, where the first lens may be associated with the spatial light modulator and the second lens may be associated with the at least one image sensor. Various embodiments describing the configuration and functionalities of the first lens and the second lens are illustrated and explained further in detail with reference to FIGURES 3A-5B.
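As a rough, non-authoritative illustration of the arrangement described above, the optical components might be modelled with simple data structures as below; the field names, types, and units are assumptions made for exposition and are not the actual hardware interface.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Tuple

class ModulatorType(Enum):
    # The two spatial light modulator variants named in the description.
    DMD = "reflective digital micro-mirror device"
    LC_SLM = "liquid crystal spatial light modulator"

@dataclass
class LightModulatingElement:
    angle_deg: float = 0.0  # alignment angle with respect to the panel

@dataclass
class CameraOptics:
    modulator_type: ModulatorType
    elements: List[LightModulatingElement]  # panel of modulating elements
    first_lens_focal_mm: float              # lens facing the modulator
    second_lens_focal_mm: float             # lens facing the image sensor
    sensor_resolution: Tuple[int, int]      # image sensor pixels (w, h)
```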

The camera 208 may be in communication with the processor 202 and/or other components of the apparatus 200. The camera 208 may be in communication with other imaging circuitries and/or software, and is configured to capture digital images or to make a video or other graphic media files. The camera 208 and other circuitries, in combination, may be an example of at least one camera module such as the camera module 122 of the device 100. An example representation of the camera 208 is shown in FIGURE 3A. The camera 208 may be structurally different from a traditional camera in that it includes the spatial light modulator at the input of the camera 208 to reflect/transmit the plurality of light beams coming from the scene onto the camera lens to be further imaged on the at least one image sensor. The camera 208 focuses the spatial light modulator towards the scene in front of the camera lens and the at least one image sensor. The camera 208 may hence be used to implement imaging functions, including but not limited to high dynamic range imaging, optical feature detection, and object recognition using appearance matching, that go beyond those of a traditional camera.

These components (202-208) may communicate with each other via a centralized circuit system 210 to capture an image associated with a scene. The centralized circuit system 210 may be various devices configured to, among other things, provide or enable communication between the components (202-208) of the apparatus 200. In certain embodiments, the centralized circuit system 210 may be a central printed circuit board (PCB) such as a motherboard, main board, system board, or logic board. The centralized circuit system 210 may also, or alternatively, include other printed circuit assemblies (PCAs) or communication channel media.

In an example embodiment, the apparatus 200 may be caused to capture an image of the scene that may be associated with a variable resolution. Herein, a 'variable resolution image' may refer to an image that includes some regions associated with a high resolution and other regions associated with a low resolution. The apparatus 200 may be configured to detect texture levels (or texture distribution) associated with various regions of the image, and based on the determination of the texture distribution in the image, the apparatus 200 may be caused to capture the variable resolution image of the scene. In an example embodiment, the variable resolution image captured by the apparatus 200 may include a higher resolution being assigned to the highly-textured regions of the image, and a lower resolution being assigned to the low-textured regions of the image. In an example embodiment, the processor 202 is configured to, with the content of the memory 204, and optionally with other components described herein, cause the apparatus 200 to facilitate capture of at least one first image of a scene. Herein, the term 'scene' may refer to an arrangement (natural, manmade, sorted or assorted) of one or more objects of which images and/or videos can be captured. In an example embodiment, the at least one first image may be captured by the camera 208. In an example embodiment, the at least one first image is associated with a first pixel density. Herein, the 'first pixel density' may refer to an even or similar concentration of pixels on the at least one image sensor associated with the at least one first image.

In an example embodiment, the camera 208 may include the plurality of light modulating elements that may receive scene light from an optical element, for example, a lens element. Herein, the 'light modulating elements' may refer to elements, such as micro-mirrors, having reflective and/or transmissive properties. The plurality of light modulating elements may be embodied on a panel in various devices such as a DMD device, a liquid crystal SLM device, and the like. In an example embodiment, the plurality of light modulating elements may be configured to assume a plurality of positions. In an example embodiment, the plurality of light modulating elements may be positioned in a first alignment position. In an example embodiment, the term 'first alignment position' associated with the plurality of light modulating elements may refer to an initial alignment position of the plurality of light modulating elements. It may be noted that individual light modulating elements of the plurality of light modulating elements may have the same or different respective first alignment positions. For example, one of the light modulating elements may initially be aligned at an angle of 0 degrees while another may initially be aligned at -10 degrees with respect to the panel, and so on. In another example scenario, the plurality of light modulating elements may initially be aligned at the same angle with respect to the horizontal.

In an example embodiment, the plurality of light modulating elements of the spatial light modulator may be associated with a first resolution. Herein, the term 'first resolution' of the spatial light modulator may refer to a number of light modulating elements per unit area of the spatial light modulator device. For example, for a DMD device, the term first resolution may refer to the number of micro-mirrors per unit area. In an example embodiment, the plurality of light modulating elements may reflect/transmit the scene light onto an optical element, for example a second lens element, that may further facilitate in imaging the at least one first image onto the at least one image sensor associated with the camera 208. In an example embodiment, the at least one image sensor may be associated with a second resolution. Herein, the term 'second resolution' associated with the at least one image sensor may refer to a number of pixels of an image being imaged onto the at least one image sensor. In an example embodiment, the higher the second resolution of an image, the greater the sharpness and quality of the image being imaged onto the at least one image sensor. In an example embodiment, a processing means may be configured to facilitate capture of the at least one first image. An example of the processing means may include the processor 202, which may be an example of the controller 108, and/or the camera 208.

In an example embodiment, the at least one first image includes an image of the scene that may be captured by the camera 208 with the plurality of light modulating elements being held in the first alignment position. In an example embodiment, if the spatial light modulator has a high field of view (FoV), and the first resolution of the light modulating elements is greater than the second resolution associated with the at least one image sensor, the at least one first image may entirely capture the image of the scene. In such an example scenario, a single capture may be sufficient to capture an entire image of the scene.

In another example embodiment, the at least one first image may include a first plurality of images of the scene that may be captured by the camera 208 with the plurality of light modulating elements being held in the first alignment position. In an example embodiment, if the spatial light modulator has a lower FoV such that the second resolution associated with the at least one image sensor is greater than the first resolution of the spatial light modulator, the at least one first image may include the first plurality of images for entirely capturing the scene. In an example embodiment, the first plurality of images may correspond to a plurality of distinct portions of the scene. In an example embodiment, the first plurality of images may be captured by multiple captures (in a single click) of the scene, where the multiple captures may be performed for entirely capturing the image of the scene. In an example embodiment, the first plurality of images of the scene may be captured by imaging a plurality of regions on the at least one image sensor.

In an example embodiment, the apparatus 200 may be caused to determine the texture data associated with the at least one first image. In an example embodiment, the texture data may be indicative of one or more textured regions and one or more non-textured regions in the at least one first image. In an example embodiment, the texture data provides information associated with spatial arrangements of features such as color and/or intensities in an image. In an example embodiment, the texture data associated with the at least one first image may be determined based on at least one feature associated with a plurality of pixels of the at least one first image. Examples of the at least one feature may include, but are not limited to, color and/or gradient associated with the plurality of pixels of the at least one first image. In an example embodiment, the texture data of an image may be determined based on various methods such as a variation of color or gray levels in the image, spectral information associated with the image, and so on. In an example embodiment, a processing means may be configured to determine the texture data associated with the at least one first image. An example of the processing means may include the processor 202, which may be an example of the controller 108.
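
As a purely illustrative sketch of how such texture data might be computed (the application does not prescribe a particular algorithm), the following fragment estimates a per-pixel texture level from the local variance of gradient magnitudes; the function name, the window size, and the use of Sobel gradients are all assumptions made for illustration only.

```python
import numpy as np
from scipy import ndimage

def texture_level_map(gray_image, window=7):
    """Estimate a per-pixel texture level as the local variance of the
    gradient magnitude; high variance suggests a textured region.

    gray_image: 2-D float array with values in [0, 1].
    window: side length of the local neighbourhood (illustrative choice).
    """
    gx = ndimage.sobel(gray_image, axis=1)
    gy = ndimage.sobel(gray_image, axis=0)
    magnitude = np.hypot(gx, gy)
    # The local variance is the local mean of squares minus the squared
    # local mean, both computed with a uniform (box) filter.
    mean = ndimage.uniform_filter(magnitude, size=window)
    mean_sq = ndimage.uniform_filter(magnitude ** 2, size=window)
    return np.maximum(mean_sq - mean ** 2, 0.0)
```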

In an example embodiment, the texture data associated with an image may include at least a texture level associated with the plurality of pixels of the image, where the texture level may facilitate in categorizing the regions of the image as textured regions and non-textured regions. In an example embodiment, the apparatus 200 may be caused to compare a texture level associated with the regions of the at least one first image with at least one threshold value of the texture level. In an example embodiment, the apparatus 200 may be caused to determine one or more textured regions and one or more non-textured regions of the at least one first image based on the comparison of the texture level of the plurality of pixels with the at least one threshold value of the texture level. For example, if the comparison determines that the texture level associated with a pixel is greater than a first threshold value of the texture level, the pixel may be assumed to be associated with a textured region of the first image. However, if it is determined that the texture level associated with the pixel is lower than the first threshold value of the texture level, the pixel may be assumed to be associated with a non-textured region of the first image. It will be noted that the at least one threshold value of the texture level may include a single value or multiple values that may facilitate in categorizing the regions of the first image into textured, non-textured, moderately textured, heavily textured, and various similar categories. The categorization of the regions of the first image into various levels of texture may facilitate the apparatus 200 in processing the at least one first image based on the texture levels associated with the respective portions of the at least one first image. For example, if it is determined that a region of the first image is highly textured and another region is moderately textured, then the apparatus 200 may facilitate in assigning a greater number of pixels for imaging the highly textured region and a smaller number of pixels for imaging the moderately textured region, to thereby capture necessary details in the highly textured region. In various embodiments, the apparatus 200 may follow different rules for optimizing the pixel distribution among the plurality of regions of the at least one first image based on the texture levels associated with the pixels of the plurality of regions. In an example embodiment, a processing means may be configured to determine the one or more textured regions and the one or more non-textured regions in the at least one first image. An example of the processing means may include the processor 202, which may be an example of the controller 108.
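
One possible realization of this comparison against threshold values is sketched below, under the assumption that the texture levels are normalized floats; the two threshold values and the three-way categorization are hypothetical, since the application allows any number of categories.

```python
import numpy as np

def categorize_regions(texture_levels, low_thresh=0.05, high_thresh=0.25):
    """Label each pixel by texture category using two threshold values.

    Returns an integer map: 0 = non-textured, 1 = moderately textured,
    2 = highly textured. Threshold values are illustrative only.
    """
    labels = np.zeros(texture_levels.shape, dtype=np.uint8)
    labels[texture_levels >= low_thresh] = 1   # moderately textured
    labels[texture_levels >= high_thresh] = 2  # highly textured
    return labels
```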

In an example embodiment, the apparatus 200 may be caused to determine a second alignment position of the at least one light modulating element based on the texture data associated with the at least one first image. Herein, the second alignment position of the at least one light modulating element may refer to a position being assumed by the at least one light modulating element upon comparison of texture levels of regions of the at least one first image with the at least one threshold value of texture level. In an example embodiment, the second alignment position of a light modulating element of the plurality of light modulating elements may be one of a re-aligned position or the initial position.

In an example embodiment, the second alignment position may be a re-aligned or tilted position of the light modulating element as compared to the first alignment position of the light modulating element. As previously discussed, the first alignment position refers to the initial alignment position of the light modulating element. Upon re-aligning the light modulating element based on the texture data, the first alignment position of the light modulating element may be changed to the second alignment position. For example, if it is determined based on the texture data that a region of the first image is categorized/labeled as a highly textured region, then the apparatus 200 may cause a corresponding light modulating element to tilt to such a position that the region of the image may be imaged using a greater number of pixels than the original number of pixels assigned to that region. In another example, if it is determined based on the texture data that a region of the first image is categorized/labeled as a low-textured region, then the apparatus 200 may cause a corresponding light modulating element to tilt to such a position that the region of the image may be imaged using a smaller number of pixels than the original number of pixels assigned to that region.

In another example embodiment, the second alignment position of the light modulating element may be the same as the first alignment position. For example, if it is determined based on the texture data that a region of the image is categorized/labeled as a low-textured region, then the apparatus 200 may cause a corresponding light modulating element to remain in the original position so that the region of the image may be imaged using the same number of pixels as the original number of pixels assigned to that region. An example of distributing the number of pixels to different regions of an image based on the texture data associated with the image is described further in detail with reference to FIGURES 4B, 4C, 5A, and 5B.

In an example embodiment, based on the determination of the second alignment positions, the apparatus 200 may further be caused to compute a plurality of transformation parameters associated with the plurality of light modulating elements. In an example embodiment, the transformation parameters associated with a light modulating element of the plurality of light modulating elements may define an extent by which the light modulating element may be realigned (for example, from the first alignment position to the second alignment position). In an example embodiment, the apparatus 200 may further be caused to apply the transformation parameters to the first alignment positions of the plurality of light modulating elements. An example of applying the transformation parameters to the first alignment positions of the plurality of light modulating elements is described further in detail with reference to FIGURE 3B.
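
As a rough sketch of how such a transformation parameter could be computed for a single micro-mirror, the fragment below maps a desired pixel shift on the image sensor to an additional tilt angle, using the relation (discussed further with reference to FIGURE 3B) that tilting a mirror by an angle deflects the reflected beam by twice that angle. The simplified single-distance geometry and all parameter names are assumptions, not taken from the application.

```python
import math

def tilt_for_pixel_shift(pixel_shift, pixel_pitch, mirror_to_lens_distance):
    """Extra mirror tilt (radians) needed to move a scene point by
    `pixel_shift` pixels on the sensor.

    Assumes the beam displacement at the lens maps one-to-one onto the
    sensor; a real system would also model the lens-to-sensor geometry.
    """
    displacement = pixel_shift * pixel_pitch  # displacement in metres
    # Tilting the mirror by delta rotates the reflected ray by 2 * delta.
    return 0.5 * math.atan2(displacement, mirror_to_lens_distance)
```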

In an example embodiment, on application of the transformation parameters to the first alignment positions and subsequent re-alignment of the one or more light modulating elements of the plurality of light modulating elements, the apparatus 200 may further be caused to capture at least one second image of the scene corresponding to the at least one first image. In an example embodiment, the application of the transformation parameters to the first alignment positions may facilitate in aligning the one or more light modulating elements of the plurality of light modulating elements in the second alignment positions such that, during a subsequent attempt to capture the image (for example, the at least one second image) of the scene, the captured image may have a smaller number of pixels assigned to the low-textured/non-textured regions and a greater number of pixels assigned to the textured/highly textured regions of the image, thereby imparting a variable resolution to the at least one second image. In an example embodiment, the at least one second image is associated with a second pixel density. Herein, the 'second pixel density' may refer to a distributed concentration of pixels on the at least one image sensor. In an example embodiment, the distributed concentration of pixels may include a high concentration of pixels on the at least one image sensor for the one or more textured regions (or highly textured regions) and a low concentration of pixels on the at least one image sensor for the one or more non-textured regions (or low-textured regions), in the at least one second image. In an example embodiment, the corresponding at least one second image is captured based on the reflection/transmission of the scene light by the plurality of light modulating elements, being aligned in the second alignment position, onto the second lens (camera lens) and the at least one image sensor. In an example embodiment, the at least one second image includes a single image of the scene. For example, in case the resolution of the spatial light modulator is greater than the resolution of the at least one image sensor, and the FoV of a first lens associated with the spatial light modulator is equal to the FoV of the second lens associated with the at least one image sensor, then the first image of the scene may be able to capture the entire scene. In another example embodiment, in case the FoV of the first lens associated with the spatial light modulator is greater than the FoV of the second lens associated with the at least one image sensor, and the resolution of the spatial light modulator is greater than the resolution of the at least one image sensor for a common FoV, then the first image of the scene may also be able to capture the entire scene. In such scenarios, the at least one second image may include a single image of the scene corresponding to the first image of the scene.

In an example embodiment, in case the resolution of the spatial light modulator is less than the resolution of the sensor, or the FoV of the first lens is less than the FoV of the second lens, the first image of the scene may not be able to capture the entire scene. In such a case, the apparatus 200 may be caused to capture multiple images (the first plurality of images) of the scene, for example, in a sequential manner, so as to capture an entire view of the scene. Further, in this embodiment, on determination of the texture data associated with the first plurality of images, the apparatus 200 may be caused to realign one or more light modulating elements and capture a second plurality of images corresponding to the first plurality of images. In another example embodiment, the corresponding at least one second image includes a second plurality of images of the scene. In various embodiments, the second plurality of images corresponds to a plurality of distinct regions of the scene. In an example embodiment, a processing means may be configured to facilitate capture of the corresponding at least one second image. An example of the processing means may include the processor 202, which may be an example of the controller 108.

In an example embodiment, the apparatus 200 may cause a temporal multiplexing of the second plurality of images that are imaged on the at least one image sensor to generate the corresponding at least one second image of the scene. Herein, the term 'temporal multiplexing' refers to combining the plurality of images to capture a variable resolution image of the scene. In an example embodiment, a number (N) of the second plurality of images that may be temporally multiplexed may be determined based on a difference between the FoV of the first lens (associated with the spatial light modulator) and the FoV of the second lens (associated with the image sensor). For example, in case the FoV of the first lens is half of the FoV of the second lens, then to get the complete FoV of the scene for the second lens, the number (N) of images to be temporally multiplexed may be four. In another example embodiment, the number (N) of the second plurality of images that may be temporally multiplexed may be determined based on a difference in resolution (or resolution improvement) between the first image (e.g. the first captured image) and the second image (e.g. the image captured after realigning the light modulating elements) of the scene. For example, in case the resolution improvement between the first image and the second image is assumed to be two in the X direction and two in the Y direction, and if the resolution of the first lens (associated with the light modulation device) and the second lens (camera lens) is the same, then the number (N) of images to be temporally multiplexed is four.
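
The two ways of arriving at the number N described above can be made concrete with the following sketch, which reproduces the worked examples from this paragraph; the function names are hypothetical.

```python
import math

def captures_from_fov(fov_first_lens, fov_second_lens):
    """N from the FoV ratio: halving the FoV in each direction requires
    2 x 2 = 4 captures to cover the complete scene."""
    ratio = math.ceil(fov_second_lens / fov_first_lens)
    return ratio * ratio

def captures_from_resolution_gain(gain_x, gain_y):
    """N from the resolution improvement between the first and second
    image, e.g. a gain of 2 in X and 2 in Y gives N = 4."""
    return math.ceil(gain_x) * math.ceil(gain_y)

# Examples from the text:
assert captures_from_fov(30.0, 60.0) == 4          # first-lens FoV is half
assert captures_from_resolution_gain(2, 2) == 4    # gain of two in X and Y
```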

In an example embodiment, on determination of the number of images to be temporally multiplexed, a corresponding second image is determined for each region of the plurality of regions (for example, N regions) of the scene that are captured by the first plurality of images. For each region of the plurality of regions, a corresponding second image is determined by applying the transformation to the plurality of light modulating elements so that each region may occupy the complete image sensor, or a larger portion of the image sensor as compared to the portion occupied by that region in the first image. The second image corresponding to each region being imaged on the image sensor may be stored, for example, in the memory 204. In an example embodiment, a processing means may be configured to temporally multiplex the second plurality of images. An example of the processing means may include the processor 202, which may be an example of the controller 108.
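
The per-region capture loop described in this paragraph could be organized as below; `camera` and its methods are hypothetical stand-ins for the hardware steps (computing transformation parameters, realigning the elements, and capturing), not an API from the application.

```python
def capture_region_images(camera, regions):
    """For each of the N scene regions, realign the light modulating
    elements so the region occupies the full image sensor (or a larger
    portion of it), then capture and store the corresponding second
    image for later temporal multiplexing.

    regions: iterable of hashable region descriptors (hypothetical).
    """
    second_images = {}
    for region in regions:
        params = camera.transformation_for_region(region)
        camera.apply_transformation(params)
        second_images[region] = camera.capture()
    return second_images
```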

In an example embodiment, the first image and the second image may be captured in a single click. In an example embodiment, the first plurality of images and the second plurality of images may be captured in multiple captures to generate the first image and the second image. In an example embodiment, the second image (for example, the second image being captured by applying the transformation to the first image, or the second image being generated by applying a transformation and temporally multiplexing the second plurality of images) may be captured in such a manner that the highly textured regions of the image may occupy more pixels on the image sensor as compared to the plain regions (or non-textured/low-textured regions). In an example embodiment, the highly textured regions of the image may include regions such that the pixels belonging to these regions have a texture level greater than or equal to a threshold value of the texture level. Also, the plain regions may include those regions of the image in which the pixels have a texture level lower than a threshold value of the texture level. In an example embodiment, upon application of the transformation, the textured regions in the second image may be expanded while the non-textured regions may be reduced, thereby changing an aspect ratio of the second image. An example describing the second image is illustrated and explained further with reference to FIGURES 4B and 4C. In an example embodiment, the second image may further be processed so as to compensate for the aspect ratio change and/or any other deformations caused in the second image due to the application of the transformation.

In an example embodiment, the apparatus 200 may further be caused to apply a reverse transformation to one or more non-textured regions of the second image. In an example embodiment, the one or more non-textured regions of the second image may correspond to the one or more non-textured regions of the first image. Herein, the reverse transformation may be performed to increase the resolution of the non-textured regions such that the ratio of pixels for textured to non-textured regions is the same, or nearly the same, for the first image and the corrected image. In an example embodiment, the ratio of pixels for the textured regions to the non-textured regions of the first image and the corrected image may be in the range of 0.85 to 1.15. For example, if the ratio of the textured regions to the non-textured regions for the first image is denoted by 'A', the ratio of the textured regions to the non-textured regions for the corrected image is denoted by 'B', and the ratio of A to B is denoted by 'X', then: X = A/B, where X may assume a value between 0.85 and 1.15 (including 0.85 and 1.15).

In an example embodiment, for applying the reverse transformation, the apparatus 200 may be caused to determine a resolution improvement (R) of the textured regions in the second image with respect to the first image. In an example embodiment, the apparatus 200 may further be caused to allocate a buffer for a final image obtained after the reverse transformation, where the size of the buffer (B) may be determined based on the resolution improvement (R). For example, in case the size of the first image is Width (W) X Height (H), the size of the buffer (B) may be (WxR) X (HxR). In an example embodiment, for every pixel in the first image, the apparatus may be caused to determine whether the pixel belongs to the textured/highly textured regions or the low-textured/plain regions. In an example embodiment, on determination that the pixel belongs to the textured/highly textured regions, the apparatus 200 may be caused to copy high-resolution pixels for the corresponding region from the second image to the buffer (B). In an example embodiment, on determination that the pixel belongs to the low-textured/plain regions, the pixels from the first image may be interpolated to fill the buffer (B). In an example embodiment, a processing means may be configured to apply the reverse transformation to the one or more non-textured regions of the second image. An example of the processing means may include the processor 202, which may be an example of the controller 108. In an example embodiment, a resultant image or a corrected image generated subsequent to the reverse transformation being applied to the second image may be larger in size as compared to the first image or the second image. Herein, the size of the image may refer to the number of pixels in the image. In an example embodiment, the number of pixels in the resultant/final image may be more than the number of pixels in the first image. In another example embodiment, the size of the image may refer to the width and height of the image. In an example embodiment, the height/width of the resultant/corrected image may be more than the height/width of the first image.
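
The buffer-filling procedure just described can be summarized in the following sketch; the patch-based representation of the second image's high-resolution regions and the use of linear interpolation are simplifying assumptions.

```python
from scipy import ndimage

def reverse_transform(first_image, second_image_patches, R):
    """Build the corrected image in a buffer of size (H*R) x (W*R).

    first_image: (H, W) array, the uniform-resolution first capture.
    second_image_patches: list of ((top, left), patch) pairs holding the
        high-resolution pixels of textured regions, already expressed in
        buffer coordinates (a hypothetical representation).
    R: integer resolution improvement of the textured regions.
    """
    # Plain (low-textured) regions: interpolate the first image up to the
    # full buffer size, as the text describes.
    buffer = ndimage.zoom(first_image, R, order=1)

    # Textured regions: overwrite with the high-resolution pixels copied
    # from the second image.
    for (top, left), patch in second_image_patches:
        h, w = patch.shape
        buffer[top:top + h, left:left + w] = patch
    return buffer
```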

Various embodiments for capturing the images of the scene based on texture data are explained further with reference to FIGURES 3A, 3B, 4A-4C, 5A-5B, and 6A-6C. In various embodiments, the variable resolution image of the scene is generated based on determination of the textured regions and the non-textured regions. Such embodiments are also explained with reference to FIGURES 7 and 8.

FIGURE 3A illustrates an example representation of an optical arrangement 300 for capturing at least one first image of a scene, in accordance with an example embodiment. In an example embodiment, the optical arrangement 300 may be illustrative of an image capturing mechanism being embodied in a camera, for example, the camera 208 (FIGURE 2) for capturing a first image and a second image of a scene. In an example embodiment, the camera 208 may include a first lens 302, a spatial light modulator 304, a second lens 306 and an image sensor 308. In an example embodiment, the first lens 302 may be configured to receive scene light and direct the same at the spatial light modulator 304. The spatial light modulator 304 may include a plurality of light modulating elements, for example a light modulating element 310, being embodied on a panel of the spatial light modulator 304. Examples of the spatial light modulator may include a DMD, a liquid crystal SLM, and the like.

In an example embodiment, the plurality of light modulating elements, being positioned in a first alignment position, may reflect/transmit the scene light collected at the spatial light modulator 304 to the second lens 306. The second lens 306 may further facilitate in imaging the scene onto the image sensor 308 so as to generate the first image. For example, a scene point A may be imaged as a point A' at the light modulating element 310, and as A" at the image sensor 308. In an example embodiment, the first image may have a resolution higher than that of an image captured using only the second lens 306 and the image sensor 308. Herein, the plurality of light modulating elements on the spatial light modulator 304 are aligned at an initial angle of reflection, for example a first alignment position, and hence project the plurality of light beams towards the camera lens 306 to image or generate a normal image, for example the first image, on the image sensor 308. For example, the light modulating element 310 receives light beams (as a ray cone) that represent a single scene point of the scene and diverts the light beams to the second lens 306. The second lens 306 further images the scene point on the image sensor 308 at a position 312. Similarly, the remaining light modulating elements of the plurality of light modulating elements may simultaneously divert light beams representing the remaining scene points of the scene to the image sensor 308 to generate the first image corresponding to the scene. In an example embodiment, the first image may be stored in a memory, for example, the memory 204 (FIGURE 2), or any other storage location embodied in the apparatus 200 (FIGURE 2) or otherwise accessible to the apparatus 200.

In an example embodiment, the first image is a high resolution image; however, there may be objects in the scene that are associated with a higher level of detail and may warrant being presented at a higher resolution in the image. In an example embodiment, such regions (for example, highly textured regions) may be identified along with low-textured/plain regions in the first image. Data, for example, texture data representing texture levels of the highly textured regions and low-textured regions of the first image, may be determined and utilized for capturing an image (for example, a second image) of the scene such that the second image may have a variable resolution. Due to the variable resolution of the second image, the highly textured regions in the second image may acquire higher resolutions as compared to the low-textured/plain regions of the image. Example representations of generating variable resolution images are further shown and explained with reference to FIGURES 4A to 4C.

FIGURE 3B illustrates an example representation of an optical arrangement 320 for applying transformation parameters to a first alignment position of a light modulating element from among the plurality of light modulating elements, in accordance with an example embodiment. In an example embodiment, the optical arrangement 320 may be illustrative of an image capturing mechanism being embodied in a camera, for example, the camera 208 (FIGURE 2) for capturing a first image and a second image of a scene. In an example embodiment, the camera 208 may include a first lens 322, a spatial light modulator 324, a second lens 326 and an image sensor 328. The spatial light modulator 324 may include a plurality of light modulating elements, for example a light modulating element 330, being embodied on a panel of a light modulation device. In an example embodiment, the first lens 322 may be configured to receive scene light, for example, shown as a light beam 332 from a scene point, such as a scene point A, and direct the same at the spatial light modulator 324 at an angle of incidence 334. In an example embodiment, the light modulating element 330, being positioned in a first alignment position at an angle 336, may reflect/transmit the scene light collected at the spatial light modulator 324 to the second lens 326 at an angle of reflection 338. The second lens 326 may further facilitate in imaging the scene onto a pixel 340 of the image sensor 328 so as to generate the first image A1" of the scene point A. For example, the scene point A may be imaged as a point A' at the light modulating element 330, and as A1" at the image sensor 328.

In an example embodiment, if it is determined based on the texture data that the scene point A is associated with a textured/highly textured region, then the scene point may be imaged at points other than the point A1" on the image sensor. For example, the scene point A may be imaged onto another pixel 342 at a point A2". In an example embodiment, the scene point may be imaged at multiple points by re-aligning the corresponding light modulating element, for example the light modulating element 330. In an example embodiment, the light modulating element 330 may be re-aligned by applying transformation parameters to the light modulating element 330, where the transformation parameters may be determined based on the texture data. In an example embodiment, after applying the transformation parameters, the light modulating element 330 is re-aligned or positioned in a second alignment position at an angle 344 (as compared to the first alignment position at the angle 336). In an example embodiment, the first lens 322 may be configured to receive the scene light as the single ray and direct the same at the spatial light modulator 324 at the angle of incidence 334. The light modulating element 330, being positioned in the second alignment position at the angle 344, may reflect/transmit the scene light collected at the spatial light modulator 324 to the second lens 326 at an angle of reflection 346. In an example embodiment, the angle of reflection 346 may be determined by summing the angle of incidence 334 with twice the angle 344 (i.e., the angle of reflection 346 is equal to the angle of incidence 334 + 2 * the angle 344). The second lens 326 may further facilitate in imaging the scene onto the pixel 342 of the image sensor 328 so as to generate the second image. For example, the scene point A may be imaged as the point A' at the light modulating element 330, and as A2" at the image sensor 328. In an example embodiment, the transformation parameters are functions of a distance between the light modulating element 330 and the second lens 326, the angle of incidence 334, the angle of reflection 338, a position at which the light beam falls on the second lens 326 after reflection from the light modulating element 330, a distance between the second lens 326 and the image sensor 328, and the positions of the pixel 340 and the pixel 342.
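
A minimal numeric illustration of the relation stated above, assuming all angles are measured in degrees and that the tilt is expressed relative to the first alignment position (the specific values are invented):

```python
def angle_of_reflection(angle_of_incidence, tilt_angle):
    """Reflection off a tilted micro-mirror: tilting the element by
    `tilt_angle` deflects the reflected beam by twice that amount."""
    return angle_of_incidence + 2 * tilt_angle

# First alignment position: no extra tilt, so the beam reflects at the
# original angle and images scene point A at A1".
print(angle_of_reflection(20.0, 0.0))  # 20.0
# Second alignment position: a 5-degree tilt deflects the beam by 10
# degrees, moving the image of A to a different pixel (A2").
print(angle_of_reflection(20.0, 5.0))  # 30.0
```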

FIGURES 4A-4C illustrate an example representation of an optical arrangement for capturing an image, for example a variable resolution image of a scene, in accordance with an example embodiment. In an example embodiment, the optical arrangement may be illustrative of an image capturing mechanism being embodied in a camera, for example, the camera 208 (FIGURE 2) for capturing a first image and a second image of a scene. In an example embodiment, the camera 208 may include a first lens 402, a spatial light modulator 404, a second lens 406 and an image sensor 408. It will be noted that the elements of the camera 208 as explained in FIGURE 3A are similar to the elements of the camera 208 as explained in FIGURES 4A-4C. For example, the first lens 402, the spatial light modulator 404, the second lens 406 and the image sensor 408 may be similar to the first lens 302, the spatial light modulator 304, the second lens 306 and the image sensor 308, respectively. Accordingly, the elements such as the first lens 402, the spatial light modulator 404, the second lens 406 and the image sensor 408 are not explained again herein with reference to FIGURES 4A-4C.

In an example embodiment, the spatial light modulator 404 may include a plurality of light modulating elements, for example a light modulating element 410 and a light modulating element 412. In an example embodiment, the optical arrangement may be configured to generate a first image associated with the scene by reflecting the scene light coming from a plurality of scene points. For example, the scene light coming from scene points A and B may fall on the light modulating elements 410, 412 so as to form images A' and B' at the spatial light modulator 404. The light modulating elements 410, 412 may further reflect the light from the points A' and B' onto the second lens 406 to thereby form the first images A" and B" of the scene points A and B, respectively, at the image sensor 408.

The camera lens 406 may further image the remaining scene points of the scene onto the image sensor 408 to generate the first image. In an example embodiment, the first image may be stored in a memory, for example, the memory 204 (FIGURE 2), or any other storage location embodied in the apparatus 200 or otherwise accessible to the apparatus 200.

In an example embodiment, based on the texture data associated with the first image, the texture levels of neighboring scene points, such as the scene points A and B, may be determined. For example, based on the texture data associated with the scene points (A" and B") imaged on the image sensor, it may be determined that the scene points belong to non-textured/plain regions of the image. In such a scenario, the points A" and B" may be merged, as illustrated in FIGURE 4B. In an example embodiment, merging the points A" and B" has the advantage that the scene points associated with low-textured/plain regions may utilize a smaller number of pixels on the image sensor, and may thus free up some of the pixels that may be utilized for the highly textured regions. An example representation of merging the two scene points A and B on the image sensor 408 is shown in FIGURE 4B.

In an example embodiment, the scene points A" and B" imaged on the image sensor may be merged by re-aligning the light modulating elements such as the light modulating elements 410, 412 responsible for imaging the scene points A" and B". In an example embodiment, an extent of re-alignment of the light modulating elements 410, 412 may be determined based on the texture data associated with the first image. In an example embodiment, the light modulating elements 410, 412 may be re-aligned by applying transformation parameters to the light modulating elements 410, 412, where the transformation parameters may be determined based on the texture data.

In an example embodiment, an angle by which the light modulating elements 410, 412 may be re-aligned may be determined based on the texture data. For instance, the angles by which the light modulating elements 410, 412 may be re-aligned may be different. In some example scenarios, only one of the light modulating elements 410, 412 may be realigned so as to facilitate mapping/merging of the points A" and B" on the image sensor 408.

In another example embodiment, based on the texture data associated with the imaged scene points (A" and B") imaged on the image sensor 408, it may be determined that the scene points belong to different regions of the first image. For example, it may be determined that the scene point A" belongs to a textured region and the scene point B" belongs to a non-textured region. In this example embodiment, the scene point belonging to the textured region, for example, the scene point A", may be imaged as a high-resolution image portion. Also, the scene point belonging to the non-textured region, for example the scene point B", may be imaged as a low-resolution portion of the first image.

As illustrated with reference to FIGURE 4C, the scene point A" is distributed to different points or pixels, such as points A1, A2, A3 and A4, on the image sensor 408, while the scene point B" is imaged at one point on the image sensor 408. In an example embodiment, for distributing the scene point A" to different points on the image sensor, the light modulating elements responsible for imaging the scene point A', for example the light modulating elements 410, 414, 416 and 418 at points A1', A2', A3' and A4', respectively, may be re-aligned/positioned in a second alignment position that is different from the first alignment position. In an example embodiment, on positioning the light modulating elements 410, 414, 416 and 418 in the second alignment position, the light received at these light modulating elements may be diverted and imaged on the image sensor 408 at the points A1, A2, A3 and A4 in order to capture the scene point A at a higher resolution. In an example embodiment, the scene points imaged at the points A1, A2, A3, A4 and B" may be temporally merged so as to form the second image of the scene. Herein, the second image of the scene is a variable resolution image, and is generated based on a single-click multiple-capture of the scene.

In an example scenario, the resolution of the spatial light modulator may be less than the resolution of the image sensor. In such a scenario, the entire scene may not be captured by the spatial light modulator in one shot, and multiple captures may be required to entirely capture the scene. An example representation of generating the variable resolution image by multiple captures of the first image and the second image is explained and illustrated with reference to FIGURES 5A and 5B.

FIGURES 5A and 5B illustrate an example representation of an optical arrangement 500 for capturing an image, for example a variable resolution image of a scene, in accordance with an example embodiment. In an example embodiment, the optical arrangement 500 may be illustrative of an image capturing mechanism being embodied in a camera, for example, the camera 208 (FIGURE 2) for capturing a first image and a second image of a scene. In an example embodiment, the camera 208 includes a first lens 502, a spatial light modulator 504, a second lens 506 and an image sensor 508. In an example embodiment, the spatial light modulator 504 may be associated with a first resolution that may be less than a second resolution of the image sensor 508. In another example embodiment, the first lens 502 may have a field of view (FoV) that may be less than the FoV of the second lens.

In an example embodiment, the optical arrangement 500 may be utilized for capturing an image of the scene having a plurality of scene points such as the points A and B. However, since the resolution of the spatial light modulator 504 is less than the resolution of the image sensor 508, the entire scene (including the scene points A and B) may not be captured by the spatial light modulator 504 in a single capture, and multiple captures may be required to entirely capture the scene. For example, the scene point A is captured by a light modulating element 510 at a point A' and imaged onto a point A" on the image sensor 508. However, since the resolution of the spatial light modulator is less than the resolution of the image sensor, the scene point B is not captured by a light modulating element 512, and is thus not imaged on the image sensor. In an example embodiment, the optical arrangement 500 may facilitate in capturing another first image of the scene, where the other first image may capture the scene point B and image it onto a point B" on the image sensor 508, as illustrated in FIGURE 5B. In this manner, a first plurality of images may be imaged onto the image sensor so as to image the entire scene. In an example embodiment, the texture data associated with the first plurality of images may be determined and may be utilized for re-aligning the at least one light modulating element associated with the spatial light modulator 504.

In an example embodiment, the optical arrangement 500 may further be utilized to generate a second plurality of images corresponding to the first plurality of images, based on the texture data associated with the first plurality of images. It will be noted that a 'second image corresponding to the first image' may refer to images being captured for the same portion of a scene, but having different pixel densities. For example, a first image of the scene point A may have a corresponding second image of the scene point A; however, the first image and the second image of the scene point A may differ in their pixel densities. In an example embodiment, the second image may include variable pixel densities, meaning thereby that various portions of the second image may have different pixel distributions/resolutions, while the first image may have the same pixel distribution/density throughout.

In an example embodiment, the second plurality of images may be temporally multiplexed so as to generate the second image of the scene. For example, in case the FoV of the first lens is half of the FoV of the second lens, then two image captures may fill up the image sensor 508, and the two images may be multiplexed for displaying the image of the scene.
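
Under the assumption that each capture images a known, distinct region of the scene, the temporal multiplexing step might be sketched as a simple mosaic operation; the bookkeeping of each capture's position within the full scene is hypothetical.

```python
import numpy as np

def temporally_multiplex(captures):
    """Combine N sequential captures into one image of the full scene.

    captures: list of (image, (row, col)) pairs, where (row, col) is the
    top-left position of that capture within the full mosaic.
    """
    height = max(r + img.shape[0] for img, (r, c) in captures)
    width = max(c + img.shape[1] for img, (r, c) in captures)
    mosaic = np.zeros((height, width), dtype=captures[0][0].dtype)
    for img, (r, c) in captures:
        mosaic[r:r + img.shape[0], c:c + img.shape[1]] = img
    return mosaic

# Example from the text: the first lens covers half the FoV of the second,
# so two captures placed side by side fill up the image sensor.
left = np.ones((4, 4))
right = np.full((4, 4), 2.0)
full = temporally_multiplex([(left, (0, 0)), (right, (0, 4))])
```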

It will be noted that FIGURES 3A to 5B are provided for the representation of examples only, and should not be considered as limiting the scope of the various example embodiments. Also, it should be noted that the variable resolution images generated using the above methods provide an image of high quality and clarity. An example illustrating the capture of a variable resolution image is illustrated and described further with reference to FIGURES 6A-6C.

FIGURES 6A, 6B and 6C illustrate an example representation of capturing an image, for example, a variable resolution image, in accordance with an example embodiment. FIGURE 6A illustrates a first image being captured by an apparatus, for example, the apparatus 200. In an example embodiment, the apparatus may embody an optical arrangement, for example, the optical arrangement of FIGURES 4A-4C or the optical arrangement 500 (FIGURES 5A and 5B).

Referring to FIGURE 6A, a first image 600 of the scene is illustrated. The first image 600 may be captured with the light modulating elements in a first alignment position, and thus may have a uniform resolution similar to that of an image sensor, for example the image sensor 408 or 508. Herein, the first image 600 illustrates an outdoor scene including a sky 602, a car with a driver 604, and a road 606. In an example embodiment, texture data associated with the image 600 may reveal one or more textured and one or more non-textured regions of the image 600. In particular, the texture data associated with the image 600 may provide texture levels associated with various regions of the image 600. In an example scenario, the one or more non-textured regions, for example the sky 602 and the road 606, and the one or more textured regions, for example the car with the driver 604, may be captured using a single capture shot or using multiple capture shots by the apparatus 200.

Referring now to FIGURE 6B, a second image 620 of the scene is illustrated, in accordance with an example embodiment. In an example embodiment, the second image 620 is an image, similar in size to the first image 600, that includes a higher resolution for the one or more textured regions, for example a car with a driver 624, and a lower resolution for the one or more non-textured regions, for example a sky 622 and a road 626. The second image 620 is a variable resolution image and is captured by first determining the texture data in the first image 600. Based on the texture data, if it is determined that one or more light modulating elements on the spatial light modulator (for example, the spatial light modulator 404 or 504) need to be re-aligned, then a transformation may be applied to facilitate the re-aligning of the one or more light modulating elements. The second image 620 may then be captured such that the one or more textured regions are expanded or occupy a greater number of pixels on an image sensor (for example, the image sensor 408 or 508) and the one or more non-textured regions are shrunk or made to occupy a smaller number of pixels on the image sensor. Hence, as illustrated in FIGURE 6B, the one or more non-textured/low-textured regions, for example the sky 622 and the road 626, are shrunk and the one or more textured regions, for example the car with the driver 624, are expanded. The one or more textured regions have a higher resolution as compared to the one or more non-textured regions.

In an example embodiment, upon application of the transformation, the textured regions in the second image may be expanded while the non-textured regions may be reduced, thereby changing an aspect ratio of the second image, as illustrated in FIGURE 6B. Referring now to FIGURE 6C, a reverse transformation may be applied on the one or more non-textured regions of the second image 620 to generate a variable resolution image 640. The variable resolution image 640 may be larger in size as compared to the first image 600 or the second image 620. The variable resolution image is corrected in terms of the aspect ratio and the deformations caused in the second image 620 due to the transformation. For example, the variable resolution image 640 includes the one or more textured regions, for example a car with a driver 644, and the one or more non-textured regions, for example a sky 642 and a road 646, such that the resolution of the one or more textured regions is different from the resolution of the one or more non-textured regions. The image 640 is a variable resolution image and, as illustrated in FIGURE 6C, the one or more non-textured regions, for example the sky 642 and the road 646, are expanded using the reverse transformation, while the one or more textured regions, for example the car with the driver 644, remain expanded as per FIGURE 6B.

FIGURE 7 is a flowchart depicting an example method 700 for capturing an image, in accordance with an example embodiment. Example references are made to FIGURES 2 to 6C for the description of the method 700. The method 700 depicted in the flowchart may be executed by, for example, the apparatus 200 of FIGURE 2.

At 702, the method 700 includes facilitating capture of at least one first image of a scene. In an example embodiment, the at least one first image is associated with a first pixel density. In an example embodiment, the first pixel density may be indicative of a uniform resolution of the at least one first image. In an example embodiment, the at least one first image may be captured based on a reflection of scene light by at least one light modulating element onto a camera lens and imaged on at least one image sensor. An example representation of an optical arrangement for capturing the at least one first image is already explained with reference to FIGURE 4B and FIGURE 4C. In an example embodiment, the at least one light modulating element is positioned in a first alignment position (refer, FIGURE 2). In an example embodiment, the at least one light modulating element is embodied on a panel of one of a reflective DMD and a liquid crystal spatial light modulator device. In an example embodiment, the at least one light modulating element may be associated with a first resolution. In an example embodiment, the first resolution associated with the at least one light modulating element may be greater than a second resolution associated with the image sensor (for a single-click or single-capture scenario). In another example embodiment, the first resolution associated with the at least one light modulating element may be lower than the second resolution associated with the image sensor (for a single-click multiple-capture scenario). In an example embodiment, the first image may be captured by receiving the scene light at the spatial light modulator and diverting the light to the camera lens to be imaged onto the at least one image sensor. Various embodiments describing the capture of a single first image are explained further with reference to FIGURE 4A. In another example embodiment, the first image is captured by shifting the lens and the spatial light modulator to different positions to capture a first plurality of N images corresponding to different scene points or distinct regions of the scene. In an example embodiment, N is dependent on the field of view of the first lens and that of the camera lens. For example, if the field of view of the first lens is half of the field of view of the camera lens, then N = 2 and only 2 images may be captured. The first plurality of images may be used collectively to generate the first image. In an example embodiment, the first plurality of images of the scene may be captured by imaging the first plurality of images onto a plurality of regions on the at least one image sensor. Various embodiments describing the capture of the first plurality of images are explained further with reference to FIGURES 5A and 5B. In an example, the first image may be stored in the memory 204.

At 704, the method 700 includes determining texture data associated with the at least one first image based on at least one feature associated with a plurality of pixels of the at least one first image. In an example embodiment, the texture data associated with the at least one first image includes a texture level associated with the plurality of pixels of the at least one first image. The texture data is indicative of one or more textured regions and one or more non-textured regions in the at least one first image. In an example embodiment, the one or more textured regions and the one or more non-textured regions in the at least one first image may be determined based on a comparison of the texture level of the plurality of pixels with at least one threshold value of the texture level. For example, the textured regions may include regions having texture levels within a range of a first threshold value and a second threshold value of the texture level, and regions having texture levels outside this range may be considered as the non-textured regions. In an example embodiment, the texture data associated with a region may be determined based on at least one feature, for example the color and gradient associated with the pixels of the region.

At 706, a second alignment position of the at least one light modulating element may be determined based on the texture data associated with the at least one first image. In an example embodiment, the second alignment position of the at least one light modulating element may be one of a re-aligned position and the first alignment position. For example, based on the texture data associated with the at least one first image, the level of texture associated with various portions of the at least one first image may be determined. In case the texture level associated with a region of the first image indicates that the region belongs to a textured region, the second alignment position for a light modulating element responsible for imaging that region onto the image sensor may include a re-aligned position. In another example embodiment, if the texture level associated with a region of the first image indicates that the region belongs to a non-textured region, the second alignment position for a light modulating element responsible for imaging that region onto the image sensor may be the same as the first alignment position, meaning thereby that the position of the light modulating element may not be changed.

At 708, the method 700 includes facilitating capture of at least one second image of the scene with the at least one light modulating element being positioned in the second alignment position. In an example embodiment, the at least one second image corresponds to the at least one first image and may be associated with a second pixel density. If it is determined that the at least one light modulating element needs to be re-aligned based on the texture data of the first image, a controller 108 or a controller within the camera 208 may apply a transformation to the at least one light modulating element. The transformation facilitates in re-aligning the one or more light modulating elements in order to generate a variable resolution image or the second image. In an example embodiment, the second image is captured as a single image or as multiple images, as in the case of the first image, based on the resolutions of the spatial light modulator and the image sensor.
Various embodiments describing the capturing of the variable resolution image or the second image based on the resolutions of the spatial light modulator and the image sensor are already explained with reference to FIGURES 4A, 4B, 4C, 5A and 5B.

FIGURE 8 is a flowchart depicting an example method 800 for capturing an image, in accordance with another example embodiment. Example references are made to FIGURES 2 to 6C for the description of the method 800. The method 800 depicted in the flowchart may be executed by, for example, the apparatus 200 of FIGURE 2.

At 802, the method 800 includes facilitating capture of a first image of a scene. In an example embodiment, the first image of the scene may be captured by the apparatus 200. In an example embodiment, the first image is captured based on a reflection of light (associated with a scene) by a plurality of light modulating elements onto an image sensor, with at least one light modulating element of the plurality of light modulating elements being positioned in a first alignment position (refer, FIGURE 2). In an example embodiment, the first image is associated with a first pixel density. In an example embodiment, the first pixel density may be indicative of a uniform resolution of the first image. An example representation of an optical arrangement for capturing the first image is already explained with reference to FIGURE 4B and FIGURE 4C. In an example embodiment, the plurality of light modulating elements is embodied on a panel of one of a reflective DMD and a liquid crystal SLM. In an example embodiment, the plurality of light modulating elements may be associated with a first resolution. In an example embodiment, the first resolution associated with the plurality of light modulating elements may be greater than a second resolution associated with the image sensor (for a single-capture scenario). In another example embodiment, the first resolution associated with the plurality of light modulating elements may be lower than the second resolution associated with the image sensor (for a multiple-capture scenario).

In an example embodiment, the first image may be captured by receiving the scene light at the spatial light modulator and diverting the light to the camera lens to be imaged onto the image sensor. Various embodiments describing the capture of a single first image are explained further with reference to FIGURE 4A. In another example embodiment, the first image is one of a first plurality of images, each corresponding to a different scene point or a distinct portion of the scene, and may be captured by shifting the lens and the spatial light modulator. In an example embodiment, the first image may be stored in the memory 204.

At 804, the method 800 includes determining texture data associated with the first image. In an example embodiment, the texture data associated with the first image comprises a texture level associated with a plurality of pixels of the first image. In an example embodiment, the texture data is determined based on at least one feature associated with the plurality of pixels of the first image, for example, a color and a gradient associated with the plurality of pixels. The texture data is indicative of one or more textured regions and one or more non-textured regions in the first image.
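As one illustrative possibility, a gradient-based texture level of the kind described above may be computed as follows; the use of NumPy, the gradient-magnitude measure, and the normalization to [0, 1] are assumptions made for this sketch and are not mandated by the embodiments.

    # A minimal sketch of a per-pixel texture level based on the luminance
    # gradient; one of several features (e.g., color, gradient) that the
    # texture data may be based on.
    import numpy as np

    def texture_level(image: np.ndarray) -> np.ndarray:
        """Return a per-pixel texture level in [0, 1] for a 2-D grayscale image."""
        gy, gx = np.gradient(image.astype(np.float64))
        magnitude = np.hypot(gx, gy)  # per-pixel gradient magnitude
        peak = magnitude.max()
        return magnitude / peak if peak > 0 else magnitude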

At 806, the method 800 includes comparing the texture level of the plurality of pixels with at least one threshold value of the texture level in order to determine the one or more textured regions and the one or more non-textured regions in the first image. At 808, the method 800 includes labeling regions of the first image as one of a textured region and a non-textured region based on the comparison of the texture level of the regions with the at least one threshold value. For example, regions having texture levels within a range of a first threshold value and a second threshold value may be labeled as the textured regions, and regions having texture levels outside this range may be labeled as the non-textured regions.
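Operations 806 and 808 may be sketched, for illustration only, as a two-threshold test on the texture level; the numeric threshold values below are arbitrary placeholders, not values from the disclosure.

    # Illustrative labeling per operations 806-808: texture levels within
    # [T_LOW, T_HIGH] mark textured regions; levels outside this range mark
    # non-textured regions. The thresholds are placeholder assumptions.
    import numpy as np

    T_LOW, T_HIGH = 0.15, 0.95  # hypothetical first/second threshold values

    def label_textured(levels: np.ndarray) -> np.ndarray:
        """Return a boolean mask: True = textured region, False = non-textured."""
        return (levels >= T_LOW) & (levels <= T_HIGH)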

At 810, the method 800 includes computing transformation parameters associated with the textured regions and the non-textured regions of the first image based on the comparison. At 812, the method 800 includes applying the transformation parameters associated with the textured regions and the non-textured regions of the first image, to thereby perform re-alignment of the at least one light modulating element to a second alignment position. In an example embodiment, the second alignment position of the at least one light modulating element may be one of a re-aligned position and the first alignment position. For example, based on the texture data associated with the first image, the level of texture associated with various portions of the first image may be determined. In case the texture level associated with a region of the first image indicates that the region is a textured region, the second alignment position for a light modulating element responsible for imaging that region onto the image sensor may include a re-aligned position. In another example embodiment, if the texture level associated with a region of the first image indicates that the region is a non-textured region, the second alignment position for a light modulating element responsible for imaging that region onto the image sensor may be the same as the first alignment position, meaning that the position of the light modulating element may not be changed.
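A minimal sketch of operation 810 follows, treating the transformation parameters as per-region magnification factors; the factor values and all names are assumptions introduced for illustration, since the disclosure does not fix a particular parameterization.

    # Hypothetical transformation parameters: textured regions receive a
    # larger magnification (more sensor pixels), non-textured regions a
    # proportionally smaller one.
    import numpy as np

    def transformation_parameters(textured_mask: np.ndarray,
                                  textured_scale: float = 2.0,
                                  plain_scale: float = 0.5) -> np.ndarray:
        """Per-pixel magnification factor derived from the textured/plain labels."""
        return np.where(textured_mask, textured_scale, plain_scale)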

At 814, the method 800 includes facilitating capture of the second image corresponding to the first image, with the at least one light modulating element being re-aligned to the second alignment position. In an example embodiment, the second image corresponds to the first image and may be associated with a second pixel density. If it is determined that the at least one light modulating element is to be re-aligned based on the texture data of the first image, a controller 108 or a controller within the camera 208 may apply a transformation to the at least one light modulating element. The transformation facilitates re-aligning the one or more light modulating elements in order to generate a variable resolution image or the second image. In an example embodiment, the second image is captured as a single image or as multiple images, as in the case of the first image, based on the resolutions of the spatial light modulator and the image sensor. Various embodiments describing the capturing of the variable resolution image or the second image based on the resolutions of the spatial light modulator and the image sensor are already explained with reference to FIGURES 4A, 4B, 4C, 5A and 5B.

At 816, the method 800 includes determining whether there is any other first image of the scene that may be captured by the apparatus 200. If it is determined that there is another first image of the scene, the method at 802 is performed. In an example embodiment, if it is determined that another image of the first plurality of images is to be captured (in the multiple capture scenario), operations 802-814 are performed until a corresponding second image of the second plurality of images is captured. In an example embodiment, the first plurality of images of the scene may be captured by imaging the first plurality of images onto a plurality of regions on the image sensor. Various embodiments describing the capture of the first plurality of images are explained with reference to FIGURES 5A and 5B. If, at 816, it is determined that no other first image of the scene is to be captured, then at 818, a reverse transformation may be applied to one or more non-textured regions of the second image. In an example embodiment, the one or more non-textured regions of the second image may correspond to the one or more non-textured regions of the first image. Herein, the term reverse transformation may refer to increasing the resolution of the non-textured regions such that the ratio of pixels for textured to non-textured regions is the same for the first image and the second image. The reverse transformation applied to the one or more non-textured regions of the second image may provide a corrected image in terms of aspect ratio and deformations caused by the transformation applied.

In an example embodiment, for applying the reverse transformation, a resolution improvement (R) of the textured regions in the second image may be determined with respect to the first image. In an example embodiment, a buffer for a corrected image obtained after the reverse transformation may be allocated, where the size of the buffer (B) may be determined based on the resolution improvement (R). For example, in case the size of the first image is Width (W) × Height (H), the size of the buffer (B) may be (W×R) × (H×R). In an example embodiment, for every pixel in the first image, it may be determined whether the pixel belongs to the textured/highly textured regions or the low textured/plain regions. In an example embodiment, on determination that the pixel belongs to the textured/highly textured regions, high-resolution pixels for the corresponding region may be copied from the second image to the buffer (B). In an example embodiment, on determination that the pixel belongs to the low textured/plain regions, the pixels from the first image may be interpolated to fill the buffer (B). In this manner, the pixels belonging to the textured/highly textured regions and the low textured/plain regions may collectively form the final/resultant image.
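The reverse transformation just described may be sketched as follows, assuming a grayscale first image, a second image already registered and upscaled to the buffer size in its textured regions, and pixel replication standing in for whatever interpolation scheme is used; all function and variable names are illustrative.

    # Sketch of operation 818: allocate a (H*R) x (W*R) buffer B, copy
    # high-resolution pixels from the second image for textured regions, and
    # interpolate (here: replicate) first-image pixels for plain regions.
    # Assumes `second` has shape (H*R, W*R) and is aligned with `first`.
    import numpy as np

    def reverse_transform(first: np.ndarray, second: np.ndarray,
                          textured_mask: np.ndarray, R: int) -> np.ndarray:
        H, W = first.shape
        buffer = np.empty((H * R, W * R), dtype=first.dtype)  # buffer B
        for y in range(H):
            for x in range(W):
                block = (slice(y * R, (y + 1) * R),
                         slice(x * R, (x + 1) * R))
                if textured_mask[y, x]:
                    buffer[block] = second[block]  # copy high-resolution pixels
                else:
                    buffer[block] = first[y, x]    # replicate the plain pixel
        return buffer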

It should be noted that to facilitate discussion of the flowcharts of FIGURES 7 and 8, certain operations are described herein as constituting distinct steps performed in a certain order. Such implementations are examples only and non-limiting in scope. Certain operations may be grouped together and performed in a single operation, and certain operations may be performed in an order that differs from the order employed in the examples set forth herein. Moreover, certain operations of the methods 700 and 800 are performed in an automated fashion. These operations involve substantially no interaction with the user. Other operations of the methods 700 and 800 may be performed in a manual fashion or a semi-automatic fashion. These operations involve interaction with the user via one or more user interface presentations. The methods depicted in these flowcharts may be executed by, for example, the apparatus 200 of FIGURE 2.

Operations of the flowcharts, and combinations of operations in the flowcharts, may be implemented by various means, such as hardware, firmware, processor, circuitry and/or other device associated with execution of software including one or more computer program instructions. For example, one or more of the procedures described in various embodiments may be embodied by computer program instructions. In an example embodiment, the computer program instructions, which embody the procedures described in various embodiments, may be stored by at least one memory device of an apparatus and executed by at least one processor in the apparatus. Any such computer program instructions may be loaded onto a computer or other programmable apparatus (for example, hardware) to produce a machine, such that the resulting computer or other programmable apparatus embodies means for implementing the operations specified in the flowchart. These computer program instructions may also be stored in a computer-readable storage memory (as opposed to a transmission medium such as a carrier wave or electromagnetic signal) that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture, the execution of which implements the operations specified in the flowchart. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions, which execute on the computer or other programmable apparatus, provide operations for implementing the operations in the flowchart. The operations of the methods are described with the help of the apparatus 200. However, the operations of the methods can be described and/or practiced by using any other apparatus.

Without in any way limiting the scope, interpretation, or application of the claims appearing below, a technical effect of one or more of the example embodiments disclosed herein is to generate a variable resolution image of high image quality from a first image associated with a scene. In various example embodiments, the variable resolution image or a second image may be captured by a camera using a spatial light modulator, for example a DMD or a liquid crystal SLM. In various embodiments, the textured regions of the scene, for example a specific region of interest, are assigned a greater number of pixels or a higher resolution, and the non-textured regions of the scene, for example plain regions, are assigned a smaller number of pixels or a lower resolution, based on orienting one or more light modulating elements on the spatial light modulator.

Various embodiments described above may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on at least one memory, at least one processor, an apparatus, or a computer program product. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a "computer-readable medium" may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of an apparatus described and depicted in FIGURES 1 and/or 2. A computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.

If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined.

Although various embodiments are set out in the independent claims, other embodiments comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.

It is also noted herein that while the above describes example embodiments of the invention, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the present disclosure as defined in the appended claims.