Title:
METHODS AND SYSTEMS FOR RECOVERING FOLDED SIGNALS
Document Type and Number:
WIPO Patent Application WO/2021/091486
Kind Code:
A1
Abstract:
Signal processing methods and systems for recovering folded signals are disclosed. A method of recovering a signal from a series of folded signal samples of the signal comprises: segmenting the series of folded signal samples of the signal into a plurality of signal segments; representing each signal segment as a sparse matrix; solving the sparse matrix for each respective signal segment to determine a folding number for each folded signal sample within the respective signal segment; and recovering the signal using the folding number for each folded signal sample and the respective folded signal sample.

Inventors:
JI FENG (SG)
TAY WEE PENG (SG)
PRATIBHA PRATIBHA (SG)
Application Number:
PCT/SG2020/050631
Publication Date:
May 14, 2021
Filing Date:
November 03, 2020
Assignee:
UNIV NANYANG TECH (SG)
International Classes:
H03M1/12
Foreign References:
US20190103876A12019-04-04
Other References:
FENG J. ET AL.: "Recovering Graph Signals from Folded Samples", ARXIV, 8 April 2019 (2019-04-08), pages 1 - 5, XP081131426, [retrieved on 20201214]
BHANDARI A. ET AL.: "On Unlimited Sampling", 2017 INTERNATIONAL CONFERENCE ON SAMPLING THEORY AND APPLICATIONS, 7 July 2017 (2017-07-07), pages 31 - 35, XP033147577, [retrieved on 20201215], DOI: 10.1109/SAMPTA.2017.8024471
RUDRESH S. ET AL.: "WAVELET-BASED RECONSTRUCTION FOR UNLIMITED SAMPLING", 2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP, 20 April 2018 (2018-04-20), pages 4584 - 4588, XP033404031, [retrieved on 20201215], DOI: 10.1109/ICASSP.2018.8461466
Attorney, Agent or Firm:
LINDSAY, Jonas Daniel (SG)
Claims:
CLAIMS

1. A method of recovering a signal from a series of folded signal samples of the signal, the method comprising: segmenting the series of folded signal samples of the signal into a plurality of signal segments; representing each signal segment as a sparse matrix; solving the sparse matrix for each respective signal segment to determine a folding number for each folded signal sample within the respective signal segment; and recovering the signal using the folding number for each folded signal sample and the respective folded signal sample.

2. A method according to claim 1, further comprising receiving the series of folded signal samples of the signal from a plurality of self-reset analog-digital converters.

3. A method according to claim 1 or claim 2, wherein the series of folded signal samples of the signal are a set of time varying signals from a set of sensors.

4. A method according to claim 3, wherein the set of sensors correspond to nodes of a graph and the signal is a graph signal.

5. A method according to claim 1 or claim 2, wherein the signal is an image signal and the series of folded signal samples correspond to pixels of the image.

6. A method according to any preceding claim, wherein segmenting the series of folded signal samples of the signal into a plurality of signal segments comprises utilizing a distance function between the folded signal samples of the signal.

7. A method according to any preceding claim, wherein segmenting the series of folded signal samples of the signal into a plurality of signal segments comprises iteratively adding folded signal samples of the signal to a segment using a greedy algorithm according to a constraint dependent on a folding rate of the folded signal samples.

8. A method according to claim 7, wherein the constraint includes an allowable perturbation.

9. A method according to any preceding claim, wherein solving the sparse matrix for each respective signal segment to determine the folding number for each folded signal sample within the respective signal segment comprises using observed folding numbers for a subset of the folded signal samples of the signal as linear constraints.

10. A computer readable medium storing processor executable instructions which when executed on a processor cause the processor to carry out a method according to any one of claims 1 to 9.

11. A signal processing system for recovering a signal from a series of folded signal samples of the signal, the system comprising a processor and a data storage device storing computer program instructions operable to cause the processor to: segment the series of folded signal samples of the signal into a plurality of signal segments; represent each signal segment as a sparse matrix; solve the sparse matrix for each respective signal segment to determine a folding number for each folded signal sample within the respective signal segment; and recover the signal using the folding number for each folded signal sample and the respective folded signal sample.

12. A signal processing system according to claim 11, wherein the data storage device further stores computer program instructions operative to cause the processor to receive the series of folded signal samples of the signal from a plurality of self-reset analog-digital converters.

13. A signal processing system according to claim 11 or claim 12, wherein the series of folded signal samples of the signal are a set of time varying signals from a set of sensors.

14. A signal processing system according to claim 13, wherein the set of sensors correspond to nodes of a graph and the signal is a graph signal.

15. A signal processing system according to claim 11 or claim 12, wherein the signal is an image signal and the series of folded signal samples correspond to pixels of the image.

16. A signal processing system according to any one of claims 11 to 15, wherein the data storage device further stores computer program instructions operative to cause the processor to segment the series of folded signal samples of the signal into a plurality of signal segments utilizing a distance function between the folded signal samples of the signal.

17. A signal processing system according to any one of claims 11 to 16, wherein the data storage device further stores computer program instructions operative to cause the processor to segment the series of folded signal samples of the signal into a plurality of signal segments by iteratively adding folded signal samples of the signal to a segment using a greedy algorithm according to a constraint dependent on a folding rate of the folded signal samples.

18. A signal processing system according to claim 17, wherein the constraint includes an allowable perturbation.

19. A signal processing system according to any one of claims 11 to 18, wherein the data storage device further stores computer program instructions operative to cause the processor to determine the folding number for each folded signal sample within the respective signal segment using observed folding numbers for a subset of the folded signal samples of the signal as linear constraints.

Description:
METHODS AND SYSTEMS FOR RECOVERING FOLDED SIGNALS

TECHNICAL FIELD

The present disclosure relates to signal processing and in particular to recovering signals from folded signal samples.

BACKGROUND

The listing or discussion of a prior-published document in this specification should not necessarily be taken as an acknowledgement that the document is part of the state of the art or is common general knowledge.

Analog-to-digital converters (ADCs) are used to sample and digitize continuous signals. The Nyquist-Shannon rate is the minimum sampling rate that guarantees perfect recovery of a band limited signal from its signal samples. To overcome the problem of signal clipping that occurs during sampling of high dynamic range signals, self-reset ADCs are used to sample modulo-operated signal values, which are called folded samples.

The recovery of the high dynamic range signal from the folded samples presents a problem. If each self-reset ADC is provided with a reset counter, then the output of the counter can be used in the recovery of the high dynamic range signal. However, this requires additional circuitry to provide the reset counters in each ADC and the dynamic range of the system will be limited by the maximum value that the reset counters can store.

SUMMARY

According to a first aspect of the present disclosure, a method of recovering a signal from a series of folded signal samples of the signal is provided. The method comprises: segmenting the series of folded signal samples of the signal into a plurality of signal segments; representing each signal segment as a sparse matrix; solving the sparse matrix for each respective signal segment to determine a folding number for each folded signal sample within the respective signal segment; and recovering the signal using the folding number for each folded signal sample and the respective folded signal sample.

In an embodiment, the method further comprises receiving the series of folded signal samples of the signal from a plurality of self-reset analog-digital converters.

In an embodiment, the series of folded signal samples of the signal are a set of time varying signals from a set of sensors.

In an embodiment, the set of sensors correspond to nodes of a graph and the signal is a graph signal.

In an embodiment, the signal is an image signal and the series of folded signal samples correspond to pixels of the image.

In an embodiment, segmenting the series of folded signal samples of the signal into a plurality of signal segments comprises utilizing a distance function between the folded signal samples of the signal.

In an embodiment, segmenting the series of folded signal samples of the signal into a plurality of signal segments comprises iteratively adding folded signal samples of the signal to a segment using a greedy algorithm according to a constraint dependent on a folding rate of the folded signal samples.

In an embodiment, the constraint includes an allowable perturbation.

In an embodiment, solving the sparse matrix for each respective signal segment to determine a folding number for each folded signal sample within the respective signal segment comprises using observed folding numbers for a subset of the folded signal samples of the signal as linear constraints.

According to a second aspect of the present disclosure, a computer readable medium storing processor executable instructions which when executed on a processor cause the processor to carry out a method as set out above is provided.

According to a third aspect of the present disclosure, a signal processing system for recovering a signal from a series of folded signal samples of the signal is provided. The system comprises a processor and a data storage device storing computer program instructions operable to cause the processor to: segment the series of folded signal samples of the signal into a plurality of signal segments; represent each signal segment as a sparse matrix; solve the sparse matrix for each respective signal segment to determine a folding number for each folded signal sample within the respective signal segment; and recover the signal using the folding number for each folded signal sample and the respective folded signal sample.

BRIEF DESCRIPTION OF THE DRAWINGS

In the following, embodiments of the present invention will be described as non-limiting examples with reference to the accompanying drawings in which:

FIG.1 is a graph showing a signal and its folded version;

FIG.2 shows a signal recovery system according to an embodiment of the present invention;

FIG.3 shows a signal recovery system according to an embodiment of the present invention;

FIG.4 is a flowchart showing a method of recovering a signal from a series of folded samples of the signal according to an embodiment of the present invention;

FIG.5 is an example that illustrates how to segment a graph signal utilizing a distance function;

FIG.6 and FIG.7 illustrate a change of variables procedure used to represent each signal segment as a sparse matrix;

FIG.8 shows a graph used to simulate a power plant network to demonstrate the performance of a method according to an embodiment of the present invention;

FIG.9A to FIG.9F show the results of simulations using embodiments of the present invention;

FIG.10A and FIG.10B show examples of the recovery of continuous-time folded graph signals;

FIG.11A and FIG.11B show results of using embodiments of the present invention on weather station network data;

FIG.12A and FIG.12B show example heat maps of the recovery of temperature readings of a weather station network data;

FIG.13A, FIG.13B and FIG.13C show a folded image, an image recovered using a method according to an embodiment of the present invention and an image recovered using a variation minimization method respectively;

FIG.14A, FIG.14B and FIG.14C show a folded image, an image recovered using a method according to an embodiment of the present invention and an image recovered using a variation minimization method respectively;

FIG.15A to FIG.15C show an optical image, an image recovered using a method according to an embodiment of the present invention and a folded image of a mouse brain respectively;

FIG.16A to FIG.16H show recovered images from a folded mouse brain surface image under different parameter settings; and

FIG.17A and FIG.17B show recovery of the folded mouse brain surface image using the variation minimization method.

DETAILED DESCRIPTION

Graph signal processing is an emerging field that studies multidimensional signals embedded in a graph representing the inherent relationships among the different components of the signals. Graph signal processing has attracted increased attention as it allows us to capture complex correlations in many practical problems. It has been applied to various problems including signal recovery, prediction, and anomaly detection.

Analog-to-digital converters (ADCs) are used to sample and digitize continuous signals. The Nyquist-Shannon rate is the minimum sampling rate that guarantees perfect recovery of a band limited signal from its signal samples. To overcome the problem of signal clipping that occurs during sampling of high dynamic range signals, self-reset ADCs are used to sample modulo-operated signal values, which are called folded samples.

FIG.1 is a graph showing a signal and its folded version. Suppose [0, λ] is the maximum amplitude range the ADC can capture. A signal x and its folded version p, with 0 ≤ p < λ, are related by the following equation, analogous to modulo arithmetic: x = λz + p (1), where z ∈ Z is the largest integer not more than x/λ. We call z the folding number.
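For illustration only, the folding operation of equation (1) may be sketched in a few lines of Python (the function and variable names here are illustrative and not part of the claimed method):

import numpy as np

def fold(x, lam):
    """Fold a signal per equation (1): x = lam * z + p with 0 <= p < lam."""
    z = np.floor(x / lam).astype(int)   # folding number: largest integer not more than x / lam
    p = x - lam * z                     # folded sample, as output by a self-reset ADC
    return p, z

# Once z is known, recovery is exact: x == lam * z + p.
x = np.linspace(0.0, 3.2, 9)
p, z = fold(x, lam=1.0)
assert np.allclose(x, 1.0 * z + p)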

We want to recover the original signal x from the folded signal p without knowledge of the folding number z, or at least without knowledge of the folding number z for every folded signal sample.

The signals observed in many applications can be modeled as graph signals, for example, photographic and microscopic images, and readings from sensor networks. This motivates us to study signal recovery from folded versions in the context of graph signals. Unlike most other related work on folded signals, which considers only a single continuous-time signal, we consider the problem of recovery of a continuous-time graph signal in which each vertex of a graph is associated with a continuous-time signal. This allows us to perform discrete sampling in both the graph vertex and time domains, i.e., we employ spatio-temporal sampling. Moreover, it presents an opportunity to exploit the additional correlation information captured in the graph signals to enhance the signal recovery methods.

If the signals do not have time components, we are restricted to graph signals in the traditional sense. An important special case is imaging. Pixel values can be considered as signals on a grid graph. If the pixel values of an image are folded, we obtain a folded image. The method proposed in this disclosure can be used to recover folded images, as is illustrated in the description below.

FIG.2 shows a signal recovery system according to an embodiment of the present invention. The signal recovery system 100 is arranged to recover high dynamic range signals detected by a plurality of sensors 110a, 110b, 110c, and 110d. Each of the sensors 110a, 110b, 110c, and 110d has a respective self-reset analog-digital converter (ADC) 112a, 112b, 112c, and 112d which generates a folded signal sample 114a, 114b, 114c, and 114d based on the output of the sensor 110a, 110b, 110c, and 110d as described above with reference to FIG.1. The folded signal samples 114a, 114b, 114c, and 114d are received by the signal recovery system 100 which executes signal processing methods described in more detail below to recover the high dynamic range signals from the folded signal samples 114a, 114b, 114c, and 114d. As shown in FIG.2, a self-reset ADC 112d is coupled to a reset counter 116d which counts the number of resets performed by the self-reset ADC 112d and provides this to the signal recovery system 100 as an observed folding number 118d which may be used in the signal recovery processing.

The sensors 110a, 110b, 110c, and 110d may be distributed at different locations and can be many different types of sensor, including CMOS sensors, image sensors and high temperature sensors, which sense a high dynamic range signal. In some embodiments, the sensors 110a, 110b, 110c, and 110d are pixels of an image sensor which may comprise an array of sensors. It should be noted that FIG.2 shows four sensors, including a single sensor coupled to a self-reset ADC with a reset counter, for simplicity, and it is envisaged that embodiments of the present invention may be used to recover signals from different numbers of sensors, and that multiple self-reset ADCs with reset counters may be used in such embodiments.

FIG.3 shows a signal recovery system according to an embodiment of the present invention. The signal recovery system 100 is a computer system with memory that stores computer program modules which implement signal recovery processing methods according to embodiments of the present invention.

The signal recovery system 100 comprises a processor 150, a working memory 152, an input module 154, an output module 156, and program storage 160. The processor 150 may be implemented as one or more central processing unit (CPU) chips. The program storage 160 is a non-volatile storage device such as a hard disk drive which stores computer program modules. The computer program modules are loaded into the working memory 152 for execution by the processor 150. The input module 154 is an interface which allows data including the folded signal samples to be received by the signal recovery system 100. The output module 156 is an output device which allows the high dynamic range signals recovered by the signal recovery system 100 to be output. The output module 156 may be coupled to a display device or a printer, or to a system for processing and analyzing the recovered signals.

The program storage 160 stores a folded signal segmentation module 162, a sparse matrix module 164 and a signal recovery module 166. The computer program modules cause the processor 150 to execute various signal processing which is described in more detail below. The program storage 160 may be referred to in some contexts as computer readable storage media and/or non-transitory computer readable media. As depicted in FIG.3, the computer program modules are distinct modules which perform respective functions implemented by the signal recovery system 100. It will be appreciated that the boundaries between these modules are exemplary only, and that alternative embodiments may merge modules or impose an alternative decomposition of functionality of modules. For example, the modules discussed herein may be decomposed into sub-modules to be executed as multiple computer processes, and, optionally, on multiple computers. Moreover, alternative embodiments may combine multiple instances of a particular module or sub-module. It will also be appreciated that, while a software implementation of the computer program modules is described herein, these may alternatively be implemented as one or more hardware modules (such as field-programmable gate array(s) or application-specific integrated circuit(s)) comprising circuitry which implements equivalent functionality to that implemented in software.

FIG.4 is a flowchart showing a method of recovering a signal from a series of folded samples of the signal according to an embodiment of the present invention. The method 400 shown in FIG.4 is carried out by the signal recovery system 100 shown in FIG.3. The method 400 may be carried out on the folded signal samples 114a, 114b, 114c, and 114d shown in FIG.2.

The signal recovery system 100 receives the folded signal samples prior to step 402 of the method 400. The folded signal samples may correspond to time varying signals. The folded signal samples may be generated from sensors located at nodes of a graph. In such a case, the signal to be recovered may be a graph signal. The folded signal samples may correspond to pixels and the signal to be recovered may be an image signal.

In step 402, the folded signal segmentation module 162 is executed by the processor 150 to segment the folded signal samples into a plurality of signal segments. The aim of this step is to generate signal segments containing folded signal samples which each have a similar folding number. As will be described in more detail below, the segmentation carried out by the segmentation module 162 may involve calculating a distance function between folded signal samples and including the folded signal samples in a segment based on the distance function. Further, as described in more detail below, a greedy algorithm may be implemented by the segmentation module 162 to iteratively add folded signal samples to a segment according to a constraint. The constraint may depend on the folding rate of the folded signal samples. The constraint may include an allowable perturbation. The effect of varying this allowable perturbation is discussed below.

In step 404, the sparse matrix module 164 is executed by the processor 150 to represent each signal segment as a sparse matrix. The sparse matrix represents a set of equations which can be solved to return the folding number of each folded signal sample.

In step 406, the sparse matrix module 164 is executed by the processor 150 to solve the sparse matrix to give the folding number for each folded signal sample. In some embodiments, observed folding numbers for some of the folded signal samples are included as linear constraints in the solution of the sparse matrix.

In step 408, the signal recovery module 166 is executed by the processor 150 to recover the signal for each folded signal sample using the folding numbers calculated in step 406. Once the folding number is calculated in step 406, equation (1) above can be used to determine the high dynamic range signal.
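A high-level sketch of the method 400 is given below for illustration only; segment and solve_folding_numbers are placeholder names standing in for the segmentation of step 402 and the sparse matrix solution of steps 404 and 406, which are described in detail later:

import numpy as np

def recover_signal(folded, lam, segment, solve_folding_numbers):
    """Illustrative pipeline for steps 402-408 of FIG.4 (placeholder helpers)."""
    recovered = np.empty_like(folded, dtype=float)
    for idx in segment(folded):                  # step 402: split into signal segments
        z = solve_folding_numbers(folded[idx])   # steps 404-406: folding numbers per segment
        recovered[idx] = lam * z + folded[idx]   # step 408: apply equation (1)
    return recovered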

In the following description, the theory underlying the method described above is set out in detail. In the present disclosure the following notation is used. We use R and Z to denote the set of real numbers and integers, respectively. Suppose x: V → R is a mapping from a set V to R. Then, for a subset S ⊆ V, we let x(S) = (x(v))_{v∈S}. For a matrix W, W_S is the submatrix consisting of the rows of the matrix W indexed by the set S. Thus, W_u is the row u of W.

Firstly, we present our system model and assumptions. We also derive a sufficient condition under which perfect recovery from folded graph signal samples is achievable. Consider an undirected, simple graph G = (V, E) with V the set of vertices and E the set of edges.

Let x = (x(v, t))_{v∈V, t∈R} be a continuous-time graph signal, where for each vertex v ∈ V, x(v, ·): R → R is an L2 function. Component-wise, for each t ∈ R, x(·, t) = (x(v, t))_{v∈V} is a graph signal; and for each v ∈ V, x(v, ·) = (x(v, t))_{t∈R} is a continuous-time signal at v. Note that a graph time series x is a special case of a generalized graph signal as defined by Y. Tanaka, “Spectral domain sampling of graph signals,” IEEE Trans. Signal Process., vol. 66, no. 14, pp. 3752-3767, July 2018. We start with the following bandlimit assumptions.

Assumption 1. For each time t, the graph signal x(·, t) is spanned by W = {w_1, ..., w_K}, K eigenvectors of the Laplacian matrix L of the graph G. Moreover, W is independent of t.

We use W to denote the matrix with the w_i as its columns. At any time t, x(·, t) can be represented using the basis vector coefficients b_t = (b_{k,t}) ∈ R^K as x(·, t) = W b_t.

At each time t, we can sample the graph signal by recording the signal at a subset S ⊂ V to obtain the spatially sampled signal x(S, t). The submatrix of W formed by taking the rows indexed by S is denoted by W_S ∈ R^{|S|×K}. The sampled signal can be used to recover the full graph signal if the sampled vertices are chosen such that |S| ≥ K and W_S has a left-inverse.

It is common to have W consisting of eigenvectors corresponding to the K smallest eigenvalues of L, for example, when one performs graph signal smoothing by using a low-pass filter.

Assumption 2. At each node v ∈ V, the L2 function x(v, ·) is bandlimited to [-B, B] Hz.

If we sample x discretely with a subset U ⊂ V × R, a useful measure is the sampling rate of U.

Using a sampling interval T_0, the discrete signal y(v, n) = x(v, nT_0) is obtained. Applying the Nyquist-Shannon sampling rate, we may choose T_0 = 1/(2B) for complete signal recovery of x(v, ·) from the sampled discrete signal y. Using both spatial and temporal sampling, a sampling rate of 2BK over the entire graph guarantees perfect recovery of the full signal.

The concept of an F-transform for generalized graph signals was developed to quantify the variation of the signal using the joint spectrum over the graph vertex and time domains. The F-transform of x evaluated at the k-th graph eigenvalue and time domain frequency f, where k ∈ {1, ..., K} and f ∈ [-B, B], is the Fourier transform of b_{k,t}, regarded as a continuous-time signal in t, evaluated at f.

Assumption 3. For each k = 1, ..., K and f ∈ [-B, B], there is a uniform upper bound a_{k,f} ≥ 0, independent of x, such that the magnitude of the F-transform of x at the k-th graph eigenvalue and frequency f is at most a_{k,f}.

Assumption 3 is used in the quantitative analysis discussed below.

We consider the scenario where the graph signal is not recorded perfectly by the ADC, e.g., the signal dynamic range exceeds the voltage range of the ADC. We use self-reset ADCs to sample the graph signal. A self-reset ADC captures the signal at each sampled node and yields a folded signal via a modulo operation. More specifically, the maximum amplitude that can be measured by the ADC, a positive real number λ, is called the folding rate. Using the definition of the modulo operator described in (1), the graph signal y(S, n) = (y(v, n))_{v∈S} ∈ R^K can be expressed as follows: y(S, n) = D_λ z(S, n) + p(S, n), where D_λ is a diagonal matrix with diagonal entries all equal to λ and 0 ≤ p(v, n) < λ for all v ∈ S. We refer to z(S, n) ∈ Z^K as the vector of folding numbers and p(S, n) ∈ [0, λ)^K as the vector of folded signals. Suppose that |S| = K and W_S is invertible; then the basis coefficients b_{i,nT_0}, and hence y(·, n), are uniquely determined by y(S, n). However, if we only observe p(S, n), it is in general not enough to recover y(·, n) and additional information is required.

To state such information, we need one more notion: k real numbers α_1, ..., α_k are said to be linearly independent over Z if the only integer solution (q_1, ..., q_k) to the equation q_1α_1 + ... + q_kα_k = 0 is q_i = 0 for all 1 ≤ i ≤ k. It is common to have k-tuples linearly independent over Z, by countability.

Theorem 1. Consider graph signals that belong to the column span of W = [w_1, w_2, ..., w_K]. Let S ⊂ V be a subset of sampled vertices with |S| = K such that the submatrix W_S formed by taking the rows of W indexed by S is invertible. Suppose that folded signals with folding rate λ are observed at vertices in S. We have the following:

(a) Let u ∈ V\S be an additional node such that the entries of W_u W_S^{-1} are linearly independent over Z. If the folding rate λ' at u is chosen according to a probability distribution absolutely continuous with respect to the Lebesgue measure, then with probability one, any graph signal in the column span of W is uniquely determined by its folded signals at V' = S ∪ {u}.

(b) If u ∈ V\S with folding rate λ is such that W_u W_S^{-1} contains an irrational entry, then there are infinitely many graph signals with the same folded signals at S and distinct folded signals at u.

The theorem essentially says that in terms of recovery, in theory, one needs to sample at least one more node than the number required by the bandlimit condition. Moreover, one may also need to choose a different folding rate λ' at this additional node u. On the other hand, almost every λ' works, as long as it is chosen randomly according to a probability distribution absolutely continuous w.r.t. the Lebesgue measure.

Proof:

(a) Consider two graph signals ĝ and g with corresponding vectors of folded signals and folding numbers at the sample set S, and suppose that their folded signals at S coincide. We want to show that if, for a vertex u ∈ V\S, ĝ and g also have the same folded signal with folding rate λ' randomly chosen according to a distribution absolutely continuous w.r.t. the Lebesgue measure, then with probability one ĝ = g, and hence g is almost surely uniquely determined by the folded signals at V' = S ∪ {u}.

Since the folded signals of ĝ and g coincide at S, we have equation (5): ĝ(S) − g(S) = λq, where q ∈ Z^K is the difference in the folding numbers of the two graph signals ĝ and g at the sample set S. We also have equation (6): ĝ(u) − g(u) = W_u W_S^{-1}(ĝ(S) − g(S)).

Combining (5) together with (6), and since W_S is invertible, we have equation (7): λ'q' = λ Σ_k r_k q_k, where q' ∈ Z is the difference in folding numbers at u, and q_k and r_k are the k-th components of the vectors q and r = W_u W_S^{-1}, respectively.

Before proceeding further, we give an intuition as to why, for most choices of λ', (7) has only the trivial solution q = 0. The left-hand side is an integer multiple of λ'. On the other hand, the right-hand side is an element of a countable set if each q_k is an integer. Therefore, unless q = 0, for the equality to hold λ' must be the ratio between a number from a countable set and a non-zero integer, and thus λ' itself belongs to a countable set. Now, we turn this into a formal argument. If the q_k are not all 0, then, as the entries of r = W_u W_S^{-1} are linearly independent over Z, the expression λ Σ_k r_k q_k is non-zero. Moreover, as q varies over Z^K, the collection of such values is a countable set in R. Hence, the set Λ of ratios of such values to non-zero integers is countable and has Lebesgue measure zero. Therefore, with probability one, a randomly chosen λ' does not belong to Λ. Consequently, with such a λ', if ĝ and g have the same folded sample at u, then q = 0 and our claim is proved.

(b) Without loss of generality, we assume that the first component r_1 of r = W_u W_S^{-1} is irrational. In (7), we let q_k = 0 for k > 1 and let q_1 vary over Z. With such a choice, as r_1 is irrational, the resulting folded signal values at u are dense in the interval [0, λ). In particular, there are infinitely many possibilities for the folded signal at u. Therefore, we can observe the same folded signals at S and different folded signals at u for infinitely many graph signals g.

Theorem 1 essentially says that in terms of recovery, in theory, one needs to at least sample at one more vertex than the number required by the bandlimit condition in Assumption 2. Moreover, one may also need to choose a different folding rate λ' at this additional vertex. On the other hand, almost every λ' works, as long as it is chosen randomly according to a probability distribution absolutely continuous w.r.t. the Lebesgue measure.

One should note that Theorem 1(a) does not rule out the possibility λ' = λ. From the proof, we can choose λ' = λ provided the right-hand side of (7) is not an integer multiple of λ for any non-zero q. It can be shown that the conditions in Theorem 1 as well as the above condition for λ' = λ are satisfied in certain random models. In practice, say in a sensor network, the random folding rate at an additional vertex or sensor required in Theorem 1(a) can be realized by setting one of the sensors to operate at a lower folding rate λ' than its maximum voltage level λ, choosing it uniformly randomly in (0, λ). We would like to point out that λ' appearing in Theorem 1 is independent of the signal x(·, t) for each time t. Therefore, as long as W in Assumption 1 is fixed and the conditions of Theorem 1(a) are satisfied, we are able to recover the graph signal x(·, t) from folded samples at each time t. This leads to the following.

Corollary 1 (Discrete sampling rate). Suppose that Assumption 1 and Assumption 2 hold. Let S, u and λ' be chosen as in Theorem 1(a). Let T be a discrete subset of R sampled at rate 2B Hz. Then, with probability one, the generalized graph signal x can be recovered from the folded signals at (S ∪ {u}) × T.

Proof: From Theorem 1(a), for each t ∈ T we are able to recover the entire graph signal x(·, t) from the folded samples at S ∪ {u}. Consequently, by the Nyquist-Shannon sampling theorem, we can recover x(v, ·) for each v ∈ V and hence the entire signal x.

Suppose we have a sample set S of size K with folding rate λ such that W_S is invertible, and another sample set S' of size K' ≥ 1 with folding rate λ' chosen randomly as in Theorem 1(a). In addition, we assume that for some u ∈ S', the entries of W_u W_S^{-1} are linearly independent over Z. From Theorem 1(a), the folded signals at S ∪ S' allow us to recover the graph time series signal x. Specifically, at time nT_0, let the folded signals and folding numbers at S and S' be given by {p(S, n), z(S, n)} and {p(S', n), z(S', n)}, respectively. Using (4) and noting that y(S', n) = W_{S'} W_S^{-1} y(S, n), we obtain equation (8): D_{λ'} z(S', n) + p(S', n) = W_{S'} W_S^{-1} (D_λ z(S, n) + p(S, n)).

Recoverability of the signal x is equivalent to (8) having a unique integer solution. We can thus apply integer programming to recover z(S, n) and z(S', n). This makes our approach local in nature. This means that in each stage of the recovery scheme, we only need to consider finitely many sampled nodes, concentrated in a short time interval. It can also be applied to snapshot graph signal recovery from observations of folded signals at appropriately chosen sample nodes. Moreover, it can also locally recover signals that fail to fulfill global bandlimited or L2 assumptions in the time direction. Regarding sampling rate, a direct application of an unlimited sampling algorithm to the signal at each vertex of the graph yields an overall sampling rate of 2BKe. In our proposed approach, by using the signal correlation enforced by graph bandlimit conditions, a sampling rate of 2B(K + 1) over the entire graph is achievable by choosing K' = 1. This rate is significantly smaller than 2BKe for large K and B. However, integer programming is intractable for large K. In the following, we propose an alternative sparse recovery method that is however suboptimal. We utilize both the spatial and temporal correlations to recover the signal. The main idea is to choose a sample set of vertices V' = S ∪ S' so that it can be further partitioned into subsets on which the graph signal does not vary much at each time instant. Each of these subsets of vertices then shares “approximately” the same folding number at each time instant. In the following section, we introduce the concept of partition complexity and show how it is related to the degrees of freedom in the graph signal. Recall that sampling on G = (V, E) refers to selecting a subset of vertices V' ⊂ V, so that certain requirements are met. The signal recovery problem from folded signals is equivalent to the recovery of the folding numbers. In this section, we propose an approach to sample vertices so that the assumptions can be relaxed and the problem solved using a sparse optimization approach instead of the NP-complete integer programming approach.
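For very small K, the integer programming step can be illustrated by an exhaustive search over candidate folding numbers. The sketch below is illustrative only (the names and the bounded search range are assumptions); it checks each candidate z(S, n) for consistency with the folded observation at a single extra node u with folding rate λ', in the spirit of Theorem 1:

import itertools
import numpy as np

def brute_force_folding_numbers(W, S, u, lam, lam_u, p_S, p_u, zmax=3, tol=1e-6):
    """Enumerate candidate folding numbers on S and keep those consistent at u.

    W: basis matrix; S: list of K sampled vertices with folding rate lam;
    u: extra vertex with folding rate lam_u; p_S, p_u: folded observations.
    Only practical for very small K; an integer programming solver would be
    used instead in practice.
    """
    w_u = W[u, :] @ np.linalg.inv(W[S, :])     # row vector W_u W_S^{-1}
    solutions = []
    for z_S in itertools.product(range(-zmax, zmax + 1), repeat=len(S)):
        z_S = np.array(z_S)
        y_u = w_u @ (lam * z_S + p_S)          # predicted unfolded value at u
        z_u = (y_u - p_u) / lam_u              # must be an integer for consistency
        if abs(z_u - round(z_u)) < tol:
            solutions.append((z_S, int(round(z_u))))
    return solutions                           # almost surely unique by Theorem 1(a)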

We want to sample vertices whose signals tend to have small differences. This is because, at such vertices, the folding numbers have a greater chance of being similar. To motivate the following technical discussions, we first outline our strategy. To solve equation (8), we relax the unknown folding numbers z(S, n) and z(S', n) to continuous variables. Doing so results in a system of K' equations with K + K' unknowns. However, if the solution we are looking for satisfies certain conditions such as being sparse, then we may recover it using tools such as L1-optimization.

Consider a collection of m vertices {v_i: 1 ≤ i ≤ m} on G. If a graph signal g on G is such that |g(v_i) − g(v_j)| is small for all i ≠ j, then it is likely that the folding numbers of g(v_i) and g(v_j), denoted as z(v_i) and z(v_j) respectively, are equal. This means that the set of differences {z(v_i) − z(v_j): i ≠ j} is sparse. We may then apply L1-optimization to a minimization similar to a relaxed version of equation (8) but involving differences of folding numbers instead.

We need a way to quantify the difference between the signals at two different vertices. To do this, we utilize a symmetric non-negative “distance” function that will be carefully designed as set out below to account for the graph signal’s bandlimitedness in both the graph vertex and time domains. We then wish to partition a set of sampled vertices into subsets of vertices that are close to each other in terms of this “distance” function. We start with a basic definition.

FIG.5 is an example that illustrates how to segment a graph signal utilizing a distance function.

Definition 1. Suppose φ is a symmetric non-negative function on pairs of vertices of G and r > 0 is a real number. Let V' ⊆ V be a subset of vertices. A (φ, r)-admissible partition of V' is a decomposition of V' into a disjoint union of subsets V' = V_1 ∪ ... ∪ V_c such that for each component V_i there is a center c_i ∈ V_i with φ(c_i, v) ≤ r for all v ∈ V_i. The smallest c such that V' has an admissible partition into c components is called the (φ, r)-complexity of V'.

In the example shown in FIG.5, the graph has 4 vertices and V' = {v_1, v_2, v_3}. The values of the function φ are labeled along the edges. If r = 2, the complexity is 2; in this case, we apply a partition into two components with centers v_1 and v_3 respectively. On the other hand, if r = 3, the complexity is 1: the partition V' = V' is trivial, and the center must be chosen as v_1.

If we take the function φ(u, v) as a measure of the difference between the vertex signals at vertices u and v, then a component of a (φ, r)-admissible partition of V' collects all the vertices with similar signals together. A subset V' ⊂ V with higher (φ, r)-complexity requires more components to describe all its vertex signals and is hence more “complex”. Intuitively, we want to sample V' from V so that a subset S ⊂ V' is such that W_S has a left-inverse, while maintaining a small (φ, r)-complexity of V'. As choosing S such that W_S has a left-inverse is easy (with high probability), we focus on the latter task here. Our goal is to find a subset of vertices V' with size bounded below by an integer s < |V| such that the (φ, r)-complexity of V' is as small as possible, i.e., we want to solve Problem (9): minimize the (φ, r)-complexity of V' over all subsets V' ⊆ V with |V'| ≥ s.

Definition 2. To solve Problem (9), we can consider it from a different angle. For each v ∈ V, the φ-ball of radius r at v is defined as the set B_φ(v, r) = {u ∈ V: φ(v, u) ≤ r}. For a subset C ⊆ V, let B_φ(C, r) denote the union of the φ-balls of radius r at the vertices of C. For each (φ, r)-admissible partition V' = V_1 ∪ ... ∪ V_c we have V_i ⊆ B_φ(c_i, r), where c_i is a center. Let C = {c_i}_{1≤i≤c}, so that V' ⊆ B_φ(C, r). Then, Problem (9) is equivalent to the following optimization problem (10): find the smallest set of centers C such that |B_φ(C, r)| ≥ s.

To solve Problem (10), we may estimate the maximum of |B_φ(C, r)| over center sets C of size c, starting from c = 1.

Regarding the size of B_φ(C, r), we have the following observation. Lemma 1. The function C ↦ |B_φ(C, r)| is a submodular set function.

There are several equivalent characterizations of submodularity and we describe one such characterization here. Let Ω be a set. A function F on subsets of Ω is submodular if and only if the following holds: for any X ⊆ Y ⊆ Ω and v ∈ Ω \ Y, we have F(X ∪ {v}) − F(X) ≥ F(Y ∪ {v}) − F(Y). (11)

We now proceed to the proof of Lemma 1 with (11) as the definition of submodularity.

Proof: Given X ⊆ Y ⊆ V and v ∈ V, we have the obvious inclusion B_φ(Y ∪ {v}, r) \ B_φ(Y, r) ⊆ B_φ(X ∪ {v}, r) \ B_φ(X, r), so the marginal gain of adding v to X is at least the marginal gain of adding v to Y.

Therefore, the function C ↦ |B_φ(C, r)| is submodular following the above definition.

Maximizing a submodular function with a size constraint is usually intractable. However, it is well known that the greedy algorithm admits a 1 − 1/e approximation. Therefore, we propose the use of the greedy algorithm for Problem (10): we initialize by choosing C_0 = ∅ at iteration c = 0. In each subsequent iteration c, we add a new vertex v to C_{c−1} such that |B_φ(C_{c−1} ∪ {v}, r)| is maximized. The procedure terminates at the iteration c where |B_φ(C_c, r)| ≥ s. As a byproduct, if in the c-th iteration we include v in C, we obtain the component V_c = B_φ(v, r) \ B_φ(C_{c−1}, r), which forms the decomposition as required in Definition 1.
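A minimal sketch of this greedy procedure is shown below for illustration; phi is any symmetric non-negative function on pairs of vertices, and the function and variable names are illustrative only:

def greedy_partition(phi, vertices, r, s):
    """Greedy cover of at least s vertices by phi-balls of radius r (Problem (10)).

    Returns the chosen centers and the components of the induced
    (phi, r)-admissible partition, per the byproduct noted above.
    """
    centers, covered, components = [], set(), []
    while len(covered) < min(s, len(vertices)):
        best_v, best_gain, best_new = None, -1, None
        for v in vertices:
            if v in centers:
                continue
            ball = {u for u in vertices if phi(v, u) <= r}
            new = ball - covered            # marginal gain of adding v as a center
            if len(new) > best_gain:
                best_v, best_gain, best_new = v, len(new), new
        if best_gain <= 0:                  # no further coverage possible
            break
        centers.append(best_v)
        covered |= best_new
        components.append(sorted(best_new))
    return centers, components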

λ-sparsity

We start with the following notion of λ-sparsity, which quantifies the degrees of freedom the folding numbers of a signal in a given set can have.

Definition 3. Consider a set of signals, where each signal y in the set can be written as y = λz + p with folding number z and folded signal p. The λ-sparsity of a signal in this set is the smallest size of an index subset I such that z is uniquely determined by {z(i): i ∈ I}.

Intuitively, consider the set of graph time series signals x satisfying Assumptions 1-3. We use λ-sparsity to quantify the number of different folding numbers when x is folded w.r.t. the folding rate λ. We now introduce an explicit non-negative symmetric function φ: V × V → R≥0 in equation (12), with B as in Assumption 2 and a_{k,f} as in Assumption 3. We use φ to measure the proximity of two vertices u, v in terms of the weighted average of the difference in the signals at u, v. As discussed above, we hope that for two vertices u, v such that φ(u, v) is small, a graph time series signal satisfying Assumptions 1-3 tends to have the same folding number at vertices u and v at each time instant.

Lemma 2. Consider the signals y_1 and y_2, where (p_i, z_i) are the folded signal and folding number of y_i, respectively, for i = 1, 2. Suppose |y_2 − y_1| < λ/2; then the folding number z_2 of y_2 is uniquely determined: if p_2 − p_1 > λ/2, then z_2 = z_1 − 1; if p_2 − p_1 < −λ/2, then z_2 = z_1 + 1; otherwise, z_2 = z_1. Proof: The proof is best conveyed through FIG.6: if p_2 − p_1 > λ/2, then λ(z_2 − z_1) = (y_2 − y_1) − (p_2 − p_1) < 0, where the last inequality follows from |y_2 − y_1| < λ/2. Therefore, z_2 ≤ z_1 − 1. Furthermore, since p_2 − p_1 < λ and y_2 − y_1 > −λ/2, we have λ(z_2 − z_1) > −3λ/2, so z_2 = z_1 − 1. The proofs of the remaining cases are similar and the lemma is proved.
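Lemma 2 translates directly into a small helper for propagating a folding number from a segment center; the sketch below assumes, as in the lemma, that the underlying unfolded values differ by less than λ/2:

def propagate_folding_number(p_center, z_center, p_v, lam):
    """Folding number at v given its segment center, per the case analysis of Lemma 2."""
    d = p_v - p_center
    if d > lam / 2:
        return z_center - 1
    if d < -lam / 2:
        return z_center + 1
    return z_center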

Theorem 2. Let V' ⊆ V be a subset of vertices of G and T_0 = 1/(2B). For any n ∈ Z, the λ-sparsity of y(·, n) on V', for signals x satisfying Assumptions 1-3, is upper bounded by the (φ, λ/2)-complexity of V'.

Furthermore, suppose V' = V_1 ∪ ... ∪ V_c is a (φ, λ/2)-admissible partition with centers c_1, ..., c_c. For each v ∈ V', let z(v) and p(v) be the folding number and folded signal of y(v). We have the following: for each v ∈ V_i, the folding number z(v) is determined from z(c_i), p(c_i) and p(v) according to the cases of Lemma 2.

Proof: With Assumption 2 and Assumption 3, we can express the graph time series signal x in terms of the basis W and a family of coefficients bounded by the a_{k,f} of Assumption 3. For any two vertices u and v, we then obtain the bound |x(u, t) − x(v, t)| ≤ φ(u, v) (13), where the inequality follows from the bounds in Assumption 3 and the definition of φ in (12). Suppose V' = V_1 ∪ ... ∪ V_c is a (φ, λ/2)-admissible partition with centers c_1, ..., c_c. Then for each v ∈ V_i we have φ(c_i, v) ≤ λ/2 and, from (13), |y(v) − y(c_i)| ≤ λ/2. From Lemma 2, the folding number of y(v) is determined by the folding number of y(c_i). Therefore, from Definition 3, the λ-sparsity of y on V' is at most c. The remainder of the theorem follows from Lemma 2, and the proof is complete.

In applications, we may relax the radius λ/2 by allowing a perturbation and replace it by λ/2 + ε with some suitable choice of ε > 0. The consequence is that we may have a smaller partition complexity, at the cost of losing the partition complexity as a theoretical upper bound of the λ-sparsity. This can be useful if the uniform bounds in Assumption 3 are not tight. We explore the effect of adding ε in the examples below.

Signal recovery

In this section, we make use of the concepts introduced in the previous section to develop an algorithm that recovers a graph time series signal x using its folded samples recorded by self-reset ADCs. This is equivalent to recovering the folding numbers. Throughout this section, we assume that we sample from vertices in S with folding rate λ and from vertices in S' with folding rate λ'. The sample set S is chosen so that |S| = K and W_S is invertible. In addition, we assume that for some u ∈ S', the entries of W_u W_S^{-1} are linearly independent over Z. These properties theoretically guarantee perfect recovery. At each time nT_0, where T_0 = 1/(2B) and n ∈ Z, let the folded signals and folding numbers at S and S' be given by the pairs of vectors {p(S, n), z(S, n)} and {p(S', n), z(S', n)}, respectively.

Lemma 3. For a fixed graph time series signal x, the folding number signal z, where z(v, t) is the folding number of x(v, t), is almost everywhere compactly supported.

Proof:

As V is finite, it suffices to show that z(v, ·) is compactly supported for each v ∈ V, almost everywhere in the time domain. Each x(v, ·) belongs to L2(R). By Assumption 2 and the Whittaker-Shannon interpolation formula, x(v, ·) can be expanded in integer translates of the sinc function in the L2 sense. As integer translates of the sinc function are pairwise orthogonal w.r.t. the L2-norm and x(v, ·) ∈ L2(R), the expansion coefficients are square summable and, in particular, bounded by some b. Split the expansion into the terms with index at most N_0 in absolute value and the remaining tail. By choosing N_0 to be sufficiently large, the tail can be made arbitrarily small. At the same time, for |t| large enough, the finitely many remaining terms can be made arbitrarily small. Therefore, |x(v, t)| < λ for all sufficiently large |t|. Thus, as x(v, t) does not fold for such t, z must be compactly supported.

Recall from (4) that y(·, n), n ∈ Z, are samples of x(·, t) at discrete time instants. For square integrable x, by Lemma 3, the discrete samples y(·, n) are unfolded for large |n|. Hence, it suffices to recover the folding number for each difference y(·, n + 1) − y(·, n). In the case of clipping, i.e., we do not observe y(·, n) when |n| is large, our method recovers the folding numbers up to an additive constant. Fix an n and consider the difference signal y(·, n + 1) − y(·, n), and denote its folding numbers and folded signals by z(·) and p(·). From (8), we obtain equation (14), which is a system of K' equations in the K + K' unknown folding numbers. This system of equations in general has no unique solution if we relax the unknowns to be continuous variables. But our previous discussions on λ-sparsity indicate that a sparse solution can be found. Our signal recovery steps are as follows:

Step 1: For a fixed K' ≥ 1, we proceed by using the greedy algorithm with s = K + K' to obtain a sample set of vertices V' = V_1 ∪ ... ∪ V_c which is a (φ, λ/2)-admissible partition with centers c_1, ..., c_c. (Note that c may exceed the (φ, λ/2)-complexity since the greedy algorithm is suboptimal.) Decompose V' = S ∪ S' such that |S| = K and W_S is invertible.

Step 2: We replace the unknowns in (14) by variables as follows (cf. Theorem 2):

(i) If v = c_i for some i = 1, ..., c, make the substitution z(v) = ζ(c_i).

(ii) Suppose that v ∈ V_i \ {c_i}. If p(v) − p(c_i) > λ/2, write z(v) = ζ(c_i) − 1 + ζ(v); if p(v) − p(c_i) < −λ/2, write z(v) = ζ(c_i) + 1 + ζ(v); otherwise, write z(v) = ζ(c_i) + ζ(v).

FIG.7 illustrates the three cases in recovery Step 2. As shown in FIG.7, z(v) (which is denoted as z(j) in FIG.7) is substituted according to the relationship between p(v) and p(c_i).

In sub-step (ii), by Lemma 2 we expect ζ(v) = 0 unless the difference signal at v deviates from that at its center c_i by more than λ/2. Hence, we have a sparse solution in the variables ζ(v) if the number of partition components c is small.

Step 3: After performing Step 2 on (14), we obtain a new set of equations in the K + K' variables ζ, expressed as Mζ = h (15), with M being a K' × (K + K') matrix and h a K' × 1 vector. Equation (15) can be solved for ζ exactly if rank(M) > c, since there are only c non-zero unknown variables. However, if the bounds in Assumption 3 are not tight, neither is the bound given by the function φ in (13). In this case, the (φ, λ/2)-complexity may be large. We may instead sample V' with a smaller (φ, λ/2 + ε)-complexity for some ε > 0. We no longer expect that (15) can be solved directly. However, in compensation for the smaller partition complexity, we still expect ζ to be sparse. Therefore, we heuristically propose solving Problem (16): minimize the L1-norm of ζ subject to Mζ = h.

Moreover, suppose that at some vertex v the folding number z(v) is observed (for example, from a reset counter) and v belongs to V_i. This gives us an additional equation taking one of the forms of Step 2, relating ζ(c_i) and ζ(v) to the observed z(v), which can be added as a linear constraint to Problem (16).
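A minimal sketch of this sparse recovery step is shown below for illustration; it assumes the generic convex solver cvxpy is available and treats any observed folding numbers as extra linear constraints of the form a·ζ = b:

import cvxpy as cp
import numpy as np

def recover_zeta(M, h, linear_constraints=None):
    """Relaxation of Problem (16): minimize ||zeta||_1 subject to M @ zeta == h.

    linear_constraints: optional list of (a, b) pairs adding a @ zeta == b,
    e.g. derived from observed folding numbers as described above.
    """
    zeta = cp.Variable(M.shape[1])
    constraints = [M @ zeta == h]
    for a, b in (linear_constraints or []):
        constraints.append(a @ zeta == b)
    cp.Problem(cp.Minimize(cp.norm1(zeta)), constraints).solve()
    return np.rint(zeta.value).astype(int)   # folding numbers are integer-valued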

In the following we present simulation results to demonstrate the performance of methods according to embodiments of the present invention on folded continuous-time graph signals. We then conduct experiments on folded images to validate that the approach can recover the original unfolded image.

We consider G being one of the following graphs: the complete graph with discrete random edge weights, the Arizona power plant network as illustrated in FIG.8, and a 2-dimensional (2D) lattice of size 25 × 20.

FIG.8 shows a graph used to simulate a power plant network to demonstrate the performance of a method according to an embodiment of the present invention. The graph 800 shown in FIG.8 comprises a plurality of nodes 810 connected by edges 820 which is intended to simulate the Arizona power plant network. A power plant may observe signals with high dynamic range such as temperature measurements. On the other hand, a lattice can be used to model an image carrying HDR pixel values. For the parameters in Assumptions 1-3, we standardize the bandlimit B to be 1 Hz and set a_{i,f} to be inversely proportional to i and f. The choices of K and K' are summarized in Table I.

Table I: K and K' for different graphs.

For each simulation instance, each coefficient of the signal x w.r.t. the basis W is chosen uniformly at random within the bound set by the corresponding a_{i,f}. The folding rate λ is chosen to ensure a significant amount of non-zero foldings. In addition, we add white Gaussian noise to the observed folded signals with signal-to-noise ratios (SNR) of 15 dB, 20 dB, and 40 dB. As discussed above, we aim at recovering the difference signals. An example of a continuous-time graph signal and its folded version is shown in FIG.10.

Due to noise, we apply the proposed recovery algorithm by solving a modified version of Problem (16) to recover the difference signal, in which the equality constraint is replaced by a data-fidelity penalty weighted against the sparsity term by a chosen regularizing parameter α.
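The exact form of the modified problem is not reproduced here; one plausible reading, sketched below for illustration only, combines a least-squares data-fidelity term with the L1 sparsity term of Problem (16), weighted by the regularizing parameter α:

import cvxpy as cp

def recover_zeta_noisy(M, h, alpha):
    """Noise-robust variant of Problem (16): least-squares fit plus an l1 sparsity penalty."""
    zeta = cp.Variable(M.shape[1])
    objective = cp.Minimize(cp.sum_squares(M @ zeta - h) + alpha * cp.norm1(zeta))
    cp.Problem(objective).solve()
    return zeta.value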

We choose the set of sample nodes V' with small (φ, λ/2 + ε)-complexity by the greedy algorithm. FIG.9A to FIG.9F show the results of simulations using embodiments of the present invention. For different values of ε, we plot the average total recovery error in the folding numbers and the size of (φ, λ/2 + ε)-admissible partitions of V' against ε/λ. Different curves correspond to different noise levels.

FIG.9A shows the average total recovery error in folding numbers of the difference signals against ε/λ for the complete graph simulation. FIG.9B shows the size of (φ, λ/2 + ε)-admissible partitions of V' against ε/λ for the complete graph simulation. FIG.9C shows the average total recovery error in folding numbers of the difference signals against ε/λ for the power plant simulation. FIG.9D shows the size of (φ, λ/2 + ε)-admissible partitions of V' against ε/λ for the power plant simulation. FIG.9E shows the average total recovery error in folding numbers of the difference signals against ε/λ for the 2D lattice simulation. FIG.9F shows the size of (φ, λ/2 + ε)-admissible partitions of V' against ε/λ for the 2D lattice simulation.

From the plots, we first notice that the performance gets better with less noise, as expected. The errors are small for SNR = 20 dB and 40 dB. Most importantly, we see that generally as ε increases, the average recovery error first drops and then increases. The error is small at the minimum. The best performance occurs for ε when the size of the (φ, λ/2 + ε)-admissible partition of V' is relatively small. In such a case, we have a balance between the partition size and the number of nodes in each partition. In addition, the best range of ε is independent of the noise level.

The observation made above gives us a general guideline on how to choose ε. One may first choose ε_1 < ε_2 such that at ε_1 the number of partitions is approximately K + K', while at ε_2 the number of partitions is approximately 1. Then one may perform a binary search within the interval [ε_1, ε_2] to identify a range of ε so that the partition size is appropriate.
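This guideline can be sketched as a simple bisection; here partition_size is a hypothetical callable wrapping the greedy partition routine and is assumed to be non-increasing in ε:

def choose_epsilon(partition_size, eps1, eps2, target, iters=20):
    """Bisect [eps1, eps2] for an eps whose partition size is near the target."""
    lo, hi = eps1, eps2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if partition_size(mid) > target:
            lo = mid     # too many components: increase eps
        else:
            hi = mid     # few enough components: decrease eps
    return hi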

For illustration, FIG.10A and FIG.10B show examples of the recovery of continuous-time folded graph signals. The example heat maps of continuous-time folded graph signals show the original unfolded signals (left), the folded signals (middle) and the recovered signals (right). The horizontal axis is the time domain, while the vertical axis corresponds to the graph vertex domain. FIG.10A shows an example with relatively good recovery while FIG.10B shows one with less perfect recovery. However, even in FIG.10B, the error in the relative difference between adjacent time slots in the recovered signals is still small as compared with the relative difference of the original signal.

Now, we consider a real dataset. The graph G is a weather station network in the US of size 197. The graph for the weather station network is constructed using the k-nearest neighbor algorithm based on the actual geographic locations of the stations. The signals on G are real daily temperatures recorded over the year 2013. We normalize the temperature readings by dividing by 100 so that each signal is within the interval [0, 1]. The folding rate λ is 0.75, and K = 80, K' = 50. We perform recovery on both a 2-day basis and a 6-day basis. The recovery error is computed on the last day. The results are shown in FIG.11.

FIG.11A and FIG.11B show results of using embodiments of the present invention on weather station network data. FIG.11A shows the average total recovery error in folding numbers of the US 2013 temperature dataset against ε/λ and FIG.11B shows the size of the (φ, λ/2 + ε)-admissible partition of V' against ε/λ.

FIG.12A and FIG.12B show example heat maps of the temperature readings of the weather station network data. FIG.12A shows small ε and FIG.12B shows large ε on a 6-day basis: the original unfolded signals (left), the folded signals (middle) and the recovered signals (right). The horizontal axis is the time domain, while the vertical axis corresponds to the graph vertex domain. First of all, we see that the average error is small even in the worst case (6-day basis, error ≈ 3.5%). Moreover, the error is largest on the 6-day basis, as expected. Slightly differently from the synthetic dataset, the error first increases then drops as ε/λ increases. However, we generally have good performance when the number of partitions is about 10% of K + K' in all the cases. From the heat maps shown in FIG.12A and FIG.12B, we see that for large ε, when the number of partitions is small, it is indeed true that the partition algorithm gathers nodes with similar signal values.

In the following, we present simulation results for folded image recovery. As there is no time component, we remove the integral factor in equation (12) when performing folded image recovery. For each image, let P be the total number of pixels. We express parameters such as K and K' as a factor of P for convenience.

FIG.13A, FIG.13B and FIG.13C show a folded image, an image recovered using a method according to an embodiment of the present invention and an image recovered using a variation minimization method respectively.

FIG.14A, FIG.14B and FIG.14C show a folded image, an image recovered using a method according to an embodiment of the present invention and an image recovered using a variation minimization method respectively.

For the examples in FIG.13 and FIG.14, the pixel values are scaled within [0, 1] and the folding rate is taken to be 0.75. FIG.13 has K = 0.2P, K' = 0.05P. FIG.14 has K = 0.3P, K' = 0.05P. FIG.13A and FIG.14A look darkened and noisy as we assign 0 at unobserved pixels, though these are not used in the recovery procedure. Using our approach as described, both images can be recovered perfectly with ε/λ = 12 (as well as with other choices of ε/λ). For comparison, we perform recovery using variation minimization. The results are shown in FIG.13C and FIG.14C, and we see that that approach is not effective in the recovery task.

Next, we consider the recovery of a mouse brain image using a self-reset CMOS sensor. In this experiment, an image sensor was created and surgically implanted onto the brain surface of a mouse. The image sensor is designed with a self-reset circuit to enable it to work with a high dynamic range of light intensity and a high SNR. The reset count of each pixel was recorded so that the original pixel light intensity value can be recovered.

FIG.15A to FIG.15C show images of a mouse brain to illustrate an application of a method according to an embodiment of the present invention. FIG.15A shows an original grayscale image with a highlighted key feature. FIG.15B shows a colored image. FIG.15C shows the folded image of the colored image. We make use of the folded image shown in FIG.15C to recover the original image. Our goal is to show that we can recover the original image without keeping track of the reset counts. The mouse brain surface shown in FIG.15A is obtained using a microscope with white light. The key feature captured by the image is the blood vessel highlighted within the dotted square. The colored version shown in FIG.15B was taken with bandlimit K ≈ 0.57P. The blood vessel in the grayscale image is captured in the dark region towards the right end. With a folding rate λ = 0.7, we obtain the folded image shown in FIG.15C. A large part of the image has non-zero folding numbers: 89%, 83% and 8% for the red, green and blue channels respectively. In the folded image, the blood vessel feature is lost due to the folding effect.

To perform the recovery, we make use of all the pixels, i.e., K + K' = P. In addition, we assume unfolded observations are made at approximately 0.05P pixels. We test extensively with different ε values in forming (φ, λ/2 + ε)-admissible partitions of V'.

FIG.16A to FIG.16H show recovered images from a folded mouse brain surface image. The parameters for FIG.16A to FIG.16H are ε = 14λ, 16λ, ..., 30λ. The sizes of the corresponding (φ, λ/2 + ε)-admissible partitions are: FIG.16A: 0.33P; FIG.16B: 0.34P; FIG.16C: 0.21P; FIG.16D: 0.20P; FIG.16E: 0.14P; FIG.16F: 0.076P; FIG.16G: 0.054P; FIG.16H: 0.032P. The average recovery errors of the RGB channels were as follows: FIG.16A: 36.7%; FIG.16B: 37.0%; FIG.16C: 18.0%; FIG.16D: 11.0%; FIG.16E: 2.61%; FIG.16F: 0.41%; FIG.16G: 0.13%; FIG.16H: 0.74%.

We see that as ε increases, the size of a (φ, λ/2 + ε)-admissible partition decreases, while the recovery performance initially improves. Moreover, when the partition size becomes too small, the performance starts to drop again. For all four recovered images in FIG.16E to FIG.16H, the blood vessel feature can be seen clearly.

For comparison, we perform recovery using the variation minimization method. The results are shown in FIG.17A and FIG.17B. FIG.17A and FIG.17B show recovery of the folded mouse brain surface image using the variation minimization method. FIG.17A shows an image recovered by treating all the sampled pixel values as normal signals, and FIG.17B shows an image recovered by treating folded signals as corrupted signals.

As shown in FIG.17A and FIG.17B the variation minimization method fails to give a good recovery.

As described above, embodiments of the present invention provide a spatio-temporal sampling approach for graph signals while considering a practical scenario of modulo-based sampling using self-reset ADCs for high dynamic range signals. A theoretical graph sampling rate for recovery from folded signals has been provided, which requires only the Nyquist-Shannon rate in the time direction. However, unconditional perfect recovery requires integer programming, which is intractable for large graphs. Under certain smoothness assumptions on the signals, embodiments of the present invention provide a new graph sampling scheme that minimizes certain complexity measures.

Whilst the foregoing description has described exemplary embodiments, it will be understood by those skilled in the art that many variations of the embodiments can be made within the scope and spirit of the present invention.