


Title:
ULTRASONIC IMAGING METHOD AND SYSTEM FOR DETECTING MULTI-LAYERED OBJECT WITH SYNTHETIC APERTURE FOCUSING TECHNIQUE
Document Type and Number:
WIPO Patent Application WO/2014/086322
Kind Code:
A1
Abstract:
The present disclosure provides an ultrasonic imaging method and system for detecting a multi-layered object with a synthetic aperture focusing technique. Based on the SAFT technique, a quick and accurate ultrasonic imaging method for a single-medium object is achieved by equating DAS and all-range dynamic focusing with drawing arcs on the image using an arc scan conversion technique. On this basis, a refraction vector calculation formula is used in combination with the refraction law to calculate a refraction path in a forward pattern, and the pixel points on the refraction path are scan-converted using the Bresenham line scan conversion algorithm.

Inventors:
QIN KAIHUAI (CN)
YANG CHUN (CN)
LI YAZHE (CN)
Application Number:
PCT/CN2013/088889
Publication Date:
June 12, 2014
Filing Date:
December 09, 2013
Assignee:
UNIV TSINGHUA (CN)
International Classes:
G01N29/06
Foreign References:
CN103018333A2013-04-03
CN103033816A2013-04-10
CN102608205A2012-07-25
US20030004415A12003-01-02
Attorney, Agent or Firm:
TSINGYIHUA INTELLECTUAL PROPERTY LLC (Trade Building Zhaolanyuan,Tsinghua University, Qinghuayuan, Haidian District, Beijing 4, CN)
Claims:
WHAT IS CLAIMED IS:

1. A synthetic aperture focusing method in ultrasonic imaging for a multi-layered object, comprising:

step 1: given that a horizontal length of the object in an X-axis direction is X_length, equally divided into X_length/Δx intervals, where Δx is a length of the interval, a number of detecting points for an ultrasonic transducer scanning along the X-axis is M, M = 1 + X_length/Δx, and an index number of each detecting point is m, m = 0, 1, ..., M-1, generating a transistor-transistor logic level pulse at each detecting point to trigger the ultrasonic transducer to transmit a pulse in a depth direction Z of the object perpendicular to the X-axis; receiving and timing echo signals reflected from the object, sampling and storing the echo signals received at each detecting point N times, defining a sampling index number as n, n = 0, 1, ..., N-1, defining a preset value of a sampling frequency as f_s, defining a sampling value sampled at an n-th time and at the m-th detecting point as s_m(n), and defining a sampling time of s_m(n) as t = n/f_s;

step 2: reading in sequence the sampling values s_m(n) at the 0th detecting point from n = 0 to n = N-1, and then reading in sequence the sampling values s_m(n) at each detecting point m = 1, ..., M-1 by repeating the reading process;

step 3: assuming a propagation velocity of an ultrasound in the object v = v_1, v_1 being a propagation velocity of the ultrasound in a first medium layer of the object, and generating an image block in an interval from z_0 = 0 to Z_depth-1 in the depth direction Z on a longitudinal section, where Z_depth is a preset length of a generated image;

step 4: using the image block in the interval from z_0 to Z_depth-1 on the longitudinal section obtained in step 3 as an input value and extracting a boundary c_1(x, z) between the first medium layer and a second medium layer;

step 5: correcting an image block in the interval from the boundary c_1(x, z) to Z_depth-1 on the longitudinal section so as to eliminate an error caused by a heterogeneity between the first medium layer and other medium layers below the first medium layer, comprising:

step 5.1: assuming m = 0, defining the m-th detecting point as U(x_u, 0), wherein x_u = m·Δx/accuracy, accuracy is an image accuracy, and calculating a sector image corresponding to the m-th detecting point U(x_u, 0), comprising:

step 5.1.1: calculating a half-power beam angle of the ultrasonic transducer, where λ is a wavelength of the ultrasound transmitted in the object and D is a diameter of the ultrasonic transducer, and calculating a left intersection point B_l(x_l, z_l) of a left boundary of the half-power beam angle and the boundary c_1(x, z) and a right intersection point B_r(x_r, z_r) of a right boundary of the half-power beam angle and the boundary c_1(x, z), respectively, where c_1(x_l, z_l) = 0 and c_1(x_r, z_r) = 0;

step 5.1.2: assuming a refraction point R(x_R, z_R) = B_l(x_l, z_l) on the boundary c_1(x, z) and calculating a normalized refraction vector RP = eta × RU - [eta × (RU · e) + √a] × e, where eta is a relative refractive index between the two medium layers adjacent to the boundary c_1(x, z), e is a unit normal vector at the refraction point R(x_R, z_R), a = 1.0 - eta × eta × [1.0 - (RU · e) × (RU · e)], "·" represents a vector dot product operation, and RU is a normalized incidence vector from the detecting point U(x_u, 0) to the refraction point R(x_R, z_R);

step 5.1.3: calculating a slope k of the refraction vector RP, calculating all pixels on a refraction path from the refraction point R(x_R, z_R) to a left boundary x = 0, or a right boundary x = (M-1)·Δx/accuracy, or a lower boundary z = Z_depth-1 of the sector image, defining the refraction point R as a current point R_f, i.e. R_f(x_f, z_f) = R(x_R, z_R), and defining the refraction path as R_f E_f, E_f being an end point of R_f E_f in the sector image, i.e. an intersection point of the refraction path R_f E_f and a boundary of the sector image;

step 5.1.4: assuming x_g = x_f + 1, seeking a next pixel R_g(x_g, z_g) adjacent to the current refraction point R_f on the boundary c_1(x, z), where a coordinate of the pixel R_g satisfies c_1(x_g, z_g) = 0, defining R_g as a new refraction point, i.e. R(x_R, z_R) = R_g(x_g, z_g), executing step 5.1.2 and step 5.1.3 to scan-convert a new refraction path R_g E_g, and defining an end point of R_g E_g as E_g;

step 5.1.5: calculating a distance between E_g and E_f, equally dividing a curve segment between the new refraction point R_g and the current refraction point R_f on the boundary c_1(x, z) into Δd segments, i.e. inserting Δd-1 points, an index number of an inserted point being recorded as τ = 1, 2, ..., Δd-1, and repeatedly executing the following step 5.1.5.1 for τ from τ = 1 to τ = Δd-1 with 1 as a step interval:

step 5.1.5.1: calculating a coordinate of a τ-th inserted point R_τ by interpolation, and executing step 5.1.2 and step 5.1.3 to scan-convert a refraction path starting from the inserted point R_τ;

step 5.1.6: assuming R_f = R_g and E_f = E_g, and executing step 5.1.4 and step 5.1.5;

step 5.1.7: executing step 5.1.6 repeatedly until x_g = x_r + 1;

step 5.2: assuming m = 1, ..., M-1 in sequence, executing step 5.1 repeatedly and generating the image block in the interval from the boundary c_1(x, z) to Z_depth-1 on the longitudinal section;

step 6: using the image block in the interval from the boundary c_1(x, z) to Z_depth-1 on the longitudinal section obtained in step 5 as an input value, extracting a boundary c_2(x, z) between the second medium layer and a third medium layer formed between the boundary c_1(x, z) and Z_depth-1, and correcting an image block in an interval from the boundary c_2(x, z) to Z_depth-1 on the longitudinal section so as to eliminate an error caused by a heterogeneity between the second medium layer and other medium layers below the second medium layer; and

step 7: according to the algorithm in step 6, processing in sequence all other boundaries on the longitudinal section to generate an image on the longitudinal section with a width of (M-1)·Δx/accuracy + 1 pixels and a length of Z_depth pixels.

2. The synthetic aperture focusing method according to claim 1, wherein step 3 comprises:

step 3.1: assuming m = 0 and calculating a concentric arc centered at the m-th detecting point U(x_u, 0) on the longitudinal section, comprising:

step 3.1.1: assuming a coordinate value z_u = 0 in the depth direction, calculating a vertical distance r = z_u·accuracy between a coordinate point (x_u, z_u) and a center point (x_u, 0), and then calculating a time t = r/v_1 for the ultrasound to propagate over a distance r in the object and a sampling index number n = 2t·f_s corresponding to the time t;

step 3.1.2: judging whether the sampling value s_m(n) is non-zero; if yes, executing step 3.1.3, and if no, executing step 3.1.4;

step 3.1.3: calculating the half-power beam angle of the ultrasonic transducer, calculating an x-coordinate q of an intersection point of the arc and the right boundary of the half-power beam angle, assuming an initial value of a coordinate point (x_p, z_p) on the arc as (x_u, z_u), calculating an initial value of a discriminant Δ = (x_p + 1 - x_u)² + (z_p - 0.5)² - (z_u)², and executing circularly step 3.1.3.1 to step 3.1.3.3 for x_p with 1 as the step interval until x_p > z_p:

step 3.1.3.1: if Δ < 0, increasing Δ by 2x_p + 3; otherwise, increasing Δ by 2(x_p - z_p) + 5 and decreasing z_p by 1;

step 3.1.3.2: if x_p < q, increasing pixel values of coordinate points (x_p, z_p) and (2x_u - x_p, z_p) by ω(x_u, x_p)·s_m(n)/r, where ω(x_u, x_p) is an apodizing function; otherwise, keeping the pixel values of the coordinate points (x_p, z_p) and (2x_u - x_p, z_p) unchanged;

step 3.1.3.3: if z_p + x_u < q, increasing pixel values of coordinate points (z_p + x_u, x_p - x_u) and (x_u - z_p, x_p - x_u) by ω(x_u, z_p + x_u)·s_m(n)/r; otherwise, keeping the pixel values of the coordinate points (z_p + x_u, x_p - x_u) and (x_u - z_p, x_p - x_u) unchanged;

step 3.1.4: assuming in sequence the coordinate value z_u = 1, 2, ..., Z_depth-1 in the depth direction, and executing step 3.1.1 to step 3.1.3 repeatedly;

step 3.2: assuming in sequence m = 1, ..., M-1 and executing step 3.1 repeatedly to generate the image block in the interval from z_0 = 0 to Z_depth-1 in the depth direction with a width of (M-1)·Δx/accuracy + 1 pixels on the longitudinal section.

3. The synthetic aperture focusing method according to claim 1 or 2, wherein in step 4, the boundary c_1(x, z) between the first medium layer and the second medium layer is extracted by using an edge detection method.

4. The synthetic aperture focusing method according to any one of claims 1-3, wherein in step 6, the boundary c_2(x, z) between the second medium layer and the third medium layer formed between the boundary c_1(x, z) and Z_depth-1 is extracted by using the edge detection method.

5. The synthetic aperture focusing method according to claim 3 or 4, wherein the edge detection method comprises a Canny edge detection algorithm.

6. The synthetic aperture focusing method according to claim 5, wherein the Canny edge detection algorithm comprises:

step (a): obtaining an image I(x, z) by smoothing the image block on the longitudinal section;

step (b): calculating a gradient amplitude G(x, z) and a gradient direction F(x, z) of each pixel of the image I(x, z);

step (c): for each pixel, judging whether the gradient amplitude G(x, z) is less than the gradient amplitudes of the two adjacent pixels in the gradient direction F(x, z), and if yes, concluding that the pixel is not an edge point; and

step (d): for each pixel, concluding that the pixel is the edge point if the gradient amplitude G(x, z) is greater than a preset high threshold; concluding that the pixel is not the edge point if the gradient amplitude G(x, z) is less than a preset low threshold; and, if the gradient amplitude G(x, z) is between the preset high threshold and the preset low threshold, judging whether the gradient amplitude of any of the eight pixels adjacent to the pixel is greater than the preset high threshold, and if yes, concluding that the pixel is the edge point, and if no, concluding that the pixel is not the edge point.

7. The synthetic aperture focusing method according to claim 6, wherein step (a) comprises: smoothing the image block on the longitudinal section by using a 2-dimensional Gaussian function.

8. The synthetic aperture focusing method according to claim 6 or 7, wherein step (b) comprises: calculating a first-order approximation of a partial differential in an X direction and a Z direction with a 2×2 template, obtaining p and q respectively, and then calculating the gradient amplitude G and the gradient direction F by the formulas:

G = sqrt(p × p + q × q), F = arctan(q/p).

9. The synthetic aperture focusing method according to any one of claims 1-8, wherein in step 5.1.3, all the pixels on the refraction path from the refraction point R(x_R, z_R) to the left boundary x = 0, or the right boundary x = (M-1)·Δx/accuracy, or the lower boundary z = Z_depth-1 of the sector image are calculated by using a Bresenham line scan conversion algorithm.

10. A synthetic aperture focusing system in ultrasonic imaging for a multi-layered object, comprising:

a positioning controller;

an analog digital converter;

an ultrasonic transducer, wherein an input end of the ultrasonic transducer is connected with an output end of the positioning controller, the ultrasonic transducer moves with a fixed velocity on a surface of the object under a control of the positioning controller, and an output end of the ultrasonic transducer is connected with an input end of the analog digital converter; given that a horizontal length of the object in an X-axis direction is X_length, equally divided into X_length/Δx intervals, Δx is a length of the interval, a number of detecting points for the ultrasonic transducer on the X-axis is M, M = 1 + X_length/Δx, and an index number of each detecting point is m, m = 0, 1, ..., M-1, a transistor-transistor logic level pulse is generated at each detecting point to trigger the ultrasonic transducer to transmit a pulse in a depth direction Z of the object perpendicular to the X-axis, echo signals reflected from the object are received and timed by the ultrasonic transducer, the echo signals received at an m-th detecting point are sampled N times by the analog digital converter, a sampling index number is defined as n, n = 0, 1, ..., N-1, a preset value of a sampling frequency is defined as f_s, a sampling value sampled at an n-th time and at the m-th detecting point is defined as s_m(n), and a sampling time of s_m(n) is defined as t = n/f_s; and

a calculating device, wherein an input end of the calculating device is connected with an output end of the analog digital converter, and an output end of the calculating device is connected with an input end of the positioning controller, the calculating device being configured to store the sampling value s_m(n) input from the analog digital converter, and to generate an image of the object on a longitudinal section by executing steps of:

step 1: reading in sequence the sampling values s_m(n) at the 0th detecting point from n = 0 to n = N-1, and then reading in sequence the sampling values s_m(n) at detecting points m = 1, ..., M-1 by repeating the reading process;

step 2: assuming a propagation velocity of an ultrasound in the object v = v_1, v_1 being a propagation velocity of the ultrasound in a first medium layer of the object, and generating an image block in an interval from z_0 = 0 to Z_depth-1 in the depth direction Z on a longitudinal section, wherein Z_depth is a preset length of a generated image;

step 3: using the image block in the interval from z_0 to Z_depth-1 on the longitudinal section obtained in step 2 as an input value and extracting a boundary c_1(x, z) between the first medium layer and a second medium layer;

step 4: correcting an image block in the interval from the boundary c_1(x, z) to Z_depth-1 on the longitudinal section so as to eliminate an error caused by a heterogeneity between the first medium layer and other medium layers below the first medium layer, comprising:

step 4.1: assuming m = 0, defining the m-th detecting point as U(x_u, 0), wherein x_u = m·Δx/accuracy, accuracy is an image accuracy, and calculating a sector image corresponding to the m-th detecting point U(x_u, 0), comprising:

step 4.1.1: calculating a half-power beam angle of the ultrasonic transducer, where λ is a wavelength of the ultrasound transmitted in the object and D is a diameter of the ultrasonic transducer, and calculating a left intersection point B_l(x_l, z_l) of a left boundary of the half-power beam angle and the boundary c_1(x, z) and a right intersection point B_r(x_r, z_r) of a right boundary of the half-power beam angle and the boundary c_1(x, z), respectively, where c_1(x_l, z_l) = 0 and c_1(x_r, z_r) = 0;

step 4.1.2: assuming a refraction point R(x_R, z_R) = B_l(x_l, z_l) on the boundary c_1(x, z) and calculating a normalized refraction vector RP = eta × RU - [eta × (RU · e) + √a] × e, where eta is a relative refractive index between the two medium layers adjacent to the boundary c_1(x, z), e is a unit normal vector at the refraction point R(x_R, z_R), a = 1.0 - eta × eta × [1.0 - (RU · e) × (RU · e)], "·" is a vector dot product operation, and RU is a normalized incidence vector from the detecting point U(x_u, 0) to the refraction point R(x_R, z_R);

step 4.1.3: calculating a slope k of the refraction vector RP, calculating all pixels on a refraction path from the refraction point R(x_R, z_R) to a left boundary x = 0, or a right boundary x = (M-1)·Δx/accuracy, or a lower boundary z = Z_depth-1 of the sector image, defining the refraction point R as a current point R_f, i.e. R_f(x_f, z_f) = R(x_R, z_R), and defining the refraction path as R_f E_f, E_f being an end point of R_f E_f in the sector image, i.e. an intersection point of the refraction path R_f E_f and a boundary of the sector image;

step 4.1.4: assuming x_g = x_f + 1, seeking a next pixel R_g(x_g, z_g) adjacent to the current refraction point R_f on the boundary c_1(x, z), where a coordinate of the pixel R_g satisfies c_1(x_g, z_g) = 0, defining R_g as a new refraction point, i.e. R(x_R, z_R) = R_g(x_g, z_g), executing step 4.1.2 and step 4.1.3 to scan-convert a new refraction path R_g E_g, and defining an end point of R_g E_g as E_g;

step 4.1.5: calculating a distance between E_g and E_f, equally dividing a curve segment between the new refraction point R_g and the current refraction point R_f on the boundary c_1(x, z) into Δd segments, i.e. inserting Δd-1 points, an index number of an inserted point being recorded as τ = 1, 2, ..., Δd-1, and repeatedly executing the following step 4.1.5.1 for τ from τ = 1 to τ = Δd-1 with 1 as a step interval:

step 4.1.5.1: calculating a coordinate of a τ-th inserted point R_τ by interpolation, and executing step 4.1.2 and step 4.1.3 to scan-convert a refraction path starting from the inserted point R_τ;

step 4.1.6: assuming R_f = R_g and E_f = E_g, and executing step 4.1.4 and step 4.1.5;

step 4.1.7: executing step 4.1.6 repeatedly until x_g = x_r + 1;

step 4.2: assuming m = 1, ..., M-1 in sequence, executing step 4.1 repeatedly and generating the image block in the interval from the boundary c_1(x, z) to Z_depth-1 on the longitudinal section;

step 5: using the image block in the interval from the boundary c_1(x, z) to Z_depth-1 on the longitudinal section obtained in step 4 as an input value, extracting a boundary c_2(x, z) between the second medium layer and a third medium layer formed between the boundary c_1(x, z) and Z_depth-1, and correcting an image block in an interval from the boundary c_2(x, z) to Z_depth-1 on the longitudinal section so as to eliminate an error caused by a heterogeneity between the second medium layer and other medium layers below the second medium layer; and

step 6: according to the algorithm in step 5, processing in sequence all other boundaries on the longitudinal section to generate an image on the longitudinal section with a width of (M-1)·Δx/accuracy + 1 pixels and a length of Z_depth pixels.

11. The synthetic aperture focusing system according to claim 10, wherein in step 3, the boundary c_1(x, z) between the first medium layer and the second medium layer is extracted by using an edge detection method.

12. The synthetic aperture focusing system according to claim 10 or 11, wherein in step 5, the boundary c_2(x, z) between the second medium layer and the third medium layer formed between the boundary c_1(x, z) and Z_depth-1 is extracted by using the edge detection method.

13. The synthetic aperture focusing system according to claim 11 or 12, wherein the edge detection method comprises a Canny edge detection algorithm.

14. The synthetic aperture focusing system according to claim 13, wherein the Canny edge detection algorithm comprises:

step (a): obtaining an image I(x, z) by smoothing the image block on the longitudinal section;

step (b): calculating a gradient amplitude G(x, z) and a gradient direction F(x, z) of each pixel of the image I(x, z);

step (c): for each pixel, judging whether the gradient amplitude G(x, z) is less than the gradient amplitudes of the two adjacent pixels in the gradient direction F(x, z), and if yes, concluding that the pixel is not an edge point; and

step (d): for each pixel, concluding that the pixel is the edge point if the gradient amplitude G(x, z) is greater than a preset high threshold; concluding that the pixel is not the edge point if the gradient amplitude G(x, z) is less than a preset low threshold; and, if the gradient amplitude G(x, z) is between the preset high threshold and the preset low threshold, judging whether the gradient amplitude of any of the eight pixels adjacent to the pixel is greater than the preset high threshold, and if yes, concluding that the pixel is the edge point, and if no, concluding that the pixel is not the edge point.

15. The synthetic aperture focusing system according to claim 14, wherein step (a) comprises: smoothing the image block on the longitudinal section by using a 2-dimensional Gaussian function.

16. The synthetic aperture focusing system according to claim 14 or 15, wherein step (b) comprises: calculating a first-order approximation of a partial differential in an X direction and a Z direction with a 2×2 template, obtaining p and q respectively, and then calculating the gradient amplitude G and the gradient direction F by the formulas:

G = sqrt(p × p + q × q), F = arctan(q/p).

17. The synthetic aperture focusing system according to any one of claims 10-16, wherein in step 4.1.3, all the pixels on the refraction path from the refraction point R(x_R, z_R) to the left boundary x = 0, or the right boundary x = (M-1)·Δx/accuracy, or the lower boundary z = Z_depth-1 of the sector image are calculated by using a Bresenham line scan conversion algorithm.

Description:
ULTRASONIC IMAGING METHOD AND SYSTEM FOR DETECTING MULTI-LAYERED OBJECT WITH SYNTHETIC APERTURE FOCUSING TECHNIQUE

FIELD

The present disclosure relates to ultrasonic nondestructive detection, and more particularly to an ultrasonic imaging method and system for detecting a multi-layered object with a synthetic aperture focusing technique.

BACKGROUND

Currently, in the field of ultrasonic nondestructive testing, two solutions are primarily adopted in ultrasonic imaging for a multi-layered object containing interfaces, i.e., a synthetic aperture focusing technique (SAFT) in combination with ray tracing, and a phase shift migration ultrasonic imaging technique. It should be noted that ultrasonic detecting can be done in two modes, i.e. contact testing and immersion ultrasonic testing. Immersion testing can also be considered as contact testing of a multi-layered object if the coupling liquid is regarded as a part of the immersed object. In the present disclosure, the two modes are uniformly named as ultrasonic detecting and imaging for the multi-layered object.

Derived from the synthetic aperture radar (SAR) technique, SAFT was introduced to the field of ultrasonic imaging in the 1970s due to its advantages of breaking the restriction of the near-field region, high azimuth resolution, and a resolution that depends only on the size of the ultrasonic transducer rather than on the distance. The principle of SAFT is, based on a pulse-echo measurement mechanism, using the ultrasonic transducer to scan an object along a fixed track in an orderly manner to obtain echo signals, and constructing a focusing image from the pulse-echo signals via a delay-and-sum (DAS) method, so as to achieve the aim of using a single ultrasonic transducer with a relatively small aperture to simulate a relatively large aperture array. A working model of the SAFT ultrasonic imaging is shown in Fig. 1(a), comprising: the ultrasonic transducer moving along a scanning direction (X) on a surface of the object with equal intervals, transmitting an ultrasonic signal at each scanning position into the object, simultaneously receiving, sampling and storing echo signals reflected back by reflectors inside the object, processing the echo signals obtained at all scanning positions via DAS and the all-range dynamic focusing technique, and displaying an image.

DAS and the all-range dynamic focusing technique require calculating different delay curves for each focus point in an imaging region. As shown in Fig. 1(a), in order to focus at a target point (x, z), DAS is performed on the echo signals received at each scanning position within an effective length L of a synthetic aperture. Specifically, assuming an echo signal received at u_i by the ultrasonic transducer is s_i(t), where t is a sampling time, a delay time at u_i related to the target point (x, z) is calculated by the formula:

t_i = 2r_i/v = (2/v)·sqrt(z² + (x - u_i)²), i = 0, 1, ..., L-1, (1)

where v is a propagation velocity of an ultrasound in a medium, and r_i is a distance between the target point (x, z) and u_i. All delays within the effective length L of the synthetic aperture form a delay curve, which is a hyperbola. L is calculated by the formula:

L = 0.84·λ·z/D, (2)

where λ is a wavelength of the ultrasound in the medium, and D is a diameter of the ultrasonic transducer. Thus the image at the target point (x, z) is calculated by the formula:

I(x, z) = Σ_i ω_i·s_i(t_i), i = 0, 1, ..., L-1, (3)

where ω_i is an apodizing function.
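For concreteness, the DAS focusing described by formulas (1)-(3) can be sketched in Python as follows. This is a minimal illustration, not the patented method: the array layout s[i, n], the cosine apodizing window, and the use of the reconstructed formula (2) for the aperture length are all assumptions of the sketch.

```python
import numpy as np

def das_saft_image(s, u, v, fs, wavelength, D, x_grid, z_grid):
    """Delay-and-sum SAFT for a single-medium object, after formulas (1)-(3).

    s[i, n] : sample n of the echo recorded at scanning position u[i]
    v       : propagation velocity of the ultrasound in the medium
    fs      : sampling frequency
    """
    image = np.zeros((len(z_grid), len(x_grid)))
    for iz, z in enumerate(z_grid):
        L = 0.84 * wavelength * z / D            # effective aperture length, formula (2)
        if L <= 0.0:
            continue
        for ix, x in enumerate(x_grid):
            for i, ui in enumerate(u):           # DAS over the synthetic aperture
                if abs(x - ui) > L / 2.0:
                    continue                     # scanning position outside the aperture
                r = np.hypot(z, x - ui)          # straight-ray distance to the target point
                n = int(round(2.0 * r / v * fs)) # round-trip delay of formula (1), in samples
                if n < s.shape[1]:
                    w = 0.5 + 0.5 * np.cos(np.pi * (x - ui) / (L / 2.0))  # apodizing weight
                    image[iz, ix] += w * s[i, n] # delay-and-sum of formula (3)
    return image
```

The triple loop makes the square root of formula (1) the dominant cost; this is exactly the computation that the arc scan conversion technique of the present disclosure avoids.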

It is known from the principle of DAS that the delay time is determined according to a sound velocity and a distance between the ultrasonic transducer and the reflector. However, for the multi-layered object, as shown in Fig. 1(b), the ultrasonic signal transmitted from the ultrasonic transducer is refracted at an interface, so the sound propagation path changes and the distance between the reflector (x, z) and u_i is no longer the length of the straight line between the two points. Moreover, as the media of the layers in the multi-layered object are different from each other, the propagation velocities of the ultrasound are not equal in the different layers. Thus the delay time can no longer be simply calculated by dividing the relative distance by the sound velocity, and the delay curve is no longer a regular hyperbola. To calculate the delay, a propagation path of the ultrasound between u_i and the target reflector (x, z) is calculated first, and then a length and a propagation time of each path piece (r_i1 and r_i2, as shown in Fig. 1(b)) are calculated piecewise. The key point lies in a method for quickly and accurately obtaining the propagation path of the ultrasound between u_i and the target reflector (x, z). Ray tracing is a popular method to realize this aim. Therefore, ray tracing is naturally combined with SAFT to image the multi-layered object.

The principle of ray tracing is mainly based on Snell's law, obtaining the propagation path of the ultrasound with the least propagation time by iteration. Thus, as shown in Fig. 1(d), for a method that combines SAFT with ray tracing, when the ultrasound propagation delay t_i at u_i relative to the reflector (x, z) is calculated, it is required to perform the iteration over all points on a boundary c(x, z) = 0, calculating the lengths and propagation times of the two path pieces r_i1 and r_i2, so as to find the point with the least propagation time; this point is the refraction point.
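A brute-force version of this minimum-time search can be sketched as follows. The discretization of the boundary into candidate points and all names and parameters are assumptions of this illustration, chosen only to show why two square-root evaluations are needed per candidate point.

```python
import numpy as np

def min_time_refraction_point(u, target, boundary_pts, v1, v2):
    """Test every candidate point on the boundary c(x, z) = 0 and keep the one
    giving the least propagation time (Fermat/Snell), as in the conventional
    SAFT + ray tracing approach.

    u            : (x, z) of the scanning position
    target       : (x, z) of the reflector
    boundary_pts : iterable of (x, z) points sampled on the interface
    v1, v2       : sound velocities above and below the interface
    """
    best_pt, best_t = None, np.inf
    for p in boundary_pts:
        r1 = np.hypot(p[0] - u[0], p[1] - u[1])            # path piece in layer 1 (one RMS)
        r2 = np.hypot(target[0] - p[0], target[1] - p[1])  # path piece in layer 2 (another RMS)
        t = r1 / v1 + r2 / v2                              # total propagation time
        if t < best_t:
            best_t, best_pt = t, p
    return best_pt, best_t
```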

An advantage of this method is that there is no special requirement on the interface profile of the multi-layered object. That is to say, this method is applicable not only to the ultrasonic imaging of a regular multi-layered object with heterogeneous media in the depth direction and a homogeneous medium in the horizontal direction (that is, interfaces between medium layers are horizontal, or parallel to each other, or the like), but also to the ultrasonic imaging of an irregular multi-layered object with heterogeneous media in both the depth and the horizontal directions, i.e. containing irregular interfaces (such as a curved surface). However, a drawback of this method is that the ray tracing includes an iterative calculation with high time complexity. Besides, in the conventional SAFT, the formula (1) for calculating the delay includes one root-mean-square (RMS) calculation, which results in a large amount of computation. After ray tracing is introduced to SAFT, the amount of computation for the delay is even greater because of multiple RMS calculations. As shown in Fig. 1(d), when the ultrasound propagation delay at u_i relative to the reflector (x, z) is calculated, the RMS is calculated twice for each point on the boundary c(x, z) = 0 to obtain the lengths of the two path pieces r_i1 and r_i2. Therefore, this method is quite time-consuming. As shown in Fig. 6(b), it takes 30 minutes to image the object (4.4 cm × 4.9 cm) shown in Fig. 6(a) with this method. In addition, as the wavelength of the ultrasound depends on the medium, when the medium in the object is multi-layered, the formula (2) for calculating the effective length L of the synthetic aperture is no longer applicable. However, the imaging accuracy of SAFT is closely related to the accuracy of L: if L is too large, there is much noise in the image, resulting in a low signal-to-noise ratio; and if L is too small, details of the object may be lost, reducing the accuracy. For the complicated and irregular multi-layered object, there is currently no formula for accurately calculating L. It is difficult to figure out an accurate L in the method that combines SAFT with ray tracing, which leads to a poor imaging effect. For example, as shown in Fig. 6(b), a curved surface in the second interface layer is not well reconstructed in the longitudinal section image of the object with this method.

The phase shift migration technique is a frequency domain ultrasonic imaging method obtained by introducing a migration technique from reflection seismology into SAFT. In this method, the ultrasonic detecting model is regarded as an exploding reflector model. Given that reflectors in the object explode simultaneously at time t = 0, and each reflector has a strength that is proportional to its reflectivity, a field that is simultaneously measured using an array of sensors is created. The primary idea of this method is, by extrapolating the sound field observed at a horizontal position (i.e. the first line in the depth direction), to calculate the sound fields at other positions in the depth direction. The algorithm specifically includes two main steps: step 1, conducting a two-dimensional Fourier transform of the time domain data to obtain a two-dimensional spectrum,

S(k, ω) = 2D-FT(s(u_i, t));

and step 2, extrapolating in the depth direction, comprising: phase shifting the two-dimensional spectrum obtained from the last image line, then conducting a two-dimensional inverse Fourier transform and setting t = 0 to obtain a new line of the image,

s(u_i, t = 0) = 2D-IFT(S(k, ω)·α(Δz, k, ω)),

where α(Δz, k, ω) = exp(j·Δz·sqrt(ω²/c² - k²)) is a phase shift factor corresponding to a step interval Δz in the depth direction, and c, with a constant value, is the propagation velocity of the ultrasound in the medium.

For detecting the multi-layered object, the reconstruction speed is greatly improved with the phase shift migration method, as no ray path needs to be calculated. For example, it takes only 62 s to generate the image shown in Fig. 6(c). However, the ultrasound velocity is assumed to be constant, which makes the method applicable only to detecting a regular multi-layered object containing heterogeneous media in the depth direction but a homogeneous medium in the horizontal direction. For an object containing irregular interfaces (i.e. containing heterogeneous media in both the depth and the horizontal directions), detecting results are inaccurate with this method. As shown in Fig. 6(c), a serious error occurs in the image of the second interface.

Therefore, there is currently still a lack of a quick, accurate and effective ultrasonic imaging method for the multi-layered object containing irregular interfaces.

SUMMARY

The present disclosure aims to provide an ultrasonic imaging method and system for detecting a multi-layered object with a synthetic aperture focusing technique, so as to quickly reconstruct an accurate image of the surface profile and the internal structure of a regular multi-layered object containing horizontal or parallel interfaces, or of an irregular multi-layered object containing non-horizontal and non-parallel interfaces.

According to one aspect of an embodiment, the ultrasonic imaging method with synthetic aperture focusing technique for the multi-layered object is provided, comprising:

step 1: given that a horizontal length of the object in an X-axis direction is X_length, equally divided into X_length/Δx intervals, where Δx is a length of the interval, a number of detecting points for an ultrasonic transducer scanning along the X-axis is M, M = 1 + X_length/Δx, and an index number of each detecting point is m, m = 0, 1, ..., M-1, generating a transistor-transistor logic level pulse at each detecting point to trigger the ultrasonic transducer to transmit a pulse in a depth direction Z of the object perpendicular to the X-axis; receiving and timing echo signals reflected from the object, sampling and storing the echo signals received at each detecting point N times, defining a sampling index number as n, n = 0, 1, ..., N-1, defining a preset value of a sampling frequency as f_s, defining a sampling value sampled at an n-th time and at the m-th detecting point as s_m(n), and defining a sampling time of s_m(n) as t = n/f_s;

step 2: reading in sequence the sampling values s_m(n) at the 0th detecting point from n = 0 to n = N-1, and then reading in sequence the sampling values s_m(n) at each detecting point m = 1, ..., M-1 by repeating the reading process;

step 3: assuming a propagation velocity of an ultrasound in the object v = v_1, v_1 being a propagation velocity of the ultrasound in a first medium layer of the object, and generating an image block in an interval from z_0 = 0 to Z_depth-1 in the depth direction Z on a longitudinal section, where Z_depth is a preset length of a generated image;

step 4: using the image block in the interval from z_0 to Z_depth-1 on the longitudinal section obtained in step 3 as an input value and extracting a boundary c_1(x, z) between the first medium layer and a second medium layer;

step 5: correcting an image block in the interval from the boundary c_1(x, z) to Z_depth-1 on the longitudinal section so as to eliminate an error caused by a heterogeneity between the first medium layer and other medium layers below the first medium layer, comprising:

step 5.1: assuming m = 0, defining the m-th detecting point as U(x_u, 0), wherein x_u = m·Δx/accuracy, accuracy is an image accuracy, and calculating a sector image corresponding to the m-th detecting point U(x_u, 0), comprising:

step 5.1.1: calculating a half-power beam angle of the ultrasonic transducer, where λ is a wavelength of the ultrasound transmitted in the object and D is a diameter of the ultrasonic transducer, and calculating a left intersection point B_l(x_l, z_l) of a left boundary of the half-power beam angle and the boundary c_1(x, z) and a right intersection point B_r(x_r, z_r) of a right boundary of the half-power beam angle and the boundary c_1(x, z), respectively, where c_1(x_l, z_l) = 0 and c_1(x_r, z_r) = 0;

step 5.1.2: assuming a refraction point R(x_R, z_R) = B_l(x_l, z_l) on the boundary c_1(x, z) and calculating a normalized refraction vector RP = eta × RU - [eta × (RU · e) + √a] × e, where eta is a relative refractive index between the two medium layers adjacent to the boundary c_1(x, z), e is a unit normal vector at the refraction point R(x_R, z_R), a = 1.0 - eta × eta × [1.0 - (RU · e) × (RU · e)], "·" represents a vector dot product operation, and RU is a normalized incidence vector from the detecting point U(x_u, 0) to the refraction point R(x_R, z_R) (see the illustrative sketch following step 7 below);

step 5.1.3: calculating a slope k of the refraction vector RP, calculating all pixels on a refraction path from the refraction point R(x_R, z_R) to a left boundary x = 0, or a right boundary x = (M-1)·Δx/accuracy, or a lower boundary z = Z_depth-1 of the sector image, defining the refraction point R as a current point R_f, i.e. R_f(x_f, z_f) = R(x_R, z_R), and defining the refraction path as R_f E_f, E_f being an end point of R_f E_f in the sector image, i.e. an intersection point of the refraction path R_f E_f and a boundary of the sector image;

step 5.1.4: assuming x_g = x_f + 1, seeking a next pixel R_g(x_g, z_g) adjacent to the current refraction point R_f on the boundary c_1(x, z), where a coordinate of the pixel R_g satisfies c_1(x_g, z_g) = 0, defining R_g as a new refraction point, i.e. R(x_R, z_R) = R_g(x_g, z_g), executing step 5.1.2 and step 5.1.3 to scan-convert a new refraction path R_g E_g, and defining an end point of R_g E_g as E_g;

step 5.1.5: calculating a distance between E_g and E_f, equally dividing a curve segment between the new refraction point R_g and the current refraction point R_f on the boundary c_1(x, z) into Δd segments, i.e. inserting Δd-1 points, an index number of an inserted point being recorded as τ = 1, 2, ..., Δd-1, and repeatedly executing the following step 5.1.5.1 for τ from τ = 1 to τ = Δd-1 with 1 as a step interval:

step 5.1.5.1: calculating a coordinate of a τ-th inserted point R_τ by interpolation, and executing step 5.1.2 and step 5.1.3 to scan-convert a refraction path starting from the inserted point R_τ;

step 5.1.6: assuming R_f = R_g and E_f = E_g, and executing step 5.1.4 and step 5.1.5;

step 5.1.7: executing step 5.1.6 repeatedly until x_g = x_r + 1;

step 5.2: assuming m = 1, ..., M-1 in sequence, executing step 5.1 repeatedly and generating the image block in the interval from the boundary c_1(x, z) to Z_depth-1 on the longitudinal section;

step 6: using the image block in the interval from the boundary c_1(x, z) to Z_depth-1 on the longitudinal section obtained in step 5 as an input value, extracting a boundary c_2(x, z) between the second medium layer and a third medium layer formed between the boundary c_1(x, z) and Z_depth-1, and correcting an image block in an interval from the boundary c_2(x, z) to Z_depth-1 on the longitudinal section so as to eliminate an error caused by a heterogeneity between the second medium layer and other medium layers below the second medium layer; and

step 7: according to the algorithm in step 6, processing in sequence all other boundaries on the longitudinal section to generate an image on the longitudinal section with a width of (M-1)·Δx/accuracy + 1 pixels and a length of Z_depth pixels.
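As flagged in step 5.1.2, the refraction vector formula can be written directly as code. The sketch below implements the formula exactly as stated in the text; the handling of total internal reflection (a < 0) and the orientation convention for the normal vector e are assumptions of the sketch.

```python
import numpy as np

def refraction_vector(RU, e, eta):
    """Vector form of the refraction law used in step 5.1.2.

    RU  : normalized incidence vector from the detecting point U to the
          refraction point R
    e   : unit normal vector at the refraction point
    eta : relative refractive index across the boundary

    Returns the normalized refraction vector
        RP = eta*RU - (eta*(RU.e) + sqrt(a))*e,
    with a = 1.0 - eta*eta*(1.0 - (RU.e)**2), or None when a < 0
    (total internal reflection, no refracted ray).
    """
    c = np.dot(RU, e)                    # cosine term RU . e
    a = 1.0 - eta * eta * (1.0 - c * c)
    if a < 0.0:
        return None                      # total internal reflection (assumption of this sketch)
    return eta * RU - (eta * c + np.sqrt(a)) * e
```

Because the refracted direction is obtained in closed form, the forward pattern of steps 5.1.2-5.1.3 needs no iterative minimum-time search at the interface.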

In one embodiment of the present disclosure, step 3 comprises:

step 3.1: assuming m = 0 and calculating a concentric arc centered at the m-th detecting point U(x_u, 0) on the longitudinal section, comprising:

step 3.1.1: assuming a coordinate value z_u = 0 in the depth direction, calculating a vertical distance r = z_u·accuracy between a coordinate point (x_u, z_u) and a center point (x_u, 0), and then calculating a time t = r/v_1 for the ultrasound to propagate over a distance r in the object and a sampling index number n = 2t·f_s corresponding to the time t;

step 3.1.2: judging whether the sampling value s_m(n) is non-zero; if yes, executing step 3.1.3, and if no, executing step 3.1.4;

step 3.1.3: calculating the half-power beam angle of the ultrasonic transducer, calculating an x-coordinate q of an intersection point of the arc and the right boundary of the half-power beam angle, assuming an initial value of a coordinate point (x_p, z_p) on the arc as (x_u, z_u), calculating an initial value of a discriminant Δ = (x_p + 1 - x_u)² + (z_p - 0.5)² - (z_u)², and executing circularly step 3.1.3.1 to step 3.1.3.3 for x_p with 1 as the step interval until x_p > z_p:

step 3.1.3.1: if Δ < 0, increasing Δ by 2x_p + 3; otherwise, increasing Δ by 2(x_p - z_p) + 5 and decreasing z_p by 1;

step 3.1.3.2: if x_p < q, increasing pixel values of coordinate points (x_p, z_p) and (2x_u - x_p, z_p) by ω(x_u, x_p)·s_m(n)/r, where ω(x_u, x_p) is an apodizing function; otherwise, keeping the pixel values of the coordinate points (x_p, z_p) and (2x_u - x_p, z_p) unchanged;

step 3.1.3.3: if z_p + x_u < q, increasing pixel values of coordinate points (z_p + x_u, x_p - x_u) and (x_u - z_p, x_p - x_u) by ω(x_u, z_p + x_u)·s_m(n)/r; otherwise, keeping the pixel values of the coordinate points (z_p + x_u, x_p - x_u) and (x_u - z_p, x_p - x_u) unchanged;

step 3.1.4: assuming in sequence the coordinate value z_u = 1, 2, ..., Z_depth-1 in the depth direction, and executing step 3.1.1 to step 3.1.3 repeatedly;

step 3.2: assuming in sequence m = 1, ..., M-1 and executing step 3.1 repeatedly to generate the image block in the interval from z_0 = 0 to Z_depth-1 in the depth direction with a width of (M-1)·Δx/accuracy + 1 pixels on the longitudinal section.

In step 3, the SAFT ultrasonic imaging is achieved by drawing arcs for the data sampled at each scanning position of the ultrasonic transducer, using an arc scan conversion technique. This method proceeds in a forward pattern without calculating the distance between the imaging point and each scanning position of the ultrasonic transducer, thus avoiding the RMS calculation and saving a large amount of computation.
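A sketch of this arc drawing is given below. It follows the midpoint-circle update rules of steps 3.1.3.1-3.1.3.3, with the pixel offsets taken relative to the arc center (x_u, 0); the image[z, x] indexing, the bounds checks, and the apodizing callable omega are assumptions of the sketch.

```python
def draw_arc(image, xu, zu, q, value, omega):
    """Rasterize the arc of radius zu centered at (xu, 0), accumulating the
    weighted sample `value` (= s_m(n)/r) into every arc pixel left of the
    beam boundary x = q, as in step 3.1.3.
    """
    x, z = 0, zu                       # offsets relative to the center (xu, 0)
    d = 1 + (zu - 0.5) ** 2 - zu ** 2  # initial midpoint discriminant of step 3.1.3
    while x <= z:
        if xu + x < q:                 # step 3.1.3.2: pixels symmetric about x = xu
            for px in (xu + x, xu - x):
                if 0 <= px < image.shape[1] and 0 <= z < image.shape[0]:
                    image[z, px] += omega(xu, xu + x) * value
        if z + xu < q:                 # step 3.1.3.3: octant-swapped pixels
            for px in (xu + z, xu - z):
                if 0 <= px < image.shape[1] and 0 <= x < image.shape[0]:
                    image[x, px] += omega(xu, z + xu) * value
        if d < 0:                      # step 3.1.3.1: midpoint decision update
            d += 2 * x + 3
        else:
            d += 2 * (x - z) + 5
            z -= 1
        x += 1
```

Each non-zero sample s_m(n) thus touches only the pixels on its own arc, using integer additions instead of one square root per imaging point.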

In one embodiment of the present disclosure, in step 4, the boundary c_1(x, z) between the first medium layer and the second medium layer is extracted by using an edge detection method.

In one embodiment of the present disclosure, in step 6, the boundary c_2(x, z) between the second medium layer and the third medium layer formed between the boundary c_1(x, z) and Z_depth-1 is extracted by using the edge detection method.

In one embodiment of the present disclosure, the edge detection method comprises a Canny edge detection algorithm.

In one embodiment of the present disclosure, the Canny edge detection algorithm comprises:

step (a): obtaining an image I(x, z) by smoothing the image block on the longitudinal section;

step (b): calculating a gradient amplitude G(x, z) and a gradient direction F(x, z) of each pixel of the image I(x, z);

step (c): for each pixel, judging whether the gradient amplitude G(x, z) is less than the gradient amplitudes of the two adjacent pixels in the gradient direction F(x, z), and if yes, concluding that the pixel is not an edge point; and

step (d): for each pixel, concluding that the pixel is the edge point if the gradient amplitude G(x, z) is greater than a preset high threshold; concluding that the pixel is not the edge point if the gradient amplitude G(x, z) is less than a preset low threshold; and, if the gradient amplitude G(x, z) is between the preset high threshold and the preset low threshold, judging whether the gradient amplitude of any of the eight pixels adjacent to the pixel is greater than the preset high threshold, and if yes, concluding that the pixel is the edge point, and if no, concluding that the pixel is not the edge point.

In one embodiment of the present disclosure, step (a) comprises: smoothing the image block on the longitudinal section by using a 2-dimensional Gaussian function.

In one embodiment of the present disclosure, step (b) comprises: calculating first-order approximations p and q of the partial differentials in an X direction and a Z direction with a 2×2 template, and then calculating the gradient amplitude G and the gradient direction F by the formulas:

G = sqrt(p × p + q × q), F = arctan(q/p).
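A sketch of step (b) in Python is given below. The exact 2×2 template is not reproduced in the text, so the first-difference template used here (the one conventional in Canny's algorithm) and the axis convention I[x, z] are assumptions of the sketch.

```python
import numpy as np

def gradient_2x2(I):
    """Gradient amplitude G and direction F of step (b) for an image I[x, z]."""
    # first-order approximations of the partial derivatives, each averaged
    # over a 2x2 neighbourhood (standard Canny 2x2 template, assumed here)
    p = 0.5 * (I[1:, :-1] - I[:-1, :-1] + I[1:, 1:] - I[:-1, 1:])   # d/dx
    q = 0.5 * (I[:-1, 1:] - I[:-1, :-1] + I[1:, 1:] - I[1:, :-1])   # d/dz
    G = np.sqrt(p * p + q * q)          # gradient amplitude
    F = np.arctan2(q, p)                # quadrant-aware form of arctan(q/p)
    return G, F
```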

In one embodiment of the present disclosure, in step 5.1.3, all the pixels on the refraction path from the refraction point R(x_R, z_R) to the left boundary x = 0, or the right boundary x = (M-1)·Δx/accuracy, or the lower boundary z = Z_depth-1 of the sector image are calculated by using a Bresenham line scan conversion algorithm.
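For reference, a generic Bresenham line scan conversion (not the patent's exact variant) can be sketched as follows; it enumerates the integer pixels of a straight refraction path between two endpoints using only integer additions.

```python
def bresenham_line(x0, z0, x1, z1):
    """Enumerate the integer pixels on the line segment from (x0, z0) to (x1, z1)."""
    pixels = []
    dx, dz = abs(x1 - x0), abs(z1 - z0)
    sx = 1 if x0 < x1 else -1          # step direction along x
    sz = 1 if z0 < z1 else -1          # step direction along z
    err = dx - dz                      # decision variable (doubled-error form)
    while True:
        pixels.append((x0, z0))
        if x0 == x1 and z0 == z1:
            break
        e2 = 2 * err
        if e2 > -dz:                   # error favors a step in x
            err -= dz
            x0 += sx
        if e2 < dx:                    # error favors a step in z
            err += dx
            z0 += sz
    return pixels
```

In the method of the present disclosure, the endpoint would be the intersection of the refraction direction with the sector image boundary rather than an arbitrary second point.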

According to another aspect of an embodiment of the present disclosure, the ultrasonic imaging system for detecting the multi-layered object with synthetic aperture focusing technique is provided, comprising: a positioning controller; an analog digital converter; an ultrasonic transducer, wherein an input end of the ultrasonic transducer is connected with an output end of the positioning controller, the ultrasonic transducer moves with a fixed velocity on a surface of the object under a control of the positioning controller, and an output end of the ultrasonic transducer is connected with an input end of the analog digital converter; given that a horizontal length of the object in an X-axis direction is X_length, equally divided into X_length/Δx intervals, Δx is a length of the interval, a number of detecting points for the ultrasonic transducer on the X-axis is M, M = 1 + X_length/Δx, and an index number of each detecting point is m, m = 0, 1, ..., M-1, a transistor-transistor logic level pulse is generated at each detecting point to trigger the ultrasonic transducer to transmit a pulse in a depth direction Z of the object perpendicular to the X-axis, echo signals reflected from the object are received and timed by the ultrasonic transducer, the echo signals received at an m-th detecting point are sampled N times by the analog digital converter, a sampling index number is defined as n, n = 0, 1, ..., N-1, a preset value of a sampling frequency is defined as f_s, a sampling value sampled at an n-th time and at the m-th detecting point is defined as s_m(n), and a sampling time of s_m(n) is defined as t = n/f_s; and a calculating device, wherein an input end of the calculating device is connected with an output end of the analog digital converter, and an output end of the calculating device is connected with an input end of the positioning controller, the calculating device being configured to store the sampling value s_m(n) input from the analog digital converter, and to generate an image of the object on a longitudinal section by executing steps of:

step 1: reading in sequence the sampling values s_m(n) at the 0th detecting point from n = 0 to n = N-1, and then reading in sequence the sampling values s_m(n) at detecting points m = 1, ..., M-1 by repeating the reading process;

step 2: assuming a propagation velocity of an ultrasound in the object v = v_1, v_1 being a propagation velocity of the ultrasound in a first medium layer of the object, and generating an image block in an interval from z_0 = 0 to Z_depth-1 in the depth direction Z on a longitudinal section, wherein Z_depth is a preset length of a generated image;

step 3: using the image block in the interval from z_0 to Z_depth-1 on the longitudinal section obtained in step 2 as an input value and extracting a boundary c_1(x, z) between the first medium layer and a second medium layer;

step 4: correcting an image block in the interval from the boundary c_1(x, z) to Z_depth-1 on the longitudinal section so as to eliminate an error caused by a heterogeneity between the first medium layer and other medium layers below the first medium layer, comprising:

step 4.1: assuming m = 0, defining the m-th detecting point as U(x_u, 0), wherein x_u = m·Δx/accuracy, accuracy is an image accuracy, and calculating a sector image corresponding to the m-th detecting point U(x_u, 0), comprising:

step 4.1.1: calculating a half-power beam angle of the ultrasonic transducer, where λ is a wavelength of the ultrasound transmitted in the object and D is a diameter of the ultrasonic transducer, and calculating a left intersection point B_l(x_l, z_l) of a left boundary of the half-power beam angle and the boundary c_1(x, z) and a right intersection point B_r(x_r, z_r) of a right boundary of the half-power beam angle and the boundary c_1(x, z), respectively, where c_1(x_l, z_l) = 0 and c_1(x_r, z_r) = 0;

step 4.1.2: assuming a refraction point R(x_R, z_R) = B_l(x_l, z_l) on the boundary c_1(x, z) and calculating a normalized refraction vector RP = eta × RU - [eta × (RU · e) + √a] × e, where eta is a relative refractive index between the two medium layers adjacent to the boundary c_1(x, z), e is a unit normal vector at the refraction point R(x_R, z_R), a = 1.0 - eta × eta × [1.0 - (RU · e) × (RU · e)], "·" is a vector dot product operation, and RU is a normalized incidence vector from the detecting point U(x_u, 0) to the refraction point R(x_R, z_R);

step 4.1.3: calculating a slope k of the refraction vector RP, calculating all pixels on a refraction path from the refraction point R(x_R, z_R) to a left boundary x = 0, or a right boundary x = (M-1)·Δx/accuracy, or a lower boundary z = Z_depth-1 of the sector image, defining the refraction point R as a current point R_f, i.e. R_f(x_f, z_f) = R(x_R, z_R), and defining the refraction path as R_f E_f, E_f being an end point of R_f E_f in the sector image, i.e. an intersection point of the refraction path R_f E_f and a boundary of the sector image;

step 4.1.4: assuming x_g = x_f + 1, seeking a next pixel R_g(x_g, z_g) adjacent to the current refraction point R_f on the boundary c_1(x, z), where a coordinate of the pixel R_g satisfies c_1(x_g, z_g) = 0, defining R_g as a new refraction point, i.e. R(x_R, z_R) = R_g(x_g, z_g), executing step 4.1.2 and step 4.1.3 to scan-convert a new refraction path R_g E_g, and defining an end point of R_g E_g as E_g;

step 4.1.5: calculating a distance between E_g and E_f, equally dividing a curve segment between the new refraction point R_g and the current refraction point R_f on the boundary c_1(x, z) into Δd segments, i.e. inserting Δd-1 points, an index number of an inserted point being recorded as τ = 1, 2, ..., Δd-1, and repeatedly executing the following step 4.1.5.1 for τ from τ = 1 to τ = Δd-1 with 1 as a step interval:

step 4.1.5.1: calculating a coordinate of a τ-th inserted point R_τ by interpolation, and executing step 4.1.2 and step 4.1.3 to scan-convert a refraction path starting from the inserted point R_τ;

step 4.1.6: assuming R_f = R_g and E_f = E_g, and executing step 4.1.4 and step 4.1.5;

step 4.1.7: executing step 4.1.6 repeatedly until x_g = x_r + 1;

step 4.2: assuming m = 1, ..., M-1 in sequence, executing step 4.1 repeatedly and generating the image block in the interval from the boundary c_1(x, z) to Z_depth-1 on the longitudinal section;

step 5: using the image block in the interval from the boundary c_1(x, z) to Z_depth-1 on the longitudinal section obtained in step 4 as an input value, extracting a boundary c_2(x, z) between the second medium layer and a third medium layer formed between the boundary c_1(x, z) and Z_depth-1, and correcting an image block in an interval from the boundary c_2(x, z) to Z_depth-1 on the longitudinal section so as to eliminate an error caused by a heterogeneity between the second medium layer and other medium layers below the second medium layer; and

step 6: according to the algorithm in step 5, processing in sequence all other boundaries on the longitudinal section to generate an image on the longitudinal section with a width of (M-1)·Δx/accuracy + 1 pixels and a length of Z_depth pixels.

In one embodiment of the present disclosure, step 2 comprises:

step 2.1: assuming m = 0 and calculating a concentric arc centered at the m-th detecting point U(x_u, 0) on the longitudinal section, comprising:

step 2.1.1: assuming a coordinate value z_u = 0 in the depth direction, calculating a vertical distance r = z_u·accuracy between a coordinate point (x_u, z_u) and a center point (x_u, 0), and then calculating a time t = r/v_1 for the ultrasound to propagate over a distance r in the object and a sampling index number n = 2t·f_s corresponding to the time t;

step 2.1.2: judging whether the sampling value s_m(n) is non-zero; if yes, executing step 2.1.3, and if no, executing step 2.1.4;

step 2.1.3: calculating the half-power beam angle of the ultrasonic transducer, calculating an x-coordinate q of an intersection point of the arc and the right boundary of the half-power beam angle, assuming an initial value of a coordinate point (x_p, z_p) on the arc as (x_u, z_u), calculating an initial value of a discriminant Δ = (x_p + 1 - x_u)² + (z_p - 0.5)² - (z_u)², and executing circularly step 2.1.3.1 to step 2.1.3.3 for x_p with 1 as the step interval until x_p > z_p:

step 2.1.3.1: if Δ < 0, increasing Δ by 2x_p + 3; otherwise, increasing Δ by 2(x_p - z_p) + 5 and decreasing z_p by 1;

step 2.1.3.2: if x_p < q, increasing pixel values of coordinate points (x_p, z_p) and (2x_u - x_p, z_p) by ω(x_u, x_p)·s_m(n)/r, where ω(x_u, x_p) is an apodizing function; otherwise, keeping the pixel values of the coordinate points (x_p, z_p) and (2x_u - x_p, z_p) unchanged;

step 2.1.3.3: if z_p + x_u < q, increasing pixel values of coordinate points (z_p + x_u, x_p - x_u) and (x_u - z_p, x_p - x_u) by ω(x_u, z_p + x_u)·s_m(n)/r; otherwise, keeping the pixel values of the coordinate points (z_p + x_u, x_p - x_u) and (x_u - z_p, x_p - x_u) unchanged;

step 2.1.4: assuming in sequence the coordinate value z_u = 1, 2, ..., Z_depth-1 in the depth direction, and executing step 2.1.1 to step 2.1.3 repeatedly;

step 2.2: assuming in sequence m = 1, ..., M-1 and executing step 2.1 repeatedly to generate the image block in the interval from z_0 = 0 to Z_depth-1 in the depth direction with a width of (M-1)·Δx/accuracy + 1 pixels on the longitudinal section.

In step 2, the SAFT ultrasonic imaging is achieved by drawing arcs for the data sampled at each scanning position of the ultrasonic transducer, using an arc scan conversion technique. This method proceeds in a forward pattern without calculating the distance between the imaging point and each scanning position of the ultrasonic transducer, thus avoiding the RMS calculation and saving a large amount of computation.

In one embodiment of the present disclosure, in step 3, the boundary c_1(x, z) between the first medium layer and the second medium layer is extracted by using an edge detection method.

In one embodiment of the present disclosure, in step 5, the boundary c_2(x, z) between the second medium layer and the third medium layer formed between the boundary c_1(x, z) and Z_depth-1 is extracted by using the edge detection method.

In one embodiment of the present disclosure, the edge detection method comprises a Canny edge detection algorithm.

In one embodiment of the present disclosure, the Canny edge detection algorithm comprises:

step (a): obtaining an image I(x, z) by smoothing the image block on the longitudinal section;

step (b): calculating a gradient amplitude G(x, z) and a gradient direction F(x, z) of each pixel of the image I(x, z);

step (c): for each pixel, judging whether the gradient amplitude G(x, z) is less than the gradient amplitudes of the two adjacent pixels in the gradient direction F(x, z), and if yes, concluding that the pixel is not an edge point; and

step (d): for each pixel, concluding that the pixel is the edge point if the gradient amplitude G(x, z) is greater than a preset high threshold; concluding that the pixel is not the edge point if the gradient amplitude G(x, z) is less than a preset low threshold; and, if the gradient amplitude G(x, z) is between the preset high threshold and the preset low threshold, judging whether the gradient amplitude of any of the eight pixels adjacent to the pixel is greater than the preset high threshold, and if yes, concluding that the pixel is the edge point, and if no, concluding that the pixel is not the edge point.

In one embodiment of the present disclosure, step (a) comprises: smoothing the image block on the longitudinal section by using a 2-dimensional Gaussian function.

In one embodiment of the present disclosure, step (b) comprises: calculating first-order approximations p and q of the partial differentials in an X direction and a Z direction with a 2×2 template, and then calculating the gradient amplitude G and the gradient direction F by the formulas:

G = sqrt(p × p + q × q), F = arctan(q/p).

In one embodiment of the present disclosure, in step 4.1.3, all the pixels on the refraction path from the refraction point R(x_R, z_R) to the left boundary x = 0, or the right boundary x = (M-1)·Δx/accuracy, or the lower boundary z = Z_depth-1 of the sector image are calculated by using a Bresenham line scan conversion algorithm.

With the ultrasonic imaging method and system for detecting the multi-layered object with synthetic aperture focusing technique according to embodiments of the present disclosure, based on the SAFT technique, a quick and accurate ultrasonic imaging method for a single-medium object is achieved by equating DAS and all-range dynamic focusing with drawing arcs on the image using the arc scan conversion technique. On this basis, a refraction vector calculation formula is used in combination with the refraction law to calculate a refraction path in a forward pattern, and the pixel points on the refraction path are scan-converted using the Bresenham line scan conversion algorithm. The iterative calculation for finding refraction points and the RMS calculation for the propagation distance of the ultrasound are avoided by substituting, for the conventional SAFT combined with ray tracing, the method of obtaining trajectory curves on the image directly from the sampled data, which saves a large amount of computation. Moreover, the imaging error resulting from the inaccuracy in calculating an effective length of a synthetic aperture for the multi-layered object is avoided by substituting the half-power beam angle for the effective length. The ultrasonic imaging method and system for detecting the multi-layered object with synthetic aperture focusing technique according to embodiments of the present disclosure can construct ultrasonic nondestructive images of longitudinal sections of the multi-layered object in both the depth and horizontal directions.

BRIEF DESCRIPTION OF THE DRAWINGS

These and other aspects and advantages of embodiments of the present disclosure will become apparent and more readily appreciated from the following descriptions made in conjunction with the accompanying drawings, in which:

Fig. 1(a) is a principle diagram of an inverse calculation process of SAFT when an object contains a single medium; Fig. 1(b) is an explanatory diagram of SAFT in a forward pattern when an object contains a single medium; Fig. 1(c) is an effect diagram of all data sampled at a scan position ui in SAFT when an object contains a single medium; Fig. 1(d) is a principle diagram of an inverse calculation process of SAFT combined with ray tracing when an object contains multi-layered media; Fig. 1(e) is an explanatory diagram of SAFT combined with ray tracing in a forward pattern when an object contains multi-layered media; and Fig. 1(f) is an effect diagram of all data sampled at a scan position ui in the conventional SAFT combined with ray tracing;

Fig. 2(a) is a schematic diagram of a symmetric arc segment in the arc scan conversion technique according to an embodiment of the present disclosure; and Fig. 2(b) is a schematic diagram of a position relation between a current point on a circle and two candidate pixels of a next point in an arc scan conversion technique;

Fig. 3 is a schematic diagram of pixels and errors in each iteration of the Bresenham line scan conversion algorithm;

Fig. 4 is a flow chart of the synthetic aperture focusing method for ultrasonic imaging of the multi-layered object according to an embodiment of the present disclosure;

Fig. 5 is an algorithm flow chart of an imaging calculation part in the synthetic aperture focusing method for ultrasonic imaging of the multi-layered object according to an embodiment of the present disclosure;

Fig. 6 shows effects of various ultrasonic imaging methods in the same experimental environment: Fig. 6(a) is a schematic view of a longitudinal section of an object with irregular interfaces; Fig. 6(b) is an image of a longitudinal section of the object shown in Fig. 6(a) generated by the conventional SAFT in combination with a ray tracing; Fig. 6(c) is an image of a longitudinal section of the object shown in Fig. 6(a) generated by a conventional phase shift migration ultrasonic imaging technique; and Fig. 6(d) is an image of a longitudinal section of the object shown in Fig. 6(a) generated by the synthetic aperture focusing method for ultrasonic imaging of the multi-layered object according to an embodiment of the present disclosure;

Fig. 7 is an analysis chart of the experimental data in Fig. 6(d): Fig. 7(a) is an error chart of a first boundary, and Fig. 7(b) is an error chart of a second boundary;

Fig. 8 is a structural schematic view of an ultrasonic imaging system for detecting a multi-layered object with synthetic aperture focusing technique; and

Fig. 9 is a schematic working diagram of an ultrasonic transducer.

DETAILED DESCRIPTION

Embodiments of the present disclosure will be described in detail in the following descriptions, examples of which are shown in the accompanying drawings, in which the same or similar elements and elements having same or similar functions are denoted by like reference numerals throughout the descriptions. The embodiments described herein with reference to the accompanying drawings are explanatory and illustrative, which are used to generally understand the present disclosure. The embodiments shall not be construed to limit the present disclosure.

An idea of the present disclosure is described as follows.

It is known from the principle of SAFT that the imaging method is an inverse calculation process, comprising: determining one pixel on an image; figuring out all scanning positions of an ultrasonic transducer corresponding to the pixel; calculating a distance ri and a delay ti between the pixel and each scanning position; and accumulating the data on a delay curve to obtain a pixel value of the pixel. As a specific position of a target reflector inside the object is unknown in advance, the inverse calculation process is required to be performed for all the pixels using an all-range dynamic focusing technique to generate a whole image. In this way, ultrasonic signals are summed coherently at the pixel corresponding to the reflector so as to reach a maximal accumulation intensity and achieve focusing, while the ultrasonic signals are summed incoherently at other pixels so that their accumulation intensities are relatively low. This explains why the value obtained by accumulation at the pixel corresponding to the reflector is greater than that of other pixels.
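For reference, the inverse DAS accumulation described above may be sketched for a single pixel as follows; this is only a schematic in C under the assumption of a single-medium record, and the identifiers s, M, dx_pixels, fs, v, accuracy and N_SAMPLES are illustrative, not the disclosure's own:

    #include <math.h>

    #define N_SAMPLES 1024   /* illustrative record length per detecting point */

    /* Delay-and-sum value of one pixel (x, z), both in pixel coordinates;
     * s[m][n] is the n-th sample at the m-th detecting point. */
    float das_pixel(const float s[][N_SAMPLES], int M, float dx_pixels,
                    float fs, float v, float accuracy, int x, int z)
    {
        float sum = 0.0f;
        for (int m = 0; m < M; m++) {
            float xu = m * dx_pixels;                            /* detecting point */
            float r  = hypotf((float)x - xu, (float)z) * accuracy; /* distance ri */
            if (r <= 0.0f) continue;
            int n = (int)(2.0f * (r / v) * fs + 0.5f);           /* round-trip delay index */
            if (n < N_SAMPLES)
                sum += s[m][n] / r;                              /* accumulate on the delay curve */
        }
        return sum;
    }

A whole image is then generated by evaluating das_pixel() for every pixel, which is exactly the all-range dynamic focusing whose cost the forward method below avoids.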

In view of the calculation processes of all pixels on the image as a whole, it is found that each datum sampled at each scanning position acts not only on the pixel corresponding to the reflector but also on the pixels without a reflector. As shown in Fig. 1(b), a sampling datum si(ti) at ui is involved not only in the imaging calculation of a point (x, z), but also in the imaging calculation of other points on a curve segment arc si(ti). The arc si(ti) is a curve segment within a sound field transmitted by the ultrasonic transducer at ui. The distances between ui and the points on the curve segment are the same, and the delays of the points on the curve segment are also the same. Moreover, the sound field of the ultrasonic transducer at ui is within a half-power beam angle β0.5 of the ultrasonic transducer (i.e. between two dotted lines starting from ui in Fig. 1(b)). As an effective length L of the synthetic aperture is also defined as L = z·β0.5 in combination with formula (2), arc si(ti) is an arc centered at ui with an angle of β0.5.

As shown in Fig. 1(b), if the whole calculation process is comprehensively understood in a forward pattern, the effect of the sampling datum si(ti) in the imaging calculation of the whole image is equivalent to drawing an arc in the image with the data value ω·si(ti)/ri, i.e. the pixel value I(arc si(ti)) of each pixel on the arc satisfies I(arc si(ti)) = ω·si(ti)/ri. Within the range L, the intersection point of the arcs (arc s0(t0) to arc sM-1(tM-1)) in the image corresponding to all data (s0(t0) to sM-1(tM-1)) on the delay curve where si(ti) is located is the target reflector (x, z) in the conventional inverse calculation process.

If only the effect of all the sampling data si(t) at ui is considered in the whole image, the effect corresponds to a sector image Iui centered at ui with an angle of β0.5 (as shown in Fig. 1(c)). The conventional DAS calculation formula (3) can be re-expressed as a superposition of the sector images corresponding to all scanning positions, i.e.

I(x, z) = Σ(i=0 to M-1) Iui(x, z)    (4)

where M is a total number of scanning positions of the ultrasonic transducer.

Thus, the conventional SAFT can be achieved by successively drawing concentric arcs on the image with the radius ri and centered at the scanning positions of the ultrasonic transducer. In computer graphics, various mature circle scan conversion techniques have been developed to display geometric circles on a raster display device. To realize the drawing of an arc, the general circle scan conversion techniques may be expanded into an arc scan conversion technique applicable to an arc segment shown in Fig. 2(a). Considering that a semicircle centered at ui with a radius of ri intersects the two boundaries of the half-power beam angle β0.5 starting from the center, the x-coordinates of the two intersection points are q and 2ui-q respectively. It is known from the symmetry of the semicircle that if a known point (xp, zp) is on the semicircle, the three points (2ui-xp, zp), (zp+ui, xp-ui) and (ui-zp, xp-ui) are also on the semicircle. Thus, only a 1/4 arc of the semicircle needs to be scan-converted to calculate the pixel set of the whole semicircle using the symmetry. Among the points on the semicircle, a point with an x-coordinate between 2ui-q and q is on the arc. The calculation of the 1/4 arc is the key to the arc scan conversion technique, that is, if one point on the arc is known, how to select the next point. As shown in Fig. 2(b), if one point (xp, zp) on the arc is known, which is the next point, P1 or P2? The midpoint circle drawing algorithm is one of the circle scan conversion techniques. Taking this technique as an example, based on its principle, a mathematical function of the semicircle is constructed first:

F(x, z) = (x - ui)² + z² - ri², z ≥ 0

For points on the semicircle, F(x, z) = 0; for points outside the semicircle, F(x, z) > 0; and for points inside the semicircle, F(x, z) < 0. A discriminant Δ is constructed as follows:

Δ = F(xp + 1, zp - 0.5) = (xp + 1 - ui)² + (zp - 0.5)² - ri².

If Δ < 0, then P1 is the next pixel and the discriminant for the next pixel is

Δ' = F(xp + 2, zp - 0.5) = (xp + 2 - ui)² + (zp - 0.5)² - ri² = Δ + 2xp + 3;

if Δ ≥ 0, then P2 is the next pixel and the discriminant for the next pixel is

Δ' = F(xp + 2, zp - 1.5) = (xp + 2 - ui)² + (zp - 1.5)² - ri² = Δ + 2(xp - zp) + 5.

For the arc segment in Fig. 2(a), the first pixel (xp, zp) is (ui, ri). Thus, the conventional SAFT is implemented by using the arc scan conversion technique to draw concentric arcs for the data sampled at each scanning position of the ultrasonic transducer. This method is a forward imaging calculation that does not calculate the distance between the imaging point and each scanning position of the ultrasonic transducer. Thus, the RMS calculation is avoided, which saves a large amount of computation.
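A minimal C sketch of this arc scan conversion is given below; it follows the midpoint iteration and the four-fold symmetry described above, with plot() as an illustrative stand-in for the pixel accumulation of the later steps. The iteration works in coordinates relative to the center (u, 0), and beta is assumed to hold the (small) half-power beam angle so that r·beta approximates the clip abscissa:

    /* Quarter-arc midpoint iteration; the arc is centered at (u, 0) with
     * radius r, both in pixels. The clip against q mirrors the text. */
    void arc_scan(int u, int r, double beta, void (*plot)(int x, int z))
    {
        int a = 0, b = r;              /* (a, b) = (xp - u, zp)              */
        int q = (int)(r * beta);       /* beam clip, relative to the center  */
        int delta = 1 - r;             /* integer midpoint discriminant      */
        while (a <= b) {
            if (a < q) { plot(u + a, b); plot(u - a, b); }       /* arc pair       */
            if (b < q) { plot(u + b, a); plot(u - b, a); }       /* symmetric pair */
            if (delta < 0)  delta += 2*a + 3;
            else          { delta += 2*(a - b) + 5; b--; }
            a++;
        }
    }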

It is known from the principle of the conventional SAFT in combination with a ray tracing that the most time-consuming part of the method lies in the iterative calculation for seeking an optimal refraction point. It is known from the principles of the ray tracing and DAS that this method is an inverse calculation process, comprising: determining one pixel on an image; figuring out all the scanning positions corresponding to the pixel; calculating an optimal refraction point by iterating on the boundary for each scanning position so as to obtain an optimal distance and a delay between the pixel and the scanning position; and summing the data on a delay curve to obtain a pixel value of the pixel. As a specific position of a target reflector inside the object is unknown in advance, the inverse calculation process is required to be performed for all the pixels using the all-range dynamic focusing technique to generate a whole image.

In view of the calculation processes of all pixels on the image as a whole, it is found that each datum sampled at each scanning position acts not only on the pixel corresponding to the reflector but also on the pixels without a reflector. As shown in Fig. 1(e), a sampling datum si(ti) at ui is involved not only in the imaging calculation of the point (x, z), but also in the imaging calculation of other points on a curve segment tra si(ti). The tra si(ti) is a trajectory segment within a sound field transmitted by the ultrasonic transducer. The distances ri between the points on the trajectory segment and ui are not necessarily the same, but the delays of the points on the trajectory segment are the same. In the first medium layer, the sound field of the ultrasonic transducer at ui is within the half-power beam angle β0.5 of the ultrasonic transducer (i.e. between two dotted lines starting from ui in Fig. 1(e)). As the effective length L of the synthetic aperture is also defined as L = z·β0.5, in combination with formula (2), β0.5 = 0.84λ/D. In the second medium layer, as the medium changes, the ultrasound refracts at the boundary and the two boundaries (shown as dotted lines in Fig. 1(e)) of the original half-power beam angle β0.5 refract too. Thus, the range of the sound field of the ultrasonic transducer at ui changes, but the sound field is still within the region between the two boundaries (shown as dotted lines in Fig. 1(e)). Therefore, the curve segment tra si(ti) can be regarded as the trajectory of the datum si(ti) sampled at ui on the image, as the refraction point moves from left to right along the boundary between the first medium layer and the second medium layer within the range of the sound field. When the first medium layer and the second medium layer are the same, that is, there is no lamination, the trajectory is an arc centered at ui with an angle of β0.5 (as shown in Fig. 1(b)); when the first medium layer and the second medium layer are different, the profile of the trajectory is related to that of the boundary.

Thus, if the whole calculation process is comprehensively understood in a forward pattern, the effect of the sampling datum si(ti) in the image reconstruction calculation of the whole image is equivalent to drawing, in the image, a trajectory segment tra si(ti) with the data value ω·si(ti)/ri, i.e. the pixel value I(tra si(ti)) of each pixel on the trajectory satisfies I(tra si(ti)) = ω·si(ti)/ri. Within the range L, the intersection point of the trajectories (tra s0(t0) to tra sM-1(tM-1)) in the image corresponding to all data (s0(t0) to sM-1(tM-1)) on the delay curve where si(ti) is located is the target reflector (x, z) in the conventional inverse calculation process. If only the effect of all the sampling data si(t) at ui is considered in the whole image, the effect corresponds to an irregular sector image Iui centered at ui (as shown in Fig. 1(f)), i.e.

Iui(x, z) = Σ(n=0 to N-1) I(tra si(tn))(x, z)    (5)

where N is a sampling number at each scanning position. The DAS calculation formula (3) in the conventional method that combines SAFT with the ray tracing can be re-expressed as a superposition of the sector images corresponding to all scanning positions, i.e.

I(x, z) = Σ(i=0 to M-1) Iui(x, z)    (6)

where M is a total number of scanning positions of the ultrasonic transducer.

Thus, the conventional SAFT in combination with the ray tracing can be implemented by drawing trajectory curves on the image for the sampling data at each scanning position. A function of the trajectory tra si(ti) is required to be calculated. As shown in Fig. 1(e), assuming that P(xP, zP) is any point on the trajectory tra si(ti), the ultrasound is transmitted from a position U(ui, 0) of the ultrasonic transducer, and it takes the time ti for the ultrasound to reach the point P after refracting at a point R(xR, zR) on the boundary, i.e.

|RU|/vwater + |RP|/vobject = ti    (7)

where vwater is a propagation velocity of the ultrasound in water and vobject is a propagation velocity of the ultrasound in the object. Moreover, based on the refraction law, a refraction rate eta is expressed by a formula:

eta = sin(αi)/sin(αr) = vwater/vobject    (8)

where αi is an incidence angle and αr is a refraction angle. From formulas (7) and (8), the coordinates of the point P (i.e. the function of the trajectory curve tra si(ti)) can be obtained. However, for a simpler calculation, a refraction formula in computer graphics can be used to calculate a refraction vector RP, i.e.:

RP = eta×RU - [eta×(RU·e) + √a]×e    (9)

where e is a unit normal vector at the refraction point R(xR, zR) on the boundary, a = 1.0 - eta×eta×[1.0 - (RU·e)×(RU·e)], and "·" represents the vector dot product operation.
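A small C sketch of formula (9) follows; vec2, dot() and refract2() are illustrative names, RU and e are assumed to be normalized, and e is assumed to point toward the incident medium, which is the convention under which this form of the refraction formula holds:

    #include <math.h>

    typedef struct { double x, z; } vec2;

    static double dot(vec2 a, vec2 b) { return a.x*b.x + a.z*b.z; }

    /* RP = eta*RU - (eta*(RU.e) + sqrt(a))*e, with
     * a = 1 - eta*eta*(1 - (RU.e)^2); a < 0 means total internal
     * reflection, signalled here by a zero vector. */
    vec2 refract2(vec2 RU, vec2 e, double eta)
    {
        double c = dot(RU, e);
        double a = 1.0 - eta*eta*(1.0 - c*c);
        vec2 RP = {0.0, 0.0};
        if (a < 0.0) return RP;          /* no refracted ray */
        RP.x = eta*RU.x - (eta*c + sqrt(a))*e.x;
        RP.z = eta*RU.z - (eta*c + sqrt(a))*e.z;
        return RP;
    }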

As R is known, the coordinates of the point P can be obtained after RP is calculated by formula (9). As the point R moves on the boundary, all the points on the trajectory tra si(ti) can be calculated accurately. However, in practical applications, if a next trajectory segment tra si(ti+1) is drawn after the last trajectory tra si(ti) at one scanning position, this continuous trajectory drawing method involves multiple repeated calculations of the refraction vector by formula (9). For example, as shown in Fig. 1(f), R is a refraction point on the boundary within the range of the sound field, P is a point on the trajectory tra si(ti), and P' is a point on the trajectory tra si(ti+1). RP is calculated once while drawing the trajectory tra si(ti) and RP' is also calculated once while drawing the trajectory tra si(ti+1). However, the normalized vectors RP and RP' are the same; thus, the calculation of RP' is repeated. In fact, to avoid the repeated calculation, all the pixels on the refraction line within the image can be processed at the refraction point R along the refraction vector RP, and then the calculation moves to the next refraction point Rg to calculate the new refraction vector and process the pixels on the new refraction line. In this way, it is equivalent to simultaneously processing all trajectories corresponding to one scanning position. However, at each refraction point, only one point on each trajectory is drawn. All points on all the trajectories are drawn by going through all refraction points within the range of the sound field. Thus, the whole image is obtained by processing all the scanning positions.

The essence of processing all the pixels on the refraction line within the image at the refraction point R along the refraction vector RP is drawing a line with the value ω·si(ti)/ri along the refraction vector RP. Moreover, in computer graphics, various line scan conversion techniques have been developed to display lines on a raster display device, among which the Bresenham algorithm is the most widely used. The principle of the Bresenham algorithm is shown in Fig. 3. If a point (xp, zp) on a straight line is known, the next pixel may be one of two candidate points P1(xp+1, zp) and P2(xp+1, zp+1). The sign of an error term ε is used to determine whether the next pixel is the right point P1 or the lower-right point P2, which is described in detail as follows.

Assuming that an equation of the straight line is z = kx + b, when x increases by 1, z increases by k, where k is the slope. If |k| ≤ 1, it is known from the two candidate points of the next pixel that the x-coordinate of the next pixel is xp + 1 and the z-coordinate of the next pixel is either zp or zp + 1. Whether the z-coordinate is increased by 1 depends on the error term ε. An initial value of ε is 0. Each time the x-coordinate is increased by 1, the value of ε is increased correspondingly by k, i.e. ε = ε + k.

If |k| > 1, whether xp is increased by 1 should be considered instead, and the increment of ε is 1/k. When ε > 0.5, the intersection point of the straight line and the pixel grid (as shown in Fig. 3) is closer to P2, so the minor coordinate is increased by 1; meanwhile, P2 is taken as the new reference point for the next calculation and the value of ε is decreased by 1. When ε < 0.5, the intersection point of the straight line and the grid is closer to P1, and both the minor coordinate and the reference point for the next calculation remain unchanged.

To simplify the calculation, δ = ε - 0.5 is defined, where the initial value of δ is -0.5 and the increment is k (or 1/k). When δ > 0, (xp + 1, zp + 1) is taken as the next pixel and the value of δ is decreased by 1; when δ < 0, (xp + 1, zp) is taken as the next pixel and the value of δ remains unchanged. Thus, for the practical application situation shown in Fig. 1(f), in the Bresenham algorithm, the starting point on the straight line is (xp, zp) = (xR, zR), k is the slope of the refraction vector RP, and the pixels on the straight line along RP are scan-converted until the boundary of the image is reached. Therefore, the line segment of the refraction vector RP within the image can be scan-converted conveniently by using the Bresenham algorithm.
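The following C sketch illustrates this scan conversion along a refraction line; inside() and plot() are illustrative stand-ins for the image-boundary test and the pixel accumulation, and a non-negative slope k is assumed, as in Fig. 3 (a negative slope mirrors the minor-axis step):

    /* Bresenham walk along a line of slope k from the refraction point
     * (xr, zr); each visited pixel receives the given value. */
    void bresenham_ray(int xr, int zr, double k, double value,
                       int (*inside)(int x, int z),
                       void (*plot)(int x, int z, double value))
    {
        int x = xr, z = zr;
        double d = -0.5;                  /* delta = epsilon - 0.5 */
        while (inside(x, z)) {
            plot(x, z, value);
            if (k <= 1.0) {               /* x is the major axis */
                x++; d += k;
                if (d > 0) { z++; d -= 1.0; }
            } else {                      /* z is the major axis */
                z++; d += 1.0 / k;
                if (d > 0) { x++; d -= 1.0; }
            }
        }
    }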

Based on the above idea, an ultrasonic synthetic aperture focusing method for quickly and accurately imaging the multi-layered object is provided in the present disclosure. Fig. 4 is a flow chart of the ultrasonic synthetic aperture focusing method for detecting the multi-layered object according to an embodiment of the present disclosure. As shown in Fig. 4, the method comprises three parts, namely ultrasonic data acquisition, imaging calculation and image display, and specifically comprises 7 steps which will be described in detail below, in which step 1 belongs to the ultrasonic data acquisition part, and steps 2-7 belong to the imaging calculation part, i.e. taking the sampling data at each detecting point on a longitudinal section of the object as the input and calculating the longitudinal section image of the object. Fig. 5 is an algorithm flow chart of the imaging calculation part. The image display part refers to displaying the two-dimensional image obtained in the imaging calculation part on a display. The image may be displayed in grey or in color.

Step 1: given that a horizontal length of the object in an X-axis direction is Xlength which is equally divided into Xlength/Δx intervals, where Δx is a length of the interval, i.e. a step interval of each movement of an ultrasonic transducer from a start point (0, 0) to a terminal point (Xlength, 0) along the X-axis, an arrival point of each movement of the ultrasonic transducer is called a detecting point, a number of detecting points for the ultrasonic transducer on the X-axis is M, M = 1 + Xlength/Δx, and an index number of each detecting point is m, m = 0, 1, ..., M-1, a transistor-transistor logic (TTL) level pulse is generated at each detecting point to trigger the ultrasonic transducer to transmit a pulse in a depth direction Z of the object perpendicular to the X-axis. Then the ultrasonic transducer is switched to a receiving mode to receive and time echo signals reflected from the object. Then the echo signals received at an m-th detecting point are sampled and stored N times, a sampling index number is defined as n, n = 0, 1, ..., N-1, a preset value of a sampling frequency is defined as fs, the value of fs being a preset value in an analog digital converter, a sampling value sampled at an n-th time and at the m-th detecting point is defined as sm(n), and a sampling time of sm(n) is defined as t = n/fs.

In one embodiment, two modes are used by the ultrasonic transducer to acquire ultrasonic data: a contact mode (that is, the ultrasonic transducer directly contacts a surface of the object) and an immersion mode (that is, the object is placed in a liquid, and the ultrasonic transducer measures the object at the liquid surface). According to the mode, a contact probe or an immersion probe is used in the ultrasonic transducer. The step interval Δx is determined by an actual size of the object and the required imaging accuracy. The smaller the value of Δx, the more accurate the imaging, but the longer the calculation time.

Step 2: sampling values at a 0-th detecting point are read in sequence from n = 0 to n = N-1, and then sampling values at each detecting point m = 1, ..., M-1 are read in sequence by repeating the reading process.

Step 3: assuming a propagation velocity of an ultrasound in the object v = v1, where v1 is a propagation velocity of the ultrasound in a first medium layer of the object, an image block in an interval from z0 = 0 to Zdepth-1 in the depth direction Z on a longitudinal section is generated, where Zdepth is a preset length of a generated image, i.e. a depth value expressed by the number of pixels in the depth direction of the generated image.

In one embodiment of the present disclosure, in step 3, the image block in the interval from z0 = 0 to Zdepth-1 in the depth direction Z on the longitudinal section is generated by the ultrasonic synthetic aperture focusing method based on the arc scan conversion, which comprises the following sub-steps.

Step 3.1: assuming m = 0 and defining the m-th detecting point as U(xu, 0), where xu = m·Δx/accuracy and accuracy is the image accuracy, i.e. a distance between two adjacent pixels on the generated image, a concentric arc centered at the m-th detecting point U(xu, 0) on the longitudinal section is calculated, which comprises the following sub-steps.

Step 3.1.1: assuming a coordinate value zu = 0 in the depth direction, a vertical distance r = zu·accuracy between a coordinate point (xu, zu) and a center point (xu, 0) is calculated, and then a time t = r/v for the ultrasound to propagate over the distance r in the object and a sampling index number n = 2t·fs corresponding to the time t are calculated, where v is a preset value. It is to be pointed out that in step 3.1.1, the radius r of the arc and the corresponding sampling time t and sampling index number n need to be calculated for each coordinate value zu in the depth direction. As the coordinate value zu increases step by step from 0 to Zdepth-1 with 1 as the step interval, in a specific program, a radius increment Δr = accuracy, a time increment Δt = Δr/v and a sampling index number increment Δn = 2Δt·fs can be calculated in advance for a single step, and the sampling index number increment Δn is saved as a global variable. Then, as the value of zu increases, the radius r and the sampling index number n are increased step by step by the radius increment Δr and the sampling index number increment Δn respectively, i.e. r ← r + accuracy and n ← n + Δn. Therefore, an accumulation is substituted for the conventional multiplication and division, which improves the computing efficiency.
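This incremental update may be sketched in C as follows; all identifiers are illustrative, and the loop body stands in for the arc drawing of step 3.1.3:

    /* Depth sweep with precomputed increments: the radius and the sampling
     * index advance by additions instead of being recomputed with
     * multiplications and divisions at every depth. */
    void depth_sweep(double accuracy, double v, double fs, int Zdepth)
    {
        double dr = accuracy;       /* radius increment per depth step         */
        double dt = dr / v;         /* time increment per depth step           */
        double dn = 2.0 * dt * fs;  /* sampling-index increment (kept global)  */
        double r = -dr, n = -dn;    /* so the first step lands on zu = 0       */
        for (int zu = 0; zu < Zdepth; zu++) {
            r += dr;                /* r <- r + accuracy */
            n += dn;                /* n <- n + delta-n  */
            /* ... look up sm(n) and draw the arc of radius r here ... */
        }
    }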

Step 3.1.2: it is judged whether the sampling value sm(n) is non-zero; if yes, step 3.1.3 is executed, and if no, step 3.1.4 is executed.

It is to be pointed out that in step 3.1.1, a calculated value of the sampling index number n may be a non-integer. In this case, a linear interpolation is performed between the sampling values corresponding to the two integer index numbers adjacent to n so as to obtain the sampling value sm(n), i.e. sm(n) = sm(nlower) + (n - nlower)[sm(nupper) - sm(nlower)], where nlower = ⌊n⌋, ⌊·⌋ is the floor function, nupper = ⌈n⌉, and ⌈·⌉ is the ceiling function. In addition, the value of sm(n)/r needs to be calculated in step 3.1.3.2 and step 3.1.3.3 respectively. In order to save one calculation, the sampling value sm(n) is updated to sm(n) ← sm(n)/r as soon as the former is obtained in step 3.1.2, and thus the updated sm(n) can be used directly in step 3.1.3.2 and step 3.1.3.3.
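A possible C sketch of this interpolation, with sm as an illustrative name for the sample record of one detecting point:

    #include <math.h>

    /* Linearly interpolated sample for a non-integer index n. */
    double sample_at(const double *sm, int N, double n)
    {
        int lo = (int)floor(n);
        int hi = (int)ceil(n);
        if (lo < 0 || hi >= N) return 0.0;   /* outside the record */
        return sm[lo] + (n - lo) * (sm[hi] - sm[lo]);
    }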

Step 3.1.3: the half-power beam angle β0.5 of the ultrasonic transducer is calculated, the x-coordinate q of the intersection point of the right boundary of the half-power beam angle and the arc is calculated, an initial value of a coordinate point (xp, zp) on the arc is assumed as (xu, zu), an initial value of a discriminant Δ is calculated, and step 3.1.3.1 to step 3.1.3.3 are executed in a loop for xp with 1 as the step interval until xp - xu > zp.

It is to be pointed out that in the imaging calculation, the half-power beam angle β0.5 in step 3.1.3 is used M·Zdepth times. However, β0.5 is only related to intrinsic parameters of the ultrasonic transducer. As only one ultrasonic transducer is used in the embodiment of the present disclosure, the half-power beam angle β0.5 is a fixed value. In order to avoid repeated calculation, β0.5 is calculated in advance in the program and saved as a global variable. Then, the global variable is directly called to calculate the x-coordinate q = xu + zu·β0.5 of the intersection of the arc and the right boundary of the half-power beam angle.

Step 3.1.3.1: if Δ < 0, Δ is increased by 2xp + 3; otherwise, Δ is increased by 2(xp - zp) + 5 and zp is decreased by 1.

Step 3.1.3.2: if xp < q, pixel values of coordinate points (xp, zp) and (2xu-xp, zp) are increased by ω(xu, xp)·sm(n)/r, where ω(xu, xp) is an apodizing function; otherwise, the pixel values of the coordinate points (xp, zp) and (2xu-xp, zp) are kept unchanged.

Step 3.1.3.3: if zp + xu < q, pixel values of coordinate points (zp+xu, xp-xu) and (xu-zp, xp-xu) are increased by ω(xu, zp+xu)·sm(n)/r; otherwise, the pixel values of the coordinate points (zp+xu, xp-xu) and (xu-zp, xp-xu) are kept unchanged.

Step 3.1.4: assuming in sequence the coordinate value zu = 1, 2, ..., Zdepth-1 in the depth direction, step 3.1.1 to step 3.1.3 are executed repeatedly.

Step 3.2: assuming in sequence m = 1, ..., M-1, step 3.1 is executed repeatedly to generate the image block in the interval from z0 = 0 to Zdepth-1 in the depth direction with a width of (M-1)Δx/accuracy + 1 pixels on the longitudinal section.

In an embodiment of the present disclosure, a specific algorithm of the imaging calculation process in step 3 comprises an arc scan conversion subprogram ArcDrawing() and a SAFT main program AD-SAFT() based on the arc scan conversion, which are shown as follows.

ArcDrawing(xu, zu, β0.5, sm(n))
{
    q = xu + zu·β0.5;                          /* right boundary of the half-power beam angle */
    xp = xu;  zp = zu;  Δ = 1.25 - zu;         /* first pixel on the arc and initial discriminant Δ = F(xu+1, zu-0.5) */
    while (xp - xu ≤ zp)
    {
        if (Δ < 0) Δ += 2xp + 3;
        else { Δ += 2(xp - zp) + 5;  zp--; }
        if (xp < q) { I(xp, zp) += ω(xu, xp)·sm(n);  I(2xu-xp, zp) += ω(xu, xp)·sm(n); }
        if (zp + xu < q) { I(zp+xu, xp-xu) += ω(xu, zp+xu)·sm(n);  I(xu-zp, xp-xu) += ω(xu, zp+xu)·sm(n); }
        xp++;
    }
}

AD-SAFT(v)
{
    Δx /= accuracy;
    β0.5 = sin(0.5 × 0.84λ/D);
    Δn = 2·accuracy·fs/v;
    for m = 0 to M-1
    {
        xu = m·Δx;
        n = -Δn;  r = -accuracy;
        for zu = 0 to Zdepth-1
        {
            r += accuracy;  n += Δn;
            sm(n) /= r;                        /* pre-divide by the radius, see the note to step 3.1.2 */
            if (sm(n) != 0) ArcDrawing(xu, zu, β0.5, sm(n));
        }
    }
}

In step 3, the SAFT ultrasonic imaging is implemented by using the arc scan conversion technique to draw arcs for the data sampled at each scanning position of the ultrasonic transducer. This method proceeds in a forward pattern without calculating a distance between the imaging point and each scanning position of the ultrasonic transducer, thus avoiding the RMS calculation and saving a large amount of computation.

Step 4: the image block in the interval from z0 to Zdepth-1 on the longitudinal section obtained in step 3 is used as an input value and a boundary c1(x, z) between the first medium layer and a second medium layer is extracted. In one embodiment of the present disclosure, the boundary c1(x, z) between the first medium layer and the second medium layer is extracted by using an edge detection method such as a Canny edge detection algorithm. Specifically, a Canny edge detector software package can be used to extract the boundary c1(x, z) between the first medium layer and the second medium layer of the object, which comprises the following sub-steps.

Step (a): the image is smoothed. Specifically, an image I(x, z) is obtained by smoothing the original image using a two-dimensional Gaussian function.

Step (b): a gradient amplitude G(x, z) and a gradient direction F(x, z) of each pixel of the image I(x, z) are calculated.

Step (c): a non-maximum suppression of the gradient image is performed. For each pixel, if the gradient amplitude G(x, z) is less than the gradient magnitudes of the two adjacent pixels in the gradient direction F(x, z), it is judged that the pixel is not an edge point and the grayscale of I(x, z) is set to 0.

Step (d): a double-threshold processing is performed. A low threshold Low and a high threshold High are preset. It is judged that the pixel is an edge point if the gradient amplitude is greater than the high threshold High; it is judged that the pixel is not an edge point if the gradient amplitude is less than the low threshold Low; and if the gradient amplitude is between the two thresholds, it is judged whether the gradient amplitude of any of the eight pixels adjacent to the pixel is greater than the high threshold High; if yes, it is judged that the pixel is an edge point, and if no, it is judged that the pixel is not an edge point.
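The double-threshold decision of step (d), taken literally (a single pass over the eight neighbours rather than a full hysteresis propagation), may be sketched in C as follows; the names are illustrative:

    /* Returns 1 if pixel (x, z) of the row-major gradient image G is an
     * edge point under the double-threshold rule of step (d). */
    int is_edge(const float *G, int width, int height, int x, int z,
                float Low, float High)
    {
        float g = G[z*width + x];
        if (g > High) return 1;
        if (g < Low)  return 0;
        /* between the thresholds: accept if any 8-neighbour exceeds High */
        for (int dz = -1; dz <= 1; dz++)
            for (int dx = -1; dx <= 1; dx++) {
                if (dx == 0 && dz == 0) continue;
                int xx = x + dx, zz = z + dz;
                if (xx < 0 || xx >= width || zz < 0 || zz >= height) continue;
                if (G[zz*width + xx] > High) return 1;
            }
        return 0;
    }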

It is to be pointed out that, as accurate information about the boundary in the multi-layered object is unknown in advance, the ultrasonic imaging algorithm for a single medium is used in step 3 to generate an image so that the boundary between the first medium layer and the second medium layer can be extracted. As the ultrasound velocity in the first medium layer is used in the algorithm, the image of the first medium layer is accurate; furthermore, as the boundary between the first medium layer and the second medium layer is also a part of the first medium layer, the boundary extracted in step 4 is also accurate. In fact, the ultrasonic imaging algorithm used in step 3 can be any imaging technique for detecting a single medium, such as the conventional SAFT or the conventional phase shift migration technique.

Step 5: an image block in the interval from the boundary c1(x, z) to Zdepth-1 on the longitudinal section is corrected so as to eliminate an error caused by a heterogeneity between the first medium layer and other medium layers below the first medium layer. This step may further comprise the following sub-steps.

Step 5.1: assuming m = 0, a sector image corresponding to the m-th detecting point U(xu, 0) is calculated according to the following sub-steps. It is to be pointed out that, in a specific program, the step interval Δx of the ultrasonic transducer in the preceding step 1 can be converted to a pixel number by executing Δx ← Δx/accuracy. Thus, the x-coordinate xu = m·Δx can be directly calculated in step 3.1 and step 5.1, and the operation of dividing each m by the image accuracy to generate pixel coordinates on the image is avoided, which saves a large amount of division computations.

Step 5.1.1: a half-power beam angle β0.5 = 0.84λ/D of the ultrasonic transducer is calculated, where λ is a wavelength of the ultrasound transmitted in the object and D is a diameter of the ultrasonic transducer, and a left intersection point Bl(xl, zl) of a left boundary of the half-power beam angle and the boundary c1(x, z) and a right intersection point Br(xr, zr) of a right boundary of the half-power beam angle and the boundary c1(x, z) are calculated respectively, where c1(xl, zl) = 0 and c1(xr, zr) = 0.

It is to be pointed out that in the imaging calculation, the half-power beam angle β0.5 in step 5.1.1 is used M times. However, β0.5 is only related to the intrinsic parameters of the ultrasonic transducer. As only one ultrasonic transducer is used in the embodiment of the present disclosure, the half-power beam angle β0.5 is a fixed value. In order to avoid repeated calculation, β0.5 ← tan(0.5 × 0.84λ/D) is calculated in advance in the program and saved as a global variable. Thus, the global variable β0.5 is directly called to calculate the left and right boundaries of the beam.

In addition, the left intersection point Bl(xl, zl) of the left boundary of the half-power beam angle and the boundary c1(x, z) and the right intersection point Br(xr, zr) of the right boundary of the half-power beam angle and the boundary c1(x, z) are calculated respectively. In practice, the boundary c1(x, z) extracted by an edge detection method such as a Canny operator is saved as a pixel set. Values in the pixel set are arranged in order of increasing x-coordinate. The intersections Bl and Br are calculated just by searching these pixels in sequence: the first pixel point (x, z) whose coordinates satisfy the equation of the left beam boundary gives Bl, and the first pixel point (x, z) whose coordinates satisfy the equation of the right beam boundary gives Br. In the whole imaging calculation, the two (left and right) intersections Bl and Br are calculated for each m, but the half-power beam boundaries corresponding to each m are parallel to each other (as shown in Fig. 1(e)); the left intersection Bl corresponding to a current m is sure to be on the right of the left intersection of the last m, and the right intersection Br corresponding to the current m is sure to be on the right of the right intersection of the last m. Thus, the left intersection is calculated in each round only by searching the pixels on the boundary c1(x, z) from Bl of the last round. Accordingly, the right intersection is calculated in each round only by searching the pixels on the boundary c1(x, z) from Br of the last round.
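A C sketch of this monotone search for the left intersection is given below (the right intersection is analogous); pix, boundary and find_left_intersection() are illustrative, and beta is assumed to store tan(0.5 × 0.84λ/D) as in the text, so that the left beam boundary at depth z is x = xu - z·beta:

    typedef struct { int x, z; } pix;

    /* boundary: pixel set of c1(x, z) ordered by increasing x;
     * last_l: index found for the previous detecting point, so the
     * search never moves backwards. Returns the index of Bl. */
    int find_left_intersection(const pix *boundary, int count, int last_l,
                               double xu, double beta)
    {
        for (int i = last_l; i < count; i++)
            if (boundary[i].x >= xu - boundary[i].z * beta)
                return i;
        return count - 1;   /* no intersection within the record */
    }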

Step 5.1.2: assuming a refraction point R(xR, zR) = Bl(xl, zl) on the boundary c1(x, z), a normalized refraction vector is calculated by the formula RP = eta×RU - [eta×(RU·e) + √a]×e,

where eta is a relative refractive index between the two medium layers adjacent to the boundary c1(x, z), e is a unit normal vector at the refraction point R(xR, zR), a = 1.0 - eta×eta×[1.0 - (RU·e)×(RU·e)], "·" is a vector dot product operation, and RU is a normalized incidence vector from the detecting point U(xu, 0) to the refraction point R(xR, zR). Step 5.1.3: a slope k of the refraction vector RP is calculated, all pixels on a refraction path from the refraction point R(xR, zR) to a left boundary x = 0, or a right boundary x = (M-1)Δx/accuracy, or a lower boundary z = Zdepth-1 of the sector image are calculated, the refraction point R is defined as a current point Rf, i.e. Rf(xf, zf) = R(xR, zR), and the refraction path is defined as RfEf, where Ef is an end point of RfEf in the sector image, i.e. an intersection point of the refraction path RfEf and a boundary of the sector image. In one embodiment of the present disclosure, a Bresenham line scan conversion algorithm is used to calculate all the pixels on the refraction path.

Step 5.1.4: assuming xg = xf + 1, a next pixel Rg(xg, zg) adjacent to the current refraction point Rf on the boundary c1(x, z) is sought, where a coordinate of the pixel Rg satisfies c1(xg, zg) = 0; defining Rg as a new refraction point, i.e. R(xR, zR) = Rg(xg, zg), step 5.1.2 and step 5.1.3 are executed to scan-convert a new refraction path RgEg, and an end point of RgEg is defined as Eg.

Step 5.1.5: a distance Δd = |EfEg| between the end point Eg of the refraction path RgEg of the current refraction point Rg and the end point Ef of the refraction path RfEf of the last refraction point Rf is calculated, and a curve between the new refraction point Rg and the current refraction point Rf on the boundary c1(x, z) is equally divided into Δd segments, i.e. Δd-1 points are inserted (see the sketch after step 5.2), an index number of an inserted point is recorded as τ = 1, 2, ..., Δd-1, and the following step 5.1.5.1 is repeatedly executed for τ from τ = 1 to τ = Δd-1 with 1 as a step interval.

Step 5.1.5.1: a coordinate of a τ-th inserted point Rτ is calculated by interpolation, where xτ = xf + (xg - xf)τ/Δd and zτ = zf + (zg - zf)τ/Δd; assuming R(xR, zR) = Rτ(xτ, zτ), step 5.1.2 and step 5.1.3 are executed to scan-convert a refraction path starting from the inserted point Rτ.

Step 5.1.6: assuming Rf = Rg and Ef = Eg, step 5.1.4 and step 5.1.5 are executed.

Step 5.1.7: step 5.1.6 is executed repeatedly until xg = xr + 1, that is, all the refraction points and refraction paths between Bl(xl, zl) and Br(xr, zr) on the boundary c1(x, z) are processed to obtain the sector image corresponding to the m-th detecting point U(xu, 0).

Step 5.2: assuming m = 1, ..., M-1 in sequence, step 5.1 is executed repeatedly and the image block in the interval from the boundary c1(x, z) to Zdepth-1 on the longitudinal section is generated.
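The insertion of steps 5.1.4-5.1.5 may be sketched in C as follows; draw_refraction_path() stands in for steps 5.1.2-5.1.3, and pt, fill_between() and the other names are illustrative:

    #include <math.h>

    typedef struct { double x, z; } pt;

    /* When the refraction paths of two adjacent boundary pixels Rf and Rg
     * end |EfEg| pixels apart, the boundary curve between them is
     * subdivided so the fan of refraction lines leaves no gaps. */
    void fill_between(pt Rf, pt Rg, pt Ef, pt Eg,
                      void (*draw_refraction_path)(pt R))
    {
        int dd = (int)ceil(hypot(Eg.x - Ef.x, Eg.z - Ef.z)); /* delta-d = |EfEg| */
        for (int tau = 1; tau < dd; tau++) {                 /* dd-1 inserted points */
            pt Rt = { Rf.x + (Rg.x - Rf.x) * tau / dd,
                      Rf.z + (Rg.z - Rf.z) * tau / dd };
            draw_refraction_path(Rt);
        }
    }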

Step 6: using the image block in the interval from the boundary c1(x, z) to Zdepth-1 on the longitudinal section obtained in step 5 as an input value, a boundary c2(x, z) between the second medium layer and a third medium layer formed between the boundary c1(x, z) and Zdepth-1 is extracted by using an edge detection method such as the Canny edge detector software package, and an image block in an interval from the boundary c2(x, z) to Zdepth-1 on the longitudinal section is corrected so as to eliminate an error caused by a heterogeneity between the second medium layer and other medium layers below the second medium layer.

Step 7: according to the algorithm in step 6, all the other boundaries on the longitudinal section are processed in sequence to generate an image on the longitudinal section with a width of (M-1)Δx/accuracy + 1 pixels and a length of Zdepth pixels.

In the embodiments of the present disclosure, the imaging calculation process comprises a Bresenham line scan conversion subprogram BresenhamLine() and a main program SAFT-Refraction() of the synthetic aperture focusing algorithm in ultrasonic imaging for the multi-layered object, which are shown as follows.

BresenhamLine(U, R, k)
{
    r0 = |RU|;  t0 = 2·r0·fs/v1;                               /* delay accumulated down to the refraction point */
    if (|k| ≤ 1) Δr = √(1 + k·k);  else Δr = √(1 + 1/(k·k));   /* path length per pixel step */
    Δn = 2·Δr·fs/v2;
    P(x, z) = R(xR, zR);  δ = -0.5;  r = r0;  n = t0;
    while ((x, z) is within I)
    {
        I(x, z) += ω·sm(n)/r;                                  /* draw the current pixel of the refraction line */
        if (|k| ≤ 1) { x = x + 1;  δ = δ + k;    if (δ > 0) { z++;  δ = δ - 1; } }
        else         { z = z + 1;  δ = δ + 1/k;  if (δ > 0) { x++;  δ = δ - 1; } }
        r += Δr;  n += Δn;
    }
}

SAFT-Refraction( )
{
    Δx /= accuracy;
    v1, v2 are the ultrasound velocities in the upper two medium layers of the object in the depth direction;
    Itmp = I = AD-SAFT(v1);
    while (a boundary exists in Itmp)
    {
        c = Canny(Itmp);
        eta = v1/v2;                                           /* relative refractive index, formula (8) */
        Bl(xl, zl) = Br(xr, zr) = (0, c(0));
        for (m = 0; m < M; m++)
        {
            xu = m·Δx;  U = (xu, 0);
            for (x = xl; x < Xlength; x++)                     /* search the left intersection */
                if ((x, c(x)) reaches the left boundary of the beam) { Bl(xl, zl) = (x, c(x)); break; }
            for (x = xr; x < Xlength; x++)                     /* search the right intersection */
                if ((x, c(x)) reaches the right boundary of the beam) { Br(xr, zr) = (x, c(x)); break; }
            Rg(xg, zg) = (xl, c(xl));
            RgP = eta×RgU - [eta×(RgU·e) + √a]×e;
            kg = the slope of RgP;  Eg = the intersection of RgP and the boundary of I;
            Δd = 1;
            for (x = xl; x < xr; x += Δd)
            {
                if (x is an integer)                           /* refresh Δd with a look-ahead */
                {
                    E = Eg;
                    Rg(xg, zg) = (x+1, c(x+1));
                    RgP = eta×RgU - [eta×(RgU·e) + √a]×e;
                    kg = the slope of RgP;  Eg = the intersection of RgP and the boundary of I;
                    Δd = 1/|E Eg|;
                }
                R(xR, zR) = (x, c(x));
                RP = eta×RU - [eta×(RU·e) + √a]×e;
                k = the slope of RP;
                BresenhamLine(U, R, k);
            }
        }
        Itmp is updated as the image in the interval from the boundary c to Zdepth-1 in I;
        v1 and v2 are updated as the ultrasound velocities in the upper two medium layers in Itmp;
    }
}

It is to be pointed out that, for a regular multi-layered object, all the boundaries c(x, z) extracted by an edge detection method such as the Canny edge detector software package are straight lines, and each imaging calculation step remains unchanged.

Compared with the prior art, the advantages of the synthetic aperture focusing method in ultrasonic imaging for the multi-layered object according to embodiments of the present disclosure lie in quick and accurate imaging of the multi-layered object containing regular or irregular interfaces.

Fig. 6 shows effects of various ultrasonic imaging methods in the same experimental environment, in which Fig. 6(a) is a schematic view of a longitudinal section of an object with irregular interfaces; Fig. 6(b) is an image of a longitudinal section of the object shown in Fig. 6(a) generated by the conventional SAFT in combination with the ray tracing; Fig. 6(c) is an image of a longitudinal section of the object shown in Fig. 6(a) generated by the conventional phase shift migration ultrasonic imaging technique; and Fig. 6(d) is an image of a longitudinal section of the object shown in Fig. 6(a) generated by the method according to an embodiment of the present disclosure, with the white curves in Figs. 6(b)-6(c) being the upper and lower boundaries obtained in experiments. For the object shown in Fig. 6(a), it takes 16 s to generate the image shown in Fig. 6(d) by the method according to an embodiment of the present disclosure on an experimental machine (Intel Core Duo 2.66 GHz CPU and 2.0 GB RAM), which is approximately 1/112 of the time taken by the conventional SAFT in combination with the ray tracing and 1/4 of the time taken by the phase shift migration, given that for the ultrasonic transducer, the diameter is 0.5 mm, the step interval is 0.7 mm, the center frequency of the ultrasound is 5 MHz, the sampling frequency is 100 MHz and the imaging accuracy is 0.05 mm. For the object in Fig. 6(d), a maximal error of the upper boundary is 0.5 mm (as shown in Fig. 7(a)) and a maximal error of the lower boundary is 0.9 mm (as shown in Fig. 7(b)), which indicates a high imaging accuracy. The imaging quality of the lower boundary in Fig. 6(b) is not good enough, and serious errors even occur in the imaging of the lower boundary in Fig. 6(c). By comparison with the methods shown in Fig. 6(b) and Fig. 6(c), it can be seen that the imaging quality of the lower boundary obtained by the imaging method according to embodiments of the present disclosure is significantly superior to that of the above two methods.

A synthetic aperture focusing system in ultrasonic imaging for a multi-layered object is further provided in the present disclosure. As shown in Fig. 8, this system comprises: a positioning controller 100; an analog digital converter 200; an ultrasonic transducer 300; and a calculating device 400. An input end of the ultrasonic transducer 300 is connected with an output end of the positioning controller 100, an output end of the ultrasonic transducer 300 is connected with an input end of the analog digital converter 200, an input end of the calculating device 400 is connected with an output end of the analog digital converter 200, and an output end of the calculating device 400 is connected with an input end of the positioning controller 100. The positioning controller 100 is a driving device controlling a movement of the ultrasonic transducer 300, and parameters of the positioning controller 100 are input by the calculating device 400.

In an embodiment of the present disclosure, a contact probe or an immersion probe may be used in the ultrasonic transducer 300. Fig. 9 is a schematic working diagram of the ultrasonic transducer 300. As shown in Fig. 9, under the control of the positioning controller 100, the ultrasonic transducer 300 moves at a fixed velocity (such as 1 step interval/ms) with a uniform step interval Δx along the X-axis on a surface of the object. The step interval Δx of the ultrasonic transducer 300 is determined comprehensively by an actual size of the object and the required detecting accuracy. The smaller the step interval, the more accurate the generated image, but the longer the calculation time.

Given that a horizontal length of the object in an X-axis direction is Xlength which is equally divided into Xlength/Δx intervals, Δx is a length of the interval, a number of detecting points for the ultrasonic transducer 300 on the X-axis is M, M = 1 + Xlength/Δx, and an index number of each detecting point is m, m = 0, 1, ..., M-1, a TTL level pulse is generated by the positioning controller 100 at each detecting point to trigger the ultrasonic transducer 300 to transmit a pulse in a depth direction Z of the object perpendicular to the X-axis, echo signals reflected from the object are received and timed by the ultrasonic transducer 300, the echo signals received at an m-th detecting point are sampled N times by the analog digital converter 200, a sampling index number is defined as n, n = 0, 1, ..., N-1, a preset value of a sampling frequency is defined as fs, fs is preset by the analog digital converter 200, a sampling value sampled at an n-th time and at the m-th detecting point is defined as sm(n), and a sampling time of sm(n) is defined as t = n/fs; and the calculating device 400 is configured to store the sampling value sm(n) input from the analog digital converter, and to generate an image of the object on a longitudinal section by executing steps of:

step 1: reading in sequence sampling values sm(n) at a 0-th detecting point from n = 0 to n = N-1, and then reading in sequence sampling values sm(n) at detecting points m = 1, ..., M-1 by repeating the reading process; step 2: assuming a propagation velocity of an ultrasound in the object v = v1, v1 being a propagation velocity of the ultrasound in a first medium layer of the object, and generating an image block in an interval from z0 = 0 to Zdepth-1 in the depth direction Z on a longitudinal section, wherein Zdepth is a preset length of a generated image;

step 3: using the image block in the interval from z0 to Zdepth-1 on the longitudinal section obtained in step 2 as an input value and extracting a boundary c1(x, z) between the first medium layer and a second medium layer;

step 4: correcting an image block in the interval from the boundary c1(x, z) to Zdepth-1 on the longitudinal section so as to eliminate an error caused by a heterogeneity between the first medium layer and other medium layers below the first medium layer, comprising:

step 4.1: assuming m = 0, defining the m-th detecting point as U(xu, 0), wherein xu = m·Δx/accuracy, accuracy is an image accuracy, and calculating a sector image corresponding to the m-th detecting point U(xu, 0), comprising:

step 4.1.1: calculating a half-power beam angle β0.5 = 0.84λ/D of the ultrasonic transducer, where λ is a wavelength of the ultrasound transmitted in the object and D is a diameter of the ultrasonic transducer, and calculating a left intersection point Bl(xl, zl) of a left boundary of the half-power beam angle and the boundary c1(x, z) and a right intersection point Br(xr, zr) of a right boundary of the half-power beam angle and the boundary c1(x, z) respectively, where c1(xl, zl) = 0 and c1(xr, zr) = 0;

step 4.1.2: assuming a refraction point R(xR, zR) = Bl(xl, zl) on the boundary c1(x, z) and calculating a normalized refraction vector RP = eta×RU - [eta×(RU·e) + √a]×e, where eta is a relative refractive index between the two medium layers adjacent to the boundary c1(x, z), e is a unit normal vector at the refraction point R(xR, zR), a = 1.0 - eta×eta×[1.0 - (RU·e)×(RU·e)], "·" is a vector dot product operation, and RU is a normalized incidence vector from the detecting point U(xu, 0) to the refraction point R(xR, zR);

step 4.1.3: calculating a slope k of the refraction vector RP, calculating all pixels on a refraction path from the refraction point R(xR, zR) to a left boundary x = 0, or a right boundary x = (M-1)Δx/accuracy, or a lower boundary z = Zdepth-1 of the sector image, defining the refraction point R as a current point Rf, i.e. Rf(xf, zf) = R(xR, zR), and defining the refraction path as RfEf, Ef being an end point of RfEf in the sector image, i.e. an intersection point of the refraction path RfEf and a boundary of the sector image; step 4.1.4: assuming xg = xf + 1, seeking a next pixel Rg(xg, zg) to the current refraction point Rf on the boundary c1(x, z), where a coordinate of the pixel Rg satisfies c1(xg, zg) = 0, defining Rg as a new refraction point, i.e. R(xR, zR) = Rg(xg, zg), executing step 4.1.2 and step 4.1.3 to scan-convert a new refraction path RgEg, and defining an end point of RgEg as Eg;

step 4.1.5: calculating a distance Δd = |EfEg| between Eg and Ef, equally dividing a curve between the new refraction point Rg and the current refraction point Rf on the boundary c1(x, z) into Δd segments, i.e. inserting Δd-1 points, an index number of an inserted point being recorded as τ = 1, 2, ..., Δd-1, and repeatedly executing the following step 4.1.5.1 for τ from τ = 1 to τ = Δd-1 with 1 as a step interval;

step 4.1.5.1: calculating a coordinate of a τ-th inserted point Rτ by interpolation, wherein xτ = xf + (xg - xf)τ/Δd and zτ = zf + (zg - zf)τ/Δd, and assuming R(xR, zR) = Rτ(xτ, zτ), executing step 4.1.2 and step 4.1.3 to scan-convert a refraction path starting from the inserted point Rτ; step 4.1.6: assuming Rf = Rg and Ef = Eg and executing step 4.1.4 and step 4.1.5; step 4.1.7: executing step 4.1.6 repeatedly until xg = xr + 1;

step 4.2: assuming m = 1, ..., M-1 in sequence, executing step 4.1 repeatedly and generating the image block in the interval from the boundary c1(x, z) to Zdepth-1 on the longitudinal section;

step 5: using the image block in the interval from the boundary c1(x, z) to Zdepth-1 on the longitudinal section obtained in step 4 as an input value, extracting a boundary c2(x, z) between the second medium layer and a third medium layer formed between the boundary c1(x, z) and Zdepth-1, and correcting an image block in an interval from the boundary c2(x, z) to Zdepth-1 on the longitudinal section so as to eliminate an error caused by a heterogeneity between the second medium layer and other medium layers below the second medium layer; and

step 6: according to the algorithm in step 5, processing in sequence all other boundaries on the longitudinal section to generate an image on the longitudinal section with a width of (M-1)Δx/accuracy + 1 pixels and a length of Zdepth pixels.

In one embodiment of the present disclosure, step 2 comprises:

step 2.1: assuming m = 0 and calculating a concentric arc centered at the m-th detecting point U(xu, 0) on the longitudinal section, comprising:

step 2.1.1: assuming a coordinate value zu = 0 in the depth direction, calculating a vertical distance r = zu·accuracy between a coordinate point (xu, zu) and a center point (xu, 0), and then calculating a time t = r/v1 for the ultrasound to propagate over the distance r in the object and a sampling index number n = 2t·fs corresponding to the time t;

step 2.1.2: judging whether the sampling value sm(n) is non-zero; if yes, executing step 2.1.3, and if no, executing step 2.1.4;

step 2.1.3: calculating the half-power beam angle β0.5 of the ultrasonic transducer, calculating an x-coordinate q of the intersection point of the right boundary of the half-power beam angle and the arc, assuming an initial value of a coordinate point (xp, zp) on the arc as (xu, zu), calculating an initial value of a discriminant Δ and executing step 2.1.3.1 to step 2.1.3.3 in a loop for xp with 1 as the step interval until xp - xu > zp:

step 2.1.3.1: if Δ < 0, increasing Δ by 2xp + 3; otherwise, increasing Δ by 2(xp - zp) + 5 and decreasing zp by 1;

step 2.1.3.2: if xp < q, increasing pixel values of coordinate points (xp, zp) and (2xu-xp, zp) by ω(xu, xp)·sm(n)/r, where ω(xu, xp) is an apodizing function; otherwise, keeping the pixel values of the coordinate points (xp, zp) and (2xu-xp, zp) unchanged;

step 2.1.3.3: if zp + xu < q, increasing pixel values of coordinate points (zp+xu, xp-xu) and (xu-zp, xp-xu) by ω(xu, zp+xu)·sm(n)/r; otherwise, keeping the pixel values of the coordinate points (zp+xu, xp-xu) and (xu-zp, xp-xu) unchanged;

step 2.1.4: assuming in sequence the coordinate value zu = 1, 2, ..., Zdepth-1 in the depth direction, and executing step 2.1.1 to step 2.1.3 repeatedly;

step 2.2: assuming in sequence m = 1, ..., M-1 and executing step 2.1 repeatedly to generate the image block in the interval from z0 = 0 to Zdepth-1 in the depth direction with a width of (M-1)Δx/accuracy + 1 pixels on the longitudinal section.

In step 2, the SAFT ultrasonic imaging is achieved by using the arc scan conversion technique to draw arcs for the data sampled at each scanning position of the ultrasonic transducer. This method proceeds in a forward pattern without calculating a distance between the imaging point and each scanning position of the ultrasonic transducer, thus avoiding the RMS calculation and saving a large amount of computation. In one embodiment of the present disclosure, in step 3, the boundary c1(x, z) between the first medium layer and the second medium layer is extracted by using an edge detection method.

In one embodiment of the present disclosure, in step 5, the boundary c2(x, z) between the second medium layer and the third medium layer formed between the boundary c1(x, z) and Zdepth-1 is extracted by using the edge detection method.

In one embodiment of the present disclosure, the edge detection method comprises a Canny edge detection algorithm.

In one embodiment of the present disclosure, the Canny edge detection algorithm comprises: step (a): obtaining an image I(x, z) by smoothing the image block on the longitudinal section; step (b): calculating a gradient amplitude G(x, z) and a gradient direction F(x, z) of each pixel of the image I(x, z);

step (c): for each pixel, judging whether the gradient amplitude G(x, z) is less than gradient magnitudes of two adjacent pixels in the gradient direction F(x, z), and if yes, concluding that the pixel is not an edge point; and

step (d): for each pixel, concluding that the pixel is an edge point if the gradient amplitude G(x, z) is greater than a preset high threshold; concluding that the pixel is not an edge point if the gradient amplitude G(x, z) is less than a preset low threshold; and if the gradient amplitude G(x, z) is between the preset high threshold and the preset low threshold, judging whether the gradient amplitude of any of the eight pixels adjacent to the pixel is greater than the preset high threshold, and if yes, concluding that the pixel is an edge point, and if no, concluding that the pixel is not an edge point.

In one embodiment of the present disclosure, step (a) comprises: smoothing the image block on the longitudinal section by using a 2-dimensional Gaussian function.

In one embodiment of the present disclosure, step (b) comprises: calculating first-order approximations of a partial differential in an X direction and a Z direction with a 2×2 template, i.e.

p = [I(x+1, z) - I(x, z) + I(x+1, z+1) - I(x, z+1)]/2, q = [I(x, z+1) - I(x, z) + I(x+1, z+1) - I(x+1, z)]/2,

and then calculating the gradient amplitude G and the gradient direction F by formulas:

G = √(p×p + q×q), F = arctan(q/p).

In one embodiment of the present disclosure, in step 4.1.3, all the pixels on the refraction path from the refraction point R(xR, zR) to the left boundary x = 0, or the right boundary x = (M-1)Δx/accuracy, or the lower boundary z = Zdepth-1 of the sector image are calculated by using a Bresenham line scan conversion algorithm.

With the ultrasonic imaging method and system for detecting the multi-layered object with synthetic aperture focusing technique according to embodiments of the present disclosure, based on the SAFT technique and equating DAS and all-range dynamic focusing with drawing arcs on the image using the arc scan conversion technique, a quick and accurate ultrasonic imaging method for a single-medium object is achieved. On this basis, a refraction vector calculation formula is used in combination with the refraction law to calculate a refraction path in a forward pattern, and the pixel points on the refraction path are scan-converted using the Bresenham line scan conversion algorithm. The iterative calculation of refraction points and the RMS calculation of the propagation distance of the ultrasound are avoided by substituting the method of obtaining trajectory curves on the image from the sampling data for the conventional SAFT in combination with the ray tracing, which saves a large amount of computation. Moreover, an imaging error resulting from the inaccuracy in calculating an effective length of the synthetic aperture for the multi-layered object is avoided by substituting the half-power beam angle for the effective length. The ultrasonic imaging method and system for detecting the multi-layered object with synthetic aperture focusing technique according to embodiments of the present disclosure can construct an ultrasonic nondestructive image of longitudinal sections in both depth and horizontal directions of the multi-layered object.