


Title:
FIXTURE WITH VISION SYSTEM
Document Type and Number:
WIPO Patent Application WO/2021/237351
Kind Code:
A1
Abstract:
A fixture assembly for holding at least two parts at a location and orientation to form an interface surface therebetween such that they can be connected to form a larger component. The fixture assembly includes an imaging device for capturing at least one of the shape, presence, location, and orientation of the at least two parts. The fixture assembly further includes a processor and a memory device. The memory device includes a component profile data and receives captures from the at least one imaging device. The processor is configured to compare the captures from the imaging device with the component profile data and indicate when at least one of the shape, presence, location, and orientation of the at least two parts matches the component profile data. Once the parts match the component profile, the interface surfaces can be connected.

Inventors:
DENIJS ERIC (CA)
Application Number:
PCT/CA2021/050710
Publication Date:
December 02, 2021
Filing Date:
May 26, 2021
Assignee:
MAGNA INT INC (CA)
International Classes:
G01B21/20; B23K37/04; B23Q16/00; B25J19/04; G01B11/245; G01S17/894
Foreign References:
US20180243897A12018-08-30
Attorney, Agent or Firm:
GOWLING WLG (CANADA) LLP et al. (CA)
Claims:
CLAIMS

What is claimed is:

1. A fixture assembly comprising: at least one holding device for holding at least two parts at a location and orientation to form an interface surface therebetween; at least one imaging device spaced from the at least one holding device for capturing at least one of a shape, the location, and the orientation of the at least two parts; a processor; and a memory device having a component profile data that includes a shape, location, and orientation of a component to be formed from the at least two parts, the memory device further containing instructions that, when executed by the processor, cause the processor to: receive the at least one capture from the at least one imaging device; compare the at least one capture from the at least one imaging device with the component profile data and generate a signal when at least one of the shape, location, and orientation of the at least two parts matches the component profile data.

2. The fixture assembly of Claim 1, wherein, in response to the at least two parts matching the component profile data, the processor is further caused to perform a connecting operation along the interface surface with a robotic arm and a connecting unit.

3. The fixture assembly of Claim 2, wherein the connecting unit performs at least one of a welding or riveting operation at the interface surface.

4. The fixture assembly of Claim 1, further including at least two target units placed on the fixture assembly for providing a scale to the at least one capture from the at least one imaging device.

5. The fixture assembly of Claim 4, wherein the target units include at least one of a color and shape that the at least one imaging device is configured to recognize.

6. The fixture assembly of Claim 5, wherein the target units include surface markings.

7. The fixture assembly of Claim 5, wherein the target units include removable bodies.

8. The fixture assembly of Claim 4, wherein the target units include location aware technology.

9. The fixture assembly of Claim 1, wherein the memory further includes connection operations data associated with the component profile data.

10. The fixture assembly of Claim 1, wherein the memory includes a plurality of component profile data, and wherein one of the component profile data is selected before comparing to the at least one capture from the at least one imaging device.

11. The fixture assembly of Claim 1, wherein the at least one imaging device is configured for light detection and ranging.

12. The fixture assembly of Claim 11, wherein the at least one imaging device includes a laser source, a fixed mirror, a rotating mirror, and a laser reader.

13. The fixture assembly of Claim 12, wherein the processor is further caused to generate a signal to the laser source to release a plurality of pulses of light.

14. The fixture assembly of Claim 13, wherein the plurality of pulses of light include one of ultraviolet (UV), infrared (IR), and near IR wavelengths.

15. The fixture assembly of Claim 1, wherein the at least one imaging device includes a plurality of imaging devices.

Description:
FIXTURE WITH VISION SYSTEM

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This PCT International Patent Application claims the benefit of and priority to U.S. Provisional Patent Application Serial No. 63/030,191 filed on May 26, 2020, titled “Fixture With Vision System,” the entire disclosure of which is hereby incorporated by reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

[0002] The present invention relates to a fixture assembly with a vision system for capturing the shape, presence, location, and orientation of two parts that are joined to form a larger component.

2. Related Art

[0003] This section provides background information related to the present disclosure which is not necessarily prior art.

[0004] Productivity and efficiency are the goals in any production cycle. For some industries, like the automobile industry, production cycles can include large, multi-step operations, wherein a component is assembled out of several smaller parts. Production cycles often begin by forming the smaller parts with one of a large number of complex and expensive forming assemblies, such as stamping, extruding, or casting assemblies. While forming assembly technology has advanced enough that individual parts can be formed with great precision, connecting formed parts to one another with accuracy and uniformity can be difficult, and oftentimes components that have been assembled in the same production cycle have variances. However, as industry standards continue to increase, stricter and stricter tolerances are required.

[0005] To improve uniformity between components, many manufacturers use fixture assemblies for locating the various formed parts before they are connected together. More particularly, fixture assemblies provide a template with clamps and other holding devices so that when each formed part is placed in a respective holding device, the parts form an accurate representation of the component and can then be connected to one another. Fixture assemblies also typically include a series of integrated sensors that are used to detect the location of the formed part. While the use of sensors results in more accurate component construction, the sensors also require a significant amount of complicated wiring, which adds large upfront capital cost and also negatively impacts productivity because it takes a large amount of time to integrate. In addition, when sensors are integrated into fixture assemblies, they are prone to damage and displacement.

[0006] Accordingly, there is a continuing desire to develop fixture assemblies to maintain accurate and efficient production cycles.

SUMMARY OF THE INVENTION

[0007] This section provides a general summary of the disclosure and is not to be interpreted as a complete and comprehensive listing of all of the objects, aspects, features and advantages associated with the present disclosure.

[0008] The subject invention provides a fixture assembly. The fixture assembly comprises at least one holding device for holding at least two parts at a location and orientation to form an interface surface therebetween. At least one imaging device is spaced from the at least one holding device for capturing at least one of a shape, the location, and the orientation of the at least two parts. The fixture assembly includes a processor and a memory device. The memory device has a component profile data that includes a shape, location, and orientation of a component to be formed from the at least two parts. The memory device further contains instructions that, when executed by the processor, cause the processor to: receive the at least one capture from the at least one imaging device; compare the at least one capture from the at least one imaging device with the component profile data and generate a signal when at least one of the shape, location, and orientation of the at least two parts matches the component profile data.

[0009] Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] The drawings described herein are for illustrative purposes only of selected embodiments and are not intended to limit the scope of the present disclosure. The inventive concepts associated with the present disclosure will be more readily understood by reference to the following description in combination with the accompanying drawings wherein:

[0011] Figure 1 is a schematic view of a fixture assembly with a vision system;

[0012] Figure 2 is a schematic view of an imaging device for detecting the shape, presence, location, and orientation of at least two parts that are to be connected to one another;

[0013] Figure 3 is a schematic view of a vision system circuit;

[0014] Figure 4A is a method flow chart illustrating the steps of assembling a component out of two or more parts; and

[0015] Figure 4B is a continuation of the method flow chart in Figure 4A.

DESCRIPTION OF THE ENABLING EMBODIMENT

[0016] Example embodiments will now be described more fully with reference to the accompanying drawings. In general, the subject embodiments are directed to a fixture assembly with a vision system. However, the example embodiments are only provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.

[0017] Referring to the Figures, wherein like numerals indicate corresponding parts throughout the views, the fixture assembly with a vision system, e.g., fixture assembly 20 is intended to provide a template for accurately arranging two or more parts before they are connected.

[0018] Referring initially to Figure 1, the fixture assembly 20 is schematically illustrated.

The fixture assembly 20 includes a fixture holding tool 22 and a vision system 24 that includes at least one imaging device 26. The at least one imaging device 26 is spaced from the fixture holding tool 22 and may employ one of numerous techniques to detect the shape, presence, location, and orientation of the fixture holding tool 22 and at least two parts 28 so that the at least two parts 28 can be connected together to form a component 30. The fixture assembly 20 includes a series of holding devices 32, such as clamps, slides, cylinders, fasteners, nuts, other holding mechanisms, or a combination thereof, to hold the at least two parts 28 in a location and orientation where they form an interface surface 31 that can be connected to one another. The vision system 24 further includes at least one controller 34 in operable communication with the imaging device 26 for receiving image data from the at least one imaging device 26 and comparing it to at least one predetermined parameter. The vision system 24 further includes a user interface 36 in operable communication with the controller 34 for additional functionality, for example, changing the at least one predetermined parameter.

[0019] With continued reference to Figure 1, the vision system 24 may further include a series of target units 38, connected to various locations on the fixture assembly 20 and/or parts 28. The target units 38 provide a frame of reference for the parts 28 as they are located via the holding devices 32. For example, target units 38 may be connected to the holding devices 32 (e.g., clamps) to monitor if the holding device 32 is in an open, partially tightened, or tightened position. The fixture assembly 20 further includes a connection assembly 40 for connecting the at least two parts 28, once the at least two parts 28 are located within the predetermined parameters. The connection assembly 40 may include a robotic arm 42 carrying a connecting unit 44, such as a welding or riveting tool. The connection assembly 40 may be in operable communication with the controller 34 for automatically connecting the at least two parts 28 once they are located in the predetermined parameters. It should be appreciated that the connection assembly 40 could alternatively include other tools and mechanisms that connect the two or more parts 28 and may also be manually operated instead of being attached to a robotic arm 42.

[0020] In addition to helping locate parts 28, the target units 38 may provide a scale reference, so that the connection assembly 40 can weld along a predetermined distance or the rivet tool can place rivets within a certain distance of one another. For example, at least two target units 38 may be located on one of the parts 28, each adjoining part 28, the fixture holding tool 22, or a combination thereof such that the spacing and orientation between the target units 38 provide orientation and distance information. In some embodiments, at least one target unit 38 or a plurality of target units 38 may be placed on or around the interface 31 or one or two adjoining parts 28. The target units 38 may include a specific color and/or shape that the imaging device is configured to recognize. Therefore, in some embodiments, the target units 38 may include surface markings (e.g., paint) or removable bodies (e.g., magnetic buttons). In some embodiments, the target units 38 include location aware electronics, such as RFID technology.

[0021] With reference now to Figure 2, an example imaging device 26 is illustrated. In some embodiments, the imaging device 26 utilizes light detection and ranging (LiDAR) functionality and includes a laser source 46 that projects pulses of light onto a fixed mirror 48, the fixed mirror 48 then reflects the pulses of light to a rotating mirror 50, and the rotating mirror 50 then reflects the pulses of light to the part 28 and/or target units 38. The pulses of light that contact the part 28 and/or target units 38 are then reflected back between mirrors 48 and 50 towards a laser reader 52 located near the laser source 46. The time that it takes the pulses of light to leave the laser source 46 and return to the laser reader 52 thus provides an accurate representation of the part 28 and/or target unit 38 presence, shape, location, and orientation. The pulses of light may be ultraviolet (UV), infrared (IR) or near IR, or other wavelengths.
In addition, the imaging device 26 may employ other technologies such as depth cameras, 3D imaging cameras, RFID tracking, etc. For example, the imaging device 26 may include SICK 3D Vision sensors, Zivid One Plus, Intel® RealSense™ Depth Camera D435i, or other instrumentations. In some embodiments, the imaging device 26 includes one or more technologies that simultaneously develop a shape and orientation of the part 28 and a location and orientation of the target units 38, wherein readings can be compared for accuracy confirmation.
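The time-of-flight principle described in paragraph [0021] can be sketched in a few lines: the pulse travels to the reflecting surface and back, so the one-way distance is half the round-trip path. This is an illustrative sketch only; the function name and example timing value are assumptions, not part of the disclosed system.

```python
# Illustrative sketch of LiDAR time-of-flight ranging: convert a pulse's
# measured round-trip time into a distance to the part or target unit.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to the reflecting surface given the pulse's round-trip time.

    The pulse travels out and back, so the one-way distance is half
    the total path length (c * t).
    """
    return C * t_seconds / 2.0

# A pulse returning after roughly 6.67 nanoseconds corresponds to a
# reflecting surface about one metre away.
one_metre_example = distance_from_round_trip(6.67e-9)
```

In practice a scanner of this kind would repeat this calculation for many pulses per rotation of the mirror to build up a point cloud of the part.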

[0022] With reference now to Figure 3, a vision system circuit 200 is schematically illustrated. Elements of the vision system circuit 200 may be in a local or remote location. The various elements provided therein allow for a specific implementation. Thus, one of ordinary skill in the art of electronics and circuits may substitute various components to achieve a similar functionality. The vision system circuit 200 includes a CPU circuit 202 associated with the controller 34, an imaging system 204 associated with the imaging device 26, a user interface system 205 associated with user interface 36, and a connecting operations circuit 206 associated with the connection assembly 40.

[0023] The CPU circuit 202 includes the controller 34 that includes a processor 210, a communications unit 212 (for example associated with wired 220 or wireless 222 internet, Bluetooth, or other short and long range connections), and a memory 214 having machine-readable non-transitory storage. The memory 214 may include instructions that, when executed by the processor 210, cause the processor 210 to, at least, perform the methods described herein. Programs and/or software 216 are saved on the memory 214, as is data 218 obtained (e.g., captures) via the imaging system 204 and the user interface system 205 (operation selections). The processor 210 carries out instructions based on the software 216 and data 218, for example, providing instructions to the connecting operations circuit 206 to perform one of the welding, riveting, and/or fastening operations on the parts 28. Communications between the CPU circuit 202, the imaging system 204, the user interface system 205, and the connecting operations circuit 206 are communicated to and from the communications unit 212 (wired 220 or wireless 222), allowing one or both of transmittal and receipt of information. As such, software 216 and data 218 may be updated via instructions from the user interface system 205, which may be in communication with a central server, a cloud server, or a combination thereof.

[0024] The imaging system 204 includes imaging devices 26A-26N, A equaling one and N being all natural numbers. The imaging devices 26A-26N communicate captures of part 28 to the CPU circuit 202, which, in response, can extrapolate the captures into a shape, presence, location, and orientation of the part and then compare the extrapolation to predetermined parameters (e.g., a 3D computer rendition of the component 30). Once the predetermined parameters are met, the CPU circuit 202 then communicates to the connecting operations circuit 206 to begin connecting the parts 28. The CPU circuit 202 may further include an alarm 224 for providing a visual or auditory notice to an operator once the parts 28 match the predetermined parameters. As such, certain safety protocols may be stored within the memory 214 to prevent any operations until the parts 28 match the predetermined parameters.
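The comparison step described in paragraph [0024] amounts to checking each part's measured pose against the predetermined parameters within some tolerance before the connecting operation is released. The sketch below is a minimal illustration under assumed names (Pose, the tolerance values, and the helper functions are not part of the disclosure):

```python
# Minimal sketch: gate the connecting operation on every part's measured
# pose matching the component profile within position/angle tolerances.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # position, e.g. millimetres
    y: float
    z: float
    yaw_deg: float  # orientation about the vertical axis, degrees

def matches_profile(measured: Pose, target: Pose,
                    pos_tol: float = 0.5, angle_tol: float = 1.0) -> bool:
    """True when the measured pose is within tolerance of the profile pose."""
    within_position = (abs(measured.x - target.x) <= pos_tol and
                       abs(measured.y - target.y) <= pos_tol and
                       abs(measured.z - target.z) <= pos_tol)
    within_angle = abs(measured.yaw_deg - target.yaw_deg) <= angle_tol
    return within_position and within_angle

def ready_to_connect(measured_parts, profile) -> bool:
    """Safety-protocol style gate: all parts must match their profile entry."""
    return all(matches_profile(m, t) for m, t in zip(measured_parts, profile))
```

In a real controller the same predicate would also drive the alarm 224, so the operator is notified the moment the gate opens.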

[0025] The predetermined parameters (e.g., a 3D computer rendition of the component 30) and their association with certain types of parts 28 or components 30 can be saved in the memory 214. For example, a component profile data 226 (e.g., a 3D computer rendition of the component 30) may be saved in memory 214. The component profile data 226 may include several profiles related to specific components, such as a variety of automobile components. Each component profile data 226 may include the number of parts 28 needed to form the component 30 as well as the shape, location, and orientation of each part 28. The component profile data 226 may further include interface surface locations and connection instructions for the connecting operations circuit 206, such as location information for welding, riveting, or other fastening/connecting means.
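One plausible way to organize the component profile data 226 described above is as a record per component listing its required parts, their expected poses, the interface locations, and the connection instructions. The field names and the example door profile below are illustrative assumptions only:

```python
# Illustrative layout for a component profile: which parts are needed,
# where each belongs, and how the component is to be connected.
from dataclasses import dataclass, field

@dataclass
class PartSpec:
    name: str
    shape_model: str   # e.g. a path to a 3D reference model of the part
    location: tuple    # expected (x, y, z) position in the fixture
    orientation: tuple # expected (roll, pitch, yaw) in degrees

@dataclass
class ComponentProfile:
    component_id: str
    parts: list                     # list of PartSpec entries
    interface_locations: list       # where adjoining parts meet
    connection_instructions: list = field(default_factory=list)

# Hypothetical profile for a two-part automotive component.
door_profile = ComponentProfile(
    component_id="door-inner-2021",
    parts=[PartSpec("panel", "panel.stp", (0, 0, 0), (0, 0, 0)),
           PartSpec("reinforcement", "reinf.stp", (120, 0, 0), (0, 0, 90))],
    interface_locations=[(120, 0, 0)],
    connection_instructions=["weld: 40 mm seam along interface"],
)
```

Storing profiles this way makes the selection step straightforward: the operator picks a `component_id`, and the controller loads the matching record from memory 214.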

[0026] Target location data 228 associated with target units 38 may also be saved in the memory 214. Target location data 228 may be initially gathered by communications from the imaging devices 26 to the CPU circuit 202. Memory 214 may also include connecting operations data 230 that are associated with the component profile data 226, such that when a component profile data 226 is selected, only connecting operations data 230 that can be implemented on the component 30 associated with the component profile data 226 can also be selected. In other words, if the connecting operations data 230 provides that a certain type of connection technique (e.g., rivets) is not appropriate for a certain component, the CPU circuit 202 may generate a warning, prevent selection of the inappropriate connection technique, or require a bypass password.
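The gating behavior in paragraph [0026] — warn, block, or require a bypass when an inappropriate technique is selected — can be sketched as a simple lookup. The table contents, component identifiers, and placeholder credential below are hypothetical:

```python
# Sketch of connecting-operations gating: a technique may only be
# selected if the connecting operations data permits it for the
# chosen component profile, unless a bypass credential is supplied.

# Hypothetical connecting operations data keyed by component profile.
ALLOWED_TECHNIQUES = {
    "door-inner-2021": {"weld", "fasten"},  # rivets not appropriate here
    "bracket-2021": {"weld", "rivet"},
}

def select_technique(component_id: str, technique: str,
                     bypass_password: str = "") -> str:
    """Return the selection outcome for a requested connection technique."""
    allowed = ALLOWED_TECHNIQUES.get(component_id, set())
    if technique in allowed:
        return "selected"
    if bypass_password == "supervisor-override":  # placeholder credential
        return "selected-with-bypass"
    return "blocked: technique not appropriate for this component"
```

A production controller would of course couple this with the warning/alarm path rather than returning strings, but the decision structure is the same.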

[0027] Target location data 228 may be used to modify the scale of the component profile data 226 and the connecting operations data 230. In addition, CPU circuit 202 may be configured to periodically check the target location data 228 to ensure a uniform orientation between cycles via detections from the imaging system 204. As such, if one of the target units 38 is moved with respect to the other target units 38, the alarm 224 may provide a visual or auditory notice to an operator and the CPU circuit 202 may generate a safety protocol to prevent any further operations until the displaced target unit 38 is realigned. In some embodiments, the target location data 228, the component profile data 226, the image capturing data 218, or a combination thereof are compared before the parts 28 are joined.
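The periodic target check described in paragraph [0027] compares the saved target location data against fresh detections and flags any target unit that has drifted, so the alarm and safety protocol can halt operations. A minimal sketch, assuming 2D target coordinates and an arbitrary 1 mm tolerance:

```python
# Sketch: flag target units whose detected position has drifted from the
# saved target location data beyond a tolerance, triggering the safety stop.
import math

def displaced_targets(saved, detected, tol_mm=1.0):
    """Return indices of target units that have moved more than tol_mm
    from their saved location."""
    moved = []
    for i, (s, d) in enumerate(zip(saved, detected)):
        if math.dist(s, d) > tol_mm:
            moved.append(i)
    return moved

# Example: the third target has drifted 3.5 mm and should be flagged.
saved = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
detected = [(0.1, 0.0), (100.0, 0.2), (3.5, 100.0)]
assert displaced_targets(saved, detected) == [2]
```

On a non-empty result, the controller would raise the alarm 224 and hold all connecting operations until the displaced target is realigned.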

[0028] The subject invention further includes a method 300 including several steps of assembling a component out of two or more parts with a fixture assembly. The method 300 includes providing 302 a fixture assembly having at least one holding device for holding at least two parts in a specific location and orientation to form an interface surface therebetween. The method may further include placing 304 target units at various locations on the fixture assembly to form a frame of reference. A component profile data is then selected 306 that corresponds to the component that is to be assembled. Once the component profile data is selected 306, a connecting operation data can be selected 308 based at least in part on which component profile data was selected. The method 300 further includes placing 310 at least two parts into the fixture assembly. Step 310 may further include placing 312 the at least two parts in holding devices on the fixture assembly. Step 310 may further include partially tightening 314 the holding devices. Step 310 may further include placing target units on at least one of the parts. The method 300 further includes adjusting 316 the parts until they fit a component profile. Step 316 may further include capturing 318 an image of the parts, via an imaging device, and comparing 320 the captured image to the component profile. Step 320 may include capturing 322 the shape, presence, location, and orientation of the at least two parts. Therefore, if a part with a non-conforming shape is present, a notification may be generated. Step 320 may further include using the target units as a scale 324 for a frame of reference, scale, or orientation.
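The adjust-capture-compare loop of steps 316 through 324 can be sketched as a simple control loop. The function and parameter names below (including the injected `capture_image`, `extract_pose`, and `matches` callables standing in for the imaging and comparison machinery) are illustrative assumptions:

```python
# Sketch of steps 316-324: repeatedly capture the parts, compare the
# extracted poses to the component profile, and adjust until they fit.

def locate_parts(fixture, profile, capture_image, extract_pose, matches,
                 max_attempts=50):
    """Repeat capture/compare until every part fits the component profile.

    Returns True once all parts match (the point at which a signal to the
    operator would be generated, per step 326), or False if the parts
    cannot be brought into conformance within max_attempts.
    """
    for _ in range(max_attempts):
        capture = capture_image(fixture)      # step 318
        poses = extract_pose(capture)         # step 322
        if all(matches(p, t) for p, t in zip(poses, profile)):  # step 320
            return True
        fixture.adjust()                      # step 316: nudge the parts
    return False
```

Note the loop cleanly separates sensing (capture/extract) from the decision (compare against the profile), matching the division between the imaging system 204 and the CPU circuit 202 in Figure 3.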

[0029] With reference now to a continuation of the method 300 provided on Figure 4B, the method 300 may also include generating a signal 326 to an operator that the parts match the component profile (e.g., size, location, and orientation). After the parts match the component profile, the holding devices may be completely tightened 328 and the parts can be re-compared 330 to the component profile data and the target unit data to ensure no displacement occurred during the tightening of the holding devices. After the parts are held in conformance with the component profile, a connecting operation 332 is performed. Based on the reference to the connections operations data at step 308, the method 300 further includes controlling 336 a robotic arm to connect two parts. Step 336 may include welding, riveting, or other fastening operations 338 and may further include using the target unit as a reference to scale 340 the size of a connection operation between parts. The method 300 may further include using the imaging device to check 342 the quality of the connection between the at least two parts and may rely on the connection operations data when assessing the quality (e.g., weldment size and location, rivet location, etc.).

[0030] Implementations of the systems, algorithms, methods, instructions, etc., described herein can be realized in hardware, software, or any combination thereof. The hardware can include, for example, computers, intellectual property (IP) cores, application-specific integrated circuits (ASICs), programmable logic arrays, optical processors, programmable logic controllers, microcode, microcontrollers, servers, microprocessors, digital signal processors, or any other suitable circuit. In the claims, the term “processor” should be understood as encompassing any of the foregoing hardware, either singly or in combination. The terms “signal” and “data” are used interchangeably.

[0031] Further, in one aspect, for example, systems described herein can be implemented using a general-purpose computer or general-purpose processor with a computer program that, when executed, carries out any of the respective methods, algorithms, and/or instructions described herein. In addition, or alternatively, for example, a special purpose computer/processor can be utilized which can contain other hardware for carrying out any of the methods, algorithms, or instructions described herein.

[0032] It should be appreciated that the foregoing description of the embodiments has been provided for purposes of illustration. In other words, the subject disclosure is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.