Title:
SYSTEMS AND METHODS FOR DYNAMIC PROCESSING OF OBJECTS PROVIDED IN VEHICLES WITH OBSTRUCTION REMEDIATION
Document Type and Number:
WIPO Patent Application WO/2023/059830
Kind Code:
A1
Abstract:
An object processing system is disclosed for facilitating removal of objects from a trailer (12) of a tractor trailer. The object processing system includes an object obstruction detection system for identifying an obstruction during removal of objects, a force application system for applying any of a plurality of defined forces to the obstruction, an assessment system for collecting force feedback information responsive to the application of force to the obstruction and providing assessment information, and a control system for providing a removal model for removing the obstruction based on the assessment information.

Inventors:
ALLEN THOMAS (US)
COHEN BENJAMIN (US)
AMEND JOHN (US)
ROMANO JOSEPH (US)
MASON MATTHEW (US)
WU YANCHUN (US)
SINGH JAGTAR (US)
Application Number:
PCT/US2022/045947
Publication Date:
April 13, 2023
Filing Date:
October 06, 2022
Assignee:
BERKSHIRE GREY OPERATING COMPANY INC (US)
International Classes:
B65G67/24; B25J9/16; B25J13/08; G05B19/00
Foreign References:
EP2500150A2 (2012-09-19)
US20200394747A1 (2020-12-17)
US20210198090A1 (2021-07-01)
US20200223066A1 (2020-07-16)
US202163252807P
US202163252811P
Attorney, Agent or Firm:
HILTON, William, E. et al. (US)
Claims:
CLAIMS

1. An object processing system for facilitating removal of objects from a trailer of a tractor trailer, said object processing system comprising: an object obstruction detection system for identifying an obstruction during removal of objects; a force application system for applying any of a plurality of defined forces to the obstruction; an assessment system for collecting force feedback information responsive to the application of force to the obstruction and providing assessment information; and a control system for providing a removal model for removing the obstruction based on the assessment information.

2. The object processing system as claimed in claim 1, wherein the force application system applies forces in each of three mutually orthogonal directions.

3. The object processing system as claimed in any of claims 1 - 2, wherein the force application system applies rotational forces along each of three mutually orthogonal axes.

4. The object processing system as claimed in any of claims 1 - 3, wherein the object processing system further includes a perception system for providing perception data regarding movement of any of the objects responsive to efforts to rock the obstruction.

5. The object processing system as claimed in any of claims 1 - 4, wherein the control system records data from prior attempts and applies machine learning in providing the removal model.

6. The object processing system as claimed in any of claims 1 - 5, wherein the removal model includes information regarding whether an end-effector attached to a programmable motion device should be changed.

7. The object processing system as claimed in any of claims 1 - 6, wherein the object processing system further includes a pallet removal system for engaging a pallet and removing the pallet and a subset of the plurality of objects from the trailer.

8. The object processing system as claimed in claim 7, wherein the pallet removal system includes pallet lift forks that are mounted on a swing arm under the dynamic engagement system.

9. The object processing system as claimed in any of claims 1 - 8, wherein the removal model provides a classification of objects to facilitate processing of unknown objects.

10. The object processing system as claimed in any of claims 1 - 9, wherein the object processing system further includes an output perception system for providing perception data regarding objects provided by an object engagement system, and a diverting system for diverting certain selected objects responsive to the perception data.

11. A method of facilitating removal of objects from a trailer of a tractor trailer, said method comprising: identifying an obstruction during removal of objects within the trailer; applying any of a plurality of defined forces to the obstruction; collecting force feedback information responsive to the application of force to the obstruction and providing assessment information; and providing a removal model for removing the obstruction based on the assessment information.

12. The method as claimed in claim 11, wherein applying any of the plurality of defined forces includes applying forces in each of three mutually orthogonal directions.

13. The method as claimed in any of claims 11 - 12, wherein applying any of the plurality of defined forces includes applying rotational forces along each of three mutually orthogonal axes.

14. The method as claimed in any of claims 11 - 13, wherein the method further includes providing perception data regarding movement of any of the objects responsive to efforts to rock the obstruction.

15. The method as claimed in any of claims 11 - 14, wherein the method further includes recording data from prior attempts and applying machine learning in providing the removal model.

16. The method as claimed in any of claims 11 - 15, wherein the removal model includes information regarding whether an end-effector attached to a programmable motion device should be changed.

17. The method as claimed in any of claims 11 - 16, wherein the method further includes engaging a pallet within the trailer and removing the pallet from the trailer.

18. The method as claimed in any of claims 11 - 17, wherein the removal model provides a classification of objects to facilitate processing of unknown objects.

19. The method as claimed in any of claims 11 - 18, wherein the method further includes providing perception data regarding objects provided by an object engagement system, and diverting certain selected objects responsive to the perception data.

20. An obstruction assessment system for assessing an obstruction within a trailer of a tractor trailer, said obstruction assessment system comprising: a force application system for applying a plurality of defined forces to an obstruction, said plurality of forces including forces each with a defined axis, wherein the axes are mutually orthogonal; an assessment system for collecting force feedback information responsive to the application of force to the obstruction and providing assessment information; and a control system for providing a removal model for removing the obstruction based on the assessment information.

21. The obstruction assessment system as claimed in claim 20, wherein the force application system applies forces in each of three mutually orthogonal directions.

22. The obstruction assessment system as claimed in any of claims 20 - 21, wherein the force application system applies rotational forces along each of three mutually orthogonal axes.

23. The obstruction assessment system as claimed in any of claims 20 - 22, wherein the obstruction assessment system further includes a perception system for providing perception data regarding movement of any of the objects responsive to efforts to rock the obstruction.

24. The obstruction assessment system as claimed in any of claims 20 - 23, wherein the control system records data from prior attempts and applies machine learning in providing the removal model.

25. The obstruction assessment system as claimed in any of claims 20 - 24, wherein the removal model includes information regarding whether an end-effector attached to a programmable motion device should be changed.

Description:
SYSTEMS AND METHODS FOR DYNAMIC PROCESSING OF OBJECTS PROVIDED IN VEHICLES WITH OBSTRUCTION REMEDIATION

PRIORITY

[0001] The present application claims priority to U.S. Provisional Patent Application No. 63/252,807 filed October 6, 2021 and U.S. Provisional Patent Application No. 63/252,811 filed October 6, 2021, the disclosures of which are hereby incorporated by reference in their entireties.

BACKGROUND

[0002] The invention generally relates to automated, robotic and other object processing systems such as sortation systems, and relates in particular to automated and robotic systems intended for use in environments requiring, for example, that a variety of objects (e.g., parcels, packages, articles, etc.) be processed and distributed to several output destinations.

[0003] Many parcel distribution systems receive parcels from a vehicle, such as a trailer of a tractor trailer. The parcels are unloaded and delivered to a processing station in a disorganized stream that may be provided as individual parcels or parcels aggregated in groups such as in bags, and may be provided to any of several different conveyances, such as a conveyor, a pallet, a Gaylord, or a bin. Each parcel must then be distributed to the correct destination container, as determined by identification information associated with the parcel, which is commonly determined by a label printed on the parcel or on a sticker applied to the parcel. The destination container may take many forms, such as a bag or a bin.

[0004] The sortation of such parcels from the vehicle has traditionally been done, at least in part, by human workers who unload the vehicle, then scan the parcels, e.g., with a hand-held barcode scanner, and then place the parcels at assigned locations. For example, many order fulfillment operations achieve high efficiency by employing a process called wave picking. In wave picking, orders are picked from warehouse shelves and placed at locations (e.g., into bins) containing multiple orders that are sorted downstream. At the sorting stage, individual articles are identified, and multi-article orders are consolidated, for example into a single bin or shelf location, so that they may be packed and then shipped to customers. The process of sorting these objects has traditionally been done by hand. A human sorter picks an object from an incoming bin, finds a barcode on the object, scans the barcode with a handheld barcode scanner, determines from the scanned barcode the appropriate bin or shelf location for the object, and then places the object in the so-determined bin or shelf location where all objects for that order have been defined to belong. Automated systems for order fulfillment have also been proposed, but such systems still require that objects be first removed from a vehicle for processing if they arrive by vehicle.

[0005] Such systems therefore do not adequately account for the overall process in which objects are first delivered to and provided at a processing station by a vehicle such as a trailer of a tractor trailer. Additionally, many processing stations, such as sorting stations for sorting parcels, are at times at or near full capacity in terms of available floor space and sortation resources, and there is therefore a further need for systems to unload vehicles and efficiently and effectively provide an ordered stream of objects.

SUMMARY

[0006] In accordance with an aspect, the invention provides an object processing system for facilitating removal of objects from a trailer of a tractor trailer. The object processing system includes an object obstruction detection system for identifying an obstruction during removal of objects, a force application system for applying any of a plurality of defined forces to the obstruction, an assessment system for collecting force feedback information responsive to the application of force to the obstruction and providing assessment information, and a control system for providing a removal model for removing the obstruction based on the assessment information.

[0007] In accordance with another aspect, the invention provides a method of facilitating removal of objects from a trailer of a tractor trailer. The method includes identifying an obstruction during removal of objects within the trailer, applying any of a plurality of defined forces to the obstruction, collecting force feedback information responsive to the application of force to the obstruction and providing assessment information, and providing a removal model for removing the obstruction based on the assessment information.

[0008] In accordance with a further aspect, the invention provides an obstruction assessment system for assessing an obstruction within a trailer of a tractor trailer. The obstruction assessment system includes a force application system for applying a plurality of defined forces to an obstruction, said plurality of forces including forces each with a defined axis, wherein the axes are mutually orthogonal, an assessment system for collecting force feedback information responsive to the application of force to the obstruction and providing assessment information, and a control system for providing a removal model for removing the obstruction based on the assessment information.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] The following description may be further understood with reference to the accompanying drawings in which:

[0010] Figure 1 shows an illustrative diagrammatic view of an object processing system in accordance with an aspect of the present invention;

[0011] Figure 2 shows an illustrative diagrammatic end view of the object processing system of Figure 1;

[0012] Figure 3 shows an illustrative diagrammatic functional flow diagram of a load assessment routine in a system in accordance with an aspect of the present invention;

[0013] Figure 4 shows an illustrative diagrammatic enlarged view of a dual-purpose tool in the object processing system of Figure 1 showing a pull side of the tool;

[0014] Figure 5 shows an illustrative diagrammatic enlarged view of the dual-purpose tool in the object processing system of Figure 4 showing the pull side of the tool engaging an object;

[0015] Figure 6 shows an illustrative diagrammatic enlarged view of a dual-purpose tool in the object processing system of Figure 1 showing a rake side of the tool;

[0016] Figure 7 shows an illustrative diagrammatic enlarged view of the dual-purpose tool in the object processing system of Figure 4 showing the rake side of the tool engaging an object;

[0017] Figure 8 shows an illustrative diagrammatic functional flow diagram of an object assessment routine in a system in accordance with an aspect of the present invention;

[0018] Figures 9A and 9B show illustrative diagrammatic end views of the object processing system of Figure 1 with the end-effector engaging a plurality of objects in a cross-direction (Figure 9A), and moving the engaged plurality of objects by rotation (Figure 9B);

[0019] Figures 10A and 10B show illustrative diagrammatic side views of the object processing system in accordance with another aspect of the present invention that includes a folding two sub-panel collection panel shown in an elevated position (shown in Figure 10A) and in a lowered position (shown in Figure 10B);

[0020] Figures 11A and 11B show illustrative diagrammatic side views of the object processing system in accordance with a further aspect of the present invention that includes a folding three sub-panel collection panel shown in an elevated position (shown in Figure 11A) and in a lowered position (shown in Figure 11B);

[0021] Figures 12A and 12B show illustrative diagrammatic side views of the object processing system in accordance with a further aspect of the present invention that includes a telescoping multi-panel collection panel shown in an elevated position (shown in Figure 12A) and in a lowered position (shown in Figure 12B);

[0022] Figure 13 shows an illustrative diagrammatic view of an object processing system in accordance with a further aspect of the present invention that includes tandem coordinated articulated arms;

[0023] Figure 14 shows an illustrative diagrammatic enlarged view of an object engagement portion of the system of Figure 13;

[0024] Figure 15 shows an illustrative diagrammatic flow diagram of a load assessment system in accordance with a further aspect of the present invention;

[0025] Figure 16 shows an illustrative diagrammatic flow diagram of an object assessment system in accordance with a further aspect of the present invention;

[0026] Figure 17 shows an illustrative diagrammatic view of an object processing system in accordance with an aspect of the present invention engaging a particularly long object;

[0027] Figure 18 shows an illustrative diagrammatic view of an object processing system in accordance with an aspect of the present invention engaging an object that is blocked from movement;

[0028] Figure 19 shows an illustrative diagrammatic functional flow diagram of an obstruction resolution routine in a system in accordance with an aspect of the present invention;

[0029] Figure 20 shows an illustrative diagrammatic view of an obstructed object being subjected to applied forces in each of three mutually orthogonal directions;

[0030] Figure 21 shows an illustrative diagrammatic view of the obstructed object of Figure 20 being subjected to forces in each of yaw, pitch and roll rotational directions;

[0031] Figure 22 shows an illustrative diagrammatic view of an object processing system in accordance with an aspect of the present invention that includes a plurality of substitutable end-effector tools for use with the object processing system;

[0032] Figure 23 shows an illustrative diagrammatic view of the object processing system of Figure 22 with a plurality of substitutable end-effector tools, with one end-effector tool being accessed by a programmable motion device;

[0033] Figure 24 shows an illustrative diagrammatic elevated view of a retention detection system in an object processing system in accordance with an aspect of the present invention;

[0034] Figure 25 shows an illustrative diagrammatic end view of an object processing system in accordance with an aspect of the present invention encountering a netting within a trailer;

[0035] Figure 26 shows an illustrative diagrammatic end view of an object processing system in accordance with an aspect of the present invention encountering a loaded wrapped pallet within a trailer;

[0036] Figure 27 shows an illustrative diagrammatic end view of an object processing system in accordance with an aspect of the present invention encountering a netting within a trailer including tandem coordinated arms;

[0037] Figure 28 shows an illustrative diagrammatic end view of an object processing system in accordance with an aspect of the present invention encountering a loaded wrapped pallet within a trailer including tandem coordinated arms;

[0038] Figure 29 shows an illustrative diagrammatic enlarged front view of a pallet removal system in accordance with an aspect of the present invention with the pallet forks in a lowered position;

[0039] Figure 30 shows an illustrative diagrammatic enlarged front view of a pallet removal system in accordance with an aspect of the present invention with the pallet forks in a raised position;

[0040] Figure 31 shows an illustrative diagrammatic enlarged front view of the pallet removal system of Figure 29 with the pallet removal system in a partially rotated position;

[0041] Figure 32 shows an illustrative diagrammatic enlarged front view of the pallet removal system of Figure 29 with the pallet removal system in a fully rotated position;

[0042] Figure 33 shows an illustrative diagrammatic underside view of the pallet removal system of Figure 29 with the pallet removal system in a nonrotated position;

[0043] Figure 34 shows an illustrative diagrammatic underside view of the pallet removal system of Figure 32 with the pallet removal system in a fully rotated position;

[0044] Figure 35 shows an illustrative diagrammatic side view of an object processing system in accordance with an aspect of the present invention in which a wrapped pallet is being removed from the trailer;

[0045] Figure 36 shows an illustrative diagrammatic view of the object processing system of Figure 35 wherein the wrapped pallet is being lowered onto a shipping and receiving dock;

[0046] Figure 37 shows an illustrative diagrammatic view of the object processing system of Figure 36 with the pallet removal system in a lowered position;

[0047] Figure 38 shows an illustrative diagrammatic view of the object processing system of Figure 37, wherein the wrapped pallet is removed from the trailer;

[0048] Figures 39A and 39B show illustrative diagrammatic views of an object processing system that includes a pallet removal system in accordance with another aspect of the present invention, showing the pallet removal system in a retracted position (Figure 39A) and extended but not yet rotated (Figure 39B);

[0049] Figure 40 shows an illustrative diagrammatic underside view of the object processing system of Figure 39A that includes a pallet removal system in accordance with another aspect of the present invention, showing the pallet removal system in a retracted position;

[0050] Figure 41 shows an illustrative diagrammatic underside view of the object processing system of Figure 39B that includes a pallet removal system in accordance with another aspect of the present invention, showing the pallet removal system in extended but not yet rotated position; and

[0051] Figure 42 shows an illustrative diagrammatic view of an output system of the object processing system in accordance with an aspect of the present invention, wherein objects are diverted based on any of weight or incompatibility.

[0052] The drawings are shown for illustrative purposes only.

DETAILED DESCRIPTION

[0053] In accordance with various aspects, the invention provides a dynamic engagement system for engaging objects within a trailer of a tractor trailer. With reference for example to Figure 1, a dynamic engagement system 10 may engage objects within a trailer 12, and include a chassis 14 that couples to a warehouse conveyor 16 via couplings 18. The chassis 14 (and the conveyor 16) are movable on wheels for permitting the engagement system 10 to enter into (and back out of) the trailer 12. The wheels on the chassis 14 are powered and the control system is remotely coupled to one or more computer processing systems 100.

[0054] With further reference to Figure 2, the dynamic engagement system includes a collection panel 20 that may be pivoted about its bottom edge to facilitate drawing objects from within the trailer 12 onto the conveyor chassis 14. In particular, an upper edge of the collection panel 20 may be positioned adjacent an upper level of a stack of objects within the trailer using one or more powered rotational assist units 22 (e.g., two on each side as further shown in Figure 14). Each assist unit 22 may also include force torque sensor feedback for measuring any forces acting on the panel 20. The powered rotational assist units 22 rotate the panel upward and downward about an axis 19 at the bottom of the panel 20 (shown in Figure 10A). Using, for example, the force torque sensor feedback, the system may lower the panel toward a stack of objects, detect that the panel has made contact with the stack, and may remain in position or back up a small distance until the panel is no longer contacting the stack of objects. Once the panel 20 is positioned adjacent a stack of objects (e.g., just below a top row of a stack of objects), two articulated arms 24, 26 are employed adjacent the panel to urge objects from the stack onto the panel 20 (which may include one or more guides 21).

[0055] Initially, the load of objects within a trailer may be assessed. With reference to Figure 3, a load assessment routine may begin (step 1000) by lowering the panel 20 to a position that is approximately horizontal (step 1002); the conveyor chassis 14 may then move toward the trailer (step 1004), and the panel may then be raised to a generally vertical position (step 1006). This may ensure that the dynamic engagement system does not begin too close to the objects. The panel is then lowered until the top of the trailer is visible (step 1008), and continues to be lowered until at least one upper object is detected (step 1010). Distance and position detection sensors in the perception unit 28 are then used to determine a height of the at least one upper object (step 1012) as well as a distance to the at least one upper object (step 1014). The panel is then further lowered to determine whether (and if so where) any lower objects are provided in the trailer that are closer in distance to the dynamic engagement system than the at least one upper object (step 1016). The highest object height, distance to the highest object, and distances to any closer objects are noted (step 1018), and the system then sets the panel rotation elevation and the distance to be moved forward toward the trailer for unloading responsive to the highest object height, distance to the highest object, and distances to any closer objects (step 1020). In particular, the panel is positioned to be near but below the closest and highest objects so that programmable motion devices (e.g., articulated arms) may be used to urge objects onto the panel 20, from which the objects will be guided along conveyor section 14 to warehouse conveyor 16.
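
For illustration only, the load assessment routine of Figure 3 may be sketched in code. The following minimal Python sketch assumes hypothetical panel, chassis and perception interfaces standing in for the hardware described above; none of these names are part of the disclosure.

    # Minimal sketch of the load assessment routine of Figure 3 (steps
    # 1000 - 1020); every interface here is a hypothetical stand-in.
    def assess_load(panel, chassis, perception):
        panel.lower_to_horizontal()                           # step 1002
        chassis.move_toward_trailer()                         # step 1004
        panel.raise_to_vertical()                             # step 1006
        panel.lower_until(perception.trailer_top_visible)     # step 1008
        panel.lower_until(perception.upper_object_detected)   # step 1010
        height = perception.upper_object_height()             # step 1012
        distance = perception.upper_object_distance()         # step 1014
        closer = perception.closer_lower_objects()            # step 1016
        # Note the measurements (step 1018), then set the panel rotation
        # elevation and the forward travel for unloading (step 1020).
        panel.set_rotation_elevation(height, distance, closer)
        chassis.set_forward_travel(distance, closer)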

[0056] Each articulated arm 24, 26 may include a multi-purpose end effector 30 that includes a retrieving tool 34, a distal side of which includes one or more vacuum cups 32 coupled to a vacuum source, and a proximal side 36 that may be used to pull objects over an upper edge of the collection panel 20. In particular, Figure 4 shows a plurality of vacuum cups 32 on one side of the tool 34. The vacuum cups are employed (with vacuum from a vacuum source) to grasp objects and pull them over the upper edge of the collection panel 20 as shown in Figure 5. The object (e.g., 38) may then be dropped onto the collection panel by turning off the vacuum to the cups 32. Figure 6 shows the multi-purpose end effector 30 of the articulated arm 26 (again with the vacuum cups 32 on the tool 34), and Figure 7 shows the second side 36 of the tool 34 used to pull one or more objects (e.g., 40, 42) over the upper edge of the collection panel 20, onto which they fall, optionally guided by one or more guides 21.

[0057] A side of an object may also be engaged to dislodge one or more objects from a stack or set of objects onto the panel. For example, Figure 9A shows the side 36 of the end effector tool 34 engaging an object 44 from a side of the object. An opening (e.g., 45) may have been identified adjacent a side of the object 44. Once engaged, the object 44 may be moved sideways by the tool 34, and then rotated toward the articulated arm to draw the object(s) over the panel 20. Figure 9A shows the tool 34 engaging not only object 44 but also objects 40 and 42, urging all three objects against each other and against an inner wall of the trailer. Figure 9B shows all three objects 40, 42, 44 being rotated over the panel 20 such that the objects will fall onto the collection panel 20, optionally engaging guides 21, to be collected by the conveyor 14.

[0058] Once the panel is positioned, each facing object is assessed. In particular, for example, and with reference to Figure 8, an object assessment routine may begin (step 2000) by evaluating object boundaries. For a panel elevation, for each object encountered top down and across, all boundaries of a front face of each object of interest are identified (step 2002). For each object of interest, the system will also determine any boundaries of a top face associated with the front face (step 2004). With this information, the system may determine whether the front face includes a surface suitable for vacuum cup grasping, and provide grasp assessment data (step 2006). The system may also determine whether any rear boundaries of the top face are spaced apart from any neighboring objects, and provide pull assessment data (step 2008). Additionally, the system may determine whether any side boundaries of the top face are spaced apart from any neighboring objects, and provide sideways move assessment data (step 2010). The system may then provide dynamic object engagement instructions responsive to the grasp assessment data, the pull assessment data, and the sideways move assessment data (step 2012).

[0059] The top edge 23 of the panel 20 should be positioned to permit objects (e.g., 38, 40, 42) to be moved over the panel 20 so that they may be dropped onto the panel (and thereby urged along the chassis conveyor 14 to the warehouse conveyor 16). The objects may generally be removed from top to bottom of an exposed stack of objects. As the objects are removed (and provided onto the panel 20), the panel is lowered to receive further objects. The panel 20 may be lowered by pivoting the panel (using the assist unit(s) 22) that rotate the panel with respect to a bottom edge thereof, as well as by using the powered wheels of the chassis 14 that move the dynamic engagement system backward to accommodate the lowering of the panel (as it rotates). In this way, a lower portion of the exposed stack of objects may be processed, and when further lowered and moved as discussed above, a still further lower portion of the exposed stack of objects may be processed.
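
Referring back to the object assessment routine of Figure 8, the per-object logic may be sketched as follows. This is an illustrative Python sketch only; the geometry helpers and plan_engagement are hypothetical, not the disclosed implementation.

    # Minimal sketch of the object assessment routine of Figure 8 (steps
    # 2000 - 2012); all helper names are hypothetical.
    def assess_object(obj, neighbors, plan_engagement):
        front = obj.front_face_boundaries()                 # step 2002
        top = obj.top_face_boundaries(front)                # step 2004
        grasp_ok = front.suitable_for_vacuum_cups()         # step 2006
        pull_ok = top.rear_boundary_clear_of(neighbors)     # step 2008
        slide_ok = top.side_boundaries_clear_of(neighbors)  # step 2010
        # Engagement instructions respond to all three assessments
        # (step 2012): grasp and pull, rake from behind, or move sideways.
        return plan_engagement(grasp_ok, pull_ok, slide_ok)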

[0060] Control of the rotation of the panel 20 about the axis 19 at the bottom of the panel 20 is provided by the panel assist units 22, each of which includes, for example, an offset pair of actuators 70, 72. Each actuator 70, 72 is offset from the axis 80 by a small difference, and the combination of the movement of the actuators (in cooperation with the actuators 70, 72 on the other side of the engagement system 10) causes the panel 20 to be rotated upward or downward with respect to the conveyor of the chassis 14. One or each of the actuators may further include force torque sensors that measure any forces acting on the panel 20 other than gravity.

[0061] The objects may thereby be removed from the trailer using the articulated arms 24, 26 to move objects onto the panel 20 as the panel travels (rotationally and linearly) through the trailer. When the perception system 28 detects that an opening (e.g., at 41, 43) may exist behind objects (e.g., 40, 42) sufficient to receive at least part of a tool 34 of an end effector 30, the system may position the tool of the end effector behind the object such that the second side 36 of the tool 34 may be used to pull one or more objects over the panel 20. Similarly, if an opening is determined to exist adjacent a side face of an object, the system may position a tool of an end effector adjacent the object such that the second side 36 of the tool may be used to urge one or more objects over the panel 20. If an object cannot be moved (or, for example, cannot be grasped, or the end effector tool 34 may not be able to get behind the object), then the system will note this and move on to another object.
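
The force torque feedback described in paragraphs [0054] and [0060] may, for example, drive a simple closed loop that seats the panel just above the stack. The sketch below is an assumption for illustration; the interfaces and the threshold value are not from the disclosure.

    # Lower the panel until the force torque sensors register contact with
    # the stack, then back off until the panel is free again.
    CONTACT_THRESHOLD_N = 5.0  # hypothetical contact threshold, in newtons

    def seat_panel_above_stack(assist_units, panel):
        while assist_units.external_force() < CONTACT_THRESHOLD_N:
            panel.rotate_down(step_degrees=0.5)  # offset actuators 70, 72
        while assist_units.external_force() >= CONTACT_THRESHOLD_N:
            panel.rotate_up(step_degrees=0.1)    # back off until clear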

[0062] Movement of the dynamic engagement system is provided through the one or more processing systems 100 in communication with the perception system 28, the articulated arms 24, 26, the rotational assist units 22 providing rotation, and the conveyor wheel actuators (e.g., 15) providing linear movement.

[0063] In accordance with further aspects, the collection panel may include sub-panels that may be rotatable with respect to one another such that the panel may be collapsed when it is lowered toward the floor of the trailer. This may facilitate reaching objects without extending the articulated arms 24, 26 significant distances to clear the edge of the collection panel. For example, Figure 10A shows an object processing system that includes a collection panel 50 with two sub-panels 52, 54. When extended to an upper elevation, the sub-panels are maintained in an extended position (end-to-end) by actuators 53. The panel 50 may include guides 51 to facilitate dropping objects onto the chassis 14. When the actuators 53 release the upper sub-panel 52, it will swing under and be captured on the underside of the sub-panel 54, leaving only the sub-panel 54 to extend toward the objects within the trailer. The outer edge of the sub-panel 54 is therefore now the leading edge of the collection panel 50, so that the articulated arms 24, 26 are not required to reach as far from the chassis 14. Figure 10A shows a side view of the collection panel 50 in the elevated position, and Figure 10B shows the collection panel 50 in the folded and lowered position.

[0064] The collection panel may include any number of such folding sub-panels. Figures 11A and 11B show side views of an object processing system with a collection panel 60 that includes three sub-panels 62, 64, 66. Figure 11A shows the collection panel 60 in an elevated position with each sub-panel 62, 64, 66 extending end-to-end, and Figure 11B shows the collection panel 60 with the sub-panel 62 folded with respect to the sub-panel 64, and the sub-panel 64 folded with respect to the sub-panel 66. In Figure 11B, the panel assembly is in a lowered position such that the outer edge of the sub-panel 66 is the leading edge of the collection panel 60, so that the articulated arms 24, 26 are not required to reach as far from the chassis 14.

[0065] The collection panel may further include any number of telescoping sub-panels. Figures 12A and 12B show side views of an object processing system with a collection panel 60’ that includes multiple telescoping sub-panels 62’, 64’, 66’. Figure 12A shows the collection panel 60’ in an elevated position with each sub-panel 62’, 64’, 66’ extending end-to-end, and Figure 12B shows the collection panel 60’ with the sub-panels 62’, 64’, 66’ collapsed in a telescoping manner. Any guides, e.g., 61’, are mounted on stand-offs with sufficient clearance to permit the sub-panels to be drawn together. In Figure 12B, the panel assembly is in a lowered position such that the outer edge of the sub-panel 66’ is the leading edge of the collection panel 60’, so that the articulated arms 24, 26 are not required to reach as far from the chassis 14.

[0066] In accordance with various further aspects, the dynamic engagement system 210 for engaging objects within a trailer 212 of a tractor trailer may include tandem coordinated articulated arms as shown in Figure 13. The dynamic engagement system 210 may engage objects within the trailer 212 and include a chassis 214 that couples to a warehouse conveyor 216 via couplings 218. The chassis 214 (and the conveyor 216) are movable on wheels for permitting the engagement system 210 to enter into (and back out of) the trailer 212. The wheels on the chassis 214 are powered, and the control system is remotely coupled to one or more computer processing systems 200.

[0067] With further reference to Figure 14, the engagement system 210 includes two programmable motion devices 224, 226, each of which includes an end effector 230 with, for example, a plurality of vacuum cups that are coupled to a vacuum source. The programmable motion devices 224, 226 are mounted on automated adjustable height bases 220, 222 that cooperate with the programmable motion devices 224, 226 to provide further vertical ranges of motion to the end effectors 230. The bases 220, 222 are mounted on a support structure 234 that also includes a chassis conveyor 236 that couples to the warehouse conveyor 216 (shown in Figure 13) via couplings 218. In accordance with various aspects, the programmable motion devices 224, 226 are cooperatively operable to efficiently unload objects from the trailer 212 as discussed in more detail below.

[0068] Initially, the load of objects within a trailer may be assessed. With reference to Figure 15, a load assessment routine may begin (step 3000) by moving the conveyor chassis toward the trailer (step 3002). The system may then scan until at least one upper object is visible (step 3004). Distance and position detection sensors in the perception units 228 are then used to determine a height of the at least one upper object (step 3006) as well as a distance to the at least one upper object (step 3008). The system then scans lower to determine whether (and if so where) any lower objects are provided in the trailer that are closer in distance to the dynamic engagement system than the at least one upper object (step 3010). The highest object height, distance to the highest object, and distances to any closer objects are noted (step 3012), and the system then sets the distance to be moved forward toward the trailer for unloading responsive to the highest object height, distance to the highest object, and distances to any closer objects (step 3014). In particular, the chassis is positioned near the nearest objects.

[0069] Once the chassis is positioned, each facing object is assessed. In particular, for example, and with reference to Figure 16, an object assessment routine may begin (step 4000) by evaluating object boundaries. For each object encountered top down and across, all boundaries of a front face of each object of interest are identified (step 4002). For each object of interest, the system will also determine any boundaries of a top face associated with the front face (step 4004). With this information, the system may determine whether the top face includes a surface suitable for vacuum cup grasping, and provide top grasp assessment data (step 4006). The system may also determine whether the front face includes a surface suitable for vacuum cup grasping, and provide front grasp assessment data (step 4008). The system may also determine whether any exposed side face includes a surface suitable for vacuum cup grasping, and provide side grasp assessment data (step 4010). The system may then provide dynamic object engagement instructions responsive to the top grasp assessment data, the front grasp assessment data, and the side grasp assessment data (step 4012). The processing is performed by one or more computer processing systems 200 in communication (either directly or wirelessly) with the programmable motion devices, perception units, conveyor systems and pallet removal systems discussed herein.
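
The tandem-arm face assessment of Figure 16 differs from the routine of Figure 8 in that each of the top, front and side faces is assessed for grasping. A minimal sketch in the same illustrative style (hypothetical helpers):

    # Minimal sketch of the object assessment routine of Figure 16 (steps
    # 4000 - 4012); helper names are hypothetical.
    def assess_object_faces(obj, plan_engagement):
        front = obj.front_face_boundaries()               # step 4002
        top = obj.top_face_boundaries(front)              # step 4004
        top_grasp = top.suitable_for_vacuum_cups()        # step 4006
        front_grasp = front.suitable_for_vacuum_cups()    # step 4008
        side_grasp = any(face.suitable_for_vacuum_cups()  # step 4010
                         for face in obj.exposed_side_faces())
        return plan_engagement(top_grasp, front_grasp, side_grasp)  # step 4012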

[0070] In accordance with various aspects, the programmable motion devices 224, 226 are cooperatively operable to provide dynamic engagement of the objects in the trailer. The cooperative operability, for example, may minimize any time that one programmable motion device must wait for the other programmable motion device. This may be achieved by recognizing that the processing of objects that are closer to the upper elevations within the trailer may require more time (to be grasped, moved and placed onto the conveyor 236) than do objects that are not near the upper elevations within the trailer. The programmable motion devices 224, 226 may both process objects at comparable lower elevations at the same time, or both process objects at comparable upper elevations at the same time. In this way, the devices 224, 226 may most efficiently alternate grasping objects within the trailer and placing them onto the chassis conveyor 236. With both devices grasping objects at the same elevation, the travel time from grasping an object to placement on the conveyor 236 is closely matched as between the devices 224, 226.

[0071] In accordance with further aspects, the cooperative operability may involve one programmable motion device grasping an object, and moving the object toward a location commonly reachable by both devices 224, 226. The other (second) programmable motion device may then grasp the object, the first programmable motion device then releases the object, and the second programmable motion device then moves the object to the conveyor 236 on which it is placed. One programmable motion device (e.g., 226) may grasp an object. The programmable motion device may then move the object to a commonly reachable location, that is, a location at which both programmable motion devices may be able to reach the object at the same time. The other programmable motion device 224 may then also grasp the object, and the first programmable motion device 226 may then release the object. The programmable motion device 224 may then move the object 238 to the conveyor 236, and place the object onto the conveyor 236. With both devices working together in this way, the time to unload objects from the trailer may be improved in certain applications, for example, where the programmable motion devices must reach further distances to grasp objects, or where objects are required to be placed slowly onto the conveyor 236.
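
The handoff variant of the cooperative operability is an ordered sequence in which both devices briefly hold the object. A minimal illustrative sketch, with a hypothetical arm interface and a hypothetical commonly reachable pose:

    # Handoff of an object between the two programmable motion devices
    # (paragraph [0071]); all names are hypothetical.
    COMMON_REACH_POSE = (0.0, 0.0, 0.0)  # a pose both devices can reach

    def handoff(first_arm, second_arm, conveyor, obj):
        first_arm.grasp(obj)
        first_arm.move_to(COMMON_REACH_POSE, obj)
        second_arm.grasp(obj)   # both devices hold the object at once
        first_arm.release(obj)
        second_arm.place_on(conveyor, obj)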

[0072] In accordance with further aspects, the cooperative operability may involve the two programmable motion devices working together to lift a heavy object. The cooperative operability may involve one programmable motion device grasping an object 240, and sliding the object to a location commonly reachable by both devices 224, 226. The other (second) programmable motion device may then also grasp the object, and the first and second programmable motion devices may then together lift the object to the conveyor 236 on which it is placed. One programmable motion device (e.g., 224) may grasp an object. If the object is determined to be heavy (either by having the device 224 try to lift the object, or by accessing a database with a known identity of the object), the programmable motion device may then slide the object toward a commonly reachable location at which both programmable motion devices 224, 226 may be able to grasp the object at the same time. The other programmable motion device (e.g., 226) may then also grasp the object. The grasp position of the first programmable motion device 224 may also be adjusted, and the object 240 may be further moved with the new grasp position. The two programmable motion devices may then cooperate to lift the object onto the conveyor 236. With both devices working together in this way, the system may be able to handle the processing of heavier objects, may increase the speed of handling heavier objects, and may increase system life by reducing loads on individual programmable motion devices. Movement of the dynamic engagement system is provided through the one or more processing systems 200 in communication with the perception system 228, the articulated arms 224, 226, the automated adjustable bases 220, 222, and the conveyor wheel actuators.
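
The cooperative heavy-object lift may be sketched the same way, reusing COMMON_REACH_POSE from the preceding sketch; the heaviness test and the remaining interfaces are likewise assumptions:

    # Cooperative lift of a heavy object (paragraph [0072]).
    def cooperative_lift(arm_a, arm_b, conveyor, obj):
        arm_a.grasp(obj)
        # Heaviness may be determined by a trial lift or a database lookup.
        if arm_a.object_is_heavy(obj):
            arm_a.slide_to(COMMON_REACH_POSE, obj)
            arm_b.grasp(obj)
            arm_a.adjust_grasp(obj)  # grasp position may be adjusted
            arm_a.lift_together(arm_b, obj, onto=conveyor)
        else:
            arm_a.place_on(conveyor, obj)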

[0073] In various applications, obstructions may be encountered, and these obstructions may be addressed in any of a variety of ways using modelling and machine learning. For example, a particularly large object may be encountered (e.g., one that is very long) as shown in Figure 17. The long object 72 may be encountered when only an exposed side is visible, or it may be apparent when the object is encountered that it will be long (e.g., an exposed end of a kayak). If the system is unable to move an object, it will move on to the task of moving other objects (as discussed above) until the object is sufficiently free. Additionally, there may be further objects (e.g., 74) on top of the object 72 that are not yet reachable by the articulated arms 24, 26. In further applications, obstructions may be encountered where the object is too heavy to be moved or cannot be freed from surrounding objects. Figure 18 shows an object 76 that is blocked from being moved by the end-effector 30 by surrounding objects 73, 75, 77, 78, 79.

[0074] In either of these situations, the system may apply a maximum normal run-time vacuum pressure, and if this fails, the system may set a signal indicating the need for human personnel intervention. Alternatively, the system may conduct some analyses and develop a removal model. The system may characterize the end-effector movements in terms of the forces and torques it can apply to the load, and then look at the ensemble of objects, the wall, all the places the effector could be placed, and the forces and torques that could be applied. The system may estimate what resulting motion would occur. Sometimes an object moves, e.g., lifts up, slides out of the wall, or slides onto the platform. Sometimes the object may pivot to a more accessible pose. Sometimes, however, the object may pivot to be cocked and harder to remove, which information should be provided by the model (so that such actions may be avoided). Sometimes multiple objects move, which is generally acceptable. Simulation modules characterize the possible outcomes of feasible end-effector actions. Machine learning may further be used to learn a mapping from loads to good end-effector behaviors, given the wide variability of events: the object doesn’t move, the object is heavier than anticipated, the friction is more significant than anticipated, or neighboring objects move in unwanted ways. The modelled outcome could be observed and integrated into the modeling system so that removal models may be developed accordingly.
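
One way to read the simulation passage above is as a scoring of feasible end-effector actions by their predicted outcome. The sketch below is only one possible rendering under that assumption; the outcome labels mirror the text, while the scores and interfaces are hypothetical.

    # Score candidate end-effector actions by simulated outcome
    # (paragraph [0074]); simulate(action) returns an outcome label.
    OUTCOME_SCORE = {
        "moves_free": 1.0,         # lifts up, slides out, or onto the platform
        "pivots_accessible": 0.7,  # pivots to a more accessible pose
        "neighbors_move": 0.3,     # multiple objects move (generally acceptable)
        "pivots_cocked": -1.0,     # cocked and harder to remove (to be avoided)
    }

    def best_action(candidate_actions, simulate):
        return max(candidate_actions,
                   key=lambda action: OUTCOME_SCORE[simulate(action)])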

[0075] For example, Figure 19 shows the functional process of an obstruction resolution routine that may begin (step 5000) by noting, for each insufficient grasp or insufficient move, the perception data regarding the obstructed object (step 5002). The system may then grasp the object and try to move it in each of the x, y and z directions, noting feedback from joint force torque sensors on the robot (step 5004). The system may then try to move the object in each of the yaw, pitch and roll directions, noting feedback from joint force torque sensors on the robot (step 5006). This sensor feedback information may provide significant data that facilitates not only identifying an efficient removal model, but may also help classify objects to facilitate handling unknown objects. The system may then access the database regarding any modelled motion (step 5008), and if no removal model is found, the system may rock the object horizontally to try to free the object from side obstructions (step 5010). Such rocking may sufficiently loosen the object for removal. The system may then record image(s) of the rocking and any movement of surrounding objects (step 5012). If it is determined that a different end-effector should be used, the system may swap out the end-effector for any desired different end-effector (step 5014) as also shown in Figures 22 and 23. The system may then access the machine learning database regarding data collected for the object (step 5016) and then develop an obstruction removal model (step 5018).
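
The obstruction resolution routine of Figure 19 may be sketched step by step as follows (illustrative Python; every interface is a hypothetical stand-in for the systems described above):

    # Minimal sketch of the obstruction resolution routine of Figure 19
    # (steps 5000 - 5018); all interfaces are hypothetical.
    def resolve_obstruction(arm, obj, perception, model_db, ml_db, log):
        log.append(perception.data_for(obj))             # step 5002
        arm.grasp(obj)
        probes = {}
        for direction in ("x", "y", "z",                 # step 5004
                          "yaw", "pitch", "roll"):       # step 5006
            probes[direction] = arm.try_move(obj, direction).joint_torques()
        model = model_db.removal_model_for(obj, probes)  # step 5008
        if model is None:
            arm.rock_horizontally(obj)                   # step 5010
            log.append(perception.images_of(obj))        # step 5012
        elif model.needs_new_end_effector:
            arm.swap_end_effector(model.end_effector)    # step 5014
        history = ml_db.data_for(obj, probes)            # step 5016
        return ml_db.develop_removal_model(probes, history)  # step 5018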

[0076] With reference to Figure 20, for example, the end-effector 30 may try to move an obstructed object 76 in each of the x, y, and z directions, noting the feedback on joint force torque sensors on the articulated arm. With reference to Figure 21, the end-effector 30 may further try to move the obstructed object 76 in each of the yaw, pitch and roll directions, again noting the feedback on joint force torque sensors on the articulated arm. This non-visually observable feedback information may provide valuable insights for the machine learning system in developing efficient removal models.

[0077] Figure 22 shows an object processing system in accordance with an aspect of the present invention as discussed above that includes a pair of end-effector swap racks 48, 58 on which a plurality of further end-effectors 30’, 30” may be provided for use by the articulated arms 24, 26. As shown in Figure 23, each articulated arm (e.g., 24 as shown) may access each further end-effector, automatically swapping out the end-effector as the removal model may require.

[0078] A retention detection system may also be employed to determine whether a retention system is present within a trailer (e.g., a restraining net, a wall, or a set of objects that are wrapped together and provided on a pallet). With reference to Figure 24, the retention detection system begins (step 6000) by being triggered for each object that may not be sufficiently processed. In particular, for each insufficient grasp of an object or insufficient attempted move of an object, the following data is collected (step 6002). This is done until the panel is lowered to its lowest point and all movable objects are moved. The system then records instances of net lines across front faces of retained objects (step 6004), and then records instances of net lines extending horizontally across multiple retained objects (step 6006). The system then records instances of net lines extending vertically across multiple retained objects (step 6008), and then records any image of any portion of a pallet near the floor of the trailer (step 6010). The system then sets a net detection signal responsive to any instances of net lines in connection with a plurality of retained objects (step 6012), and then sets a pallet detection signal responsive to any image of any portion of a pallet near the floor of the trailer in connection with a plurality of retained objects (step 6014). The system then engages the automated pallet removal system responsive to the pallet detection signal (step 6016).
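
The retention detection routine reduces to collecting net-line and pallet evidence and setting two signals. A minimal illustrative sketch (hypothetical interfaces, not the disclosed implementation):

    # Minimal sketch of the retention detection routine (steps 6000 - 6016).
    def detect_retention(failed_objects, perception, pallet_remover):
        evidence = [perception.data_for(obj)                       # step 6002
                    for obj in failed_objects]
        front_nets = perception.net_lines_on_front_faces(evidence)  # step 6004
        horiz_nets = perception.net_lines_spanning_horizontally()   # step 6006
        vert_nets = perception.net_lines_spanning_vertically()      # step 6008
        pallet_image = perception.pallet_portion_near_floor()       # step 6010
        net_detected = bool(front_nets or horiz_nets or vert_nets)  # step 6012
        pallet_detected = pallet_image is not None                  # step 6014
        if pallet_detected:
            pallet_remover.engage()                                 # step 6016
        return net_detected, pallet_detected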

[0079] The programmable motion devices 24, 26, 224, 226 each include force torque sensors at each of their respective joints, and the output information from the force torque sensors is provided to the one or more computer processing systems 100, 200.

[0080] During the removal of objects therefore, if any object may not be removed (either may not be able to be grasped properly, or may not be movable due to an obstruction), the system will run a retaining detection routine to determine whether any of the objects are retained within the trailer. The system will continue, moving to a next object, until all objects that may be moved are moved onto the panel 20. Each time an object is identified as being not movable (again, either not graspable or blocked), the system will run the retaining detection routine. The retaining detection routine may be run on the one or more computer processing systems 100 with perception data from the perception unit 28, and may analyze image data in combination with object grasp attempt data to identify whether any retention system is inhibiting the removal of objects from the trailer. If a retaining feature is present, the routine will have run for each object that is found to be not movable, and a combination of the results of the multiple executions of the routine provides duplicative results that should confirm the type of retaining feature present. For example, Figure 25 shows a netting 56 that spans the width and height of the trailer and is attached to mounts 55. Such netting 56 may be installed manually upon loading of the trailer, and may be required to be removed manually when unpacking the trailer. An alarm (light and/or sound) will be triggered if a netting is detected by the system, and the netting may be removed by human personnel.

[0081] Alternatively, as the movable objects are removed, an image of an exposed end of a pallet at the floor of the trailer may be detected. The objects on the pallet may be wrapped (accounting for the system being unable to move individual objects), and upon detection of the pallet, the system will trigger a pallet removal command. Figure 26, for example, shows a pallet 57 on which objects are provided within a wrapping (e.g., clear plastic) 58. Objects within the wrapping 58 on the pallet will not be movable by the end effector 30, and the system will run the retaining detection routine. Once the bottom of the trailer becomes clear, the pallet 57 will become visible to the perception system 28, and the system will register that a pallet is present. Again, an alarm (light and/or sound) will be triggered if a pallet is detected by the system, and the pallet and its associated objects may be removed by human personnel.

[0082] During the removal of objects therefore, if any object may not be removed (either may not be able to be grasped properly or may not be movable due to an obstruction), the system will run the retaining detection routine to determine whether any of the objects are retained within the trailer. The system will continue, moving to a next object, until all objects that may be moved are moved. Each time an object is identified as being not movable (again, either not graspable or blocked), the system will run the retaining detection routine. The retaining detection routine may be run on the one or more computer processing systems 200 with perception data from the perception units 228, and may analyze image data in combination with object grasp attempt data to identify whether any retention system is inhibiting the removal of objects from the trailer. If a retaining feature is present, the routine will have run for each object that is found to be not movable.

[0083] A combination of the results of the multiple executions of the routine provides duplicative results that should confirm the type of retaining feature present. For example, Figure 27 shows a netting 250 fastened to an attachment mount that spans the width and height of the trailer. Such netting may be installed manually upon loading of the trailer, and may be required to be removed manually when unpacking the trailer. An alarm (light and/or sound) will be triggered if a netting is detected by the system, and the netting may be removed by human personnel.

[0084] Alternatively, as the movable objects are removed, an image of an exposed end of a pallet at the floor of the trailer may be detected. The objects on the pallet may be wrapped (accounting for the system being unable to move individual objects), and upon detection of the pallet, the system will trigger a pallet removal command. Figure 28, for example, shows a pallet 260 on which objects are provided within a wrapping (e.g., clear plastic) 262. Objects within the wrapping 262 on the pallet will not be movable by the end effector 230, and the system will run the retaining detection routine. Once the bottom of the trailer becomes clear, the pallet 260 will become visible to the perception system 228, and the system will register that a pallet is present. Again, an alarm (light and/or sound) will be triggered if a pallet is detected by the system, and the pallet and its associated objects may be removed by human personnel.

[0085] In accordance with further aspects, when the system detects the presence of a pallet as above, the system may employ an automated pallet removal system. In particular, and with reference to Figure 29, the system may include a pallet removal system 80 that includes a fixed pivot end 82 that rotates with respect to the chassis conveyor 14 by a pivot pin 84, and a rotating swing end 86, both ends 82, 86 being coupled to a swing bar 88. The swing bar 88 is attached to a counterweight portion 110 that is supported by a plurality of heavy-duty casters 112. The pallet removal system 80 also includes a pair of forks 94, 96 that are mounted to a cross bar 98, and the cross bar 98 may be actively raised or lowered along tracks 102, 104 as controlled by the one or more computer processing systems. Figure 30 shows the forks 94, 96 and the cross bar 98 in the raised position. The pallet removal system 80 may also include one or more perception systems at the cross bar 98 to aid the pallet removal process (as the perception system 28 may be blocked by the panel 20).

[0086] With reference to Figures 31 and 32, the pallet removal system 80 may be rotated with respect to the chassis 14 about the pin 84 (e.g., to 45 degrees as shown in Figure 31, and to 90 degrees as shown in Figure 32). Figure 33 shows an underside view of the pallet removal system 80 under the chassis conveyor 14, and Figure 34 shows an underside view of the pallet removal system rotated 90 degrees (as in Figure 32). The counterweight 110 facilitates lifting of a pallet, and the casters 112 (together with wheels under the pivot end 82 and swing end 86) support the weight of the counterweight 110 and the pallet. Figure 35 shows a pallet of objects being removed from the trailer, and Figure 36 shows the pallet of objects rotated 90 degrees by the removal system. Figure 37 shows an opposite side view of the pallet as rotated 90 degrees in Figure 36, and Figure 38 shows the pallet removed and unloaded from the removal system. The removed and unloaded pallet is no longer obstructing the removal of objects from the trailer, and the dynamic engagement system may re-enter the trailer and again begin removing objects. The removed and unloaded pallet may be processed by human personnel.

[0087] In accordance with further aspects, when the system employing tandem coordinated articulated arms detects the presence of a pallet as above, the system may employ an automated pallet removal system. In particular, and with reference to Figures 39A and 39B, the system may include a pallet removal system 280 that includes a slidable pivot end 282 that first moves linearly along a track 283, and then rotates with respect to the chassis conveyor 214 by a pivot pin. The system also includes a rotating swing end 286, and both of the ends 282, 286 are coupled to a swing bar. The swing bar is attached to a counterweight portion 210 that is supported by a plurality of heavy-duty casters as shown in Figure 40 (shown retracted) and Figure 41 (shown extended but not yet rotated). The system includes the track 283 along which the pin is guided as the assembly moves linearly (powered by the active wheels) to move the assembly to the end of the chassis 214. The pallet removal system 280 also includes a pair of forks 294, 296 that are mounted to a cross bar, and the cross bar may be actively raised or lowered along tracks as controlled by the one or more computer processing systems 200. The pallet removal system 280 may also include one or more perception systems on the cross bar to aid the pallet removal process.

[0088] Figure 42 shows an output system in accordance with further aspects of the present invention that includes a perception system 120 with a diverting system 122. The perception system may include one or more perception units 124 that provide any of camera images, scan data such as 2D or 3D scan data, or perception information regarding any identification code such as a barcode, QR code, or other unique identification markings. The perception system may further include a weight sensing conveyor section as part of the diverting system 122 (e.g., with rollers or diverter belts mounted on load cells). The diverter 122 may include bi-directional belts that are selectively elevatable between the rollers. The perception and diverting systems sit between the chassis 14 and the warehouse conveyor 16, and permit outlier items such as heavy or large items to be diverted to one or the other of the diverter paths 126, 128 for either alternate automated processing or for processing by human personnel. The decision to divert any object is based on the perception information from the perception system (e.g., size, weight, identification, etc.).
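
The divert decision itself may be as simple as comparing the perception outputs against configured limits. The following sketch is an assumption for illustration; the thresholds, attribute names and interface are not from the disclosure.

    # Illustrative divert decision for the output system of Figure 42.
    MAX_WEIGHT_KG = 23.0  # hypothetical heavy-object limit
    MAX_LENGTH_M = 1.2    # hypothetical large-object limit

    def route_object(obj, diverting_system):
        heavy = obj.weight_kg > MAX_WEIGHT_KG           # weight-sensing rollers
        large = obj.longest_dimension_m > MAX_LENGTH_M  # 2D/3D scan data
        incompatible = obj.flagged_incompatible         # identification code
        if heavy or large or incompatible:
            diverting_system.divert(obj)        # to diverter path 126 or 128
        else:
            diverting_system.pass_through(obj)  # onward to warehouse conveyor 16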

[0089] Those skilled in the art will appreciate that numerous modifications and variations may be made to the above disclosed embodiments without departing from the spirit and scope of the present invention.
