

Title:
MEDIDATA CLINICAL TRIAL SYSTEM INTEGRATION WITH ORACLE CODING SYSTEM
Document Type and Number:
WIPO Patent Application WO/2018/031697
Kind Code:
A1
Abstract:
A method of collecting, monitoring and comparing data in a multi-variable, multi-participant clinical analysis system and integrating the collection and monitoring with a thesaurus database is disclosed. The method utilizes one or more computer-based units for collecting, validating, integrating and storing the integrated data from multiple participants in a clinical study. The method also utilizes a remote main computer system connected to the computer-based collection units for receiving the uploaded integrated data from the collection unit for further processing and downloading new or modified data and collection instructions to the remote collection units, with software and interfacing apparatus cooperating for downloading and inputting new or updated data, the collection units and the main computer being synchronized so that the data in all the connected units is the same.

Inventors:
CARUSO DONNA (US)
CLARK DENISE (US)
Application Number:
PCT/US2017/046168
Publication Date:
February 15, 2018
Filing Date:
August 09, 2017
Assignee:
DBMS CONSULTING INC (US)
International Classes:
G06F19/00
Foreign References:
US20150178447A12015-06-25
US20080016111A12008-01-17
US20060147099A12006-07-06
US20100049740A12010-02-25
KR20140099506A2014-08-12
Attorney, Agent or Firm:
RAM, Michael, J. et al. (US)
Claims:
We Claim

1. A method of collecting, monitoring and comparing data in a multi-variable, multi-participant clinical analysis system and integrating the collection and monitoring with a thesaurus database, comprising:

a. providing one or more computer-based collection units at user-facility locations for collecting medical data, validating said collected data, integrating the data with the thesaurus database, and storing the integrated data therein from multiple participants in a clinical study, said computer-based collection unit including software-based instruction protocols for guiding the user of the data collection, monitoring and integration system through various procedural steps for the collection, analysis, comparison and storage of data entered for each of the multiple participants,

b. providing a remote main computer system connected to the computer-based collection unit at the user-facility, the main computer system receiving the uploaded integrated data from said computer-based collection unit for further processing and downloading new or modified collection instructions to the remote collection units,

c. providing a display means in communication with the computer-based unit for displaying data, instruction windows and graphics pertaining to the clinical study and the participants therein to the user of the remote collection unit, and

d. providing data input software and an interfacing apparatus connected to the computer-based collection unit at the user-facility location to allow a user of the computer-based collection unit to interact with the data displayed in real time, wherein the user can edit the information shown on the display or input new information into the remote collection unit, wherein the input software and the interface apparatus cooperate for downloading and/or inputting new or updated clinical study details regarding the participants in the clinical study and uploading data from the remote data collection units to a data storage unit, and wherein the computer-based collection units and the main computer system are synchronized so that the data accessed using the remote data collection units and the main computer system is the same.

2. The method of claim 1, wherein the data collection comprises creating, changing, or deleting data details, information parameters, and data values for each participant in the clinical study and storing those details, parameters, and values in the data storage unit for access by the main computer and the remote data collection units and the subsequent generation of reports of the data entered for each or all participants.

3. The method of claim 2, further comprising the step of exporting data from the one or more computer-based collection units at user-facility locations, the exported data from the computer-based collection units being imported to and consolidated at the main computer unit.

Description:
MEDIDATA CLINICAL TRIAL SYSTEM INTEGRATION WITH ORACLE CODING SYSTEM

[0001] Benefit is claimed of Provisional Application Serial No. 62/372,727, filed August 09, 2016.

SUMMARY

[0002] The System described herein, referred to as the DBMS/TMS to Medidata Rave Integration System, or TMSINT, is provided as a commercial off-the-shelf (COTS) solution that enables a client (the System user) using the Medidata Rave electronic data capture (EDC) system to integrate the client's source study data requiring medical coding with the Oracle® Thesaurus Management System (TMS) Repository, to perform medical coding and return the classifications back to the Medidata Rave EDC system. While not limited in use to data collection and subsequent analysis and tabulation of data, the primary use of the system is to collect, enter and compare patient outcomes in clinical trials of experimental drugs, medical procedures and medical devices, particularly clinical trials where there are multiple variables.

MEDIDATA and MEDIDATA RAVE are registered trademarks of Medidata Solutions, Inc. of New York for downloadable software for enabling clinical researchers to collect data on individuals subject to clinical trials. Once the client and participant identity is defined and configured, the TMS to Medidata Rave Integration application is available to run at a scheduled frequency or manually (ad hoc) to extract clinical data from the client's Medidata Rave EDC source system, process the data in the Thesaurus Management System and subsequently write applicable data back to the client's Medidata Rave source system.

[0003] The key benefits of the TMS to Medidata Rave Integration application are:

[0004] 1. Bi-directional integration of data in the Medidata Rave EDC system and the TMS Repository using the DBMS TMS to Medidata Rave Integration (TMSINT) application,

[0005] 2. Secure data transfer via Rave Web Services (RWS) technology and encrypted authentication,

[0006] 3. Coding and recoding data from the Medidata Rave EDC system in the TMS Repository,

[0007] 4. Delivering the data coded and recoded in the TMS Repository to the Medidata Rave EDC system,

[0008] 5. Delivering the TMS actions (queries) in the TMS Repository to the Medidata Rave EDC system, and

[0009] 6. Providing a query response from the Medidata Rave EDC system to the TMS Repository.

[0010] The key processes of the TMS to Medidata Rave Integration application are:

[0011] 1. Data Extraction processes that read and stage data from the Medidata Rave EDC system to the TMS to Medidata Rave Integration application's custom staging tables in the Oracle RDBMS.

[0012] 2. INBOUND Data Formatting, where the TMS to Medidata Rave Integration application's custom Staging APIs format the data in the application's custom staging tables and move it to the inbound tables in the Oracle RDBMS.

[0013] 3. INBOUND Data Processing, where the TMS to Medidata Rave Integration application's custom INBOUND APIs process the data in the application's inbound tables with the Oracle Thesaurus Management System (TMS) Repository. This process includes maintaining the client data "Synchronized" with the TMS Repository and addresses and interrogates the following conditions (an illustrative sketch follows this list):

• Has the Verbatim Term Assignment (VTA) been declassified? (patient coding is now a patient omission)

• Has the VTA been classified? (patient omission is now a patient coding record)

• Has the VTA been re-classified? (manual reclassification or direct-match reclassification during versioning)

• Has the VTA's parent DT been updated? (dictionary version upgrade)

• Has the VTA's parent DT been made inactive? (dictionary version upgrade)

• Has the VTA's parent DT been promoted? (dictionary version upgrade)

• Has the VTA's parent DT been demoted? (dictionary version upgrade)

• Has one or more components of the "Derivation Path" changed? (i.e., has the dictionary version been upgraded?)
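
The following is a minimal, hypothetical PL/SQL sketch of how such synchronization checks might be expressed. The staging table and column names (tmsint_stg_vta_status, vta_status, prior_vta_status, dt_status, derivation_changed) are illustrative assumptions only and are not part of the application's published schema.

    -- Hypothetical sketch: classify each staged VTA record by the
    -- synchronization condition it represents. All names are assumed.
    DECLARE
      CURSOR c_staged IS
        SELECT vta_id, vta_status, prior_vta_status, dt_status, derivation_changed
          FROM tmsint_stg_vta_status;  -- assumed staging table
    BEGIN
      FOR r IN c_staged LOOP
        IF r.prior_vta_status = 'CLASSIFIED' AND r.vta_status = 'OMISSION' THEN
          NULL;  -- declassified: patient coding is now a patient omission
        ELSIF r.prior_vta_status = 'OMISSION' AND r.vta_status = 'CLASSIFIED' THEN
          NULL;  -- classified: patient omission is now a patient coding record
        ELSIF r.dt_status = 'INACTIVE' THEN
          NULL;  -- parent DT made inactive during a dictionary version upgrade
        ELSIF r.derivation_changed = 'Y' THEN
          NULL;  -- a component of the "Derivation Path" changed
        END IF;
      END LOOP;
    END;
    /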

[0014] 4. INBOUND data is processed in the TMS Repository and INBOUND Data is moved to the CODING Tables:

• Programmatically codes an exact match to a dictionary term or an existing VTA in the Domain linking the TMS verbatim term to a dictionary term, and

• Programmatically uses the DBMS custom Autoencoder Search Objects to code near-matches to a dictionary term or an existing VTA in the Domain linking the TMS verbatim term to a dictionary term.

• If a match is still not found, manual intervention is required: the TMS to Medidata Rave Integration application's custom APIs create an omission in TMS and a record in the source terms table.

• An omission is handled in TMS by either coding the term or by applying an Action.

[0015] 5. The TMS to Medidata Rave Integration application enables the user to proceed with Medical Coding in the TMS Repository to:

• Classify Omissions,

• Create, Maintain, Approve Action for Query in the Medidata Rave EDC system,

• Approve Classifications (optional),

• Reclassify Classifications, and

• Declassify with Action

[0016] 6. OUTBOUND Data Processing: The TMS to Medidata Rave Integration application's custom OUTBOUND APIs process the data in the Oracle Thesaurus Management System (TMS) Repository with the application's outbound tables. This process includes keeping client data "Synchronized" with the TMS Repository and addresses and interrogates the following conditions:

• Has the VTA been declassified? (patient coding is now a patient omission)

• Has the VTA been classified? (patient omission is now a patient coding record)

• Has the VTA been re-classified? (manual reclassification or direct-match reclassification during versioning)

• Has the VTA's parent DT been updated? (dictionary version upgrade)

• Has the VTA's parent DT been made inactive? (dictionary version upgrade)

• Has the VTA's parent DT been promoted? (dictionary version upgrade)

• Has the VTA's parent DT been demoted? (dictionary version upgrade)

• Has one or more components of the "Derivation Path" changed? (dictionary version upgrade)

[0017] 7. OUTBOUND Data Formatting: The TMS to Medidata Rave Integration application's custom Staging APIs process the data in the application's custom outbound tables to the import staging tables.

[0018] 8. Data Import: Writes the import staging data containing the following data to the Medidata Rave EDC system.

• Coding classifications and related dictionary details.

• Open queries for the Sites to respond to the coder's queries.

BACKGROUND

[0019] Generally speaking, the technology at issue teaches a method that allows object (software) applications and relational databases to work together, despite the differences in the ways software applications and database systems organize data. Specifically, the technical problem is that no prior approach was available to provide real-time transactional XML data exchange interfaces and Data Processing APIs (Application Programming Interfaces) that support data integration and data processing via database manipulations for coding data from clinical EDC (Electronic Data Capture) applications (e.g., Rave EDC) with clinical terminology and coding applications (e.g., Oracle Thesaurus Management System).

[0020] An example of such a relational database system is provided by Oracle Corporation, Redwood City, CA. This database, as described by Oracle, is a collection of data treated as a unit. The purpose of a database is to store and retrieve related information and provide a means for managing voluminous quantities of related information. Generally, a server is programmed to reliably manage the large amount of data in a multiuser environment while providing multiple users the capability of concurrently accessing the same data. The database server also prevents unauthorized access, prevents unintentional changes to the data in the system and provides solutions for failure recovery.

[0021] The Oracle Database was one of the first databases designed for enterprise grid computing while providing flexible ways to manage information and applications. Enterprise grid computing establishes large pools of industry-standard, modular storage and servers such that each new system can be rapidly provided from the pool of components and capacity can be readily added or reallocated from the resource pools as required. The database has both logical and physical structures and, because the physical and logical structures are separate, the physical storage of data can be managed without affecting the access to logical storage structures.

[0022] Oracle has several patents covering their database including, but not limited to, US5,850,547, US5,991,754, US6,023,895, US7,165,065, US7,171,427, US7,299,223, US7,366,730, US7,415,457, US8,001,112, US8,200,612, US8,209,280, US8,566,341, US8,612,421, US8,812,488, US8,868,544, US9,230,007, US9,317,529, US Application 2004/0254947, US Application 2004/0260690, US Application 2005/0004904, US Application 2005/0033741, US Application 2005/0076065 and US Application 2011/0113031.

[0023] Prior approaches to allow software to operate in conjunction with the Oracle Database were addressed in several patents.

[0024] US Patent Number 5,857,197, issued January 5, 1999, "System and Method for Accessing Data Stores as Objects," (the '197 Patent) teaches the use of an adapter abstraction layer using two adapters (interfaces). According to the patentee, of the two adapters or interfaces, one specializes in the object model and the other specializes in the relational database model. The '197 Patent is said to teach the "basic persistence architecture" of the method, with advanced features related to development, modeling, and caching claimed in subsequent patents.

[0025] US Patent 6,385,604 is directed to a method of and apparatus for joining and aggregating data elements integrated within a relational database management system (RDBMS) using a non-relational multi-dimensional data structure.

[0026] US Patent 6,741,983 is directed to a method of indexed storage and retrieval of multidimensional information.

[0027] US Patent 6,985,912, issued January 10, 2006, "Dynamic Object-Driven Database Manipulation and Mapping System Having a Simple Global Interface and an Optional Multiple User Need Only Caching System With Disable and Notify Features," (the '912 Patent) is directed to the problem of user manipulation of stored data and addresses the synchronization of various copies of data.

[0028] US Patent 6,999,956, issued February 14, 2006, "Dynamic Object-Driven Database Manipulation and Mapping System," (the '956 Patent) teaches mapping of data between the object and relational programs using metadata in a library.

[0029] US Patent 7,043,481, issued May 9, 2006, "System, Method and Software for Creating, Maintaining, Navigating or Manipulating Complex Data Objects and Their Data Relationships," (the '481 Patent) describes a Complex Data Object Graph (CDOG) model that captures the various relationships between objects and teaches how to represent, manage, and effectively access the various relationships.

[0030] US Patent 7,103,600, issued September 5, 2006, titled "Displayable Presentation Page and SQL Searchable Relational Data Source Implementation of a System, Method and Software for Creating or Maintaining Distributed Transparent Persistence of Complex Data Objects and Their Data Relationship" (the '600 Patent), and US Patent Number 7,167,862, issued January 23, 2007, titled "Session Bean Implementation of a System, Method and Software for Creating or Maintaining Distributed Transparent Persistence of Complex Data Objects and Their Data Relationships" (the '862 Patent), teach how to maintain persistence in saving and storing objects and their relationships.

[0031] US Patent 7,149,730, issued December 12, 2006, "Dynamic Class Inheritance and Distributed Caching with Object Relational Mapping and Cartesian Model Support in a Database Manipulation and Mapping System," (the '730 Patent) is directed to working with multiple data stores that contain information organized differently.

[0032] US Patent 7,272,612 is directed to a method of partitioning data records.

[0033] US Patents 7,454,411 and 7,882,106 are directed to methods of indexed storage and retrieval of multidimensional information.

[0034] US Patent 8,099,733 is directed to parallel data processing architecture.

[0035] U.S. Patent Number Re. 40,526, which is a reissue of U.S. Patent Number 5,617,567, is directed to a data processing system and method for retrieving an entity specified in a search path record from a relational database.

[0036] U.S. Patent Number Re. 40,520, which is a reissue of U.S. Patent Number 5,826,259, is directed to an expandable data processing system and method.

[0037] According to the Oracle Systems literature, a user can partially or fully integrate the Oracle® Thesaurus Management System (TMS) with non-Oracle source data systems, but Oracle leaves it to the user to handle the data exchange. If full integration is desired, the user must "customize" the external system to make it compatible with TMS.

[0038] More specifically, full integration requires the installation of TMS objects into the external system, for stable operation of the external system in the global Oracle environment. If fully integrated, TMS maintains external data in both the tms_source_terms table and the tms_vt_omissions table. The fully integrated system will then benefit from the full range of functionality provided by TMS. The fully integrated external system will then feed source terms to TMS with contextual data specified by the user (such as Document Number) so that if the data is reclassified or declassified TMS can then send information about each affected source term back to the external source data system.

[0039] In TMS, the user runs Autoclassification, manually classifies remaining terms (omissions), assigns Actions, reclassifies or declassifies as necessary, and specifies the information to be derived from TMS for each source term. TMS then sends that data to the source data system associated with each source term.

[0040] However, for a user to fully integrate a non-Oracle system with TMS, Oracle leaves it up to the user to devise ways to:

• Associate the source term collection unit (for example, a study or case, depending on the source data system) with TMS domains and dictionaries (the X Area is the source term collection unit in both the tms_source_terms table and the tms_vt_omissions table)

• Define objects to receive values derived from TMS

• Integrate TMS with the external system's discrepancy management function, and

• Exchange data between the two systems

[0041] The technical problem is that Oracle TMS does NOT provide users with any instructions or suggestions on how to fully integrate with non-Oracle source data systems. The customers "must handle the data exchange" and "must devise ways to: 1) associate the source term collection unit for a study or case with TMS domains and dictionaries, 2) define objects to receive values derived from TMS, 3) integrate TMS with the external system's discrepancy management function, and 4) exchange data between the two systems". Extensive customization of the external system is also required, such as installing Oracle TMS objects into other proprietary external data system environments, which may not be possible, and may in fact be prohibited, when those systems are hosted by a third party.

[0042] The DBMS TMSINT system described herein provides a technical solution that provides users with a system which fully integrates a non-Oracle source data system, referred to as Medidata Rave. The DBMS TMSINT system handles the data exchange and associates the source term collection unit with TMS domains and dictionaries. TMSINT defines objects to receive values derived from TMS and integrates TMS with the external system's query management functionality. TMSINT exchanges data between the two systems in a scheduled or on-demand (immediate) manner and does NOT require the external system to install Oracle TMS objects into other proprietary external data system environments.

BRIEF DESCRIPTION OF THE DRAWINGS

[0043] Figure 1 is a schematic diagram showing the data flow for the TMS to Medidata Rave Integration application.

[0044] Figure 2 is a schematic representation of the extraction of Client Source Data via Web-Services.

[0045] Figure 3 is an example of a Client TMS Integration Definition Worksheet.

[0046] Figure 4 is an example of an auto match display.

[0047] Figure 5 is an example of an auto match display showing locating equivalent terms.

[0048] Figure 6 is an example of an auto match display showing locating non-equivalent terms.

[0049] Figures 7 and 8 are example displays showing re-submit terms to be re-coded.

[0050] Figure 9 is an example of a screen display showing export coding data and queries from TMS.

[0051] Figures 10 and 11 are examples of a screen display showing associated source data and actions/queries created in TMS.

[0052] Figure 12 is an example of a screen display showing multiple synonym lists created to enable different synonym lists to be linked to different studies with the TMS Domain/Dictionary functionality.

[0053] Figure 13 is an example of multiple screen displays showing different TMS enabled approval workflows by domain illustrating that one study/program can automatically approve a VTA when it is classified while the other study can enforce the manual approval process after classification.

[0054] Figure 14 is an example of a screen display showing TMS accessed by multiple users at the same time allowing the coder to filter using the TMS Omission Filter.

[0055] Figure 15 is an example of a screen display showing, while coding, that the user can sort coded data within the TMS Classify VT Omissions.

[0056] Figure 16 is an example of multiple displays on a single screen showing TMS browsing capabilities that make ingredients visible when coding in WHO DDE and allow all multi-axial relationships to be seen.

[0057] Figure 17 illustrates that the TMSINT application then propagates the Ingredients and Dictionary Version to Medidata Rave.

[0058] Figure 18 illustrates that the TMSINT application allows the user to "Drill Down" through the data to obtain auxiliary information.

[0059] Figure 19 shows three examples of screens illustrating the ability of TMS to attach and view status codes or notes on coded and uncoded terms.

[0060] Figure 20 is an example of a screen display showing approval of VTAs.

[0061] Figure 21 illustrates that the user has the option to use a two-step coding process with a second coder acknowledging approval of the code.

[0062] Figure 22 is a screen shot illustrating that TMS is able to perform ATC coding for medications in a single step.

[0063] Figure 23 is a screen shot illustrating that for Drug Dictionaries without Primary Links or Paths on the ATC Levels, the Indication Omissions window is available.

[0064] Figure 24 illustrates auto-coding to previously coded verbatim term instances (VTIs).

[0065] Figure 25 illustrates automatic ATC classification of a drug based on previous ATC classification of the same drug having the same indication.

[0066] Figure 26 illustrates the TMSINT application Custom auto-coding search object algorithms automatically coding verbatim terms to the Medical and Drug Dictionaries.

[0067] Figure 27 is a flow diagram illustrating the Oracle TMS to Medidata Rave Integration (TMSINT) application with Autocoder.

[0068] Figure 28 is a screen image illustrating the Custom Autocoder Partial Word Substitution Design.

[0069] Figure 29 is a screen image illustrating the Custom Autocoder Full Word Substitution Design.

[0070] Figure 30 is a schematic representation of the Oracle TMS to Medidata Rave Integration (TMSINT) architecture incorporating features of the invention.

[0071] Figure 31 is a schematic representation of the OHS 5.1.x Logical Architecture.

[0072] Figure 32 is a screen image illustrating an overview of user administration of the system, comprising creating, configuring, modifying, and inactivating portions of the stored records.

[0073] Figure 33 is a sample of a Client definition based on a new client worksheet.

DETAILED DISCUSSION

[0074] The TMSINT system described herein provides the approaches and methods for:

1) Real-time transactional XML data exchange interfaces and Data Processing APIs (Application Programming Interfaces) that support data integration and data processing via database manipulations for coding data from clinical EDC (Electronic Data Capture) applications (e.g., Rave EDC) with clinical terminology and coding applications (e.g., Oracle Thesaurus Management System).

2) A user interface to configure the XML data exchange between the EDC (Electronic Data Capture) applications and clinical terminology and coding applications.

[0075] The XML data exchange can be scheduled to run at predetermined frequencies and run on-demand.
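
As one illustration of the scheduled mode, the exchange could be driven by the standard Oracle DBMS_SCHEDULER package. The job name TMSINT_RUN_EXCHANGE and the entry-point procedure TMSINT_PROC_NOV.RUN_EXCHANGE below are assumptions made for this sketch; the document does not name the actual job or procedure.

    BEGIN
      -- Illustrative only: schedule the XML data exchange to run nightly.
      -- Job and procedure names are assumed for this sketch.
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'TMSINT_RUN_EXCHANGE',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'TMSINT_PROC_NOV.RUN_EXCHANGE',
        repeat_interval => 'FREQ=DAILY;BYHOUR=2',
        enabled         => TRUE);
      -- An on-demand (immediate) run can be forced at any time:
      DBMS_SCHEDULER.RUN_JOB('TMSINT_RUN_EXCHANGE');
    END;
    /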

[0076] The key processes of the application are:

[0077] Integration Configuration: Allows the transactional data exchange interfaces to be configured at the clinical study level via a user interface.

[0078] Data Extract: Reads and stages data from the clinical EDC applications to the TMSINT system's custom staging tables in the Oracle RDBMS.

[0079] INBOUND Data Formatting: The TMSINT system's custom Staging APIs process the data from the application's staging tables into the application's inbound tables in the Oracle RDBMS.

[0080] INBOUND Data Processing: The TMSINT system's custom INBOUND APIs process the data in the application's inbound tables with the clinical terminology and coding application. This process includes keeping client data "Synchronized" with the clinical terminology and coding application and interrogates the clinical coding conditions such as declassification, classification, reclassification, data updates, data promotions, data demotions and changes to the terminology hierarchy derivational path.

[0081] Processing INBOUND data in the clinical terminology and coding application and Moving INBOUND Data to the CODING Tables:

• Programmatically codes an exact match to a dictionary term or an existing VTA in the Domain linking the coding application verbatim term to a dictionary term.

• Programmatically uses the DBMS Autoencoder Search Objects to code near-matches to a dictionary term or an existing VTA in the Domain linking the coding application verbatim term to a dictionary term.

• If a match is still not found, manual intervention is required. The TMSINT system's custom APIs create an omission in the coding application and a record in the source terms table.

• An omission must be handled in the coding application by either coding the term or by applying an Action.

[0082] The TMSINT system enables the user to proceed with Medical Coding in the coding application to:

• Classify Omissions

• Create, Maintain, Approve Action for Query in the EDC application.

• Approve Classifications (optional)

• Reclassify Classifications

• Declassify with Action

[0083] OUTBOUND Data Processing and Database Manipulation: The TMSINT system's custom OUTBOUND APIs process the data in the coding application with the system's outbound tables. This process includes keeping client data "Synchronized" with the TMS Repository and interrogates the clinical coding conditions such as declassification, classification, reclassification, data updates, data promotions, data demotions and changes to the terminology hierarchy derivational path.

[0084] OUTBOUND Data Formatting: The TMSINT system's custom Staging APIs process the data in the application's outbound tables to the import staging tables.

[0085] Data Import: Writes the import staging data containing the following data to the clinical EDC application.

• Coding classifications and related dictionary details.

• Open queries for the Sites to respond to the coder's queries.

[0086] The mandatory and optional components that are required for client usage of the DBMS TMS to Medidata Rave Integration (TMSINT) application are set forth below in detail. A series of metadata tables are used to store specific information about a client's Medidata Rave source system data. The TMS to Medidata Rave Integration application relies on the application metadata tables to programmatically parse, extract, process and re-import data back into the client's Medidata Rave source system.

[0087] The main components of the client requirements definition and configuration processes include the following topics:

[0088] 1. TMS Setup & Definition

• TMS Dictionaries

• TMS External System Definitions

• TMS Drill-Down Views

• TMS Custom Search Objects

• TMS Custom VTA Load Utility

• TMS Custom Dictionary VTA Copy Utility

[0089] 2. Oracle Account Setup

• Application Administrator Account

• Data-Transfer Account

• Data-Processing Account

[0090] 3. TMS to Medidata Rave Integration Setup & Requirements Definition

• Client Definition

• Datafile Definition

• Dictionary Mapping

• External System Mapping

• Ancillary Data

• Account Registration

[0091] 4. TMS to Medidata Rave Integration Client Metadata Requirements Definition Template

[0092] 5. Gathering Client Requirements

[0093] TMS Setup - Described below are the setup and definition steps within the TMS application. Before the client can be configured within the TMS to Medidata Rave Integration application, the TMS setup must be completed.

[0094] TMS Dictionaries - Any TMS dictionaries that are used by the TMS to Medidata Rave Integration application are defined and created in TMS before they can be referenced in the TMS to Medidata Rave Integration application. TMS dictionaries are not shared between clients. All dictionaries used by the TMS to Medidata Rave Integration application are specific to a given client and are determined based on the individual client's requirement.

[0095] However, several prewritten "standard" dictionary packages are available that can be easily implemented for a client. Each standard dictionary package, as shown below, includes the TMS dictionary definition, initial dictionary loading, dictionary versioning upgrade and canned impact reporting. The client specifies at the onset what vendor dictionary format (if applicable) and dictionary version are to be initially loaded for TMS to Medidata Rave Integration use and specifies a version upgrade schedule. Specific information on the TMS dictionary definition and "derivation path" are separately provided to the client.

[0096] All dictionary versioning requires the creation of a TMS virtual dictionary upon upgrade. However, the creation and copying of VTAs to virtual dictionary domains are not required unless the client wishes to allow coding to TMS virtual dictionaries, which is specified by requirement at the onset of the project.

[0097] The DBMS prewritten "standard" TMS dictionary packages are as follows:

[0098] MSSO MedDRA Dictionary - The MSSO MedDRA dictionary package includes the TMS dictionary definition of a standard "Primary Path" MedDRA dictionary. The dictionary classification level is the LLT level. The "derivation path" includes the LLT, PT and the HLT, HLGT and SOC of the primary path. As LLTs are made non-current by the vendor, they are logically deleted within the TMS repository. Due to the limited variation of the MedDRA dictionary implementation, this dictionary package may be used by most if not all clients.

[0099] UMC WHO Drug Dictionary (B2 format) - The UMC WHO Drug B2 dictionary package includes the TMS dictionary definition of a standard "Primary Link" drug dictionary. The dictionary classification level includes both the preferred drug term (PT) and the trade drug term synonym (SYN) drug levels. The dictionary includes the UMC ATC5 dictionary level. The UMC ATC long text value is provided as an additional attribute (VALUE_1) for all ATC levels. The UMC provided ingredient list is implemented as a TMS informative note ("INGLIST") assigned to the PT dictionary level. The defined dictionary levels are ATC1-ATC5, PT, SYN, ING, SRC and CCODE. Manufacturers are considered obsolete in the B2 format by the vendor and are not used. The "derivation path" includes the SYN, PT and the ATC hierarchy of the designated primary ATC. Any of the UMC provided B2 formats may be used with this dictionary definition, such as the DD, DDE or DDE-HD formats.

[0100] UMC WHO Drug Dictionary (C format) - The UMC WHO Drug C dictionary package includes an implementation as available from Oracle. Any of the UMC provided C formats may be used with this dictionary definition, such as the DD, DDE or DDE-HD formats.

[0101] UMC WHO Drug Dictionary (B3 format) - The UMC WHO Drug B3 format will be implemented as a standard dictionary package in future releases. This version of the drug dictionary is not currently compatible with the TMS application and will be supported in later releases of TMS with expanded column lengths.

[0102] MSSO MedDRA & SMQ Filter Dictionary - The MSSO MedDRA & SMQ dictionary package includes the standard MedDRA dictionary as specified above as well as the additional SMQ (Standard MedDRA Query) filter dictionary. This dictionary is a vendor-only dictionary and does not support custom SMQs. When selecting this dictionary package, the MedDRA and SMQ filter dictionary will be defined, loaded and versioned at the same time. Both dictionaries will be treated programmatically as one dictionary in the context of loading and upgrading. The SMQ filter dictionary is implemented as a standard SMQ filter dictionary as specified by the vendor.

[0103] UMC WHO Drug & SDG Filter Dictionary - The UMC WHO Drug & SDG dictionary package includes the standard B2 dictionary as specified above as well as the additional SDG (Standard Drug Query) filter dictionary. This dictionary is a vendor-only dictionary and does not support custom SDGs. When selecting this dictionary package, the WHO Drug and SDG filter dictionary will be defined, loaded and versioned at the same time. Both dictionaries will be treated programmatically as one dictionary in the context of loading and upgrading. The SDG filter dictionary is implemented as a standard SDG filter dictionary as specified by the vendor. Functionality will be provided to programmatically allow the creation of ATC/SDG and PT/SDG relationships that are overlapping or the removal of overlapping ATC/SDG and PT/SDG relationships.

[0104] Custom MedDRA SMQ Filter Dictionary - The Custom MedDRA & SMQ dictionary package includes the standard MedDRA dictionary as specified above as well as the additional SMQ (Standard MedDRA Query) filter dictionary with additional custom SMQ dictionary levels (CMQs). The MedDRA dictionary is a vendor-only dictionary. However, the SMQ dictionary supports the creation of custom CMQs. When selecting this dictionary package, the MedDRA and SMQ filter dictionary will be defined, loaded and versioned at the same time. Both dictionaries will be treated programmatically as one dictionary in the context of loading and upgrading.

[0105] Custom WHO Drug SDG Filter Dictionary - The Custom WHO Drug & SDG dictionary package includes the standard B2 WHO Drug dictionary, as specified above, as well as the additional SDG (Standard Drug Groups) filter dictionary with additional custom SDG dictionary levels (CDGs). The WHO Drug dictionary is a vendor-only dictionary. However, the SDG dictionary supports the creation of custom CDGs. When selecting this dictionary package, the WHO Drug and SDG filter dictionary will be defined, loaded and versioned at the same time. Both dictionaries will be treated programmatically as one dictionary in the context of loading and upgrading.

[0106] TMS External Systems - The TMS to Medidata Rave Integration application requires that the TMS external system the client will use for the TMS to Rave integration be defined within the TMS repository prior to defining and configuring the TMS to Medidata Rave Integration application.

[0107] TMS allows for up to eight external column values for a given external system. Each external value may be up to 500 characters in length. These external values are "key" values within the client's source system used to identify unique patient records within the client's source system. These values typically include Study, Site, DCM, Patient, etc., and may be customized as needed. Since these values are "key" values, they are expected to remain constant. The TMS to Medidata Rave Integration application has been designed to allow for a Site location to change provided one of the eight external column names is specified as "SITE". This value is the designated "snowbird" column to accommodate those patients that live in a different location during different calendar months, for example the winter months. If no external column name is named "SITE", this functionality will not be enabled and any update to a "key" value will cause the system to treat the patient record as if it were a brand new patient record and will disconnect any audit trail capability between previous versions of the patient record.

[0108] If more than the eight values provided by TMS are needed to uniquely identify a patient record within the client's source system, the TMS to Medidata Rave Integration application can store up to two values per individual column, allowing the ability to store sixteen external values. If columns are "shared" in this manner, there may be no more than two values stored per external column and the data values will be delimited with the T character within TMS. Any custom omission status requirements will need to be specified as a requirement by the client at the onset of the project. If no requirement is specified, the default omission status "Site from Coder" will be used from the TMS predefined "DISCREPANCY REV STATUS CODE" codelist.

[0109] TMS Custom Drill-Down Views - An optional Custom TMS Drill-Down views solution is provided for the medical coder to see auxiliary study data from the Rave source system while coding in TMS (e.g., the country, indication, route, dosage for medications or any other applicable study data the coder requires to make the right choices while coding). These Custom TMS Drill-Down views are optionally available and may be defined for any client TMS external system. The number of drill-down views that may be created in TMS is contingent on the number of external columns defined within the external system. Additional client requirements will need to be specified to indicate what data will be displayed in each drill-down view as well as what external column the drill-down view will be associated with. When one or more drill-down views are specified, the corresponding ancillary data is defined within the TMS to Medidata Rave Integration application. See the Ancillary Column Definition section below.

[0110] TMS Custom Search Objects - An optional Custom TMS search object solution within TMS is provided to enable verbatim terms to code to non-equivalent dictionary terms with the use of word swapping, word substitutions, special character/number/punctuation/stop word removal, and other types of matching algorithms during the classification process in TMS. The Custom TMS search object solution that is provided is designed for use with the MedDRA AE dictionary and/or WHO Drug dictionary. The defined word substitutions and stop word removal are configurable for each client using a TMS codelist. A minimal illustrative sketch follows.
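
The sketch below illustrates the kind of term normalization such a search object might apply before matching. The function name normalize_verbatim and the substitution codelist table tmsint_word_subst are illustrative assumptions, not the actual DBMS Consulting search object implementation.

    -- Hypothetical sketch: normalize a verbatim term before matching by
    -- removing punctuation/numbers and applying configured word substitutions.
    CREATE OR REPLACE FUNCTION normalize_verbatim(p_term IN VARCHAR2)
      RETURN VARCHAR2
    IS
      v_term VARCHAR2(4000) := UPPER(p_term);
    BEGIN
      -- strip special characters and digits, then collapse whitespace
      v_term := REGEXP_REPLACE(v_term, '[^A-Z ]', ' ');
      v_term := TRIM(REGEXP_REPLACE(v_term, ' {2,}', ' '));
      -- apply client-configurable word substitutions (assumed codelist table)
      FOR r IN (SELECT from_word, to_word FROM tmsint_word_subst) LOOP
        v_term := REGEXP_REPLACE(v_term,
                    '(^| )' || r.from_word || '( |$)',
                    '\1' || r.to_word || '\2');
      END LOOP;
      RETURN v_term;
    END;
    /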

[0111] TMS Custom VTA Load Utility - An optional Custom TMS VTA load utility is provided to enable a client to populate the TMS MedDRA or WHO Drug dictionaries with their existing synonym or verbatim term assignment (VTA) lists. The client provides a spreadsheet containing desired Synonym or VTA mappings that may be loaded into the client's TMS MedDRA or WHO Drug dictionaries.

[0112] TMS Dictionary Custom VTA Copy Utility - An optional Custom TMS dictionary VTA copy utility is provided which enables a client to copy VTAs from one TMS dictionary to another TMS dictionary within the same TMS database instance for the purpose of keeping the coding "synchronized" between a designated source and target dictionary.

[0113] Oracle Account Setup

[0114] The TMS to Medidata Rave Integration application requires the following Oracle client specific accounts to be defined within the Oracle database.

[0115] Client Data-Transfer Account - Each client using the TMS to Medidata Rave Integration application has a designated Oracle data-transfer schema. The schema name is a user name of TMSINT_XFER_<clientcode>, where <clientcode> is the 3-6 character code used to identify a given client. The data-transfer schema is not a privileged Oracle account and is responsible for the transfer of data from and to the client source system. Each client specific data-transfer schema may see only data that is applicable to the given client. The data-transfer schema will be created with the objects set forth in Table 1:
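
By way of example, and following the naming convention above, the data-transfer schema for a client with code "NOV" might be created as follows; the password and the minimal grant shown are illustrative assumptions.

    -- Illustrative sketch: create the non-privileged data-transfer schema
    -- for client code NOV, per the TMSINT_XFER_<clientcode> convention.
    CREATE USER tmsint_xfer_nov IDENTIFIED BY "example_password";
    GRANT CREATE SESSION TO tmsint_xfer_nov;  -- assumed minimal privilege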

[0116] Client Data-Processing Account - Each client using the TMS to Medidata Rave Integration application has a designated Oracle data-processing schema. The schema name is a user name of TMSINT_PROC_<ClientCode>, where <ClientCode> is the 3-6 character code used to identify a given client. The data-processing schema is a privileged Oracle account and is responsible for reading data extracted from the client's source system by the data-transfer schema, processing the data in TMS and writing the data back to the data-transfer schema import table. Each client specific data-processing schema may see only data that is applicable to the given client. The data-processing schema will be created with the objects set forth in Table 2:

TABLE 2

[0117] TMS to Medidata Rave Integration Client Setup & Definition

[0118] The following is a description of the components of the TMS to Medidata Rave Integration application definition process that is specific to a given client. It should be noted that a client may not be defined within the TMS to Medidata Rave Integration application until the TMS definition has been completed. The TMS dictionaries, external systems, etc. must exist within the TMS application prior to setting up the client within the TMS to Medidata Rave Integration application. The administrator is the only user authorized to manage client definitions and setup. The Oracle database package TMSINT_ADM_UTILS, owned by the administrator, contains the application APIs with which the administrator may create, update, delete, enable or disable client related data.

[0119] Client Definition - The first step for defining a client within the TMS to Medidata Rave Integration application is to create a client definition entry in a metadata table TMSINT_ADM_CLIENTS. Only the administrator is authorized to create a new client. Each client has a unique ClientAlias and a unique ClientID that will be used by the application. The ClientID and ClientAlias values may not be modified after the creation of the client record. The ClientID is used as a foreign key reference in other tables for linking attributes to a specific client. Oracle database objects and Oracle user accounts are subsequently created using the ClientAlias value. Described below are the various TMSINT_ADM_CLIENTS APIs.

[0120] Creating a Client - The API CREATE_CLIENT is used to create a client definition record in the TMSINT_ADM_CLIENTS table. The administrator selects a unique 3-6 character code that will identify the client. The client code is used internally by the application only and will not be used by or written to the client's source system. Therefore, this value may be selected by the administrator. It is preferred that the client code visually identify the client to which it is assigned. For example, "NOV" may be used for Novartis, "BMS" may be used for Bristol-Myers Squibb and "INV" may be used for Inventive Health. The client description may be any value up to 200 characters and is recommended to be the full name of the client. A ClientID will automatically be assigned during execution of the API. A ClientID is required for any additional setup or definition for the client.
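
A hedged sketch of a create call is shown below; the parameter names (pClientCode, pClientDesc, pClientID) and the use of an OUT parameter to return the new ClientID are assumptions about the API signature, which this document does not spell out.

    DECLARE
      v_client_id NUMBER;
    BEGIN
      -- Sketch only: parameter names and OUT signature are assumed.
      tmsint_adm_utils.create_client(
        pClientCode => 'NOV',         -- unique 3-6 character client code
        pClientDesc => 'Novartis',    -- full client name, up to 200 characters
        pClientID   => v_client_id);  -- ClientID assigned during execution
      DBMS_OUTPUT.PUT_LINE('New ClientID: ' || v_client_id);
    END;
    /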

[0121] Updating a Client - The API UPDATE_CLIENT is used to update an existing client definition record in the TMSINT_ADM_CLIENTS table. The only expected updates to a client definition after creation are updates to the client description or the active flag. Setting the active flag from "Y" to "N" will stop all integration processing for the client. Likewise, setting the active flag from "N" to "Y" will restart integration processing for the client, provided at least one datafile is also active. Any required value within the table for which the API has a default of NULL is simply a means to allow the user to enter only the values that they want to update. For example, the pActiveFlag parameter has a default value of NULL. If the pActiveFlag value is not provided or a NULL value is provided, the API will assume the user is not updating that value and the current value will not be changed.
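
Under the NULL-default convention just described, deactivating a client while leaving its description untouched might look like the following sketch (parameter names assumed):

    BEGIN
      -- Sketch only: the description parameter is omitted and so defaults
      -- to NULL, leaving the stored description unchanged.
      tmsint_adm_utils.update_client(
        pClientID   => 101,   -- assumed existing ClientID
        pActiveFlag => 'N');  -- stops all integration processing for the client
    END;
    /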

[0122] Deleting a Client - The API DELETE_CLIENT is used to delete a client definition record from the TMSINT_ADM_CLIENTS table. When a client is deleted, all subordinate client metadata will likewise be deleted (datafiles, dictionary mappings, etc.). The Oracle accounts associated with the client are manually deleted from the database by the DBA; however, the registration of the Oracle accounts will be deleted from the application.

[0123] Querying a Client - The API QUERY_CLIENT is used to query any client definition record in the TMSINT_ADM_CLIENTS table. The API is a pipelined function based on the TMSINT_ADM_CLIENTS table and may therefore be executed as any query from the SQL command line.
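
Because QUERY_CLIENT is a pipelined function, it can be invoked with Oracle's TABLE() operator from the SQL command line, for example (the ACTIVE_FLAG column name is an assumption):

    -- Query all active client definitions through the pipelined function.
    SELECT *
      FROM TABLE(tmsint_adm_utils.query_client)
     WHERE active_flag = 'Y';  -- column name assumed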

[0124] Datafile Definition - After a client definition has been defined and a ClientID assigned, individual datafiles can then be created. Datafile definitions for a client are stored in a metadata table entitled TMSINT_ADM_DATAFILES. A client may have one or more datafiles. At least one active datafile is required for integration processing. Normally there is a one-to-one correspondence between a clinical study and a datafile, and each study will have its own record entry in the TMSINT_ADM_DATAFILES table. Metadata in the TMSINT_ADM_DATAFILES table will specify the URL and credentials (if any) required to connect to the client's source system for both the extraction of data as well as the import of data. Data extracted and imported will be in an HTML format. Additionally, the HTML tags for both the beginning of the HTML file and the beginning of a new patient record are specified. The end of file and end of record HTML tags may be derived from the beginning of file and beginning of record values. The ActiveFlag value can be used to enable and disable the given file from integration processing.

[0125] Creating a New Datafile - The API CREATE_DATAFILE is used to create a new datafile definition record in the TMSINT_ADM_DATAFILES table. Each datafile record carries the parent ClientID of the client to which it is applicable. When the datafile has been created, a DatafileID is assigned and returned.

[0126] Updating a Datafile - The API UPDATE_DATAFILE is used to update an existing datafile definition record in the TMSINT_ADM_DATAFILES table. This API is also used to update URL passwords if they are periodically updated by the client. Additionally, datafiles can be enabled or disabled from integration processing by updating the ActiveFlag. Any required value within the table for which the API has a default of NULL is simply a means to allow the user to enter only the values that they want to update. For example, the pActiveFlag parameter has a default value of NULL. If the pActiveFlag value is not provided or a NULL value is provided, the API will assume the user is not updating that value and the current value will not be changed.

Deleting a Datafile - The API DELETE_DATAFILE is used to delete datafile definition records from the TMSINT_ADM_DATAFILES table. When a datafile is deleted, all subordinate client metadata will likewise be deleted (dictionary mappings, external system mappings, ancillary data, etc.).

[0127] Querying a Datafile - The API QUERY_DATAFILE is used to query any datafile definition record in the TMSINT_ADM_DATAFILES table. The API is a pipelined function based on the TMSINT_ADM_DATAFILES table and may therefore be executed as any query from the SQL command line.

[0128] Dictionary Mapping Definition - After a datafile has been defined in TMSINT_ADM_DATAFILES and a DatafileID has been assigned, each distinct DCM-VT combination within the datafile is then defined in a TMSINT_ADM_DICT_MAPPING metadata table to map the DCM-VT to a client specific TMS dictionary for coding. For example, assuming that a given datafile contained AE (Adverse Event), MH (Medical History) and CM (Concomitant Medication) data, then three records would need to be created mapping the DCM-VTs to the MedDRA and WHO Drug TMS dictionaries, respectively. The TMSINT_ADM_DICT_MAPPING table will ultimately specify how the DCM and VT will be parsed from the client source system HTML extract data, what TMS dictionary to use for coding, and how to construct the HTML coding file and HTML action file data that will be returned to the client source system for update after TMS processing.

[0129] Creating a New Dictionary Mapping - The API CREATE_DICT_MAPPING is used to create a new datafile dictionary mapping record in the TMSINT_ADM_DICT_MAPPING table. Each mapping record carries the parent DatafileID of the client datafile to which it is applicable. When the dictionary mapping has been created, a DictMappingID will be assigned and returned. A sketch of the three mapping calls for the AE/MH/CM example above follows.
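
For the AE/MH/CM example in paragraph [0128], the three mapping records might be created as sketched below; the parameter names and dictionary identifiers are assumptions about the CREATE_DICT_MAPPING signature.

    DECLARE
      v_map_id NUMBER;
    BEGIN
      -- Sketch only: one mapping record per DCM-VT combination in the datafile.
      tmsint_adm_utils.create_dict_mapping(
        pDatafileID => 2001, pDcmName => 'AE',
        pDictName => 'MEDDRA', pDictMappingID => v_map_id);
      tmsint_adm_utils.create_dict_mapping(
        pDatafileID => 2001, pDcmName => 'MH',
        pDictName => 'MEDDRA', pDictMappingID => v_map_id);
      tmsint_adm_utils.create_dict_mapping(
        pDatafileID => 2001, pDcmName => 'CM',
        pDictName => 'WHODRUG', pDictMappingID => v_map_id);
    END;
    /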

[0130] Updating a Dictionary Mapping - The API UPDATE_DICT_MAPPING is used to update an existing datafile dictionary mapping record in the TMSINT_ADM_DICT_MAPPING table. This API may be used to update any of the non-key values at any time. Additionally, dictionary mappings may be enabled or disabled from integration processing by updating the ActiveFlag. Any required value within the table for which the API has a default of NULL provides a means to allow the user to enter only the values that they want to update. For example, the pActiveFlag parameter has a default value of NULL. If the pActiveFlag value is not provided or a NULL value is provided, the API will assume the user is not updating that value and the current value will not be changed.

[0131] Deleting a Dictionary Mapping - The API DELETE_DICT_MAPPING is used to delete a datafile dictionary mapping record from the TMSINT_ADM_DICT_MAPPING table. When a dictionary mapping is deleted, all subordinate ancillary metadata will likewise be deleted.

[0132] Querying a Dictionary Mapping - The API QUERY_DICT_MAPPING is used to query any datafile dictionary mapping records in the TMSINT_ADM_DICT_MAPPING table. The API is a pipelined function based on the TMSINT_ADM_DICT_MAPPING table and may therefore be executed as any query from the SQL command line.

[0133] External System Mapping Definition - After a datafile has been defined in TMSINT_ADM_DATAFILES and a DatafileID has been assigned, each datafile is then mapped to a client specific TMS External System within TMS in the TMSINT_ADM_EXT_MAPPING metadata table. TMS allows for the definition of up to eight custom columns (EXT_VALUE_1 through EXT_VALUE_8) to store source system "key" data values. For each client datafile, the TMSINT_ADM_EXT_MAPPING table will contain one record for each defined TMS external column. The TMSINT_ADM_EXT_MAPPING table will ultimately specify how each of the source system "key" values will be parsed from the client source system HTML extract data, how they are mapped to the TMS external system, and finally how to construct the HTML coding file and HTML action file data that will be returned to the client source system for update after TMS processing. Since TMS external column values are considered to be "key" values used in the client source system to uniquely identify a specific patient record, the data values are expected to remain static. The exception to this is any external column defined with the name "SITE", which is designated as a "snowbird" column for patients that change locations (Site) during part of the year. Functionality has been included to allow updates to the "SITE" column only. Updates to any additional external value would simply treat the record as a new patient record.

[0134] Creating a New External System Mapping - The API CREATE_EXT_MAPPING is used to create a new external system mapping record in the TMSINT_ADM_EXT_MAPPING table. Each mapping record carries the parent DatafileID of the client datafile to which it is applicable. When the external system mapping has been created, an ExtMappingID will be assigned and returned. A mapping should be created for each column defined within the TMS external system. If the external system has only six of the eight possible external columns defined, then the datafile will have six mapping records.

[0135] Updating an External System Mapping - The API UPDATE_EXT_MAPPING is used to update an existing external system mapping record in the TMSINT_ADM_EXT_MAPPING table. This API may be used to enable or disable the collection of a given external data value from integration processing by updating the ActiveFlag. Any required value within the table for which the API has a default of NULL provides a means to allow the user to enter only the values that they want to update. For example, the pActiveFlag parameter has a default value of NULL. If the pActiveFlag value is not provided or a NULL value is provided, the API will assume the user is not updating that value and the current value will not be changed.

[0136] Deleting an External System Mapping - The API DELETE_EXT_MAPPING is used to delete an external system mapping record from the TMSINT_ADM_EXT_MAPPING table. When an external system mapping is deleted, the corresponding data value will no longer be parsed from the inbound source system data and will no longer be included in the outbound HTML coding or response/action file.

[0137] Querying an External System Mapping - The API QUERY_EXT_MAPPING is used to query any external system mapping records in the TMSINT_ADM_EXT_MAPPING table. The API is a pipelined function based on the TMSINT_ADM_EXT_MAPPING table and may therefore be executed as any query from the SQL command line.

[0138] Ancillary Column Definition - The definition of Ancillary data is an optional component of the TMS to Medidata Rave Integration application. Ancillary data is defined in a metadata table TMSINT_ADM_ADDON_COLS. Ancillary data, or add-on columns, are individual data values in the client source system, associated with a specific DCM within a datafile, that are retrieved from the client source system for the purpose of displaying ancillary data during TMS omission management activities via TMS Drill-Down Views. The TMSINT_ADM_ADDON_COLS table will ultimately specify what data column values for a given DCM and datafile should be retrieved from the client source system and how the values will be parsed from the client source system HTML extract data.

[0139] Creating a New Ancillary Column Definition - The API CREATE_ADDON_COL is used to create a new ancillary column record in the TMSINT_ADM_ADDON_COLS table. Each record carries the parent DatafileID of the client datafile and the DCM name to which it is applicable. When the ancillary column definition has been created, an AddonColID will be assigned and returned.

[0140] Updating an Ancillary Column Definition - The API UPDATE_ADDON_COL is used to update an existing ancillary column definition record in the TMSINT_ADM_ADDON_COLS table. This API may be used to enable or disable the collection of a given ancillary column value during integration processing by updating the ActiveFlag. Any required value in the table for which the API has a default of NULL is a means to allow the user to enter only the values that they want to update. For example, the pActiveFlag parameter has a default value of NULL. If the pActiveFlag value is not provided or a NULL value is provided, the API will assume the user is not updating that value and the current value will not be changed.

[0141] Deleting an Ancillary Column Definition - The API DELETE_ADDON_COL is used to delete ancillary column definitions from the TMSINT_ADM_ADDON_COLS table. When an ancillary column definition is deleted, the corresponding data value will no longer be parsed from the inbound source system data and will appear as a NULL value if referenced in a TMS drill-down view.

[0142] Querying an Ancillary Column Definition - The API QUERY_ADDON_COL is used to query the datafile ancillary column definition records in the TMSINT_ADM_ADDON_COLS table. The API is a pipelined function based on the TMSINT_ADM_ADDON_COLS table and may therefore be executed as any query from the SQL command line.

[0143] Oracle Account Registration - Each client is required to have a client-specific Oracle data-transfer schema and an Oracle data-processing schema registered in the TMS to Medidata Rave Integration application table TMSINT_ADM_ORA_ACCOUNTS. The actual Oracle accounts must exist in the Oracle database before they may be registered. The data-transfer schema is responsible for the "transfer" of data from the client source system to the Oracle database and from the Oracle database back to the client source system. The data-processing schema is responsible for picking up client data written to the database by the data-transfer schema, processing the data in TMS and writing the results back to the database to be written back to the source system by the data-transfer schema. All data-transfer schemas must have the name TMSINT_XFER_<ClientCode> and all data-processing schemas must have the name TMSINT_PROC_<ClientCode>.

[0144] Registering an Oracle Account - The API REGISTER_ORA_ACCOUNT issued to register a new Oracle account in the TMSINT_ADM_ORA_ACCOUNTS table. When registering accounts for a new client, both a data-transfer account and the data- processing account are then registered. Valid account type values are "XFER" or "PROC". When the account has been registered, an AccountID will be assigned and returned.
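A sketch of registering both accounts for a hypothetical client with ClientID 1 and alias code "ABC"; the parameter names and the function-style return of the AccountID are assumptions:

   -- Register the data-transfer and data-processing accounts for client ABC.
   -- Valid account types per the text above are XFER and PROC.
   DECLARE
      vAccountID  NUMBER;
   BEGIN
      vAccountID := tmsint_adm_utils.register_ora_account(
                       pClientID    => 1,
                       pAccountType => 'XFER',
                       pUserName    => 'TMSINT_XFER_ABC');
      vAccountID := tmsint_adm_utils.register_ora_account(
                       pClientID    => 1,
                       pAccountType => 'PROC',
                       pUserName    => 'TMSINT_PROC_ABC');
   END;
   /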

[0145] Unregistering an Oracle Account - The API DELETE_ORA_ACCOUNT is used to delete, or unregister, Oracle accounts from the TMSINT_ADM_ORA_ACCOUNTS table.

[0146] Suspending an Oracle Account - The API SUSPEND_ORA_ACCOUNT is used to suspend an Oracle account in the TMSINT_ADM_ORA_ACCOUNTS table. This action will update the ActiveFlag from "Y" to "N", which will prevent the applicable integration processing until the account is unsuspended.

[0147] Unsuspending an Oracle Account - The API UNSUSPEND_ORA_ACCOUNT is used to unsuspend an Oracle account in the TMSINT_ADM_ORA_ACCOUNTS table. This action will update the ActiveFlag from "N" to "Y", which will allow integration processing to resume.

[0148] Querying Registered Oracle Accounts - The API QUERY_ORA_ACCOUNT is used to query Oracle accounts in the TMSINT_ADM_ORA_ACCOUNTS table. The API is a pipelined function based on the TMSINT_ADM_ORA_ACCOUNTS table and may therefore be executed as any query from the SQL command line, as sketched below.
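Because QUERY_ORA_ACCOUNT is a pipelined function, it can be queried through the SQL TABLE() operator. A minimal sketch (the parameterless invocation, the TMSINT_ADM_UTILS package qualification and the column name are assumptions):

   -- List registered Oracle accounts from the SQL command line; the
   -- pipelined function behaves like a table inside TABLE().
   SELECT *
     FROM TABLE(tmsint_adm_utils.query_ora_account)
    WHERE active_flag = 'Y';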

Gathering Client Requirements

[0149] TMS Dictionaries

1. What TMS dictionaries are required (MedDRA, WHO Drug)?

2. Can DBMS Consulting's prewritten dictionary packages be used?

3. What vendor format of the dictionary should be used (WHO Drug only: DD, DDE, DDE-HD)?

4. What vendor version should be used for initial loading?

5. What are the required data items that will need to be derived back to the source system (Derivation Path)?

6. The default integration functionality uses one TMS domain per study. Is this adequate, or should all study data be within a client-level TMS domain?

7. What version import reports are required? The client specifies custom reporting requirements beyond the canned reports provided by default.

8. What is the required TMS dictionary version upgrade schedule?

9. Should TMS virtual dictionary domains be used?

[0150] TMS External System

1. What data items within the study will identify a unique patient record?

2. Are there more than eight key columns, such that an external value will need to contain multiple concatenated values?

3. Are there any custom requirements for TMS omission statuses?

4. When writing data back to the source system, what order are the external columns required to be in within the HTML file?

5. Is there a "SITE" column (snowbird column)?

6. Ensure all external values will be unique and remain static (excluding "SITE" if applicable).

[0151] TMS Drill-Down Views

1. What drill-down views will be required?

2. How many drill-down views will be required and what external system column will they be assigned to?

3. What are the datafile/DCM ancillary columns that will be included in the drill-down view (if applicable)?

TMS Search Objects

1. Should a custom TMS search object be defined for any of the client dictionaries?

2. Will the global default search object be used, or will the client maintain a client-specific search object?

[0152] TMS VTA Load Utilities

1. Will the client be providing a spreadsheet of VTA data to be pre-loaded in any of the dictionaries?

[0153] Clinical Studies/Datafiles

1. How many clinical studies will be processed by the TMS to Medidata Rave Integration application?

2. What is the URL and any applicable credentials for each study to be extracted from the source system and processed in TMS?

3. What is the URL and any applicable credentials for each study to be imported from TMS back into the source system?

4. What is the HTML tag for the study datafile that will mark the beginning of the file <BOF>?

5. What is the HTML tag for the study datafile that will mark the beginning of an individual patient record <BOR>?

6. What is the HTML header value that will be required in the import HTML file to be updated back in the client source system?

[0154] Clinical Study Dictionary Mapping

1. How many distinct DCM-VTs exist within the study datafile?

2. What TMS dictionary is applicable to the DCM-VT?

3. What is the HTML tag within the HTML source system extract file for the DCM-VT? This is how the data value will be parsed, so it must be accurately specified.

4. What is the HTML prefix within the HTML source system extract file for the DCM-VT? This is how the data value will be parsed, so it must be accurately specified.

5. What HTML tag within the HTML import file will need to be specified when returning coding data?

6. What HTML prefix shall precede the data value (DT) within the HTML import file when returning coding data?

7. What is the HTML suffix transaction-type value (if any) required when returning coding data?

8. What HTML tag within the HTML import file will need to be specified when returning response/action data?

9. What HTML prefix shall precede the data value (action text) within the HTML import file when returning response/action data?

10. What is the HTML suffix transaction-type value (if any) required when returning response/action data?

11. What HTML value will be required when returning response/action data?

[0155] Clinical Study External System Mapping

1. What TMS external system values are defined?

2. Do any external values contain multiple column data values (concatenated)?

3. What is the HTML tag within the HTML source system extract file for each external value? This is how the data value will be parsed, so it must be accurately specified.

4. What is the HTML prefix within the HTML source system extract file for each external value? This is how the data value will be parsed, so it must be accurately specified.

5. What is the HTML suffix transaction-type value (if any) required when returning coding data?

6. What is the HTML suffix transaction-type value (if any) required when returning action data?

7. What is the required order of the external column values in the return import HTML file?

[0156] Clinical Study Ancillary Data

1. What DCM-specific ancillary columns are required (if any)?

2. What is the HTML tag within the HTML source system extract file for the ancillary data value? This is how the data value will be parsed, so it must be accurately specified.

3. What is the HTML prefix within the HTML source system extract file for the ancillary data value? This is how the data value will be parsed, so it must be accurately specified.

TMS to Medidata Rave Integration (TMSINT) Design Specification

[0157] Custom TMS to Medidata Rave Integration application (TMSINT) - The TMS to Medidata Rave Integration process will automatically extract the configured clinical data in an HTML format; the HTML data will be transformed into patient records appropriate for processing in the TMS application, and the TMS processing results will then be written in a client-compatible HTML format and updated in the client's source system.

[0158] Each client utilizing the TMS to Medidata Rave Integration application will have designated TMS dictionaries within the TMS application as required and a designated custom TMS External System definition. Additionally, the client may have enhanced coding capabilities using DBMS Consulting Inc. custom TMS Search Objects and DBMS Consulting Inc. custom VTA loading. Use of the TMS to Medidata Rave Integration application allows the client to perform all TMS omission management activities from within TMS, with the ability to customize pre-defined omission actions and statuses, as well as the option to utilize DBMS custom TMS Drill-Down views for the display of ancillary data.

[0159] The objective of this document is to specify the overall design of the DBMS TMS to Medidata Rave Integration (TMSINT) application and its range of functionality. Topics specific to the TMS application, such as the defining, loading and upgrading of TMS dictionaries, as well as TMS external systems, TMS Search Objects, and TMS Drill-Down views, are considered custom TMS application components. The DBMS TMS to Medidata Rave Integration (TMSINT) application interacts directly with client source systems and the TMS application to provide a hosted clinical data coding solution.

[0160] TMS to Medidata Rave Integration application Schemas - The TMS to Medidata Rave Integration (TMSINT) application requires one Oracle schema to be defined as the application owner and administrator. All clients configured to use the TMSINT application will be defined and managed by the administrator. Each client set up to use the TMSINT application will require two client-specific Oracle schemas. One schema will be the designated data-transfer account and the other will be the designated data-processing account. Each Oracle schema is described below:

[0161] Administrator Schema - The TMSINT_ADM Oracle schema is the owner and administrator of the TMSINT application. It is recommended that the schema name be TMSINT_ADM; however, another Oracle schema name may be used if desired. For the purpose of this document, the administrator account will be referred to as TMSINT_ADM. The application administrator will own all of the application metadata tables and will be required to set up and configure a given client before the client may use the TMSINT application. All stored procedures for application use are created within the administrative account. As clients are configured for use, the appropriate privileges, synonyms and objects will be created in the client-specific Oracle accounts. The data-transfer and data-processing functionality used by each client will be accessed from stored procedures within the administrative account. The application functionality has been written so that the same APIs will be appropriate for all clients based on the client's individual configuration in the application metadata tables. This allows for future modifications and bug fixes from within a single Oracle account, as opposed to having to modify multiple client accounts. Table 3 summarizes the objects created for the application administrator schema:

[0162] Data-Transfer Schema (TMSINT_XFER_<client>) - Each client configured to use the TMSINT application will be required to have one designated Oracle data-transfer schema. The Oracle schema is named TMSINT_XFER_<client>, where the <client> value is the alias code defined for the client during the configuration process. For example, if the client "ABC Company" has a defined client alias code of "ABC", the data-transfer schema for "ABC Company" will be TMSINT_XFER_ABC. The purpose of the data-transfer schema is to "transfer" data to and from the client's clinical data source system. Installation scripts have been written to automate the creation of the client-specific data-transfer schemas, including the creation of all Oracle objects to be created directly within the data-transfer schema. As the data-transfer schema is created, the appropriate grants will be issued and the appropriate private synonyms will be created to ensure access to all administrator-owned objects. The data-transfer schema is not a privileged Oracle account and is granted only the Oracle CONNECT and RESOURCE system roles. The data-transfer schema is a "staging" schema and, as such, the data will not persist in the database. Table 4 summarizes the objects created for each client-specific data-transfer schema:

TABLE 4

[0163] Data-Processing Schema (TMSINT_PROC_<client>) - Each client configured to use the TMSINT application will be required to have one designated Oracle data-processing schema. The Oracle schema is named TMSINT_PROC_<client>, where the <client> value is the alias code defined for the client during the configuration process. For example, if the client "ABC Company" has a defined client alias code of "ABC", the data-processing schema for "ABC Company" will be TMSINT_PROC_ABC. The purpose of the data-processing schema is to process all client data in TMS as well as return the TMS processing results to the data-transfer schema to be updated in the client's clinical source system. Installation scripts have been written to automate the creation of the client-specific data-processing schemas, including the creation of all Oracle objects to be created directly within the data-processing schema. As the data-processing schema is created, the appropriate grants will be issued and the appropriate private synonyms will be created to ensure access to all administrator-owned objects. The data-processing schema is a privileged Oracle account and is defined as a TMS super-user. Table 5 summarizes the objects created for each client-specific data-processing schema.

TABLE 5

[0164] TMS to Medidata Rave Integration Application Administrator Tables

[0165] TMSINT_ADM_PROPERTIES - The properties table allows defining various properties within the TMSINT application. The only required entry within the properties table during the installation process is the name of the Oracle account that is the TMSINT application owner. For the application owner, the PropertyCategory is "APPLICATION", the PropName is "OWNER" and the PropValue is the Oracle schema owning all of the application objects, which is expected to be "TMSINT_ADM". The properties table can optionally be used to specify other miscellaneous properties as needed. A SQL script is executed during the installation process as the application owner and will automatically create the required entry in the TMSINT_ADM_PROPERTIES table. Additional SQL scripts are provided to insert, update, and delete properties as needed.
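The required entry could be created with an INSERT along these lines (a sketch; the column names are taken from the property names described above, and are assumptions):

   -- Register the application owner (normally done by the install script).
   INSERT INTO tmsint_adm_properties (PropertyCategory, PropName, PropValue)
   VALUES ('APPLICATION', 'OWNER', 'TMSINT_ADM');
   COMMIT;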

[0166] TMSINT_ADM_CLIENTS - The clients table allows for the definition of a client within the TMSINT application. Each client is assigned a unique numeric identifier and alias code. The numeric identifier, ClientID, will be used as a foreign key in other metadata tables for referential integrity. The value of the client alias code selected dictates the names of the data-transfer and data-processing schemas, TMSINT_XFER_<client-alias> and TMSINT_PROC_<client-alias>, respectively. SQL scripts are provided to easily allow for the insert, update, and delete of client data.

[0167] TMSINT_ADM_DATAFILES - The datafiles table allows for the definition of datafiles within the TMSINT application for a specific client in TMSINT_ADM_CLIENTS. A client may have one or more datafiles, but a datafile may be assigned to only one client. Each datafile is assigned a unique numeric identifier (DatafileID) that will be used as a foreign key in other metadata tables that have a child relationship to the datafile at hand. A datafile is the URL (e.g. https://pharmanet.mdsol.com/RaveWebServices) at which to access the client's datafile, which will be an HTML document. In addition to the datafile URL, the user name and password credentials needed to access the client datafile URL are required. Each datafile will correspond to one clinical study name. When data is processed in TMS, the TMS domain name will be the "Study Name" concatenated with the string "_DOMAIN" for all coding records within the given datafile. New datafiles may be added to an existing client at any time. To temporarily or permanently discontinue processing of any datafile, the Active Flag can simply be updated to "N"; likewise, to reinstate processing, the Active Flag may be reset back to "Y", as sketched below. All datafiles are initially created as active. SQL scripts are provided to easily allow for the insert, update, and delete of client datafiles.
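A sketch of the Active Flag toggle (the exact column names are assumptions):

   -- Temporarily discontinue processing of one datafile ...
   UPDATE tmsint_adm_datafiles
      SET active_flag = 'N'
    WHERE datafile_id = 100;

   -- ... and later reinstate it.
   UPDATE tmsint_adm_datafiles
      SET active_flag = 'Y'
    WHERE datafile_id = 100;
   COMMIT;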

[0168] TMSINT_ADM_DICT_MAPPING - The dictionary mapping table is a child table to the datafiles table TMSINT_ADM_DATAFILES and contains one record for every DCM/VT within a datafile to be coded in TMS. For example, a single datafile may contain a DCM/VT (AE/AETERM) that is applicable to adverse event MedDRA coding and a DCM/VT (CONMEDS/CMTERM) that is applicable to drug coding, or multiples of each. The dictionary mapping table associates the specific DCM/VT to be coded with a specific dictionary in TMS. This information is used during TMS processing by the data-processing schema. A new DCM/VT dictionary mapping may be added to an existing datafile at any time by simply adding the new record to the TMSINT_ADM_DICT_MAPPING table. To temporarily or permanently discontinue processing of a DCM/VT within a given datafile, the Active Flag can be set to "N"; likewise, to reinstate processing of the DCM/VT within the datafile, the Active Flag can be reset to "Y". All datafile dictionary mappings will initially be created as active. The TMS dictionary MUST already exist in the TMS repository prior to adding a datafile dictionary mapping. SQL scripts are provided to easily allow for the insert, update, and delete of datafile to dictionary mappings.

[0169] TMSINT_ADM_EXT_MAPPING - The external system mapping table is a child table to the datafiles table. This table creates a relationship between a given client datafile and a defined TMS External System. The table contains one record for every column defined in the TMS definition of the External System, which may be at most eight values (ExtValue1 - ExtValue8). If the TMS External System has six defined external column values (Study, Site, Patient, Event, etc.), there may be only one record for each defined external system column. This table is used by the TMSINT application to parse the datafile values that will ultimately be stored in TMS during processing. Additional external system mappings can be added to an existing datafile at any time as long as they correspond to the TMS definition of the external system. To temporarily or permanently discontinue processing of a given external system column, the Active Flag can be set to "N"; processing can likewise be reinstated by setting the Active Flag back to "Y". All datafile external mappings will initially be created as active. The TMS External System MUST already exist in the TMS repository prior to adding a datafile external system mapping. SQL scripts are provided to easily allow for the insert, update, and delete of datafile to external system mappings.

[0170] TMSINT_ADM_ADDON_COLS - The add-on column mapping table is also a child table to the datafiles table. This table provides an optional means to collect and store ancillary data associated with a specific DCM within a given datafile. These column values are not stored within the TMS repository during the coding process; however, they can be used for later omission management via TMS Drill-Down views, in which ancillary data may be queried and viewed as additional data to aid in the manual classification process. By default, the ancillary data is stored in the TMSINT application tables but is not directly accessible from TMS. Ancillary data is optional, and as many columns as needed can be stored, which may differ between DCMs. For the Adverse Event (AE) DCM, it may be desired to capture ancillary data such as the onset date (ONSDT) or the severity (ONSER); whereas for the Concomitant Medication (CM) DCM it may be desired to capture the dose (CMDOSE) and frequency (CMDOSFRQ). To temporarily or permanently discontinue processing of a given add-on column, the Active Flag can be set to "N"; likewise, to reinstate processing, the Active Flag can be set to "Y". All datafile add-on column mappings are initially created as active. SQL scripts are provided to easily allow for the insert, update, and delete of datafile ancillary data.

[0171] TMSINT_ADM_ORA_ACCOUNTS - The Oracle accounts table is the metadata table that allows an Oracle schema to be programmatically associated with a specific client. Each client has a data-transfer schema (account type = "XFER") and a data-processing schema (account type = "PROC"). This table is the mechanism that allows a given Oracle schema to see ONLY the data associated with the client to which the Oracle account is associated. When creating the required accounts for a TMSINT application client, the actual Oracle schema MUST be pre-existing in the Oracle database and will be validated using the Oracle view ALL_USERS based on user name. Creating or deleting entries does NOT actually create or drop the Oracle schema in the database; it is more of a "registration" process, which creates a relationship between the Oracle schema and a client and controls the selection of client data between the Oracle schemas. Additional SQL scripts are provided to create the Oracle schemas in the database, which will default the Oracle password for accounts to the user name. For example, the SQL script to create a data-transfer schema for client "INV" will create an Oracle user of TMSINT_XFER_INV, and the account has an Oracle password of TMSINT_XFER_INV. The Oracle database password may optionally be changed at any time using normal Oracle means (ALTER USER <username> IDENTIFIED BY <psw>;). When passwords are modified within the database, it is strongly recommended that the corresponding password in the Oracle accounts table be updated as well. The Oracle passwords within the Oracle accounts table are used for query only and are not used to connect to the database at the time of this document. SQL scripts are provided to easily allow for the registration and un-registration of Oracle accounts as well as the suspending and unsuspending of the accounts by modifying the Active Flag to "N" and "Y", respectively. Suspending the Oracle data-transfer schema by setting the Active Flag to "N" will "suspend" or prevent the data transfer activity. Unsuspending the Oracle data-transfer schema by setting the Active Flag to "Y" allows the data transfer activity to resume. The same actions can occur for the data-processing schema.

[0172] TMS to Medidata Rave Integration application Administrator Views

[0173] The following Oracle views are created within the TMS integration administrator's schema.

[0174] TMSINT_ADM_DICT_VERSIONS - The TMSINT_ADM_DICT_VERSIONS is an Oracle view owned by the TMS to Medidata Rave Integration application administrator and contains a list of all TMS base and/or filter dictionaries and their corresponding dictionary version based on the TMS ~DICTVER informative note. The dictionary type value will be derived based on the dictionary level names and will indicate if the dictionary is an "AE" (Adverse Event) dictionary such as MedDRA, a "DRUG" dictionary such as WHODrug, or a "FILTER" dictionary such as the SMQ or SDG dictionary.

[0175] TMSINT_ADM_DERV_PATH_LEVELS - The TMSINT_ADM_DERV_PATH_LEVELS is an Oracle view owned by the TMS to Medidata Rave Integration application administrator containing the TMS "Derivation Path" dictionary levels per TMS dictionary. This view is used by the TMS to Medidata Rave Integration application process when calling the TMS API TMS_USER_FULLAUTOCODE.CLASSIFYTERM to populate the TMS object table TMS_USER_AUTOCODE.DeriveValuesTAB with the dictionary derivation path levels to derive back.

[0176] TMSINT_ADM_DERV_PATH_COLUMNS - The TMSINT_ADM_DERV_PATH_COLUMNS is an Oracle view owned by the TMS to Medidata Rave Integration application administrator containing the TMS dictionary columns that will be derived for each dictionary level of the TMS dictionary derivation path.

[0177] TMSINT_ADM_DICT_ING_LISTS - The TMSINT_ADM_DICT_ING_LISTS is an Oracle view owned by the TMS to Medidata Rave Integration application owner and contains the informative note values for any TMS dictionary having a "PT" dictionary level with an associated informative note named "INGLIST". The ingredients list is associated with UMC drug dictionaries in which the Ingredients_LongText.txt file has been loaded into the TMS dictionary. TMS does not allow informative notes to be derived as part of the TMS derivation path; therefore, this view was created for ease of access and viewing of the "INGLIST" outside of the TMS application. The ingredient list provided by the UMC vendor is up to 4000 characters in length (ING_LIST_LONG). Due to restrictions within most client source systems, the column ING_LIST_SHORT is provided, which consists of the first 500 characters of the ingredient list.

[0178] TMS to Medidata Rave Integration application Data-Transfer Tables

[0179] The following Oracle tables are created within each client-specific data-transfer schema. The tables are the staging tables for data being extracted from the client's source system for TMS integration processing and for the TMS processing results being returned to the client's source system.

[0180] TMSINT_XFER_HTML_EXTRACT - The TMSINT_XFER_HTML_EXTRACT table is used to stage data extracted directly from the client source system that is to be processed in TMS. Data written to the EXTRACT table is written in an HTML format as directly extracted from the source system. EXTRACT data will initially be created with a PROCESS_FLAG value of "N", indicating the data has not been processed. During integration processing, the data-processing schema will select pending data and mark the data as being processed with a PROCESS_FLAG of "P". When the pending EXTRACT data has been read, formatted and successfully copied to the data-processing INBOUND staging tables, the data will be marked as having been processed with a PROCESS_FLAG of "Y". Successfully processed data will subsequently be deleted. Data is written to the EXTRACT table by FILE_NAME as defined in TMSINT_ADM_DATAFILES. Each datafile is applicable to one client clinical study. Once study data has been written to the EXTRACT table, it is deleted before data for the same study/file may be extracted and written to the table.

[0181] TMSINT_XFER_HTML_IMPORT - The TMSINT_XFER_HTML_IMPORT table is used to stage TMS processing results for patient records that have either been classified in TMS or been assigned an omission action in TMS that requires a response from the source system. The FILE_TYPE column will contain the value of either "CODING" or "ACTION", indicating the type of update needed in the client system. Data written to the IMPORT table is written in an HTML format. The IMPORT data is created by the data-processing schema based on integration processing. Records are initially written with a PROCESS_FLAG of "N", indicating that the record has not yet been processed in the client source system. As the data-transfer schema updates the records within the client's source system, the PROCESS_FLAG will be updated to "Y". The next execution of the integration process deletes any records that have a PROCESS_FLAG of "Y" before writing any new updates needed. Unlike the EXTRACT table, which contains a complete HTML file per study, the IMPORT table will contain HTML files per patient update as required by the source system (i.e. Medidata Rave).

[0182] TMS to Medidata Rave Integration application Data-Processing Tables - The following Oracle tables are created within each client-specific data-processing schema. The INBOUND tables are the staging tables of client data subject to TMS integration processing. The CODING tables are the "production" tables that contain all of the client's data that has been processed in TMS. The JN tables are the audit tables. Each series of tables contains the "core" table as well as a "DTLS" table and a "DERV" table. The "DTLS" table is applicable only to any ancillary data defined for the given DCM. The "DERV" table will contain the TMS "Derivation Path" data and will only contain data for TMS patient records that are VTA classifications.

[0183] TMSINT_PROC_INBOUND - The TMSINT_PROC_INBOUND table is used to stage client data by study/datafile that has been obtained from the data-transfer owned EXTRACT table. The INBOUND table is populated using the application metadata tables and the applicable data parsed from the HTML within the EXTRACT table. The EXTRACT HTML data is transformed into a record format applicable to TMS processing. All data processed in TMS via the integration application will be done based on data in the INBOUND tables. As data is processed in TMS, it will be created or updated in the CODING tables and can then be deleted from the INBOUND tables. The INBOUND tables are cleared of all data at the onset of integration processing.

[0184] TMSINT_PROC_INBOUND_DTLS - The TMSINT_PROC_INBOUND_DTLS table is used to stage client ancillary data for the given patient record in the TMSINT_PROC_INBOUND staging table where ancillary data has been defined in the application metadata table TMSINT_ADM_ADDON_COLS for the given study and DCM. One record is created for each applicable ancillary column defined.

[0185] TMSINT_PROC_INBOUND_DERV - The TMSINT_PROC_INBOUND_DERV table is used to stage the TMS derivation path records for the given patient record in the TMSINT_PROC_INBOUND staging table where the patient record is a VTA coding record. The TMSINT_PROC_INBOUND_DERV table is populated directly by the integration process during TMS processing and is not populated via EXTRACT data. When records are classified in TMS, the application views TMSINT_ADM_DERV_PATH_LEVELS and TMSINT_ADM_DERV_PATH_COLUMNS are used to generate a record in TMSINT_PROC_INBOUND_DERV for every derivation path level and column for the given patient record. Since the TMS derivation path is defined in TMS, the TMSINT_PROC_INBOUND_DERV table may contain records with NULL derivation path values. For example, if the full ATC path is applicable (ATC1..ATC5) but the coding term has a primary ATC at the ATC3 level, the derivation path levels for ATC4 and ATC5 will be NULL.

[0186] TMSINT_PROC_CODING - The TMSINT_PROC_CODING table is used to maintain all TMS related data for a given client. As data from INBOUND is processed in TMS, it will be either created in the CODING table or updated in the CODING table. The CODING table should always be a representation of the client's source data as of the last execution of the integration process. All data within the TMSINT_PROC_CODING table should be either a patient record in TMS_SOURCE_TERMS or an omission record in TMS_VT_OMISSIONS. The TMSINT_PROC_CODING table can be populated ONLY with data from TMSINT_PROC_INBOUND. With each execution of the integration process, data in the CODING table will be validated against the TMS repository. If the data has become an inaccurate representation of TMS repository data, the CODING record will be copied back to INBOUND to be reprocessed. Also, with each execution of the integration process, data in the CODING table will be validated against the INBOUND table when the study/file exists in INBOUND. If patient data is referenced in CODING that is no longer in the INBOUND table, the patient record has been deleted from the client source system and will subsequently be written to the JN tables and removed from the CODING tables. TMS repository data will likewise be removed.

[0187] TMSINT_PROC_CODING_DTLS - The TMSINT_PROC_CODING_DTLS table will store all ancillary data for the given patient record in TMSINT_PROC_CODING. Data in the TMSINT_PROC_CODING_DTLS table will be created/updated directly from TMSINT_PROC_INBOUND_DTLS during integration processing. Data in the TMSINT_PROC_CODING_DTLS table is only used by the TMS application when applicable TMS drill-down views have been defined for the given TMS external system and DCM.

[0188] TMSINT_PROC_CODING_DERV - The TMSINT_PROC_CODING_DERV table stores the complete TMS derivation path for any given patient record in TMSINT_PROC_CODING that exists as a coding record. Data in the TMSINT_PROC_CODING_DERV table is created/updated directly from TMSINT_PROC_INBOUND_DERV during integration processing. Data in the TMSINT_PROC_CODING_DERV table is only used for the purpose of returning one or more components of the TMS derivation path to the client source system.

[0189] TMSINT_PROC_CODING_JN - The TMSINT_PROC_CODING_JN table is an audit table that is used to maintain a history of changes made to patient records. Data from TMSINT_PROC_CODING is written to the TMSINT_PROC_CODING_JN table when a patient record is being deleted from the client source system and TMS, or when any component of the patient record is being updated, including ancillary data and TMS derivation path data. When a JN record is created, the JN_ENTRY_TS, JN_ENTRY_BY and JN_REASON are specified. The ENTRY_TS and ENTRY_BY of the JN record created will automatically become the UPDATED_TS and UPDATED_BY values in the TMSINT_PROC_CODING record that is superseding the JN record, except in the case of deleted patient records.

[0190] TMSINT_PROC_CODING_DTLS_JN - The TMSINT_PROC_CODING_DTLS_JN table is an audit table that is used to maintain a history of patient ancillary data. Data from TMSINT_PROC_CODING_DTLS is written to the TMSINT_PROC_CODING_DTLS_JN table when a patient record is being deleted from the client source system and TMS, or when any component of the patient record is being updated, including ancillary data and TMS derivation path data. When a JN record is created, the JN_ENTRY_TS, JN_ENTRY_BY and JN_REASON will be the same as the parent JN record in TMSINT_PROC_CODING_JN.

[0191] TMSINT_PROC_CODING_DERV_JN - The TMSINT_PROC_CODING_DERV_JN table is an audit table that is used to maintain a history of patient derivation path data (if applicable). Data from TMSINT_PROC_CODING_DERV will be written to the TMSINT_PROC_CODING_DERV_JN table when a patient record is being deleted from the client source system and TMS, or when any component of the patient record is being updated, including ancillary data and TMS derivation path data. When a JN record is created, the JN_ENTRY_TS, JN_ENTRY_BY and JN_REASON will be the same as the parent JN record in TMSINT_PROC_CODING_JN.

[0192] TMSINT_PROC_LOGGING - The TMSINT_PROC_LOGGING table is a logging table that will be populated with TMS integration processing runtime results. The logging table is truncated and repopulated with each execution of the integration. Data from the logging table can be queried or emailed for integration processing results and execution times. The logging table is currently used to specify how many records were processed, how many were new patient records, how many were patient update records, as well as how many resulted in classifications or omissions, etc.

[0193] TMSINT Application Administrator Database Packages - All TMS integration processing is executed via APIs within Oracle database packages. For ease of maintenance, all database packages are owned by the application administrator. All integration activity performed by the data-transfer schema and the data-processing schema is accomplished via APIs owned by the administrator. The individual data-transfer and data-processing schemas are granted execute privileges on the applicable APIs and have the appropriate Oracle synonyms. The application administrator owns three database packages. The TMSINT_ADM_UTILS package is designed exclusively for use by the application administrator. The TMSINT_XFER_UTILS package is designed for use by the data-transfer schemas, and the TMSINT_PROC_UTILS package is designed for use by the data-processing schemas. These database packages are described below.

[0194] TMSINT_ADM_UTILS Package - The TMSINT_ADM_UTILS database package is owned by the application administrator TMSINT_ADM. The administrator is the overseer of the application metadata and configures new clients for application use, makes modifications, and/or makes additions to existing clients. The APIs available to the application administrator for client setup and management from the TMSINT_ADM_UTILS database package are listed below. Additionally, a SQL "wrapper" script has been written to call each of the specified APIs, in which the user will be prompted for the API inputs and the modification can be captured in a LOG file with a before and after image where applicable. The "wrapper" is an alternative means to execute some of the client configuration APIs, with scripting allowing the user to be "prompted" for required input.

[0195] TMSINT_XFER_UTILS Package - The TMSINT_XFER_UTILS database package is owned by the application administrator TMSINT_ADM but is designed for execution exclusively by the client-specific data-transfer schemas (TMSINT_XFER_<client>). All query related functions are "wrappers" around the corresponding TMSINT_ADM_UTILS query functions but are additionally constrained to the <client> at hand based on the Oracle USER and the corresponding ClientID as indicated in the TMSINT_ADM_ORA_ACCOUNTS table for the Oracle USER. This means that the data-transfer account TMSINT_XFER_ABC can only see data for company "ABC" and the data-transfer account TMSINT_XFER_XYZ can only see data for company "XYZ", even though both are executing the same APIs within the same database package. All non-query related procedures and functions are applicable to and operate on the local tables owned by the Oracle data-transfer USER. A private synonym of TMSINT_XFER_UTILS that "points" to the application owner (TMSINT_ADM) package TMSINT_XFER_UTILS is created in the data-transfer schema automatically when the data-transfer schema is created.

[0196] TMSINT_PROC_UTILS Package - The TMSINT_PROC_UTILS database package is owned by the application administrator TMSINT_ADM but has been designed for execution exclusively by the client-specific data-processing schemas (TMSINT_PROC_<client>). All query related functions are "wrappers" around the corresponding TMSINT_ADM_UTILS query functions but are additionally constrained to the <client> at hand based on the Oracle USER and the corresponding ClientID as indicated in the TMSINT_ADM_ORA_ACCOUNTS table for the Oracle USER. This means that the data-processing account TMSINT_PROC_ABC can only see data for company "ABC" and the data-processing account TMSINT_PROC_XYZ can only see data for company "XYZ", even though both are executing the same API within the same database package. All non-query related procedures and functions are applicable to and operate on the local tables owned by the Oracle USER or the corresponding data-transfer schema (TMSINT_XFER_<client>) associated with the same client. A private synonym of TMSINT_PROC_UTILS that "points" to the application owner (TMSINT_ADM) package TMSINT_PROC_UTILS will be created in the data-processing schema automatically when the data-processing schema is created.

[0197] TMS to Medidata Rave Integration application Bi-Directional Data-Transfer Design - The TMS to Medidata Rave Integration (TMSINT) application process design contains a data-transfer schema and a data-processing schema. The data-transfer schema is responsible for obtaining clinical data from within the client's source system and writing the data to an Oracle staging table for additional processing. The data-transfer schema is also responsible for updating the client's source system with the TMS processing results and for all direct interfacing with the client's source system using web services.

[0198] The synchronous TMS to Medidata Rave Integration (TMSINT) application performs direct data transfer from Rave via Rave Web Services (RWS) technology. The TMSINT processes are exposed as web services and may also be scheduled via the Oracle DBMS_SCHEDULER at a frequency specified by the client, as sketched below. The TMSINT also exchanges standard clinical data and auxiliary data synchronously and with immediate confirmation.
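A sketch of such scheduling using the standard Oracle DBMS_SCHEDULER package; the job name and the procedure invoked are illustrative placeholders, not names defined by the application:

   -- Schedule the integration run hourly via DBMS_SCHEDULER.
   BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
         job_name        => 'TMSINT_HOURLY_RUN',                 -- placeholder
         job_type        => 'STORED_PROCEDURE',
         job_action      => 'TMSINT_ADM.TMSINT_PROC_UTILS.PROCESS_DATA',  -- placeholder
         start_date      => SYSTIMESTAMP,
         repeat_interval => 'FREQ=HOURLY',
         enabled         => TRUE);
   END;
   /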

[0199] RWS uses the Representational State Transfer (RESTful) architecture. Data is posted to or retrieved from Rave using HTTP protocol messages posted to specific URLs. Each message receives an immediate success or failure response. In the event of a failure, any pending changes are rolled back. Since RWS uses "RESTful" web services, it does not mandate the use of either the Simple Object Access Protocol (SOAP) or the Web Services Description Language (WSDL). The Medical Coding Solution supplied by DBMS allows the loading of HTML and text files as data sources. The overall flow is:

• Extract from Rave

• Integrate to TMS

• Import Classification and Coding Queries (actions) to Rave

[0200] TMS to Medidata Rave Integration Application Data-Transfer Process - Data is entered into the Medidata Rave system. The TMS to Medidata Rave Integration (TMSINT) application is a web service that can run on a scheduler or manually.

[0201] For example, running the TMS to Medidata Rave Integration (TMSINT) application Extract from Rave for Verbatim Terms extracts auxiliary data into the TMS Integration inbound tables.

[0202] Running the TMS to Medidata Rave Integration (TMSINT) application TMS Integration processes all verbatim terms that are marked for processing, sends them to TMS and inserts into the TMS_SOURCE_TERMS table all of their associated key data. The TMS to Medidata Rave Integration (TMSINT) application triggers TMS Synchronization and Autoclassification. In particular:

• The TMS to Medidata Rave Integration (TMSINT) application allows the loading of previously coded terms, preserving original coding.

• During Autoclassification, TMS searches for an exact match to a dictionary term or an existing VTA in the Domain and links the TMS verbatim term to a dictionary term.

• TMS first searches all terms in the Dictionary at the Classification Level, including dictionary terms plus Verbatim Terms that have been mapped to Dictionary Terms in the Domain.

• TMS uses a built-in matching algorithm which matches terms regardless of case and reduces multiple spaces to one space.

• For each term that is automatically matched/classified in TMS, the verbatim term is associated with a VTA linking it to a dictionary term. During the TMS to Medidata Rave Integration (TMSINT) application processing, all the derived data is then written to the Outbound Views for processing to the source EDC system; a propagation feature enables coding of identical previously coded verbatim terms to their VTAs.

• If the search does not produce a match, TMS can use any custom-defined Autocoder Search Objects to find a match, which automatically matches non-equivalent terms. If a match is still not found, manual intervention is required.

• The TMS to Medidata Rave Integration (TMSINT) application creates an omission in TMS and a record in the source terms table. An omission is handled in TMS by either coding the term or by applying an Action.

• The TMS to Medidata Rave Integration (TMSINT) application is then able to resubmit terms to be re-coded if a verbatim, route or indication field is modified.

• The TMS to Medidata Rave Integration (TMSINT) application is written so that subsequent data extractions will process the source data/metadata extract where the verbatim term, route or indication field has been modified.

[0203] During the system data processing into TMS, the data in the INBOUND table is reprocessed and synchronized with the TMS data to ensure the modified data is updated and omissions are either coded or a TMS Action (Query) is raised in TMS.

[0204] When the TMS to Medidata Rave Integration (TMSINT) application is run for TMS Integration and Import, the application imports into the Medidata Rave EDC system all coded data from each individual study/job/domain, including dictionary version information and WHODrug ingredients. The TMS to Medidata Rave Integration (TMSINT) application also creates open queries in the Medidata Rave EDC system for the Sites to respond to the Coders' coding queries.

[0205] The TMS to Medidata Rave Integration application Bi-Directional Detailed Design provides for extracting Client Source Data via web services by extracting the study dataset from the source Medidata Rave EDC system using the Rave Web Services (RWS) URL defined as https://{host}/RaveWebServices/studies/{study-id}/datasets/{regular-or-raw}. Client Source Data can likewise be updated via web services by updating/creating the coding assignment and related dictionary hierarchy, or by updating/opening a query, for each such identified item on the source Medidata Rave EDC system using the Rave Web Services. The process comprises, continuously (see Fig 2), the following steps (a data-transfer sketch follows the list):

• Extracting datasets from Medidata

• Validating against TMS and identifying the updates/queries, and

• Importing the updates/open queries to Medidata.
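A sketch of the extraction step using the Oracle UTL_HTTP package. The host, study name, credentials, and the EXTRACT column names (HTML_TEXT, PROCESS_FLAG) are illustrative assumptions; a production deployment would also require Oracle network ACL and wallet configuration for HTTPS, and assignment of the RECORD_SEQNO value, both omitted here:

   -- Fetch a study dataset from Rave Web Services and stage it as raw
   -- HTML lines pending processing (PROCESS_FLAG = 'N').
   DECLARE
      vReq   UTL_HTTP.req;
      vResp  UTL_HTTP.resp;
      vLine  VARCHAR2(32767);
   BEGIN
      vReq := UTL_HTTP.begin_request(
                 'https://host/RaveWebServices/studies/MYSTUDY/datasets/regular');
      UTL_HTTP.set_authentication(vReq, 'rws_user', 'rws_password');  -- basic auth
      vResp := UTL_HTTP.get_response(vReq);
      BEGIN
         LOOP
            UTL_HTTP.read_line(vResp, vLine, TRUE);
            INSERT INTO tmsint_xfer_html_extract (html_text, process_flag)
            VALUES (vLine, 'N');
         END LOOP;
      EXCEPTION
         WHEN UTL_HTTP.end_of_body THEN
            UTL_HTTP.end_response(vResp);
      END;
      COMMIT;
   END;
   /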

[0206] TMS Integration TMS Data-Processing Design - The TMS to Medidata Rave Integration (TMSINT) application process design contains a data-transfer schema and a data-processing schema. The data-processing schema is responsible for obtaining clinical data that has been staged by the data-transfer schema, processing the data in TMS, and staging the TMS processing results in the data-transfer schema to be written back to the client's source system by the data-transfer schema. Each of the processing steps performed by the data-processing schema is described below.

[0207] Obtaining Client Data for TMS Processing from EXTRACT - The following describes how the data-processing schema will obtain client data from the data-transfer schema that has been extracted and staged for TMS processing.

[0208] 1. Determine if there is Pending Data to be Processed in TMS - Upon execution of the integration process, the data-processing schema determines if there is pending client data that has been staged by the data-transfer schema. Client data is staged by the data-transfer schema in a raw HTML format within the staging table TMSINT_XFER_HTML_EXTRACT. Data that is pending processing in TMS will have a PROCESS_FLAG value of "N". If no pending data exists, processing will continue as described in the TMS Processing section below. If pending data exists, processing will continue with the following steps.

[0209] 2. Validate Datafiles - Prior to processing, each distinct datafile in the EXTRACT staging table is validated to ensure that the datafile has been written in its entirety. This will be done by verifying that the datafile contains both a beginning of file (BOF) marker and an end of file (EOF) marker.

[0210] The BOF/EOF marker for all client datafiles is stored in the NEW_FILE_HTML_TAG column of the application metadata table TMSINT_ADM_DATAFILES. The NEW_FILE_HTML_TAG (e.g. <ODM>) will be used to derive the BOF and EOF file markers. The BOF marker will consist of the NEW_FILE_HTML_TAG less the trailing ">" (i.e. <ODM). The EOF marker will consist of the NEW_FILE_HTML_TAG with the "<" replaced with "</" (i.e. </ODM>), as sketched below.

[0211] Each EXTRACT datafile will be checked to ensure the BOF/EOF markers are present. If the EOF marker is not found, the file is incomplete. This may occur if the file is actively being written to the EXTRACT table by the data-transfer schema, in which case the datafile will simply be processed with the next execution of the integration process. If the datafile does not have both the BOF marker and the EOF marker, the file will be omitted from current processing. If the datafile contains both the BOF and EOF markers, processing will continue with the step below.
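A minimal sketch of this marker derivation in PL/SQL, using the <ODM> example above:

   -- Derive BOF/EOF markers from the configured tag, e.g. '<ODM>':
   --   BOF = '<ODM'   (tag minus the trailing '>')
   --   EOF = '</ODM>' (tag with '<' replaced by '</')
   DECLARE
      vTag  VARCHAR2(100) := '<ODM>';
      vBOF  VARCHAR2(100);
      vEOF  VARCHAR2(100);
   BEGIN
      vBOF := SUBSTR(vTag, 1, LENGTH(vTag) - 1);
      vEOF := '</' || SUBSTR(vTag, 2);
      DBMS_OUTPUT.put_line(vBOF || ' / ' || vEOF);  -- prints: <ODM / </ODM>
   END;
   /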

[0212] 3. Format HTML Records into TMS Records - Data in the EXTRACT staging table exists in a raw HTML format. Before TMS processing may occur, the HTML records are grouped into logical "patient" records. This will be done using the beginning of record (BOR) marker, the end of record (EOR) marker and applicable client metadata.

[0213] The BOR/EOR marker for all client datafiles is stored in the NEW_REC_HTML_TAG column of the application metadata table TMSINT_ADM_DATAFILES. The NEW_REC_HTML_TAG (e.g. <ClinicalData>) will be used to derive the BOR and EOR file markers. The BOR marker will consist of the NEW_REC_HTML_TAG less the trailing ">" (i.e. <ClinicalData). The EOR marker will consist of the NEW_REC_HTML_TAG with the "<" replaced with "</" (i.e. </ClinicalData>). Each EXTRACT datafile record will be checked to ensure the BOR/EOR markers are present.

[0214] Each HTML record written to the EXTRACT table is assigned a unique identifier (RECORD_SEQNO). The sequential order of the RECORD_SEQNO is the order in which the HTML record was inserted. While the RECORD_SEQNO identifies individual records, the PROCESS_RECNO column will be used during this process to assign logical record numbers based on the BOR/EOR record markers. Datafile records will be read in the order of insert. The logical record number (PROCESS_RECNO) will be updated to a 1-up number each time the BOR marker is read, as sketched below. All records prior to the first occurrence of the BOR marker will be assigned a PROCESS_RECNO of 0, since these records are HTML header records. As records are sequentially read, each time the BOR marker is found, the PROCESS_RECNO will be incremented and it will be confirmed that the previous record read contained the EOR marker. If the datafile is found to have any unmatched BOR/EOR markers, the file will be deemed invalid and will be omitted from further processing. If the datafile is found to be valid, the PROCESS_FLAG will be updated from "N" to "P", the processed timestamp will be updated to the current SYSDATE and the PROCESS_RECNO will be updated to reflect the logical patient numbers.
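A sketch of the logical numbering pass, assuming the BOR marker has been derived as described above (the cursor loop and column names are simplified assumptions; EOR validation is omitted):

   -- Assign logical record numbers: 0 for header rows before the first BOR
   -- marker, then a 1-up number starting at each BOR marker.
   DECLARE
      vRecNo  NUMBER := 0;
   BEGIN
      FOR r IN (SELECT record_seqno, html_text
                  FROM tmsint_xfer_html_extract
                 WHERE process_flag = 'N'
                 ORDER BY record_seqno)
      LOOP
         IF INSTR(r.html_text, '<ClinicalData') > 0 THEN
            vRecNo := vRecNo + 1;   -- a new logical patient record begins
         END IF;
         UPDATE tmsint_xfer_html_extract
            SET process_recno = vRecNo
          WHERE record_seqno = r.record_seqno;
      END LOOP;
   END;
   /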

[0215] 4. Clear the Data-Processing INBOUND Staging Tables - The INBOUND tables exist within the data-processing schema and are the staging tables for ALL data to be processed in TMS. The INBOUND tables include the tables TMSINT_PROC_INBOUND, TMSINT_PROC_INBOUND_DTLS, and TMSINT_PROC_INBOUND_DERV. The TMSINT_PROC_INBOUND table will contain all information about a patient record. The TMSINT_PROC_INBOUND_DTLS table will contain all of the ancillary data for the corresponding TMSINT_PROC_INBOUND patient record. The TMSINT_PROC_INBOUND_DERV table will contain the TMS derivation path for the corresponding patient record in TMSINT_PROC_INBOUND (if applicable). The INBOUND tables, which are staging tables, will be cleared of any data at the onset of each process. As data is processed successfully in TMS from the INBOUND tables, the data will be moved to the corresponding CODING tables. The only time data will be left in the INBOUND tables from previous processing is if an error occurred. Since all client data obtained from EXTRACT is cumulative, the previous data in INBOUND may safely be removed at the onset of each process.

[0216] 5. Stage HTML EXTRACT Data in INBOUND Staging Tables - Data from the EXTRACT staging table having a PROCESS_FLAG of "P" is copied from the EXTRACT staging table in the data-transfer schema to the INBOUND staging tables in the data-processing schema. The application metadata tables are used in conjunction with the EXTRACT data to populate the INBOUND tables. The metadata tables specify not only core TMS information about a given datafile, such as the TMS dictionary, TMS domain and TMS external system, but also contain the directive on HOW individual data items such as the VT and the patient "key" values are to be parsed from the HTML data.

• The TMSINT_ADM_DATAFILES metadata table will be used to obtain the DatafileID, ClientID, TMS dictionary information, TMS domain information, and the TMS External System name for the given client datafile staged in EXTRACT.

• The TMSINT_ADM_DICT_MAPPING metadata table will be used to determine what DCMs and VTs are contained within the HTML data and HOW to parse the data values from the HTML using the HTML_TAG and HTML_PREFIX values for the given client datafile staged in EXTRACT.

• The TMSINT_ADM_DICT_MAPPING metadata table will also be used to determine HOW to parse the RESPONSE value from the HTML using the RESPONSE_HTML_TAG and RESPONSE_HTML_PREFIX values for the given client datafile staged in EXTRACT. Responses are applicable only to previously processed data in which the specified VT is an omission in TMS, the VT has an open action, and the site (client source system) is responding to the action.

• The TMSINT_ADM_EXT_MAPPING metadata table will be used to determine what patient related "key" values are contained within the HTML data based on the TMS External System and HOW to parse each of the data values from the HTML using the HTML_TAG and HTML_PREFIX values for the given client datafile staged in EXTRACT.

[0217] Using the application metadata tables referenced above in conjunction with the staged EXTRACT data and the logical record indicators (PROCESS_RECNO), data is selected, formatted and written to the data-processing INBOUND staging table TMSINT_PROC_INBOUND based on the mappings below. Certain data items will be obtained from the application metadata tables based on the given client datafile, while other data will be parsed directly from the HTML data in the EXTRACT staging table for the given patient record, as in the parsing sketch below.
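One plausible reading of the tag/prefix parsing, shown as a standalone helper. This is an illustrative interpretation, not the application's actual parser; the use of the closing double quote as a value delimiter is an assumption:

   -- Parse a single value from a patient's HTML given the mapping's tag and
   -- prefix: locate the tag, then take the text following the prefix up to
   -- the next double quote.
   CREATE OR REPLACE FUNCTION parse_html_value(
      pHtml   IN VARCHAR2,
      pTag    IN VARCHAR2,
      pPrefix IN VARCHAR2) RETURN VARCHAR2
   IS
      vStart  PLS_INTEGER;
   BEGIN
      vStart := INSTR(pHtml, pTag);
      IF vStart = 0 THEN
         RETURN NULL;                      -- tag not present in this record
      END IF;
      vStart := INSTR(pHtml, pPrefix, vStart);
      IF vStart = 0 THEN
         RETURN NULL;                      -- prefix not found after the tag
      END IF;
      vStart := vStart + LENGTH(pPrefix);
      RETURN SUBSTR(pHtml, vStart,
                    INSTR(pHtml, '"', vStart) - vStart);
   END parse_html_value;
   /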

[0218] 6. Stage HTML EXTRACT Ancillary Data in INBOUND Staging Tables - After the TMSINT_PROC_INBOUND table has been successfully populated, the TMSINT_PROC_INBOUND_DTLS table is repopulated with any required ancillary data for the datafile DCM based on defined metadata. Ancillary data is defined for a given datafile at the DCM level and is contained in the metadata table TMSINT_ADM_ADDON_COLS. Ancillary data is applicable to TMS drill-down views only; therefore, not all datafile DCMs will have defined ancillary data.

[0219] The TMSINT_ADM_ADDON_COLS metadata table is used to determine what patient related ancillary values are contained within the HTML data and HOW to parse each of the ancillary data values from the HTML using the HTML_TAG and HTML_PREFIX values for the given EXTRACT datafile and DCM. Using the TMSINT_PROC_INBOUND table in conjunction with the application metadata table referenced above and the EXTRACT table, the TMSINT_PROC_INBOUND_DTLS table is populated based on the mappings below. Note that the logical record number in the EXTRACT table is temporarily stored in the TMS_ERROR_MSG column of the TMSINT_PROC_INBOUND table to enable a direct link between the INBOUND record and the HTML EXTRACT logical patient record.

[0220] 7. Maintaining the HTML File Header Key Values - For each EXTRACT datafile that was processed and successfully copied to the INBOUND tables, the HTML file header is obtained and written to the LAST_FILE_HEADER column value in the application metadata table TMSINT_ADM_DATAFILES. The HTML header in the EXTRACT table will have been updated to reflect a PROCESS_RECNO of 0 during the formatting process. This HTML file header contains key information that is specified in the generated HTML file data that will be written back to the data-transfer IMPORT staging tables. This key information is required by the client source system for the subsequent updates based on TMS processing results.

[0221] The following is an example HTML file header containing key information generated by the client source system:

<ODM CreationDateTime="2016-05-20T09:25:27.026-00:00" FileOID="f053546e-8445-4d5d-aa70-e21c6c28adcc" FileType="Snapshot" ODMVersion="1.3" xmlns="http://www.cdisc.org/ns/odm/v1.3" xmlns:mdsol="http://www.mdsol.com/ns/odm/metadata" xmlns:xlink="http://www.w3.org/1999/xlink">

[0222] 8. Checking for Duplicated INBOUND Records - The TMSINT_PROC_INBOUND table will be checked for exact duplicate records. Any record that is determined to be a duplicate is removed from the INBOUND tables. This situation is not likely to occur; however, a check has been put in place to ensure patient records are not duplicated. Duplicated TMSINT_PROC_INBOUND records are denoted as records in which the following column values have more than one occurrence (a query sketch follows the list):

• TMS_HASH_KEY

• CLIENT_ID

• DATAFILE_ID

• DEF_DICTIONARY_ID

• DEF_DOMAIN_ID

• VT_UPPER

• EXT_VALUE_1 .. EXT_VALUE_8
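A minimal SQL sketch of this duplicate check, assuming the listed columns exist as named on TMSINT_PROC_INBOUND:

    -- Report groups of INBOUND rows whose key columns occur more than once.
    SELECT tms_hash_key, client_id, datafile_id,
           def_dictionary_id, def_domain_id, vt_upper,
           ext_value_1, ext_value_2, ext_value_3, ext_value_4,
           ext_value_5, ext_value_6, ext_value_7, ext_value_8,
           COUNT(*) AS occurrences
      FROM tmsint_proc_inbound
     GROUP BY tms_hash_key, client_id, datafile_id,
           def_dictionary_id, def_domain_id, vt_upper,
           ext_value_1, ext_value_2, ext_value_3, ext_value_4,
           ext_value_5, ext_value_6, ext_value_7, ext_value_8
    HAVING COUNT(*) > 1;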

[0223] 9. Checking for Multiple Patient INBOUND Records - The TMSINT_PROC_INBOUND table is checked for patient records having more than one occurrence, where the record is not an exact duplicate but a subsequent update to the same patient record. The application does not currently support processing multiple versions of the same datafile within the EXTRACT staging table. However, this functionality has been pre-added to handle multiple versions of a patient record if added in future releases.

[0224] Records are selected from TMSINT_PROC_INBOUND in which the TMS_SOURCE_TERM_ID and TMS_OCCURRENCE_ID occur multiple times. For each of these records, either the VT, the external value representing the "SITE" or the action/response values differ. Each record with the same TMS_SOURCE_TERM_ID will be selected and ordered based on the order of insert. The second and subsequent records are updated to reflect a 1-up TMS_OCCURRENCE_ID value, as sketched below.
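One way the 1-up numbering could be expressed, as a non-authoritative sketch; the use of INBOUND_ID as the insert-order column is an assumption.

    -- Assign TMS_OCCURRENCE_ID = 2, 3, ... to the second and subsequent
    -- records sharing a TMS_SOURCE_TERM_ID, ordered by insert order.
    MERGE INTO tmsint_proc_inbound t
    USING (SELECT inbound_id,
                  ROW_NUMBER() OVER (PARTITION BY tms_source_term_id
                                     ORDER BY inbound_id) AS occ
             FROM tmsint_proc_inbound) s
    ON (t.inbound_id = s.inbound_id)
    WHEN MATCHED THEN
        UPDATE SET t.tms_occurrence_id = s.occ
        WHERE s.occ > 1;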

[0225] 10. Delete Processed Data from EXTRACT - Once data selected for processing from the EXTRACT staging table has been successfully copied to the INBOUND tables for TMS processing, the data is removed from the EXTRACT staging table, which allows new data to be written to the EXTRACT table during the next execution.

[0226] TMS Integration Processing - The following describes how the data-processing schema processes client data in TMS. At the time of TMS processing, any pending client data from the data-transfer EXTRACT staging table will have been written to the data-processing schema INBOUND staging tables. The following processes are then executed during TMS processing:

[0227] 1. TMS Synchronization - At the onset of every execution of the TMS integration process, the TMS API synchronization with the TMS repository is executed (TMS_USER_SYNCHRONIZATION.SYNCHRONIZE). This ensures that any activity that has occurred via the TMS front-end, or any dictionary versioning activity, is propagated to the TMS patient data in the TMS_SOURCE_TERMS and TMS_VT_OMISSIONS tables.
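A sketch of this call; the procedure is named in this document, but the no-argument invocation shown here is an assumption that may differ by TMS release.

    BEGIN
        -- Propagate TMS front-end and dictionary versioning activity
        -- before integration processing begins.
        tms_user_synchronization.synchronize;
    END;
    /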

[0228] 2. Analyze Data-Processing Tables - At the onset of every execution of the TMS integration processing, the data-processing schema tables and indexes are analyzed for performance.
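For example, the analysis step could rely on the standard Oracle DBMS_STATS package; the schema name below is hypothetical.

    BEGIN
        -- Gather optimizer statistics for a data-processing table and,
        -- via CASCADE, its indexes.
        DBMS_STATS.GATHER_TABLE_STATS(
            ownname => 'TMSINT_PROC_INV',    -- hypothetical client schema
            tabname => 'TMSINT_PROC_INBOUND',
            cascade => TRUE);
    END;
    /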

[0229] 3. Remove Obsolete Client Patient Data from TMS - The patient data extracted from the client's source system for a given datafile (i.e. Study) is cumulative and always represents the study's current patient data. If a patient record for the study exists in TMS, either as a coding record in TMS_SOURCE_TERMS or an omission record in TMS_VT_OMISSIONS, but no longer appears in INBOUND data when the datafile is being processed, then the patient has been deleted within the client's source system and will likewise need to be deleted from the client CODING tables and the TMS repository.

[0230] For each datafile currently being processed in INBOUND, a determination is made as to which patient records, based on TMS_HASH_KEY, exist in the TMSINT_PROC_CODING table that are no longer represented in TMSINT_PROC_INBOUND. These are patient records that have been deleted from the client source system and will be deleted from TMS. For each of these records, the action in Table 6 is taken:

TABLE 6

[0231] 4. Remove Obsolete Client VTAs from the TMS Repository - The VTAs within the TMS repository are kept synchronized for a given study. Each datafile being processed in INBOUND represents a single study in the client's source system. Under the current application design, each study has its own designated study specific TMS domain. As datafiles are processed in INBOUND, any active VTA that exists in the TMS repository that is no longer represented as a VT in the INBOUND table is declassified in TMS using the API TMS_USER_RECLASSIFICATION.UNDOVTA. This will handle the removal of VTAs in TMS for deleted patients as well as updates made to VTs within the client source system. Active VTAs in TMS always represent the current coding for the given study based on the last integration execution.

[0232] 5. Flag Previously Processed Client Data for Reprocessing in TMS - Each execution of the integration process ensures the previously processed client data that exists in the TMSINT_PROC_CODING table is still "synchronized" with the TMS repository. Client patient data in the application tables may no longer accurately reflect the TMS repository data when a dictionary version upgrade is applied to the dictionary or when transactions are completed via the TMS front-end, such as reclassifications or any TMS omission management functions.

[0233] All patient data that exists in the application CODING tables that does not already exist in the INBOUND tables is checked to ensure accuracy with the TMS repository. If any of the TMS related columns are found to be inaccurate, the data from the TMSINT_PROC_CODING and TMSINT_PROC_CODING_DTLS tables are copied back to the TMSINT_PROC_INBOUND and TMSINT_PROC_INBOUND_DTLS tables to be reprocessed with the current EXTRACT data that already exists in INBOUND (if any). It is possible to run the TMS integration process even when no EXTRACT data exists to force the client's data to be "synchronized" with current TMS repository data.

[0234] 6. Processing INBOUND Data in TMS - All data that exists in the TMSINT_PROC_INBOUND table is processed in TMS via the TMS API TMS_USER_FULLAUTOCODE.CLASSIFYTERM. All TMS processing results are captured in the TMSINT_PROC_INBOUND table within the TMS related columns, depending on whether the patient record resulted in being a classification record or an omission record in TMS.

[0235] 7. Obtaining the INBOUND Derivation Path Data - After all data in the TMSINT_PROC_INBOUND table has been processed in TMS, the TMSINT_PROC_INBOUND_DERV table is populated with the TMS "Derivation Path" for each patient record in TMSINT_PROC_INBOUND where the TMS_VTAID is non-null. The TMS API TMS_USER_AUTOCODE.TryClassifying will be called in conjunction with custom code to retrieve all TMS columns as defined in the views TMSINT_ADM_DERV_PATH_LEVELS and TMSINT_ADM_DERV_PATH_COLUMNS. Once the entire TMS derivation path has been obtained, the data is inserted into the TMSINT_PROC_INBOUND_DERV table. One record is created in the TMSINT_PROC_INBOUND_DERV table for every dictionary level/column combination for the given patient record identified by the assigned INBOUND_ID. Since ALL level/column combinations are created, it is possible that some derivation path data values will be NULL. An example is the Drug dictionary, which can have a full ATC hierarchical derivation path consisting of ATC levels 1 through 5. If the VTA classification term has a "Primary Link" ATC that is an ATC3, then the corresponding ATC4 and ATC5 derivation path values will be NULL.

[0236] 8. Inserting CODING Data from New Patient INBOUND Data - After all of the patient data in the TMSINT_PROC_INBOUND table has been processed in TMS, the INBOUND data for new patients is initially created in the CODING tables. New patient data in INBOUND that has not previously been processed will have both a TMS_HASH_KEY value and a TMS_SOURCE_TERM_ID that do not yet exist in the CODING table. These records are new patient records and can simply be inserted into the CODING tables. In this case, all data in the TMSINT_PROC_INBOUND, TMSINT_PROC_INBOUND_DTLS and TMSINT_PROC_INBOUND_DERV tables for the given patient will be inserted into the TMSINT_PROC_CODING, TMSINT_PROC_CODING_DTLS and TMSINT_PROC_CODING_DERV tables, respectively. After the new patient records have been successfully created in the CODING tables, the corresponding patient records will be deleted from the INBOUND tables.

[0237] Data for new patients is processed collectively as one Oracle database transaction using Oracle object type arrays. All CODING inserts and all INBOUND deletes must be successful for the changes to be committed to the database. In the event of an Oracle error, no database changes will be committed and the TMS integration process will fail with the appropriate error message.

• TMSINT_PROC_INBOUND → TMSINT_PROC_CODING

• TMSINT_PROC_INBOUND_DTLS → TMSINT_PROC_CODING_DTLS

• TMSINT_PROC_INBOUND_DERV → TMSINT_PROC_CODING_DERV

• TMSINT_PROC_INBOUND → Delete

• TMSINT_PROC_INBOUND_DTLS → Delete

• TMSINT_PROC_INBOUND_DERV → Delete

[0238] 9. Updating CODING Data from Updated Patient INBOUND Data - After all of the patient data in the TMSINT_PROC_INBOUND table has been processed in TMS, the INBOUND data for previously processed patients is processed. Previously processed patient records have both a TMS_HASH_KEY and a TMS_SOURCE_TERM_ID value that is pre-existing in TMSINT_PROC_CODING. Once the previously processed patient records are identified, the INBOUND data for each patient is compared to the CODING data. If no data values differ, the INBOUND patient record may simply be deleted from INBOUND since no additional action is required.

[0239] If one or more data value differences are detected, the current CODING records for the patient are copied as-is to the JN tables to preserve a pre-update copy of the patient record. The TMSINT_PROC_CODING record, all corresponding TMSINT_PROC_CODING_DTLS records and all corresponding TMSINT_PROC_CODING_DERV records will be copied to the JN tables TMSINT_PROC_CODING_JN, TMSINT_PROC_CODING_DTLS_JN and TMSINT_PROC_CODING_DERV_JN, respectively. The reason for the change is annotated in all JN tables in the JN_REASON column, as well as the date of the change in JN_ENTRY_TS and the user making the change in JN_ENTRY_BY. The JN_REASON, JN_ENTRY_TS and JN_ENTRY_BY will be the same value in all of the JN tables. Additionally, the JN_ENTRY_TS will become the UPDATED_TS for all related CODING tables applicable to the new updated values. The UPDATED_TS and UPDATED_BY values of the most recent patient CODING record will always point to the JN records that it supersedes. After the current CODING records have been inserted in the JN tables, the CODING tables are then updated to reflect the values of the INBOUND tables. After the CODING tables have been updated, the INBOUND record can be deleted.

[0240] Data for patient updates is processed collectively as one Oracle database transaction using Oracle object type arrays. All JN inserts, CODING inserts/updates/deletes and all INBOUND deletes must be successful for the changes to be committed to the database. In the event of an Oracle error, no database changes are committed and the TMS integration process will fail with the appropriate error message.

• TMSINT_PROC_CODING → TMSINT_PROC_CODING_JN

• TMSINT_PROC_CODING_DTLS → TMSINT_PROC_CODING_DTLS_JN

• TMSINT_PROC_CODING_DERV → TMSINT_PROC_CODING_DERV_JN

• TMSINT_PROC_INBOUND → TMSINT_PROC_CODING

• TMSINT_PROC_INBOUND_DTLS → TMSINT_PROC_CODING_DTLS

• TMSINT_PROC_INBOUND_DERV → TMSINT_PROC_CODING_DERV

• TMSINT_PROC_INBOUND → Delete

• TMSINT_PROC_INBOUND_DTLS → Delete

• TMSINT_PROC_INBOUND_DERV → Delete

When deleting data from INBOUND, all remaining patient records are deleted, including those patient records that did not have data changes. The INBOUND tables should have no remaining data after successful processing.

[0241] Returning TMS Processing Results (Import) - The following section describes how client TMS processing results are created in the data-transfer staging table TMSINT_XFER_HTML_IMPORT, to be updated in the client source system by the data-transfer schema.

[0242] 1. Delete Previously Generated IMPORT Data - Prior to generating data in the TMSINT_XFER_HTML_IMPORT table, all previously generated data will be deleted from the table where the PROCESS_FLAG is "Y", indicating that the data-processing schema has successfully updated the client source system.
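A minimal sketch of this cleanup, using the table and flag named above:

    -- Remove IMPORT rows already applied to the client source system.
    DELETE FROM tmsint_xfer_html_import
     WHERE process_flag = 'Y';
    COMMIT;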

[0243] 2. Generating Coding Data for Import - Any patient data that was initially classified in TMS per the current integration process, or any patient classification that was modified in TMS per the current integration process, is written to the data-transfer staging table TMSINT_XFER_HTML_IMPORT so that the client's source system may be updated with the current TMS processing results.

[0244] At the onset of integration processing, the execution timestamp is captured. This processing timestamp is then used to select all patient coding data from TMSINT_PROC_CODING where either the CREATION_TS or the UPDATE_TS is greater than or equal to the execution timestamp. This selects not only new patient records that were classified but also patient records that were updated by the current process.
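As a sketch, with :execution_ts standing in for the captured processing timestamp:

    -- Select new and updated patient coding records for export.
    SELECT *
      FROM tmsint_proc_coding
     WHERE creation_ts >= :execution_ts
        OR update_ts   >= :execution_ts;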

[0245] As with data from the TMSINT_XFER_HTML_EXTRACT table that has been extracted from the client source system, data written to the TMSINT_XFER_HTML_IMPORT table is also in an HTML format. Unlike the extract data, however, which is one single HTML file per study with multiple embedded records, the import data will be constructed as one HTML file per patient record to be updated.

• The import HTML file header is constructed from the metadata table TMSINT_ADM_DATAFILES columns LAST_FILE_HEADER and POST_FILE_HEADER_SUFFIX. The value "Snapshot" is replaced with "Transactional" within the LAST_FILE_HEADER value. Additionally, the NEW_REC_HTML_TAG is used to derive the BOR/EOR markers.

• The patient "key" values as specified in the TMS external system are constructed in an HTML format using the TMSINT_PROC_EXT_MAPPING values of HTML_TAG, HTML_PREFIX and CODING_HTML_SUFFIX in conjunction with the EXT_VALUE_# in TMSINT_PROC_CODING for the given patient, in the order specified by POST_COLUMN_ID.

• Although any TMS related values can be returned, to include the full TMS derivation path, by default the application will return the VTA classification term, the dictionary version and, in the case of the drug dictionary, the list of ingredients. These values are constructed in an HTML format using the TMSINT_PROC_DICT_MAPPING values of HTML_TAG, HTML_PREFIX and CODING_HTML_SUFFIX in conjunction with the corresponding data values from TMSINT_PROC_CODING and TMSINT_ADM_DICT_ING_LIST for the given patient.

• The appropriate HTML tags are added for the EOR and EOF markers and the record is created in the TMSINT_XFER_HTML_IMPORT table with a FILE_TYPE of "CODING" and a PROCESS_FLAG of "N".

[0246] Generating Action Data for Import - Any patient omission data that has had an action either initially created or updated in TMS, and that was created or updated in the CODING table per the current integration process, is written to the data-transfer staging table TMSINT_XFER_HTML_IMPORT so that the client's source system can be updated with the current TMS response results.

• At the onset of integration processing, the execution timestamp is captured. This processing timestamp is then used to select all patient omission data from the TMSINT_PROC_CODING where UPDATE_TS is greater than or equal to the execution timestamp, the TMS_ACTION_TEXT is not null and the TMSINT_PROC_CODING_JN table indicates the creation or update of an action.

• Data written to the TMSINT_XFER_HTML_IMPORT table is also in an HTML format. Unlike the extract data, however, which is one single HTML file per study with multiple embedded records, the import data will be constructed as one HTML file per patient record to be updated.

• The import HTML file headers are constructed from the metadata table TMSINT_ADM_DATAFILES columns LAST_FILE_HEADER and POST_FILE_HEADER_SUFFIX. The value "Snapshot" is replaced with "Transactional" within the LAST_FILE_HEADER value. Additionally, the NEW_REC_HTML_TAG is used to derive the BOR/EOR markers.

• The patient "key" values as specified in the TMS external system are constructed in an HTML format using the TMSINT_PROC_EXT_MAPPING values of HTML_TAG, HTML_PREFIX and ACTION_HTML_SUFFIX in conjunction with the EXT_VALUE_# in TMSINT_PROC_CODING for the given patient, in the order specified by POST_COLUMN_ID.

• The VT is returned constructed in an HTML format using the TMSINT_PROC_DICT_MAPPING values of HTML_TAG, HTML_PREFIX and ACTION_HTML_SUFFIX in conjunction with the corresponding VT value from TMSINT_PROC_CODING.

• Although any TMS related values can be returned, by default the application returns the TMS omission status and the TMS omission action.

These values are constructed in an HTML format using the TMSINT_PROC_DICT_MAPPING value of RETURN_ACTION_HTML, in which the values <OMISSION_STATUS> and <OMISSION_ACTION> are replaced with the corresponding data values from TMSINT_PROC_CODING and TMS.

• The appropriate HTML tags are added for the EOR and EOF markers and the record is created in the TMSINT_XFER_HTML_IMPORT table with a FILE_TYPE of "CODING" and a PROCESS_FLAG of "N".

[0247] The following example demonstrates integration of the Medidata Clinical Trial System with the Oracle Coding System.

Client Coding configuration - setting up terms for coding.

[0248] The DBMS TMS to Medidata Rave Integration (TMSINT) application Sample Client TMS Integration Definition Worksheet shown in Fig. 3 is used by the Administrator to configure the verbatim, route and indication fields and all other definitions and configurations for the client.

[0249] The DBMS TMS to Medidata Rave Integration (TMSINT) application has the ability to reuse study field configurations (e.g. copying the configuration for an AE Verbatim between studies). A template SQL script is provided that can be cloned and modified for the given client being defined, similar to that of the SQL script below. The SQL script has been designed to accommodate subsequent executions and will initially delete the client and all related data prior to executing the SQL script.
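A hypothetical fragment of such a template script is shown below; the column names are assumptions based on the metadata tables described later in this document, and the client number is illustrative only.

    -- Re-runnable template: remove the client and related data first.
    DELETE FROM tmsint_adm_datafiles WHERE client_id = 101;
    DELETE FROM tmsint_adm_clients   WHERE client_id = 101;

    -- The alias drives the TMSINT_XFER_<alias> / TMSINT_PROC_<alias>
    -- schema names.
    INSERT INTO tmsint_adm_clients (client_id, client_alias)
    VALUES (101, 'INV');
    COMMIT;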

Synonym (VTA) Load

[0250] The DBMS TMS to Medidata Rave Integration (TMSINT) application TMS VTA Batch Load Utility allows the user to preserve previous coding assignments in dsNavigator and is a means to facilitate the creation of approved VTAs in TMS for a given dictionary, by which the Verbatim Term (VT), Dictionary Coding Term and optional TMS Domain Name are specified in ASCII TAB delimited file(s). The contents of the ASCII VTA data file(s) are loaded into a staging table called STG_VTA_LOAD. A SQL process creates the distinct TMS Dictionary Domain VTAs using the TMS API TMS_USER_CLASSIFICATION.CreateAcceptedVTA. (See Table 7)
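A non-authoritative sketch of the load step; the STG_VTA_LOAD column names and the parameter list of the named TMS API are assumptions.

    BEGIN
        -- Create one accepted VTA per distinct staged row.
        FOR r IN (SELECT DISTINCT verbatim_term, dict_term, domain_name
                    FROM stg_vta_load)
        LOOP
            tms_user_classification.createacceptedvta(
                p_verbatim_term => r.verbatim_term,  -- VT
                p_dict_term     => r.dict_term,      -- Dictionary Coding Term
                p_domain_name   => r.domain_name);   -- optional TMS Domain
        END LOOP;
        COMMIT;
    END;
    /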

[0251] The ASCII VTA data file used by the TMS VTA Batch Load Utility is a TAB delimited file without any headings or embedded special characters except for the TAB character used to separate the column values. All efforts should be made to ensure that there are no embedded horizontal tab characters or new line characters within the text values.

TABLE 7

[0252] The DBMS TMS to Medidata Rave Integration (TMSINT) application allows for direct data transfer from EDC and Safety Database systems via WebServices and similar technology. The application also allows the loading of HTML and text files as data sources, for example to extract from Rave, integrate data into TMS, and import classifications and coding queries (actions) to Rave.

Getting Started

[0253] Enter data into Medidata Rave system - The TMSINT application uses a Webservice that can run on a scheduler or manually. This example illustrates running manually.

[0254] Verbatim Terms are loaded into the TMS Integration inbound tables. When the TMS Integration (TI) load utility is run, all verbatim terms that are marked for processing by TMS are sent to TMS and inserted into the TMS_SOURCE_TERMS table with all their associated key data. TMS Integration (TI) triggers TMS Synchronization and Autoclassification. The TMSINT application allows the load of previously coded terms while preserving original coding, including extracting clinical data from a URL and full TMS integration.

Auto Match

[0255] During Autoclassification, TMS searches for an exact match to a dictionary term or an existing VTA in the Domain linking the TMS verbatim term to a dictionary term. TMS first searches all terms in the Dictionary at the Classification Level, including dictionary terms plus Verbatim Terms that have been mapped to Dictionary Terms in the Domain. TMS uses a built-in matching algorithm which matches terms regardless of case and reduces multiple spaces to one space. (See Fig. 4)

Auto Match Equivalent Terms

[0256] For each term that is automatically matched/classified, the verbatim term is associated with a VTA linking it to a dictionary term in TMS. During the Oracle TMS to Medidata Rave Integration, all the derived data is written to the Outbound Views for processing to the source EDC system. This provides a propagation feature enabling coding of identical previously coded verbatim terms to their VTAs. (See Fig. 5)

Non-Match

[0257] If the search does not produce a match, TMS can use any custom-defined Autocoder Search Objects to find a match, which automatically matches non-equivalent terms. (See Figure 6) If a match is still not found, manual intervention is required. The TMSINT application creates an omission in TMS and a record in the source terms table. An omission is then handled in TMS by either coding the term or by applying an Action.

Re-submit terms to be re-coded

[0258] The TMSINT application is able to re-submit terms to be re-coded if a verbatim, route or indication field is modified. The Oracle TMS to Medidata Rave Integration has been written so that extracting data subsequent times will process the source data/metadata extracted where the verbatim term, route or indication field is modified. During the system data processing into TMS, the data in the INBOUND table is reprocessed and synchronized with the TMS data to ensure the modified data is updated. (See Fig. 7 and Fig. 8)

Export Coding Data and Queries from TMS

[0259] A coding omission raises a query in TMS. When the TMS Integration and Import is run, the TMSINT application is able to export all coded data from each individual study/job/domain, including dictionary version information and WHODrug ingredients (See Fig. 9). In addition, the export can include the associated source data and actions/queries created in TMS (See Fig. 10 and 11).

TMS Domain/Dictionary

[0260] Multiple synonym lists are created, enabling different synonym lists to be linked to different studies with the TMS Domain/Dictionary functionality (See Fig. 12). TMS also enables a different approval workflow by domain; one study/program can automatically approve a VTA when it is classified while the other study can enforce the manual approval process after classification (See Fig. 13).

Coding in Oracle TMS

[0261] TMS can be accessed by multiple users at the same time. The coder is able to filter using the TMS Omission Filter (See Figure 14) and, while coding, the user can sort coded data within the TMS Classify VT Omissions (See Figure 15). The TMSINT application maintains a unique record ID from the source data.

Source System unique record ID Key:

o TMSINT_PROC_INBOUND: INBOUND_ID

o TMSINT_PROC_INBOUND: EXT_VALUE_1..EXT_VALUE_8

o TMSINT_PROC_CODING: CODING_ID

o TMSINT_PROC_CODING: EXT_VALUE_1..EXT_VALUE_8

TMS unique record ID Key:

o TMS_SOURCE_TERMS: SOURCE_TERM_ID, OCCURRENCE_ID

o TMS_VT_OMISSIONS: SOURCE_TERM_ID, OCCURRENCE_ID

Browse ingredients & ATC

[0262] TMS browsing capabilities make ingredients visible when coding in WHO DDE, and all multi-axial relationships are able to be seen, including the ability to see all ATC levels. While coding in the Oracle TMS coding system, the coder is able to view ingredients, all ATC levels, and all multi-axial relationships for the dictionary classification terms on the Above and Below window. (See Figure 16)

Ingredients & Dictionary Version propagates to Rave

[0263] The TMSINT application then propagates the Ingredients and Dictionary Version to Medidata Rave. Additional hierarchy, including the ATC, can be propagated if the Rave study is set up to receive the data (See Figure 17).

Drill Down Views

[0264] The TMSINT application allows the user to "Drill Down" for auxiliary information (e.g., CM Country, Indication, Route, Route Other), displays differentiators while coding, and allows these differentiators to be used for code assignment without automatically adding terms to a synonym list (e.g. allowing the use of country specific information in the coding of non-distinct trade medications). These Drill Down Views can be customized based on the user's need (See Figure 18).

TMS Status/Notes

[0265] TMS provides the ability to attach/view status codes or notes to coded and uncoded terms (e.g. to indicate terms with outstanding queries or those which have had a change request sent to the MSSO) using the TMS Status/Notes Pop-up Window from the Classify VT Omissions (under Omission Management) (See Figure 19), and to approve VTAs (under Omission Management) (See Figure 20). Status/Notes can also be propagated to Medidata Rave once an Item is created in the study to receive the Note.

Coding Approval

[0266] The user has the option, if ever needed, to use a two-step coding process where a second coder must approve the code (VTA Classification). To do this, TMS provides configurable workflows within the coding/approval process (e.g. providing the ability to allow a secondary coding approval workflow but not mandate it). The workflow is configured in the Domain/Dictionary definition (See Figure 21).

High Level Classification

[0267] TMS is able to perform ATC coding for medications in a single step; with ATC Classification, terms are automatically classified based on the primary link. For High Level Classification with Primary Link, the Primary Link can be reclassified at the Preferred Term level or the source verbatim term level. The ATC can be exported to the Medidata Rave System when the study is set up to receive the ATC Values. (See Figure 22)

High Level Term Indicator Classification

[0268] For Drug Dictionaries without Primary Links or Paths on the ATC Levels, the Indication Omissions window is available. The coder is able to code/classify to the specific ATC in a single step. This is available when the WHODrug C version is installed.

TMS Search Object

[0269] TMS out-of-the-box search object autoencoding capabilities include: Auto-coding to previously coded verbatim term assignments (VTAs) within the same domain (available when the WHODrug C version is installed); Auto-coding to previously coded verbatim code individuals (VTIs), e.g., the verbatim term plus the auxiliary information of the source term; Auto-coding of a source term to a non-unique dictionary term using additional drug information of the source term; and Automatic ATC classification of a drug based on previous ATC classification of the same drug having the same indication.

Autocode VT + Auxiliary

[0270] Auto-coding to previously coded verbatim code individuals (VTIs), e.g., the verbatim term plus the auxiliary information of the source term (See Figure 24).

TMS Autocode - Indication Classification

[0271] Automatic ATC classification of a drug based on previous ATC classification of the same drug having the same indication (See Figure 25).

Custom Autocoder Search Objects

[0272] The TMSINT application Custom auto-coding search object algorithms automatically code verbatim terms to the Medical and Drug Dictionaries. The objective is to use the Autocoder during the TMS Integration to increase auto classification efficiency gains, ensure consistency, and avoid delays in processing clinical data in TMS. The TMSINT application Custom Autocoder searches the appropriate dictionaries in TMS and attempts to find automatic matches for a VT based on the algorithm logic. The logic includes word swapping, substitution logic for punctuation, stop words, insignificant word removal, and abbreviation expansion/contraction. Where matches are found, TMS posts the approved VT assignment to TMS and will flag the assignment as being Autocoded. This will enable the user to develop metrics reports for Autocoding performance. The auto coded terms can then be automatically approved by the system. The TMSINT application Custom Candidate list auto-coding solution can also be available to propose matches (candidate terms), rate and prioritize the proposed matches, and list the proposed matches for the coder to make the decision on coding to the best match. (See Figure 26) A flow diagram illustrating the Oracle TMS to Medidata Rave Integration (TMSINT) application with Autocoder is shown in Figure 27.

Custom Autocoder Design

[0273] The TMSINT application custom auto-coding search object algorithms are executed during the TMS_USER_FULLAUTOCODE.CLASSIFYTERM API based on the order identified in the Autocode Module Codelist. The logic is as follows:

• Verify a direct match cannot be made before starting,

• Break up a multi-word term into individual words, from left to right and from the first word to the last word,

• Reorder individual words with all possible permutations of a multi-word term (with limits) and search the dictionary at the classification levels for a direct match, and

• Execute procedures against individual words in the order defined in the Autocode Module local reference codelist.

Custom Autocoder Partial Word Substitution Design

[0274] Processes PART_SUBS and WORD_SUBS where the description value is null. For example, when the following characters or words are included in the PART_SUBS and WORD_SUBS codelists, the system removes punctuation and symbols ("; *" to blank), removes numeric values ("0 - 9" to blank), removes stop words, and removes frequent or insignificant terms ("an, and, is, of" to blank). Once the words in the VT Array are substituted, the VT Term is reconstructed and the dictionary is searched at the classification levels for a direct match for each substitution. If no match is found, substitution of the same word continues until there are no more available substitutions. If no match is found, individual words are reordered with all possible permutations of a multi-word term (with limits). The dictionary is then searched at the classification levels for a direct match. The Custom Autocoder Partial Word Substitution Design is illustrated in Figure 28.
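A minimal sketch of the per-word scrubbing described above, using standard Oracle regular-expression functions; the codelist-driven rules in the application itself may differ.

    CREATE OR REPLACE FUNCTION scrub_vt_word (p_word IN VARCHAR2)
        RETURN VARCHAR2
    IS
        v_word VARCHAR2(4000) := p_word;
    BEGIN
        v_word := REGEXP_REPLACE(v_word, '[[:punct:]]', '');  -- punctuation and symbols to blank
        v_word := REGEXP_REPLACE(v_word, '[0-9]', '');        -- numeric values to blank
        IF LOWER(v_word) IN ('an', 'and', 'is', 'of') THEN    -- stop/insignificant words
            v_word := '';
        END IF;
        RETURN TRIM(v_word);
    END scrub_vt_word;
    /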

Custom Autocoder Full Word Substitution Design

[0275] Processes WORD_SUBS where the description value is not null; for example, creating a substitution synonym list (TYLENOL to ACETAMINOPHEN) or a substitution abbreviation expansion/contraction list (OD to right eye, both eyes to OU). Once the words in the VT Array are substituted, the VT Term is reconstructed and the dictionary is searched at the classification levels for a direct match for each substitution. If no match is found, individual words are reordered with all possible permutations of a multi-word term (with limits). The dictionary is searched at the classification levels for direct matches for each substitution. If no match is found after a substitution, the array is refreshed back to its original content before checking the next substitution.

If no match is found after performing one substitution at a time and it is known there is at least one substitution, then all substitutions are made and re-checked for a direct match. If no match is found, a check for a direct match is made by re-ordering the words in the Array (See Figure 29). Table 9 is an example of a custom autocoding procedure.

TABLE 9

Custom Autocoding Example

[0276] The Oracle TMS to Medidata Rave Integration (TMSINT) architecture incorporating features of the invention is schematically shown in Figure 30. The OPA Domain includes the WebLogic Server Administration Console, WebLogic Server Java, and OpaServer1, which are required to run RDC Onsite and, if the user has the Oracle Thesaurus Management System (TMS) installed, it includes the TMS Browser, the TMS reports and the database connections. When the Oracle Enterprise Manager is installed in the FR Domain, it also functions to monitor the OPA Domain.

[0241] The OHS 5.1.x Logical Architecture is schematically shown in Figure 31.

Recommended Hardware

[0277] The recommended hardware to properly run the Oracle TMS to Medidata Rave Integration (TMSINT) includes:

• A Database Server with:

o Oracle RDBMS 11g (11.2.0.x) on Windows 2008/2012 64 bit Enterprise Edition

o Dedicated: 12 CPU with RESERVE of 3000 GHz each (locked in VMware configuration)

o Memory Allocation: 32 GB RESERVED memory allocation

• 1 Application Tier server with:

o Oracle WebLogic and 11g App, on Windows 2008 64 bit Enterprise Edition

o Dedicated: 4 CPU with RESERVE of 3000 GHz each (locked in VMware configuration), and

o Memory Allocation: 32 GB RESERVED memory allocation. Note: Windows 2008/2012 R2 Enterprise Edition is required for memory allocation above 16 GB.

• Client:

o Windows 2008 R2/Windows 9/10/11, Windows 7 (or later compatible versions) with Internet Explorer 9/10/11 (or later compatible versions)

o All desktop/iOS/Android clients supported via RDP or Citrix

o To reduce cost, an internal RAID-5 disk configuration is suggested on the ESXi host. This configuration of DL 380 G9 has excellent overall performance for an INTERNAL RAID-5 disk configuration, where production performance is NOT expected to be impacted.

o Development and Validation can be scaled at 50% of the production capacity for Memory and CPU, and left as UNRESERVED. This will allow a fail-over capability from the Production ESXi to the Development/Validation ESXi servers.

o There will be 2 VMs each provisioned for Development, Validation and Production across each ESXi server. This will allow maximum SRM failover capability.

[0278] One skilled in the art will recognize that later compatible versions of the software and operating systems listed above can be used with RAVE software specifically upgraded or adapted to run on or be compatible with those versions of the software and operating systems.

Server Specifications

The preferred server specifications are set forth in Table 10.

TABLE 10

SERVER SPECIFICATIONS

Failover

[0280] Acquisition and deployment of HP DL380 Gen9 servers DEDICATED to the user's OC/TMS environment running VMWare ESXi 5.5.x or 6.x.

• Dedicated rack space, network connectivity and managed servers are required at the user facility, which currently operates other production applications in a hot standby/hot failover mode. The applicant then placed dedicated servers on-line at both the user's and the applicant's facilities to provide the development, validation and production target environments running VMWare ESXi, with a documented IQ for the hardware installation and ESXi installation.

• Network connectivity is established between the provider and the user's data center, in cooperation with the user's networking team. Dedicated hardware is preferred for several reasons: it provides a secure data isolation level from any other user's environments and allows the user to acquire the most current hardware for performance and scalability purposes.

Synchronous integration using Web Services

[0281] The Medical Coding Solution supplied by the provider allows for direct data transfer from EDC and Safety Database systems via WebServices and similar technology. The processes are provided according to a fixed schedule and also as required as web services. This synchronous approach uses Rave Web Services. Rave Web Services (RWS) integrates Medidata Rave with third-party systems to exchange CDISC ODM 1.3 standard clinical data or metadata synchronously and with immediate confirmation. RWS uses the Representational State Transfer (RESTful) architecture. Data is posted to or retrieved from Rave using HTTP protocol messages posted to specific URLs. Each message received is programmed to provide an immediate success or failure response. In the event of a failure, any pending changes are rolled back. Since RWS uses "RESTful" web services, it does not mandate the use of either Simple Object Access Protocol (SOAP) or Web Services Description Language (WSDL).

Synchronous integration high-level architecture

[0282] The study dataset is extracted from the source Medidata system using the RWS (https://{host}/RaveWebServices/studies/{study-id}/datasets/{regular-or-raw}). The extracted information is pushed to staging tables in the provider's Custom TMS integration system. The DBMS Custom APIs process the data and identify the items that require coding, a coding classification or the opening of a query. Each open query for each such identified item on the source system is updated using the RWS and validated against TMS, along with other related identified updates/queries. Updated/open queries are imported to Medidata and updated/current datasets are extracted from Medidata.
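A sketch of the dataset pull using Oracle's UTL_HTTP package; the host, study identifier and credentials are placeholders, and a real deployment would also need an Oracle wallet configured for HTTPS.

    DECLARE
        v_req  UTL_HTTP.REQ;
        v_resp UTL_HTTP.RESP;
        v_line VARCHAR2(32767);
    BEGIN
        -- Placeholder host/study; RWS uses basic authentication.
        v_req := UTL_HTTP.BEGIN_REQUEST(
            'https://myhost/RaveWebServices/studies/MYSTUDY/datasets/regular');
        UTL_HTTP.SET_AUTHENTICATION(v_req, 'rws_user', 'rws_password');
        v_resp := UTL_HTTP.GET_RESPONSE(v_req);
        BEGIN
            LOOP
                UTL_HTTP.READ_LINE(v_resp, v_line, TRUE);
                -- Each line would be appended to the EXTRACT staging table here.
            END LOOP;
        EXCEPTION
            WHEN UTL_HTTP.END_OF_BODY THEN
                UTL_HTTP.END_RESPONSE(v_resp);
        END;
    END;
    /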

Asynchronous integration using text file

[0283] The system described herein saves the study datasets information in a file in standard ODM format and processes the information to validate against TMS using a DBMS Custom integration system. This asynchronous integration high-level architecture extracts the study dataset from a text file placed in an accessible location and pushes the extracted information to staging tables of the DBMS Custom TMS integration system. The DBMS Custom APIs process this data and identify the items that require coding, a coding classification or the opening of a query. A text file is then generated for each item that requires an Update/Open query; this text file contains the XML request body that a RWS can accept. Demonstration of user administration in TMS includes validation against TMS and identifying the updates/queries, generating a text file for each identified update/open query action and reading datasets from a text file.

User administration in TMS

[0284] User administration overview comprises creating, configuring, modifying, and inactivating portions of the stored records. The TMS Security module is used to create users, assign roles and deactivate users, and to define access rights for different users. TMS allows workflow assignments based on role (e.g. those responsible for coding and those responsible for approval). The predefined roles are: opa_admin, rxclin_lead, tms_access, tms_allocate_priv, tms_approve_priv, tms_classify_priv, tms_define_priv, tms_dictupg_priv, tms_dsi_priv, tms_integrate_priv, tms_maintain_priv, tms_reclassify_priv and tms_research_priv. (See Figure 32)

Synchronous integration using Web Services

[0285] The Medical Coding Solution described herein allows for direct data transfer from EDC and Safety Database systems via WebServices and similar technology. The processes are exposed as web services and are scheduled as required. The synchronous approach uses Rave Web Services (RWS) to integrate Medidata Rave with third-party systems and to exchange CDISC ODM 1.3 standard clinical data or metadata synchronously and with immediate confirmation. RWS uses the Representational State Transfer (RESTful) architecture. Data is posted to or retrieved from Rave using HTTP protocol messages posted to specific URLs. Each message receives an immediate success or failure response. In the event of a failure, any pending changes are rolled back. Since RWS uses "RESTful" web services, it does not mandate the use of either Simple Object Access Protocol (SOAP) or Web Services Description Language (WSDL).

Synchronous integration high-level architecture

[0286] The study dataset is extracted from the source Medidata system using the RWS (https://{host}/RaveWebServices/studies/{study-id}/datasets/{regular-or-raw}) and the extracted information is moved to staging tables of the TMS integration system. The DBMS Custom APIs process this data and identify the items that require coding, a coding classification or the opening of a query. Each identified Update/Open query item on the source system is validated against TMS using the RWS, and the updates/queries are imported to Medidata and datasets are extracted from Medidata.

Asynchronous integration using text file

[0287] In the system, the study datasets information is saved in a file in standard ODM format and this information is processed to validate against TMS using the Custom asynchronous integration high-level architecture: the study dataset is extracted from a text file placed in an accessible location; the extracted information is pushed to staging tables of the Custom TMS integration system; the Custom APIs process this data and identify the items that require coding, a coding classification or the opening of a query; and a text file is generated for each item that requires an Update/Open query, this text file containing the XML request body acceptable to a RWS. User administration in TMS is validated against TMS, the updates/queries are identified, a text file is generated for each identified update/open query action and datasets are read from a text file.

Coding Configuration

[0288] The TMSINT application Definition Worksheet is used by the Administrator to configure the verbatim, route and indication fields and all other definitions and configurations for the client for coding the TMS Reference Codelists and the TMS External Systems. Figure 33 is a sample of a Client definition based on a new client worksheet. The TMSINT application has the ability to reuse study field configurations (e.g. copying the configuration for an AE Verbatim between studies). A template SQL script is provided that may be cloned and modified for the given client being defined; the script has been designed to accommodate subsequent executions and will initially delete the client and all related data prior to executing the SQL script.

High Level Process

[0289] Client Clinical Trial System Source System Configuration Setup - A new client is set up for use with the TMSINT application. This includes determining the client's TMS dictionary requirements, external system requirements and detailed information on the format of the client's source data subject to processing.

[0290] Client Clinical Trial System Source System Data Extract - During the configuration setup of a client within the TMSINT application, a client specific Oracle schema is created that is responsible for extracting data from the client's Clinical Trial System Source system for TMS processing as well as re-importing data back into the client's source CDR system. The Oracle schema is referred to as the "data-transfer" schema and is only responsible for the transfer of data from the client system and back into the client system. The APIs within the package TMSINT_XFER_UTILS are written exclusively for the data-transfer functionality. Data extracted from the client's source system is written to a staging table (TMSINT_XFER_HTML_EXTRACT) in a raw HTML format by the data-transfer schema. Data to be re-imported into the client's source system is written to staging tables by the data-processing schema.

[0291] Resetting HTML Extract Records to Process in TMS - The application has been written so that processing data subsequent times will not have an impact when no changes are detected. In order to re-process data in the TMSINT_XFER_HTML_EXTRACT table, the PROCESS_FLAG value is updated from "Y" back to "N" using the API TMSINT_XFER_UTILS.RESET_XFER_FOR_PROCESSING, which performs this update.
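The API performs an update equivalent to the following statement, shown here for illustration only:

    -- Re-flag previously processed extract rows so they are picked up again.
    UPDATE tmsint_xfer_html_extract
       SET process_flag = 'N'
     WHERE process_flag = 'Y';
    COMMIT;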

[0292] Deleting/Purging Previously Processed HTML Extract Data - Data within the data-transfer staging table TMSINT_XFER_HTML_EXTRACT remains in the staging table until purged. The user's discretion determines how long to maintain previously processed data within the staging table.

[0293] Deleting/Purging All HTML Data Records - The API TMSINT_XFER_UTILS.CLEAR_ALL_EXTRACT_DATA can be used to hard-delete all data in the staging table regardless of the PROCESS_FLAG. This API is intended for testing and development or to prevent all subsequent data processing.

[0294] Client Clinical Trial System Source System Data Formatting - During the configuration setup of a client within the TMSINT application, a client specific Oracle schema, referred to as the data-processing schema, is created that is responsible for processing data from the client's Clinical Trial System Source system within TMS. The data-processing schema is responsible for obtaining the client data pending processing from the data-transfer owned staging tables, processing the data in TMS, and subsequently writing the applicable TMS processed data back to data-transfer staging tables to be re-imported back into the client's source system. The APIs within the package TMSINT_PROC_UTILS are written exclusively for the data-processing functionality. The API process TMSINT_PROC_UTILS.PROCESS_INBOUND_FROM_XFER performs this action. As data is read and formatted from the TMSINT_XFER_HTML_EXTRACT table, it is written to the data-processing staging tables TMSINT_PROC_INBOUND and TMSINT_PROC_INBOUND_DTLS (ancillary data). Data is marked as processed in the data-transfer table once the record(s) have been successfully formatted and written to the data-processing schema INBOUND staging tables.

[0295] Creating TMS INBOUND Processing Records - Once the PROCESS_RECNO column has been populated, multiple HTML data records can be selected for insertion into the data-processing staging table TMSINT_PROC_INBOUND. If the client has defined ancillary data, the corresponding data is created in the TMSINT_PROC_INBOUND_DTLS tables. If a client does not have ancillary data as defined in the configuration table TMSINT_ADM_ADDON_COLS then no data will be written to TMSINT_PROC_INBOUND_DTLS. As each record is successfully formatted and written from the data-transfer staging table (TMSINT_XFER_HTML_EXTRACT) to the data-processing staging tables (TMSINT_PROC_INBOUND/TMSINT_PROC_INBOUND_DTLS), the data in the TMSINT_XFER_HTML_EXTRACT table is marked as processed by updating the PROCESS_FLAG from "N" to "Y", which will prevent the data from being re-processed.

[0296] Re-processed HTML Records - It may be desired at some point to re-process data in the data-transfer table TMSINT_XFER_HTML_EXTRACT a subsequent time. The application is written such that processing data subsequent times will not have an impact when no changes are detected. In order to re-process data in the TMSINT_XFER_HTML_EXTRACT table, the PROCESS_FLAG value is updated from "Y" back to "N" using the API TMSINT_PROC_UTILS.RESET_XFER_FOR_PROCESSING to perform this update. The next execution of TMSINT_PROC_UTILS.PROCESS_INBOUND_FROM_XFER re-processes the extract data.

[0297] Client Clinical Trial System Source System Data Processing - After the HTML extract data has been read from the data-transfer schema, formatted and created in the data-processing schema INBOUND staging tables via the API TMSINT_PROC_UTILS.PROCESS_INBOUND_FROM_XFER, the data can then be processed in TMS. The API to process the INBOUND data in TMS is TMSINT_PROC_UTILS.PROCESS_INBOUND_IN_TMS. This process creates VTAs in the TMS repository, creates patient coding records in TMS_SOURCE_TERMS and creates patient omission records in TMS_VT_OMISSIONS. At the onset of processing, each record in TMSINT_PROC_INBOUND in which the TMS_PROCESS_FLAG is "N" will be set to a TMS_PROCESS_FLAG value of "P" ("in progress"). For each of the "in progress" records, the TMS related columns are populated based on attempts at coding the given record in TMS.

[0298] Keeping Client Data "Synchronized" with the TMS Repository: - With EVERY execution of the API TMSINT_PROC_UTILS.PROCESS_INBOUND_IN_TMS, the procedure CODING_DATA_SYNC_RECHECK is executed PRIOR to processing the pending data in the INBOUND tables. This procedure reviews all pre-existing client data in the CODING tables to detect if any of the conditions listed below have occurred. If so, a "copy" of the CODING record will be created in the INBOUND tables for reprocessing if it does not already exist in the INBOUND tables.

• Has the VTA been declassified? (patient coding is now a patient omission)

• Has the VTA been classified? (patient omission is now a patient coding record)

• Has the VTA been re-classified? (manual reclassification or direct-match reclassification during versioning)

• Has the VTA's parent DT been updated? (dictionary version upgrade)

• Has the VTA's parent DT been made inactive? (dictionary version upgrade)

• Has the VTA's parent DT been promoted? (dictionary version upgrade)

• Has the VTA's parent DT been demoted? (dictionary version upgrade)

• Has one or more components of the "Derivation Path" changed? (dictionary version upgrade)

[0299] Processing INBOUND Data in the TMS Repository - Moving INBOUND Data to the CODING Tables - After each TMSINT_PROC_INBOUND record is processed and the TMS related columns have been populated, the records are moved from the INBOUND staging tables to the CODING tables. The CODING table data is persistent and will always represent the current client data in TMS at the time of the last execution of TMSINT_PROC_UTILS.PROCESS_INBOUND_IN_TMS. If the INBOUND VT term and the External Values 1..8 do not exist in the CODING table, then this is a new patient record being processed. If the INBOUND VT and the External Values 1..8 already exist in the CODING table, and all TMS values and all client data values are the same, then the record has been previously processed and nothing has changed since the time the record was last processed. If the INBOUND VT and the External Values 1..8 already exist in the CODING table, but one or more TMS values and/or client data values are different, then the record has been previously processed but something has changed within the client data or within the TMS repository since the time the record was last processed.

Administrator Owned Tables (TMSINT_ADM)

[0300] The TMSINT_ADM administrator schema owns all of the application metadata tables. The metadata tables allow for the definition and configuration of a client for use with the TMSINT application. The metadata tables facilitate "generic" applications and stored procedures that are usable by all client configurations, eliminating the need for client-specific custom source code. The application owner can view all data within the metadata tables, while the client specific data-transfer and data-processing Oracle schemas can see only data applicable to the client with which the Oracle schema is associated. The application metadata tables require that accurate requirements be collected for a given client and properly defined within the metadata tables at the onset of processing.

[0301] The TMSINT_ADM_PROPERTIES table allows for the definition of various properties within the TMSINT application. The only required entry within the properties table during the installation process is the name of the Oracle account that is the TMSINT application owner. For the application owner, the PropertyCategory is "APPLICATION", the PropName is "OWNER" and the PropValue is the Oracle schema owning all of the application objects, which is "TMSINT_ADM". The properties table may optionally be used to specify other miscellaneous properties as needed. A SQL script is executed during the installation process as the application owner that will automatically create the required entry in the TMSINT_ADM_PROPERTIES table (see the sketch after the next paragraph). Additional SQL scripts are provided to insert, update, and delete properties as needed.

[0302] The TMSINT_ADM_CLIENTS table allows for the definition of a client within the TMSINT application. Each client is assigned a unique numeric identifier and alias code. The numeric identifier, ClientID, is used as a foreign key in other metadata tables for referential integrity. The value of the client alias code selected dictates the names of the Data-Transfer and Data-Processing schemas, TMSINT_XFER_<client-alias> and TMSINT_PROC_<client-alias>, respectively. SQL scripts are provided to easily allow for the insert, update, and delete of client data.
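A sketch of the required owner entry described in paragraph [0301]; the column spellings follow the field names used in this document but are assumptions.

    -- Register the application owner property.
    INSERT INTO tmsint_adm_properties (propertycategory, propname, propvalue)
    VALUES ('APPLICATION', 'OWNER', 'TMSINT_ADM');
    COMMIT;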

[0303] The TMSINT_ADM_DATAFILES table allows for the definition of datafiles within the TMSINT application for a specific client in TMSINT_ADM_CLIENTS. A client may have one or more datafiles, but a datafile can be assigned to only one client. Each datafile is assigned a unique numeric identifier (DatafileID) that is used as a foreign key in other metadata tables that have a child relationship to the datafile at hand. A datafile is the URL (e.g. https://pharmanet.mdsol.com/RaveWebServices) at which to access the client's datafile, which is an HTML document. In addition to the datafile URL, the user name and password credentials needed to access the client datafile URL are required. Each datafile corresponds to one clinical study name. When data is processed in TMS, the TMS domain name will be the "Study Name" concatenated with the string "_DOMAIN" for all coding records within the given datafile. New datafiles can be added to an existing client at any time. To temporarily or permanently discontinue processing of any datafile, the Active Flag can be updated to "N", and to reinstate processing the Active Flag is reset back to "Y". All datafiles are initially created as active. SQL scripts are provided to easily allow for the insert, update, and delete of client datafiles.

[0304] The TMSINT_ADM_DICT_MAPPING (dictionary mapping) table is a child table to the datafiles table TMSINT_ADM_DATAFILES and will contain one record for every DCM/VT within a datafile to be coded in TMS. For example, a single datafile can contain a DCM/VT (AE/AETERM) that is applicable to adverse event MedDRA coding and a DCM/VT (CONMEDS/CMTERM) that is applicable to Drug coding, or multiples of each. The dictionary mapping table associates the specific DCM/VT to be coded with a specific dictionary in TMS. This information is used during TMS processing by the Data-Processing schema. A new DCM/VT dictionary mapping can be added to an existing datafile at any time by adding the new record to the TMSINT_ADM_DICT_MAPPING table. To temporarily or permanently discontinue processing of a DCM/VT within a given datafile, the Active Flag is set to "N"; to reinstate processing of the DCM/VT within the datafile, the Active Flag is reset to "Y". All datafile dictionary mappings are initially created as active. The TMS Dictionary must already exist in the TMS repository prior to adding a datafile dictionary mapping. SQL scripts have been provided to easily allow for the insert, update, and delete of datafile to dictionary mapping.

[0305] The TMSINT_ADM_EXT_MAPPING (external system mapping) table is a child table to the datafiles table. This table creates a relationship between a given client datafile and a defined TMS External System. The table contains one record for every column defined in the TMS definition of the External System, which can be at most eight values (ExtValue1 - ExtValue8). If the TMS External System has 6 defined external column values (Study, Site, Patient, Event, etc.), there can be only one record for each defined external system column. This table is used by the TMSINT application to parse the datafile values that will ultimately be stored in TMS during processing. New external system mappings may be added to an existing datafile at any time as long as they correspond to the TMS definition of the external system. To temporarily or permanently discontinue processing of a given external system column, the Active Flag is set to "N"; to reinstate, the Active Flag is set back to "Y". All datafile external mappings are initially created as active. The TMS External System must already exist in the TMS repository prior to adding a datafile external system mapping. SQL scripts are provided to allow for the insert, update, and delete of datafile to external system mapping.

[0306] The TMSINT_ADM_ADDON_COLS (add-on column mapping) table is a child table to the datafiles table. This table provides an optional means to collect and store ancillary data associated with a specific DCM within a given datafile. The column values will not be stored within the TMS repository during the coding process. However, they may optionally be used for later omission management via TMS Drill-Down views, in which ancillary data can be queried and viewed as additional data to aid in the manual classification process. By default, the ancillary data is stored in the TMSINT application tables but is not directly accessible from TMS. Ancillary data is optional and as many columns as needed may be stored, which may differ between DCMs. For the Adverse Event (AE) DCM, it may be desirable to capture ancillary data such as the onset date (ONSDT) or the severity (ONSER). For the Concomitant Medication (CM) DCM it may be desired to capture the dose (CMDOSE) and frequency (CMDOSFRQ). To temporarily or permanently discontinue processing of a given add-on column, the Active Flag can be set to "N", and to reinstate processing the Active Flag is set to "Y". All datafile add-on column mappings are initially created as active. SQL scripts have been provided to easily allow for the insert, update, and delete of datafile ancillary data.

[0307] TMSINT_ADM_ORA_ACCOUNTS (Oracle accounts) table is the metadata table that allows an Oracle schema to be programmatically associated with a specific client. Each client has a data-transfer schema (account type = "XFER") and a data-processing schema (account type = "PROC"). This table is the mechanism that allows a given Oracle schema to see only the data associated with the client to which the Oracle account is associated. When creating the required accounts for a TMSINT application client, the actual Oracle schema must already exist in the Oracle database and is validated by user name against the Oracle view ALL_USERS. Creating or deleting entries does not actually create or drop the Oracle schema in the database; it is more of a "registration" process that creates a relationship between the Oracle schema and a client and controls the selection of client data between the Oracle schemas. Additional SQL scripts are provided to create the Oracle schemas in the database; these default the Oracle password for each account to the user name. For example, the SQL script to create a data-transfer schema for client "INV" will create an Oracle user TMSINT_XFER_INV with an Oracle password of TMSINT_XFER_INV. The Oracle database password may optionally be changed at any time using normal Oracle means (ALTER USER <username> IDENTIFIED BY <psw>;). When passwords are modified within the database, it is preferred that the corresponding password in the Oracle accounts table be updated as well. At the time of this document, the Oracle passwords within the Oracle accounts table are used for query only and are not used to connect to the database. SQL scripts are provided to allow for the registration and un-registration of Oracle accounts, as well as the suspending and unsuspending of accounts by modifying the Active Flag to "N" and "Y", respectively. Suspending the Oracle Data-Transfer schema by setting the Active Flag to "N" prevents the data transfer activity; unsuspending it by setting the Active Flag back to "Y" allows the data transfer activity to resume. The same actions apply to the Data-Processing schema.
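A password rotation for a registered account might then proceed roughly as follows; the accounts table column names (ORA_USER_NAME, ORA_PASSWORD) are assumptions for this sketch:

    -- Change the database password using standard Oracle DDL:
    ALTER USER TMSINT_XFER_INV IDENTIFIED BY new_password;

    -- Hypothetical sketch: keep the registration metadata in step with
    -- the database; the column names are assumed for illustration.
    UPDATE TMSINT_ADM_ORA_ACCOUNTS
       SET ORA_PASSWORD = 'new_password'
     WHERE ORA_USER_NAME = 'TMSINT_XFER_INV';
    COMMIT;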

Data-Transfer Owned Tables (TMSINT_XFER_<client>)

[0308] Each client defined within the TMSINT application is required to have a data-transfer schema. Data stored within this schema is applicable only to the client with which it is associated and is not intended to persist. The data-transfer schema serves as a staging area only: data is extracted from the client's source CDR system for processing and is subsequently re-incorporated back into the client's source CDR system where required.

[0309] TMSINT_XFER_HTML_EXTRACT table is the staging table used to store data extracted from the client source system. Client data is extracted from an HTML document (i.e., the client datafile) and written to the TMSINT_XFER_HTML_EXTRACT staging table in the original HTML format. APIs are written to insert and delete data from the extract staging table. The APIs exist within the TMSINT_XFER_UTILS package owned by the TMSINT application administrator account (TMSINT_ADM).
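A call into the package might look roughly like the following; the procedure name and its parameters are assumptions for this sketch, since only the package itself is named here:

    -- Hypothetical sketch: stage one extracted HTML document for client
    -- INV. The procedure name and parameters are assumed; only the
    -- package TMSINT_XFER_UTILS is named by the document.
    BEGIN
      TMSINT_ADM.TMSINT_XFER_UTILS.INSERT_HTML_EXTRACT(
        p_client       => 'INV',
        p_datafile     => 'AE_20170809.html',
        p_html_content => '<html>...</html>');
      COMMIT;
    END;
    /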

[0242] Data-Processing Owned Tables (TMSINT_PROC_<client>) - Each client defined within the TMSINT application is required to have a data-processing schema. Within the data-processing schema, a series of tables exists as shown in Table 11.

TABLE 11

[0310] TMSINT_PROC_INBOUND (inbound processing) table is the TMS staging table that stores data directly obtained from the data-transfer HTML extract table TMSINT_XFER_HTML_EXTRACT after being transformed into a TMS-usable record format. Generic APIs are written to programmatically obtain the HTML extract data, transform the data, write the data to the inbound processing table, and subsequently mark the data-transfer extract data as having been successfully processed (i.e., copied to INBOUND). The APIs exist within the TMSINT_PROC_UTILS package owned by the TMSINT application administrator account (TMSINT_ADM).
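The extract-to-inbound step could be driven by a call of roughly the following shape; the procedure name and parameter are assumptions, since only the TMSINT_PROC_UTILS package is named:

    -- Hypothetical sketch: pull staged HTML extract rows for client INV,
    -- transform them, write them to TMSINT_PROC_INBOUND, and flag the
    -- extract rows as processed. Procedure name and parameter are assumed.
    BEGIN
      TMSINT_ADM.TMSINT_PROC_UTILS.LOAD_INBOUND(p_client => 'INV');
      COMMIT;
    END;
    /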

[0311] TMSINT_PROC_INBOUND_DTLS (inbound details processing) table is the staging table that stores any ancillary data values obtained from the data-transfer HTML extract table after being transformed from HTML into a usable and meaningful format. If the client has no defined ancillary add-on columns in TMSINT_ADM_ADDON_COLS, then no data will be stored within the inbound details table; otherwise, data is parsed from the HTML data based on the corresponding HTML_PREFIX value in TMSINT_ADM_ADDON_COLS. The records within the inbound details table are associated with a specific TMS coding record within the inbound table via the foreign key value INBOUND_ID. The data values within the inbound details table may optionally be used for TMS Drill-Down views to be viewable from the TMS application.

[0312] TMSINT_PROC_INBOUND_DERV (inbound derivation processing) table (i.e., "Derivation Path") is the staging table that stores the TMS "Derivation Path" for those records processed in TMS that successfully code to a TMS dictionary term. When an inbound record successfully codes in TMS, the column TMS_DT_CONTENT_ID within the TMSINT_PROC_INBOUND table contains the TMS DictContentID of the dictionary term (DT) to which the VTA has been assigned in the TMS repository. Using the TMS DictContentID, the full TMS "Derivation Path" is derived, and each column value is stored as one record in the TMSINT_PROC_INBOUND_DERV table. The records within the inbound derivation table must be associated with a specific TMS coding record within the inbound table via the foreign key value INBOUND_ID.
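For instance, an inbound record and its ancillary values could be inspected with a join of the following form; only INBOUND_ID is named by the document, so the other column names are assumptions for this sketch:

    -- Hypothetical sketch: list each inbound term with its ancillary
    -- add-on values; non-key column names are assumed for illustration.
    SELECT i.INBOUND_ID,
           i.TERM_TEXT,
           d.ADDON_COL_NAME,
           d.ADDON_COL_VALUE
      FROM TMSINT_PROC_INBOUND      i
      JOIN TMSINT_PROC_INBOUND_DTLS d
        ON d.INBOUND_ID = i.INBOUND_ID
     ORDER BY i.INBOUND_ID, d.ADDON_COL_NAME;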

[0313] TMSINT_PROC_INBOUND_DERV - The Oracle views TMSINT_ADM_DERV_PATH_LEVELS and TMSINT_ADM_DERV_PATH_COLUMNS are owned by the TMSINT application administrator and are used to determine which TMS dictionary levels are applicable to each dictionary "Derivation Path" and which column values are retrieved from TMS for each applicable dictionary level to be stored in the TMSINT_PROC_INBOUND_DERV table. Unlike the TMSINT_PROC_INBOUND and TMSINT_PROC_INBOUND_DTLS tables, which are populated via client data, the TMSINT_PROC_INBOUND_DERV table is populated based on "derived" TMS repository data.
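The derivation metadata could be examined with queries along these lines; the view columns shown are assumptions, as they are not enumerated here:

    -- Hypothetical sketch: inspect which levels and columns drive the
    -- derivation path for each dictionary; column names are assumed.
    SELECT DICT_ID, LEVEL_NUM, LEVEL_NAME
      FROM TMSINT_ADM_DERV_PATH_LEVELS
     ORDER BY DICT_ID, LEVEL_NUM;

    SELECT DICT_ID, LEVEL_NUM, COLUMN_NAME
      FROM TMSINT_ADM_DERV_PATH_COLUMNS
     ORDER BY DICT_ID, LEVEL_NUM, COLUMN_NAME;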

[0314] TMSINT_PROC_CODING - After data within the TMSINT_PROC_INBOUND staging table is successfully processed in TMS, it is written to the TMSINT_PROC_CODING table and subsequently removed from the INBOUND staging table. Unlike the TMSINT_PROC_INBOUND table, which is a temporary staging table, the TMSINT_PROC_CODING table data persists and represents a client's current TMS coding data. Generic APIs are written to programmatically transfer data from the staging table TMSINT_PROC_INBOUND to the TMSINT_PROC_CODING data table.
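Stripped to its essentials, the promote-and-purge step behaves like the following; this is a sketch with assumed column lists and an assumed success criterion (a non-null TMS_DT_CONTENT_ID), not the actual API body:

    -- Hypothetical sketch of the staging-to-persistent transfer; the real
    -- work is done by generic APIs, and the column list here is assumed.
    INSERT INTO TMSINT_PROC_CODING (CODING_ID, TERM_TEXT, TMS_DT_CONTENT_ID)
    SELECT i.INBOUND_ID, i.TERM_TEXT, i.TMS_DT_CONTENT_ID
      FROM TMSINT_PROC_INBOUND i
     WHERE i.TMS_DT_CONTENT_ID IS NOT NULL;

    DELETE FROM TMSINT_PROC_INBOUND i
     WHERE i.TMS_DT_CONTENT_ID IS NOT NULL;
    COMMIT;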

[0315] TMSINT_PROC_CODING_DTLS (coding details) table is the table that stores any ancillary data values for the parent record in the TMSINT_PROC_CODING table, identified by the foreign key CODING_ID. If the client has no defined ancillary add-on columns in TMSINT_ADM_ADDON_COLS, then no data will be stored within the coding details table; otherwise, data is copied directly from the TMSINT_PROC_INBOUND_DTLS table. The data values within the coding details table can be used for TMS Drill-Down views to be viewable from the TMS application.

[0316] TMSINT_PROC_CODING_DERV (coding derivation) table (i.e., "Derivation Path") stores the TMS "Derivation Path" for those records in TMSINT_PROC_CODING in which the TMS_DT_CONTENT_ID contains a non-null value, indicating there is a corresponding patient coding record in TMS_SOURCE_TERMS. The TMSINT_PROC_CODING_DERV table contains one record per derived column and derivation path level. Data is created in the TMSINT_PROC_CODING_DERV table only from the TMSINT_PROC_INBOUND_DERV table.

[0243] Journaling Tables include the TMSINT_PROC_CODING_JN, TMSINT_PROC_CODING_DTLS_JN, and TMSINT_PROC_CODING_DERV_JN tables. The TMSINT_PROC_CODING_JN table is a journal/audit table used to maintain a history of changes applicable to the TMSINT_PROC_CODING table. The TMSINT_PROC_CODING_DTLS_JN table is a journal/audit table used to maintain a history of changes applicable to the TMSINT_PROC_CODING_DTLS table. The TMSINT_PROC_CODING_DERV_JN table is a journal/audit table used to maintain a history of changes applicable to the TMSINT_PROC_CODING_DERV table.
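Journal tables of this kind are commonly populated by row-level triggers on the base table. A minimal sketch follows, assuming a simple journal layout and hypothetical column names; the document does not specify the journaling mechanism:

    -- Illustrative trigger only; the journaling mechanism, trigger name,
    -- and column names are assumptions, not taken from the document.
    CREATE OR REPLACE TRIGGER TMSINT_PROC_CODING_AJN
    AFTER UPDATE OR DELETE ON TMSINT_PROC_CODING
    FOR EACH ROW
    DECLARE
      v_op VARCHAR2(3);
    BEGIN
      IF UPDATING THEN v_op := 'UPD'; ELSE v_op := 'DEL'; END IF;
      INSERT INTO TMSINT_PROC_CODING_JN
        (CODING_ID, TERM_TEXT, TMS_DT_CONTENT_ID, JN_OPERATION, JN_TIMESTAMP)
      VALUES
        (:OLD.CODING_ID, :OLD.TERM_TEXT, :OLD.TMS_DT_CONTENT_ID,
         v_op, SYSTIMESTAMP);
    END;
    /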

Built-in data recovery/reload/rollback processes

[0317] The system described herein utilizes Web Services technology and built-in data recovery/reload processes to manage the source data files and metadata and to handle accidental deletions, transfer interruptions, partial loads, or other failures that may occur. If the data is partially loaded or deleted in the extract temporary staging tables during the source data/metadata extract, the formatting, or the record creation in the data-processing INBOUND staging tables, the data can be removed/purged and reloaded from the source EDC system as many times as needed. The integration has been written such that processing data multiple times will not have an impact on previously processed data unless an update is required. During the system data processing into TMS, the data in the INBOUND table can be reprocessed and synchronized with the TMS data to ensure no loss of data.
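In practice, a recovery cycle for a client could therefore be as simple as the following; the purge, extract, and reload procedure names are assumptions for this sketch:

    -- Hypothetical sketch of a purge-and-reload recovery cycle for
    -- client INV after a failed transfer; procedure names are assumed.
    BEGIN
      -- Remove the partially loaded rows from the staging tables.
      TMSINT_ADM.TMSINT_PROC_UTILS.PURGE_INBOUND(p_client => 'INV');
      TMSINT_ADM.TMSINT_XFER_UTILS.PURGE_HTML_EXTRACT(p_client => 'INV');

      -- Re-extract from the source EDC system and rebuild INBOUND;
      -- because reprocessing does not affect previously processed data
      -- unless an update is required, the cycle can be repeated safely.
      TMSINT_ADM.TMSINT_XFER_UTILS.EXTRACT_FROM_SOURCE(p_client => 'INV');
      TMSINT_ADM.TMSINT_PROC_UTILS.LOAD_INBOUND(p_client => 'INV');
      COMMIT;
    END;
    /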