Title:
NON-NATURAL PATTERN IDENTIFICATION FOR COGNITIVE ASSESSMENT
Document Type and Number:
WIPO Patent Application WO/2009/114795
Kind Code:
A3
Abstract:
Methods, systems, and apparatus, including medium-encoded computer program products, for detection of cheating on a cognitive test. In one aspect, a method includes receiving first information concerning a person, the first information specifying the person's responses, and lack thereof, for items of a cognitive test administered to the person, wherein the cognitive test includes multiple item-recall trials used to assess cognitive impairment; analyzing the first information using a classification algorithm trained on second information concerning a group of people to whom the cognitive test has been administered, the classification algorithm generated in accordance with a first part and a second part, the first part distinguishing between cheaters and non-cheaters, and the second part distinguishing between impaired cheaters and non-impaired cheaters; and identifying, based on the analyzing, the person as a cheater requiring a verification test to confirm an initial result of the cognitive test.

Inventors:
SHANKLE WILLIAM RODMAN (US)
Application Number:
PCT/US2009/037139
Publication Date:
December 17, 2009
Filing Date:
March 13, 2009
Assignee:
MEDICAL CARE CORP (US)
SHANKLE WILLIAM RODMAN (US)
International Classes:
G06Q50/00
Domestic Patent References:
WO2002005247A2, 2002-01-17
Foreign References:
EP1139268A2, 2001-10-04
US20050143630A1, 2005-06-30
US6280198B1, 2001-08-28
Other References:
RYO ODA ET AL.: "Does an altruist-detection cognitive mechanism function independently of a cheater-detection cognitive mechanism? Studies using Wason selection task", EVOLUTION AND HUMAN BEHAVIOR, vol. 27, 27 March 2006 (2006-03-27), pages 366 - 380
Attorney, Agent or Firm:
HUNTER, William E. (P.O. Box 1022, Minneapolis, MN, US)
Claims:


CLAIMS

What is claimed is:

1. A computer-implemented method comprising: receiving first information concerning a person, the first information specifying the person's responses, and lack thereof, for items of a cognitive test administered to the person, wherein the cognitive test comprises multiple item-recall trials used to assess cognitive impairment; analyzing the first information using a classification algorithm trained on second information concerning a group of people to whom the cognitive test has been administered, the classification algorithm generated in accordance with a first part and a second part, the first part distinguishing between cheaters and non-cheaters, and the second part distinguishing between impaired cheaters and non-impaired cheaters; and identifying, based on the analyzing, the person as a cheater requiring a verification test to confirm an initial result of the cognitive test.

2. The method of claim 1, wherein the classification algorithm is configured to check for cheating strategies characteristic of persons with Alzheimer's disease or a related disorder (ADRD).

3. The method of claim 1, wherein the classification algorithm is selected to maximize sensitivity while minimizing reduction in specificity, which preserves a high negative predictive value while maintaining a low misclassification rate for impaired cheaters, and the group of people comprising a first sample and a second sample, the method further comprising: analyzing data in the first part for the first sample to identify a subset of variables that discriminate between cheaters and non-cheaters, and validating results in the first part using the second sample; and analyzing, in the second part, data of persons identified as cheaters in the first part to identify a subset of variables that discriminate between impaired cheaters and non-impaired cheaters.


4. The method of claim 3, wherein the subset of variables that discriminate between cheaters and non-cheaters comprises education and multiple numbers corresponding to items recalled on two or more of the multiple item-recall trials including a delayed free recall trial, and the subset of variables that discriminate between impaired cheaters and non-impaired cheaters comprises age, gender, education and a number corresponding to items recalled on at least one of the multiple item-recall trials.

5. The method of claim 1, wherein the multiple item-recall trials comprise word recall tests of memory, and the analyzing comprises distinguishing between impaired and non-impaired individuals based on a total number of words recalled across the trials.

6. The method of claim 1, wherein the analyzing comprises evaluating a probability of an order of items recalled by the person given probabilities of recall patterns for the group of people.

7. A computer-readable medium encoding a computer program product operable to cause data processing apparatus to perform operations comprising: receiving first information concerning a person, the first information specifying the person's responses, and lack thereof, for items of a cognitive test administered to the person, wherein the cognitive test comprises multiple item-recall trials used to assess cognitive impairment; analyzing the first information using a classification algorithm trained on second information concerning a group of people to whom the cognitive test has been administered, the classification algorithm generated in accordance with a first part and a second part, the first part distinguishing between cheaters and non-cheaters, and the second part distinguishing between impaired cheaters and non-impaired cheaters; and identifying, based on the analyzing, the person as a cheater requiring a verification test to confirm an initial result of the cognitive test.

8. The computer-readable medium of claim 7, wherein the classification algorithm is configured to check for cheating strategies characteristic of persons with Alzheimer's disease or a related disorder (ADRD).


9. The computer-readable medium of claim 7, wherein the classification algorithm is selected to maximize sensitivity while minimizing reduction in specificity, which preserves a high negative predictive value while maintaining a low misclassification rate for impaired cheaters, and the group of people comprising a first sample and a second sample, the operations further comprising: analyzing data in the first part for the first sample to identify a subset of variables that discriminate between cheaters and non-cheaters, and validating results in the first part using the second sample; and analyzing, in the second part, data of persons identified as cheaters in the first part to identify a subset of variables that discriminate between impaired cheaters and non-impaired cheaters.

10. The computer-readable medium of claim 9, wherein the subset of variables that discriminate between cheaters and non-cheaters comprises education and multiple numbers corresponding to items recalled on two or more of the multiple item-recall trials including a delayed free recall trial, and the subset of variables that discriminate between impaired cheaters and non-impaired cheaters comprises age, gender, education and a number corresponding to items recalled on at least one of the multiple item-recall trials.

11. The computer-readable medium of claim 7, wherein the multiple item-recall trials comprise word recall tests of memory, and the analyzing comprises distinguishing between impaired and non-impaired individuals based on a total number of words recalled across the trials.

12. The computer-readable medium of claim 7, wherein the analyzing comprises evaluating a probability of an order of items recalled by the person given probabilities of recall patterns for the group of people.


13. A system comprising: a user device; and one or more computers operable to interact with the device and to perform operations comprising: receiving first information concerning a person, the first information specifying the person's responses, and lack thereof, for items of a cognitive test administered to the person, wherein the cognitive test comprises multiple item-recall trials used to assess cognitive impairment; analyzing the first information using a classification algorithm trained on second information concerning a group of people to whom the cognitive test has been administered, the classification algorithm generated in accordance with a first part and a second part, the first part distinguishing between cheaters and non-cheaters, and the second part distinguishing between impaired cheaters and non-impaired cheaters; and identifying, based on the analyzing, the person as a cheater requiring a verification test to confirm an initial result of the cognitive test.

14. The system of claim 13, wherein the classification algorithm is configured to check for cheating strategies characteristic of persons with Alzheimer's disease or a related disorder (ADRD).

15. The system of claim 13, wherein the classification algorithm is selected to maximize sensitivity while minimizing reduction in specificity, which preserves a high negative predictive value while maintaining a low misclassification rate for impaired cheaters, and the group of people comprising a first sample and a second sample, the operations further comprising: analyzing data in the first part for the first sample to identify a subset of variables that discriminate between cheaters and non-cheaters, and validating results in the first part using the second sample; and analyzing, in the second part, data of persons identified as cheaters in the first part to identify a subset of variables that discriminate between impaired cheaters and non-impaired cheaters.


16. The system of claim 15, wherein the subset of variables that discriminate between cheaters and non-cheaters comprises education and multiple numbers corresponding to items recalled on two or more of the multiple item-recall trials including a delayed free recall trial, and the subset of variables that discriminate between impaired cheaters and non-impaired cheaters comprises age, gender, education and a number corresponding to items recalled on at least one of the multiple item-recall trials.

17. The system of claim 13, wherein the multiple item-recall trials comprise word recall tests of memory, and the analyzing comprises distinguishing between impaired and non-impaired individuals based on a total number of words recalled across the trials.

18. The system of claim 13, wherein the analyzing comprises evaluating a probability of an order of items recalled by the person given probabilities of recall patterns for the group of people.

19. The system of claim 13, wherein the one or more computers comprise a server system operable to interact with the device through a data communication network, and the device is operable to interact with the server as a client.

20. The system of claim 13, wherein the device comprises a user interface device, the one or more computers comprise the user interface device, and the operations further comprise outputting an indication of the identified person to a device comprising a computer-readable medium.

Description:


NON-NATURAL PATTERN IDENTIFICATION FOR COGNITIVE ASSESSMENT

CROSS REFERENCE TO RELATED APPLICATIONS [0001] This application claims the benefit of the priority of U.S. Provisional Application Serial No. 61/036,881, filed March 14, 2008 and entitled "Non-Natural Pattern Identification During Cognitive Assessment".

BACKGROUND

[0002] This specification relates to assessing the cognitive function of a person to whom a cognitive test has been administered, and in particular to detection of cheating on a cognitive test.

[0003] Various techniques have been used to measure the cognitive function of a person. For example, the National Institute of Aging's Consortium to Establish a Registry of Alzheimer's Disease (CERAD) has developed a ten word list as part of the Consortium's neuropsychological battery. The CERAD word list (CWL) test consists of three immediate-recall trials of a ten word list, followed by an interference task lasting several minutes, and then a delayed-recall trial with or without a delayed-cued-recall trial. The CWL is usually scored by recording the number of words recalled in each of the four trials. A single cutoff score for the delayed-recall trial, with or without adjustment for demographic variables, is typically used to determine whether cognitive impairment exists.

[0004] Some have proposed various improvements to the CWL. In addition, the CWL and the improvements thereof have been used to provide memory performance testing services, via the Internet, to clinicians in daily practice. Such services allow rapid testing of individual patients and reporting on the results of such testing. Previous reports for individual cognitive performance test results have included a statement of whether the patient has been found to be normal or to have cognitive impairment. [0005] Furthermore, in the long-term care insurance industry, individuals applying for a policy must typically be determined to not have cognitive impairment or dementia due to Alzheimer's disease or a related disorder (ADRD). Insuring such an impaired individual can result in a typical claims cost of more than $200,000 per case, assuming four years of claims payments. The insurer therefore wishes to avoid insuring applicants who already have ADRD. For this reason, insurers pay underwriters to administer


cognitive testing that is both sensitive enough to detect mild cognitive impairment and specific enough to correctly identify normal aging. To reduce the costs of such cognitive testing, insurers are increasingly testing applicants over the telephone.

SUMMARY

[0006] This specification describes technologies relating to assessing the cognitive function of a person to whom a cognitive test has been administered, and in particular to detection of cheating on a cognitive test.

[0007] In general, an aspect of the subject matter described in this specification can be embodied in one or more methods that include receiving first information concerning a person, the first information specifying the person's responses, and lack thereof, for items of a cognitive test administered to the person, wherein the cognitive test includes multiple item-recall trials used to assess cognitive impairment; analyzing the first information using a classification algorithm trained on second information concerning a group of people to whom the cognitive test has been administered, the classification algorithm generated in accordance with a first part and a second part, the first part distinguishing between cheaters and non-cheaters, and the second part distinguishing between impaired cheaters and non-impaired cheaters; and identifying, based on the analyzing, the person as a cheater requiring a verification test to confirm an initial result of the cognitive test. Other embodiments of this aspect include corresponding systems, apparatus, and computer-readable media encoding computer program product(s) operable to cause data processing apparatus to perform the operations.

[0008] These and other embodiments can optionally include one or more of the following features. The classification algorithm can be configured to check for cheating strategies characteristic of persons with Alzheimer's disease or a related disorder (ADRD). The classification algorithm can be selected to maximize sensitivity while minimizing reduction in specificity, which preserves a high negative predictive value while maintaining a low misclassification rate for impaired cheaters. Moreover, the group of people can include a first sample and a second sample, where the method further includes: analyzing data in the first part for the first sample to identify a subset of variables that discriminate between cheaters and non-cheaters, and validating results in the first part using the second sample; and analyzing, in the second part, data of persons identified as cheaters in the first part to identify a subset of variables that discriminate between impaired cheaters and non-impaired cheaters.


[0009] The subset of variables that discriminate between cheaters and non-cheaters can include education and multiple numbers corresponding to items recalled on two or more of the multiple item-recall trials including a delayed free recall trial, and the subset of variables that discriminate between impaired cheaters and non-impaired cheaters can include age, gender, education and a number corresponding to items recalled on at least one of the multiple item-recall trials. The multiple item-recall trials can include word recall tests of memory, and the analyzing can include distinguishing between impaired and non-impaired individuals based on a total number of words recalled across the trials. The analyzing can include evaluating a probability of an order of items recalled by the person given probabilities of recall patterns for the group of people.

[0010] A system can include a user device; and one or more computers operable to interact with the device and to perform operations including those of the method discussed above. The one or more computers can include a server system operable to interact with the device through a data communication network, and the device can be operable to interact with the server as a client. Moreover, the device can include a user interface device, the one or more computers can include the user interface device, and the operations can further include outputting an indication of the identified person to a device including a computer-readable medium. [0011] Particular embodiments of the subject matter described in this specification can be implemented to realize one or more of the following advantages. Cheaters can be readily detected from information specifying answers on a cognitive test that includes multiple item-recall trials used to assess cognitive impairment. Analysis of the word order across multiple item-recall trials can significantly increase the ability to detect cheaters. Moreover, substantial savings can be realized by detecting 1) test cheaters so as to avoid inappropriate insurance policies being issued; 2) cheating by insured individuals attempting to fail a test to obtain long-term care insurance claims benefits early; 3) cheating by individuals attempting to do worse on a test to become eligible for a clinical trial; and 4) cheating by professionals attempting to pass a cognitive test that is required for their continued employment. [0012] Moreover, the systems and techniques described can be implemented for use in other scenarios in which individuals are tested on their cognitive abilities. Examples include taking entrance exam tests for admission to professional schools, taking tests to qualify for a benefit, an insurance policy or a clinical drug trial, and competitions for


prizes, prestige or recognition by others, or other scenarios where individuals have an incentive to do well on a given test.

[0013] The details of one or more embodiments of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the invention will become apparent from the description, the drawings, and the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] FIG. 1 shows an exemplary system used to identify cheaters on a cognitive test.

[0015] FIG. 2 shows an exemplary process used to identify cheaters on a cognitive test.

[0016] FIG. 3A shows an exemplary process of generating a classification algorithm to identify cheaters on a cognitive test.

[0017] FIG. 3B shows an exemplary process of detecting cheating on telephone-administered cognitive testing of a long-term care insurance applicant.

[0018] FIG. 4 shows another exemplary system used to identify cheaters on a cognitive test.

DETAILED DESCRIPTION

[0019] FIG. 1 shows an exemplary system 100 used to identify cheaters on a cognitive test. A data processing apparatus 110 can include hardware/firmware and one or more software programs, including a cognitive test responses analysis program 120. The cognitive test responses analysis program 120 operates in conjunction with the data processing apparatus 110 to effect various operations described in this specification. The program 120, in combination with the various hardware, firmware, and software components of the data processing apparatus, represents one or more structural components in the system, in which the algorithms described herein can be embodied. [0020] The program 120 can be an application for performing analysis on data collected to assess the cognitive function of a subject. An application refers to a computer program that the user perceives as a distinct computer tool used for a defined purpose. An application can be built entirely into an operating system or other operating environment, or it can have different components in different locations (e.g., a remote server). The program 120 can include or interface with other software such as database software, testing administration software, data analysis/computational software, and user interface software, to name a few examples. For example, the program 120 can include


or interface with software that collects data and analyzes the cognitive function or otherwise assesses the brain condition of a person.

[0021] Interface software can also be included that operates over a network to interface with other processor(s), such as in a computer used by a test taker or test administrator. For example, the program 120 can include software methods for inputting and retrieving data associated with a cognitive assessment test, such as score results or demographic data. Such a cognitive assessment test can be administered using the program 120, or the program 120 can be used to collect and analyze data from a cognitive assessment test administered in another manner, such as an in-person test or a test administered by telephone. In addition, the program 120 can also effect various analytic processes, which are described further below.

[0022] The data processing apparatus includes one or more processors 130 and at least one computer-readable medium 140 (e.g., random access memory, storage device, etc.). The data processing apparatus 110 can also include one or more user interface devices 150. User interface devices can include display screen(s), keyboard(s), a mouse, stylus, modems or other networking hardware/firmware, or any combination thereof to name a few examples. The subject matter described in this specification can also be used in conjunction with other input/output devices, such as a printer or scanner. The user interface device can be used to connect to a network 160, and can furthermore connect to a processor or processors 170 via the network 160 (e.g., the Internet).

[0023] Therefore, a user of the analysis program 120 does not need to be local, and may be connecting using a web browser on a personal computer, or using other suitable hardware and software at a remote location. For example, a clinician at a testing center can access a web interface via the remote processor 170 in order to input test data for a cognitive test. The test data can be the results of an already administered test, or the test data can be the information exchanged when actually administering the cognitive test using a network based testing system. In any event, data can be transmitted over the network 160 to/from the data processing apparatus 110. Furthermore the clinician can input test data and retrieve analysis based on that data or other data stored in a database. Note that the data processing apparatus 110 can itself be considered a user interface device (e.g., when the program 120 is delivered by processor(s) 170 as a web service). [0024] The system 100 can be used to analyze data from various types of cognitive function or brain condition assessment tests. The following description provides extensive details with respect to detecting test cheaters in the context of persons with


Alzheimer's disease or a related disorder (ADRD). However, the systems and techniques described can be implemented for use in other scenarios in which individuals are tested on their cognitive abilities. Thus, the systems and techniques described can be used in many different contexts to detect cheating among persons taking neuropsychologic tests. [0025] In the context of ADRD, if an individual already has ADRD, they have a strong incentive to try and obtain a long-term care insurance policy to cover the costs of their care. If they take the test over the telephone, it is easy to cheat by taking notes on what items they will be asked to recall later. Such cheating could result in an ADRD individual being issued a long-term care insurance policy that will cost the insurer upwards of $200,000.

[0026] Because of the increased use of telephone testing when applying for a long-term care insurance policy, more individuals already affected by ADRD will be able to cheat and pass the test to receive a long-term care insurance policy. Based on results discussed below, it is estimated that of 100,000 applicants for long-term care insurance annually tested by telephone, approximately 1,140 individuals with ADRD will cheat and pass testing to receive a policy. This can result in annual claims costs of around $228 million or more to the long-term care insurance industry. It is therefore advantageous for insurers to devise strategies to detect cheating when applicants are tested by telephone.

[0027] The strategies that people use to cheat on a cognitive test are limited. In the case of cheating by the test taker, cheating fundamentally involves the use of a strategy of providing answers without an examiner realizing that the person being tested is obtaining them in violation of the rules of the test. For example, in testing memory over the telephone, a person may write down all the words they are asked to remember and recall later. When asked to recall the words, they may then read the list back to the examiner. The way that the person reads the words back to the examiner constitutes their cheating strategy. One person's cheating strategy may be to read back the words in the order they were presented. Another's cheating strategy may be to read back the words in their reverse order. Another's cheating strategy may be to read back the list of words in a random order.

[0028] In deciding whether a person is cheating, one can compare their memory performance to well-characterized patterns observed among non-cheating individuals. These normal patterns of memory retrieval are non-intuitive and quite distinct from most cheating strategies conjured up by individuals. These normal patterns of memory performance consist of (1) changes in total numbers of words recalled across successive


learning and testing trials, and (2) stereotypical orderings of the words recalled in each trial.

[0029] FIG. 2 shows an exemplary process 200 used to identify cheaters on a cognitive test. First information is received (210) concerning a person, where the first information specifies the person's responses, and lack thereof, for items of a cognitive test administered to the person. The information can be from a previously administered test or from a test that is currently being administered. Nonetheless, the exemplary process described in connection with FIG. 2, and other implementations of the more general concepts underlying this exemplary process, are not practiced on the human body since such processes do not themselves involve an interaction necessitating the presence of the person.

[0030] The test can be administered in-person, over the phone, through a computer network (e.g., through the Web over the Internet), or in another manner. Note that for in-person administration of the test, the present system and techniques can still provide benefits in that cheating can be detected in the event that the test administrator is in collusion with the test taker or the test administrator is not sufficiently trained to accurately detect cheating that occurs during an in-person administration of the cognitive test.

[0031] The cognitive test can include multiple item-recall trials, which can further include at least one item common to a subset of the recall trials, the subset including at least two of the recall trials. Alternatively, the multiple item-recall trials can have no items in common. In general, the full set of information in the test should be recorded, including all components of the test and all subject responses. The information can be received (210) from a database, a network or web-enabled device, a computer-readable medium, or a standard input/output device on a computer system, to name just a few examples. The cognitive test can include a test of attention and recall, and the test components can include items (e.g., words) to be recalled in one or more trials. For example, a test of attention and recall can include the CERAD word list (CWL) and/or other lists of words or items.

[0032] The CWL is a test of immediate and delayed free recall and delayed cued recall that was developed by the National Institute of Aging CERAD centers in the 1980s. There are three learning trials in which the subject is presented each word in the list and repeats it, then at the end of the list, recalls as many words as they can. The subject is not instructed to recall the words in the order they are presented, but rather to recall as many


words as they can immediately after being presented the list of ten words. They are also instructed that a few minutes after the third learning trial they will again be asked to recall as many of the words as they can without another presentation of the words. The words are presented in a different order for each learning trial. The number of words correctly recalled is recorded for each of the three learning trials. After the third learning trial, an interference task that distracts the subject from rehearsing the word list (e.g., a test of executive function, which can consist of the subject being asked to select which of three animals is most different from the other two, and is given 12 such triads of animals to make decisions upon) is given over a period of two to five minutes. After the interference task, the subject is asked to recall as many of the ten words as they can (delayed free recall trial). The number of words correctly recalled is recorded. After the delayed free recall trial, the subject is given a delayed recognition task. The subject is presented the ten CWL words intermixed with ten distracter words. For each word, the subject is asked whether it was one of the CWL words, and the subject's response (yes or no) is recorded. [0033] Since the words of the trials are already known, the first information need not specify the words themselves, but rather just whether or not a given word was recalled. For example, eight word lists can be used, with each word list including ten words for learning and recall, plus ten more words for delayed-cued-recall. Four trials can be employed in the cognitive test, where one of the eight word lists can be selected for use in the test. The first set of ten words from the list can be used in the immediate and delayed free recall trials (and the words of the list can be presented in the same order in each trial or in a different order), and the second set of ten words can be used as the distracter word list for the delayed-cued-recall trial. The first information can include an eighty column binary score (i.e., an eighty bit vector) that corresponds to the responses received on the immediate and delayed free recall trials of the cognitive test. Each bit in this example indicates whether a corresponding word from a trial was recalled, or whether the corresponding word from the trial was not recalled.

[0034] For example, a binary indicator matrix that is eighty columns wide can be defined as follows. Each word in each trial can occupy 2 columns. The first column can be assigned a 1 if the word in the trial was recalled and a 0 if it was not recalled. The second column can be assigned a 0 if the word in the trial was recalled and a 1 if it was not recalled. Each trial with ten words thus occupies twenty columns, for a total of eighty columns across the four free recall trials of the word list. With this arrangement, the binary indicator matrix gives a row total of forty, which permits the determination of an


optimal column score for a word when it was recalled in a trial, as well as a different optimal column score when that word was not recalled in a trial.

[0035] The words in each word list can be linguistically and statistically equivalent. The words on each distinct list can have the same level of intra-list associability and usage frequency. Each list of words can have the same level of associability and usage frequency with each and every other list of words. For example, the eight word lists used can be as shown in Table 1:


W#: 10-word list used in the learning trials, to be recalled

D#: used in the delayed-cued-recall trial along with the 10-word list

Table 1: Word Lists
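The encoding of paragraphs [0033]-[0034] can be illustrated with a short Python sketch. This is only an illustration of the layout described above; the function name and the assumption that responses arrive as four lists of ten booleans (three learning trials plus the delayed free recall trial) are not taken from the patent.

import numpy as np

def encode_responses(recalled):
    # Sketch of the eighty-column binary indicator row of paragraphs [0033]-[0034].
    # `recalled` is a 4 x 10 array-like of booleans: one row per free-recall trial,
    # one entry per word, True if the word was recalled in that trial.
    recalled = np.asarray(recalled, dtype=bool)
    assert recalled.shape == (4, 10), "four free recall trials of ten words expected"
    row = np.empty(80, dtype=int)
    row[0::2] = recalled.ravel()    # first column of each pair: 1 if recalled
    row[1::2] = ~recalled.ravel()   # second column of each pair: 1 if not recalled
    assert row.sum() == 40          # the row total of forty noted in paragraph [0034]
    return row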


[0036] The word lists can be used in different parts of a test (e.g., the distracter and learning word lists can be interchanged). Moreover, the words in each word list can be presented in the same order or different order. For example, a shuffled order can be employed over multiple trials, such as in the CERAD or the ADAS-Cog (Alzheimer's Disease Assessment Scale-cognitive subscale) cognitive assessment tools. The ADAS-Cog consists of eleven tasks measuring different cognitive functions. The ADAS-Cog word recall test has the same general method of test administration as the CWL. Note that the ADAS-Cog does not use the 10-word list for cued recall that is used in the immediate and delayed free recall trials; it has its own set of words for that.

[0037] In general, the words in each word list should have the same difficulty of being recalled as the other words on that list, as well as the words in the other lists. For each learning trial, the words can be presented in the same order or in different order. It will be appreciated that other data formatting approaches, as well as other cognitive tests and test components, are also possible.

[0038] Other cognitive assessment tests can include, but are not limited to, other multiple word recall trials, other recall or cued recall tests of verbal or non-verbal stimuli, tests of executive function, including triadic comparisons of items (e.g., deciding which one of three animals is most different from the other two), tests of judgment, similarities, differences or abstract reasoning, tests that measure the ability to shift between sets or perform complex motor sequences, tests that measure planning and organizational skill, tests of simple or complex motor speed, tests of language abilities including naming, fluency or comprehension, and tests of visual-perceptual abilities including object recognition and constructional praxis. In one implementation, subjects are asked to recall the nine animals that were used for the triadic comparisons interference task that was given between the third learning trial and the delayed free recall trial. This delayed free recall of animals differs from that of the wordlist in that the subject is not asked to remember the animal names to recall them later, and therefore may not write them down if they are cheating. Differences between the delayed free recall of the animals and of the wordlist can help identify test cheaters. Examples of recorded data can include the words recalled, the words not recalled, the order of the words recalled, time delay before recall, the order in which intrusions and repetitions are recalled, and various aspects of test performance. Moreover, the cognitive test can include one or more trials performed to determine specific cognitive functions such as physical (e.g., orientation or hand-eye coordination) or perception-based tests. Additional information can be obtained in order to classify the


score, such as demographic information, or the date(s) of test administration, to name just two examples.

[0039] In any case, the first information is analyzed (220) using a classification algorithm trained on second information concerning a group of people to whom the cognitive test has been administered. The classification algorithm is generated in accordance with a first part and a second part, where the first part distinguishes between cheaters and non-cheaters, and the second part distinguishes between impaired cheaters and non-impaired cheaters. Thus, the classification algorithm characterizes the ability to detect cheating among normal and impaired individuals (e.g., those with ADRD). In the context of multiple item-recall trials, this can be done based on the total numbers of words recalled across trials (e.g., the total scores of the various trials of the CERAD wordlist), upon ratios of total scores for different pairs of trials, upon the recall of a given word or set of words across trials, upon the order in which a given word or set of words is recalled across trials, or based upon the orderings of words recalled per trial to detect different types of cheaters.

[0040] The classification algorithm can be configured to check for cheating strategies characteristic of normal persons and persons with some impairment, such as ADRD. One reason to think that individuals impaired with ADRD may have a different cheating strategy than normal individuals is that executive function is affected early in ADRD. Individuals use executive function to select a cheating strategy. The cheating strategy for impaired ADRD individuals may therefore be simpler than, or differ in some other way from, that for normal individuals. Identification of a subset of cheating strategies that characterize most impaired ADRD individuals (or individuals with other executive function impairments) can improve the accuracy of cheating detection and reduce the costs required to re-test suspected cheaters in a setting where they cannot cheat.

[0041] The person can thus be identified (230), based on the analyzing (220), as a cheater requiring a verification test to confirm an initial result of the cognitive test. The verification test can be the same cognitive test administered using a different protocol. For example, when the test is initially administered by phone, the verification test can be an in-person or video-monitored administration of the same cognitive test with a different set of similar stimuli, such as a different, but equivalent, wordlist. Alternatively, the verification test can be a different test than the initially administered cognitive test. For example, if the test is initially administered by phone, the verification test could consist of


a second test administered by phone in which the stimuli, such as tones, cannot be written down.

[0042] FIG. 3A shows an exemplary process 300 of generating a classification algorithm to identify cheaters on a cognitive test. In a first part of algorithm generation, data can be analyzed to identify a subset of variables that discriminate between cheaters and non-cheaters, and results in the first part can be validated (310). The group of people used to train the classification algorithm can include a first sample and a second sample, the data analyzed in the first part can come from the first and second samples, and the results in the first part can be validated using a different mixture of the second sample, such as described in further detail below.

[0043] In a second part of algorithm generation, data of persons identified as cheaters in the first part can be analyzed to identify a subset of variables that discriminate between impaired cheaters and non-impaired cheaters (320). The persons identified as cheaters can be from the second sample, the first sample, or both. The analysis operations (310, 320) can be performed multiple times using different inputs to create multiple variations of the classification algorithm. In any case, a classification algorithm can be selected to maximize sensitivity while minimizing reduction in specificity, which preserves a high negative predictive value while maintaining a low misclassification rate for impaired cheaters (330). [0044] A detailed example is now discussed, in which total scores as well as ratios of total scores of the various sub-tests of a memory test were examined. In this example, the first sample included 50 subjects with no evidence of cognitive impairment and normal job performance who were administered a memory test over the telephone twice. In this example, the subjects in the first sample were assigned at random to cheat on either the first or the second test, and instructed to not cheat on the other test. Moreover, the subjects did not receive the same wordlist for the two tests.

[0045] The second sample included 15,467 individuals applying for long-term care insurance who took the memory test over the telephone and passed the test. They were asked at the end of the test whether they had written down any words. If they answered yes, they were classified as reported cheaters. They were also evaluated for suspected cheating based on pre-established criteria, such as a highly suspicious pattern of words recalled. Applicants suspected of cheating were classified as suspected cheaters. Of these 15,467 individuals, 15,038 had complete data that allowed for full analysis. The sample was therefore restricted to the 15,038 applicants with complete data.


[0046] A 30% sub-sample of the suspected and reported cheaters was administered the test a second time (N=847). This time, they were administered the test in-person to prevent cheating. Those applicants who failed the in-person test were classified as impaired cheaters, while those who passed were classified as unimpaired cheaters.

[0047] Candidate Variables For Cheating Classification Algorithm - Candidate variables that were examined to detect cheaters in parts 1 and 2 of the cheating algorithm included demographics (age, gender, education) and variables assessing different aspects of the individual's cognitive test performance. These cognitive variables included total numbers of words recalled on each of the trials, the 6 ratios of the total numbers of words recalled for all possible pairs of the four trials, the numbers of words recalled that were repetitions or intrusions (words not in the list) in each trial, the delayed free recall of animals (a separate measure of memory performance in which there was no specific instruction to try and remember, later on, the 9 animals that are presented to the subject, three at a time, as they are asked to select which animal is most different from the other two), and the ratio of the number of words recalled on the delayed free recall from the 10-word list compared to the delayed free recall of the 9 animals from the triadic comparisons task.

[0048] Cheating Algorithm Part 1 - Cheaters versus Non-Cheaters: The training sample had 208 subjects and consisted of all sample 1 subjects plus an equal number of sample 2 cheater and non-cheater subjects. Stepwise logistic regression was then performed on the candidate variables to remove ones that were clearly non-predictive. For the remaining variables, a bootstrap procedure was then applied, in which two thirds of the cheaters and two thirds of the non-cheaters in the training sample were randomly selected one thousand times, and logistic regression was performed each time. These thousand runs generated bootstrapped confidence intervals of the coefficient values for each of the remaining candidate variables. Any candidate variable whose bootstrapped confidence interval included zero was removed from the classification model. Variables included in the final part 1 model discriminating cheaters from non-cheaters included education, the number of words recalled on the first learning trial and on the delayed free recall trial, the ratio of the numbers of words recalled on trial 2 versus trial 1, the ratio of the corresponding numbers for the delayed free recall trial versus learning trial 2, and the ratio of the number of words recalled on delayed free recall of the wordlist versus that of the animals. Additional parameters that properly weighted the classification algorithm of


cheater versus non-cheater included the prior probability of cheating by applicants for long-term care policies in the absence of other knowledge.
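To make the bootstrap step of paragraph [0048] concrete, the following Python sketch (written with numpy and statsmodels, neither of which is specified by the patent) refits a logistic regression on one thousand two-thirds subsamples of cheaters and non-cheaters and drops any candidate variable whose bootstrapped coefficient confidence interval includes zero. All names and the 95% interval are illustrative assumptions.

import numpy as np
import statsmodels.api as sm

def bootstrap_variable_selection(X, y, n_boot=1000, alpha=0.05, seed=0):
    # X: (n, p) matrix of candidate variables; y: 0/1 labels (1 = cheater).
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    rng = np.random.default_rng(seed)
    cheaters = np.flatnonzero(y == 1)
    non_cheaters = np.flatnonzero(y == 0)
    coefs = []
    for _ in range(n_boot):
        # Randomly select two thirds of the cheaters and two thirds of the non-cheaters.
        idx = np.concatenate([
            rng.choice(cheaters, size=(2 * len(cheaters)) // 3, replace=False),
            rng.choice(non_cheaters, size=(2 * len(non_cheaters)) // 3, replace=False),
        ])
        fit = sm.Logit(y[idx], sm.add_constant(X[idx])).fit(disp=0)
        coefs.append(fit.params[1:])            # drop the intercept
    coefs = np.asarray(coefs)
    lo, hi = np.percentile(coefs, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    keep = (lo > 0) | (hi < 0)                  # confidence interval excludes zero
    return keep, lo, hi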

[0049] To validate the final part 1 algorithm, the 847 sample 2 subjects were classified using the algorithm, and a non-parametric receiver operating characteristic (ROC) curve was used to estimate overall accuracy. The ROC curve defines the overall accuracy as the area under the curve, and defines the possible pairs of sensitivity and specificity values for discriminating cheaters from non-cheaters as points on the curve (regardless of whether the cheaters were normal or impaired). Applicants from sample 2 who were classified as cheaters were then submitted to part 2 of the algorithm to discriminate non-impaired cheaters from impaired cheaters.
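The validation step of paragraph [0049] can be sketched as follows, assuming scikit-learn is available (the patent specifies only a non-parametric ROC curve, not any particular software): the part 1 score Y is used as a ranking score, overall accuracy is taken as the area under the curve, and each cut point on the curve yields one sensitivity/specificity pair.

from sklearn.metrics import roc_curve, roc_auc_score

def summarize_roc(y_true, y_score):
    # y_true: 1 for confirmed cheaters, 0 for non-cheaters; y_score: part 1 scores Y.
    auc = roc_auc_score(y_true, y_score)             # overall accuracy (area under the curve)
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    pairs = [(thr, sens, 1.0 - f) for thr, sens, f in zip(thresholds, tpr, fpr)]
    return auc, pairs                                # each pair: (cut point, sensitivity, specificity)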

[0050] The algorithm for part 1, which is applied to persons who had taken the cognitive test by telephone and were classified as normal, was as follows: Y = -ln[β0*(prevalence of not cheating)] + β1*education + β2*(Trial 1 Total) + β3*(Delayed Free Recall Trial Total) + β4*(Trial 2 Total / Trial 1 Total) + β5*(Delayed Free Recall Trial Total / Trial 2 Total) + β6*(Delayed Free Recall of Animals Total). Y is the score that is used, in conjunction with the true cheating status of each case, by a receiver operating characteristic curve to discriminate cheaters from non-cheaters. The βi coefficients are the regression weights used to multiply the scores of the various variables in the equation.

[0051] Cheating Algorithm Part 2 - Impaired Cheaters versus Normal Cheaters: A 70% random sub-sample of the 847 applicants classified, by in-person testing, as impaired cheaters or normal cheaters was analyzed using stepwise logistic regression to look for variables discriminating impaired cheaters from normal cheaters. A similar set of candidate variables from the memory test and demographic variables were submitted for inclusion into the stepwise logistic regression. From this analysis, variables that were clearly non-predictors were excluded from the model. Bootstrap analysis was then performed using one thousand random samples of equal numbers of impaired and non-impaired cheaters (forty-three to fifty-five per group), and logistic regression was performed with each random sample to generate a bootstrapped confidence interval for each variable. Variables whose coefficient values had confidence intervals that included zero were excluded from the model. The remaining variables in the model were run through a second iteration of this process so that they had to survive stepwise regression followed by bootstrapped confidence interval estimation to remain in the final model for


part 2. To validate the resulting final model for part 2, the full set of fifty-seven impaired cheaters was selected, along with a 9% random sample of the normal cheaters, and these cases were then classified using a non-parametric ROC curve. This process was repeated twenty-five times to obtain the overall accuracy and confidence interval for part 2.

[0052] Variables that performed well in part 2 of the cheating classification algorithm were age, gender, education, the number of words recalled on the third learning trial, and the number of words recalled on the delayed free recall of animals. Additional parameters that properly weighted the classification algorithm of impaired cheater versus non-impaired cheater included the prior probabilities of cheating by impaired and non-impaired applicants for long-term care policies in the absence of other knowledge.

[0053] The algorithm for discriminating impaired versus non-impaired cheaters who had taken the cognitive test by telephone and were classified as a possible cheater was as follows: Y = -ln[β0*(prevalence of non-impaired cheater / prevalence of impaired cheater)] + β1*age + β2*gender + β3*education + β4*(Trial 3 Total) + β5*(Delayed Free Recall of Animals Total). Y is the score that is used, in conjunction with the true status of each case, by a receiver operating characteristic curve to discriminate impaired cheaters from non-impaired cheaters. The βi coefficients are the regression weights used to multiply the scores of the various variables in the equation.

[0054] Such classification algorithm(s) can thus be used to detect cheating by a long-term care insurance applicant taking a cognitive test over the telephone. FIG. 3B shows an exemplary process 350 of detecting cheating on telephone-administered cognitive testing of a long-term care insurance applicant. Telephone-based EMST (Enhanced Mental Skills Test) testing can be performed (355). Classification algorithm(s) can be run (360) using results of the testing. A check can then be made (365) to determine whether to decline coverage, or proceed further.
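The two discriminant scores of paragraphs [0050] and [0053] translate directly into Python, as sketched below. The coefficient values in the usage line are placeholders for illustration only; the actual βi weights come from the logistic regression fits described above and are not given here.

import math

def part1_score(b, prevalence_not_cheating, education, trial1_total, trial2_total,
                delayed_free_recall_total, delayed_recall_of_animals_total):
    # Part 1 score Y (cheater versus non-cheater), per paragraph [0050].
    # Assumes non-zero trial totals; a real implementation would guard the ratios.
    return (-math.log(b[0] * prevalence_not_cheating)
            + b[1] * education
            + b[2] * trial1_total
            + b[3] * delayed_free_recall_total
            + b[4] * (trial2_total / trial1_total)
            + b[5] * (delayed_free_recall_total / trial2_total)
            + b[6] * delayed_recall_of_animals_total)

def part2_score(b, prevalence_ratio, age, gender, education, trial3_total,
                delayed_recall_of_animals_total):
    # Part 2 score Y (impaired versus non-impaired cheater), per paragraph [0053].
    # prevalence_ratio = prevalence of non-impaired cheater / prevalence of impaired cheater;
    # gender is assumed to be coded numerically (e.g., 0 or 1).
    return (-math.log(b[0] * prevalence_ratio)
            + b[1] * age + b[2] * gender + b[3] * education
            + b[4] * trial3_total + b[5] * delayed_recall_of_animals_total)

# Illustrative call with placeholder coefficients (not the patent's fitted values):
y1 = part1_score([1.0, 0.1, -0.3, -0.4, 0.5, 0.6, -0.2],
                 prevalence_not_cheating=0.81, education=16, trial1_total=6,
                 trial2_total=8, delayed_free_recall_total=9,
                 delayed_recall_of_animals_total=2)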

[0055] If a normal determination is made (365), cheating versus no-cheating algorithm(s) can be run (370). A check can then be made (375) to assess whether cheating has occurred. If not, the potential underwriting of an insurance policy can be triggered. If cheating has likely occurred, the process can proceed further where impaired versus normal-cheater algorithm(s) can be run (380). A check can then be made (385) to assess whether a likely cheater is a likely impaired cheater. If not, the potential underwriting of an insurance policy can be triggered. If the likely cheater is likely an impaired cheater, a face to face (FTF) or other validation test can be given (390). A


check can then be made (395) to assess impairment and either decline coverage or potentially trigger the underwriting of an insurance policy.
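As a rough illustration only, the decision flow of FIG. 3B (process 350, steps 355-395) can be summarized in Python as below; the boolean inputs stand in for the outputs of the telephone screening, the part 1 and part 2 classifiers, and the in-person verification test, and all names are hypothetical.

def screen_applicant(failed_phone_screen, classified_as_cheater,
                     classified_as_impaired_cheater, failed_in_person_test):
    if failed_phone_screen:                    # 365: telephone test indicates impairment
        return "decline coverage"
    if not classified_as_cheater:              # 375: no cheating suspected
        return "proceed to underwriting"
    if not classified_as_impaired_cheater:     # 385: likely a non-impaired cheater
        return "proceed to underwriting"
    # 390/395: likely impaired cheater, so verify with a face-to-face (FTF) or other test
    return "decline coverage" if failed_in_person_test else "proceed to underwriting"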

[0056] A total of 18.9% of the 15,038 applicants were either reported cheaters (14.4%) or suspected cheaters (4.5%). Of the 30% sub-sample of 847 reported and suspected cheaters re-tested in-person, 94% and 6% of them were classified as normal cheaters and impaired cheaters, respectively. Compared to the 2.6% prevalence of impairment among persons not suspected of cheating, cheaters are 2.3 times more likely to be impaired than non-cheaters.

[0057] Table 2 below summarizes the savings in claims costs for the sample of 15,038 applicants who were screened normally by telephone and were evaluated for cheating using a subset of multiple versions of the classification algorithm. The different versions correspond to different cut points on the ROC curve used. The second and last columns of the table are normalized to allow generalization of the findings to any sample size.

Table 2: Range of Costs and Cost Savings for Different Algorithm Versions


[0058] Table 3 shows the classification performance statistics of parts 1 and 2 of the cheating algorithm when used together to classify the 15,038 subjects into impaired cheaters versus everyone else (normal cheaters and non-cheaters who passed the test when given by telephone). The sensitivity is the accuracy of detecting impaired cheaters and the specificity is the accuracy of detecting everyone else. To determine sensitivity, the sub-sample of 847 cheaters was used.

Table 3: Classification Performance Statistics for Different Algorithm Versions

[0059] The results show that there is a trade-off between sensitivity and specificity, such that one cannot have both high sensitivity and specificity. Considering the two tables together, there are greater savings in future claims costs when one selects algorithms with a higher Negative Predictive Value, which means that fewer impaired cheaters were misclassified. The tradeoff for this greater future claims savings is a higher cost of paying for repeat testing in-person, which can go from a low of $5.25 per normal telephone test for the algorithm with the smallest future claims savings, to a high of $39.87 per normal telephone test for the algorithm with the greatest future claims savings.

[0060] Parts 1 and 2 of the Cheating Algorithm described above can be improved by an analysis of the order of words recalled in each trial for impaired cheaters versus normal cheaters. Thus, analyzing a person's responses on a cognitive test, with respect to responses of a group of people, using a classification algorithm can also involve evaluating a probability of an order of items recalled by the person given probabilities of recall patterns for the group of people. The recall patterns (across trials) for an individual can be compared with known recall patterns for a group of people whose brain conditions are already established to a desired level of accuracy. Given this information for the group of people, a good estimate of the conditional probability of recalling a particular set


of items in a particular order can be determined for impaired versus non-impaired individuals and for impaired cheaters versus non-impaired cheaters. This information can be compared with an individual's recall pattern(s) to determine whether or not the current individual likely cheated on the cognitive test. [0061] The information for the group of people can be a set of well-classified cases generated in the following manner. A relatively large population of subjects can be evaluated with an extensive neuropsychological test battery, with functional measures, with severity staging measures (the Clinical Dementia Rating Scale, the Functional Assessment Staging Test, and/or other measures), with laboratory testing and brain imaging. The evaluated population is "relatively large" in the sense that there are enough cases to provide statistically significant results in light of the number of modeled categories, e.g., over four hundred subjects when the number of tuple categories (discussed further below) is sixteen. The evaluated population should include normal subjects and subjects who are known to have cheated on a given cognitive test. [0062] Correspondence analysis can be used to analyze the cognitive test results for the subjects (e.g., the binary score vectors of the training sample), and to compute the optimal row score matrix, optimal column score matrix and the singular value matrix. Correspondence analysis is an analytical method that has been largely used in quantitative anthropology and the social sciences. Its primary function is to maximize the canonical correlation between the rows and columns of an input data matrix so that the maximum amount of information in the data can be explained. Mathematically, it is designed to provide the best linear solution to the explanation of the information (variance) in the data. [0063] In some embodiments, correspondence analysis can be used to maximize the explanation of the information that distinguishes impaired cheaters versus non-impaired cheaters. In the case of the CWL, the information consists of the patterns of recalled plus non-recalled words in each trial. In this sense, subject scores generated by correspondence analysis represent a complex combination of the subject characteristics (both normative and non-normative) plus word list test performance metrics (e.g., words recalled, order recalled, retention time, etc.). The maximization of the explainable information can be accomplished through a singular value decomposition of the input data matrix.
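Paragraphs [0060] and [0061] describe comparing an individual's recall order with recall-pattern probabilities estimated from well-classified reference groups. One simple way to operationalize that idea, offered here only as an assumption and not as the patent's actual method, is a position-frequency model: estimate from each reference group how often each word is produced at each output position, then score a subject's recall order by its log-likelihood under those estimates.

import numpy as np

def order_log_likelihood(recall_order, group_orders, n_items=10, smoothing=1.0):
    # recall_order: word indices in the order the subject recalled them.
    # group_orders: list of such sequences from well-classified reference cases.
    counts = np.full((n_items, n_items), smoothing)     # counts[word, position]
    for order in group_orders:
        for pos, word in enumerate(order):
            counts[word, pos] += 1
    probs = counts / counts.sum(axis=0, keepdims=True)  # P(word | output position)
    return sum(np.log(probs[word, pos]) for pos, word in enumerate(recall_order))

Comparing the same order's score under the non-cheater group with its score under the cheater (or impaired cheater) group then gives a simple likelihood-ratio style indicator.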

[0064] Correspondence analysis reduces the dimensionality of a raw data matrix while minimizing the loss of information. Tschebychev orthogonal polynomials can be used to convert the raw data matrix into an optimal row score matrix, an optimal column score matrix, and a singular value matrix of eigenvalues. These matrices can have the following statistical properties: (1) each row of the optimal row score matrix consists of a vector whose components are multivariate normally distributed and statistically independent of each other; (2) the optimal row score vectors are also directly comparable because the effects of their marginal totals have been removed; (3) each column of the optimal column score matrix consists of a vector whose components are multivariate normally distributed and statistically independent; (4) the optimal column score vectors are also directly comparable because the effects of their marginal totals have been removed; and (5) the singular value matrix consists of a vector along the diagonal of the matrix, in which each value represents a canonical correlation between the row and column variables of the optimal score matrices. Each value of the vector is statistically independent of the other values and indicates the magnitude of the contribution of each component of the optimal row and column score vectors. The rank of these three matrices defines the number of statistically independent components needed to account for all of the explainable variance (non-noise) in the raw data. The rank is usually of much lower dimension than the number of rows or columns. This means that the transformation of the input data matrix into a set of statistically orthogonal matrices (via singular value decomposition) can yield a massive reduction in dimensionality while continuing to account for most of the explainable information in the input data.
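As a minimal, illustrative sketch only (not the implementation of this specification) of the correspondence analysis described in paragraphs [0062]-[0064], the following shows how a subjects-by-(word, trial) indicator matrix could be decomposed via a singular value decomposition; the function name, the toy data, and the use of NumPy are assumptions introduced here.

```python
import numpy as np

def correspondence_analysis(N):
    """Minimal correspondence analysis of a non-negative data matrix N
    (e.g., subjects x recalled/not-recalled word-trial indicators).
    Returns optimal row scores, optimal column scores, and singular values."""
    N = np.asarray(N, dtype=float)
    P = N / N.sum()                                   # correspondence matrix
    r = P.sum(axis=1)                                 # row masses
    c = P.sum(axis=0)                                 # column masses
    # Standardized residuals: the effects of the marginal totals are removed.
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    row_scores = (U * sv) / np.sqrt(r)[:, None]       # principal row coordinates
    col_scores = (Vt.T * sv) / np.sqrt(c)[:, None]    # principal column coordinates
    return row_scores, col_scores, sv

# Toy example: 5 subjects x 6 word-trial indicators (1 = word recalled on that trial).
X = np.array([[1, 0, 1, 1, 0, 1],
              [1, 1, 1, 0, 0, 1],
              [0, 1, 1, 1, 1, 0],
              [1, 0, 0, 1, 1, 1],
              [0, 1, 1, 0, 1, 1]], dtype=float)
row_scores, col_scores, sv = correspondence_analysis(X)
print(sv[:3])   # leading canonical correlations between row and column variables
```

In this sketch the returned singular values play the role of the canonical correlations described above, and the row and column scores have had the effects of their marginal totals removed.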

[0065] Thus, the optimal row scores represent the pattern of both recalled and not-recalled words in each trial after removing the effect of the total number of words recalled, and the optimal column scores represent the effects of recalling or not recalling a given word in a given trial after removing the effect of the sample distribution. In this regard, the optimal row and optimal column scores are not simple weightings of the number of words recalled, their difficulty, their order or their position in the wordlist, or the specific sample used. Rather, the optimal row and column scores provide the best linear solution to explaining the total variance (information) of the raw data.

[0066] Correspondence analysis can thus produce optimal row and column score vectors that require only a relatively small number of components (the first two or three components in many cases) to characterize the majority of the explainable variance of the input data matrix. These optimal row and column score vectors can be derived by the simultaneous and inseparable use of the information from both normative and non-normative cases, as well as recalled and non-recalled words per trial, to maximize data reduction and explanation of the total variance. The optimal column score and singular value matrices can be used for classification of future subjects, while the optimal row score matrix can be used to develop a statistical classification algorithm, such as one using logistic regression or discriminant analysis.

[0067] Various different cognitive tests and cognitive function scoring techniques can be used. In any event, the cognitive test data can be analyzed to identify a person as a likely cheater (230). The identification can be a Boolean indication or a number, such as a measure of probability. The identification can be output in various manners, including displaying or printing a cheating indication on an output device, transmitting the cheating indication over a network to a computer system, or saving the cheating indication in a computer-readable medium for use as input to further assessment programs.
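As one possible, non-limiting illustration of paragraph [0066], the sketch below fits a logistic-regression classifier on the leading optimal row score components of a labeled training sample and then scores a new subject projected via the column scores; it assumes the hypothetical correspondence_analysis helper sketched earlier is in scope, that scikit-learn is available, and that the data and labels are invented toy values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: 8 subjects x 6 word-trial recall indicators, plus labels
# (1 = impaired cheater, 0 = non-impaired cheater).  Illustrative values only.
X_train = np.array([[1, 0, 1, 1, 0, 1],
                    [1, 1, 1, 0, 0, 1],
                    [0, 1, 1, 1, 1, 0],
                    [1, 0, 0, 1, 1, 1],
                    [0, 1, 1, 0, 1, 1],
                    [1, 1, 0, 1, 0, 1],
                    [0, 1, 0, 1, 1, 1],
                    [1, 0, 1, 0, 1, 0]], dtype=float)
y_train = np.array([1, 0, 1, 0, 1, 0, 1, 0])

row_scores, col_scores, sv = correspondence_analysis(X_train)  # helper sketched above
Z_train = row_scores[:, :2]                   # keep the first two components
clf = LogisticRegression().fit(Z_train, y_train)

# A new subject's row profile is projected into the same space (transition
# formula) and then scored by the classifier.
x_new = np.array([1, 1, 0, 1, 1, 0], dtype=float)
profile = x_new / x_new.sum()
z_new = (profile @ col_scores[:, :2]) / sv[:2]
print(clf.predict_proba(z_new.reshape(1, -1)))
```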

[0068] As noted above, analyzing a person's responses on a cognitive test, with respect to responses of a group of people, can involve evaluating a probability of an order of items recalled by the person given probabilities of recall patterns for the group of people. The items learned and tested need not be words. However, the present disclosure focuses on the case of the items being words, in the context of the CWL. This is done for purposes of clarity in this disclosure and in no way limits the application of the systems and techniques described to these specific examples. In general, the described systems and techniques can be used in any cognitive test in which the pattern of recall of an item across testing trials can be measured. Moreover, the described systems and techniques can allow for variations in: (1) the number of learning trials; (2) the number of testing trials; (3) the types of learning trials used (e.g., presenting items visually or audibly, verifying or not verifying that the subject correctly registered or understood the item presented, providing cues for items not recalled, learning trials in which the subject is presented only items not recalled in the previous learning trial); (4) the types of testing trials (e.g., delayed cued recall versus delayed recognition versus delayed free recall, delayed free recall plus providing cues for items not recalled); (5) the number of items in the test list; (6) the number of items presented from the test list in each learning trial; and (7) the types of items presented in the test list (e.g., items presented as words, pictures or other visual displays, sounds, smells, tastes, and items presented by touching them).

[0069] A recall pattern can be determined for each of multiple items across the recall trials. For example, subject test performance can be captured in the following form. Let:


$$d_{ijk} = \begin{cases} 1 & \text{if individual } i \text{ responds correctly to item } j \text{ on trial } k \\ 0 & \text{if individual } i \text{ responds incorrectly to item } j \text{ on trial } k \end{cases}$$

Then, the basic scoring element for the subject can be the response vector:

$$Z_{ij} = (d_{ij1}, d_{ij2}, \ldots, d_{ijK})$$

where $K$ is the total number of trials. There are $2^K$ possible response tuples for each word. For the CERAD Wordlist, there are 16 ($2^4$) response tuples for each list word. Each of the $2^K$ possible response tuples per word is assigned a unique response tuple value, $c$, which, for a given subject's recall of that word across $K$ trials, is:

$$c = \sum_{k=1}^{K} 2^{k}\, d_{ijk}$$

Given a response tuple, c, the data can be coded as follows:

$$x_{ijc} = \begin{cases} 1 & \text{if } c \text{ is the item } j \text{ response tuple for subject } i \\ 0 & \text{otherwise} \end{cases}$$
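To make the encoding above concrete, here is a brief sketch (illustrative only; the function names are hypothetical) that converts a word's per-trial recall indicators into its response tuple value, $c$, and into the corresponding indicator $x_{ijc}$:

```python
from typing import Sequence, List

def response_tuple_value(d: Sequence[int]) -> int:
    """Encode a word's recall pattern d = (d_1, ..., d_K), where d_k = 1 if the
    word was recalled on trial k, as the tuple value c = sum_k 2**k * d_k."""
    return sum(2 ** k * d_k for k, d_k in enumerate(d, start=1))

def tuple_indicator(d: Sequence[int]) -> List[int]:
    """Return the x_ijc indicator: a 1 in the slot addressed by c, 0 elsewhere.
    The tuple value doubles as a binary address into this array."""
    K = len(d)
    c = response_tuple_value(d)
    x = [0] * (2 ** (K + 1))      # c is an even value of at most 2**(K+1) - 2
    x[c] = 1
    return x

# Example: a word recalled on trials 1 and 3 of a four-trial CERAD Wordlist test.
d = (1, 0, 1, 0)
print(response_tuple_value(d))    # 2**1 + 2**3 = 10
print(sum(tuple_indicator(d)))    # exactly one slot is set
```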

As will be appreciated, this approach allows the response tuple value, $c$, to be used as a binary address within a computer system to access $x_{ijc}$, thus enabling more efficient processing. In any event, the goal can then be to identify response tuples that optimize discrimination between impaired cheaters and non-impaired cheaters.

[0070] Many different types of classification algorithms can be applied to such data, including correspondence analysis, ordinal logistic regression, Bayesian hierarchical methods, and classification and regression trees. The example detailed below is based on discriminant analysis.

[0071] A probability of the recall patterns for the person can be evaluated, given probabilities of the recall patterns for the group of people. For example, suppose that for a given population, each word has a fixed set of probabilities of falling into the $2^K$ response tuples. Namely, for a given word, $j$, the prior probability response tuple vector, $p_j$, over all possible response tuples is:

$$X_j = (X_{j1}, X_{j2}, \ldots, X_{jC}) \sim \text{Multinomial}(1;\, p_{j1}, p_{j2}, \ldots, p_{jC}), \qquad P(X_{jc} = 1) = p_{jc}, \qquad C = 2^K$$

Note that $p_j$ is the prior probability response tuple vector that would be assigned to any subject for the given word, $j$, until more information is known (such as the subject's performance for word $j$). Next, let the set consisting of the prior probability response tuple vectors for all list words be defined as the prior probability response tuple profile, $p = \langle p_{1c}, p_{2c}, \ldots, p_{Mc} \rangle_{c=1}^{C}$. The implicit presumption here is that each word's probability of recall is independent of the other list words, which is why the words in a learning list should have low associability.

[0072] When a subject has performed the specified number of trials, K, one can then compute their posterior probability response tuple profile, which is:

$$P(D_i \mid p) = \prod_{j=1}^{M} \prod_{c=1}^{C} p_{jc}^{\,x_{ijc}}$$

where $D_i = \langle x_{ijc} \rangle_{j=1}^{M}$ represents the $i$th subject's response tuple for each of the $M$ list words, and $p_j$ is the $j$th probability response tuple vector for list word $j$ across the $K$ selected trials. Note that the term $x_{ijc}$ equals 1 only for the response tuples, $p_{jc}$, that characterize the recall performance of the given subject, $i$, across the list words.

[0073] The group membership of subject $i$ (e.g., impaired cheater versus non-impaired cheater) can be defined by an indicator variable, $a_i$, where:

$$a_i = \begin{cases} 1 & \text{if subject } i \text{ is an impaired cheater} \\ 0 & \text{if subject } i \text{ is a non-impaired cheater} \end{cases}$$

Bayes' theorem can be used to classify the subject into a particular group by evaluating the probability of their response tuple profile given the probabilities of their response tuples:

$$P(a_i = 1 \mid D_i) = \frac{P(a = 1)\, P(D_i \mid a = 1)}{P(a = 1)\, P(D_i \mid a = 1) + P(a = 0)\, P(D_i \mid a = 0)} \qquad (1)$$

where $P(a_i = 1)$ and $P(a_i = 0)$ can be interpreted as the prior probabilities of membership in the impaired cheater and non-impaired cheater groups, respectively. In equation (1), the reliability of classifying a given subject into the proper group depends upon the accuracy of the estimates of the response tuples, $c$, that are most relevant to group discrimination. If there is a sufficiently large data set in which the group membership, $a$, is known, then the estimated probability of belonging to a given group, $a$ (e.g., impaired cheater versus non-impaired cheater), for a given response tuple, $c$, and a given word, $j$, can be given by:

$$\hat{P}(a \mid j, c) = \frac{\sum_{i=1}^{N} a_i(x_{ijc} = c)}{\sum_{i=1}^{N} (x_{ijc} = c)} \qquad (2)$$

for $a = 0, 1$ groups; $i = 1, \ldots, N$ subjects; $c = 1, 2, \ldots, 2^K$ response tuples; and $j = 1, 2, \ldots, M$ words. Note that the term $a_i(x_{ijc} = c)$ is set equal to "1" for all subjects belonging to the group being estimated. The group being estimated is made up of those individuals whose recall pattern of the word, $j$, corresponds to the unique response tuple specified by the value, $c$, across the specified set of $K$ trials.
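The sketch below shows one hedged way to realize equations (1) and (2) in code (not the implementation of this specification): per-word, per-tuple group probabilities are estimated from a labeled reference sample, Laplace smoothing is added here as an assumption to handle uncommon tuples, and Bayes' theorem then yields the posterior probability that a new subject is an impaired cheater. All names and data are illustrative.

```python
import numpy as np

def estimate_tuple_probs(tuples, labels, num_tuple_values, alpha=1.0):
    """tuples: (N subjects x M words) array of response tuple values c.
    labels:  length-N array of group indicators a_i (1 = impaired cheater,
    0 = non-impaired cheater).  Returns probs[a][j, c] = P(tuple c | word j,
    group a), Laplace-smoothed, plus the group priors P(a)."""
    tuples = np.asarray(tuples)
    labels = np.asarray(labels)
    M = tuples.shape[1]
    probs = {}
    for a in (0, 1):
        counts = np.full((M, num_tuple_values), alpha)
        for row in tuples[labels == a]:
            for j, c in enumerate(row):
                counts[j, c] += 1
        probs[a] = counts / counts.sum(axis=1, keepdims=True)
    priors = {a: float(np.mean(labels == a)) for a in (0, 1)}
    return probs, priors

def posterior_impaired_cheater(subject_tuples, probs, priors):
    """Equation (1): P(a = 1 | D_i) from the subject's per-word tuple values,
    treating words as conditionally independent given the group."""
    like = {a: priors[a] * np.prod([probs[a][j, c]
                                    for j, c in enumerate(subject_tuples)])
            for a in (0, 1)}
    return like[1] / (like[0] + like[1])

# Toy reference sample: 6 subjects, 3 list words, tuple values for K = 4 trials.
train_tuples = [[30, 30, 22], [6, 2, 0], [30, 22, 30], [2, 6, 2], [22, 30, 30], [0, 2, 6]]
train_labels = [0, 1, 0, 1, 0, 1]
probs, priors = estimate_tuple_probs(train_tuples, train_labels, num_tuple_values=32)
print(posterior_impaired_cheater([6, 2, 2], probs, priors))
```

The word-level independence assumption in this sketch mirrors the low-associability presumption stated above for the prior probability response tuple profile.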

[0074] Since the number of response tuples for any given word increases exponentially with the number of trials, large samples may be needed to obtain reliable estimates of the response tuple profiles, p, particularly if some of the response tuples, c, are uncommon. For the CWL, there are four interesting combinations of trials that provide a useful dissection of memory performance. The first three immediate free recall trials provide response tuples that measure working memory performance in the prefrontal cortex. The delayed free recall trial response tuples provide a measure of hippocampal storage and retrieval combined. The delayed recognition trial response tuples provide a measure of hippocampal storage. The first four trials or all five trials combined provide overall measures of memory performance.
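For illustration only (the trial indices and grouping names below are assumptions, not a prescribed configuration), the same tuple encoding can be restricted to these trial groupings:

```python
# Hypothetical trial indices for the CWL: trials 1-3 = immediate free recall,
# trial 4 = delayed free recall, trial 5 = delayed recognition.
TRIAL_SUBSETS = {
    "working_memory":       (0, 1, 2),       # first three immediate recall trials
    "delayed_free_recall":  (3,),
    "delayed_recognition":  (4,),
    "overall_first_four":   (0, 1, 2, 3),
    "overall_all_five":     (0, 1, 2, 3, 4),
}

def subset_tuple_value(d, subset):
    """Response tuple value c over a chosen subset of trials, reusing
    c = sum_k 2**k * d_k with k renumbered within the subset."""
    return sum(2 ** k * d[t] for k, t in enumerate(subset, start=1))

d = (1, 0, 1, 1, 1)   # illustrative recall pattern across five trials
print({name: subset_tuple_value(d, s) for name, s in TRIAL_SUBSETS.items()})
```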

[0075] For the four-trial CWL response tuples, one needs thousands of cases to obtain adequate estimates of each possible response tuple for each word. A database of cases can be built for this purpose, in which group membership is not explicitly known but can be estimated with reasonable accuracy by a previously established, validated algorithm (see, e.g., Cho, et al., "Early Detection and Diagnosis of MCI Using the MCI Screen Test," The Japanese Journal of Clinical and Experimental Medicine, 2007; 84(8): 1152-1160; Trenkle, et al., "Detecting Cognitive Impairment in Primary Care: Performance Assessment of Three Screening Instruments," Journal of Alzheimer's Disease, 2007; 11(3): 323-335; and Shankle, et al., "Method to Improve the Detection of Mild Cognitive Impairment," PNAS, 2005; 102(13): 4919-4924).

[0076] Group membership of each case in the database can be independently determined twice by the algorithm, first using a high-sensitivity cut-point (e.g., Sn = 96%, Sp = 88%), which can identify a relatively pure sample of non-impaired cheaters, and then using a high-specificity cut-point (e.g., Sn = 83%, Sp = 98%), which can identify a relatively pure sample of impaired cheaters. The performance of the group membership probability estimates derived from equation (2) can then be evaluated against each of these two cut-points for each response tuple of each word. This evaluation can be accomplished using each set of probability estimates independently to classify a different sample of subjects with known group membership. Note that an implicit presumption of this method is that the classification error attributable to the previously established algorithm is random relative to the response tuples.
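A minimal sketch of this dual determination (assuming, purely as an illustration, that the previously validated algorithm outputs a probability of impairment for each case; the cut-point values and names below are invented):

```python
import numpy as np

def label_with_cutpoint(scores, cut):
    """Binary labels from the prior algorithm's probability scores:
    1 (impaired cheater) at or above the cut-point, 0 (non-impaired) below."""
    return (np.asarray(scores, dtype=float) >= cut).astype(int)

scores = [0.05, 0.12, 0.40, 0.85, 0.95, 0.30]            # toy probabilities
labels_high_sn = label_with_cutpoint(scores, cut=0.20)   # high-sensitivity cut-point
labels_high_sp = label_with_cutpoint(scores, cut=0.80)   # high-specificity cut-point

# Negatives under the high-sensitivity cut-point give a relatively pure
# non-impaired-cheater sample; positives under the high-specificity cut-point
# give a relatively pure impaired-cheater sample.
print(labels_high_sn, labels_high_sp)
```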

[0077] FIG. 4 shows another exemplary system 400 used to identify cheaters on a cognitive test. The exemplary system can provide a variety of functions, including data analysis, storage and viewing, and remote access and storage capabilities useful for analyzing results of a cognitive test, including identifying cheaters on the cognitive test, such as described elsewhere in this specification.

[0078] A Software as a Service (SaaS) model can provide network-based access to the software used to identify cheaters on a cognitive test. This central management of the software can provide advantages, which are well known in the art, such as offloading maintenance and disaster recovery to the provider. A user, for example, a test administrator within a clinical environment 410, can access test administration software within the test administration system via a web browser 420. A user interface module 430 receives and responds to the test administrator interaction.

[0079] In addition, a customer's computer system 440 can access software and interact with the test administration system using an Extensible Markup Language (XML) transactional model 442. The XML framework provides a method for two parties to send and receive information using a standards-based, but extensible, data communication model. A web service interface 450 receives and responds to the customer computer system 440 in XML format. For example, an XML transactional model can be useful for storage and retrieval of the structured data relating to the cognitive test.

[0080] An analysis module 460 analyzes inputs from the web service interface 450 and the user interface module 430, and produces test results to be sent back. The analysis module uses a results analysis module 470 to perform the test analysis to identify cheaters. The results analysis module 470 can, for example, incorporate the methods described elsewhere in this specification.
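As one hypothetical illustration of the XML transactional model of paragraph [0079] (the element names and payload structure below are invented for this sketch and are not part of the specification), a customer system could serialize and parse a test-submission transaction as follows:

```python
import xml.etree.ElementTree as ET

# Build a hypothetical test-submission transaction.
txn = ET.Element("CognitiveTestTransaction")             # invented element names
subject = ET.SubElement(txn, "Subject", id="12345")
test = ET.SubElement(subject, "Test", name="CWL", trials="5")
for trial_no, recalled in enumerate([["apple", "ticket"], ["apple"], []], start=1):
    trial = ET.SubElement(test, "Trial", number=str(trial_no))
    for word in recalled:
        ET.SubElement(trial, "Recalled").text = word

payload = ET.tostring(txn, encoding="unicode")           # sent to the web service interface
parsed = ET.fromstring(payload)                          # parsed back on the service side
print(payload)
print([t.attrib["number"] for t in parsed.iter("Trial")])
```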

[0081] A data storage module 480 transforms the test data collected by the user interface module 430 and the web service interface 450, along with the resulting data generated by the analysis module 460, for permanent storage. A transactional database 490 stores data transformed and generated by the data storage module 480. For example, the transactional database can keep track of individual writes to a database, leaving a record of transactions and providing the ability to roll back the database to a previous version in the event of an error condition. An analytical database 492 can store data transformed and generated by the data storage module 480 for data mining and analytical purposes.

[0082] Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as
one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible program carrier for execution by, or to control the operation of, data processing apparatus. The tangible program carrier can be a propagated signal or a computer-readable medium. The propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a computer. The computer-readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, or a combination of one or more of them. [0083] The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, or a combination of one or more of them. In addition, the apparatus can employ various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures. [0084] A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

[0085] The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special

purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).

[0086] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example, semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

[0087] While this specification contains many implementation details, these should not be construed as limitations on the scope of the invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of the invention. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment.

Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. [0088] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be

performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. [0089] Thus, particular embodiments of the invention have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. The actions recited in the claims can be performed using different statistical classification procedures, such as discriminant analysis, stepwise multivariate regression or general linear models, rather than logistic regression as described above. The actions recited in the claims can be performed using different orthogonal transformations of the raw input data, such as principal components analysis, multi-dimensional scaling, or latent variable analysis, rather than correspondence analysis as described above.
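As a hedged illustration of such substitutions (a sketch, not a prescribed implementation), principal components analysis can stand in for correspondence analysis and linear discriminant analysis for logistic regression, here using scikit-learn with invented toy data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(40, 20)).astype(float)   # toy word-trial indicators
y = rng.integers(0, 2, size=40)                       # toy cheater / non-cheater labels

# PCA replaces correspondence analysis as the orthogonal transformation, and
# linear discriminant analysis replaces logistic regression as the classifier.
model = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
model.fit(X, y)
print(model.predict_proba(X[:2]))
```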

[0090] Moreover, additional techniques can be employed in combination with those described above. Cheaters can be distinguished from non-cheaters based on additional checks for similarities in the pattern of performance. Statistical techniques can be used to detect anomalous patterns in the data; examples include techniques to detect outliers, order effects, abnormal dispersion, and spurious correlations. Using such techniques in combination with the systems and techniques described above relies on the premise that it is difficult to invent naturalistic data, particularly data with multiple dimensions.
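For example, a minimal sketch of two such checks (illustrative assumptions only): a robust outlier screen on total recall scores and a check for abnormally low trial-to-trial dispersion, either of which could flag data that do not look naturalistic.

```python
import numpy as np

def robust_z_scores(values):
    """Outlier check: z-like scores based on the median and the median
    absolute deviation (MAD), which resist distortion by the outliers."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med)) or 1e-9
    return 0.6745 * (values - med) / mad

def abnormally_low_dispersion(trial_scores, group_sd, factor=0.25):
    """Dispersion check: a subject whose trial-to-trial variability is far
    below the group's typical spread may be producing invented, overly
    regular data."""
    return np.std(trial_scores) < factor * group_sd

total_recall = [18, 21, 19, 20, 38, 17]          # one suspicious total score
print(robust_z_scores(total_recall))             # the 38 stands out strongly
print(abnormally_low_dispersion([7, 7, 7, 7], group_sd=2.0))
```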