

Title:
A GRANULAR METHOD FOR MEASUREMENT AND ANALYSIS OF MENTAL CAPABILITIES OR CONDITIONS AND A PLATFORM THEREFOR
Document Type and Number:
WIPO Patent Application WO/2020/154757
Kind Code:
A1
Abstract:
A psychometric measurement and analysis platform that includes a plurality of databases (202) configured to store a psychometric feature (204) associated with a subject. The psychometric feature is recorded in the database while the subject undergoes a psychometric test that includes facial expression image recognition. The platform includes a processor (208) configured to tag (210) a psychometric feature (212) associated with the subject's interaction with a user interface (206) and in reaction to the psychometric test presented on the display. The processor is configured to compare the subject's psychometric feature with a psychometric feature resident on one or more databases to generate a psychometric analysis of the subject.

Inventors:
BLAIK JASON (AU)
GARCIA MELINDA (AU)
KERR MATTHEW (AU)
WANG SHAWN (AU)
CERVETTO KATHERINE (AU)
Application Number:
PCT/AU2019/051439
Publication Date:
August 06, 2020
Filing Date:
December 28, 2019
Assignee:
REVELIAN PTY LTD (AU)
International Classes:
A61B5/16; G16H50/20
Foreign References:
US20070050151A12007-03-01
US20130281798A12013-10-24
Other References:
SCHERER ET AL.: "Assessing the ability to recognize facial and vocal expressions of emotion: Construction and validation of the Emotion Recognition Index", JOURNAL OF NONVERBAL BEHAVIOR, vol. 35, no. 4, 24 July 2011 (2011-07-24), pages 305 - 326, XP019964845, DOI: 10.1007/s10919-011-0115-4
HILDEBRANDT ET AL.: "Measuring the speed of recognising facially expressed emotions", COGNITION & EMOTION, vol. 26, no. 4, 2012, pages 650 - 666, XP055728248
ILIESCU ET AL.: "Examining the Psychometric Properties of the Mayer-Salovey-Caruso Emotional Intelligence Test", EUROPEAN JOURNAL OF PSYCHOLOGICAL ASSESSMENT, 2013, XP019964845
Attorney, Agent or Firm:
MARTIN IP PTY LTD (AU)
Claims:
What is claimed is:

1. A method for granular measurement of a mental capability or condition of a subject, comprising: presenting the subject with a challenge event, the challenge event including facial expression image recognition; timing the subject’s interaction with the challenge event from the subject’s first interaction to the subject’s completion of the challenge event; collecting a plurality of psychometric features associated with at least one physical reaction of the subject during the subject’s interaction with the challenge event; and comparing the collected psychometric features of the subject with a target measure, the target measure being generated based on collected psychometric features of other subjects.

2. The method of claim 1, wherein the facial expression image recognition requires the subject to identify if an emotion presented in a facial image matches a label presented concurrently with the facial image.

3. The method of claim 1, wherein the facial expression image recognition requires the subject to identify which face presented in a facial image matches an emotion most likely experienced by a subject in a specified situation.

4. The method of any one of claims 1-3, further comprising combining psychometric features from a plurality of challenge events in which the subject participated, and generating a metric representative of an interaction pattern.

5. The method of claim 4, further comprising analysing the mental capability or condition of the subject using one or more interaction patterns.

6. The method of either claim 4 or 5, further comprising using pattern recognition to predict the mental capability or condition of the subject.

7. The method of any one of the above claims, further comprising presenting the subject with an additional challenge event that includes a polygonal agreement test.

8. The method of any one of the above claims, further comprising presenting the subject with an additional challenge event that includes a numerical analysis.

9. The method of any one of the above claims, further comprising presenting the subject with an additional challenge event that includes a textual error identification test.

10. A psychometric measurement and analysis platform, comprising: a plurality of databases, each database being configured to store a psychometric feature associated with a subject, the psychometric feature being recorded in the database while the subject undergoes a psychometric test that includes facial expression image recognition; a user interface requiring a physical interaction by the subject, the user interface being used in combination with the psychometric test being presented to the subject on a display; and a processor configured to tag a psychometric feature associated with the subject’s interaction with the user interface and in reaction to the psychometric test presented on the display, said tag including a first ID associated with one of said databases, said tag including a second ID associated with an identification of the subject, said processor being configured to compare the subject’s psychometric feature with a psychometric feature resident on one or more of said databases to generate a psychometric analysis of the subject.

11. The platform of claim 10, wherein the databases are configured as a distributed network.

12. The platform of either claim 10 or 11, wherein the user interface includes a mouse.

13. The platform of either claim 10 or 11, wherein the user interface includes a touch screen.

14. The platform of any one of claims 10-13, wherein said processor is configured to combine psychometric features from a plurality of psychometric tests in which the subject participated, and generate a metric representative of an interaction pattern.

15. The platform of claim 14, wherein said processor is configured to generate a plurality of different types of metrics, said processor being configured to weight the metrics to generate a subject’s cognitive score.

Description:
A GRANULAR METHOD FOR MEASUREMENT AND ANALYSIS OF MENTAL CAPABILITIES OR CONDITIONS AND A PLATFORM THEREFOR

Field of Invention

The present disclosure relates to a granular method for measurement and analysis of mental capabilities or conditions and a platform therefor.

Background of Invention

Physical ailments are diagnosed and treated using a variety of medical devices and methods. Often ignored, but equally important, are mental ailments and capabilities. Understanding of mental illnesses and conditions continues to grow, and a variety of new treatments are being developed to combat existing and newly recognised mental conditions. Proper diagnosis and assessment of a person’s mental state is important for several reasons. For example, by assessing a person’s mental state or capabilities, mental impairments and/or diseases can be more effectively treated. Additionally, the mental assessment may be used to better position a person in a job situation more naturally suited to their capabilities. Moreover, mental assessments collected from a large enough sample of people can be used to program new algorithms for more realistic artificial intelligence, since present artificial intelligence methods lack a realistic emotional component and therefore impair the ability of a computer programmed with an artificial intelligence component to display empathy.

Accordingly, there exists a need for an improved system and method of mental diagnosis and assessment that minimises one or more known disadvantages of conventional systems and methods.

Summary

In order to increase the accuracy of a mental assessment, a person’s emotional intelligence is measured using a variety of tests. Emotional intelligence is defined as the person’s capacity to effectively reason with and use emotions to enhance thought and to solve problems (Mayer, 2016). This includes the capacity to perceive and identify emotions, use emotion to facilitate reasoning, understand the meaning of emotions and the information they convey, and effectively regulate and manage emotions. Known methods for psychometric measurement and analysis are typically focused on assessing the ability of a subject (also known as a candidate) to select the correct answer on a test. In so doing, such known methods fail to provide a deeper insight into a subject’s strengths and abilities and, consequently, do not provide insight into the subject’s true potential. Further shortcomings of such known methods include a failure to provide a true reflection of key aspects of the subject’s emotional intelligence, including emotional perception and emotional understanding.

In one aspect, the present disclosure provides an assessment which involves capturing all of the user’s interactions with a challenge event or puzzle, preferably from the moment the person is first presented with the challenge event to the moment the person completes it, and then combining those interactions using predictive algorithms to derive an overall measure of emotional intelligence. In this aspect, thousands of events or data points per person may be captured and used to more accurately generate a mental assessment of a person.

In a further aspect, interactions during a challenge event are captured first at an event level, and such events are then combined into metrics that represent interaction patterns. The metrics are then combined to produce an overall measure of a specific mental or emotional ability.

The present disclosure in another aspect sets forth a method for granular psychometric assessment of a mental capability or condition. Preferably, the method includes presenting a subject with a challenge event, the challenge event including facial expression image recognition; timing the subject’s interaction with the challenge event from the subject’s first interaction to the subject’s completion of the challenge event; collecting a plurality of psychometric features associated with the physical reactions of the subject during the subject’s interaction with the challenge event; and comparing the collected psychometric features of the subject with a target measure, the target measure being generated based on collected psychometric features of other subjects.
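The timing step of this method can be illustrated in code. The sketch below is a hedged, minimal interpretation of "timing from first interaction to completion"; the event records, labels, and timestamps are illustrative assumptions, not disclosed by the specification.

```python
# Illustrative sketch: time the subject's interaction with a challenge
# event from their first interaction to completion. Event kinds
# ("presented", "interaction", "completion") are hypothetical names.
def interaction_duration_ms(events):
    """Return milliseconds from the first interaction to completion."""
    first = min(t for t, kind in events if kind == "interaction")
    done = next(t for t, kind in events if kind == "completion")
    return done - first

events = [
    (1_000, "presented"),
    (1_450, "interaction"),   # subject's first interaction
    (2_100, "interaction"),
    (4_250, "completion"),
]
duration = interaction_duration_ms(events)  # 4250 - 1450 = 2800 ms
```

Note that the presentation timestamp is deliberately excluded: the claim times from the subject's first interaction, not from when the challenge appears.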

The present disclosure in another aspect sets forth a psychometric measurement and analysis platform. Preferably, the platform includes a plurality of databases, each database being configured to store a psychometric feature associated with a subject, the psychometric feature being recorded in the database while the subject undergoes a psychometric test that includes facial expression image recognition; a user interface requiring a physical interaction by the subject, the user interface being used in combination with the psychometric test being presented to the subject on a display; and a processor configured to tag a psychometric feature associated with the subject’s interaction with the user interface and in reaction to the psychometric test presented on the display, said tag including a first ID associated with one of said databases, said tag including a second ID associated with an identification of the subject, said processor being configured to compare the subject’s psychometric feature with a psychometric feature resident on one or more of said databases to generate a psychometric analysis of the subject.
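The two-part tag described above (a first ID for the database and a second ID for the subject) can be sketched as a simple record. All field and class names below are hypothetical; the specification does not prescribe a data layout.

```python
# Hypothetical sketch of the tagged psychometric feature: each feature
# carries a database ID (first ID) and a subject ID (second ID).
from dataclasses import dataclass

@dataclass(frozen=True)
class FeatureTag:
    database_id: str   # first ID: which database stores the feature
    subject_id: str    # second ID: which subject the feature belongs to

@dataclass
class PsychometricFeature:
    tag: FeatureTag
    name: str
    value: float

feature = PsychometricFeature(
    tag=FeatureTag(database_id="db-01", subject_id="subj-42"),
    name="first_interaction_latency_ms",
    value=312.0,
)
```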

The reference to any prior art in this specification is not and should not be taken as an acknowledgement or any form of suggestion that the prior art forms part of the common general knowledge in Australia or in any other country.

It is to be understood that the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed, unless otherwise stated. In the present specification and claims, the word “comprising” and its derivatives, including “comprises” and “comprise”, include each of the stated integers but do not exclude the inclusion of one or more further integers. The claims as filed with this application are hereby incorporated by reference in the description.

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several embodiments and together with the description, serve to explain the principles of one or more forms of the invention.

Brief Description of the Drawings

Fig. 1 is a schematic representation of a method for psychometric measurement and analysis of a mental capability or condition as herein disclosed.

Fig. 2 is a schematic representation of a psychometric measurement and analysis platform as herein disclosed.

Fig. 3 illustrates an algorithm as herein disclosed.

Fig. 4 is a schematic representation of a challenge as used in the method of Fig. 1 and/or platform of Fig. 2.

Fig. 5 is a schematic representation of another challenge as used in the method of Fig. 1 and/or platform of Fig. 2.

Fig. 6 is a schematic representation of another challenge as used in the method of Fig. 1 and/or platform of Fig. 2.

Fig. 7 is a schematic representation of another challenge as used in the method of Fig. 1 and/or platform of Fig. 2.

Fig. 8 is a schematic representation of another challenge as used in the method of Fig. 1 and/or platform of Fig. 2.

Fig. 9 is a schematic representation of an assessment score as an output of the method of Fig. 1 and/or platform of Fig. 2.

Fig. 10 is a graphical representation of another assessment score as an output of the method of Fig. 1 and/or platform of Fig. 2.

Fig. 11 is a graphical representation of another assessment score as an output of the method of Fig. 1 and/or platform of Fig. 2.

Detailed Description

The following detailed description of embodiments of a method for measurement and analysis of a mental capability or condition and a platform therefor refer to the accompanying drawings.

Figs. 1 to 11 illustrate preferred embodiments of a granular method for measurement and analysis of a mental capability or condition 100, a platform 200 for the method 100, challenge events 400, 500, 600, 700, 800 and assessment scores 900, 1000, 1100.

Referring to Figs. 1 and 3 to 11, a method for granular measurement of a mental capability or condition 100 includes presenting a subject with a challenge event 102, the challenge event including facial expression image recognition 104; timing 106 the subject’s interaction with the challenge event from the subject’s first interaction 108 to the subject’s completion of the challenge event 110; collecting a plurality of psychometric features 112 associated with at least one physical reaction 114 of the subject during the subject’s interaction with the challenge event 102; and comparing 116 the collected psychometric features 118 of the subject with a target measure 120, the target measure being generated based on collected psychometric features of other subjects (not shown). Preferably, the facial expression image recognition 104 requires the subject to identify if an emotion presented in a facial image, for example 602, 702, matches a label 604, 704 presented concurrently with the facial image 602, 702. In a preferred embodiment, the facial expression image recognition 104 requires the subject to identify which face presented in a facial image 802, 804, 806, 808, 810, 812 matches an emotion most likely experienced by a subject in a specified situation. In a particularly preferred embodiment, the method 100 includes combining psychometric features from a plurality of challenge events, for example 400, 500, 600, 700, and 800, in which the subject participated, and generating a metric representative of an interaction pattern 900, 1000, 1100. In a further preferred embodiment, the method 100 includes analysing a mental capability or condition of the subject using one or more interaction patterns (not shown). In yet a further preferred embodiment, the method 100 includes using pattern recognition 400 to predict a mental capability or condition of the subject.
In a further preferred embodiment, the method 100 includes presenting the subject with an additional challenge event that includes a polygonal agreement test. In yet a further preferred embodiment, the method 100 further includes presenting the subject with an additional challenge event that includes a numerical analysis 500. In another preferred embodiment, the method 100 includes presenting the subject with an additional challenge event that includes a textual error identification test.

Referring to Figs. 2 to 11, a psychometric measurement and analysis platform 200 includes a plurality of databases 202, each database being configured to store a psychometric feature 204 associated with a subject (not shown), the psychometric feature being recorded in the database while the subject undergoes a psychometric test that includes facial expression image recognition (for example, facial expressions 602, 702, 802, 804, 806, 808, 810, 812); a user interface 206 requiring a physical interaction by the subject, the user interface being used in combination with the psychometric test being presented to the subject on a display; and a processor 208 configured to tag 210 a psychometric feature 212 associated with the subject’s interaction with the user interface 206 and in reaction to the psychometric test presented on the display, the tag 210 including a first ID 214 associated with one of the databases 202, the tag 210 including a second ID 216 associated with an identification of the subject, the processor being configured to compare the subject’s psychometric feature 212 with a psychometric feature 218 resident on one or more of the databases to generate a psychometric analysis of the subject 220. In a preferred embodiment, the databases 202 are configured as a distributed network. In a further preferred embodiment, the user interface 206 includes a mouse (not shown). In a further preferred embodiment, the user interface 206 includes a touch screen (not shown). In a further preferred embodiment, the processor 208 is configured to combine psychometric features from a plurality of psychometric tests in which the subject participated, and generate a metric representative of an interaction pattern, for example 900, 1000, and 1100. In a further preferred embodiment, the processor 208 is configured to generate a plurality of different types of metrics, the processor 208 being configured to weight the metrics to generate a subject’s cognitive score (not shown).
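The final step above, weighting a plurality of metric types into a cognitive score, can be sketched as a weighted combination. The metric names and weight values below are hypothetical; the specification does not disclose the actual metrics or weights used by the platform.

```python
# Hedged sketch: combine several metric types into one cognitive score
# via a weighted average. Metric names and weights are invented here.
def cognitive_score(metrics, weights):
    """Weighted combination of metric types, normalised by total weight."""
    total = sum(weights[k] for k in metrics)
    return sum(weights[k] * metrics[k] for k in metrics) / total

metrics = {"speed": 0.8, "accuracy": 0.6, "consistency": 0.9}
weights = {"speed": 1.0, "accuracy": 2.0, "consistency": 1.0}
score = cognitive_score(metrics, weights)
# (1.0*0.8 + 2.0*0.6 + 1.0*0.9) / 4.0 = 0.725
```

Normalising by the total weight keeps the score on the same scale as the individual metrics, which makes differently weighted test batteries comparable.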

An exemplary embodiment is set forth below. The platform includes two separate assessments: Matching Faces and Emotional Ties. Matching faces requires candidates to quickly identify the emotion displayed on a person’s face, while Emotional Ties requires candidates to read a number of everyday situations and predict the types of emotional consequences that may arise as a result of these situations. These assessments have been specifically developed to assess a candidate’s ability to accurately perceive emotions and effectively understand the connections between emotions, and situations that lead to specific emotional reactions.

CHALLENGE-BASED ASSESSMENTS

The present platform in a preferred aspect is driving change and innovation in the development of challenge-based assessments. This is achieved through cutting edge cloud technologies, big data, a custom-built analytics platform, psychometric modelling and challenge design.

Challenge-based assessments, unlike traditional assessments, take more into account than just the correct answer, providing richer insight into a candidate’s strengths and abilities. Challenge-based assessments draw the best effort from the candidate, providing insight into their true potential. This is done while maintaining robust psychometric properties and complementing traditional psychometric assessment data.

A further strength of challenge-based assessments is their robustness against faking and response distortion, providing the candidate with tasks that assess ability while not exposing the nature of the construct being measured. This further strengthens the ability of challenge-based assessments to provide recruiters with true insight into their candidates. Candidate engagement is increased by providing real time, in-challenge feedback that indicates to the candidate how they are performing as they move through the tasks.

Over the previous two decades, emotional intelligence (EI) has attracted much attention in both popular and academic literature. Two distinct conceptualisations of EI have emerged during this time that attempt to broadly define EI and guide its measurement - an ability-based model and a trait-based model of EI. The ability-based approach views EI as a type of intelligence, akin to cognitive ability, and utilises performance-based assessment. The trait model views EI more like personality and assesses it via self-report measures. The ability-based approach is more suited to recruitment contexts than the self-report approach.

The platform’s approach to assessing EI has been guided by the ability-based model first proposed by two prominent EI researchers, Mayer and Salovey, in 1997. Considered the most prevalent of ability-based models, this approach incorporates four branches of EI-related abilities - perceiving, using, understanding, and managing.

The platform has been developed to specifically assess the first and third branches of Mayer and Salovey’s model - emotional perception and emotional understanding. Emotion perception refers to the appraisal and expression of emotion and focuses on the ability to quickly and accurately perceive emotions in others, typically using nonverbal information. Accurate identification of emotions assists in making decisions regarding the most appropriate way to respond to others. Emotional understanding is concerned with how well one can effectively identify connections between events and emotional reactions. A comprehensive understanding of emotions fosters a knowledge of emotional triggers and the prediction of emotional outcomes to different situations and events.

The platform includes two brief assessment challenges - Matching Faces and Emotional Ties - that measure the perceiving and understanding aspects of EI. More information on Matching Faces and Emotional Ties is provided below.

The platform uses a device-agnostic approach and displays natively in common browsers without the need to download additional plugins or change settings, ensuring that all candidates have a positive and frictionless testing experience regardless of operating system, device type or size.

PERCEIVING EMOTION

The ability to accurately perceive the emotion displayed by others is an important aspect of interpersonal interaction. It includes the ability to use non-verbal information such as facial expressions, body language and tone of voice to determine specific emotions being conveyed during interactions and the authenticity of those emotions. An accurate read of the emotional state of others is a precursor for being able to respond effectively.

Matching Faces measures the ability to perceive emotions. In Matching Faces, the candidate is required to quickly identify the emotion displayed on a person’s face. They need to indicate if the word they see and the emotion they perceive are a match.

There are 30 rounds with a time limit of 3 seconds per round. Matching Faces increases in difficulty as the candidate progresses. The actors used throughout the challenge equally represent males and females, a range of ages, and diverse cultural backgrounds.

UNDERSTANDING EMOTION

Emotional understanding encompasses the ability to comprehend emotional language, understand how emotions may change over time and combine to form more complex emotional states. A strong understanding of emotions is important for predicting emotional progression and the emotional outcome of different situations. Emotional Ties measures the ability to understand emotions. In Emotional Ties, the candidate is required to read several everyday situations and predict the types of emotional consequences that may arise as a result.

There are 20 rounds with a maximum time limit of 1 minute per round. The mix of situations presented includes three different formats with one or more people involved.

NORMATIVE GROUPS

When a candidate engages the platform, their performance is compared against a group of other people who have also completed this assessment. This group is important for subsequent interpretation, so the more relevant the comparison group is to the position, the greater the confidence that can be placed in this interpretation.

The platform has four types of comparison groups that may be used when interpreting candidate scores (up to three of these can be displayed in the candidate’s report at any one time). The types of comparison groups available for selection are dependent on the assessment language that has been selected for the position and are briefly outlined below:

1. General Population - Comprised of a large group of individuals from a wide range of industries and job types.

2. Industry Group - Industry comparisons provide additional information that aids in determining whether a candidate’s ability is consistent with that of peers operating within a similar industry.

3. Management Level - Managerial level comparisons provide additional information that aids in determining whether a candidate’s ability is consistent with that of peers operating at a similar level.

4. Company Specific Benchmark - Provides a benchmark of the existing level of ability of current employees at the organisation.

REPORTING

Candidate results are presented in a user friendly and easy to read format. The scores provided include an overall emotional intelligence score, as well as scores for Matching Faces and Emotional Ties.

Candidate percentile scores are placed into one of five performance classifications, ranging from Far Below Average to Far Above Average. The example shown in Fig. 9 illustrates an Overall Score for a candidate based on the General Population comparison group, including dynamic interpretive text to describe the likely behaviour the candidate will display in the workplace.

Katherine’s overall score was higher than 81% of the general population group, which indicates that she is likely to:

• Display high levels of emotional intelligence when interacting with others, working in teams, and making decisions.

• Be able to accurately read and interpret emotions displayed by others, and therefore respond accordingly.

• Have a strong awareness of emotions and their impact on self and others in different situations.

• Successfully build and develop relationships with others, such as colleagues, customers and clients.
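The mapping from a percentile score to one of the five performance classifications can be sketched as a simple banded lookup. The cut-off values below are assumptions chosen for illustration; the actual band boundaries are not disclosed in this document.

```python
# Illustrative sketch: place a percentile score into one of the five
# performance classifications named above. Band cut-offs are assumed.
BANDS = [
    (10, "Far Below Average"),
    (30, "Below Average"),
    (70, "Average"),
    (90, "Above Average"),
    (100, "Far Above Average"),
]

def classify(percentile):
    """Return the performance classification for a percentile in [0, 100]."""
    for upper, label in BANDS:
        if percentile <= upper:
            return label
    raise ValueError("percentile must be in [0, 100]")

band = classify(81)  # under these assumed cut-offs: "Above Average"
```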

The report also includes several recommended interview questions tailored to candidate scores, designed to provide further insight into candidate strengths and development areas. An example excerpt is shown below.

Interview questions for Katherine

These questions are based on Katherine’s scores on the platform. They highlight areas you may choose to investigate further if she progresses to an interview.

Positioning statement:

As part of the recruitment process you completed an assessment, which assessed aspects of your emotional intelligence. I have a few questions for you in relation to this assessment.

Recommended interview questions - General questions for Katherine:

How did you find completing the assessment?

Is there anything about your test experience that you would like to share?

The platform measures aspects of emotional intelligence. What role do you think emotions play in the workplace?

In what situation do you think it would be important to identify how someone else is feeling, and to understand that emotion?

VALIDATION

TEST CONSTRUCTION

Two large scale validation exercises were conducted to refine the platform and initially establish its sound psychometric properties.

An overview of the steps involved in the development of the platform is provided below.

Concept and Design

Basic assessment design mock ups were produced and challenge mechanics were discussed and explored, with an extensive user interface review undertaken involving multiple rounds of iterations. Details such as fonts, shading, colour palettes, and icons were all considered in the design phase of the project.

Image Capture

Professional actors and a photographer were secured to produce a bank of images for use in the assessment. The use of real actors displaying genuine emotions was considered critical for the validity of the tool. Around 10,000 images were captured for potential use from 21 different actors. The seven universal emotions (happiness, sadness, anger, surprise, disgust, contempt, fear) formed the basis of the emotions expressed by the actors and underpinned all item development.

Image Rating

An extensive image review and rating process was undertaken to determine the emotional content of the images and reduce the image bank to the best available stimuli. This process included the use of Microsoft’s Cognitive Services Emotion API, a panel of Organisational Psychologists applying the Facial Action Coding System (FACS), and ratings from over 900 individuals via survey. The primary emotion and the strength of the emotion (e.g., subtle, moderate, strong) in each image were determined.

Item Content

The item content was then developed for each mini-challenge separately. For Matching Faces, this included emotional labels such as ‘Happiness’ being matched with specific images - some displaying happiness and some not. The initial item bank was well balanced with an even representation across the seven universal emotions from a diverse range of actors. For Emotional Ties, the item development phase included developing hundreds of short scenarios mapped to the seven universal emotions, and pairing these with a subset of images to create assessment items. All item content underwent multiple rounds of review and refinement by Psychologists before being accepted.

First Validation

An initial validation study was completed involving approximately 1400 participants completing the platform along with another well-known measure of El, the Situational Test of Emotional Understanding (STEU). The results from this initial validation indicated that Matching Faces and Emotional Ties showed strong potential in measuring El.

Modifications and Re-design

Following the completion of the initial validation, a number of changes were made to Matching Faces and Emotional Ties, including:

• Updated tutorial content for Matching Faces and Emotional Ties.

• The restructure of Emotional Ties (split into 3 parts based on scenario type).

• The redesign of the user experience for mobile device completion. In particular, Emotional Ties was changed significantly from the original design to enhance the mobile experience.

• Round time adjustments for both mini-challenges.

Item streaming logic was also designed at this stage in the development of the platform. Specifically, Linear on the Fly (LOFT) item streaming was implemented, ensuring a unique candidate experience each time the assessment is completed, while maintaining identical levels of difficulty to ensure candidate comparison remains fair.
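The core idea of LOFT item streaming, assembling a unique item sequence per candidate while holding total form difficulty constant, can be sketched as below. The item bank, difficulty values, and random-search assembly strategy are all illustrative assumptions; the actual LOFT implementation is not disclosed.

```python
# Hedged sketch of Linear-on-the-Fly (LOFT) assembly: draw a unique set
# of items whose mean difficulty is close to a fixed target, so every
# candidate sees different items at comparable difficulty.
import random

def build_loft_form(bank, n_items, target_difficulty, seed=None):
    """Draw n_items whose mean difficulty is close to the target."""
    rng = random.Random(seed)
    best, best_gap = None, float("inf")
    for _ in range(200):  # simple random-search form assembly
        form = rng.sample(bank, n_items)
        mean = sum(item["difficulty"] for item in form) / n_items
        gap = abs(mean - target_difficulty)
        if gap < best_gap:
            best, best_gap = form, gap
    return best

# Invented 30-item bank with difficulties from 0.20 to 1.07.
bank = [{"id": k, "difficulty": round(0.2 + 0.03 * k, 2)} for k in range(30)]
form = build_loft_form(bank, n_items=10, target_difficulty=0.6, seed=1)
```

A different seed yields a different item set with a near-identical mean difficulty, which is the fairness property the text describes.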

The scope and scale of these changes were significant enough to warrant a second validation study prior to market launch.

Second Validation

A second validation study was conducted to confirm and enhance the findings from the first validation study based on the content and format revisions. Over 1600 candidates participated in this phase of the project. The findings from this validation study are summarised in the section below.

PSYCHOMETRIC PROPERTIES

Validation studies have demonstrated the strong psychometric properties of the platform.

• As part of two large-scale validation exercises, over 3,000 participants completed the two mini-challenges (Matching Faces and Emotional Ties) and the Situational Test of Emotional Understanding (STEU). Adopting a cross-validation approach to modelling, scores for Matching Faces, Emotional Ties and Overall were found to correlate strongly with STEU performance: r = .40 (.45), .54 (.61) and .57 (.65) respectively. The second coefficient, presented in brackets, is the corrected coefficient, accounting for unreliability of the criterion.
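The bracketed coefficients above follow the standard Spearman correction for attenuation due to unreliability in the criterion, r_corrected = r_observed / sqrt(r_yy), where r_yy is the criterion's reliability. A minimal sketch follows; the STEU reliability of roughly .79 used here is inferred from the reported .40/.45 pair, not stated in the text.

```python
import math

def correct_for_attenuation(r_observed, criterion_reliability):
    """Spearman's correction for unreliability in the criterion only:
    r_corrected = r_observed / sqrt(r_yy)."""
    return r_observed / math.sqrt(criterion_reliability)
```

For example, with an assumed criterion reliability of .79, an observed r of .40 corrects to approximately .45, matching the first reported pair.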

• For each mini-challenge, the metrics that combine to produce both challenge and overall scores each contribute in statistically unique and significant ways to predict the convergent measure (STEU). This indicates that scoring for each challenge is not only valid but has also been derived in a manner that takes account of multiple aspects of emotional intelligence.

• A highly significant correlation was demonstrated between the scores and self-reported conflict at work, i.e., "I experience a lot of conflict with people at work," r = -.19***, p < .001. The Case Study below has more detail on this finding.

• A highly significant correlation was demonstrated between the scores and self-reported stress management, i.e., "I have a hard time making it through stressful events," r = -.12***, p < .001. The Case Study below has more detail on this finding.

• Small gender differences, with women performing slightly better than men, were observed for Emotional Ties (d = .28) and the overall score (d = .27). This is commensurate with general research findings, where similar gender differences in emotional intelligence are often reported.

• A non-significant correlation was found between age and overall performance, indicating that performance on the platform was not related to the age of participants. This was further evident when contrasting the <40 and >40 age groups, where again no difference in performance was evident. A small, negative correlation was demonstrated between age and performance on Matching Faces, r = -.19**, p < .01, which incorporates a speeded aspect in its administration. Such effects, however, do not persist at the overall level, where metrics across both challenges are combined and weighted when calculating a final score.

CASE STUDY

A sample of 931 individuals utilised the platform along with self-report questions relating to stress management and conflict in the workplace.

Participants were asked to indicate the extent to which they agreed with the statement 'I have a hard time making it through stressful events'. Those who scored less than 20% on the platform were twice as likely to respond with 'Agree' or 'Strongly Agree' to this question, with 27% of this group responding this way, compared to 13% in the higher-performing group. This result is shown graphically in Fig. 10.

The difference between these two groups was further analysed through an effect size, with a small to moderate effect found. These findings highlight the relationship between El as measured by the platform and self-reported experience of stress, which has important practical workplace applications for a range of roles and industries.
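The specification does not name the effect-size statistic used for these group comparisons. For a difference between two proportions, one conventional choice is Cohen's h, sketched below as an assumption; applied to the reported 27% vs 13% split it gives a value of about .35, which sits in the small-to-moderate range (by Cohen's benchmarks, .2 is small and .5 medium), consistent with the finding.

```python
import math

def cohens_h(p1, p2):
    """Cohen's h: effect size for the difference between two proportions,
    using the arcsine transformation phi = 2 * asin(sqrt(p))."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))
```

The same formula applied to the conflict result (12% vs 5%) yields roughly .26, again a small-to-moderate effect.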

Participants were also asked to indicate the extent to which they agreed with the statement 'I experience a lot of conflict with people at work'. Those who scored less than 20% on the platform were twice as likely to select 'Agree' or 'Strongly Agree' to this question, with 12% of this group responding this way compared to 5% in the higher-performing group. This result is shown graphically in Fig. 11.

A small to moderate effect size was found between these two groups. This finding provides support for the use of the platform in roles requiring a high degree of interpersonal effectiveness.

CANDIDATE REACTIONS

The platform in another preferred aspect was designed with the candidate experience as a central feature of the assessment. Throughout the design and development of the platform, several rounds of user feedback sessions were conducted, and iterations made to enhance the experience. In addition, candidates participating in the validation processes were invited to provide feedback on their experience completing a challenge, with an overwhelmingly positive response. The feedback from approximately 1,000 candidates is summarised below.

• 80% of candidates reported a positive experience completing an event challenge.

• 91% of candidates felt comfortable completing the event challenge as part of a job application.

• 71% of candidates felt the event challenge was better than other employment tests they've completed.

• 84% of candidates recommended employers use assessments like the event challenge to assess job applicants.

Candidates also commented on what they particularly liked about the event challenge. Some of the recurring feedback included:

• The use of real and diverse people throughout the event challenge

• The interesting everyday situations

• The clear and easy to understand nature of the assessment

• The event challenge didn't feel like a test

• The clear link between the tasks in the event challenge and workplace skills and abilities

The positive feedback from candidates who have completed the event challenge is a strength of the assessment and reflects the engaging and innovative approach to measuring EI.

A skilled person will appreciate that the disclosed method and platform can be applied using cutting-edge cloud technologies, big data, a custom-built analytics platform, psychometric modelling and engaging interactive design. It will also be appreciated that the method disclosed herein delivers a unique, state-of-the-art way to measure emotional intelligence. Indeed, the disclosed method presents, for the first time, a granular method of assessment focused on first collecting a plurality of events, then combining those events into metrics, and then combining those metrics into an overall cognitive score.
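The events → metrics → overall-score pipeline described above could be sketched as follows. This is an illustrative sketch only: the particular metrics (accuracy and a response-speed bonus), the weights and the 10-second cap are assumptions for illustration, not taken from the specification.

```python
def score_events(events):
    """Aggregate raw interaction events into per-challenge metrics,
    then combine those metrics into an overall score in [0, 1].

    events: list of dicts with 'challenge', 'correct' (bool), 'rt_ms'.
    Metric definitions and weights below are illustrative assumptions.
    """
    metrics = {}
    for challenge in {e["challenge"] for e in events}:
        subset = [e for e in events if e["challenge"] == challenge]
        accuracy = sum(e["correct"] for e in subset) / len(subset)
        mean_rt = sum(e["rt_ms"] for e in subset) / len(subset)
        # Faster mean response time maps closer to 1 (10 s assumed cap).
        speed = max(0.0, 1.0 - mean_rt / 10_000)
        # Weighted blend of the two metrics (illustrative 80/20 split).
        metrics[challenge] = 0.8 * accuracy + 0.2 * speed
    overall = sum(metrics.values()) / len(metrics)
    return metrics, overall
```

The design point is the layering: raw events are never scored directly; they are first reduced to interpretable metrics per challenge, and only those metrics feed the final weighted score.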

It will also be appreciated that the method disclosed here is highly engaging, interactive, and focused on improving the subject’s experience as a central feature of the method. By using the disclosed method, an ability to identify emotions in faces and determine the emotional consequences of everyday situations can be determined. Such abilities, i.e., the identification of emotions in faces and an appreciation of the emotional consequences of everyday situations are important in roles where people interaction is critical, such as in customer and client service, working in a team, and in leadership and management roles.

Furthermore, the disclosed method and platform facilitate an improved, engaging, and enjoyable experience for subjects being assessed using the method, while effectively and accurately measuring the subject’s ability to identify and understand a wide range of emotions. A skilled person will appreciate that the method and platform disclosed herein may be accessed using a mobile device such as a mobile phone and/or a tablet. In a preferred aspect, data generated through one or more assessments of one or more individuals may be used to build a database of predictive emotional responses and identifications. This database is accessible as part of a neural network that can be trained to better provide a computer system programmed with artificial intelligence to implement a more realistic output incorporating (or accounting for) human emotional responses and interactions. As a favourable result, a neural network thus trained may be able to display empathy, a quality that eludes conventional computer systems programmed according to artificial intelligence principles.

The features described with respect to one embodiment may be applied to other embodiments, or combined with, or interchanged with, the features of other embodiments without departing from the scope of the present invention.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.