Title:
DEVICES AND PROCESSES FOR DATA SAMPLE SELECTION FOR THERAPY-DIRECTED TASKS
Document Type and Number:
WIPO Patent Application WO/2022/195107
Kind Code:
A1
Abstract:
Data samples related to health management are selected. This comprises receiving samples associated with respective times, distributed in a sliding time window as current samples (23) and in a past period as past samples (25). Selected past samples are determined by keeping a first share (26) of the past samples, consisting of the most recent ones, and a second share (29) obtained by eliminating, among the past samples deprived of the first share (called a complementary share (27)), part of the past samples as a function of at least some of the current samples and of elimination conditions (32) depending on similarity criteria (31) applied to at least the first and complementary shares. The selected past samples are provided with the current samples for performing therapy-directed tasks. Applications include medical diagnosis, therapeutic treatment, medical rehabilitation and drug development.

Inventors:
MILLERAND GAËTAN (FR)
BOUSSARD MATTHIEU (FR)
PELLISSIER FRANÇOIS (FR)
BECHARAT CHAHRAM (FR)
Application Number:
PCT/EP2022/057232
Publication Date:
September 22, 2022
Filing Date:
March 18, 2022
Assignee:
CRAFT AI (FR)
International Classes:
G16H50/20; G16H50/30; G16H50/70
Domestic Patent References:
WO2020245727A1 (2020-12-10)
WO2019173233A1 (2019-09-12)
Foreign References:
US20140025315A1 (2014-01-23)
Other References:
A. D'Amato, M. Boussard: "Forgetting methods for white box learning", International Conference on Practical Applications for Agents and Multi-Agent Systems, in Trends in Practical Applications of Scalable Multi-Agent Systems, PAAMS, 2016
T. Chen, C. Guestrin: "XGBoost: a scalable tree boosting system", Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pages 785-794
T. Puech et al.: "A fully automated periodicity detection in time series", International Workshop on Advanced Analysis and Learning on Temporal Data, 2019, pages 43-54
A. Leroux et al.: "Information gain ratio correction: Improving prediction with more balanced decision tree splits", arXiv:1801.08310, 2018
Attorney, Agent or Firm:
ICOSA (FR)
Claims:
CLAIMS

A device (10) for selecting data samples related to health management so as to proceed with at least one therapy-directed task, said device comprising:

- at least one input (11) adapted to receive available data samples (22) related to health management and associated with respective sample times, said available data samples being distributed into a sliding time window (230) including at least one new batch (21) of data samples, said available data samples being then called current samples (23), and into a past period (250) preceding said time window (230), said available data samples being then called past samples (25);

- at least one processor configured for determining in said past period, selected past samples (24) to be kept for said at least one therapy-directed task, by eliminating part (28) of said past samples in function of elimination conditions (32);

- at least one output (14) adapted to provide said selected past samples (24) as a complement to said current samples (23) for performing said at least one therapy-directed task;

characterized in that said at least one processor is configured for determining said selected past samples by:

- splitting said past samples into a first share (26) consisting of the most recent ones of said past samples and a complementary share, said complementary share comprising the past samples deprived of said first share;

- classifying the past samples into similarity clusters (53; 530) using at least one similarity criterion;

- classifying the current samples into the said similarity clusters using the said at least one similarity criterion; and

- when the number of samples in one of the similarity clusters reaches a predetermined threshold, eliminating at least one sample of the complementary share contained in the said similarity cluster so that the number of samples returns under the predetermined threshold.

2. The device (10) for selecting data samples according to claim 1, characterized in that:

- said at least one input (11) is adapted to repeatedly receive over time updated sets of said available data samples (22), derived from keeping in previous sets of said available data samples at least part of said current samples (23) and of said selected past samples (25);

- said at least one processor is configured for repeatedly determining said selected past samples (24) among said updated sets of said available data samples;

- said at least one output (14) is adapted to repeatedly provide said selected past samples of said updated sets, for dynamically proceeding with said at least one therapy-directed task over time.

3. The device (10) for selecting data samples according to claim 1 or 2, characterized in that said at least one processor is configured for eliminating in said respective clusters (53; 530) the oldest ones (281) of said past samples through taking account of said at least some of said current samples (23; 231) in a chronological order of said current samples.

4. The device (10) for selecting data samples according to any of claims 1 to 3, characterized in that said clusters (53; 530) are built from at least one decision tree model (50), said clusters corresponding to leaf nodes.

5. The device (10) for selecting data samples according to any one of claims 1 to 4, characterized in that said at least one similarity criterion (31) used for eliminating said part (28; 282) of said complementary share (27) is given by lowest prediction errors associated with said past samples (25; 252) in relation with a machine learning model for said at least one therapy-directed task, said machine learning model providing predictions based on at least the first share (26) and the complementary share (27) of said past samples.

6. The device (10) for selecting data samples according to any of the preceding claims, characterized in that said at least one processor is configured for eliminating said part (28; 281; 282) of said past samples based on said at least one similarity criterion (31) being applied to said past samples (25; 251; 252) irrespective of said current samples (23; 231; 232).

7. The device (10) for selecting data samples according to any of the preceding claims, characterized in that said at least one processor is configured for determining said first share (26) and said complementary share (29) of said past samples (25) so that said first share and said complementary share have relative proportions given by a hyper-parameter (33).

8. The device (10) for selecting data samples according to claim 7, characterized in that:

- said at least one input (11) is adapted to receive evaluations of relevance (37) of said selected past samples (24), said evaluations of relevance being respectively associated with successive time positions (2301, 2305) of said time window (230) and determined with respect to machine learning processing for said at least one therapy-directed task corresponding to next time positions of said time window; and

- said at least one processor is configured for determining said hyper-parameter (33) by selecting, among at least two candidate values (q1, q2, ..., q10; 331, ..., 335) of said hyper-parameter, a best candidate value (336) providing smallest averaged errors (38) associated with said evaluations of relevance (37) over said successive time positions (2301, 2305) of said time window (230).

9. The device (10) for selecting data samples according to any of the preceding claims, characterized in that said at least one therapy-directed task includes at least one of a medical diagnosis and a therapeutic treatment, and at least one of an individual and a collective therapy-directed task.

10. A system (1) for health management comprising a device (10) for selecting data samples according to any of claims 1 to 9, and a device (15) for providing decisional guidance (35) relevant to said at least one therapy-directed task based on said selected past samples (24) and said current samples (23) through machine learning processing (16).

11. A method for selecting data samples related to health management so as to proceed with at least one therapy-directed task, said method comprising:

- receiving available data samples (22) related to health management and associated with respective sample times, said available data samples being distributed into a sliding time window (230) including at least one new batch (21) of data samples, said available data samples being then called current samples (23), and into a past period (250) preceding said time window, said available data samples being then called past samples (25);

- determining (63) with at least one processor in said past period, selected past samples (24) to be kept for said at least one therapy-directed task, by eliminating part (28) of said past samples in function of elimination conditions (32);

- providing said selected past samples as a complement to said current samples for performing said at least one therapy-directed task;

characterized in that said method comprises determining with said at least one processor said selected past samples by:

- splitting said past samples into a first share (26) consisting of the most recent ones of said past samples and a complementary share, said complementary share comprising the past samples deprived of said first share;

- classifying the past samples into similarity subsets using at least one similarity criterion;

- classifying the current samples into the said similarity subsets using the said at least one similarity criterion; and

- when the number of samples in one of the similarity subsets reaches a predetermined threshold, eliminating at least one sample of the complementary share contained in the said similarity subset so that the number of samples returns under the predetermined threshold;

said method being advantageously executed by a device (10) for selecting data samples according to any of claims 1 to 10.

12. A method for health management comprising a method for selecting data samples according to claim 11, and a method for providing decisional guidance (35) relevant to said at least one therapy-directed task based on said selected past samples (24) and said current samples (23) through machine learning processing (66).

13. A computer program comprising software code adapted to perform at least one of a method for selecting data samples according to claim 11 and a method for health management according to claim 12, when it is executed by a processor.

Description:
DEVICES AND PROCESSES FOR DATA SAMPLE SELECTION FOR THERAPY-DIRECTED TASKS

TECHNICAL FIELD

The invention relates to the health field, and more particularly to the upstream selection of proper data samples.

BACKGROUND ART

Digital processing has been used for extracting proper parameters, variables and information over time, generally through dedicated formulae providing e.g. potentially troublesome periods in repetitive cycles of physiological phenomena (such as heart beats, breathing, or brain waves), organ evolutions through imaging (such as representing cancerous or fibroma growth), dynamic health statistics on populations (such as pertaining to outbreak progression or myopia expansion), or interactive iterative follow-up recommendations to healthcare personnel or patients (such as relevant to motor skills, hearing ability or memory disorders). Machine Learning (ML) implementations have proven able, in a number of cases, to outperform the results obtained so far by taking into account available time series of data, thereby efficiently extracting complex or hidden information valuable for medical tasks. Indeed, health issues or enhancements may not be identified only with respect to static landmarks such as fixed thresholds or preset reaction types, but may depend on past events, behaviors or evolutions. Also, relevant dynamic information may itself be strongly affected by such past history. Therefore, a fine-tuned consideration of past data may make health predictions, recommendations, guidelines, warnings, diagnoses and treatments more relevant, efficient and/or reliable.

Theoretically, gathering over time a growing amount of data pertaining to a given health situation should yield a progressively enhanced and refined related ML model, providing enhanced outputs. This however proves wrong in a number of cases, in particular where relatively slow behavioral evolutions take place while rare events need to be tracked as potential incident hints. An underlying reason lies in computing resource limitations (notably processing time and memory), which constrain learning capabilities and force the introduction of significant approximations into the exploited models. Preserving the relevance of the obtained results in a dynamically evolving frame while sparing the computing resources thus appears as a determining technical issue.

A particular aspect of ML constraints amounts to striking an appropriate tradeoff between remembering and forgetting. Indeed, due to the restrictive bounds on computing resources and to the resulting approximation models, remembering too much past information, i.e. usually too old information, may disturb the efficacy of the system. This could e.g. take the form of outdated data interfering with presently relevant data, or of behavioral interpretation distorted by past model adjustments. On the other hand, while forgetting techniques exist for disposing of old data, they may naturally be prejudicial in a number of situations, in particular where rare events need consideration. This difficulty is well known to the person skilled in the art as the sensitivity-stability dilemma or stability-plasticity dilemma.

Notably, a strategy known as time forgetting consists in sliding a time window applied to an incoming datastream and deleting the oldest samples as new samples are added. This may prove appropriate for relatively stable behaviors between successive time window periods, provided the time windows are not too large, and explicitly or implicitly concerns a number of health-related solutions involving time series. Otherwise, either important information may be lost because of too small time windows, or adaptation may be slow (i.e. too computationally demanding) because of too large time windows.
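For illustration, a minimal Python sketch of such time forgetting (a hedged example, not taken from any cited implementation):

```python
from collections import deque

def time_forgetting(stream, window_size):
    """Keep only the window_size most recent samples of a datastream."""
    window = deque(maxlen=window_size)  # bounded buffer: oldest samples evicted
    for sample in stream:
        window.append(sample)           # appending when full deletes the oldest sample
    return list(window)

# Example: with a window of 3, samples 0 and 1 are forgotten.
print(time_forgetting(range(5), 3))     # [2, 3, 4]
```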

Generally, models involving time forgetting rely on a relative stability of health-management-related data, and the computations must be based on a relevant time period.

A downside is that data drifts (e.g. due to general health enhancement or deterioration, progressive environment changes or evolving personal situation) may be missed or belatedly dealt with, preventing a prompt adaptation to a modified situation. An example of a dedicated approach adapted to deal with both concept drift and anomaly detection is the Seasonal Autoregressive Integrated Moving Average (SARIMA) model. A drawback of this model is that tuning the associated parameters makes it too specific and may not leave enough flexibility to learn new behaviors that can occur after a first training time.

Other widely used models in temporal health-related problems are neural networks, notably Recurrent Neural Networks (RNNs), in which connections between neurons form a directed graph along a temporal sequence, which provides a temporal dynamic behavior. This ensures that concept drifts are taken into account. However, a traditional RNN is exposed to a gradient vanishing issue while backpropagating errors through the network, which makes it unsuited to learning long-term dependencies. A consequence is that it usually misbehaves on anomaly detection.

An RNN with a gated state or gated memory (in which the internal state is under direct control of the neural network) makes it possible to overcome that difficulty, by storing long-term dependencies. It can in particular take the form of a Long Short-Term Memory (LSTM), comprising a cell and three gates regulating the time flow of information into and out of the cell, or of a Gated Recurrent Unit (GRU), comprising a cell and two such gates.

An important drawback of LSTM and GRU models is that they require a large amount of training data, which poses computational resource issues. In addition, though LSTM or GRU may efficiently keep track of past events, this usually remains restricted in practice to a limited number of time steps, typically a few units, which may prove insufficient for reflecting potentially important longer-term data.

A different approach, particularly suited to continuous learning adaptation, is disclosed in the article by A. D'Amato and M. Boussard, "Forgetting methods for white box learning", International Conference on Practical Applications for Agents and Multi-Agent Systems, in Trends in Practical Applications of Scalable Multi-Agent Systems, PAAMS 2016. It is described in § 3.3 as "Leaf Forgetting" and involves a Decision Tree. As explained therein, a training set is partitioned in accordance with the leaves of the tree. When a new sample joins the training set, it is checked in which leaf of the previous tree the sample arrives, and all the data stored in that leaf have their respective weights increased. When a weight becomes higher than a preset maximum value, the corresponding sample is deleted from the training set. Such models may be particularly suited to the processing of data streams, because increasing weights in the various leaves may provide a balance over time between learned behaviors. This may further ensure a properly mitigated forgetting, because the oldest samples corresponding to a common situation, i.e. falling in leaves that have many samples, are forgotten, while rare samples are kept over several learning periods. However, such models may lead to keeping very old situation representations that are no longer relevant, thereby prejudicing the most efficient consideration of concept drifts.
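By way of illustration, a minimal Python sketch of this leaf forgetting scheme (a hedged rendering; `leaf_of` stands for the leaf assignment of the previously trained tree and is an assumption, not code from the cited article):

```python
def leaf_forgetting(training_set, new_x, leaf_of, max_weight=10):
    """Sketch of 'Leaf Forgetting': training_set is a list of dicts
    {'x': features, 'weight': int}; leaf_of maps features to a leaf id."""
    new_leaf = leaf_of(new_x)
    kept = []
    for sample in training_set:
        if leaf_of(sample['x']) == new_leaf:
            sample['weight'] += 1           # the leaf receives a new sample
        if sample['weight'] <= max_weight:  # forget samples in over-weighted leaves
            kept.append(sample)
    kept.append({'x': new_x, 'weight': 0})  # the new sample joins the training set
    return kept
```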

The need for potentially efficient solutions able to deal with health management in a dynamic and flexible way, while facing situation drifts as well as anomalies promptly, thus remains quite significant, and such solutions are highly attractive in view of the personal and public health stakes. This stands true for both individual and collective therapy-directed tasks, and ranges from individual medical diagnosis and therapeutic treatment to the development of new drugs and the application of public health measures.

More precisely, ML implementations that avoid the high, often prohibitive, computing costs (in terms of processing time and memory) incurred in the absence of a satisfying sensitivity-stability tradeoff are highly desirable.

SUMMARY

A purpose of the present disclosure is to provide a flexible tool for health management, which may be potentially efficient, notably when dealing with datastreams. The present disclosure is also directed to judiciously selecting items among available data samples, which may then be exploited for health management, notably via machine learning operations. In this respect, the disclosure sets forth solutions that may provide a flexible and efficient tradeoff between remembering and forgetting, thereby making it possible to avoid excessive or prohibitive computation costs without sacrificing the relevance of the selected data samples to effective and reliable health management.

The present disclosure may further apply to various fields of health management, including notably medical diagnosis, therapeutic treatment, drug dosage adaptation, personalized medical programs, physiotherapy rehabilitation, new drug development, and public health measures. It may notably be relevant to all health applications of the above-cited prior art.

Preliminary definitions

In the present disclosure, the term “health management” broadly designates dealing with or controlling operations pertaining to preserving, enhancing or restoring health conditions, whether personal or collective, which may be directed to medical diagnosis or evaluation, medicinal, physiological or rehabilitation treatments, or development of proper medical therapies, including drugs.

“Data samples related to health management” refers to any kind of information items potentially relevant to health management, and may include valued, nominal or ordinal data, while a single data sample may be unidimensional or multi-dimensional. Such data samples may be individual or collective and may e.g. include any combination of physiological signals, user profiles, behavioral parameters, symptoms, medical results, drug features, or health statistics of groups.

A “therapy-directed task” means a piece of work to be done for the sake of preserving, enhancing or restoring a health status, whether individually or collectively. Such a task may concern a therapeutic treatment as well as drug development or medical diagnosis.

By “decisional guidance relevant to a therapy-directed task”, it is meant information, instructions or analysis results leading to some actions regarding health management. This decisional guidance may be automatically executed in machine processes or by actuators. It may provide clear and complete instructions or indications to one or more patients, physicians, healthcare personnel or pharmaceutical laboratories as to how to proceed. It may give decisional information pertaining e.g. to a disorder or a disease, so that patients, physicians, healthcare personnel or pharmaceutical laboratories can properly react.

It may provide instructions or indications pertaining to parts of complex systems, so that proper decisions should be taken on complementary grounds, whether automatically or by appropriate operators, taking account of other factors pertaining to those complex systems.

The terms “adapted” and “configured” are used in the present disclosure as broadly encompassing initial configuration, later adaptation or complementation of the present device, or any combination thereof, whether effected through hardware or software means (including firmware).

The term “processor” should not be construed to be restricted to hardware capable of executing software, and refers in a general way to a processing device, which can for example include a computer, a microprocessor, an integrated circuit, or a programmable logic device (PLD). The processor may also encompass one or more Graphics Processing Units (GPUs), whether exploited for computer graphics and image processing or for other functions. Additionally, the instructions and/or data enabling to perform associated and/or resulting functionalities may be stored on any processor-readable medium such as, e.g., an integrated circuit, a hard disk, a CD (Compact Disc), an optical disc such as a DVD (Digital Versatile Disc), a RAM (Random-Access Memory) or a ROM (Read-Only Memory). Instructions may notably be stored in hardware, software, firmware or in any combination thereof.

A similarity consists in the identification of equivalence relationships between objects based on a similarity measure (i.e. a real-valued function that quantifies the similarity between two objects) applied to pairs of objects, two objects being equivalent when they fall in a same equivalence class or cluster. The objects may be unidimensional or multi-dimensional, and may pertain to valued data (metric processing) as well as to nominal or ordinal data (non-metric processing). The considered equivalence may be structural, automorphic or regular (as known to a person skilled in the art).

Machine learning (ML) designates, in a traditional way, computer algorithms improving automatically through experience, on the ground of training data enabling to adjust parameters of computer models through gap reductions between expected outputs extracted from the training data and evaluated outputs computed by the computer models. A white box is a subsystem whose internals can be viewed, while a black box can only be viewed in terms of its inputs and outputs or of its transfer characteristics.

A hyper-parameter presently means a parameter used to carry out an upstream control of a model construction, such as a remembering-forgetting balance in sample selection or a width of a time window, by contrast with a parameter of a model itself, which depends on specific situations. In ML applications, hyper-parameters are used to control the learning process.

Datasets are collections of data used to build an ML mathematical model, so as to make data-driven predictions or decisions. In supervised learning (i.e. inferring functions from known input-output examples in the form of labelled training data), three types of ML datasets (also designated as ML sets) are typically dedicated to three respective kinds of tasks: training, i.e. fitting the parameters; validation, i.e. tuning ML hyper-parameters; and testing, i.e. checking, independently of the training dataset exploited for building a mathematical model, that the latter model provides satisfying results.

A Decision Tree is a decision support tool using a tree-like model of decisions, having internal nodes, branches and leaf nodes (or leaves), each internal node amounting to testing an attribute, each branch representing an outcome of such testing, and each leaf representing a class label associated with a decision taken from attribute computations. Each leaf may notably be viewed as an equivalence class or cluster, thereby providing a similarity model with structural equivalence.

Gradient boosting is an ML method that iteratively combines weak ML prediction models (such as e.g. Decision Trees) into a single stronger ML prediction model, by optimizing a cost function over a function space through an iterative gradient descent (i.e. by iteratively choosing a function pointing in the negative gradient direction).

Reservoir sampling is a randomized algorithm for maintaining a reservoir having a preset size from a population of items of unknown size revealed over time, in a single pass over the items: the reservoir is initialized with the first items of the population, and each subsequent item is then randomly either discarded or inserted at a random replacement position in the reservoir.
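As an illustration of the above definition, a minimal sketch of classic single-pass reservoir sampling (hypothetical helper name):

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Uniform random sample of size k over a stream of unknown length, in one pass."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)   # fill the reservoir with the first k items
        else:
            j = rng.randint(0, i)    # item i is kept with probability k / (i + 1)
            if j < k:
                reservoir[j] = item  # it replaces a random reservoir position
    return reservoir

print(reservoir_sample(range(1000), 5))  # five uniformly chosen items
```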

A neural network or artificial neural network (ANN) designates a category of ML comprising nodes (called neurons), and connections between neurons modeled by weights. For each neuron, an output is given in function of an input or a set of inputs by an activation function. Neurons are generally organized into multiple layers, so that neurons of one layer connect only to neurons of the immediately preceding and immediately following layers.

An ensemble model is an ML model that uses multiple constituent ML algorithms to obtain better predictive performance than the constituent ML algorithms taken alone.

A random forest is a particular ensemble model relying on constructing a multitude of decision trees at training time, and on outputting results based on the joint consideration of the individual trees, i.e. the mode of their classes in classification, and the mean prediction in regression.

The above ML definitions are compliant with their usual meaning, and can be completed with numerous associated features and properties, and definitions of related numerical objects, well known to a person skilled in the ML field.

Additional terms will be defined, specified or commented wherever useful throughout the following description.

Objects of the disclosure

An object of the present disclosure is notably a device for selecting data samples related to health management so as to proceed with at least one therapy-directed task. The device may comprise:

- at least one input adapted to receive available data samples related to health management and associated with respective sample times, those available data samples being distributed into a sliding time window including at least one new batch of data samples, the available data samples being then called current samples, and into a past period preceding that time window, the available data samples being then called past samples;

- at least one processor configured for determining, in the past period, selected past samples to be kept for the therapy-directed task(s), by eliminating part of the past samples in function of elimination conditions;

- at least one output adapted to provide the selected past samples as a complement to the current samples for performing the therapy-directed task(s).

According to the disclosure, the processor(s) may be configured for determining the selected past samples by:

- splitting said past samples into a first share consisting of the most recent ones of said past samples and a complementary share, said complementary share comprising the past samples deprived of said first share;

- classifying the past samples into similarity clusters using at least one similarity criterion;

- classifying the current samples into the said similarity clusters using the said at least one similarity criterion; and

- when the number of samples in one of the similarity clusters reaches a predetermined threshold, eliminating at least one sample of the complementary share contained in the said similarity cluster so that the number of samples returns under the predetermined threshold.
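A minimal sketch of this selection logic, under assumed representations (samples as (time, features) pairs sorted oldest-first, a hypothetical `cluster_of` callable embodying the similarity criterion, and a first share of at least one sample):

```python
from collections import defaultdict

def select_past_samples(past, current, cluster_of, first_share_size, threshold):
    """past and current: lists of (time, features), past sorted oldest-first.
    Returns the selected past samples; the first share is kept whole."""
    first_share = past[-first_share_size:]    # most recent past samples, immune
    complementary = past[:-first_share_size]  # candidates for elimination

    counts = defaultdict(int)
    for _, x in past + current:               # classify every sample into a cluster
        counts[cluster_of(x)] += 1

    kept = []
    for t, x in complementary:                # oldest first: old samples go first
        c = cluster_of(x)
        if counts[c] >= threshold:
            counts[c] -= 1                    # eliminate: count returns under threshold
        else:
            kept.append((t, x))
    return kept + first_share
```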

The data samples related to health management may be repeatedly received, including possibly via streaming. They may be derived from automatic measurements, e.g. from meters, sensors or monitoring units.

The received available data samples may be derived from a preselection among a broader set of received data samples, e.g. through random sampling executed on the fly on incoming samples. This may notably prevent an excessive amount of entered data while ensuring the representativeness of the gathered available data samples within the sliding time window. For example, a reservoir sampling strategy may be exploited. In this way, all incoming samples have the same probability of being in the reservoir, the updated replacement probability making it possible to proceed on the fly.

The sliding time window may have a constant length, e.g. corresponding to a cyclic behavior such as a day, a week, a month or a year. It may be set by a user. The samples are said to be “distributed” into the sliding time window and the past period for the sake of conciseness, meaning that they are associated with respective times located in the relevant time periods.

The targeted downstream therapy-directed task(s) may be automatized, e.g. by machine-controlled medication adjustment, dynamic diagnosis establishment, rehabilitation step determination or drug dosage setting. It/They may be repeated over time, e.g. iteratively, based on progressively incoming information.

It is worth noting that the first share of the past samples is used together with the complementary share of the past samples in applying the similarity criterion/criteria on which the elimination conditions depend, while that first share is immune from deletion, contrary to the complementary share. As to the current samples, at least some of them are decisive in the deletion of some of the past samples, through the elimination conditions. They may be kept together with the selected past samples for the therapy-directed task(s).

The device for selecting data samples may be coupled to a device for providing health management information, instructions or guidance (including in the form of a joint device grouping related functionalities), which may implement machine learning. Both devices may rely on the same or like similarity criteria.

The device for selecting data samples as disclosed may be particularly efficient for avoiding excessive computation costs while potentially enabling reliable health management. This is due to its dual nature, combining the exploitation of the first share and of the second share.

More precisely, keeping the first share of the past samples amounts to some kind of time forgetting (or random forgetting), in which deleted past samples are associated with times lying before the junction of the time window and of a preceding time interval. In addition, keeping the second share of the past samples amounts to some kind of generalization of leaf forgetting, presently called “similarity forgetting”, in which clusters may be obtained from other models than decision trees, or similarity criteria may rely on similarity measures without involving clusters, while part of the past samples (i.e. the first share) is immune from deletion.

This combination of time forgetting and similarity forgetting offers a potentially powerful tool for efficiently facing the sensitivity-stability dilemma, able to quickly follow concept drifts while keeping track of older behaviors helpful in e.g. detecting anomalies. In particular, similarity forgetting (the second share) may make it possible to keep track of rare behaviors over long time spans, thanks to the exploitation of similarity criteria in sample deletion. For example, samples falling in common clusters may be more prone to being replaced by new ones, because more new samples fall in those clusters. This may give the possibility to retain behaviors on a long-term scale while quickly updating common behaviors and following global concept drifts. On the other hand, time forgetting (the first share) may make it possible to discard excessively old behaviors while preserving the most recent past information, the deletion actions being focused on the past samples preceding the first share.

A remarkable property of the present device for selecting data samples is that, in some embodiments, it may enable a controlled balance between time forgetting and similarity forgetting, which may be struck as a function of the targeted kind of therapy-directed tasks. This may be reflected in the respective sizes (i.e. the numbers of past samples) of the first share and of the second share. For example, in monitoring health constants to follow a patient's state, an appropriate tuning suited to remembering odd events (e.g. arrhythmia, blood sugar peaks) that occurred during the last weeks may emphasize similarity forgetting rather than time forgetting, so as to keep abnormal behavior in memory, while setting a time window large enough to take a complete week into account. By contrast, in case of a datastream subject to important concept drift, e.g. due to a change of medical treatment, a proper tuning may on the contrary substantially emphasize time forgetting rather than similarity forgetting, so as to forget old behavior as fast as possible.

The balance between time forgetting and similarity forgetting may be modified over time, possibly across successive positions of the sliding time window. The device for selecting data samples may further make possible a basically white box solution, in particular when using decision trees, potentially providing fully and readily explainable results, by contrast with ANN solutions such as e.g. LSTM or GRU. This may thus substantially facilitate proper decisions pertaining to therapy-directed tasks, completing and going beyond reactive adaptations to current situations or one-off anomaly treatments.

The oldest past samples eliminated in the respective clusters are then considered at cluster level, rather than at global level. Namely, some past samples may be deleted while older ones may be kept, insofar as the latter belong to other clusters than the former.

In particular modes focused on dynamic processing:

- the input(s) is/are adapted to repeatedly receive over time updated sets of the available data samples, derived from keeping in previous sets of the available data samples at least part of the current samples and of the selected past samples;

- the processor(s) is/are configured for repeatedly determining the selected past samples among the updated sets of the available data samples;

- the output(s) is/are adapted to repeatedly provide the selected past samples of the updated sets, for dynamically proceeding with the therapy-directed task(s) over time.

In datastream processing, for a given lastly updated set of the available data samples, not only the past samples but also most of the current samples in the sliding time window may have already been considered in the previous set of the available data samples. Then, the new batch of data samples contributes to modifications of the available data samples by adding new samples into the sliding time window, while the oldest samples in the previous time position of the time window are switched to past samples of the past period.

In particular modes based on clusters, the processor(s) is/are configured for eliminating in the respective clusters the oldest ones of the past samples through taking account of the at least some of the current samples in a chronological order of the current samples.

The time arrival of the current samples is then decisive for the identification and elimination of the oldest past samples in the respective clusters, i.e. for properly focusing on the appropriate clusters and on their respective oldest past samples. In this way, the current samples may drive away the concerned past samples as and when considered, which may be particularly suited notably to a streaming system.

In some modes, the elimination conditions for at least one of the clusters include a cumulated amount of the available data samples in that cluster being above a preset threshold.

This may take the form of weights assigned to the samples in the cluster.

In attractive implementations, the clusters are built from at least one decision tree model, the clusters corresponding to leaf nodes. For example, one or more decision tree models may be produced, and the similarity criteria may correspond to belonging to the same leaves of the built trees. The same may apply to tree-based models such as notably random forests, or the gradient boosting framework known as XGBoost (an open-source software library) and described e.g. by T. Chen and C. Guestrin in "XGBoost: a scalable tree boosting system", Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785-794, 2016.
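For illustration, a hedged sketch of leaf-based clusters using scikit-learn (one possible tooling among others; the synthetic data and parameter values are assumptions):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X_past = rng.normal(size=(200, 4))                      # past samples (synthetic features)
y_past = X_past[:, 0] + rng.normal(scale=0.1, size=200)

tree = DecisionTreeRegressor(max_leaf_nodes=16, random_state=0).fit(X_past, y_past)
past_clusters = tree.apply(X_past)                      # leaf index = similarity cluster

X_current = rng.normal(size=(10, 4))                    # current samples
current_clusters = tree.apply(X_current)                # classified into the same clusters
```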

In other advantageous modes, the similarity criterion/criteria used for eliminating the part of the complementary share is given by lowest prediction errors associated with the past samples in relation with a machine learning model for the therapy-directed task(s), that machine learning model providing predictions based on at least the first share and the complementary share of the past samples.

With such models, the construction of clusters may no longer be required for past sample elimination, since the mere comparison of prediction errors among the past samples of the complementary share may be enough for determining the past samples to be deleted. However, the potential white box advantages of decision trees may be lost.
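A minimal sketch of this prediction-error variant (assumed NumPy representation; `model` stands for any fitted regressor exposing a `predict` method):

```python
import numpy as np

def eliminate_lowest_error(X_comp, y_comp, model, n_eliminate):
    """Drop the complementary-share samples with the lowest prediction errors,
    i.e. those the model already captures and which are thus most redundant."""
    errors = np.abs(model.predict(X_comp) - y_comp)   # per-sample prediction error
    keep = np.sort(np.argsort(errors)[n_eliminate:])  # remove the n lowest-error samples
    return X_comp[keep], y_comp[keep]
```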

In some modes, the processor(s) is/are configured for eliminating the part of the past samples based on the similarity criterion/criteria being applied to the past samples irrespective of the current samples.

Thus, in the implementations with clusters, the current samples may be ignored in the formation of the clusters, while at least some of them are determining in the deletion of part of the past samples, through the elimination conditions applied to the clusters.

In the implementations with prediction errors, the current samples may be ignored in the predictions of the ML model providing the prediction errors, while at least some of those current samples are determining in the deletion of part of the past samples, through e.g. proceeding with replacement operations based on the total number of the current samples.

In embodiments, the processor(s) is/are configured for determining the first share and the complementary share of the past samples so that the first share and the complementary share have relative proportions given by a hyper-parameter.

According to an automatic determination of that hyper-parameter:

- the input(s) is/are adapted to receive evaluations of relevance of the selected past samples, those evaluations of relevance being respectively associated with successive time positions of the time window and determined with respect to machine learning processing for the therapy-directed task(s) corresponding to next time positions of the time window; and

- the processor(s) is/are configured for determining the hyper-parameter by selecting among at least two candidate values of the hyper-parameter a best candidate value providing smallest averaged errors associated with the evaluations of relevance over the successive time positions of the time window.

This hyper-parameter determination may thus be provided by a rolling validation, appropriate to time-series data, ensuring that the evaluation is based on not-yet-exploited data rather than on already seen data. A model may notably be built for every time position of the time window and evaluated with the data of the next time position of the time window, the errors obtained on each validation fold being averaged over all the considered successive positions of the time window. This strategy may be able to detect and take account of concept drifts in the data, which often correspond to an important drop in the predictive performance of a model.
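A hedged sketch of such a rolling validation (the `fit_and_score` callback is an assumption standing for: select samples with candidate ratio q at window position t, train, and return the error measured on position t+1):

```python
import numpy as np

def rolling_validation(window_positions, candidates, fit_and_score):
    """window_positions: chronological list of data blocks. Returns the candidate
    hyper-parameter value with the smallest error averaged over all folds."""
    averaged_errors = {}
    for q in candidates:
        fold_errors = [fit_and_score(window_positions[t], window_positions[t + 1], q)
                       for t in range(len(window_positions) - 1)]
        averaged_errors[q] = float(np.mean(fold_errors))
    return min(averaged_errors, key=averaged_errors.get)
```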

The number of tested candidate values may e.g. lie between 5 and 12 (inclusive), and more precisely between 8 and 10 (inclusive).

The width of the sliding time window may be automatically adjusted likewise.

Advantageously, the therapy-directed task(s) include(s) a medical diagnosis and/or a therapeutic treatment, and an individual and/or a collective therapy-directed task.

In some modes, the similarity criterion/criteria includes at least one criterion used in machine learning processing for obtaining decisional guidance relevant to the therapy-directed task(s). This may establish a strong connection between the selection of the kept data samples and the downstream processing of those data samples, thereby potentially providing particularly efficient synergies.

In particular, the at least one similarity criterion may be fully applied at the data sample selection stage and then exploited in obtaining the decisional guidance.

The disclosure also relates to a system for health management. That system comprises a device for selecting data samples according to the present disclosure, and a device for providing decisional guidance relevant to the therapy-directed task(s) based on the selected past samples and the current samples through machine learning processing.

Another object of the disclosure is a method for selecting data samples related to health management so as to proceed with at least one therapy-directed task. The method may comprise:

- receiving available data samples related to health management and associated with respective sample times, those available data samples being distributed into a sliding time window including at least one new batch of data samples, the available data samples being then called current samples, and into a past period preceding the time window, the data samples being then called past samples;

- determining with at least one processor in that past period, selected past samples to be kept for the therapy-directed task(s), by eliminating part of the past samples in function of elimination conditions;

- providing the selected past samples as a complement to the current samples for the therapy-directed task(s).

According to the disclosure, the method may comprise determining with the processor(s) the selected past samples by:

- splitting said past samples into a first share consisting of the most recent ones of said past samples and a complementary share, said complementary share comprising the past samples deprived of said first share;

- classifying the past samples into similarity clusters using at least one similarity criterion;

- classifying the current samples into the said similarity clusters using the said at least one similarity criterion; and

- when the number of samples in one of the similarity clusters reaches a predetermined threshold, eliminating at least one sample of the complementary share contained in the said similarity cluster so that the number of samples returns under the predetermined threshold.

The method for selecting data samples is advantageously executed by a device for selecting data samples according to any of the embodiments of the disclosure.

A further object of the disclosure is a method for health management comprising a method for selecting data samples according to the disclosure, and a method for providing decisional guidance relevant to the therapy-directed task(s) based on the selected past samples and the current samples through machine learning processing. In addition, the disclosure relates to a computer program comprising software code adapted to perform a method for selecting data samples and/or a method for health management according to the disclosure, when it is executed by a processor.

The present disclosure further pertains to a non-transitory program storage device, readable by a computer, tangibly embodying a program of instructions executable by the computer to perform a method for selecting data samples and/or a method for health management, compliant with the present disclosure.

Such a non-transitory program storage device can be, without limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor device, or any suitable combination of the foregoing. It is to be appreciated that the following, while providing more specific examples, is merely an illustrative and not exhaustive listing as readily appreciated by one of ordinary skill in the art: a portable computer diskette, a hard disk, a ROM, an EPROM (Erasable Programmable ROM) or a Flash memory, a portable CD-ROM (Compact-Disc ROM).

LIST OF FIGURES

The present disclosure will be better understood, and other specific features and advantages will emerge upon reading the following description of particular and non-restrictive illustrative embodiments, the description making reference to the annexed drawings wherein:

Figure 1 is a block diagram representing schematically a particular mode of a system for health management including a device for selecting data samples, compliant with the present disclosure;

Figure 2 represents a timeline with successive positions of a sliding time window, as exploited in the device for selecting data samples of figure 1;

Figure 3 focuses on a particular part of the timeline of figure 2, pertaining to data samples exploited in the device for selecting data samples of figure 1 while preceding the sliding time window (past samples);

Figure 4 focuses on the processing of the past samples by the device for selecting data samples of figure 1;

Figure 5 schematically illustrates a decision tree relied on in a first processing mode of the past samples of figure 4;

Figure 6 shows the mechanism exploited with the decision tree of figure 5 in one of its leaves;

Figure 7 schematically illustrates a distribution of data samples in function of machine learning prediction errors, as relied on in a second processing mode of the past samples of figure 4;

Figure 8 represents the exploitation of a similarity ratio in tuning a balance of a first share and a second share of the past samples in the processing of figure 4;

Figure 9 illustrates a model for assessing the relevance of an instance of the similarity ratio of figure 8, in relation with machine learning processing;

Figure 10 shows the exploitation of the model of figure 9 in determining a proper value of the similarity ratio of figure 8;

Figure 11A is a flow chart showing successive steps executed with the system for health management of figure 1;

Figure 11B details steps of data sample selection in the flow chart of figure 11A;

Figure 12 is a flow chart showing successive steps executed with the device for selecting data samples of figure 1 in determining a proper value of the similarity ratio dealt with in figures 8 to 10;

Figures 13A, 13B, 13C and 13D schematically illustrate various applications of the system for health management of figure 1 in medical treatment, diagnosis, drug development and rehabilitation situations;

Figure 14 diagrammatically shows an apparatus integrating the functions of the system for health management of figure 1.

On the figures, the drawings are not to scale, and identical or similar elements are designated by the same references.

ILLUSTRATIVE EMBODIMENTS

The present description illustrates the principles of the present disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its spirit and scope.

All examples and conditional language recited herein are intended for educational purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure. Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein may represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, a single shared processor, or a plurality of individual processors, some of which may be shared.

It should be understood that the elements shown in the figures may be implemented in various forms of hardware, software or combinations thereof. Preferably, these elements are implemented in a combination of hardware and software on one or more appropriately programmed general-purpose devices, which may include a processor, memory and input/output interfaces.

The present disclosure will be described in reference to a particular functional embodiment of a system 1 for health management, as illustrated on Figure 1, and comprising an upstream device 10 for selecting data samples and a downstream device 15 for providing decisional guidance 35 relevant to one or more therapy-directed tasks, that device 15 being fed by selected data samples provided by the device 10.

It is observed that the system 1 for health management is not intended to supplant the expertise and skills of medical or caregiving staff, or to dilute their responsibilities, but can support their short-term, mid-term or long-term actions.

Though the presently described devices 10 and 15 are versatile and provided with several functions that can be carried out alternatively or in any cumulative way, other implementations within the scope of the present disclosure include devices having only part of the present functionalities. Each of the devices 10 and 15 is advantageously an apparatus, or a physical part of an apparatus, designed, configured and/or adapted for performing the mentioned functions and producing the mentioned effects or results. In alternative implementations, either of the device 10 and the device 15 is embodied as a set of apparatus or physical parts of apparatus, whether grouped in a same machine or in different, possibly remote, machines. The device 10 and/or the device 15 may e.g. have functions distributed over a cloud infrastructure and be available to users as a cloud-based service, or have remote functions accessible through an API.

The device 10 for selecting data samples and the device 15 for providing decisional guidance may be integrated in a same apparatus or set of apparatus corresponding to the system 1 for health management, and intended for the same users. In other implementations, the structure of the device 10 may be completely independent of the structure of the device 15, and the device 10 may be provided for other users. For example, the device 10 may be exploited by a dedicated operator proposing proper upstream online selection of data samples to health management entities provided with ML capabilities embodied in the device 15, as subscription services (a.k.a. SaaS, for Software as a Service).

The system 1 for health management may have various kinds of applications, and be e.g. directed to a patient, a physician, a nurse, a therapist, a caregiver, hospital or clinic staff, allied health professionals, a pharmaceutical laboratory, a medical equipment manufacturer, a medical research team, epidemiologists, a rehabilitation center, health authorities.

The system 1 may be suited to one or more target goals integrated in the internal processes of the system 1. Those goals may comprise any of e.g. individual therapeutic treatment, rehabilitation program, drug dosage adjustment or medical diagnosis, collective health provisions such as medical treatment, vaccination or health protocol elaboration, drug development.

The system 1 may rely on ML operations, and automatically produce, set or adjust ML models, based on the processed data samples and the target goals, as known to a skilled person. In this respect, the system 1 may implement reinforcement learning (in which interactions with a dynamic environment for performing a goal are exploited for maximizing feedbacks amounting to rewards), associated with any of the above-mentioned goals. It may instead implement unsupervised learning (in which algorithms find appropriate structures in raw inputs by themselves, i.e. without a learning stage based on labelled inputs), possibly in the form of feature learning (in which representations needed for feature detection or classification are automatically discovered), notably in the context of anomaly identification. Alternatively or in combination, the system 1 may exploit supervised learning, for example for initializing appropriate mappings between input information and targeted goals.

In ML models based on supervised learning, each point (i.e. each data item) contains one or more attributes (i.e. features such as cardiac frequency, age, weight, blood pressure, and the like, or coordinates) and is associated with one label (or class). The label is the value to be predicted by the ML model based on the values of the attributes.

In the context of the invention, a similarity score may be defined to evaluate the similarity of a new data item to one or more data items previously received. The objective is to determine the similar points called into question by this new data item (i.e., the points previously considered as relevant for the model, but determined as not relevant following the reception of the new data item). Also, the actual label of the new data item may be ignored. There are several possibilities to define such a similarity score, as detailed below.

In one or several embodiments, the similarity score may be defined based on attributes. For example, the similarity measure may be a function of a mathematical distance, for example the Euclidean distance, and the similarity between two points may be evaluated based on the distance between the values of the attributes of these points. The smaller the distance, the more the points are considered similar, and the greater the similarity score (for example, the similarity score can be a real value between 0 and 1, a score equal to 1 corresponding to a distance equal to 0 and a score equal to 0 corresponding to an infinite distance). According to these embodiments, the point, among the previous points received, that minimizes the distance to the new point is considered to be the most similar, i.e., the one associated with the highest similarity score. A disadvantage of this approach is that it does not take into account the potential nonlinearities of the phenomenon that these attributes represent. As a consequence, two points corresponding to strongly different data may have a very small distance, and therefore a similarity score close to 1.
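One possible distance-to-score mapping matching the convention above (the 1/(1+d) formula is an assumption; the text fixes only the endpoints):

```python
import numpy as np

def similarity_score(a, b):
    """Score in [0, 1]: equals 1 for a distance of 0, tends to 0 as distance grows."""
    distance = np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
    return 1.0 / (1.0 + distance)

print(similarity_score([1.0, 2.0], [1.0, 2.0]))  # 1.0
print(similarity_score([0.0, 0.0], [3.0, 4.0]))  # 1 / (1 + 5) ≈ 0.167
```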

In other embodiments, the similarity score may be defined based on the predictions of the labels. For example, an ML model (e.g., a gradient boosting model or a neural network model) may be trained on a set of training data to predict labels. A new vector, called an "enhanced vector", may be constructed for a point from its attributes and the label predicted by the model for that point. Similarity between two points may then be evaluated by calculating a distance (e.g. the Euclidean distance) between their two enhanced vectors: the smaller the distance, the more similar the points are considered, and the greater the similarity score (for example, a real value between 0 and 1, a score equal to 1 corresponding to a distance equal to 0 and a score equal to 0 corresponding to an infinite distance).
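The following sketch illustrates one way such enhanced vectors could be constructed with a gradient boosting model; the synthetic data, the model choice and the 1/(1 + d) score mapping are illustrative assumptions only:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def enhanced_vector(model, point):
    """Append the label predicted by the trained model to the attribute
    vector, yielding the 'enhanced vector' for that point."""
    point = np.asarray(point, dtype=float)
    label = model.predict(point.reshape(1, -1))[0]
    return np.concatenate([point, [label]])

# Hypothetical training data: attributes X_train and labels y_train.
X_train = np.random.rand(200, 4)
y_train = X_train @ np.array([1.0, -2.0, 0.5, 3.0])
model = GradientBoostingRegressor().fit(X_train, y_train)

# Distance between two enhanced vectors; smaller distance = more similar.
new_point = np.random.rand(4)
d = np.linalg.norm(enhanced_vector(model, X_train[0])
                   - enhanced_vector(model, new_point))
similarity = 1.0 / (1.0 + d)
```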

In preferred embodiments, decision tree learning techniques may be used. A decision tree is typically built by recursively splitting the set of points so as to obtain data subsets as homogeneous as possible. The splitting of the set of points may be performed based on different metrics that measure the homogeneity between a point and the different subsets. Metrics based on an estimate of positive correctness, Gini impurity, variance reduction or a measure of goodness may be used. Each leaf of the decision tree may correspond to a set of points considered similar by the algorithm. It is thus possible to exploit this notion of similarity as a measure for the selection of points: all the points in the same leaf of the tree are considered similar. A decision tree model may therefore be learned on past data, and for a new point, the past points considered as similar to this new point are those which belong to the same leaf as the new point. These embodiments make it possible to effectively capture the non-linearities in the attributes and to best select the points to be deleted.
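A minimal sketch of leaf co-membership as a similarity criterion, assuming scikit-learn and its apply() method (which returns the leaf index of each sample), could read as follows; the hyper-parameter value and the random data are purely illustrative:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical past data: attributes X_past and labels y_past.
X_past = np.random.rand(500, 4)
y_past = np.random.rand(500)

tree = DecisionTreeRegressor(min_samples_leaf=20).fit(X_past, y_past)

def similar_past_points(tree, X_past, new_point):
    """All past points falling in the same leaf as the new point are
    considered similar (leaf co-membership as the similarity criterion)."""
    past_leaves = tree.apply(X_past)             # leaf index of each past point
    new_leaf = tree.apply(new_point.reshape(1, -1))[0]
    return np.where(past_leaves == new_leaf)[0]  # indices of similar points
```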

A similarity criterion may be defined based on the similarity score or based on the similarity measure. For example, two points may be considered as similar if the similarity score is greater than a predefined threshold (i.e. the similarity criterion corresponds to a similarity score greater than the predefined threshold). In the case of decision trees, the similarity criterion may be the belonging of a point to a leaf: two points of a same leaf are considered to be similar.

The device 10 for selecting data samples may be exploited in a learning stage of the device 15 for providing decisional guidance as well as in an inference stage once the device 15 has produced a trained model. Instead of ML operations, and though usually less flexible and versatile than the latter, the system 1 may rely on one or more expert systems, fed with the data samples selected by the device 10.

The system 1 may be fully automated, the decisional guidance 35 being directly exploited for controlling, directing or adjusting proper actuators, or being transferred to one or more processors for intermediate operations before effective health management circuit control.

Alternatively, the system 1 may provide the decisional guidance 35 as user-directed information, so as to drive users to undertake appropriate actions or to prompt proper reactions. For example, the system 1 may alert on risks linked to automatically detected anomalies, or to potentially damaging or dangerous situations. In other examples, the system 1 may indicate behavior, equipment or medication changes required in view of one or more target goals.

In advantageous implementations, the system 1 is configured for repeated consideration over time of incoming data samples. This is all the more attractive in ML operations, in which the data samples may contribute to evolving the ML model. The system 1 may be more particularly suited to datastreams, being thereby able to regularly or continuously process new batches of incoming information.

In alternative implementations (which may be combined with the previous ones), the system 1 is adapted to deal with a single set of data samples previously obtained over time. For example, the system 1 is exploited for punctual individual medical diagnosis, for statistical analysis of the effect of a therapeutic treatment on a population, or for efficiency assessment of a new drug. In another example, the system 1 is coupled with a software engine dealing with an incoming stream of data samples and exploited for health management in a simpler way, less demanding in computing resources than the system 1 (e.g. from an expert system while the system 1 relies on ML operations, or based on simplified supervised ML while the system 1 relies on reinforcement learning), so as to complement and strengthen on demand the relevance of decisional guidance.

The data samples input to the system 1 are associated with respective sample times, e.g. via time stamps, and may comprise any kind of information pertaining to health management, including targeted goals. They may notably comprise any selection among, or combination of:

- physiological signals related to heart rate, blood pressure, ECG, pulse oximetry, respiration rate, brainwaves, temperature, muscle activity, electromyography (EMG), electrooculography (EOG), glucose level, creatinine, renal clearance;

- user profiles including age, gender, weight, geographic area, diet, mobility, followed medical treatment, phenotype, medical history, genome;

- behavioral parameters including physical activities, food or drink intake, sleeping hours, activity patterns such as working or leisure, on-screen activities, memory performances, interactions with the environment;

- symptoms including headaches, pains, sleep disorders, coughing, wheezing, stomach upset, gastric reflux, memory loss, sensory losses, dental pains;

- medical results related to blood samples, urine monitoring, radiographs;

- monitored or logged information from an internal or external environment, such as e.g. temperature, humidity, wind, atmospheric composition (such as CO2 content), number of neighboring people, available space;

- temporal information (irrespective of and complementary to the sample times) such as time of day, day of week, month or season;

- drug features including molecular composition, posology, efficiency, side effects;

- health statistics of groups including mortality rates, morbidity rates, life expectancy, healthy life expectancy.

The data samples may be provided to the system 1 via any appropriate communication network involving wired (e.g. Ethernet), wireless (e.g. WiFi, WiMax - standing for Worldwide interoperability for Microwave Access - or Bluetooth) or cellular (e.g. UMTS - standing for Universal Mobile Telecommunications System, LTE - standing for Long-Term Evolution, or 5G) transmissions, as well known to a person skilled in the field.

They may be automatically obtained e.g. by sensors, meters, detectors or monitors. Alternatively or in combination, at least part of them may be user-entered.

More information will now be given on the data samples received by the system 1 (pertaining to input data as well as output data) for selection and for the resulting decisional guidance, with reference to Figure 2 besides Figure 1. The system is adapted to a dynamic process, so that one or more batches 21 of data samples ranging over a time interval 210 complete previously obtained and kept data samples 20. For example, in regularly and possibly periodically gathering data in a given environment (e.g. every 10 minutes, half-hour or hour), each batch 21 of data may correspond to newly collected input and output data at a given time slot or at successive time slots (e.g. at 5 or 10 successive time slots). For the sake of convenience, the time interval 210 is considered to start at the end of the previous data recording and to finish with the lastly collected data sample.

In addition, a sliding time window 230 having a window width 30 is taken into account in the system 1. The window width 30 may be set by a user so as to cover a roughly recurring cycle of events and situations. For example, the width may correspond to one day, one week, one month or one year. The window width 30 may be set to cover multiple batches of data samples, so that the width 30 is larger, and possibly substantially larger, than the time interval 210. Instead of being set by the user, the window width 30 may be automatically determined by the system 1, or may have been previously determined before being exploited by the system 1. Notably, it may be derived from periodicity detection in time series. This may be achieved by Fourier transform analysis through a periodogram (giving an estimate of the spectral density of a signal) and/or by autocorrelation of the signal in the time domain, as known to a skilled person. An example of an enhanced hybrid solution combining the latter methods is presented by T. Puech et al. in "A fully automated periodicity detection in time series", International Workshop on Advanced Analysis and Learning on Temporal Data, pages 43-54, AALTD 2019. Another solution for potentially determining the window width in a dynamic way will be described below. Since the processing by the system 1 relies on the data samples of the sliding time window 230, called the current samples 23, having the time window 230 correspond to a time cycle makes possible an integration of time variations over that cycle in health management. In addition, sliding the time window 230 so as to include the time interval 210 corresponding to the latest batches 21 of data may enable time variations extending beyond the time cycle to be finely taken into account. The time window 230 may thus have an end time position t1, t2 adapted to catch the latest batches 21 of data samples.
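As a simple illustration of such a periodicity-based determination of the window width, the sketch below picks the dominant period from a periodogram (scipy); this is a simplified stand-in for the hybrid method cited above, and the hourly signal with a 24-hour cycle is invented for the example:

```python
import numpy as np
from scipy.signal import periodogram

def estimate_window_width(signal, sampling_period):
    """Estimate a dominant cycle length from the peak of the
    periodogram of the signal; the result can serve as a candidate
    window width covering one roughly recurring cycle."""
    freqs, power = periodogram(signal, fs=1.0 / sampling_period)
    peak = freqs[np.argmax(power[1:]) + 1]   # skip the zero-frequency bin
    return 1.0 / peak                        # period, in the signal's time unit

# Hypothetical hourly signal with a daily (24 h) cycle plus noise.
t = np.arange(24 * 30)                       # 30 days of hourly samples
signal = np.sin(2 * np.pi * t / 24) + 0.3 * np.random.randn(t.size)
width = estimate_window_width(signal, sampling_period=1.0)  # ~24 hours
```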

The system 1 thus relies on the data samples of the new batches 21 and on previously kept data samples 20, distributed over the part of the time window 230 preceding the time interval 210. The previously kept samples 20 however extend before the start of the time window 230, further covering past samples 25 older than that start and distributed over a time interval 250. As a whole, a processing time range 220 comprising the time window 230 and the contiguously preceding time interval 250 is taken into account by the system 1.

The system 1 interacts with a user interface 18, via which information can be entered and retrieved by a user. The user interface 18 includes any means appropriate for entering or retrieving data, information or instructions, notably visual, tactile and/or audio capacities that can encompass any or several of the following means as well known by a person skilled in the art: a screen, a keyboard, a trackball, a touchpad, a touchscreen, a loudspeaker, a voice recognition system.

The system 1 for health management will now be detailed. In what follows, the modules are to be understood as functional entities rather than material, physically distinct components. They can consequently be embodied either grouped together in a same tangible and concrete component, or distributed into several such components. Also, each of those modules may itself be shared between at least two physical components. In addition, the modules may be implemented in hardware, software, firmware, or any mixed form thereof. They are preferably embodied within at least one processor of the device 10 or of the device 15.

Starting with the device 15 for obtaining decisional guidance, it is provided for receiving selected data samples from the device 10 (including inputs and outputs) and for inducing from those samples the decisional guidance 35, on the ground of a module 16 for ML processing. The latter may be built from usual techniques directed to achieving the objectives detailed in the present disclosure, including notably Decision Trees, ANN, ensemble models or gradient boosting, as well known to a person skilled in the art. It will therefore not be developed here.

More precisely, the module 16 is configured for producing an ML prediction model 36 from the received samples, suited to produce the decisional guidance 35 based on those samples and possibly on further received samples.

In some variants, expert systems are used in the device 15 instead of ML processing. This may be relevant in some specific cases as simpler and faster, but usually at the cost of a sensitive preliminary adjustment of the related parameters, and of less flexibility and lower robustness to variations in the concerned situations.

The device 10 for selecting data samples comprises an input module 11 adapted to receive the new batches 21 of data samples (e.g. via a received datastream) and the previously kept data samples 20, for example from one or more local or remote database(s) 17. The latter can take the form of storage resources available from any kind of appropriate storage means, which can be notably a RAM or an EEPROM (Electrically-Erasable Programmable Read-Only Memory) such as a Flash memory, possibly within an SSD (Solid-State Disk). In variant implementations, the previously kept data samples 20 may be streamed to the device 10.

The input module 11 is also adapted to receive the time window width 30 (e.g. from the database 17), when already available rather than being computed by the device 10. Other parameters can be received by the input module 11 (e.g. from the database 17), including similarity criteria 31, elimination conditions 32 and optionally a similarity ratio 33, as developed below (in following examples, the similarity ratio 33 is instead computed by the device 10).

The device 10 optionally comprises a preselection module 12, adapted to preselect part of the newly received data samples, e.g. in the new batches 21 when a preselection has already been effected on the previously kept data samples 20, or more broadly on the current samples 23 of the time window 230. The preselection module 12 may be particularly useful when receiving a datastream, since it enables the size of the sample load to be reduced in real time to a desired dimension. For example, a sampling round executed on the fly while gathering data ensures that the selected data are representative of what happens within the time window 230. The sampling may use a reservoir sampling strategy, so that all received samples have a same probability to be in the reservoir, the acceptance probability being updated as samples arrive, which makes on-the-fly processing possible.
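For reference, a minimal sketch of the classic reservoir sampling strategy (algorithm R) evoked here could be as follows; it guarantees that, after n >= k items have been seen, each of them is in the reservoir with the same probability k/n:

```python
import random

def reservoir_sample(stream, k):
    """Classic reservoir sampling: every item seen so far ends up in
    the reservoir with equal probability k/n, which allows the
    preselection to run on the fly over a datastream."""
    reservoir = []
    for i, sample in enumerate(stream):
        if i < k:
            reservoir.append(sample)
        else:
            j = random.randint(0, i)        # uniform over 0..i inclusive
            if j < k:
                reservoir[j] = sample
    return reservoir
```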

Based on that preselection, a set of available data samples 22 is ready to be processed, which consists in the union of the past samples 25 and of the preselected samples of the time window 230 (i.e. the current samples 23).

In variant modes, the preselection module 12 is absent from the device 10 (whether the preselection has been executed upstream or all received samples are kept), so that the set of available data samples 22 to be processed is directly received as such by the device 10 instead of being mixed up with other data samples.

The device 10 also comprises a data sample selection module 13, configured for effecting an extraction of data samples among the received past samples 25, those extracted samples being called selected past samples 24. Such a selection introduces a forgetting mechanism in the oldest data, so as to avoid excessive computational costs while potentially getting a good sensitivity-stability tradeoff in a flexible way.

As illustrated on Figures 3 and 4, two kinds of forgetting operations are combined, respectively associated with a more recent sub-interval 260 and a contiguous older sub-interval 270 of the time interval 250.

More precisely, the past samples 25 consist of a first share 26 and a complementary share 27, respectively distributed in the sub-intervals 260 and 270. A first aspect of the selection carried out by the module 13 is that all data samples of the first share 26 are kept, i.e. that share is immune from deletion and falls within the selected past samples 24. A second aspect of the selection is that part of the data samples belonging to the complementary share 27 are deleted, which leads to a reduced set forming a second share 29 of the selected past samples 24. The module 13 determines the deleted past samples 28 by the elimination conditions 32, which depend both on the similarity criteria 31 applied to the past samples 25 and on the current samples 23 associated with the time window 230.

In a first implementation mode of the selection, illustrated on Figures 5 and 6, a decision tree 50 is built on the ground of the past samples 25, and comprises (in compliance with usual practice) a root 51, internal nodes 52, and leaves 53. The decision tree 50 may be obtained based on the automatic identification of feature similarities in the past samples 25, as known to a skilled person.

The decision tree 50 may be balanced in an enhanced way so as to improve accuracy, by using a gain function as described by A. Leroux et al. in “Information gain ratio correction: Improving prediction with more balanced decision tree splits”, arXiv:1801.08310, 2018.

At least part of the current samples 23 is then mapped to the decision tree 50, where each of the leaves 53 may encompass one or more of the past samples 25. In each leaf 530 (Figure 6), the arrival of one of the current samples 231 may induce the deletion of one of the past samples 281 belonging to the complementary share 27. For example, the assignment of any of the current samples 23 to one of the leaves 53 causes the incrementation of a leaf counter, until a deletion threshold is reached. The oldest sample 281 of the complementary share 27 in the leaf 530 is then removed.
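A possible (non-limiting) Python sketch of this leaf-counter deletion mechanism follows; it assumes a fitted scikit-learn tree, numpy attribute vectors, and complementary-share samples sorted from oldest to most recent, and resetting the counter after a deletion is one policy choice among others:

```python
from collections import defaultdict

def forget_by_leaf_counts(tree, complementary, current, threshold):
    """For each current sample assigned to a leaf, increment that
    leaf's counter; when the deletion threshold is reached, remove the
    oldest complementary-share sample in that leaf and reset the
    counter. 'complementary' is a list of (time, attributes) pairs,
    oldest first; 'current' yields attribute vectors (numpy arrays)."""
    counters = defaultdict(int)
    kept = list(complementary)
    for x in current:
        leaf = tree.apply(x.reshape(1, -1))[0]
        counters[leaf] += 1
        if counters[leaf] >= threshold:
            for idx, (_, attrs) in enumerate(kept):  # oldest first
                if tree.apply(attrs.reshape(1, -1))[0] == leaf:
                    kept.pop(idx)
                    break
            counters[leaf] = 0
    return kept
```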

In variant implementations, each of the samples belonging to the complementary share 27 in the leaf 530 is pre-weighted following the decision tree construction, as an increasing function of the number of the past samples 25 in the leaf 530. The related weights are then increased whenever one of the current samples 231 arrives. Once a preset deletion threshold is reached for any of the weighted samples 281, the latter is removed.

In this way, rare behaviors tend to be kept (low-traffic leaf) while common behaviors tend to be forgotten (high-traffic leaf).

In advantageous embodiments, the current samples 23 are mapped to the decision tree 50 in a chronological order of those current samples 23 for the deletion process. In variants, the current samples 23 are mapped to the decision tree 50 while the deletion process is executed in a chronological order of the past samples 25 of the leaves 53.

At any rate, the total number of deleted samples may be compliant with a preset number of kept samples in the complementary share 27, i.e. the target size of the second share 29. The total number of the selected past samples 24, i.e. in the first share 26 and in the second share 29, may be determined from an overall desired number of data samples for downstream processing by the device 15 (this overall number regarding the union of the current samples 23 and of the selected past samples 24) and from a desired balance between the first share 26 and the second share 29. The latter balance may be given by the similarity ratio 33 (as a hyper-parameter), noted q, giving the number of samples in the second share 29 divided by the total number of the selected past samples 24.

For example, a decision tree is to be built by the device 15 based on inputs from the device 10, and a maximum number of related operations is set. If e.g. the maximum number is 50,000 and the current samples 23 associated with the time window 230 amount to 25,000, then 25,000 samples can be kept in the time interval 250 associated with the past samples 25. Among them, (1 - q) x 25,000 samples can be taken for the first share 26 (which is close to a time forgetting approach), and q x 25,000 samples for the second share 29 (which is close to a similarity forgetting approach). Keeping the (1 - q) x 25,000 samples of the first share 26 provides the time length of the preserved sub-interval 260, while the q x 25,000 samples of the second share 29 may possibly be scattered within the sub-interval 290. The device 15 is thus fed with the appropriate number of 50,000 samples for building its decision tree.
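The arithmetic of this worked example can be captured in a few lines; the value q = 0.4 below is purely illustrative:

```python
def share_sizes(total_budget, n_current, q):
    """Split the sample budget: the past samples fill what the current
    samples leave free; the similarity ratio q fixes the balance
    between the second (similarity-kept) and first (time-kept) shares."""
    n_past = total_budget - n_current
    n_second = round(q * n_past)       # second share 29
    n_first = n_past - n_second        # first share 26
    return n_first, n_second

# The worked example above: 50,000 budget, 25,000 current samples.
n_first, n_second = share_sizes(50_000, 25_000, q=0.4)  # 15,000 and 10,000
```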

In variants, the current samples 23 or part of them are used together with the past samples 25 to build the decision tree 50, which is then exploited as described above. In other implementations, multiple decision trees are built and exploited as above, e.g. with random forest or XGBoost models. In some modes, the module 13 is adapted to assign the samples to the leaves of each of the trees, and to compute a similarity score related to the number of samples in each of the decision trees. That similarity score is exploited in a similar way to the above for past sample deletions.

In a second implementation mode of the selection, the first share 26 and the second share 29 of the past samples 25 are used in an ML model providing predictions for therapy-directed tasks, which may be carried out by the device 15. The past samples 25 are then respectively associated with resulting prediction errors, and the lowest values of the latter are privileged for sample deletion in relation with the arrival of the current samples 23. Indeed, the prediction error provides a fair estimation of the similarity of a particular point.

For example, as illustrated on Figure 7, on the ground of a prediction error distribution 55 based on the past samples 25, past samples 252 belonging to the complementary share 27 and associated with the smallest errors are subject to being deleted (like the past sample 282) whenever one of the current samples 232 is entered. The deleted past samples 28 then depend only on the number of the current samples 23.

In this way, samples more likely to be common are discarded in the first place, since corresponding to smaller prediction errors, while uncommon samples have more chances to be kept since corresponding to larger errors.
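A hedged sketch of this prediction-error-based forgetting is given below, assuming a trained regression model and numpy arrays; for classification, any per-sample loss could replace the absolute error:

```python
import numpy as np

def forget_by_prediction_error(model, complementary_X, complementary_y,
                               n_new_current):
    """Rank complementary-share samples by the prediction error of the
    ML model and delete the smallest-error (most 'common') ones, one
    per newly arrived current sample."""
    errors = np.abs(model.predict(complementary_X) - complementary_y)
    order = np.argsort(errors)                  # smallest error first
    to_delete = set(order[:n_new_current].tolist())
    keep = [i for i in range(len(errors)) if i not in to_delete]
    return complementary_X[keep], complementary_y[keep]
```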

That generic exploitation of ML predictions makes the present strategy applicable to a vast range of models, including gradient boosting methods, neural networks and ensemble algorithms.

In variants, the current samples 23 or part of them are used together with the past samples 25 to build the ML model providing predictions for therapy-directed tasks, which is then exploited as described above.

More will be said now about the similarity ratio 33. As mentioned above and illustrated on Figure 8, that ratio enables the balance between the first share 26 and the second share 29 of the selected past samples 24 to be tuned. In particular embodiments, the similarity ratio 33 is provided to the device 10, and properly set in function of the situations. For example, a high value is chosen for detecting abnormalities in physiological parameters (the high level of similarity remembering enabling odd behavior to be kept in memory), while a low value is chosen for reflecting a change of medical treatment (the low level of similarity remembering enabling old behavior to be forgotten as fast as possible).

In advantageous alternative implementations, the similarity ratio 33 is automatically tuned by the module 13 of the device 10, taking account of the ML processing of the device 15. This may enable the performance to be enhanced in a flexible and dynamic way. A rolling validation is for example carried out on the ground of time-series data, as explained below in relation with Figures 9 and 10.

At successive time positions of the time window 230 (indexed as n and n+1 on Figure 9), for a given value q of the similarity ratio 33, the ML prediction model 36 is built by the ML processing module 16 from the available data samples 22, through obtaining the current samples 23 and the selected past samples 24 with the module 13 for data sample selection. As apparent from the above explanations, the available data samples 22, the current samples 23 and selected past samples 24, and the ML prediction model 36 usually differ at each time position step of the time window 230 (while being presently noted in the same way for the sake of convenience).

The selected past samples 24 at a given step, e.g. step n, are then submitted to the ML prediction model 36 of a next step, presently n+1, from which an overall evaluation of relevance 37 can be induced. The evaluation 37 thus reflects the relevance of the samples selected at a given step to the data of the next step. The evaluations 37 are obtained for successive time positions of the time window 230 and averaged, leading to an error 38 (Figure 10). The same process is repeated for various candidate values q1, q2, ..., qm of the similarity ratio 33, m being for example equal to 10. The value selected for the similarity ratio 33 is then the one entailing the smallest error 38.

This automatic selection of the similarity ratio 33 ensures that the algorithm is not based on already seen data, while being able to detect concept drift.

In practice, it is observed that testing around 10 values of the similarity ratio 33 may provide a satisfying selection of a proper value q while avoiding additional computation costs. A similar technique may possibly be employed for automatically determining the time window width 30 (which is another hyper-parameter), instead of the previously described related implementations.
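The rolling validation just described may be sketched as follows; select_samples, train_model and evaluate are placeholders standing respectively for the selection by the module 13, the training by the module 16 and the relevance evaluation 37, none of which is fixed to a particular implementation here:

```python
import numpy as np

def select_similarity_ratio(window_positions, candidate_ratios,
                            select_samples, train_model, evaluate):
    """Rolling validation over time-series data: for each candidate q,
    the samples selected at step n are scored against the model trained
    at step n+1; the q with the smallest average error wins.
    window_positions should contain at least two positions."""
    errors = []
    for q in candidate_ratios:
        step_errors = []
        previous_selection = None
        for pos in window_positions:
            current, selected_past = select_samples(pos, q)
            model = train_model(current, selected_past)
            if previous_selection is not None:
                step_errors.append(evaluate(model, previous_selection))
            previous_selection = selected_past
        errors.append(np.mean(step_errors))
    return candidate_ratios[int(np.argmin(errors))]
```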

The device 10 further includes an output module 14, adapted to receive the current samples 23 and the selected past samples 24 from the data sample selection module 13, and to provide them to the device 15.

In operation, the system 1 may for example execute the following process, in relation with Figures 11A and 11B:

- preselect the available data samples 22 from the new batches 21 and from the previously kept samples 20 (step 62),

- proceed with data sample selection among the available data samples 22, by using the time window width 30, the similarity criteria 31 and the elimination conditions 32, so as to produce the current samples 23 and the selected past samples 24 (step 63),

- apply an ML processing to the selected data samples and induce therefrom the decisional guidance 35 (step 66).

More precisely, the data sample selection step 63 may hinge on the following operations:

- partition the available data samples 22 into the current samples 23 and the past samples 25, based on the time window width 30 (step 631),

- partition the past samples 25 into the first share 26 and the complementary share 27, based on the similarity ratio 33 (step 632),

- apply a similarity algorithm to the past samples 25, including the first share 26 and the complementary share 27, based on the similarity criteria 31, so as to obtain a similarity structuration 5 of the past samples 25 (step 633),

- extract the second share 29 from the similarity structuration 5, the elimination conditions 32 and the current samples 23 while taking account of the similarity ratio 33 (step 634).
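Putting the four operations together, a skeleton of the selection step 63 might look as follows; available is assumed to be a list of (time, attributes) pairs, n_keep_past the target number of selected past samples, and build_similarity and eliminate placeholders for the similarity algorithm and the elimination conditions 32:

```python
def select_data_samples(available, window_width, n_keep_past, q,
                        build_similarity, eliminate):
    """Skeleton of the data sample selection step 63."""
    t_end = max(t for t, _ in available)
    # 631: partition into current samples 23 and past samples 25
    current = [s for s in available if s[0] > t_end - window_width]
    past = sorted((s for s in available if s[0] <= t_end - window_width),
                  key=lambda s: s[0])
    # 632: split the past samples using the similarity ratio 33
    n_first = round((1 - q) * n_keep_past)
    first_share = past[-n_first:] if n_first else []      # most recent ones
    complementary = past[:-n_first] if n_first else past
    # 633: similarity structuration 5 of the past samples
    structure = build_similarity(past)
    # 634: extract the second share 29 under the elimination conditions
    second_share = eliminate(structure, complementary, current,
                             target=n_keep_past - n_first)
    return current, first_share + second_share
```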

In addition, in operation, the system 1 may for example execute the following process for automatically and dynamically determining the similarity ratio 33, as shown on Figure 12:

- for a first value 331 of the similarity ratio 33 and a first value 2301 of the time window position, proceed with the step 63 of data sample selection on first available data samples 221,

- apply the ML processing step 66 to the obtained current samples 23 and selected past samples 24, so as to obtain the ML prediction model 36,

- test whether a previous time window position is available (step 671),

- if a previous time window position is not available, switch to a next available time window position associated with a current value 2305 of the time window position and with current available data samples 225 (step 672), and proceed with the data selection step 63 and the ML processing step 66 as above based on the updated data,

- if a previous time window position is available, apply the ML prediction to the past samples 245 selected at the previous step so as to obtain the related evaluation of relevance 37 (step 673),

- test whether a next time window position is available (step 674),

- if a next time window position is available, switch to that next position having a current value 2305 and associated with current available data samples 225 (step 672), and proceed with the data selection step 63, the ML processing step 66 and the ML prediction step 673 as above based on the updated data,

- if a next time window position is not available, average the evaluations of relevance 37 over the successive time window positions for the first value 331 of the similarity ratio, so as to produce the estimated error 38 (step 675),

- test whether a next value of the similarity ratio 33 is available (step 676),

- if a next value of the similarity ratio 33 is available, switch to that next value 335 and re-execute the above steps so as to obtain the corresponding error 38 (step 677),

- if a next value of the similarity ratio 33 is not available, select the value 336 of the similarity ratio 33 associated with the smallest value of the error 38 (step 678).

Some illustrative implementations are as follows, in relation with Figures 13A, 13B, 13C and 13D.

In a first example (Figure 13A), a patient 411 is subject to automatic regular tracking of physiological parameters by a wearable monitoring device 412 (or more than one), in the frame of medical supervision by one or more medical practitioners, who may be e.g. a physician in a hospital center 414. Beyond medical monitoring, the supervision may be linked to a medical treatment, e.g. drug intake, insulin delivery or intravenous infusion. The physiological parameters may be related to sleep apnea, blood glucose, heartbeats and/or blood pressure, and are transmitted over time by the monitoring device 412 as dynamic data 415 via a communication network 413 to a control apparatus 410 coupled with the system 1 for health management within the hospital center 414. The monitoring device 412 may include or cooperate with e.g. a pacemaker, a sugar rate monitor, a Holter monitor, or a TENS (Transcutaneous Electrical Nerve Stimulation) device. The dynamic data 415 enable the medical practitioners to follow up health conditions of the patient 411 as appropriate. Depending on the situations, the patient 411 may be notably at home or in hospital. Instead of, or in addition to, the wearable monitoring device 412, the patient 411 may periodically log health measurement information, e.g. blood pressure taken twice a day, number of night-time awakenings or daily weight at wake-up, thereby feeding the dynamic data 415.

The system 1 is adapted to process the received dynamic data 415, and to issue a warning 416 to the control apparatus 410 when monitored constants seem to be off track regarding the patient's standards. In this way, the medical practitioners can focus on risky situations and/or important cases, since human medical resources are usually limited and rare, so that medical staff expertise is precious. This may notably help physicians to better monitor their patients and to increase the number of saved lives. More precisely, the forgetting strategy with a fine-tuning of a balance between similarity and time forgetting may allow the system 1 to remember the latest odd events that occurred e.g. during the last month of the patient's life. For example, different periods of arrhythmia are saved over time and can be used to predict their occurrence, thereby accelerating support to the patient 411. An interesting aspect is that the system 1 may aggregate and furnish very relevant metrics to the medical practitioners so as to help them in their acts, while fulfilling medicine-specific adaptation needs, since the state of the patient 411 evolves over time.

On one hand, the forgetting can enable to quickly update the understanding of a nominal behavior of the patient 411 over time, which may allow e.g. the medical practitioners to promptly adapt a medication dosage so as to reach expected curative effects and control or prevent crises. On the other hand, the remembering of odd constants variations over time may e.g. allow the medical practitioners to set a proper drug administration dose without being harmful to the patient 411.

Based on the relevant information obtained by the medical practitioners, appropriate steps may be taken, e.g. contacting the patient 411 for modifying the treatment, asking the patient 411 to attend a medical appointment, or providing medical assistance. In some implementations, the control apparatus 410 is adapted to send warnings or instructions 417 to the patient 411 (e.g. transmitted via the communication network 413, received by a communication apparatus and available to the patient 411 via a user interface), for example for stopping or decreasing medication intake.

In variant modes, the system 1 is suited to self-medication. The control apparatus 410, which may be at the patient's home, is then configured for directly providing feedback recommendations regarding drug posology adjustments to the patient 411 in function of processing by the system 1 (e.g. dosing of hypotensive agents taking account of notably blood pressure). A right posology may then be adopted at the right time, particularly in case of crises, the occurrence of which can be recognized thanks to the proper memory balance of the system 1. Such adjustments are subject to medical control, and are for example restricted to a medication range preset beforehand by the patient's physician and associated with triggering the sending of a warning to that physician when abnormalities are assessed.

In a second example (Figure 13B), a pharmaceutical laboratory or company 424 is developing a drug, which involves a long process before marketing authorization and commercialization. Such a process is often lengthy and uncertain, and feedback from groups 421-i of test patients 4210 (with the integer i identifying a group number) must be closely considered. For example, a first group 421-1 is using a first posology, a second group 421-2 is using a second posology, and a third group 421-3 is provided with placebos. The patients 4210 may be provided with respective physiological parameter monitoring and/or medical data logging devices 422, adapted to send dynamic data 425 via a communication network 423 to a control apparatus 420 coupled with the system 1 for health management within the pharmaceutical laboratory 424.

In this respect, relying on the first example above may provide a helpful tool with a view to assessing the efficiency and side effects of a medication, and to tuning the related posology, based on the data respectively received from the multiple patients 4210. Those various collected data, which bring general hints on drug effects (efficiency and side effects), may be individually processed in parallel and/or successively for the respective patients 4210 by the system 1, provided with a proper balance between time forgetting and similarity forgetting, and the resulting information be dealt with jointly through e.g. statistical considerations. Also, the medication conditions (e.g. drug posology) may be adjusted for already monitored or new groups 421-i of the test patients 4210, by sending suited instructions 426 with the control unit 420 (e.g. via the network 423 to the devices 422, incorporating user interfaces). As a matter of fact, drug development is often based on a series of adjustments that converge to the adopted solution.

In some advantageous alternative embodiments, a collective data processing is fully part of the system 1 instead of being performed downstream. Namely, the group identities and individual patient identities within the groups 421-i are integrated as parameters within the sent data 425 and processed accordingly by the system 1. This may enable collective time evolutions, e.g. associated with seasonal variations or with population immunization, to be directly taken into account, and health or drug reaction singularities pertaining to individuals or to particular patient classes to be finely pointed out. The associated forgetting enables the dataset to be rebalanced, and a model to be built giving a more adjusted grid for drug prescription in case the drug is accepted.

The system 1 may allow an expected nominal reaction to a drug to be updated depending on a posology and on patient health conditions. It may contribute to revealing faster that a majority of the patients 4210 react positively and that a new iteration is better than a previous one. It may remember specific cases where the drug has severe side effects or had better efficiency on specific patients. It may allow researchers to find some extra use for the drug, or to avoid prescribing it in dangerous cases.

A third example (Figure 13C) relates to diagnosis refinement, helping physicians to establish proper diagnoses not only based on personal evolutions of the patients, but also on collective disease evolutions and discoveries. Indeed, medical knowledge as well as disease landscapes are constantly moving, and new diseases are regularly appearing or coming to light. Accordingly, the disease classification is increasingly refined, sometimes reaching impressive detail levels. This often makes a precise diagnosis complicated and demanding in terms of examination. This is notably where ML can often provide helpful support to physicians, potentially giving them insight into relevant worldwide medical information.

In an illustration case, a patient 431 is under the care of a physician working in a hospital center or a doctor’s office 434 provided with (or remotely connected to) the system 1, which may include e.g. classification ML capacities directed to disease recognition. The patient 431 or the physician can log symptom features 435 into the system 1 with a user interface, possibly via a communication network. The system 1 is further adapted to be fed with past data recorded on the patient 431 about the patient’s analyses, medical examinations, treatments and/or disease history, e.g. retrieved from a database 432 kept in the hospital center or doctor’s office 434 while meeting privacy requirements. This may provide a time series of health data relevant to the system 1, based on proper forgetting strategy and directed to predicting one or more most probable diseases corresponding to the latest symptom features 435 thanks e.g. to the classification ML capacities. The latter may be trained with regularly updated medical data 437 integrating most recent disease knowledge (e.g. recently discovered new type of flu or of coronavirus) and possibly originating from worldwide sources, e.g. available from databases 436 via communication networks 433. The physician may further possibly contribute to enriching the databases 436 by transmitting via the networks 433 anonymized information 438 on the treated cases. Those operations may be performed either in the frame of a single consultation or repeatedly over time for the same patient 431.

Beyond such an individual exploitation of the system 1, attractive embodiments involve a collective application to disease identification by e.g. hospitals, academic research laboratories or health agencies. This may notably enable properly structured collective intelligence and/or labelled datasets stored in the databases 436 to be dynamically established, possibly relevant to the field designated as big data, and available to remote local ML training. In this respect, the system 1 may be fed over time with various pieces of information pertaining to known, investigated or emerging diseases (e.g. symptoms, evolution, prevention, treatment, environment, epidemiology, etc.), such as illustrated above with the anonymized information 438 recorded by a physician. By a proper forgetting-remembering approach, the system 1 may be suited to flexibly adapting to worldwide disease evolutions (e.g. due to hygiene, diet, environment and medicine enhancements, or to pollution, environmental degradation, and increased virulence of infectious agents or microorganisms) and progressive discoveries, while automatically keeping track of rare diseases through the ages, including associated sets of symptoms.

A fourth example (Figure 13D) regards medical rehabilitation, and may concern notably post-traumatic recovery after injury or illness. It may pertain to recovering at least partly motor, sensory or cognitive skills. A patient 441 is following a rehabilitation program prepared and supervised by e.g. his general practitioner, a rehabilitation center, a physiotherapist or a neurologist, in a dedicated establishment or at home, as appropriate depending on the required health care.

An apparatus working as a local assistant 444 is available to the patient 441 for supporting and properly guiding the patient's efforts over time, taking account of the progress already accomplished. A detector 442 is adapted to capture the patient's capacity at a given point of time, and to communicate the related data 445 to the local assistant 444. The detector 442 may e.g. include sensors adapted to detect leg, arm or head movements (for respectively leg, arm or neck rehabilitation), or to track eye movements, interactive responses or speech expression in dedicated exercises (for cognitive or speech rehabilitation). In some cases, the detector 442 may be replaced with a logging system, e.g. a laptop user interface.

The local assistant 444 includes the system 1 for health management, which encompasses AI functionalities configured for the rehabilitation program. For example, the system 1 may be provided with reinforcement learning having a goal associated with recovered, or at least significantly enhanced, patient’s capacities. The system 1 may instead or in addition involve rehabilitation schemes based on supervised learning, trained with datasets generated from multiple previously available labelled data on rehabilitation processes with other patients. The system 1 may instead or in addition rely on an expert system. At any rate, the system 1 is adapted to upstream selection of data samples as detailed above, the selected data samples being then exploited in the AI operations.

Thanks to its forgetting aspects, the system 1 may disregard former out-of-date flawed behaviors and progressively integrate favorable evolutions for next predictions. Also, thanks to remembering some specific odd events, e.g. significant failures, slowdowns or hurdles in rehabilitation progress or prejudicial health consequences of some efforts, or on the contrary accelerated progress in specific circumstances, the system 1 may properly adjust its rehabilitation strategy.

The most recently updated situation combined with the automatically selected pieces of rehabilitation history may thus lead to finely suited recommendations 446 to the patient 441 for executing one or more next rehabilitation steps. The step succession may accordingly be repeated over time as follows: transmission of the patient's capacity status from the detector 442 to the local assistant 444 via the data 445; automatic analysis by the system 1 taking account of the patient's progression history; and communication of the recommendations 446 to the patient 441 directed to the next rehabilitation step(s). The time scale of those operations further depends on the kind of rehabilitation. For example, for recovering leg or ankle mobility, a daily monitoring and communication with the local assistant 444 may be appropriate. If, by contrast, progress is intended to be made during each rehabilitation session, for example by monitoring movement enhancements from the session start to its end, shorter periodic exchanges are suited, e.g. every 10 minutes or every minute.

Beyond the above examples, it can be observed that in health management, the presently described forgetting mechanisms can be particularly relevant, notably due to two related particularities.

First, forgetting may provide an efficient way to select medical data, by automatically discarding samples that are too old and no longer relevant, or which are common and thus not interesting to store. This may lead to a better data selection brought to the attention of physicians, thereby focusing on the most accurate data and, where coupled with a suited ML model, facilitating right medical diagnosis and/or decision making.

Secondly, ML models often perform badly when facing an imbalanced dataset, while such a situation is relatively common and important in medicine, in which e.g. a rare disease or an unexpected seizure may occur infrequently and be missed due to confusion with common events. This could translate into two-class problems with 99 % of the samples falling into one class and only 1 % into the other, pushing the ML model to classify all samples into the 99 % class. The rebalance provided by the present forgetting strategy may notably allow medical practitioners to detect relatively rare diseases faster. A suited balance between time forgetting and similarity forgetting as explained above may further avoid over-detecting such rare diseases or events.

A particular apparatus 7, visible on Figure 14, embodies the system 1 as described above. It corresponds for example to a mainframe computer, a workstation, a laptop, a tablet, a smartphone, or a head-mounted display (HMD).

That apparatus 7 is suited to selecting data samples and providing decisional guidance. It comprises the following elements, connected to each other by a bus 75 of addresses and data that also transports a clock signal:

- a microprocessor 71 (or CPU);

- a graphics card 72 comprising several Graphical Processing Units (or GPUs) 720 and a Graphical Random Access Memory (GRAM) 721; the GPUs are quite suited to repeated computations on the data samples, due to their highly parallel structure;

- a non-volatile memory of ROM type 76;

- a RAM 77;

- one or several I/O (Input/Output) devices 74 such as for example a keyboard, a mouse, a trackball, a webcam; other modes for introduction of commands such as for example vocal recognition are also possible;

- a power source 78; and

- a radiofrequency unit 79.

According to a variant, the power source 78 is external to the apparatus 7.

The apparatus 7 also comprises a display device 73 of display screen type, directly connected to the graphics card 72 to display synthesized images calculated and composed in the graphics card. The use of a dedicated bus 730 to connect the display device 73 to the graphics card 72 offers the advantage of having much greater data transmission bitrates, thus reducing the latency time for displaying images composed by the graphics card, e.g. for ML representations. According to a variant, a display device is external to the apparatus 7 and is connected thereto by a cable or wirelessly for transmitting the display signals. The apparatus 7, for example through the graphics card 72, comprises an interface for transmission or connection adapted to transmit a display signal to an external display means such as for example an LCD or plasma screen or a video-projector. In this respect, the RF unit 79 can be used for wireless transmissions.

It is noted that the word "register" used hereinafter in the description of memories 77 and 721 can designate, in each of the memories mentioned, a memory zone of low capacity (some binary data) as well as a memory zone of large capacity (enabling a whole program to be stored, or all or part of the data representative of data calculated or to be displayed). Also, the registers represented for the RAM 77 and the GRAM 721 can be arranged and constituted in any manner, and each of them does not necessarily correspond to adjacent memory locations and can be distributed otherwise (which covers notably the situation in which one register includes several smaller registers).

When switched on, the microprocessor 71 loads and executes the instructions of the program contained in the RAM 77. The random access memory 77 comprises notably:

- in a register 770, the operating program of the microprocessor 71;

- in a register 771, the hyper-parameters including the window width 30 and the similarity ratio 33;

- in a register 772, the similarity criteria 31;

- in a register 773, the elimination conditions 32;

- in a register 774, the decisional guidance 35.

Algorithms implementing the steps of the method specific to the present disclosure and described above are stored in the memory GRAM 721. When switched on and once the parameters 771 to 774 are loaded into the RAM 77, the graphics processors 720 of the graphics card 72 load appropriate information and parameters into the GRAM 721 and execute the instructions of those algorithms in the form of microprograms.

The random access memory GRAM 721 comprises notably:

- in a register 7211, the new batches 21;

- in a register 7212, the previously kept samples 20;

- in a register 7213, the available data samples 22;

- in a register 7214, the current samples 23;

- in a register 7215, the selected past samples 24;

- in a register 7216, the ML prediction model 36.

As will be understood by a skilled person, the presence of the graphics card 72 is not mandatory, and the graphics card can be replaced with entire CPU processing and/or simpler visualization implementations.

In variant modes, the apparatus 7 may include only the functionalities of the device 10 for selecting data samples, or only the functionalities of the device 15 for providing decisional guidance. In addition, the device 10 and/or the device 15 may be implemented other than as standalone software, and an apparatus or set of apparatuses comprising only parts of the apparatus 7 may be exploited through an API call or via a cloud interface.