882 results for Bayesian model selection


Relevance:

80.00%

Publisher:

Abstract:

Recognition and identification processes for deceased persons. Determining the identity of deceased persons is a routine task performed mainly by police departments and forensic experts. This thesis sets out the processes necessary for the proper and transparent determination of the civil identity of deceased persons. The identity of a person is defined as the establishment of a link between that person ("the source") and information pertaining to the same individual ("identifiers"). Different forms of identity can emerge, depending on the nature of the identifiers; two distinct types are considered, namely civil identity and biological identity. The thesis examines four processes: identification by witnesses (the recognition process) and comparisons of fingerprints, dental data and DNA profiles (the identification processes). For the recognition process, the functioning of memory is examined, which helps to clarify the circumstances that may give rise to errors. To make the process more rigorous, a body-presentation procedure is proposed for investigators. Before the other processes are examined, three general concepts specific to forensic science are considered with regard to the identification of a deceased person: the divisibility of matter (Inman and Rudin), transfer (Locard) and uniqueness (Kirk). These concepts can be applied to the task at hand, although some require a slightly broader scope of application. A cross-comparison of common forensic fields and the identification of deceased persons reveals certain differences, including (1) a reversed position of the source (the source is not sought from traces; rather, the identifiers are obtained from the source); (2) the need to determine civil identity in addition to the individualisation stage; and (3) a more restricted, closed population rather than an open one.
For fingerprints, dental data and DNA, intravariability and intervariability are examined, as well as post-mortem (PM) changes in these identifiers. Ante-mortem (AM) identifiers are located and AM-PM comparisons are made. For DNA, it is shown that direct identifiers (taken from the person whose civil identity has been alleged) tend to lead to a determination of civil identity, whereas indirect identifiers (obtained from a close relative) lead towards a determination of biological identity. For each process, a Bayesian model is presented that includes the sources of uncertainty deemed relevant. The results of the different processes are then combined into a structured overall outcome and methodology. The modelling of dental data presents a specific difficulty with respect to intravariability, which is not in itself quantifiable; the concept of "validity" is therefore suggested as a possible solution. Validity draws on various parameters that have an acknowledged impact on dental intravariability. In cases where identifying deceased persons proves extremely difficult because of the limited discriminating power of certain procedures, a Bayesian approach is of great value in providing a transparent, synthesised result.
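As a hedged illustration (not taken from the thesis), the Bayesian synthesis of several identification processes can be sketched as posterior odds obtained by multiplying likelihood ratios that are assumed conditionally independent given the identification hypothesis; the prior reflects a closed candidate population. All numbers below are illustrative.

```python
def posterior_probability(prior_odds, likelihood_ratios):
    """Posterior probability of identity from prior odds and a list of LRs.

    Assumes the processes contributing the LRs are conditionally
    independent, so the likelihood ratios multiply.
    """
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Closed set of 200 candidate identities -> prior odds of 1/199.
prior_odds = 1.0 / 199.0
# Illustrative likelihood ratios from three processes
# (fingerprints, dental data, DNA).
lrs = [50.0, 20.0, 1000.0]
p = posterior_probability(prior_odds, lrs)
print(round(p, 6))
```

Even with a small prior in a closed set, moderately discriminating processes combine into a near-certain posterior, which is the transparency benefit the thesis attributes to the Bayesian synthesis.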

Abstract:

This letter presents advanced classification methods for very high resolution images. Multisource information, both spectral and spatial, is efficiently exploited through the use of composite kernels in support vector machines. Weighted summations of kernels, each accounting for a separate source of spectral or spatial information, are analysed and compared with classical approaches such as purely spectral classification or stacked approaches that place all the features in a single vector. Model selection problems are addressed, as well as the importance of the different kernels in the weighted summation.
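A minimal sketch of the weighted kernel summation, assuming Gaussian RBF base kernels; the feature matrices, weights and bandwidths below are illustrative and do not come from the letter.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def composite_kernel(X_spec, X_spat, Y_spec, Y_spat, mu, g_spec, g_spat):
    """Weighted summation of a spectral and a spatial kernel.

    mu in [0, 1] balances the two information sources.
    """
    K_spec = rbf_kernel(X_spec, Y_spec, g_spec)
    K_spat = rbf_kernel(X_spat, Y_spat, g_spat)
    return mu * K_spec + (1.0 - mu) * K_spat

rng = np.random.default_rng(0)
Xs = rng.normal(size=(6, 10))   # per-pixel spectral features
Xp = rng.normal(size=(6, 4))    # per-pixel spatial features
K = composite_kernel(Xs, Xp, Xs, Xp, mu=0.7, g_spec=0.1, g_spat=0.5)
print(K.shape)
```

A convex combination of positive semi-definite kernels is itself positive semi-definite, which is what makes the weighted summation a valid kernel; in the letter the resulting matrix feeds a support vector machine (with scikit-learn this would typically be done via `SVC(kernel='precomputed')`).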

Abstract:

Daily precipitation is recorded as the total amount of water collected by a rain gauge in 24 h. Events are modelled as a Poisson process and the 24 h precipitation by a Generalised Pareto Distribution (GPD) of excesses. Hazard assessment is complete when estimates of the Poisson rate and the distribution parameters, together with a measure of their uncertainty, are obtained. The shape parameter of the GPD determines the support of the variable: the Weibull domain of attraction (DA) corresponds to variables with finite support, as should be the case for natural phenomena. However, the Fréchet DA has been reported for daily precipitation, which implies an infinite support and a heavy-tailed distribution. Bayesian techniques are used to estimate the parameters. The approach is illustrated with precipitation data from the eastern coast of the Iberian Peninsula, an area affected by severe convective precipitation. The estimated GPD is mainly in the Fréchet DA, which is incompatible with the common-sense assumption that precipitation is a bounded phenomenon. The bounded character of precipitation is then taken as an a priori hypothesis, and the consistency of this hypothesis with the data is checked in two cases: using the raw data (in mm) and using log-transformed data. As expected, Bayesian model checking clearly rejects the model in the raw-data case. However, the log-transformed data seem to be consistent with the model. This may be due to the adequacy of the log scale for representing positive measurements, for which differences are better expressed in relative rather than absolute terms.
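The peaks-over-threshold machinery can be sketched with SciPy; the data below are simulated stand-ins for the Iberian record, and the maximum-likelihood fit is a simple substitute for the Bayesian estimation used in the paper.

```python
import numpy as np
from scipy.stats import genpareto

# Simulated daily-rainfall excesses over a threshold (mm): a heavy-tailed
# stand-in for the Eastern-Iberian data, which are not reproduced here.
excesses = genpareto.rvs(c=0.2, scale=10.0, size=2000, random_state=42)

# Fit the GPD to the excesses, with location fixed at 0 as is appropriate
# for threshold exceedances.
c_raw, loc_raw, scale_raw = genpareto.fit(excesses, floc=0.0)

# Sign of the shape parameter decides the domain of attraction:
#   c > 0: Frechet DA (unbounded support, heavy tail)
#   c < 0: Weibull DA (finite upper bound)
print("estimated shape:", round(c_raw, 3))
```

In the paper the same diagnostic is run on raw and log-transformed data; a clearly positive shape on the raw scale is what conflicts with the bounded-precipitation hypothesis.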

Abstract:

Salmonella is distributed worldwide and is a pathogen of economic and public health importance. As a multi-host pathogen with long environmental persistence, it is a suitable model for the study of wildlife-livestock interactions. In this work, we aim to explore the spill-over of Salmonella between free-ranging wild boar and livestock in a protected natural area in NE Spain, and the presence of antimicrobial resistance. Salmonella prevalence, serotypes and diversity were compared between wild boars, sympatric cattle and wild boars from cattle-free areas. The effects of age, sex, cattle presence and cattle herd size on the probability of Salmonella infection in wild boars were explored by means of Generalized Linear Models and model selection based on Akaike's Information Criterion. Prevalence was higher in wild boars co-habiting with cattle (35.67%, 95% CI 28.19–43.70) than in wild boars from cattle-free areas (17.54%, 95% CI 8.74–29.91). The probability of a wild boar being a Salmonella carrier increased with cattle herd size but decreased with host age. The serotypes Meleagridis, Anatum and Othmarschen were isolated concurrently from cattle and sympatric wild boars. Apart from serotypes shared with cattle, wild boars appear to have their own serotypes, which are also found in wild boars from cattle-free areas (Enteritidis, Mikawasima, 4:b:- and 35:r:z35). Serotype richness (diversity) was higher in wild boars co-habiting with cattle, but evenness was not altered by the introduction of serotypes from cattle. The finding in wild boars of an S. Mbandaka strain resistant to sulfamethoxazole, streptomycin and chloramphenicol and an S. Enteritidis strain resistant to ciprofloxacin and nalidixic acid is cause for public health concern.
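A hedged sketch of the GLM-plus-AIC workflow on simulated (not real) wild-boar data: logistic regression fitted by Newton-Raphson, AIC computed by hand, and a covariate model compared against an intercept-only model.

```python
import numpy as np

def fit_logistic(X, y, n_iter=50):
    """Logistic regression by Newton-Raphson (IRLS); returns (beta, logLik)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        # Newton step: beta += (X'WX)^{-1} X'(y - p)
        beta = beta + np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return beta, loglik

def aic(loglik, n_params):
    return 2 * n_params - 2 * loglik

rng = np.random.default_rng(1)
n = 500
herd_size = rng.normal(size=n)            # standardized cattle herd size
age = rng.normal(size=n)                  # standardized host age
eta = 0.5 + 1.0 * herd_size - 0.8 * age   # infection odds rise with herd size,
p = 1 / (1 + np.exp(-eta))                # fall with age (illustrative effects)
y = rng.binomial(1, p)

X_null = np.ones((n, 1))
X_full = np.column_stack([np.ones(n), herd_size, age])
_, ll0 = fit_logistic(X_null, y)
_, ll1 = fit_logistic(X_full, y)
print("AIC null:", round(aic(ll0, 1), 1), "AIC full:", round(aic(ll1, 3), 1))
```

The model carrying the true covariates attains the lower AIC, mirroring the selection step described in the abstract.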

Abstract:

Geophysical data may provide crucial information about hydrological properties, states, and processes that are difficult to obtain by other means. Large data sets can be acquired over widely different scales in a minimally invasive manner and at comparatively low cost, but their effective use in hydrology requires an understanding of the fidelity of geophysical models, the assumptions made in their construction, and the links between geophysical and hydrological properties. Geophysics has been applied to groundwater prospecting for almost a century, but only in the last 20 years has it been regularly used together with classical hydrological data to build predictive hydrological models. A largely unexplored avenue for future work is to use geophysical data to falsify or rank competing conceptual hydrological models. A promising cornerstone for such a model selection strategy is the Bayes factor, but it can be calculated reliably only when the main sources of uncertainty throughout the hydrogeophysical parameter estimation process are considered. Most classical geophysical imaging tools tend to favor models with smoothly varying property fields that are at odds with most conceptual hydrological models of interest. It is thus necessary to account for this bias, or to use alternative approaches in which the proposed conceptual models are honored at all steps of the model-building process.
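A minimal illustration of Bayes-factor-based model ranking, using two toy Gaussian models and a grid approximation of the marginal likelihood; this is a stand-in for the full hydrogeophysical uncertainty propagation the abstract calls for, and all models and numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
data = rng.normal(loc=1.0, scale=1.0, size=30)   # synthetic observations

# M0: data ~ N(0, 1), no free parameters -> marginal likelihood is the
# likelihood itself.
log_ml0 = norm.logpdf(data, loc=0.0, scale=1.0).sum()

# M1: data ~ N(mu, 1) with prior mu ~ N(0, 2^2); the marginal likelihood
# integrates the likelihood over the prior, here on a fine grid of mu.
mu_grid = np.linspace(-10.0, 10.0, 2001)
dx = mu_grid[1] - mu_grid[0]
log_prior = norm.logpdf(mu_grid, 0.0, 2.0)
log_like = norm.logpdf(data[:, None], mu_grid[None, :], 1.0).sum(axis=0)
log_int = log_prior + log_like
m = log_int.max()                                 # log-sum-exp for stability
log_ml1 = m + np.log(np.exp(log_int - m).sum() * dx)

bayes_factor = np.exp(log_ml1 - log_ml0)          # BF > 1 favours M1
print("log10 Bayes factor:", round(np.log10(bayes_factor), 2))
```

The Bayes factor automatically penalises the extra flexibility of M1 through the prior (the "Occam factor"), which is what makes it attractive for ranking competing conceptual models.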

Abstract:

Health and inequalities in health among the inhabitants of European cities are of major importance for European public health, and there is great interest in how the different health care systems in Europe perform in reducing health inequalities. However, evidence on the spatial distribution of cause-specific mortality across neighbourhoods of European cities is scarce. This study presents maps of avoidable mortality in European cities and analyses differences in avoidable mortality between neighbourhoods with different levels of deprivation. Methods: We determined the level of mortality from 14 avoidable causes of death for each neighbourhood of 15 large cities in different European regions. To address the problems associated with Standardised Mortality Ratios for small areas, we smoothed them using the Bayesian model proposed by Besag, York and Mollié. Ecological regression analysis was used to assess the association between social deprivation and mortality. Results: Mortality from avoidable causes of death is higher in deprived neighbourhoods, and mortality rate ratios between areas with different levels of deprivation differ between genders and cities. In most cases rate ratios are lower among women. While Eastern and Southern European cities show higher levels of avoidable mortality, the association of mortality with social deprivation tends to be stronger in Northern and weaker in Southern Europe. Conclusions: There are marked differences in the level of avoidable mortality between neighbourhoods of European cities, and the level of avoidable mortality is associated with social deprivation. There is no systematic difference in the magnitude of this association between European cities or regions. Spatial patterns of avoidable mortality across small city areas can point to possible local problems and to specific strategies for reducing health inequality, which is important for the development of urban areas and the well-being of their inhabitants.
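The Besag-York-Mollié model adds spatially structured random effects and is beyond a short sketch; a simplified, non-spatial analogue of SMR smoothing is Poisson-gamma (empirical Bayes) shrinkage, shown here on simulated areas with illustrative prior parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n_areas = 40
expected = rng.uniform(2.0, 50.0, size=n_areas)           # expected deaths E_i
true_rr = rng.gamma(shape=10.0, scale=0.1, size=n_areas)  # relative risks near 1
observed = rng.poisson(expected * true_rr)                # observed deaths O_i

# Raw SMRs are unstable wherever the expected count is small.
smr = observed / expected

# Poisson-gamma shrinkage: with a Gamma(a, b) prior on the relative risk,
# the posterior mean is (O_i + a) / (E_i + b), pulling noisy SMRs toward
# the prior mean a / b.  a = b = 10 is an illustrative choice (mean 1).
a = b = 10.0
smoothed = (observed + a) / (expected + b)
print(smr.round(2)[:5], smoothed.round(2)[:5])
```

The BYM model goes further by letting each area borrow strength from its spatial neighbours rather than from a single global prior.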

Abstract:

Coastal birds are an integral part of coastal ecosystems, which nowadays are subject to severe environmental pressures. Effective measures for the management and conservation of seabirds and their habitats call for insight into their population processes and the factors affecting their distribution and abundance. Central to national and international management and conservation measures is the availability of accurate data and information on bird populations, as well as on environmental trends and on measures taken to solve environmental problems. In this thesis I address different aspects of the occurrence, abundance, population trends and breeding success of waterbirds breeding on the Finnish coast of the Baltic Sea, and discuss the implications of the results for seabird monitoring, management and conservation. In addition, I assess the position and prospects of coastal bird monitoring data in the processing and dissemination of biodiversity data and information in accordance with the Convention on Biological Diversity (CBD) and other national and international commitments. I show that important factors for seabird habitat selection are island area and elevation, water depth, shore openness, and the composition of island cover habitats. Habitat preferences are species-specific, with certain similarities within species groups. The occurrence of the colonial Arctic Tern (Sterna paradisaea) is partly affected by different habitat characteristics than its abundance. Using long-term bird monitoring data, I show that eutrophication and winter severity have reduced the populations of several Finnish seabird species. A major demographic factor through which environmental changes influence bird populations is breeding success. Breeding success can serve as a more rapid indicator of sublethal environmental impacts than population trends, particularly for long-lived and slow-breeding species, and should therefore be included in coastal bird monitoring schemes.
Among my target species, local breeding success can be shown to affect the populations of the Mallard (Anas platyrhynchos), the Eider (Somateria mollissima) and the Goosander (Mergus merganser) after a time lag corresponding to their species-specific recruitment age. For some of the target species, the number of individuals in late summer can be used as an easier and more cost-effective indicator of breeding success than brood counts. My results highlight that the interpretation and application of habitat and population studies require solid background knowledge of the ecology of the target species. In addition, the special characteristics of coastal birds, their habitats, and coastal bird monitoring data have to be considered when assessing their distribution and population trends. According to the results, the relationships between the occurrence, abundance and population trends of coastal birds and environmental factors can be quantitatively assessed using multivariate modelling and model selection. Spatial data sets widely available in Finland can be used to calculate several variables relevant to the habitat selection of Finnish coastal species. For some habitat characteristics, field work is still required, owing to a lack of remotely sensed data or to the low resolution of readily available data relative to the fine scale of the habitat patches in the archipelago. While long-term data sets exist for water quality and weather, the lack of data concerning, for instance, the food resources of birds hampers more detailed studies of environmental effects on bird populations. Intensive studies of coastal bird species in different archipelago areas should be encouraged. The provision and free delivery of high-quality coastal data on bird populations and their habitats would greatly increase the capability of ecological modelling, as well as the management and conservation of coastal environments and communities.
International initiatives that promote open spatial data infrastructures and sharing are therefore highly regarded. To function effectively, international information networks, such as the biodiversity Clearing House Mechanism (CHM) under the CBD, need to be rooted at regional and local levels. Attention should also be paid to the processing of data for higher levels of the information hierarchy, so that data are synthesized and developed into high-quality knowledge applicable to management and conservation.
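The lagged link between breeding success and later population change can be illustrated on simulated series, here assuming a hypothetical recruitment age of three years; none of the data or effect sizes come from the thesis.

```python
import numpy as np

rng = np.random.default_rng(11)
years = 60
success = rng.normal(size=years)   # annual breeding-success index
lag_true = 3                       # illustrative species-specific recruitment age

# Population change responds to breeding success lag_true years earlier.
pop_change = np.empty(years)
pop_change[:lag_true] = rng.normal(size=lag_true)
pop_change[lag_true:] = (1.5 * success[:-lag_true]
                         + 0.3 * rng.normal(size=years - lag_true))

def lagged_corr(x, y, lag):
    """Correlation of x[t] with y[t + lag]."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

corrs = {lag: lagged_corr(success, pop_change, lag) for lag in range(1, 7)}
best_lag = max(corrs, key=corrs.get)
print(best_lag)
```

Scanning candidate lags recovers the simulated recruitment age, the same logic by which species-specific time lags are identified from monitoring series.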

Abstract:

The topic of this thesis is the simulation of a combination of several control and data assimilation methods, intended for controlling the quality of paper in a paper machine. Paper making is a very complex process, and the information obtained from the web is sparse. A paper web scanner can only measure a zig-zag path on the web, so an assimilation method is needed to produce estimates of the Machine Direction (MD) and Cross Direction (CD) profiles of the web; quality control is based on these estimates. There is an increasing need for intelligent methods to assist in data assimilation, and the aim of this thesis is to study how such intelligent assimilation methods affect paper web quality. The work is based on a paper web simulator developed in the TEKES-funded MASI NoTes project; the simulator is a valuable tool for comparing different assimilation methods. The thesis compares four data assimilation methods: a first-order Bayesian model estimator, a higher-order Bayesian estimator based on an ARMA model, a Fourier-transform-based Kalman filter estimator and a simple block estimator. The last can be considered close to current operational methods. Of these methods, the Bayesian, ARMA and Kalman estimators all seem to have advantages over the commercial one, with the Kalman and ARMA estimators showing the best overall performance.
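As a hedged, much-simplified stand-in for the estimators compared in the thesis, here is a scalar random-walk Kalman filter tracking a drifting machine-direction level from noisy scanner samples; all signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
steps = 200
# Slowly drifting machine-direction (MD) basis-weight level (illustrative).
true_md = np.cumsum(rng.normal(0.0, 0.05, size=steps)) + 80.0
meas = true_md + rng.normal(0.0, 1.0, size=steps)   # noisy scanner samples

# Scalar random-walk Kalman filter: x_t = x_{t-1} + w,  z_t = x_t + v.
q, r = 0.05 ** 2, 1.0 ** 2      # process and measurement noise variances
x, p = meas[0], 1.0             # initial state estimate and variance
estimates = []
for z in meas:
    p = p + q                   # predict: variance grows by process noise
    k = p / (p + r)             # Kalman gain
    x = x + k * (z - x)         # update with the innovation
    p = (1.0 - k) * p
    estimates.append(x)
estimates = np.asarray(estimates)

# Filtering should track the true level better than the raw samples do.
print(np.abs(estimates - true_md).mean() < np.abs(meas - true_md).mean())
```

The estimators in the thesis operate on full MD/CD profiles rather than a scalar level, but the predict/update structure is the same.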

Abstract:

Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages these methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have received the majority of attention in the field. In this thesis we focus on another type of learning problem: learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of this setting, we recover the bipartite ranking problem, which corresponds to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, in which ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how these techniques can be implemented efficiently. The contributions of this thesis are as follows.
First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, one of the best-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced-set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using this approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study, and demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternatives. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
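A sketch of the linear pairwise least-squares idea (not the authors' implementation): the sum of the squared pairwise loss over all pairs reduces to a Laplacian matrix, giving a closed-form solution; the data and regularization value below are illustrative.

```python
import numpy as np

def rankrls_fit(X, y, lam=1.0):
    """Linear pairwise regularized least squares: minimize
        sum_{i,j} ((y_i - y_j) - w.(x_i - x_j))^2 + lam * ||w||^2.
    The pairwise sums collapse to the Laplacian L = n*I - 1 1^T,
    giving the closed form w = (X'LX + lam*I)^{-1} X'Ly.
    """
    n, d = X.shape
    L = n * np.eye(n) - np.ones((n, n))
    A = X.T @ L @ X + lam * np.eye(d)
    b = X.T @ L @ y
    return np.linalg.solve(A, b)

def pairwise_accuracy(scores, y):
    """Fraction of pairs ordered correctly (pairs with tied labels skipped)."""
    correct = total = 0
    n = len(y)
    for i in range(n):
        for j in range(i + 1, n):
            if y[i] != y[j]:
                total += 1
                correct += (scores[i] - scores[j]) * (y[i] - y[j]) > 0
    return correct / total

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
y = X[:, 0] + 0.1 * rng.normal(size=100)   # ranking driven by one feature
w = rankrls_fit(X, y, lam=1.0)
print(round(pairwise_accuracy(X @ w, y), 3))
```

Minimizing the pairwise squared loss directly targets the ordering of example pairs, the criterion the pairwise approach to learning to rank optimizes.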

Abstract:

In a recent paper, Bai and Perron (1998) considered theoretical issues related to the limiting distribution of estimators and test statistics in the linear model with multiple structural changes. In this companion paper, we consider practical issues arising in empirical applications of the procedures. We first address the problem of estimating the break dates and present an efficient algorithm for obtaining global minimizers of the sum of squared residuals. This algorithm is based on the principle of dynamic programming and requires at most O(T^2) least-squares operations for any number of breaks. Our method can be applied to both pure and partial structural-change models. Second, we consider the problem of forming confidence intervals for the break dates under various hypotheses about the structure of the data and the errors across segments. Third, we address the issue of testing for structural changes under very general conditions on the data and the errors. Fourth, we address the issue of estimating the number of breaks. We present simulation results pertaining to the behavior of the estimators and tests in finite samples. Finally, a few empirical applications are presented to illustrate the usefulness of the procedures. All methods discussed are implemented in a GAUSS program available upon request for non-profit academic use.
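For the special case of a single break in a mean-shift model, the global minimization of the sum of squared residuals reduces to a scan over candidate break dates; this hedged sketch on simulated data illustrates the criterion that the dynamic-programming algorithm generalizes to multiple breaks.

```python
import numpy as np

def one_break_date(y, trim=5):
    """Estimate a single break date in a mean-shift model by globally
    minimizing the sum of squared residuals over candidate dates.
    `trim` keeps a minimum segment length at both ends.
    """
    n = len(y)
    best_ssr, best_t = np.inf, None
    for t in range(trim, n - trim):
        ssr = (((y[:t] - y[:t].mean()) ** 2).sum()
               + ((y[t:] - y[t:].mean()) ** 2).sum())
        if ssr < best_ssr:
            best_ssr, best_t = ssr, t
    return best_t

rng = np.random.default_rng(4)
# Mean shifts from 0 to 2.5 after observation 70 (illustrative data).
y = np.concatenate([rng.normal(0.0, 1.0, 70), rng.normal(2.5, 1.0, 50)])
print(one_break_date(y))
```

With m breaks, the same segment-SSR building blocks are combined by dynamic programming, which is what keeps the operation count at most O(T^2) regardless of m.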

Abstract:

Thesis digitised by the Division de la gestion de documents et des archives of the Université de Montréal.

Abstract:

Thesis digitised by the Division de la gestion de documents et des archives of the Université de Montréal.

Abstract:

This thesis presents methods for the analysis of count data in particular, and of discrete data in general. It forms part of an NSERC strategic project, named CC-Bio, whose objective is to assess the impact of climate change on the distribution of plant and animal species. After a brief introduction to the notions of biogeography and to generalized linear mixed models in Chapters 1 and 2 respectively, the thesis is organised around three main ideas. First, Chapter 3 introduces a new family of distributions whose components have Poisson or Skellam marginal distributions. This new specification makes it possible to incorporate relevant information about the nature of the correlations among all the components, and several properties of the distribution are presented. Unlike the multivariate Poisson distribution that it generalises, it can handle variables with positive and/or negative correlations. A simulation illustrates the estimation methods in the bivariate case. The results obtained with Bayesian Markov chain Monte Carlo (MCMC) methods indicate a fairly small relative bias, below 5%, for the regression coefficients of the means, whereas the coefficients of the covariance term appear somewhat more volatile. Second, Chapter 4 presents an extension of multivariate Poisson regression with gamma-distributed random effects. Since species-abundance data exhibit strong overdispersion, which would make the resulting estimators and standard errors misleading, we favour an approach based on Monte Carlo integration with importance sampling.
The approach remains the same as in the previous chapter: the idea is to simulate independent latent variables so as to work within the framework of a conventional generalized linear mixed model (GLMM) with gamma random effects. Although the assumption of a priori knowledge of the dispersion parameters may seem too strong, a sensitivity analysis based on goodness of fit demonstrates the robustness of the method. Third, the last chapter addresses the definition and construction of a concordance, and hence correlation, measure for zero-inflated data through Gaussian copula modelling. Unlike Kendall's tau, whose values lie in an interval whose bounds vary with the frequency of ties between pairs, this measure has the advantage of taking its values on (-1, 1). Initially introduced to model correlations between continuous variables, its extension to the discrete case involves certain restrictions: the new measure can be interpreted as the correlation between the continuous random variables whose discretisation yields our non-negative discrete observations. Two estimation methods for zero-inflated models are presented, in the frequentist and Bayesian settings, based respectively on maximum likelihood and Gauss-Hermite integration. Finally, a simulation study shows the robustness and the limits of our approach.
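The flavour of the construction can be sketched (hypothetically, not with the thesis's exact distribution) using the classical common-shock bivariate Poisson, whose margins are Poisson, and Skellam variables obtained as differences of independent Poissons; all rates below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000

# Common-shock ("trivariate reduction") construction: X1 and X2 share the
# component Z0, so both margins are Poisson and the correlation is positive.
z0 = rng.poisson(2.0, n)
x1 = rng.poisson(3.0, n) + z0   # marginal: Poisson(5)
x2 = rng.poisson(1.0, n) + z0   # marginal: Poisson(3)

# A difference of independent Poissons follows a Skellam distribution
# (mean 3 - 1 = 2 here); Skellam components are one route to the negative
# correlations that the thesis's generalisation accommodates.
s = rng.poisson(3.0, n) - rng.poisson(1.0, n)

corr = np.corrcoef(x1, x2)[0, 1]
print(round(corr, 2), round(s.mean(), 2))
```

The common-shock form only yields non-negative correlations, which is precisely the limitation of the multivariate Poisson distribution that the proposed family removes.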

Abstract:

The main focus of this research is the validation of a statistical method in pharmacoepidemiology. More precisely, we compare the results of a previous study, carried out with a case-control design nested in the cohort and used to account for average exposure to treatment, with: (i) the results obtained with a cohort design using a time-varying exposure variable, without adjustment for the time elapsed since exposure; (ii) the results obtained using cumulative exposure weighted by the recent past; and (iii) the results obtained with a Bayesian method. The covariates are estimated both with the classical approach and with a Bayesian nonparametric approach; for the latter, Bayesian model averaging is used to model the uncertainty surrounding the choice of model. The technique used in the Bayesian approach was proposed in 1997 but, to our knowledge, it has not previously been used with a time-dependent variable. To model the cumulative effect of time-varying exposure, in the classical approach the function assigning weights according to the recent past is estimated using regression splines. To allow comparison with the previous study, a cohort of persons with a diagnosis of hypertension is constructed from the RAMQ and Med-Echo databases. A Cox model including two time-varying variables is used. The time-varying variables considered in this thesis are the dependent variable (first cerebrovascular event) and one of the independent variables, namely exposure.
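A sketch of the weighted-cumulative-exposure idea, using an illustrative exponentially decaying weight function; in the study the weight function is estimated with regression splines rather than fixed in advance.

```python
import numpy as np

def weighted_cumulative_exposure(doses, weights):
    """WCE at each time t: sum of past doses weighted by time since dose.

    weights[k] is the weight of a dose taken k intervals ago, so recent
    doses can count more than distant ones.
    """
    T, K = len(doses), len(weights)
    wce = np.zeros(T)
    for t in range(T):
        for k in range(min(K, t + 1)):
            wce[t] += weights[k] * doses[t - k]
    return wce

# Illustrative weight function favouring the recent past.
weights = np.exp(-0.5 * np.arange(12))
doses = np.zeros(24)
doses[5] = 1.0                  # a single dose taken at interval 5
wce = weighted_cumulative_exposure(doses, weights)
print(wce[5], round(wce[10], 3))   # the dose's weight decays over time
```

In the Cox model, such a WCE series enters as the time-varying exposure covariate.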

Abstract:

The subjective experience accompanying a movement is constructed a posteriori by integrating different sources of information that influence one another at different times, both before and after the movement. This subjective experience is interpreted through a Bayesian attribution model to create an experience of agency and control over the movements of one's own body. To determine the contribution of the interaction between the parameters considered by the attribution model, and to investigate inter-individual differences in the formation of the subjective experience of movement, a series of 90 single pulses of transcranial magnetic stimulation (TMS) over the primary motor cortex (M1), each followed by several questions on the subjective experience related to the evoked movement, was administered to 20 normal participants. Objective movement data were collected by electromyography (EMG) and motion capture. Among other analyses, a regression model was fitted for each participant to assess what proportion of the subjective judgement could be explained by objective indices; this proportion varied greatly across participants. The results of the present study indicate an individual capacity to form subjective judgements that adequately reflect reality, as shown by the coherence between the different acuity measures and several variables measuring the subjective experience.
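The per-participant regression can be sketched on simulated data; the objective indices and effect sizes below are hypothetical, and the point is only that the explained variance (R^2) differs across participants.

```python
import numpy as np

def r_squared(X, y):
    """Proportion of variance in subjective judgements y explained by
    objective movement indices X, via ordinary least squares with intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(9)
r2_by_participant = []
for strength in np.linspace(0.2, 2.0, 20):        # 20 simulated participants
    X = rng.normal(size=(90, 2))                  # e.g. EMG amplitude, displacement
    y = strength * X[:, 0] + rng.normal(size=90)  # 90 TMS trials each
    r2_by_participant.append(r_squared(X, y))
print(round(min(r2_by_participant), 2), round(max(r2_by_participant), 2))
```

The spread of R^2 values mirrors the study's finding that the proportion of subjective judgement explained by objective indices varies greatly between participants.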