14 results for Observation (Scientific method)
at Université de Lausanne, Switzerland
Abstract:
The application of statistics to science is not a neutral act. Statistical tools have shaped, and were also shaped by, their objects. In the social sciences, statistical methods fundamentally changed research practice, making statistical inference its centerpiece. At the same time, textbook writers in the social sciences have transformed rivaling statistical systems into an apparently monolithic method that could be used mechanically. The idol of a universal method for scientific inference has been worshipped since the "inference revolution" of the 1950s. Because no such method has ever been found, surrogates have been created, most notably the quest for significant p values. This form of surrogate science fosters delusions and borderline cheating and has done much harm, creating, for one, a flood of irreproducible results. Proponents of the "Bayesian revolution" should be wary of chasing yet another chimera: an apparently universal inference procedure. A better path would be to promote both an understanding of the various devices in the "statistical toolbox" and informed judgment in selecting among them.
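The "surrogate" of mechanical significance testing can be illustrated with a short simulation (a sketch, not taken from the abstract): when many pure-noise comparisons are screened against a fixed p < 0.05 threshold, a predictable fraction come out "significant" and feed the flood of irreproducible results.

```python
import math
import random

random.seed(1)

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means, using a large-sample
    z approximation (adequate for illustration only)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))

# 1000 "studies" in which the null hypothesis is true by construction:
# both groups are drawn from the same normal distribution.
trials = 1000
false_positives = sum(
    1
    for _ in range(trials)
    if two_sample_p([random.gauss(0, 1) for _ in range(30)],
                    [random.gauss(0, 1) for _ in range(30)]) < 0.05
)
rate = false_positives / trials

# Roughly 5% of pure-noise studies clear the mechanical p < 0.05 bar,
# each one a candidate irreproducible "finding".
print(round(rate, 3))
```

Nothing in the simulation is "cheating"; the false discoveries arise from the threshold ritual alone, before any selective reporting is added.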
Abstract:
Pain is frequent in the intensive care unit (ICU) and its management is a major nursing responsibility. Assessment of pain is a prerequisite for appropriate pain management. However, pain assessment is difficult when patients are unable to communicate about their experience, and nurses must base their evaluation on external signs. Clinical practice guidelines highlight the need to use behavioral scales that have been validated for nonverbal patients. Current behavioral pain tools for ICU patients unable to communicate may not be appropriate for nonverbal brain-injured ICU patients, as these patients demonstrate specific responses to pain. This study aimed to identify, describe and validate pain indicators and descriptors in brain-injured ICU patients. A mixed multiphase method design with a quantitative dominant phase was chosen for this study. The first phase aimed to identify indicators and descriptors of pain for nonverbal brain-injured ICU patients using data from three sources: an integrative literature review, a consultation using the nominal group technique with 18 experienced clinicians (12 nurses and 6 physicians), and the results of an observational pilot study with 10 traumatic brain-injured patients. This first phase identified 6 indicators and 47 behavioral, vocal and physiological descriptors of pain that could be included in a pain assessment tool for this population. The sequential second phase tested the psychometric properties of the previously identified indicators and descriptors.
Content validity was tested with 10 clinical and 4 scientific experts for pertinence and comprehensibility using a structured questionnaire. This process selected 33 of the 47 previously identified descriptors and validated the 6 indicators. The psychometric properties of the descriptors and indicators were then tested at rest, during non-nociceptive stimulation, and during nociceptive stimulation (turning) in a sample of 116 brain-injured ICU patients hospitalized in two university centers. Results showed important variations in the descriptors observed during nociceptive stimulation, probably due to the heterogeneity of patients' levels of consciousness. Ten descriptors were excluded because they were observed in less than 5% of cases during nociceptive stimulation or their reliability was insufficient. All physiological descriptors were deleted because they showed little variability and inter-observer reliability was lacking. Concomitant validity, testing the association between patients' self-reports of pain and measures performed with the descriptors, was acceptable during nociceptive stimulation (rs = 0.527, p = 0.003, n = 30). However, convergent validity (testing the association between the nurses' pain assessment and measures made with the descriptors) and divergent validity (testing the ability of the indicators to discriminate between rest and nociceptive stimulation) varied according to the level of consciousness. These results highlight the need to study pain descriptors in brain-injured patients with different levels of consciousness, and to take the heterogeneity of this population into account in the design of a pain assessment tool for nonverbal brain-injured ICU patients.
Abstract:
We present a new method for the lysis of single cells in continuous flow, in which cells are sequentially trapped, lysed and released in an automatic process. Using optimized frequencies, dielectrophoretic trapping allows cells to be exposed in a reproducible way to high electrical fields for long durations, giving good control over the lysis parameters. In situ evaluation of cytosol extraction from single cells has been studied for Chinese hamster ovary (CHO) cells through the out-diffusion of fluorescent molecules at different voltage amplitudes. A diffusion model is proposed that correlates this out-diffusion with the total area of the created pores, which depends on the potential drop across the cell membrane; the model thereby enables evaluation of the total pore area in the membrane. Dielectrophoretic trapping is no longer effective after lysis because of the reduced conductivity inside the cells, leading to cell release. The trapping time is linked to the time required for cytosol extraction and can thus provide additional validation of effective cytosol extraction for non-fluorescent cells. Furthermore, the application of a single voltage for both trapping and lysis provides a fully automatic process of cell trapping, lysis and release, allowing the device to operate in continuous flow without human intervention.
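One common form of such a diffusion model can be sketched as follows (a minimal first-order sketch; the functional form, cell geometry and all parameter values here are assumptions for illustration, not taken from the study): if a cytosolic dye escapes through pores by Fickian diffusion, its concentration decays exponentially, and the measured decay time constant yields the total pore area.

```python
import math

def total_pore_area(tau_s, cell_radius_m=7.5e-6,
                    membrane_thickness_m=5e-9, diffusivity_m2_s=4e-10):
    """Estimate the total created pore area from the measured out-diffusion
    time constant tau of a cytosolic dye, assuming first-order escape:
        dC/dt = -(D * A_p / (V * d)) * C  =>  C(t) = C0 * exp(-t / tau)
    so that A_p = V * d / (D * tau). All default values are illustrative.
    """
    volume = (4.0 / 3.0) * math.pi * cell_radius_m ** 3  # spherical cell
    return volume * membrane_thickness_m / (diffusivity_m2_s * tau_s)

# A faster fluorescence decay (smaller tau) implies a larger total pore area.
a_fast = total_pore_area(tau_s=2.0)
a_slow = total_pore_area(tau_s=20.0)
print(a_fast > a_slow)  # True
```

In this simplified form, pore area is inversely proportional to the decay time constant, which is why the out-diffusion measurement can stand in for a direct pore-size measurement.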
Abstract:
BACKGROUND: There is uncertain evidence of the effectiveness of 5-aminosalicylates (5-ASA) in inducing and maintaining response and remission of active Crohn's disease (CD), and weak evidence to support their use in post-operative CD. AIM: To assess the frequency and determinants of 5-ASA use in CD patients and to evaluate physicians' perception of clinical response and side effects of 5-ASA. METHODS: Data from the Swiss Inflammatory Bowel Disease Cohort, which has collected data on a large sample of IBD patients since 2006, were analysed. Information from questionnaires regarding the utilisation of treatments and the perception of response to 5-ASA was evaluated. Logistic regression modelling was performed to identify factors associated with 5-ASA use. RESULTS: Of 1420 CD patients, 835 (59%) were treated with 5-ASA at some point between diagnosis and latest follow-up. Disease duration >10 years and colonic location were both significantly associated with 5-ASA use. 5-ASA treatment was judged successful in 46% (378/825) of treatment episodes (physician global assessment). Side effects prompting discontinuation of therapy were found in 12% (98/825) of episodes in which 5-ASA had been stopped. CONCLUSIONS: 5-Aminosalicylates were frequently prescribed to patients with Crohn's disease in the Swiss IBD cohort. This observation stands in contrast to the scientific evidence demonstrating a very limited role for 5-ASA compounds in the treatment of Crohn's disease.
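The logistic regression step can be sketched as follows. The data below are synthetic and all coefficient values are illustrative assumptions; only the two reported predictors (disease duration >10 years, colonic location) are taken from the abstract.

```python
import math
import random

random.seed(0)

# Synthetic patient records (illustrative, NOT cohort data):
# x1 = disease duration > 10 years, x2 = colonic location, y = 5-ASA ever used.
def simulate_patient():
    x1 = 1.0 if random.random() < 0.5 else 0.0
    x2 = 1.0 if random.random() < 0.4 else 0.0
    logit = -0.5 + 0.9 * x1 + 0.7 * x2  # assumed "true" associations
    y = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    return x1, x2, y

data = [simulate_patient() for _ in range(2000)]

# Fit a logistic regression by plain gradient ascent on the log-likelihood.
b0 = b1 = b2 = 0.0
for _ in range(300):
    g0 = g1 = g2 = 0.0
    for x1, x2, y in data:
        p = 1 / (1 + math.exp(-(b0 + b1 * x1 + b2 * x2)))
        err = y - p
        g0 += err
        g1 += err * x1
        g2 += err * x2
    n = len(data)
    b0 += g0 / n
    b1 += g1 / n
    b2 += g2 / n

# Odds ratios for the two factors the abstract reports as significant.
print("OR duration>10y:", round(math.exp(b1), 2))
print("OR colonic location:", round(math.exp(b2), 2))
```

Exponentiating a fitted coefficient gives the odds ratio for that factor, which is the usual way such determinants of use are reported.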
Abstract:
BACKGROUND AND OBJECTIVE: In bladder cancer, conventional white light endoscopic examination of the bladder does not provide adequate information about the presence of "flat" urothelial lesions such as carcinoma in situ. In the present investigation, we examine a new technique for the photodetection of such lesions by imaging protoporphyrin IX (PpIX) fluorescence following topical application of 5-aminolevulinic acid (ALA). STUDY DESIGN/MATERIALS AND METHODS: Several hours after bladder instillation of an aqueous solution of ALA in 34 patients, a krypton ion laser or a filtered xenon arc lamp was used to excite PpIX fluorescence. Tissue samples for histological analysis were taken while observing the bladder wall either by means of a video camera or by direct endoscopic observation. RESULTS: A good correlation was found between PpIX fluorescence and the histopathological diagnosis. Of a total of 215 biopsies, 143 in fluorescent and 72 in nonfluorescent areas, all tumors visible on white light cystoscopy appeared bright red with the photodetection technique. In addition, this method permitted the discovery of 47 carcinomatous lesions unsuspected on white light observation, of which 40% were carcinoma in situ. CONCLUSION: PpIX fluorescence induced by instillation of 5-ALA into the bladder is an efficient method of mapping the mucosa in bladder carcinoma.
Abstract:
In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) is a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. A further complication may stem from the observation that, in some cases, certain numbers of contributors are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that output a single, fixed number of contributors can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic, using Bayes' theorem, and provides a probability distribution over a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N, the number of contributors, and the actual value taken by N.
Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to show that setting the number of contributors to a mixed crime stain in probabilistic terms is, under the conditions assumed in this study, preferable to a decision policy that relies on categorical assumptions about N.
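The two strategies can be contrasted in a small sketch. The allele frequencies, the uniform prior and the single-locus setting are illustrative assumptions; the paper's Bayesian-network implementation is richer. The deterministic rule returns the minimum N compatible with the observed alleles, while the probabilistic rule returns a posterior distribution over candidate values of N.

```python
import math
import random

random.seed(42)

# Illustrative allele frequencies at a single locus (not real casework data).
freqs = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}
alleles = list(freqs)
weights = list(freqs.values())

def p_distinct(k, n, sims=20000):
    """Monte Carlo estimate of P(exactly k distinct alleles | n contributors),
    treating the 2n contributed alleles as independent draws."""
    hits = 0
    for _ in range(sims):
        if len(set(random.choices(alleles, weights, k=2 * n))) == k:
            hits += 1
    return hits / sims

observed_distinct = 4            # alleles seen in the mixed stain
candidates = [1, 2, 3, 4, 5]     # numbers of contributors considered

# Deterministic strategy: minimum N able to 'explain' the observed alleles.
n_min = math.ceil(observed_distinct / 2)

# Probabilistic strategy: posterior over N via Bayes' theorem (uniform prior).
likelihoods = [p_distinct(observed_distinct, n) for n in candidates]
total = sum(likelihoods)
posterior = [lik / total for lik in likelihoods]

print("minimum N:", n_min)
for n, p in zip(candidates, posterior):
    print(f"P(N={n} | alleles) = {p:.3f}")
```

Note how the posterior assigns zero probability to any N that cannot explain the alleles (the inferential impasse the deterministic rule runs into) while still spreading probability over larger, fully compatible values of N.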
Abstract:
The purpose of this paper is to describe the development, and to test the reliability, of a new method for health service needs assessment called INTERMED. The INTERMED integrates the biopsychosocial aspects of disease and the relationship between patient and health care system in a comprehensive scheme, and reflects an operationalized conceptual approach to case mix or case complexity. The method was developed to enhance interdisciplinary communication between (para)medical specialists and to provide a way to describe case complexity for clinical, scientific and educational purposes. First, a feasibility study (N = 21 patients) was conducted, which included double scoring and discussion of the results. This led to a version of the instrument on which two interrater reliability studies were performed. In study 1, the INTERMED was double scored for 14 patients admitted to an internal medicine ward by a psychiatrist and an internist on the basis of a joint interview conducted by both. In study 2, two clinicians separately double scored the INTERMED, on the basis of medical charts, for 16 patients referred to the outpatient psychiatric consultation service. Averaged over both studies, in 94.2% of all ratings there was no important difference (i.e., more than 1 point) between the raters. As a research interview, the INTERMED takes about 20 minutes; as part of the whole process of history taking, it takes about 15 minutes. In both studies, the results suggested improvements. Analyses of study 1 revealed considerable agreement on most items; some items were improved. Also, the reference point for the prognoses was changed so that it reflected both short- and long-term prognoses. Analyses of study 2 showed that in this setting less agreement between the raters was obtained, because the raters were less experienced and the scoring procedure was more susceptible to differences.
Some improvements, mainly to the anchor points, were specified which may further enhance interrater reliability. The INTERMED proves to be a reliable method for classifying patients' care needs, especially when used by experienced raters scoring from a patient interview. It can be a useful tool in assessing patients' care needs, as well as the level of needed adjustment between general and mental health service delivery. The INTERMED is easily applicable in the clinical setting at low time cost.
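The agreement criterion used in these studies (raters differing by no more than one point on an item) can be computed as follows; the item scores below are illustrative, not study data.

```python
def within_one_point(rater_a, rater_b):
    """Share of items on which two raters differ by at most one point:
    the 'no important difference' criterion reported for the INTERMED
    reliability studies."""
    pairs = list(zip(rater_a, rater_b))
    return sum(1 for a, b in pairs if abs(a - b) <= 1) / len(pairs)

# Illustrative item scores for two raters (hypothetical 0-3 scale).
scores_a = [0, 1, 2, 3, 1, 2, 0, 3, 2, 1]
scores_b = [0, 2, 2, 1, 1, 2, 1, 3, 2, 0]
print(within_one_point(scores_a, scores_b))  # 0.9
```

Percent within-one-point agreement is a lenient criterion; chance-corrected statistics such as weighted kappa are often reported alongside it.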
Abstract:
Gene correction at the site of the mutation in the chromosome is the definitive way to truly cure a genetic disease. Oligonucleotide (ODN)-mediated gene repair technology uses an ODN perfectly complementary to the genomic sequence except for a mismatch at the mutated base. The endogenous repair machinery of the targeted cell then mediates substitution of the desired base in the gene, resulting in a completely normal sequence. In principle, this avoids the potential gene silencing or random integration associated with common viral gene augmentation approaches, and leaves the regulation of expression of the therapeutic protein intact. The eye is a particularly attractive target for gene repair because of its unique features (a small, easily accessible organ with low diffusion into the systemic circulation). Moreover, therapeutic effects on visual impairment could be obtained with modest levels of repair. This chapter describes in detail the optimized method to target active ODNs to the nuclei of photoreceptors in the neonatal mouse using (1) application of an electric current at the eye surface (saline transpalpebral iontophoresis) (2) combined with an intravitreous injection of ODNs, as well as the experimental methods for (3) the dissection of adult neural retinas, (4) their immunolabelling, and (5) flat-mounting for direct observation of photoreceptor survival, a relevant criterion of treatment outcome for retinal degeneration.
Abstract:
Ocular toxoplasmosis is the principal cause of posterior uveitis and a leading cause of blindness. Animal models are required to improve our understanding of the pathogenesis of this disease. The method currently used for the detection of retinal cysts in animals involves microscopic observation of all the sections from infected eyes. However, this method is time-consuming and lacks sensitivity. We have developed a rapid, sensitive method for observing retinal cysts in mice infected with Toxoplasma gondii. It combines flat-mounting of the retina, a compromise between macroscopic observation and global analysis of this tissue, with the use of an avirulent recombinant strain of T. gondii expressing the Escherichia coli beta-galactosidase gene, visually detectable at the submacroscopic level. A single-cyst unilateral infection was found in six of 17 mice killed within 28 days of infection, whereas a bilateral infection was found in only one mouse. There was no correlation between the number of brain cysts and ocular infection.
Abstract:
Background: The issue of gender is acknowledged as a key issue for the AIDS epidemic. World AIDS Conferences (WAC) have constituted a major discursive space for the epidemic. We sought to establish the balance regarding gender in the AIDS scientific discourse by following its development in the published proceedings of the WAC. Fifteen successive WAC (1989-2012) served to establish a "barometer" of scientific interest in heterosexual and homo/bisexual men and women throughout the epidemic. It was hypothesised that, as in other domains of sexual and reproductive health, heterosexual men would be "forgotten" partners. Method: Abstracts from each conference were entered in electronic form into an Access database. Queries were created to generate five categories of interest and to monitor their annual frequency. All abstract titles including the term "men" or "women" were identified. Collections of synonyms were systematically and iteratively developed in order to further classify abstracts according to whether they included terms referring to "homo/bisexual" or "heterosexual". Reference to "mother-to-child transmission" (MTCT) was also flagged. Results: The category including "men" but without additional reference to "homo/bisexual" (i.e. referring to men in general and/or to heterosexual men) consistently appears four times less often than the equivalent category for women. Excluding abstracts on women and MTCT has little impact on this difference. Abstracts including reference to both "men" and "homo/bisexual" emerge as the second most frequent category; the presence of the equivalent category for women is minimal. Conclusion: The hypothesised absence of heterosexual men in the AIDS discourse was confirmed. Although the relative presence of homo/bisexual men and women as a focal subject may be explained by epidemiological data, this is not so in the case of heterosexual men and women. This imbalance has consequences for HIV prevention.
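The category queries can be sketched as simple keyword matching over abstract titles. The synonym sets below are illustrative stand-ins for the iteratively developed collections described in the abstract, not the actual query terms.

```python
# Illustrative synonym sets (assumed, not the study's actual collections).
MEN = {"men", "man", "male", "males"}
WOMEN = {"women", "woman", "female", "females"}
HOMO_BI = {"homosexual", "bisexual", "gay", "msm"}
MTCT = {"mtct", "mother-to-child", "vertical"}

def categorize(title):
    """Assign an abstract title to categories of interest by keyword match."""
    words = set(title.lower().replace(",", " ").split())
    cats = set()
    if words & MEN:
        cats.add("homo/bisexual men" if words & HOMO_BI
                 else "men (general/heterosexual)")
    if words & WOMEN:
        cats.add("women")
    if words & MTCT:
        cats.add("MTCT")
    return cats

print(categorize("Condom use among bisexual men"))
print(categorize("MTCT prevention in pregnant women"))
```

Running such queries per conference year, and counting titles per category, yields the "barometer" of relative scientific attention described above.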
Abstract:
Disasters are often perceived as sudden, random events. While the triggers may be sudden, disasters themselves are the result of an accumulation of the consequences of inappropriate actions and decisions, and of global change. Advocacy tools are needed to modify this perception of risk. Quantitative methods have been developed to identify the distribution and the underlying factors of risk.

Disaster risk results from the intersection of hazards, exposure and vulnerability. The frequency and intensity of hazards can be influenced by climate change or by the decline of ecosystems; population growth increases exposure, while changes in the level of development affect vulnerability. Because each of these components may change, risk is dynamic and should be reassessed periodically by governments, insurance companies and development agencies. At the global level, these analyses are often performed using databases of reported losses. Our results show that such databases are likely to be biased, in particular by improvements in access to information. They are not exhaustive and give no information on exposure, intensity or vulnerability. A new approach, independent of reported losses, is therefore necessary.

The research presented here was mandated by the United Nations and by agencies working in development and the environment (UNDP, UNISDR, GTZ, UNEP and IUCN). These organizations needed a quantitative assessment of the underlying factors of risk, to raise awareness among policymakers and to prioritize disaster risk reduction projects.

The method is based on geographic information systems, remote sensing, databases and statistical analysis. It required a large amount of data (1.7 TB covering both the physical environment and socio-economic parameters) and several thousand hours of processing. A comprehensive risk model was developed to reveal the distribution of hazards, exposure and risk, and to identify underlying risk factors, for several hazards (floods, tropical cyclones, earthquakes and landslides). Two multiple-risk indexes were generated to compare countries. The results include an evaluation of the roles of hazard intensity, exposure, poverty and governance in the pattern and trends of risk. Vulnerability factors appear to change depending on the type of hazard and, contrary to exposure, their weight decreases as intensity increases.

At the local level, the method was tested to highlight the influence of climate change and ecosystem decline on hazards. In northern Pakistan, deforestation increases landslide susceptibility. Research in Peru (based on satellite imagery and ground data collection) revealed rapid glacier retreat, provided an assessment of the remaining ice volume and outlined scenarios of possible evolution.

These results were presented to various audiences, including 160 governments. The results and generated data are available online through an open-source SDI (http://preview.grid.unep.ch). The method is flexible and easily transferable to other scales and issues, with good prospects for adaptation to other research areas. Risk characterization at the global level and identification of the role of ecosystems in disaster risk are rapidly developing fields. This research revealed many challenges; some were resolved, while others remain as limitations. However, it is clear that the level of development, and moreover unsustainable development, configures a large part of disaster risk, and that the dynamics of risk are governed primarily by global change.
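The core of such a risk model is often summarized as Risk = Hazard x Exposure x Vulnerability. A toy sketch of this multiplicative form (all values illustrative; the global model described above involves GIS layers, remote sensing and per-hazard calibration):

```python
def disaster_risk(hazard_frequency, exposed_population, vulnerability):
    """Toy multiplicative risk model: Risk = Hazard x Exposure x Vulnerability.
    All inputs are illustrative placeholders for calibrated model layers."""
    return hazard_frequency * exposed_population * vulnerability

# Two hypothetical territories with identical hazard and exposure:
risk_high_vuln = disaster_risk(0.25, 1_000_000, 0.75)
risk_low_vuln = disaster_risk(0.25, 1_000_000, 0.25)
print(risk_high_vuln / risk_low_vuln)  # 3.0
```

The multiplicative form makes the abstract's point concrete: with hazard and exposure held fixed, differences in vulnerability (driven by the level of development) alone rescale the resulting risk.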
Abstract:
This thesis is composed of three main parts. The first consists of a state of the art of the different notions that are significant to understanding the elements surrounding art authentication in general, and signatures in particular, and that the author deemed necessary to fully grasp the microcosm that makes up this particular market. Readers with a solid knowledge of the art and expertise area who are particularly interested in the present study are advised to advance directly to Chapter 4. The expertise of the signature, its reliability, and the factors impacting the expert's conclusions are brought forward. The final aim of the state of the art is to offer a general list of recommendations based on an exhaustive review of the current literature, given in light of all of the exposed issues. These guidelines are specifically formulated for the expertise of signatures on paintings, but can also be applied to wider themes in the area of signature examination. The second part of this thesis covers the experimental stages of the research. It consists of the method developed to authenticate painted signatures on works of art. This method is articulated around several main objectives: defining measurable features on painted signatures and establishing their relevance in order to determine the separation capacity between groups of authentic and simulated signatures. For the first time, numerical analyses of painted signatures have been obtained and used to attribute their authorship to given artists. An in-depth discussion of the developed method constitutes the third and final part of this study. It evaluates the opportunities and constraints of the method when applied by signature and handwriting experts in forensic science. The outlines presented below summarize the aims and main themes addressed in each chapter, allowing a rapid overview of the study.
Part I - Theory Chapter 1 exposes legal aspects surrounding the authentication of works of art by art experts. The definition of what is legally authentic, the quality and types of the experts that can express an opinion concerning the authorship of a specific painting, and standard deontological rules are addressed. The practices applied in Switzerland will be specifically dealt with. Chapter 2 presents an overview of the different scientific analyses that can be carried out on paintings (from the canvas to the top coat). Scientific examinations of works of art have become more common, as more and more museums equip themselves with laboratories, thus an understanding of their role in the art authentication process is vital. The added value that a signature expertise can have in comparison to other scientific techniques is also addressed. Chapter 3 provides a historical overview of the signature on paintings throughout the ages, in order to offer the reader an understanding of the origin of the signature on works of art and its evolution through time. An explanation is given on the transitions that the signature went through from the 15th century on and how it progressively took on its widely known modern form. Both this chapter and chapter 2 are presented to show the reader the rich sources of information that can be provided to describe a painting, and how the signature is one of these sources. Chapter 4 focuses on the different hypotheses the FHE must keep in mind when examining a painted signature, since a number of scenarios can be encountered when dealing with signatures on works of art. The different forms of signatures, as well as the variables that may have an influence on the painted signatures, are also presented. Finally, the current state of knowledge of the examination procedure of signatures in forensic science in general, and in particular for painted signatures, is exposed. 
The state of the art of the assessment of the authorship of signatures on paintings is established and discussed in light of the theoretical facets mentioned previously. Chapter 5 considers key elements that can have an impact on the FHE during his or her examinations. This includes a discussion of elements such as the skill, confidence and competence of an expert, as well as the potential bias effects he or she might encounter. The chapter also seeks a better understanding of the elements surrounding handwriting examinations in order, in turn, to better communicate results and conclusions to an audience. Chapter 6 reviews the judicial acceptance of signature analysis in courts and closes the state of the art section of this thesis. This chapter brings forward the current issues pertaining to the appreciation of this expertise by the non-forensic community, and discusses the increasing number of claims of the unscientific nature of signature authentication. The necessity of aiming for more scientific, comprehensive and transparent authentication methods is discussed. The theoretical part of this thesis is concluded by a series of general recommendations for forensic handwriting examiners, specifically for the expertise of signatures on paintings. These recommendations stem from the exhaustive review of the literature and the issues it raises, and can also be applied to the traditional examination of signatures (on paper). Part II - Experimental part Chapter 7 describes and defines the sampling, extraction and analysis phases of the research. The sampling stage of artists' signatures and their respective simulations is presented, followed by the steps that were undertaken to extract and determine sets of characteristics, specific to each artist, that describe their signatures. The method is based on a study of five artists and a group of individuals acting as forgers for the sake of this study. 
Finally, the procedure for analyzing these characteristics to assess the strength of evidence, based on a Bayesian reasoning process, is presented. Chapter 8 outlines the results concerning both the artist and simulation corpuses after their optical observation, followed by the results of the analysis phase of the research. The feature selection process and the likelihood ratio evaluation are the main themes addressed. The discrimination power between the two corpuses is illustrated through multivariate analysis. Part III - Discussion Chapter 9 discusses the materials, the methods, and the obtained results of the research. The opportunities, but also the constraints and limits, of the developed method are exposed. Future work that can be carried out subsequent to the results of the study is also presented. Chapter 10, the last chapter of this thesis, proposes a strategy to incorporate the model developed in the preceding chapters into traditional signature expertise procedures. The strength of this expertise is thus discussed in conjunction with the traditional conclusions reached by forensic handwriting examiners. Finally, this chapter summarizes and advocates a list of formal recommendations for good practice for handwriting examiners. In conclusion, the research highlights the interdisciplinary nature of the examination of signatures on paintings. The current state of knowledge of the judicial quality of art experts, along with the scientific and historical analysis of paintings and signatures, is overviewed to give the reader a sense of the different factors that have an impact on this particular subject. The hesitant acceptance of forensic signature analysis in court, also presented in the state of the art, explicitly demonstrates the necessity of a better recognition of signature expertise by courts of law. 
This general acceptance, however, can only be achieved by producing high-quality results through a well-defined examination process. This research offers an original approach to attributing a painted signature to a given artist: for the first time, a probabilistic model used to measure the discriminative potential between authentic and simulated painted signatures is studied. The opportunities and limits that lie within this method of scientifically establishing the authorship of signatures on works of art are thus presented. In addition, the second key contribution of this work proposes a procedure to combine the developed method with that traditionally used by signature experts in forensic science. Such an implementation into holistic traditional signature examination casework is a large step toward providing the forensic, judicial and art communities with a soundly based reasoning framework for the examination of signatures on paintings. The framework and preliminary results associated with this research have been published (Montani, 2009a) and presented at international forensic science conferences (Montani, 2009b; Montani, 2012).
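The likelihood-ratio evaluation at the heart of the experimental part can be illustrated with a minimal sketch. The feature model, the numbers, and the univariate Gaussian assumption below are all hypothetical placeholders for illustration; the thesis itself uses multivariate analyses of measured signature characteristics.

```python
from statistics import NormalDist

# Hypothetical feature model: a single measurable signature characteristic
# (e.g. a normalized stroke-width ratio), summarized by a Gaussian fitted
# to each corpus. The parameters are illustrative, not from the thesis.
authentic = NormalDist(mu=0.80, sigma=0.05)   # artist's reference signatures
simulated = NormalDist(mu=0.65, sigma=0.10)   # forgers' simulations

def likelihood_ratio(x: float) -> float:
    """LR = p(feature | authentic) / p(feature | simulated).
    LR > 1 supports authorship by the artist; LR < 1 supports simulation."""
    return authentic.pdf(x) / simulated.pdf(x)

# A questioned signature whose feature value falls near the artist's
# typical range yields LR > 1; one far from it yields LR < 1.
lr_close = likelihood_ratio(0.79)
lr_far = likelihood_ratio(0.55)
```

In the Bayesian reasoning process the thesis refers to, such a ratio updates the prior odds of authorship into posterior odds; the expert reports the strength of the evidence (the LR), not the posterior itself.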
Resumo:
INTRODUCTION: This article is part of a research study on the organization of primary health care (PHC) for mental health in two of Quebec's remote regions. It introduces a methodological approach based on information found in health records, for assessing the quality of PHC offered to people suffering from depression or anxiety disorders. METHODS: Quality indicators were identified from evidence and case studies were reconstructed using data collected in health records over a 2-year observation period. Data collection was developed using a three-step iterative process: (1) feasibility analysis, (2) development of a data collection tool, and (3) application of the data collection method. The adaptation of quality-of-care indicators to remote regions was appraised according to their relevance, measurability and construct validity in this context. RESULTS: As a result of this process, 18 quality indicators were shown to be relevant, measurable and valid for establishing a critical quality appraisal of four recommended dimensions of PHC clinical processes: recognition, assessment, treatment and follow-up. CONCLUSIONS: There is not only an interest in the use of health records to assess the quality of PHC for mental health in remote regions but also a scientific value in the rigorous and meticulous methodological approach developed in this study. From the perspective of stakeholders in the PHC system of care in remote areas, quality indicators are credible and offer potential for transferability to other contexts. This study provides information with the potential to identify gaps in care and to implement solutions adapted to the context.