131 results for Mathematical Investigation
Abstract:
The present study examined the relationship between the depth of therapists' defense interpretations, patient defensive functioning, and the therapeutic alliance in a sample of 36 patients undergoing short-term dynamic psychotherapy. Defense interpretation depth was defined as the degree to which therapist interpretations contained information regarding the motivation for patient defenses and the historical origins of the defensive processes (Greenson, 1967). Mean depth of interpretation was compared between sessions that were identified beforehand as either high-alliance or low-alliance sessions using the Helping Alliance Questionnaire (HAq-II; Luborsky et al., 1996). Results indicated that defensive functioning was correlated with defense interpretation depth in low-alliance sessions. Moreover, mean depth of interpretation was also higher in low-alliance sessions, pointing to the possible "destabilizing" effects that these interpretations may have on both defensive functioning and the therapeutic alliance. These results are discussed within the context of previous studies of therapeutic technique in dynamic psychotherapy.
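The group comparison and within-group correlation described above can be sketched in a few lines of Python. The session values below are invented for illustration (they are not the study's data); the computation simply compares mean interpretation depth across alliance groups and correlates depth with defensive functioning within the low-alliance group:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-session records:
# (alliance_group, interpretation_depth, defensive_functioning)
sessions = [
    ("low", 4.2, 3.1), ("low", 3.8, 2.9), ("low", 4.5, 3.4),
    ("high", 2.1, 4.0), ("high", 2.6, 4.2), ("high", 2.3, 3.9),
]

low = [s for s in sessions if s[0] == "low"]
high = [s for s in sessions if s[0] == "high"]

# Compare mean interpretation depth between the two alliance groups
mean_low = statistics.fmean(s[1] for s in low)
mean_high = statistics.fmean(s[1] for s in high)

# Correlate depth with defensive functioning within low-alliance sessions
r_low = pearson_r([s[1] for s in low], [s[2] for s in low])
```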
Abstract:
For the past two and a half years, the Dahlia team has taken part in a project, developed by Dr Ch. Bryois, whose main objectives (admitting patients on their first stay, for an average duration of 15 days) ran counter to institutional habits. Its spokespeople explain how the team experienced the implementation of this model, the problems encountered, and the advances it introduced into their practice and theoretical elaboration. In an interview, the initiator of this unit gave us his own interpretation of this experience.
Abstract:
DNA is nowadays swabbed routinely to investigate serious and volume crimes, but research remains scarce when it comes to determining the criteria that may impact the success rate of DNA swabs taken from different surfaces and in different situations. To investigate these criteria under fully operational conditions, DNA analysis results of 4772 swabs taken by the forensic unit of a police department in Western Switzerland over a 2.5-year period (2012-2014) in volume crime cases were considered. A representative random sample of 1236 swab analyses was extensively examined and codified, describing several criteria such as whether the swabbing was performed at the scene or in the lab, the zone of the scene where it was performed, the kind of object or surface that was swabbed, whether the target specimen was a touch surface or a biological fluid, and whether the swab targeted a single surface or combined different surfaces. The impact of each criterion, and of their combination, was assessed with regard to the success rate of DNA analysis, measured through the quality of the resulting profile and whether the profile resulted in a hit in the national database. Results show that some situations, such as swabs taken on door and window handles, have a higher success rate than average. Conversely, other situations lead to a marked decrease in the success rate, which should discourage further analyses of such swabs. Results also confirm that targeting a DNA swab on a single surface is preferable to swabbing different surfaces with the intent of aggregating cells deposited by the offender. Such results assist in predicting the chance that the analysis of a swab taken in a given situation will lead to a positive result. The study could therefore inform an evidence-based approach to decision-making at the crime scene (what to swab or not) and at the triage step (what to analyse or not), thus contributing to saving resources and increasing the efficiency of forensic science efforts.
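The per-situation success rates described above can be tabulated with a short sketch. The swab records below are hypothetical placeholders, not the study's 1236 coded cases; each record pairs a surface category with whether a usable profile was obtained:

```python
from collections import defaultdict

# Hypothetical coded swab records: (surface_category, profile_obtained)
swabs = [
    ("door_handle", True), ("door_handle", True), ("door_handle", False),
    ("window_handle", True), ("window_handle", True),
    ("mixed_surfaces", False), ("mixed_surfaces", False), ("mixed_surfaces", True),
]

def success_rates(records):
    """Fraction of swabs per category that yielded a usable DNA profile."""
    counts = defaultdict(lambda: [0, 0])  # category -> [successes, total]
    for category, success in records:
        counts[category][0] += int(success)
        counts[category][1] += 1
    return {cat: s / n for cat, (s, n) in counts.items()}

rates = success_rates(swabs)
```

A triage policy could then compare each category's rate against the overall average to decide which situations justify analysis.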
Abstract:
A growing body of scientific literature recurrently indicates that crime and forensic intelligence influence how crime scene investigators make decisions in their practice. This study further scrutinises this intelligence-led view of crime scene examination. It analyses results obtained from two questionnaires. Data were collected from nine chiefs of Intelligence Units (IUs) and 73 Crime Scene Examiners (CSEs) working in forensic science units (FSUs) in the French-speaking part of Switzerland (six cantonal police agencies). Four salient elements emerged: (1) communication channels between IUs and FSUs actually exist across the police agencies under consideration; (2) most CSEs take the disseminated crime intelligence into account; (3) CSEs make a differentiated but significant use of this kind of intelligence in their daily practice; (4) this kind of intelligence probably has a deep influence on the most concerned CSEs, especially in the selection of the type of material/trace to detect, collect, analyse, and exploit. These results contribute to deciphering the subtle dialectic articulating crime intelligence and crime scene investigation, and to expressing further the polymorphic role of CSEs beyond their most recognised input to the justice system. Indeed, CSEs appear to be central, but implicit, stakeholders in an intelligence-led style of policing.
Abstract:
Computed tomography (CT) is an imaging technique in which interest has grown rapidly since it came into use in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. To ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies which require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation because of their faster metabolism.
In addition, harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in the quality assessment of the images produced using those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was performed in close collaboration with radiologists. The work began by tackling how to characterise image quality in musculo-skeletal examinations. We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use.
Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with those of human observers, taking advantage of their incorporation of elements of the human visual system. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has clarified the role of medical physicists in CT imaging: the standard metrics used in the field remain important for assessing unit compliance with legal requirements, but model observers are indispensable for the optimisation of imaging protocols.
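As a rough illustration of how a model observer yields a task-based figure of merit, the sketch below estimates the detectability index d' of a non-prewhitening (NPW) matched-filter observer, one of the simplest model observers, on a hypothetical 1-D signal in white noise. The signal shape, noise level, and trial count are all invented for illustration and do not come from this work:

```python
import random
import statistics

random.seed(0)

# Hypothetical 1-D "image": a known low-contrast signal in white Gaussian noise
N = 32
signal = [1.0 if 12 <= i < 20 else 0.0 for i in range(N)]  # expected signal template
noise_sigma = 2.0

def npw_statistic(image, template=signal):
    """Non-prewhitening matched-filter response: inner product with the template."""
    return sum(t * g for t, g in zip(template, image))

def simulate(n_trials=2000):
    """Monte Carlo estimate of the detectability index d' for the NPW observer."""
    absent = [npw_statistic([random.gauss(0.0, noise_sigma) for _ in range(N)])
              for _ in range(n_trials)]
    present = [npw_statistic([s + random.gauss(0.0, noise_sigma) for s in signal])
               for _ in range(n_trials)]
    pooled_sd = ((statistics.pvariance(absent) + statistics.pvariance(present)) / 2) ** 0.5
    return (statistics.fmean(present) - statistics.fmean(absent)) / pooled_sd

d_prime = simulate()
```

For white noise the theoretical value is sqrt(sum of squared template values) / sigma, here sqrt(8)/2, about 1.41; the Monte Carlo estimate should land close to it.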
Abstract:
Thermal processes are widely used in small-molecule chemical analysis and metabolomics for derivatization, vaporization, chromatography, and ionization, especially in gas chromatography mass spectrometry (GC/MS). In this study, the effect of heating was examined on a set of 64 small-molecule standards and, separately, on human plasma metabolite extracts. The samples, either derivatized or underivatized, were heated at three different temperatures (60, 100, and 250 °C) for different exposure times (30 s, 60 s, and 300 s). All the samples were analyzed by liquid chromatography coupled to electrospray ionization mass spectrometry (LC/MS), and the data were processed with XCMS Online ( xcmsonline.scripps.edu ). The results showed that heating at an elevated temperature of 100 °C had an appreciable effect on both the underivatized and derivatized molecules, and heating at 250 °C created substantial changes in the profile. For example, over 40% of the molecular peaks were altered in the plasma metabolite analysis after heating (250 °C, 300 s), with a significant formation of degradation and transformation products. The analysis of the 64 small-molecule standards validated the temperature-induced changes observed in the plasma metabolites: most of the small molecules degraded at elevated temperatures even after minimal exposure times (30 s). For example, tri- and diorganophosphates (e.g., adenosine triphosphate and adenosine diphosphate) were readily degraded into a mono-organophosphate (e.g., adenosine monophosphate) during heating. Nucleosides and nucleotides (e.g., inosine and inosine monophosphate) were also transformed into purine derivatives (e.g., hypoxanthine). A newly formed transformation product, oleoyl ethyl amide, was identified in both the underivatized and derivatized forms of the plasma extracts and the small-molecule standard mixture, and was likely generated from oleic acid. Overall, these analyses show that small molecules and metabolites undergo significant time-sensitive alterations when exposed to elevated temperatures, especially under conditions that mimic sample preparation and analysis in GC/MS experiments.
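The "fraction of altered peaks" comparison described above can be sketched as a simple fold-change screen over paired intensity profiles. The feature names and intensities below are hypothetical placeholders, not the study's measurements, and the 2-fold threshold is an assumption chosen only for illustration:

```python
# Hypothetical peak intensities (feature -> intensity), unheated vs heated extract
control = {"ATP": 1000.0, "ADP": 800.0, "AMP": 200.0,
           "inosine": 500.0, "hypoxanthine": 50.0, "glucose": 300.0}
heated  = {"ATP": 100.0,  "ADP": 250.0, "AMP": 900.0,
           "inosine": 120.0, "hypoxanthine": 600.0, "glucose": 310.0}

def altered_fraction(before, after, fold=2.0):
    """Fraction of features whose intensity changed by >= `fold` in either direction."""
    altered = 0
    for feature, x in before.items():
        y = after[feature]
        ratio = max(x, y) / max(min(x, y), 1e-9)  # guard against zero intensities
        if ratio >= fold:
            altered += 1
    return altered / len(before)

frac = altered_fraction(control, heated)
```

In this toy profile every feature except glucose crosses the threshold, mirroring the degradation pattern (ATP/ADP falling, AMP and hypoxanthine rising) the abstract describes.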
Abstract:
In order to broaden our knowledge and understanding of the decision steps in the criminal investigation process, we started by evaluating the decision to analyse a trace and the factors involved in this decision step. This step is embedded in the complete criminal investigation process, which involves multiple decision and triaging steps. Considering robbery cases occurring in a geographic region over a 2-year period, we studied the factors influencing the decision to submit biological traces, sampled directly at the scene of the robbery or on collected objects, for analysis. The factors were categorised into five knowledge dimensions: strategic, immediate, physical, criminal, and utility; decision tree analysis was then carried out. Factors in each category played a role in the decision to analyse a biological trace. Interestingly, factors involving information available prior to the analysis are of importance, such as the fact that a positive result (a profile suitable for comparison) is already available in the case, or that a suspect has been identified through traditional police work before analysis. One factor that was taken into account, but was not significant, is the matrix of the trace; the decision to analyse a trace is thus not influenced by this variable. The decision to analyse a trace is very complex, and many of the tested variables were taken into account. The decisions are often made on a case-by-case basis.
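Decision tree analysis of this kind rests on measuring how well each coded factor separates the analyse/don't-analyse outcomes, typically via entropy-based information gain at each split. The sketch below computes the information gain of two factors on invented records (the factor names, values, and outcomes are hypothetical, not the study's data):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(records, factor, outcome="analysed"):
    """Reduction in outcome entropy obtained by splitting on one factor."""
    base = entropy([r[outcome] for r in records])
    remainder = 0.0
    for value in {r[factor] for r in records}:
        subset = [r[outcome] for r in records if r[factor] == value]
        remainder += len(subset) / len(records) * entropy(subset)
    return base - remainder

# Hypothetical coded traces: factor values and whether the trace was analysed
traces = [
    {"suspect_known": True,  "matrix": "blood", "analysed": False},
    {"suspect_known": True,  "matrix": "touch", "analysed": False},
    {"suspect_known": False, "matrix": "blood", "analysed": True},
    {"suspect_known": False, "matrix": "touch", "analysed": True},
    {"suspect_known": True,  "matrix": "blood", "analysed": False},
    {"suspect_known": False, "matrix": "touch", "analysed": True},
]

gain_suspect = information_gain(traces, "suspect_known")
gain_matrix = information_gain(traces, "matrix")
```

In these toy records "suspect_known" fully determines the outcome (gain of 1 bit) while "matrix" barely separates it, echoing the finding that the trace matrix was not a significant factor.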