Abstract:
Insulin determination in blood sampled during post-mortem investigation has been repeatedly asserted as being of little diagnostic value due to the rapid occurrence of decompositional changes and blood haemolysis. In this study, we assessed the feasibility of insulin determination in post-mortem serum, vitreous humour, bile, and cerebrospinal and pericardial fluids in one case of fatal insulin self-administration and a series of 40 control cases (diabetics and non-diabetics) using a chemiluminescence enzyme immunoassay. In the case of suicide by insulin self-administration, insulin concentrations in pericardial fluid and bile were higher than blood clinical reference values, though lower than post-mortem serum concentration. Insulin concentrations in vitreous (11.50 mU/L) and cerebrospinal fluid (17.30 mU/L) were lower than blood clinical reference values. Vitreous insulin concentrations in non-diabetic control cases were lower than the estimated detection limit of the method. These preliminary results tend to confirm the usefulness of insulin determination in vitreous humour in situations of suspected fatal insulin administration. Additional findings pertaining to insulin determination in bile, pericardial, and cerebrospinal fluid would suggest that analysis performed in post-mortem serum and injection sites could be complemented, in individual cases, by investigations carried out in alternative biological fluids. Lastly, these results would indicate that analysis with chemiluminescence enzyme immunoassay may provide suitable data, similar to analysis with liquid chromatography-tandem mass spectrometry (LC-MS/MS) and immunoradiometric assay, to support the hypothesis of insulin overdose. Copyright © 2015 John Wiley & Sons, Ltd.
Abstract:
We tested and compared the performance of the Roach formula, Partin tables, and three machine learning (ML) algorithms based on decision trees in identifying N+ prostate cancer (PC). 1,555 cN0 and 50 cN+ PC cases were analyzed. Results were also verified on an independent population of 204 operated cN0 patients with a known pN status (187 pN0, 17 pN1). ML performed better, including when tested on the surgical population, with accuracy, specificity, and sensitivity ranging between 48-86%, 35-91%, and 17-79%, respectively. ML thus potentially allows better prediction of the nodal status of PC, and with it a better tailoring of pelvic irradiation.
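The abstract does not describe the ML implementation. As a hedged sketch of how accuracy, specificity, and sensitivity would be computed for a decision-tree classifier of nodal status, assuming synthetic data and illustrative features (none of this reflects the study's actual cohort or variables):

```python
# Minimal sketch: evaluating a decision-tree nodal-status classifier.
# The data and features are synthetic assumptions, not the study's cohort.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))  # three hypothetical clinical covariates
# Synthetic label: 1 = node-positive (N+), 0 = node-negative (N0)
y = (X[:, 0] + X[:, 1] + rng.normal(scale=1.0, size=n) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

tp = np.sum((pred == 1) & (y_te == 1))
tn = np.sum((pred == 0) & (y_te == 0))
fp = np.sum((pred == 1) & (y_te == 0))
fn = np.sum((pred == 0) & (y_te == 1))

accuracy = (tp + tn) / len(y_te)
sensitivity = tp / (tp + fn)   # fraction of true N+ that is detected
specificity = tn / (tn + fp)   # fraction of true N0 correctly excluded
print(accuracy, sensitivity, specificity)
```

Reporting sensitivity and specificity separately, as the abstract does, matters here because N+ cases are a small minority, so accuracy alone would be dominated by the N0 class.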
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of this approach is the computational cost of performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
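The error-model idea in this part (learn, on a small subset where both responses are known, a regression between proxy and exact curves in a reduced space, then correct the remaining proxy-only responses) can be sketched as follows. This is a minimal illustration on synthetic curves: ordinary PCA on discretized curves stands in for FPCA, a random subset stands in for the distance kernel method, and all functional forms are assumptions.

```python
# Sketch of a functional error model: on a small learning set where both
# the proxy (cheap) and exact (expensive) flow responses are known, fit a
# regression from proxy principal-component scores to exact scores, then
# correct the remaining proxy-only responses.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_real, n_t = 200, 50                  # realizations, time steps
t = np.linspace(0.0, 1.0, n_t)

# Synthetic "exact" responses and biased, smoother "proxy" responses (assumed).
amp = rng.uniform(0.5, 1.5, n_real)[:, None]
lag = rng.uniform(0.1, 0.4, n_real)[:, None]
exact = amp * (1 - np.exp(-(t - lag).clip(0) / 0.2))
proxy = 0.8 * amp * (1 - np.exp(-(t - 0.9 * lag).clip(0) / 0.25))

learn = rng.choice(n_real, size=30, replace=False)   # both responses known here
rest = np.setdiff1d(np.arange(n_real), learn)        # proxy-only realizations

pca_p = PCA(n_components=3).fit(proxy[learn])
pca_e = PCA(n_components=3).fit(exact[learn])

# Regression between the reduced (PC-score) representations of the curves.
reg = LinearRegression().fit(pca_p.transform(proxy[learn]),
                             pca_e.transform(exact[learn]))

# Correct the proxy-only responses: predicted "exact" curves.
pred = pca_e.inverse_transform(reg.predict(pca_p.transform(proxy[rest])))

err_proxy = np.mean((proxy[rest] - exact[rest]) ** 2)
err_corrected = np.mean((pred - exact[rest]) ** 2)
print(err_corrected < err_proxy)       # the correction should reduce the error
```

The appeal of working in the score space, as the thesis argues for FPCA, is that the ill-posed curve-to-curve regression collapses to a small, well-posed multivariate regression while keeping most of the variance of the responses.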
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC results. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
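The two-stage MCMC mechanism (a cheap preliminary accept/reject on the approximate model, so that the exact model is only run for proposals passing the first stage) can be sketched in a toy one-dimensional setting; the target, proxy, and proposal scale below are illustrative assumptions, not the thesis' models.

```python
# Toy two-stage (delayed-acceptance) MCMC with a symmetric random-walk
# proposal: stage 1 screens on a cheap proxy posterior, stage 2 corrects
# with the expensive exact posterior, so exact evaluations are only spent
# on proposals that survived stage 1.
import numpy as np

rng = np.random.default_rng(2)

def log_exact(x):   # "expensive" exact log-posterior (assumed standard normal)
    return -0.5 * x**2

def log_proxy(x):   # cheap, slightly biased approximation (assumed)
    return -0.5 * (x / 1.1)**2

x = 0.0
n_iter, n_exact_calls, accepted = 5000, 0, 0
for _ in range(n_iter):
    prop = x + rng.normal(scale=2.0)
    # Stage 1: accept/reject using only the proxy.
    if np.log(rng.uniform()) >= log_proxy(prop) - log_proxy(x):
        continue                       # rejected cheaply, no exact simulation
    # Stage 2: correct with the exact model (the costly step).
    n_exact_calls += 1
    a2 = (log_exact(prop) - log_exact(x)) - (log_proxy(prop) - log_proxy(x))
    if np.log(rng.uniform()) < a2:
        x, accepted = prop, accepted + 1

print(n_exact_calls, accepted)  # exact model runs far fewer than n_iter times
```

The stage-2 ratio divides out the proxy acceptance probability, which is what keeps the chain sampling the exact posterior despite the cheap pre-screening; the better the proxy (or the proxy plus error model), the closer the stage-2 acceptance rate gets to one.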
Abstract:
The value of forensic results crucially depends on the propositions and the information under which they are evaluated. For example, if a full single DNA profile for a contemporary marker system matching the profile of Mr A is assessed, given the propositions that the DNA came from Mr A and that it came from an unknown person, the strength of evidence can be overwhelming (e.g., in the order of a billion). In contrast, if we assess the same result given that the DNA came from Mr A and given that it came from his twin brother (i.e., a person with the same DNA profile), the strength of evidence will be 1, and therefore neutral, unhelpful and irrelevant to the case at hand. While this understanding is probably uncontroversial and obvious to most, if not all, practitioners dealing with DNA evidence, the practical precept of not specifying an alternative source with the same characteristics as the one considered under the first proposition may be much less clear in other circumstances. During discussions with colleagues and trainees, cases have come to our attention where forensic scientists have difficulty with the formulation of propositions. It is particularly common to observe that results (i.e., observations) are included in the propositions, whereas, as argued throughout this note, they should not be. A typical example could be a case where a shoe-mark with a logo and the general pattern characteristics of a Nike Air Jordan shoe is found at the scene of a crime. A Nike Air Jordan shoe is then seized at Mr A's house and control prints of this shoe are compared to the mark. The results (i.e., a trace with this general pattern and acquired characteristics corresponding to the sole of Mr A's shoe) are then evaluated given the propositions 'The mark was left by Mr A's Nike Air Jordan shoe-sole' and 'The mark was left by an unknown Nike Air Jordan shoe'.
As a consequence, the footwear examiner will not evaluate part of the observations (i.e., that the mark presents the general pattern of a Nike Air Jordan), whereas these can be highly informative. Such examples can be found in all forensic disciplines. In this article, we present a few such examples and discuss aspects that will help forensic scientists with the formulation of propositions. In particular, we emphasise the usefulness of notation to distinguish the results that forensic scientists should evaluate from the case information that the Court will evaluate.
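The strength-of-evidence figures in the DNA example correspond to a likelihood ratio, LR = Pr(E | H1) / Pr(E | H2). A short numeric sketch (the profile frequency is an assumed round value for illustration, not a figure from the note):

```python
# Likelihood ratio for the DNA example, where E = "matching full profile".
# Pr(E | the DNA came from Mr A) = 1, since his profile matches by definition.
profile_frequency = 1e-9   # assumed random-match probability (illustrative)

# H2: the DNA came from an unknown, unrelated person.
lr_unknown = 1.0 / profile_frequency   # on the order of a billion

# H2: the DNA came from Mr A's identical twin, who shares the profile,
# so Pr(E | H2) = 1 as well.
lr_twin = 1.0 / 1.0                    # = 1: the result is neutral

print(lr_unknown, lr_twin)
```

This makes the note's point concrete: when the alternative source is stipulated to share the characteristics observed (the twin, or "an unknown Nike Air Jordan shoe"), the numerator and denominator coincide for that part of the results, and its probative value is lost.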
Abstract:
Aim: The reported prevalence of MET overexpression in non-small cell lung cancer (NSCLC) varies from 25% to 55%, and clinical correlations are emerging slowly. In a well-defined NSCLC cohort of the Lungscape program, we explore the epidemiology and natural history of IHC MET positivity and its association with OS, RFS and TTR. Methods: Resected stage I-III NSCLC cases, identified based on the quality of clinical data and FFPE tissue availability, were assessed for MET expression using immunohistochemistry (IHC) on TMAs (CONFIRM anti-total c-MET assay, clone SP44, Ventana BenchMark platform). All cases were analysed at participating pathology laboratories using the same protocol, after passing an external quality assurance program. MET-positive status is defined as ≥ 50% of tumor cells staining with 2+ or 3+ intensity. Results: A total of 2709 cases are included in the iBiobank and will be analysed. IHC MET expression is currently available for 1552 patients, with positive MET IHC staining in 380 cases [24.5%; IHC 3+ in 157 cases (41.3%) and 2+ in 223 cases (58.7%)]. The cohort of 1552 patients includes 48.2%, 44.7% and 4.4% cases of adenocarcinoma, squamous and large cell histologies, respectively. IHC MET status was independent of stage, age and smoking history. Significant differences in MET positivity were associated with gender (32% vs. 21% for female vs. male, p < 0.001), performance status (25% vs. 18% for 0 vs. 1-3, p = 0.006), and histology (34%, 14% and 24% for adenocarcinoma, squamous and large cell carcinoma, p < 0.001). IHC MET positivity was independent of IHC ALK status (p = 0.08). At last follow-up, 52% of patients were still alive, with a median follow-up of 4.8 years. No association of IHC MET was found with OS, RFS or TTR.
Conclusions: The preliminary results for this large multicentre European cohort describe a prevalence of MET overexpression that seems lower than previous observations in NSCLC, such as those reported for the OAM4971g trial, suggesting potential biological differences between surgically resected and metastatic disease. Analysis of the full cohort is ongoing and results will be presented. Disclosure: L. Bubendorf: stock ownership: Roche; advisory boards: Roche, Pfizer; research support: Roche. K. Schulze: full-time employee of Roche. A. Das-Gupta: full-time employee of Roche. All other authors have declared no conflicts of interest.
Abstract:
OBJECTIVE: There is currently no guideline regarding the management of neurogenic detrusor overactivity (NDO) refractory to intra-detrusor botulinum toxin injections. The primary objective of the present study was to find a consensus definition of failure of botulinum toxin intra-detrusor injections for NDO. The secondary objective was to report current trends in the management of NDO refractory to botulinum toxin. METHODS: A survey was created, based on data drawn from the current literature, and sent via e-mail to all the experts from the French-language Group for Research in Neurourology (GENULF) and from the committee of neurourology of the French Urological Association (AFU). The experts who did not answer the first e-mail were contacted again twice. The main results from the survey are presented and expressed as numbers and proportions. RESULTS: Out of the 42 experts contacted, 21 responded to the survey. Nineteen participants considered that the definition of failure should be a combination of clinical and urodynamic criteria. Among the urodynamic criteria, the persistence of a maximum detrusor pressure > 40 cm H2O was the most supported by the experts (18/21, 85%). According to the vast majority of participants (19/21, 90.5%), the impact of injections on urinary incontinence should be included in the definition of failure. Regarding management, most experts considered that the first-line treatment in case of failure of a first intra-detrusor injection of Botox® 200 U should be a repeat injection of Botox® at a higher dosage (300 U) (15/20, 75%), regardless of the presence or absence of urodynamic risk factors for upper tract damage (16/20, 80%). CONCLUSION: This work has provided a first overview of the definition of failure of intra-detrusor injections of botulinum toxin in the management of NDO.
For 90.5% of the experts involved, the definition of failure should combine clinical and urodynamic criteria, and most participants (75%) considered that, in case of failure of a first injection of Botox® 200 U, a repeat injection of Botox® 300 U should be the first-line treatment. Level of proof: 4.