27 results for verification

at Université de Lausanne, Switzerland


Relevance:

20.00%

Publisher:

Abstract:

This study was conducted to assess whether fingerprint specialists could be influenced by extraneous contextual information during a verification process. Participants were separated into three groups: a control group (no contextual information was given), a low-bias group (minimal contextual information was given, in the form of a report prompting conclusions), and a high-bias group (an internationally recognized fingerprint expert provided conclusions and case information to deceive this group into believing that the case and conclusions were his). A similar experiment was later conducted with laypersons. The results showed that fingerprint experts were influenced by contextual information during fingerprint comparisons, but not towards making errors: under the biasing conditions, experts provided significantly fewer definitive and erroneous conclusions than the control group. In contrast, the novice participants were more influenced by the biasing conditions and did tend to make incorrect judgments, especially when prompted towards an incorrect response.

Relevance:

20.00%

Publisher:

Abstract:

The plant circadian clock controls a wide variety of physiological and developmental events, including the short-day (SD)-specific promotion of hypocotyl elongation during de-etiolation and of petiole elongation during vegetative growth. In A. thaliana, the PIF4 gene, encoding a phytochrome-interacting basic helix-loop-helix (bHLH) transcription factor, plays crucial roles in this photoperiodic control of plant growth. According to the proposed external coincidence model, the PIF4 gene is transcribed precociously at the end of the night specifically in SDs, under which conditions the protein product accumulates stably, whereas PIF4 is expressed exclusively during the daytime in long days (LDs), under which conditions the protein product is degraded by light-activated phyB and the residual proteins are inactivated by the DELLA family of proteins. A number of previous reports provided solid evidence to support this coincidence model, mainly at the transcriptional level of the PIF4 and PIF4-target genes. Nevertheless, the diurnal oscillation profiles of PIF4 proteins, postulated to depend on photoperiod and ambient temperature, had not yet been demonstrated. Here we present such crucial evidence at the PIF4 protein level to further support the external coincidence model underlying the temperature-adaptive photoperiodic control of plant growth in A. thaliana.

Relevance:

10.00%

Publisher:

Abstract:

Intraoperative imaging, in particular intraoperative MRI, is a developing area in neurosurgery, and its role is currently being evaluated. Its role in epilepsy surgery has not yet been defined, and its use has been limited. In our experience with a compact, mobile low-field intraoperative MRI system, a few epilepsy operations have been performed using this technique. As the integration of imaging and functional data plays an important role in the planning of epilepsy surgery, intraoperative verification of the surgical result may be highly valuable. Therefore, teams with access to intraoperative MRI should be encouraged to use this technique prospectively to evaluate its current relevance in epilepsy surgery.

Relevance:

10.00%

Publisher:

Abstract:

INTRODUCTION: The phase III EORTC 22033-26033/NCIC CE5 intergroup trial compares 50.4 Gy radiotherapy with up-front temozolomide in previously untreated low-grade glioma. We describe the digital EORTC individual case review (ICR) performed to evaluate compliance with protocol radiotherapy (RT). METHODS: Fifty-eight institutions were asked to submit 1-2 randomly selected cases. Digital ICR datasets were uploaded to the EORTC server and accessed by three central reviewers. Twenty-seven parameters were analysed, including volume delineation, treatment planning, organ at risk (OAR) dosimetry, and verification. Consensus reviews were collated and summary statistics calculated. RESULTS: Fifty-seven of the 72 requested datasets, from 48 institutions, were technically usable; 31 of the 57 received a major deviation for at least one section. Relocation accuracy was per protocol in 45 cases. Just over 30% had acceptable target volumes. OAR contours were missing in an average of 25% of cases, and up to one third of those present were incorrectly drawn, while dosimetry was largely protocol compliant. Beam energy was acceptable in 97% of cases, and 48 patients had per-protocol beam arrangements. CONCLUSIONS: Digital RT plan submission and review within the EORTC 22033-26033 ICR provide a solid foundation for future quality assurance procedures. Strict evaluation resulted in overall grades of minor and major deviation for 37% and 32% of cases, respectively.
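The per-case grading described above (each reviewed parameter scored, with the worst parameter grade determining a case's overall grade) reduces to a simple tally. A minimal sketch in Python; the case records and the worst-grade rule are hypothetical illustrations, not the trial's data or its exact grading procedure:

```python
# Tally per-case review outcomes into overall deviation grades, as in
# a trial QA individual case review. Records below are invented.
from collections import Counter

SEVERITY = {"acceptable": 0, "minor": 1, "major": 2}

# each case: grades for the reviewed parameters (hypothetical examples)
cases = {
    "case01": ["acceptable", "minor", "acceptable"],
    "case02": ["major", "acceptable", "minor"],
    "case03": ["acceptable", "acceptable", "acceptable"],
    "case04": ["minor", "minor", "acceptable"],
}

def overall_grade(grades):
    """A case's overall grade is its worst (most severe) parameter grade."""
    return max(grades, key=SEVERITY.__getitem__)

summary = Counter(overall_grade(g) for g in cases.values())
for grade in ("acceptable", "minor", "major"):
    n = summary[grade]
    print(f"{grade}: {n}/{len(cases)} ({100 * n / len(cases):.0f}%)")
```

With the four invented cases this reports one acceptable, two minor, and one major overall grade; the trial's reported 37%/32% minor/major split would come out of exactly this kind of tally over the 57 usable datasets.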

Relevance:

10.00%

Publisher:

Abstract:

There is enormous interest in designing training methods for reducing cognitive decline in healthy older adults. Because it is impaired with aging, multitasking has often been targeted and has been shown to be malleable with appropriate training. Investigating the effects of cognitive training on functional brain activation might provide critical indications of the mechanisms that underlie those positive effects, as well as models for selecting appropriate training methods. The few studies that have looked at brain correlates of cognitive training indicate a variable pattern and location of brain changes, a result that might relate to differences in training formats. The goal of this study was to measure the neural substrates as a function of whether divided-attention training programs induced the use of alternative processes or relied on repeated practice. Forty-eight older adults were randomly allocated to one of three training programs. In the SINGLE REPEATED training, participants practiced an alphanumeric equation-verification task and a visual detection task, each under focused attention. In the DIVIDED FIXED training, participants practiced combining verification and detection under divided attention, with equal attention allocated to both tasks. In the DIVIDED VARIABLE training, participants completed the tasks under divided attention, but were taught to vary the attentional priority allocated to each task. Brain activation was measured with fMRI pre- and post-training while completing each task individually and the two tasks combined. The three training programs resulted in markedly different brain changes. Practice on individual tasks in the SINGLE REPEATED training resulted in reduced brain activation, whereas DIVIDED VARIABLE training resulted in larger recruitment of the right superior and middle frontal gyrus, a region that has been implicated in multitasking. The type of training is thus a critical factor in determining the pattern of brain activation.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Despite the continuous production of genome sequence for a number of organisms, reliable, comprehensive, and cost-effective gene prediction remains problematic. This is particularly true for genomes for which there is no large collection of known gene sequences, such as the recently published chicken genome. We used the chicken sequence to test comparative and homology-based gene-finding methods, followed by experimental validation, as an effective genome annotation method. RESULTS: We performed experimental evaluation by RT-PCR of three different computational gene finders, Ensembl, SGP2 and TWINSCAN, applied to the chicken genome. A Venn diagram was computed and each of its components was evaluated. The results showed that de novo comparative methods can identify up to about 700 chicken genes with no previous evidence of expression, and can correctly extend about 40% of homology-based predictions at the 5' end. CONCLUSIONS: De novo comparative gene prediction followed by experimental verification is effective at enhancing the annotation of newly sequenced genomes beyond that provided by standard homology-based methods.
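Evaluating each component of a three-way Venn diagram of gene predictions amounts to partitioning the predicted genes by exactly which tools called them. A minimal sketch with Python sets; the gene identifiers and per-tool calls below are hypothetical, not actual Ensembl/SGP2/TWINSCAN output:

```python
# Partition three predictors' gene calls into the seven regions of a
# Venn diagram. Identifiers are invented for illustration.
from itertools import combinations

ensembl = {"g1", "g2", "g3", "g5"}
sgp2 = {"g2", "g3", "g4", "g6"}
twinscan = {"g3", "g4", "g5", "g7"}

sets = {"Ensembl": ensembl, "SGP2": sgp2, "TWINSCAN": twinscan}

def venn_partition(sets):
    """Map each non-empty combination of tool names to the genes
    predicted by exactly those tools and no others."""
    names = list(sets)
    regions = {}
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            inside = set.intersection(*(sets[n] for n in combo))
            outside = set.union(*(sets[n] for n in names if n not in combo), set())
            regions[combo] = inside - outside
    return regions

for combo, genes in venn_partition(sets).items():
    print("+".join(combo), sorted(genes))
```

Each region can then be sampled for RT-PCR verification, so that, for example, genes called only by the de novo comparative tools (and absent from homology-based sets) can be tested for previously unobserved expression.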

Relevance:

10.00%

Publisher:

Abstract:

Summary: Forensic science, both as a source of and as a remedy for errors potentially leading to judicial error, has been studied empirically in this research. The tools used were a comprehensive literature review, experimental tests on the influence of observational biases in fingermark comparison, and semi-structured interviews with heads of forensic science laboratories and units in Switzerland and abroad. The literature review covered, among other areas, the quality of forensic science work in general, the complex interaction between science and law, and specific propositions on error sources not directly related to that interaction. A list of potential error sources, all the way from the crime scene to the writing of the report, was also established. For the empirical tests, the ACE-V (Analysis, Comparison, Evaluation, and Verification) process of fingermark comparison was selected as an area of special interest for the study of observational biases, owing to its heavy reliance on visual observation and to recent cases of misidentification. Results of the tests performed with forensic science students suggest that the decision-making stages are the most vulnerable to stimuli inducing observational biases. In the semi-structured interviews, eleven senior forensic scientists answered questions on several subjects, for example potential and existing error sources in their work, the limitations of what can be done with forensic science, and the possibilities and tools for minimising errors. Training and education to raise the quality of forensic science were discussed, together with possible solutions to minimise the risk of errors in forensic science. In addition, the time for which samples of physical evidence are kept was determined. Results show considerable agreement on most subjects among the international participants. Their opinions on possible explanations for the occurrence of such problems, and on the relative weight of such errors in the three stages of crime scene, laboratory, and report writing, disagree, however, with opinions widely represented in the existing literature. Through the present research it was therefore possible to obtain a better view of the interaction between forensic science and judicial error and to propose practical recommendations to minimise its occurrence.

Relevance:

10.00%

Publisher:

Abstract:

Purpose: To evaluate the diagnostic value and image quality of CT with filtered back projection (FBP) compared with adaptive statistical iterative reconstruction (ASIR) in body stuffers with ingested cocaine-filled packets. Methods and Materials: Twenty-nine body stuffers (mean age 31.9 years, 3 women) suspected of ingestion of cocaine-filled packets underwent routine-dose 64-row multidetector CT with FBP (120 kV, pitch 1.375, 100-300 mA with automatic tube current modulation (auto mA), rotation time 0.7 s, collimation 2.5 mm), secondarily reconstructed with 30% and 60% ASIR. In 13 (44.83%) of the body stuffers, cocaine-filled packets were detected, confirmed by exact analysis of the faecal content including verification of the number of packets (range 1-25). Three radiologists independently and blindly evaluated the anonymised CT examinations (29 FBP-CT and 68 ASIR-CT) for the presence and number of cocaine-filled packets, indicating their confidence, and graded them for diagnostic quality, image noise, and sharpness. Sensitivity, specificity, area under the receiver operating characteristic (ROC) curve Az, and interobserver agreement between the 3 radiologists for FBP-CT and ASIR-CT were calculated. Results: Increasing the percentage of ASIR significantly diminished the objective image noise (p<0.001). Overall sensitivity and specificity for the detection of the cocaine-filled packets were 87.72% and 76.15%, respectively. The difference in ROC area Az between the reconstruction techniques was significant (p=0.0101): 0.938 for FBP-CT, 0.916 for 30% ASIR-CT, and 0.894 for 60% ASIR-CT. Conclusion: Despite the evident image noise reduction obtained with ASIR, the diagnostic value for detecting cocaine-filled packets decreases with increasing ASIR percentage.
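The ROC area Az reported above can be estimated non-parametrically from reader confidence scores as the probability that a packet-positive case receives a higher score than a packet-negative one (the Mann-Whitney form of the AUC). A small sketch; the confidence scores below are invented for illustration, not the study's ratings:

```python
# Rank-based (Mann-Whitney) estimate of the ROC area from reader
# confidence scores; ties count as half a win. Scores are invented.
def roc_auc(pos_scores, neg_scores):
    """Estimate P(score_pos > score_neg), counting ties as 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# hypothetical 1-5 confidence ratings for packet-positive and
# packet-negative examinations
positive = [5, 4, 4, 3, 5]
negative = [2, 3, 1, 2, 4]

print(round(roc_auc(positive, negative), 3))
```

Computing this per reconstruction (FBP, 30% ASIR, 60% ASIR) and comparing the resulting Az values is the kind of analysis behind the study's 0.938 vs 0.916 vs 0.894 figures, though the published comparison also accounts for the correlated, multi-reader design.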

Relevance:

10.00%

Publisher:

Abstract:

In Arabidopsis thaliana, gene expression level polymorphisms (ELPs) between natural accessions that exhibit simple, single locus inheritance are promising quantitative trait locus (QTL) candidates to explain phenotypic variability. It is assumed that such ELPs overwhelmingly represent regulatory element polymorphisms. However, comprehensive genome-wide analyses linking expression level, regulatory sequence and gene structure variation are missing, preventing definite verification of this assumption. Here, we analyzed ELPs observed between the Eil-0 and Lc-0 accessions. Compared with non-variable controls, 5' regulatory sequence variation in the corresponding genes is indeed increased. However, approximately 42% of all the ELP genes also carry major transcription unit deletions in one parent as revealed by genome tiling arrays, representing a >4-fold enrichment over controls. Within the subset of ELPs with simple inheritance, this proportion is even higher and deletions are generally more severe. Similar results were obtained from analyses of the Bay-0 and Sha accessions, using alternative technical approaches. Collectively, our results suggest that drastic structural changes are a major cause for ELPs with simple inheritance, corroborating experimentally observed indel preponderance in cloned Arabidopsis QTL.

Relevance:

10.00%

Publisher:

Abstract:

The vast territories that were radioactively contaminated during the 1986 Chernobyl accident provide a substantial data set of radioactive monitoring data, which can be used for the verification and testing of the different spatial estimation (prediction) methods involved in risk assessment studies. Using the Chernobyl data set for such a purpose is motivated by its heterogeneous spatial structure (the data are characterized by large-scale correlations, short-scale variability, spotty features, etc.). The present work is concerned with the application of the Bayesian Maximum Entropy (BME) method to estimate the extent and the magnitude of the radioactive soil contamination by 137Cs due to the Chernobyl fallout. The powerful BME method allows rigorous incorporation of a wide variety of knowledge bases into the spatial estimation procedure, leading to informative contamination maps. Exact measurements ("hard" data) are combined with secondary information on local uncertainties (treated as "soft" data) to generate science-based uncertainty assessment of soil contamination estimates at unsampled locations. BME describes uncertainty in terms of the posterior probability distributions generated across space, whereas no assumption about the underlying distribution is made and non-linear estimators are automatically incorporated. Traditional estimation variances based on the assumption of an underlying Gaussian distribution (analogous, e.g., to the kriging variance) can be derived as a special case of the BME uncertainty analysis. The BME estimates obtained using hard and soft data are compared with the BME estimates obtained using only hard data. The comparison involves both the accuracy of the estimation maps using the exact data and the assessment of the associated uncertainty using repeated measurements. Furthermore, a comparison of the spatial estimation accuracy obtained by the two methods was carried out using a validation data set of hard data. 
Finally, a separate uncertainty analysis was conducted that evaluated the ability of the posterior probabilities to reproduce the distribution of the raw repeated measurements available in certain populated sites. The analysis illustrates the improvement in mapping accuracy obtained by adding soft data to the existing hard data and, in general, demonstrates that the BME method performs well in terms of both estimation accuracy and estimation error assessment, which are both useful features for the Chernobyl fallout study.
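The intuition behind combining "hard" and "soft" data, namely that even an uncertain secondary observation shifts the estimate and shrinks its variance, can be shown with a toy Gaussian precision-weighting example. This is a deliberately simplified sketch, not the BME formalism (which avoids the Gaussian assumption entirely), and all numbers are hypothetical:

```python
# Toy illustration: fusing a "hard" (exact) and a "soft" (uncertain)
# observation of 137Cs activity at one location by precision weighting.
# A Gaussian sketch only, NOT the BME method; all numbers invented.
def fuse(estimates):
    """Precision-weighted fusion of (mean, variance) pairs."""
    weights = [1.0 / v for _, v in estimates]
    mean = sum(m * w for (m, _), w in zip(estimates, weights)) / sum(weights)
    var = 1.0 / sum(weights)
    return mean, var

hard = (120.0, 4.0)    # exact laboratory measurement, low variance
soft = (100.0, 100.0)  # secondary information, high uncertainty

m_hard, v_hard = hard
m_both, v_both = fuse([hard, soft])

print(f"hard only: mean={m_hard:.1f}, var={v_hard:.1f}")
print(f"hard+soft: mean={m_both:.2f}, var={v_both:.2f}")
```

Even a very uncertain soft datum reduces the fused variance below the hard-data variance, which mirrors the paper's finding that adding soft data to hard data improves both the estimates and their uncertainty assessment.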

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND AND PURPOSE: The EORTC 22043-30041 trial investigates the role of the addition of androgen suppression to post-operative radiotherapy in patients who have undergone radical prostatectomy. As part of the quality assurance of radiotherapy (QART), a Dummy Run (DR) procedure was performed. MATERIALS AND METHODS: The protocol included detailed, published delineation guidelines. Participating institutions digitally submitted radiotherapy treatment volumes and a treatment plan for a standard clinical case. Submissions were centrally reviewed using the VODCA software platform. RESULTS: Thirty-eight submissions from 31 institutions were reviewed. Six were accepted without comments. Twenty-three were accepted with comments on one or more items: target volume delineation (22), OAR delineation (23), planning and dosimetry (3), or treatment verification (1). Nine submissions were rejected and required resubmission, seven for target volume delineation reasons alone. An intervention to highlight the importance of the delineation guidelines was made before entry of the first patient into the trial; after this, a lower percentage of resubmissions was required. CONCLUSIONS: The EORTC 22043-30041 Dummy Run highlights the need for timely and effective QART in clinical trials. The variation in target volume and OAR definition demonstrates that clinical guidelines and radiotherapy protocols are not a substitute for QART procedures. Early intervention in response to the Dummy Run improved protocol understanding.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Lapatinib is an effective anti-HER2 therapy in advanced breast cancer, and docetaxel is one of the most active agents in breast cancer. Combining these agents in pre-treated patients with metastatic disease had previously proved challenging, so the primary objective of this study was to determine the maximum tolerated dose (MTD) in treatment-naive patients by identifying acute dose-limiting toxicities (DLTs) during cycle 1 in the first part of a phase I-II neoadjuvant European Organisation for Research and Treatment of Cancer (EORTC) trial. PATIENTS AND METHODS: Patients with large operable or locally advanced HER2-positive breast cancer were treated with continuous lapatinib and docetaxel every 21 days for 4 cycles. Dose levels (DLs) were 1000/75, 1250/75, 1000/85, 1250/85, 1000/100 and 1250/100 (mg/day)/(mg/m²). RESULTS: Twenty-one patients were included. Two DLTs occurred at dose level 5 (1000/100): one grade 4 neutropenia lasting ≥7 days and one febrile neutropenia. A further 3 patients were therefore treated at the same dose with prophylactic granulocyte colony-stimulating factor (G-CSF), and 3 patients at dose level 6. No further DLTs were observed. CONCLUSIONS: Our recommended dose for phase II is lapatinib 1000 mg/day and docetaxel 100 mg/m² with G-CSF in HER2-positive non-metastatic breast cancer. The dose of lapatinib would otherwise have been 1250 mg/day, but we were mindful of the high rate of treatment discontinuation in GeparQuinto with lapatinib 1250 mg/day combined with docetaxel. No grade 3-4 diarrhoea was observed. Pharmacodynamic analysis suggests that concomitant medications altering P-glycoprotein activity (in addition to lapatinib) can modify toxicity, including non-haematological toxicities. This needs verification in larger trials, where it may help explain the sources of variability in clinical toxicity and treatment discontinuation.

Relevance:

10.00%

Publisher:

Abstract:

In recent years there has been growing interest in the question of how the particular topology of polymeric chains affects their overall dimensions and physical behavior. The majority of relevant studies are based on numerical simulation methods or analytical treatment; however, both these approaches depend on various assumptions and simplifications. Experimental verification is clearly needed but was hampered by practical difficulties in obtaining preparative amounts of knotted or catenated polymers with predefined topology and precisely set chain length. We introduce here an efficient method of production of various single-stranded DNA knots and catenanes that have the same global chain length. We also characterize electrophoretic migration of the produced single-stranded DNA knots and catenanes with increasing complexity.

Relevance:

10.00%

Publisher:

Abstract:

The recent availability of the chicken genome sequence poses the question of whether there are human protein-coding genes conserved in chicken that are currently not included in the human gene catalog. Here, we show, using comparative gene finding followed by experimental verification of exon pairs by RT-PCR, that the addition to the multi-exonic subset of this catalog could be as little as 0.2%, suggesting that we may be closing in on the human gene set. Our protocol, however, has two shortcomings: (i) the bioinformatic screening of the predicted genes, applied to filter out false positives, cannot handle intronless genes; and (ii) the experimental verification could fail to identify expression at a specific developmental time. This highlights the importance of developing methods that could provide a reliable estimate of the number of these two types of genes.