46 results for Error amplification
Abstract:
High-fidelity 'proofreading' polymerases are often used in library construction for next-generation sequencing projects, in an effort to minimize errors in the resulting sequence data. The increased template fidelity of these polymerases can come at the cost of reduced template specificity, and library preparation methods based on the AFLP technique may be particularly susceptible. Here, we compare AFLP profiles generated with standard Taq and two versions of a high-fidelity polymerase. We find that Taq produces fewer and brighter peaks than high-fidelity polymerase, suggesting that Taq performs better at selectively amplifying templates that exactly match the primer sequences. Because the higher accuracy of proofreading polymerases remains important for sequencing applications, we suggest that it may be more effective to use alternative library preparation methods.
Abstract:
When researchers introduce a new test, they have to demonstrate that it is valid, using unbiased designs and suitable statistical procedures. In this article we use Monte Carlo analyses to highlight how incorrect statistical procedures (i.e., stepwise regression, extreme-scores analyses) or ignoring regression assumptions (e.g., heteroscedasticity) contribute to wrong validity estimates. Beyond these demonstrations, and as an example, we re-examined the results reported by Warwick, Nettelbeck, and Ward (2010) concerning the validity of the Ability Emotional Intelligence Measure (AEIM). Warwick et al. used the wrong statistical procedures to conclude that the AEIM was incrementally valid beyond intelligence and personality traits in predicting various outcomes. In our re-analysis, we found that the reliability-corrected multiple correlation of their measures with personality and intelligence was up to .69. Using robust statistical procedures and appropriate controls, we also found that the AEIM did not predict incremental variance in GPA, stress, loneliness, or well-being, demonstrating the importance of testing validity rather than merely looking for it.
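The kind of artifact such Monte Carlo analyses expose can be illustrated in a few lines: when the "best" of many noise predictors is selected post hoc (a crude stand-in for stepwise selection), the in-sample correlation looks like validity even though the true validity is zero. A minimal sketch with illustrative sample sizes, not the article's actual simulation design:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 50, 20, 500  # sample size, candidate predictors, Monte Carlo runs

train_r, valid_r = [], []
for _ in range(reps):
    X = rng.standard_normal((n, p))  # predictors: pure noise
    y = rng.standard_normal(n)       # outcome: unrelated noise
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
    best = int(np.argmax(np.abs(r)))  # post-hoc pick of the "best" predictor
    train_r.append(abs(r[best]))
    # Re-check the chosen predictor on fresh data: its true validity is zero
    X2 = rng.standard_normal((n, p))
    y2 = rng.standard_normal(n)
    valid_r.append(abs(np.corrcoef(X2[:, best], y2)[0, 1]))

print(np.mean(train_r))  # inflated apparent validity (roughly .3 here)
print(np.mean(valid_r))  # near-zero validity on fresh data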
Abstract:
PURPOSE: To review, retrospectively, the possible causes of sub- or intertrochanteric fractures after screw fixation of intracapsular fractures of the proximal femur. METHODS: Eighty-four patients with an intracapsular fracture of the proximal femur were operated on between 1995 and 1998 using three cannulated 6.25 mm screws. The screws were inserted in a triangular configuration, one screw in the upper part of the femoral neck and two screws in the inferior part. Between 1999 and 2001, we used two screws proximally and one screw distally. RESULTS: In the first series, two patients died within one week after the operation. Sixty-four fractures healed without problems. Four patients developed an atrophic non-union; avascular necrosis of the femoral head was found in 11 patients. Three patients (3.6%) suffered a sub- and/or intertrochanteric fracture after a mean postoperative time of 30 days, in one case without obvious trauma. In all three cases surgical revision was necessary. Between 1999 and 2001 we did not observe any fracture after screw fixation. CONCLUSION: Two screws in the inferior part of the femoral neck create a stress riser in the subtrochanteric region, potentially inducing a fracture in the weakened bone. For internal fixation of proximal intracapsular femoral fractures, only one screw should be inserted in the inferior part of the neck.
Abstract:
BACKGROUND: A new diagnostic system, called one-step nucleic acid amplification (OSNA), has recently been designed to detect cytokeratin 19 mRNA as a surrogate for lymph node metastases. The objective of this prospective investigation was to compare the performance of OSNA with both standard hematoxylin and eosin (H&E) analysis and intensive histopathology in the detection of colon cancer lymph node metastases. METHODS: In total, 313 lymph nodes from 22 consecutive patients with stage I, II, and III colon cancer were assessed. Half of each lymph node was analyzed initially by H&E followed by an intensive histologic workup (5 levels of H&E and immunohistochemistry analyses, the gold standard for the assessment of sensitivity/specificity of OSNA), and the other half was analyzed using OSNA. RESULTS: OSNA was more sensitive in detecting small lymph node tumor infiltrates compared with H&E (11 results were OSNA positive/H&E negative). Compared with intensive histopathology, OSNA had 94.5% sensitivity, 97.6% specificity, and a concordance rate of 97.1%. OSNA resulted in an upstaging of 2 of 13 patients (15.3%) with lymph node-negative colon cancer after standard H&E examination. CONCLUSIONS: OSNA appeared to be a powerful and promising molecular tool for the detection of lymph node metastases in patients with colon cancer. OSNA had similar performance in the detection of lymph node metastases compared with intensive histopathologic investigations and appeared to be superior to standard histology with H&E. Most important, the authors concluded that OSNA may lead to a potential upstaging of >15% of patients with colon cancer.
Abstract:
Introduction: Approximately one fifth of stage I and II colon cancer patients will suffer from recurrent disease. This is partly due to the presence of small nodal tumour infiltrates, which go undetected by standard histopathology using Haematoxylin & Eosin (H&E) staining on one slice, so that these patients may not receive beneficial adjuvant therapy. A new semi-automatic diagnostic system, called one-step nucleic acid amplification (OSNA), was recently designed for the detection of cytokeratin 19 (CK19) mRNA as a surrogate for lymph node metastases. The objective of the present investigation was to compare the performance of OSNA with both standard H&E and intensive histopathologic analyses in the detection of colon cancer lymph node micro- and macro-metastases. Methods: In this prospective study, 313 lymph nodes from 22 consecutive stage I-III colon cancer patients were assessed. Half of each lymph node was analysed initially on one H&E slice followed by an intensive histologic work-up (5 levels of H&E and immunohistochemistry staining for each slice); the other half was analysed using OSNA. Results: All OSNA results were available in less than 40 minutes. Fifty-one lymph nodes were positive and 246 lymph nodes negative with both OSNA and standard H&E. OSNA was more sensitive in detecting small nodal tumour infiltrates than H&E (11 OSNA-positive/H&E-negative). Compared with intensive histopathologic analyses, OSNA detected lymph node micro- and macro-metastases with a sensitivity of 94.5%, a specificity of 97.6%, and a concordance rate of 97.1%. Upstaging due to OSNA was found in 2/13 (15.3%) initially node-negative colon cancer patients. Conclusion: OSNA appears to be a powerful and promising molecular tool for the detection of lymph node macro- and micro-metastases in colon cancer patients.
OSNA performs similarly to intensive histopathologic investigation in the detection of micro- and macro-metastases and appears to be superior to standard histology with H&E. Since OSNA allows analysis of the whole lymph node, the problems of sampling bias and of tumour deposits left undetected in uninvestigated material can be overcome, and OSNA may thus improve staging in colon cancer patients. It is hoped that this improved staging will lead to better patient selection for adjuvant therapy and, consequently, to improved local and distant control as well as better overall survival.
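For reference, the reported operating characteristics are simple functions of a 2x2 confusion matrix against the intensive-histopathology gold standard. A minimal sketch; the counts below are illustrative, chosen only to approximately reproduce the reported percentages, not the study's actual tallies:

```python
def diagnostics(tp, fp, fn, tn):
    """Sensitivity, specificity and concordance (in %) from a 2x2 confusion matrix."""
    sens = tp / (tp + fn)                    # true positives among gold-standard positives
    spec = tn / (tn + fp)                    # true negatives among gold-standard negatives
    conc = (tp + tn) / (tp + fp + fn + tn)   # overall agreement
    return round(100 * sens, 1), round(100 * spec, 1), round(100 * conc, 1)

# Illustrative counts only, roughly matching the reported 94.5% / 97.6% / ~97%
print(diagnostics(tp=52, fp=6, fn=3, tn=244))
```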
Abstract:
Real-time glycemia is a cornerstone of metabolic research, particularly when performing oral glucose tolerance tests (OGTT) or glucose clamps. From 1965 to 2009, the gold-standard device for real-time plasma glucose assessment was the Beckman glucose analyzer 2 (Beckman Instruments, Fullerton, CA), whose technology couples a glucose oxidase enzymatic assay with oxygen sensors. Since its discontinuation in 2009, researchers have been left with few choices that utilize glucose oxidase technology. The first is the YSI 2300 (Yellow Springs Instruments Corp., Yellow Springs, OH), known to be as accurate as the Beckman(1). The YSI has been used extensively in clinical research studies and is used to validate other glucose monitoring devices(2). Its major drawback is that it is relatively slow and requires high maintenance. The Analox GM9 (Analox Instruments, London), more recent and faster, is increasingly used in clinical research(3) as well as in basic science(4) (e.g., 23 papers in Diabetes and 21 in Diabetologia).
Abstract:
Spodoptera frugiperda is a pest of great economic importance in the Americas. It is attacked by several species of parasitoids, which act as biological control agents. Parasitoids are morphologically identifiable as adults, but not as larvae. Laboratory rearing conditions are not always optimal for rearing out parasitic wasps from S. frugiperda larvae collected from wild populations, and it frequently happens that parasitoids do not complete their life cycle and stop developing at the larval stage. Therefore, we explored ways to identify parasitoid larvae using molecular techniques. Sequencing is one possible technique, yet it is expensive. Here we present an alternative, cheaper way of identifying seven species of parasitoids (Cotesia marginiventris, Campoletis sonorensis, Pristomerus spinator, Chelonus insularis, Chelonus cautus, Eiphosoma vitticolle and Meteorus laphygmae) using PCR amplification of the COI gene followed by digestion with a combination of four restriction endonucleases. Each species was found to exhibit a specific pattern when the amplification product was run on an agarose gel. Identifying larvae revealed that conclusions on the species composition of a population of parasitic wasps can be biased if only the emerging adults are taken into account.
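The banding patterns produced by such a PCR-RFLP assay can be anticipated in silico from candidate sequences: cut the amplicon at every recognition site and compare the resulting fragment-length profiles. A minimal sketch; the sequences and recognition sites below are hypothetical, not the study's COI amplicons or its four enzymes:

```python
import re

# Hypothetical recognition sites for illustration; a real assay would use the
# study's four endonucleases and sequenced COI amplicons.
ENZYMES = {"EcoRI": "GAATTC", "MseI": "TTAA"}

def digest(seq, sites):
    """Return sorted fragment lengths after cutting seq at every recognition site."""
    cut_points = {0, len(seq)}
    for site in sites:
        for m in re.finditer(site, seq):
            cut_points.add(m.start())  # simplified: cut at the start of the site
    pts = sorted(cut_points)
    return sorted(b - a for a, b in zip(pts, pts[1:]))

# Two toy amplicons (hypothetical) yield distinct fragment-length "band" patterns
seq_a = "ATGGAATTCCGTTAACGGAATTCTT"
seq_b = "ATGTTAACGTTAACGTTAACGGTAA"
print(digest(seq_a, ENZYMES.values()))
print(digest(seq_b, ENZYMES.values()))
```

Distinct profiles like these are what allow each species to be called from the gel without sequencing.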
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
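The error-model construction can be sketched numerically, using ordinary PCA on discretized curves as a stand-in for FPCA and a linear regression between proxy and exact component scores; all data below are synthetic and the modeling choices are illustrative, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def pca(Y, k):
    """Center the curves (rows of Y) and keep the first k principal components."""
    mean = Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    V = Vt[:k]                         # (k, n_times) component basis
    return (Y - mean) @ V.T, V, mean   # scores, basis, mean curve

# Synthetic learning set: 40 realizations, responses sampled at 100 times
t = np.linspace(0.0, 1.0, 100)
amp = rng.uniform(0.5, 1.5, size=(40, 1))
exact = amp * np.sin(2 * np.pi * t)                             # "exact" responses
proxy = 0.8 * exact + 0.05 * rng.standard_normal(exact.shape)   # biased proxy

k = 3
Sx, Vx, mx = pca(proxy, k)
Sy, Vy, my = pca(exact, k)
# Error model: linear map from proxy scores to exact scores on the learning set
B, *_ = np.linalg.lstsq(Sx, Sy, rcond=None)

# Predict the exact curve of a new realization from its proxy response alone
truth = 1.2 * np.sin(2 * np.pi * t)
new_proxy = 0.8 * truth + 0.05 * rng.standard_normal(t.size)
pred = (new_proxy - mx) @ Vx.T @ B @ Vy + my
print(np.abs(pred - truth).max())  # small compared with the raw proxy bias
```

The diagnostic value of the reduced space shows up here too: inspecting the score-to-score fit of `B` reveals how informative the learning set is before the error model is trusted on new realizations.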
Abstract:
Medulloblastoma (MB) is the most common malignant brain tumor in children and is associated with a poor outcome. cMYC amplification characterizes a subgroup of MB with very poor prognosis, for which no targeted therapies exist so far. Here we used kinome-wide RNA interference (RNAi) screening to identify novel kinases that may be targeted to inhibit the proliferation of c-Myc-overexpressing MB. The RNAi screen identified a set of genes that could be targeted to selectively impair the proliferation of c-Myc-overexpressing MB cell lines: AKAP12 (A-kinase anchor protein), CSNK1α1 (casein kinase 1, alpha 1), EPHA7 (EPH receptor A7) and PCTK1 (PCTAIRE protein kinase 1). Using RNAi and a pharmacological inhibitor selective for PCTK1, we showed that this kinase plays a crucial role in the proliferation of MB cell lines and in the activation of the mammalian target of rapamycin (mTOR) pathway. In addition, pharmacological PCTK1 inhibition reduced the expression levels of c-Myc. Finally, targeting PCTK1 selectively impaired the tumor growth of c-Myc-overexpressing MB cells in vivo. Together, our data uncover a novel and crucial role for PCTK1 in the proliferation and survival of MB characterized by cMYC amplification.
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has increased considerably over the years. Numerous problems have consequently emerged, ranging from the prospection of new resources to the remediation of polluted aquifers. Regardless of the hydrogeological problem considered, the main challenge remains the characterization of subsurface properties. A stochastic approach is therefore necessary to represent this uncertainty, by considering multiple geological scenarios and generating a large number of geostatistical realizations. We then face the main limitation of these approaches: the computational cost of simulating complex flow processes for each of these realizations. In the first part of the thesis, this problem is investigated in the context of uncertainty propagation, where an ensemble of realizations is identified as representative of the subsurface properties. To propagate this uncertainty to the quantity of interest while limiting the computational cost, current methods rely on approximate flow models. This allows the identification of a subset of realizations representing the variability of the initial ensemble. The complex flow model is then evaluated only for this subset, and inference is made on the basis of these complex responses. Our objective is to improve the performance of this approach by using all of the available information. To this end, the subset of approximate and exact responses is used to build an error model, which then serves to correct the remaining approximate responses and predict the response of the complex model. This method maximizes the use of the available information without a perceptible increase in computation time, making uncertainty propagation more accurate and more robust.
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and complex flow models. In the second part of the thesis, this methodology is formalized mathematically by introducing a regression model between the functional responses. As this problem is ill-posed, its dimensionality must be reduced. In this respect, the novelty of the work presented comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnosis of the quality of the error model in this functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model enables a strong reduction in computation time while correctly estimating the uncertainty. Moreover, for each approximate response, a prediction of the complex response is provided by the error model. The concept of a functional error model is therefore relevant not only for uncertainty propagation but also for Bayesian inference problems. Markov chain Monte Carlo (MCMC) methods are the algorithms most commonly used to generate geostatistical realizations consistent with the observations. However, these methods suffer from a very low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. A two-stage approach, "two-stage MCMC", was introduced to avoid unnecessary simulations of the complex model through a preliminary evaluation of the proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for the two-stage MCMC.
We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 compared with a classical MCMC implementation. One question remains open: how to choose the size of the training set, and how to identify the realizations that optimize the construction of the error model. This calls for an iterative strategy in which, with each new flow simulation, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saltwater-intrusion problem in a coastal aquifer. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method) both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem by a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
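The two-stage MCMC idea, screening each proposal with the cheap approximation and spending an exact evaluation only on proposals that survive, with a corrected second-stage acceptance ratio so the exact posterior remains the invariant distribution, can be sketched on a toy one-parameter problem (all models and tuning values below are illustrative, not the thesis's flow models):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D inverse problem: infer m from one noisy observation d_obs = 1.0,
# with a standard-normal prior on m.
d_obs = 1.0

def exact(m):    # stand-in for the expensive exact flow model
    return m ** 2

def proxy(m):    # cheap, slightly biased approximation of it
    return m ** 2 + 0.1 * m

def log_post(model, m):
    return -0.5 * ((model(m) - d_obs) / 0.2) ** 2 - 0.5 * m ** 2

m = 0.5
lp_e, lp_p = log_post(exact, m), log_post(proxy, m)
exact_calls, chain = 0, []
for _ in range(2000):
    m_new = m + 0.3 * rng.standard_normal()
    lp_p_new = log_post(proxy, m_new)
    # Stage 1: screen the proposal with the proxy posterior only
    if np.log(rng.uniform()) < lp_p_new - lp_p:
        # Stage 2: run the exact model only for surviving proposals; the
        # modified ratio keeps the exact posterior as the invariant law
        exact_calls += 1
        lp_e_new = log_post(exact, m_new)
        if np.log(rng.uniform()) < (lp_e_new - lp_e) - (lp_p_new - lp_p):
            m, lp_e, lp_p = m_new, lp_e_new, lp_p_new
    chain.append(m)

print(exact_calls)  # far fewer exact evaluations than the 2000 iterations
```

Proposals rejected at stage 1 cost only a proxy evaluation, which is where the savings over one-stage MCMC come from.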
Abstract:
Chromogenic immunohistochemistry (IHC) is omnipresent in cancer diagnosis, but has also been criticized for its technical limits in quantifying the level of protein expression on tissue sections, thus potentially masking clinically relevant data. As a shift from qualitative to quantitative readouts, immunofluorescence (IF) has recently gained attention, yet the question of how precisely IF can quantify antigen expression remains unanswered, regarding in particular its technical limitations and applicability to multiple markers. Here we introduce microfluidic precision IF, which accurately quantifies the target expression level on a continuous scale based on microfluidic IF staining of standard tissue sections and low-complexity automated image analysis. We show that the level of HER2 protein expression, as continuously quantified using microfluidic precision IF in 25 breast cancer cases, including several cases with equivocal IHC results, can predict the number of HER2 gene copies as assessed by fluorescence in situ hybridization (FISH). Finally, we demonstrate that the working principle of this technology is not restricted to HER2 but can be extended to other biomarkers. We anticipate that our method has the potential to provide automated, fast and high-quality quantitative in situ biomarker data using low-cost immunofluorescence assays, as increasingly required in the era of individually tailored cancer therapy.
Abstract:
Adjusting behavior following the detection of inappropriate actions allows flexible adaptation to task demands and environmental contingencies during goal-directed behaviors. Post-error behavioral adjustments typically consist in adopting a more cautious response mode, which manifests as a slowing down of response speed. Although converging evidence implicates the dorsolateral prefrontal cortex (DLPFC) in post-error behavioral adjustment, whether and when the left or right DLPFC is critical for post-error slowing (PES), as well as the underlying brain mechanisms, remain highly debated. To resolve these issues, we used single-pulse transcranial magnetic stimulation (TMS) in healthy human adults to disrupt the left or right DLPFC selectively at various delays within the 30-180 ms interval following false alarm (FA) commission, while participants performed a standard visual Go/NoGo task. PES significantly increased after TMS disruption of the right, but not the left, DLPFC at 150 ms post-FA response. We discuss these results in terms of an involvement of the right DLPFC in reducing the detrimental effects of error detection on subsequent behavioral performance, as opposed to implementing an adaptive error-induced slowing down of response speed.