896 results for Software testing. Test generation. Grammars


Relevance: 30.00%

Abstract:

Introduction: Testing for HIV tropism is recommended before prescribing a chemokine receptor blocker. To date, in most European countries HIV tropism is determined using a phenotypic test. Recently, new data have emerged supporting the use of genotypic HIV V3-loop sequence analysis as the basis for tropism determination. The European guidelines group on clinical management of HIV-1 tropism testing was established to make recommendations to clinicians and virologists. Methods: We searched online databases for articles from January 2006 until March 2010 with the terms: tropism or CCR5-antagonist or CCR5 antagonist or maraviroc or vicriviroc. Additional articles and/or conference abstracts were identified by hand searching. This strategy identified 712 potential articles and 1240 abstracts. All were reviewed, and finally 57 papers and 42 abstracts were included and used by the panel to reach a consensus statement. Results: The panel recommends HIV-tropism testing for the following indications: i) drug-naïve patients in whom toxicity or limited therapeutic options are foreseen; ii) patients experiencing therapy failure whenever a treatment change is considered. Both the phenotypic enhanced sensitivity Trofile assay (ESTA) and genotypic population sequencing of the V3 loop are recommended for use in clinical practice. Although the panel does not recommend one methodology over the other, it is anticipated that genotypic testing will be used more frequently because of its greater accessibility, lower cost and shorter turnaround time. The panel also provides guidance on technical aspects and interpretation issues. If genotypic methods are used, triplicate PCR amplification and sequencing is advised, with interpretation by the G2P tool (clonal model) at an FPR of 10%. If the viral load is below the level of reliable amplification, proviral DNA can be used, and the panel again recommends triplicate testing and an FPR of 10%. If genotypic DNA testing is not performed in triplicate, the FPR should be increased to 20%. Conclusions: The European guidelines on clinical management of HIV-1 tropism testing provide an overview of the current literature, evidence-based recommendations for the clinical use of tropism testing, and expert guidance on unresolved issues and current developments. Current data support the use of both genotypic population sequencing and ESTA for co-receptor tropism determination. For practical reasons, genotypic population sequencing is the preferred method in Europe.
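As a purely illustrative note on the triplicate genotypic testing recommended above, the sketch below applies one common decision rule (call the virus non-R5 if any replicate's geno2pheno FPR falls at or below the cutoff); this specific rule and the example values are assumptions for illustration, not quoted from the guidelines.

```python
# Hedged sketch, not from the guidelines text: assume the virus is called
# non-R5 (X4-capable) if ANY of the triplicate sequences has a geno2pheno
# false-positive rate (FPR) at or below the chosen cutoff.
def tropism_call(fpr_values, cutoff=10.0):
    """fpr_values: G2P FPRs (%) for the replicate V3 sequences."""
    if min(fpr_values) <= cutoff:
        return "non-R5: CCR5 antagonist not recommended"
    return "R5: CCR5 antagonist may be considered"

print(tropism_call([25.0, 40.0, 18.5]))   # all replicates above 10% -> R5
print(tropism_call([25.0, 8.0, 18.5]))    # one replicate at/below 10% -> non-R5
```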

Relevance: 30.00%

Abstract:

Interpretability and power of genome-wide association studies can be increased by imputing unobserved genotypes using a reference panel of individuals genotyped at higher marker density. For many markers, genotypes cannot be imputed with complete certainty, and this uncertainty needs to be taken into account when testing for association with a given phenotype. In this paper, we compare currently available methods for testing association between uncertain genotypes and quantitative traits. We show that some previously described methods offer poor control of the false-positive rate (FPR), and that satisfactory performance of these methods is obtained only by using ad hoc filtering rules or a harsh transformation of the trait under study. We propose new methods that are based on exact maximum likelihood estimation and use a mixture model to accommodate non-normal trait distributions when necessary. The new methods adequately control the FPR and also have equal or better power compared with all previously described methods. We provide a fast software implementation of all the methods studied here; our new method requires less than one computer-day for a typical genome-wide scan with 2.5 million single-nucleotide polymorphisms and 5,000 individuals.
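For context, the simplest widely used approach to association testing with uncertain genotypes regresses the trait on the expected genotype dosage; the sketch below illustrates that baseline only (it is not the exact maximum likelihood or mixture-model method proposed in the abstract), using simulated data.

```python
# Baseline sketch only (not the authors' exact-ML / mixture-model method):
# regress a quantitative trait on the expected genotype dosage computed from
# imputation probabilities. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1000
probs = rng.dirichlet([1.0, 1.0, 1.0], size=n)    # P(genotype = 0, 1, 2) per individual
dosage = probs @ np.array([0.0, 1.0, 2.0])        # expected allele count
trait = 0.1 * dosage + rng.standard_normal(n)     # simulated quantitative trait

slope, intercept, r, p_value, se = stats.linregress(dosage, trait)
print(f"beta = {slope:.3f}, p = {p_value:.3g}")
```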

Relevance: 30.00%

Abstract:

There are as yet no validated criteria for the diagnosis of sensory neuronopathy (SNN). In a preliminary monocenter study, a set of criteria relying on clinical and electrophysiological data showed good sensitivity and specificity for a diagnosis of probable SNN. The aim of this study was to test these criteria in a French multicenter study. A total of 210 patients with sensory neuropathies from 15 francophone reference centers for neuromuscular diseases were included, with an expert diagnosis of non-SNN, SNN or suspected SNN according to the investigations performed in these centers. The expert diagnosis was reached independently of the set of criteria under test and was taken as the reference against which the proposed SNN criteria were evaluated. The criteria relied on clinical and electrophysiological data easily obtainable with routine investigations. According to the expert diagnosis, 9/61 (16.4%) of non-SNN patients, 23/36 (63.9%) of suspected SNN patients, and 102/113 (90.3%) of SNN patients were classified as SNN by the criteria. Tested against the expert diagnosis in the SNN and non-SNN groups, the SNN criteria had 90.3% (102/113) sensitivity, 85.2% (52/61) specificity, 91.9% (102/111) positive predictive value, and 82.5% (52/63) negative predictive value. Discordance between the expert diagnosis and the SNN criteria occurred in 20 cases; after analysis, 11 of these could be reallocated to a correct diagnosis in accordance with the SNN criteria. The proposed criteria may be useful for the diagnosis of probable SNN in patients with sensory neuropathy and can be applied with simple clinical and paraclinical investigations.
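The diagnostic-performance figures quoted above follow directly from the reported counts; a minimal check:

```python
# Recomputing the reported performance of the SNN criteria against the expert
# diagnosis (SNN and non-SNN groups), from the counts given in the abstract.
def pct(num, den):
    return 100.0 * num / den

sensitivity = pct(102, 113)   # SNN patients classified as SNN by the criteria
specificity = pct(52, 61)     # non-SNN patients classified as non-SNN
ppv = pct(102, 111)           # criteria-positive patients who were SNN
npv = pct(52, 63)             # criteria-negative patients who were non-SNN

print(f"sensitivity = {sensitivity:.1f}%, specificity = {specificity:.1f}%")
print(f"PPV = {ppv:.1f}%, NPV = {npv:.1f}%")   # 90.3%, 85.2%, 91.9%, 82.5%
```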

Relevance: 30.00%

Abstract:

A condition needed for testing nested hypotheses from a Bayesian viewpoint is that the prior for the alternative model concentrates mass around the small, or null, model. For testing independence in contingency tables, the intrinsic priors satisfy this requirement. Further, the degree of concentration of the priors is controlled by a discrete parameter m, the training sample size, which plays an important role in the resulting answer regardless of the sample size. In this paper we study robustness of the tests of independence in contingency tables with respect to intrinsic priors with different degrees of concentration around the null, and compare them with other "robust" results by Good and Crook. Consistency of the intrinsic Bayesian tests is established. We also discuss conditioning issues and sampling schemes, and argue that conditioning should be on either one margin or the table total, but not on both margins. Examples using real and simulated data are given.
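As a rough illustration of Bayesian testing of independence in a contingency table, the sketch below computes a Bayes factor under simple conjugate Dirichlet priors and multinomial sampling with the table total fixed; it does not implement the intrinsic priors (or the training-sample parameter m) studied in the abstract, and the example table is hypothetical.

```python
# Illustration only: Bayes factor for independence in a contingency table under
# uniform Dirichlet priors (NOT the intrinsic priors discussed above), assuming
# multinomial sampling with the table total fixed.
import numpy as np
from scipy.special import gammaln

def log_beta(alpha):
    """Log of the multivariate beta function B(alpha)."""
    return np.sum(gammaln(alpha)) - gammaln(np.sum(alpha))

def log_bf01(table, a=1.0):
    """log Bayes factor of independence (H0) against the saturated model (H1)."""
    n = np.asarray(table, dtype=float)
    rows, cols = n.sum(axis=1), n.sum(axis=0)
    # H1: one Dirichlet(a) prior over all cells
    log_m1 = log_beta(n + a) - log_beta(np.full(n.size, a))
    # H0: independent Dirichlet(a) priors on the row and column probabilities
    log_m0 = (log_beta(rows + a) - log_beta(np.full(n.shape[0], a))
              + log_beta(cols + a) - log_beta(np.full(n.shape[1], a)))
    return log_m0 - log_m1

print(log_bf01([[20, 15, 5], [10, 18, 12]]))   # hypothetical 2x3 table
```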

Relevance: 30.00%

Abstract:

Recently, morphometric measurements of the ascending aorta have been performed with ECG-gated multidetector computed tomography (MDCT) to support the development of future transcatheter therapies (TCT); nevertheless, the variability of such measurements remains unknown. Thirty patients referred for ECG-gated CT thoracic angiography were evaluated. Continuous reformations of the ascending aorta, perpendicular to the centerline, were obtained automatically with a commercially available computer-aided diagnosis (CAD) tool. Measurements of the maximal diameter were then made with the CAD tool and, separately, manually by two observers, and were repeated one month later. The Bland-Altman method, Spearman coefficients, and the Wilcoxon signed-rank test were used to evaluate the variability, the correlation, and the differences between observers. The interobserver variability in maximal diameter between the two observers was up to 1.2 mm, with limits of agreement of [-1.5, +0.9] mm, whereas the intraobserver limits were [-1.2, +1.0] mm for the first observer and [-0.8, +0.8] mm for the second. The intraobserver CAD variability was 0.8 mm. The correlation between the observers and the CAD tool was good (0.980-0.986); however, significant differences do exist (P<0.001). The maximum variability observed was 1.2 mm and should be considered in reports of measurements of the ascending aorta. The CAD tool is as reproducible as an experienced reader.
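A minimal sketch of the Bland-Altman calculation referenced above (bias and 95% limits of agreement); the paired diameters are hypothetical, not study data.

```python
# Minimal Bland-Altman sketch: bias and 95% limits of agreement between two
# sets of paired measurements.
import numpy as np

def bland_altman(m1, m2):
    diff = np.asarray(m1, dtype=float) - np.asarray(m2, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)   # limits of agreement

obs1 = [34.1, 36.5, 30.2, 41.0, 38.4]   # observer 1, maximal diameter (mm), hypothetical
obs2 = [34.6, 36.1, 30.9, 40.2, 38.9]   # observer 2, maximal diameter (mm), hypothetical
bias, loa = bland_altman(obs1, obs2)
print(f"bias = {bias:.2f} mm, limits of agreement = [{loa[0]:.2f}, {loa[1]:.2f}] mm")
```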

Relevance: 30.00%

Abstract:

Fine particulate matter from traffic increases mortality and morbidity. An important source of traffic particles is brake wear. American studies reported that cars emit brake wear particles at a rate of about 11 mg/km to 20 mg/km of driven distance. A German study estimated that brake wear contributes about 12.5% to 21% of total traffic particle emissions. The goal of this study was to build a system that allows the study of brake wear particle emissions under different braking behaviours of different car and brake types. The particles were to be characterized in terms of size, number, metal content, and elemental and organic carbon composition. In addition, the influence of different deceleration schemes on particle composition and size distribution was to be studied. Finally, the system should allow exposing human cell cultures to these particles. An exposure box (0.25 m3 volume) was built that can be mounted around a car's braking system, allowing cells to be exposed to fresh brake wear particles. Particle number, mass and surface concentrations, metals, and carbon compounds were quantified. Tests were conducted with A549 lung epithelial cells. Five different cars and two typical braking behaviours (full stop and normal deceleration) were tested. Particle number and size distribution were analysed for the first six minutes, during which two braking events occurred. Full stops produced significantly higher particle concentrations than normal deceleration (average of 23,000 vs. 10,400 particles/cm3, p = 0.016). The particle number distribution was bimodal, with one peak at 60 to 100 nm (depending on the tested car and braking behaviour) and a second peak at 200 to 400 nm. Metal concentrations varied depending on the car type. Iron (range 163 to 15,600 μg/m3) and manganese (range 0.9 to 135 μg/m3) were present in all samples, while copper was absent from some samples (<6 to 1,220 μg/m3). The overall "fleet" metal ratio was Fe:Cu:Mn = 128:14:1. Temperature and humidity varied little. A549 cells were successfully exposed in the various experimental settings and retained their viability. Culture supernatant was stored and cell culture samples were fixed to test for inflammatory response; analysis of these samples is ongoing. The established system allowed testing brake wear particle emissions from real-world cars. The large variability in chemical composition and emitted amounts of brake wear particles between car models seems to be related to differences between the brake pad compositions of different producers. Initial results suggest that the conditions inside the exposure box allow exposing human lung epithelial cells to freshly produced brake wear particles.

Relevance: 30.00%

Abstract:

Mycobacterium tuberculosis is the bacterium that causes tuberculosis (TB), a leading cause of death from infectious disease worldwide. Rapid diagnosis of resistant strains is important for the control of TB. Real-time polymerase chain reaction (RT-PCR) assays can detect all of the mutations that occur in the 81-bp core region of the M. tuberculosis rpoB gene, which are responsible for resistance to rifampin (RIF), and in codon 315 of the katG gene and the inhA ribosomal binding site, which are responsible for resistance to isoniazid (INH). The goal of this study was to assess the performance of RT-PCR compared with traditional culture-based methods for determining the drug susceptibility of M. tuberculosis. BACTEC MGIT 960 was used as the gold-standard method for phenotypic drug susceptibility testing. Susceptibilities to INH and RIF were also determined by genotyping of the katG, inhA and rpoB genes. RT-PCR based on molecular beacon probes was used to detect specific point mutations associated with resistance. The sensitivities of RT-PCR in detecting INH resistance using the katG and inhA targets individually were 55% and 25%, respectively, and 73% when combined. The sensitivity of the RT-PCR assay in detecting RIF resistance was 99%. The median time to complete the RT-PCR assay was three to four hours. The specificities of both tests were 100%. Our results confirm that RT-PCR can detect INH and RIF resistance in less than four hours with high sensitivity.

Relevance: 30.00%

Abstract:

A cohort of 123 adult contacts was followed for 18-24 months (86 completed the follow-up) to compare conversion and reversion rates based on two serial measurements of QuantiFERON (QFT) and the tuberculin skin test (TST) (PPD from TUBERSOL, Aventis Pasteur, Canada) for diagnosing latent tuberculosis (TB) in household contacts of TB patients, using conventional (C) and borderline-zone (BZ) definitions. Questionnaires were used to obtain information on TB exposure, TB risk factors and socio-demographic data. QFT (IU/mL) conversion was defined as <0.35 to ≥0.35 (C) or <0.35 to >0.70 (BZ), and reversion as ≥0.35 to <0.35 (C) or ≥0.35 to <0.20 (BZ); TST (mm) conversion was defined as <5 to ≥5 (C) or <5 to >10 (BZ), and reversion as ≥5 to <5 (C). The QFT conversion and reversion rates were 10.5% and 7% with the C definitions and 8.1% and 4.7% with the BZ definitions, respectively. The TST rates were higher than those for QFT, especially with the C definitions (conversion 23.3%, reversion 9.3%). The QFT conversion and reversion rates were higher for TST ≥5; for TST, both rates were lower for QFT <0.35. No risk factors were associated with the probability of converting or reverting. The inconsistency and apparent randomness of serial testing are confusing and add to the limitations of these tests and definitions for the follow-up of close TB contacts.
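The conversion and reversion definitions quoted above translate directly into a classification rule; the sketch below encodes the QFT definitions (values in IU/mL), with hypothetical serial measurements as examples.

```python
# Encoding of the QFT conversion/reversion definitions quoted above (IU/mL);
# the example serial measurements are hypothetical.
def qft_status(baseline, follow_up, definition="conventional"):
    if definition == "conventional":
        if baseline < 0.35 and follow_up >= 0.35:
            return "conversion"
        if baseline >= 0.35 and follow_up < 0.35:
            return "reversion"
    elif definition == "borderline_zone":
        if baseline < 0.35 and follow_up > 0.70:
            return "conversion"
        if baseline >= 0.35 and follow_up < 0.20:
            return "reversion"
    return "no change"

print(qft_status(0.10, 0.50))                       # conversion (conventional)
print(qft_status(0.10, 0.50, "borderline_zone"))    # no change (does not exceed 0.70)
print(qft_status(0.60, 0.15, "borderline_zone"))    # reversion
```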

Relevance: 30.00%

Abstract:

Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has grown to account for up to 70% of the trading volume on some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped under the so-called technical models and the latter under so-called pairs trading. Technical models have been dismissed by financial theoreticians, but we show that they can be properly cast as a well-defined scientific predictor if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; more technically, the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a cointegration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reverting process and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor but could also lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtesting is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is the reason we emphasize the calibration of the strategies' parameters to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done with high-frequency sampling to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtesting with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis. No other mathematical or statistical software was used.
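As a small illustration of the mean-reversion side of the strategies discussed above, the sketch below generates a pairs-trading-style signal from the rolling z-score of a spread and drives it with a synthetic Ornstein-Uhlenbeck-like series; it is written in Python rather than the MATLAB used in the project, and the window and thresholds are arbitrary assumptions.

```python
# Sketch of a mean-reversion (pairs-trading-style) signal: go long/short the
# spread when its rolling z-score leaves a band, flatten when it reverts.
# Window, thresholds and the synthetic spread are arbitrary assumptions.
import numpy as np

def zscore_signal(spread, window=60, entry=2.0, exit_z=0.5):
    spread = np.asarray(spread, dtype=float)
    position = np.zeros_like(spread)
    pos = 0
    for t in range(window, len(spread)):
        w = spread[t - window:t]
        z = (spread[t] - w.mean()) / (w.std(ddof=1) + 1e-12)
        if pos == 0 and z > entry:
            pos = -1          # spread unusually high: short it
        elif pos == 0 and z < -entry:
            pos = +1          # spread unusually low: buy it
        elif pos != 0 and abs(z) < exit_z:
            pos = 0           # spread has reverted: close
        position[t] = pos
    return position

# Synthetic Ornstein-Uhlenbeck-like spread for demonstration
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = x[t - 1] + 0.05 * (0.0 - x[t - 1]) + 0.1 * rng.standard_normal()
print(zscore_signal(x)[-10:])
```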

Relevance: 30.00%

Abstract:

In Brazil, human and canine visceral leishmaniasis (CVL) caused by Leishmania infantum has undergone urbanisation since 1980, constituting a public health problem, and serological tests are the tools of choice for identifying infected dogs. Until recently, the Brazilian zoonoses control program recommended the enzyme-linked immunosorbent assay (ELISA) and the indirect immunofluorescence assay (IFA) as the screening and confirmatory methods, respectively, for the detection of canine infection. The purpose of this study was to estimate the accuracy of ELISA and IFA in parallel or serial combinations. The reference standard comprised the results of direct visualisation of parasites in histological sections, immunohistochemical testing, or isolation of the parasite in culture. Samples from 98 cases and 1,327 non-cases were included. Individually, the tests showed sensitivities of 91.8% and 90.8% and specificities of 83.4% and 53.4% for ELISA and IFA, respectively. When the tests were used in parallel combination, sensitivity reached 99.2%, while specificity dropped to 44.8%. When used in serial combination (ELISA followed by IFA), decreased sensitivity (83.3%) and increased specificity (92.5%) were observed. The serial testing approach improved specificity with a moderate loss in sensitivity. This strategy could partially fulfill the needs of public health services and dog owners for a more accurate diagnosis of CVL.
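Under an assumption of conditional independence between the two tests, the standard parallel/serial combination formulas applied to the individual ELISA and IFA figures reported above closely reproduce the combined values; a quick check:

```python
# Standard parallel/serial combination formulas under an assumption of
# conditional independence between tests, applied to the individual ELISA and
# IFA figures reported above; the outputs approximate the combined values
# observed in the study.
def parallel(se1, sp1, se2, sp2):
    # positive if either test is positive
    return 1 - (1 - se1) * (1 - se2), sp1 * sp2

def serial(se1, sp1, se2, sp2):
    # positive only if both tests are positive
    return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

se_elisa, sp_elisa = 0.918, 0.834
se_ifa, sp_ifa = 0.908, 0.534
print("parallel:", parallel(se_elisa, sp_elisa, se_ifa, sp_ifa))   # ~ (0.992, 0.445)
print("serial:  ", serial(se_elisa, sp_elisa, se_ifa, sp_ifa))     # ~ (0.833, 0.923)
```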

Relevance: 30.00%

Abstract:

BACKGROUND The skin patch test is the gold-standard method for diagnosing contact allergy. Although it has been used for more than 100 years, the patch test procedure is performed with variability around the world. A number of factors can influence the test results, namely the quality of the reagents used, the timing of the application, the patch test series (allergens/haptens) used for testing, the appropriate interpretation of the skin reactions and the evaluation of the patient's benefit. METHODS We performed an Internet-based survey with 38 questions covering the educational background of respondents, patch test methods and interpretation. The questionnaire was distributed among all representatives of national member societies of the World Allergy Organization (WAO) and the WAO Junior Members Group. RESULTS One hundred sixty-nine completed surveys were received from 47 countries. The majority of participants had more than 5 years of clinical practice (61%) and routinely carried out patch tests (70%). Both allergists and dermatologists were responsible for carrying out the patch tests. We observed the use of many different guidelines regardless of geographical distribution. The use of home-made preparations was indicated by 47% of participants, and 73% of respondents performed 2 or 3 readings. Most of the respondents indicated having patients with adverse reactions, including erythroderma (12%); however, only 30% of members completed a consent form before conducting the patch test. DISCUSSION The heterogeneity of patch test practices may be influenced by the level of awareness of clinical guidelines, different training backgrounds, accessibility to various types of devices, the patch test series (allergens/haptens) used for testing, the type of clinical practice (public or private, clinical or research-based institution), infrastructure availability, financial/commercial implications and regulations, among others. CONCLUSION There is a lack of worldwide homogeneity of patch test procedures, which raises concerns about the need for standardization and harmonization of this important diagnostic procedure.

Relevance: 30.00%

Abstract:

Most ventricular assist devices (VADs) currently used in infants are extracorporeal. These VADs require long-term anticoagulation therapy and extensive surgery, and two devices are needed for biventricular support. We designed a biventricular assist device based on a shape memory alloy that reproduces the hemodynamic effects of cardiomyoplasty, supporting the heart with a compressing movement, and evaluated its performance in a dedicated mock-up system. Nitinol fibers are the device's key component. Ejection fraction (EF), cardiac output (CO), and generated systolic pressure were measured on a test bench. The test bench settings were a preload range of 0-15 mm Hg, an afterload range of 0-160 mm Hg, and heart rates (HR) of 20, 30, 40, and 60 beats/min. A power supply of 15 volts and 3.5 amperes was necessary. The EF ranged from 34.4% down to 1.2% as the afterload and HR increased, with CO falling from 180 to 6 ml/min. The device generated a maximal systolic pressure of 25 mm Hg. Cardiac compression for biventricular assistance of a child-sized heart using a shape memory alloy is technically feasible. Further testing remains necessary to assess this VAD's in vivo performance range and its reliability.

Relevance: 30.00%

Abstract:

Hereditary non-structural diseases such as catecholaminergic polymorphic ventricular tachycardia (CPVT), long QT syndrome, and Brugada syndrome, as well as structural diseases such as hypertrophic cardiomyopathy (HCM) and arrhythmogenic right ventricular cardiomyopathy (ARVC), cause a significant percentage of sudden cardiac deaths in the young. In these cases, genetic testing can be useful and does not require proxy consent if it is carried out at the request of judicial authorities as part of a forensic death investigation. Mutations in several genes are implicated in arrhythmic syndromes, including SCN5A, KCNQ1, KCNH2, RyR2, and the genes causing HCM. If the victim's test is positive, this information is important for relatives who might themselves be at risk of carrying the disease-causing mutation. There is no consensus on how professionals should proceed in this context. This article discusses the ethical and legal arguments for and against three options: genetic testing of the deceased victim only; counselling of relatives before testing the victim; and counselling restricted to relatives of victims who tested positive for mutations causing serious and preventable diseases. Legal cases are cited that pertain to the duty of geneticists and other physicians to warn relatives. Although the claim for a legal duty is tenuous, recent publications and guidelines suggest that geneticists and others involved in the multidisciplinary approach to sudden death (SD) cases may nevertheless have an ethical duty to inform relatives of SD victims. Several practical problems remain, pertaining to the costs of testing and counselling and to the need to obtain permission from judicial authorities.

Relevance: 30.00%

Abstract:

Usability is critical for an interactive software system to be considered successful. Usability testing and evaluation during product development have gained wide acceptance as a strategy to improve product quality. Introducing usability perspectives early in a product is very important in order to give clear visibility of quality aspects not only to the developers but also to the users who test the product. However, usability evaluation and testing are not commonly considered an essential element of the software development process. This paper therefore presents a proposal to introduce usability evaluation and testing into software development through the reuse of software artifacts. Additionally, it suggests introducing an auditor within the classification of actors for usability tests, and it proposes an improvement of the checklists used for heuristic evaluation, adding quantitative and qualitative aspects to them.
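As a purely hypothetical illustration of a heuristic-evaluation checklist item carrying both quantitative and qualitative aspects, and of the proposed auditor role, one could structure the data along these lines (field names and scale are assumptions, not taken from the paper):

```python
# Hypothetical data structure for heuristic-evaluation checklists that combine
# a quantitative score with qualitative comments; not the paper's own artifact.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChecklistItem:
    heuristic: str        # e.g. "Visibility of system status"
    question: str         # what the evaluator checks
    score: int            # quantitative aspect: 0 (fails) to 4 (fully met)
    comments: str = ""    # qualitative aspect: evaluator's observations

@dataclass
class HeuristicEvaluation:
    evaluator_role: str   # e.g. "auditor", per the proposed actor classification
    items: List[ChecklistItem] = field(default_factory=list)

    def mean_score(self) -> float:
        return sum(i.score for i in self.items) / len(self.items)

review = HeuristicEvaluation("auditor", [
    ChecklistItem("Visibility of system status", "Does the UI show progress feedback?", 3,
                  "Progress bar present but no time estimate."),
    ChecklistItem("Error prevention", "Are destructive actions confirmed?", 2,
                  "Delete has no confirmation dialog."),
])
print(review.mean_score())
```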

Relevance: 30.00%

Abstract:

The main objective of this final degree project is to develop a web application offering a simplified integrated development environment (IDE) for the C/C++ language, in which baccalaureate students (or secondary-school students in general) can take their first steps in studying it. The aim is to provide students with a friendly environment in which they can properly follow the exercises proposed by the teacher, regardless of their particular circumstances (the students' temporary or permanent location, the operating system they use, the device used to do the exercises, etc.).