998 results for Test station
Abstract:
BACKGROUND: In a time-course microarray experiment, the expression level for each gene is observed across a number of time-points in order to characterize the temporal trajectories of the gene-expression profiles. For many of these experiments, the scientific aim is the identification of genes for which the trajectories depend on an experimental or phenotypic factor. There is an extensive recent body of literature on statistical methodology for addressing this analytical problem. Most of the existing methods are based on estimating the time-course trajectories using parametric or non-parametric mean regression methods. The sensitivity of these regression methods to outliers, an issue that is well documented in the statistical literature, should be of concern when analyzing microarray data. RESULTS: In this paper, we propose a robust testing method for identifying genes whose expression time profiles depend on a factor. Furthermore, we propose a multiple testing procedure to adjust for multiplicity. CONCLUSIONS: Through an extensive simulation study, we illustrate the performance of our method. Finally, we report the results from applying our method to a case study and discuss potential extensions.
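The abstract does not name the specific multiplicity adjustment it proposes; as a generic illustration of what such a procedure does, the sketch below implements the standard Benjamini-Hochberg step-up correction. The p-values are invented for the example; this is not the paper's method.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Flag hypotheses rejected by the BH step-up procedure at FDR level alpha."""
    p = np.asarray(p_values, dtype=float)
    order = np.argsort(p)
    m = len(p)
    # BH critical values: (rank / m) * alpha, compared against sorted p-values
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest rank meeting its threshold
        rejected[order[: k + 1]] = True   # reject all hypotheses up to that rank
    return rejected

# Hypothetical per-gene p-values
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.2, 0.7]))
```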
Abstract:
Background: Acute febrile respiratory illnesses, including influenza, account for a large proportion of ambulatory care visits worldwide. In the developed world, these encounters commonly result in unwarranted antibiotic prescriptions; data from more resource-limited settings are lacking. The purpose of this study was to describe the epidemiology of influenza among outpatients in southern Sri Lanka and to determine if access to rapid influenza test results was associated with decreased antibiotic prescriptions.
Methods: In this pretest-posttest study, consecutive patients presenting from March 2013 to April 2014 to the Outpatient Department of the largest tertiary care hospital in southern Sri Lanka were surveyed for influenza-like illness (ILI). Patients meeting World Health Organization criteria for ILI (acute onset of fever ≥38.0°C and cough in the prior 7 days) were enrolled. Consenting patients completed a structured questionnaire and underwent physical examination and nasal/nasopharyngeal sampling. Rapid influenza A/B testing (Veritor System, Becton Dickinson) was performed on all patients, but test results were only released to patients and clinicians during the second phase of the study (December 2013 to April 2014).
Results: We enrolled 397 patients with ILI, with 217 (54.7%) adults ≥12 years and 188 (47.4%) females. A total of 179 (45.8%) tested positive for influenza by rapid testing, with April to July 2013 and September to November 2013 being the periods with the highest proportion of ILI due to influenza. A total of 310 (78.1%) patients with ILI received a prescription for an antibiotic from their outpatient provider. The proportion of patients prescribed antibiotics decreased from 81.4% in the first phase to 66.3% in the second phase (p = 0.005); among rapid influenza-positive patients, antibiotic prescriptions decreased from 83.7% in the first phase to 56.3% in the second phase (p = 0.001). On multivariable analysis, having a positive rapid influenza test available to clinicians was associated with decreased antibiotic use (OR 0.20, 95% CI 0.05-0.82).
Conclusions: Influenza virus accounted for almost 50% of acute febrile respiratory illness in this study, but most patients were prescribed antibiotics. Providing rapid influenza test results to clinicians was associated with fewer antibiotic prescriptions, but overall prescription of antibiotics remained high. In this developing country setting, a multi-faceted approach that includes improved access to rapid diagnostic tests may help decrease antibiotic use and combat antimicrobial resistance.
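For readers unfamiliar with the odds ratios reported above, the sketch below shows how an unadjusted odds ratio with a Wald 95% confidence interval is computed from a 2x2 table. The counts are hypothetical, and the study's OR of 0.20 came from a multivariable model, not from this simple calculation.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
    a, b = outcome yes/no in the exposed group;
    c, d = outcome yes/no in the unexposed group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: antibiotic prescribed yes/no, with vs without
# a positive rapid test result available to the clinician
print(odds_ratio_ci(27, 21, 82, 16))
```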
Abstract:
BACKGROUND: Measurement of CD4+ T-lymphocytes (CD4) is a crucial parameter in the management of HIV patients, particularly in determining eligibility to initiate antiretroviral treatment (ART). A number of technologies exist for CD4 enumeration, with considerable variation in cost, complexity, and operational requirements. We conducted a systematic review of the performance of technologies for CD4 enumeration. METHODS AND FINDINGS: Studies were identified by searching the electronic databases MEDLINE and EMBASE using a pre-defined search strategy. The primary outcome measures for test accuracy were bias and limits of agreement with a reference standard, and misclassification probabilities around CD4 thresholds of 200 and 350 cells/μl over a clinically relevant range. The secondary outcome measure was test imprecision, expressed as the percentage coefficient of variation. Thirty-two studies evaluating 15 CD4 technologies were included, of which less than half presented data on bias and misclassification relative to the same reference technology. At CD4 counts <350 cells/μl, bias ranged from -35.2 to +13.1 cells/μl, while at counts >350 cells/μl, bias ranged from -70.7 to +47 cells/μl, compared to the BD FACSCount as a reference technology. Misclassification around the threshold of 350 cells/μl ranged from 1-29% for upward classification, resulting in under-treatment, and 7-68% for downward classification, resulting in over-treatment. Less than half of these studies reported within-laboratory precision or reproducibility of the CD4 values obtained. CONCLUSIONS: A wide range of bias and percentage misclassification around treatment thresholds was reported for the CD4 enumeration technologies included in this review, with few studies reporting assay precision.
The lack of standardised methodology on test evaluation, including the use of different reference standards, is a barrier to assessing relative assay performance and could hinder the introduction of new point-of-care assays in countries where they are most needed.
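The misclassification figures quoted above arise from the interaction of measurement error with a fixed treatment threshold. As a sketch of the mechanism (not the review's own method), the following assumes normally distributed measurement error with a given bias and standard deviation; all numbers are hypothetical.

```python
import math

def misclassification_prob(true_cd4, bias, sd, threshold=350.0):
    """Probability that a measured count falls on the wrong side of the
    threshold, assuming measured ~ N(true + bias, sd^2)."""
    z = (threshold - (true_cd4 + bias)) / sd
    upper_tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(measured > threshold)
    # Upward misclassification for true counts at/below threshold,
    # downward misclassification for true counts above it
    return upper_tail if true_cd4 <= threshold else 1.0 - upper_tail

# Hypothetical instrument: bias of -10 cells/ul, SD of 40 cells/ul;
# a patient with a true count of 300 cells/ul (treatment-eligible)
print(round(misclassification_prob(300, -10, 40), 3))
```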
Abstract:
Association studies of quantitative traits have often relied on methods in which a normal distribution of the trait is assumed. However, quantitative phenotypes from complex human diseases are often censored, highly skewed, or contaminated with outlying values. We recently developed a rank-based association method that takes into account censoring and makes no distributional assumptions about the trait. In this study, we applied our new method to age-at-onset data on ALDX1 and ALDX2. Both traits are highly skewed (skewness > 1.9) and often censored. We performed a whole genome association study of age at onset of the ALDX1 trait using Illumina single-nucleotide polymorphisms. Only slightly more than 5% of markers were significant. However, we identified two regions on chromosomes 14 and 15, which each have at least four significant markers clustering together. These two regions may harbor genes that regulate age at onset of ALDX1 and ALDX2. Future fine mapping of these two regions with densely spaced markers is warranted.
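The skewness statistic cited above (skewness > 1.9) can be computed as below; this is the generic adjusted Fisher-Pearson sample skewness applied to toy right-skewed data, not the study's age-at-onset data.

```python
import math

def sample_skewness(x):
    """Adjusted Fisher-Pearson sample skewness: g1 with the usual
    small-sample correction factor sqrt(n(n-1))/(n-2)."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n  # second central moment
    m3 = sum((v - mean) ** 3 for v in x) / n  # third central moment
    g1 = m3 / m2 ** 1.5
    return g1 * math.sqrt(n * (n - 1)) / (n - 2)

# Toy right-skewed sample (a few late onsets pull the tail to the right)
ages = [18, 19, 20, 21, 22, 23, 25, 30, 45, 60]
print(round(sample_skewness(ages), 2))
```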
Abstract:
The screening and treatment of latent tuberculosis (TB) infection reduces the risk of progression to active disease and is currently recommended for HIV-infected patients. The aim of this study is to evaluate, in a low TB incidence setting, the potential contribution of an interferon-gamma release assay in response to the mycobacterial latency antigen Heparin-Binding Haemagglutinin (HBHA-IGRA), to the detection of Mycobacterium tuberculosis infection in HIV-infected patients.
Abstract:
In this work we present a statistical analysis of the Mathematics Prior Knowledge Test (TCPM), designed to measure the initial state of basic mathematical skills and knowledge of students entering science and technology degree programmes at the Facultad de Ciencias Físico, Matemáticas y Naturales of the Universidad Nacional de San Luis. The aim of the research is to examine the diagnostic test used, with a view to its possible later use. To determine the soundness of the test, we analysed the quality, discrimination and difficulty index of the items, as well as the validity and reliability of the diagnostic; for this statistical analysis we used the TestGraf and SPSS programs. The test was administered to 698 students entering the University in the 2002 academic year. From the research we were able to infer that the diagnostic test proved difficult for the population to which it was applied, of acceptable reliability, and of good item quality, with varied difficulty and acceptable discrimination.
Abstract:
Numerical predictions produced by the SMARTFIRE fire field model are compared with experimental data. The predictions consist of gas temperatures at several locations within the compartment over a 60 min period. The test fire, produced by a burning wood crib, attained a maximum heat release rate of approximately 11 MW. The fire is intended to represent a non-spreading fire (i.e. a single fuel source) in a moderately sized ventilated room. The experimental data formed part of the CIB Round Robin test series. Two simulations are produced, one involving a relatively coarse mesh and the other a finer mesh. While the SMARTFIRE simulations made use of a simple volumetric heat release rate model, both simulations were found capable of reproducing the overall qualitative results. Both simulations tended to overpredict the measured temperatures. However, the finer mesh simulation was better able to reproduce the qualitative features of the experimental data. The maximum recorded experimental temperature (1214°C after 39 min) was over-predicted in the fine mesh simulation by 12%.
Abstract:
The strong spatial and temporal variability of traffic-related air pollution detected at roadside locations in a number of European cities has raised the question of how representative the site and time period of air quality measurements actually can be. To address this question, a 7-month sampling campaign was carried out on a major road axis (Avenue Leclerc) leading to a very busy intersection (Place Basch) in central Paris, covering the surroundings of a permanent air quality monitoring station. This station has recorded the highest CO and NOx concentrations during recent years in the region of Paris. Diffusive BTX samplers as well as a mobile monitoring unit equipped with real-time CO, NOx and O3 analysers and meteorological instruments were used to reveal the small-scale pollution gradients and their temporal trends near the permanent monitoring station. The diffusive measurements provided 7-day averages of benzene, toluene, xylene and other hydrocarbons at different heights above the ground and distances from the kerb covering summer and winter periods. Relevant traffic and meteorological data were also obtained on an hourly basis. Furthermore, three semiempirical dispersion models (STREET-SRI, OSPM and AEOLIUS) were tested for an asymmetric canyon location in Av. Leclerc. The analysis of this comprehensive data set has helped to assess the representativeness of air quality monitoring information.
Abstract:
This paper investigates an isothermal fatigue test for solder joints developed at the NPL. The test specimen is a lap joint between two copper arms. During the test, the displacement at the ends of the copper arms is controlled and the force is measured. The modelling results in the paper show that the displacement across the solder joint is not equal to the displacement applied at the end of the specimen. This is due to deformation within the copper arms. A method is described to compensate for this difference. The strain distribution in the solder was determined by finite element analysis and compared to the distribution generated by a theoretical 'ideal' test, which generates an almost pure shear mode in the solder. By using a damage-based constitutive law, the shape of the crack generated in the specimen has been predicted for both the actual test and the ideal pure shear test. Results from the simulations are also compared with experimental data using SnAgCu solder.
Abstract:
This study investigates the use of computer-modelled versus directly experimentally determined fire hazard data for assessing survivability within buildings using evacuation models incorporating Fractional Effective Dose (FED) models. The objective is to establish a link between effluent toxicity, measured using a variety of small- and large-scale tests, and building evacuation. For the scenarios under consideration, fire simulation is typically used to determine the time at which non-survivable conditions develop within the enclosure, for example, when smoke or toxic effluent falls below a critical height deemed detrimental to evacuation, or when the radiative fluxes reach a critical value leading to the onset of flashover. The evacuation calculation would then be used to determine whether people within the structure could evacuate before these critical conditions develop.
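An FED model of the kind referred to above accumulates a fractional dose over the exposure history and flags incapacitation once the fraction reaches 1. A minimal single-toxicant sketch using rectangle-rule integration; the CO concentrations and the lethal (C·t) dose are hypothetical, not values from the study.

```python
def fractional_effective_dose(concentrations_ppm, dt_min, ct_lethal_ppm_min):
    """Accumulate FED(t) = sum_i C_i * dt / (C*t)_eff for one toxicant.
    Returns the final FED and the time (min) at which FED first reached 1.0,
    or None if it never did."""
    fed = 0.0
    for i, c in enumerate(concentrations_ppm):
        fed += c * dt_min / ct_lethal_ppm_min
        if fed >= 1.0:
            return fed, (i + 1) * dt_min  # time of predicted incapacitation
    return fed, None

# Hypothetical CO concentrations sampled every minute as the fire grows,
# against a hypothetical lethal dose of 35000 ppm-min
co = [500, 2000, 5000, 9000, 12000, 15000]
fed, t = fractional_effective_dose(co, 1.0, 35000.0)
print(round(fed, 2), t)
```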
Abstract:
In this paper, coupled fire and evacuation simulation tools are used to simulate the Station Nightclub fire. This study differs from the analysis conducted by NIST in three key areas: (1) an enhanced flame spread model and (2) a toxicity generation model are used, and (3) the evacuation is coupled to the fire simulation. Predicted early burning locations in the full-scale fire simulation are in line with photographic evidence, and the predicted onset of flashover is similar to that produced by NIST. However, it is suggested that both predictions of the flashover time are approximately 15 sec earlier than actually occurred. Three evacuation scenarios are then considered, two of which are coupled with the fire simulation. The coupled fire and evacuation simulation suggests that 180 fatalities result from a building population of 460. With a 15 sec delay in the fire timeline, the evacuation simulation produces 84 fatalities, which is in good agreement with the actual number of fatalities. An important observation resulting from this work is that traditional fire engineering ASET/RSET calculations, which do not couple the fire and evacuation simulations, have the potential to be considerably over-optimistic in terms of the level of safety achieved by building designs.
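The traditional ASET/RSET comparison criticised above reduces to checking that the available safe egress time exceeds the required safe egress time by some margin. A minimal sketch; the safety factor of 1.5 and the times are assumptions for illustration, not figures from the paper.

```python
def aset_rset_margin(aset_s, rset_s, safety_factor=1.5):
    """Uncoupled pass/fail check: the design passes if ASET is at least
    safety_factor times RSET. Returns (margin in seconds, pass flag)."""
    required = rset_s * safety_factor
    return aset_s - required, aset_s >= required

# Hypothetical design: 300 s available, 180 s required egress time
margin, ok = aset_rset_margin(aset_s=300.0, rset_s=180.0)
print(margin, ok)
```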
Abstract:
This paper presents data relating to pedestrian escalator behaviour collected in an underground station in Shanghai, China. While the data were not collected under emergency or simulated emergency conditions, it is argued that data collected under rush-hour conditions, where commuters are under time pressure to get to work on time, may be used to approximate emergency evacuation conditions, where commuters are also under time pressure to exit the building as quickly as possible. Data pertaining to escalator/stair choice, the proportion of walkers to riders, walker speeds and side usage are presented. The collected data are used to refine the buildingEXODUS escalator model, allowing agents to select whether to use an escalator or a neighbouring parallel stair based on congestion conditions at the base of the stair/escalator and expected travel times. The new model, together with the collected data, is used to simulate a series of hypothetical evacuation scenarios to demonstrate the impact of escalators on evacuation performance.
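The escalator/stair choice described above can be sketched as a comparison of expected travel times, where the escalator option carries a queuing delay at its base. This is a simplified illustration of the idea, not the buildingEXODUS algorithm; all parameter names and values are hypothetical.

```python
def choose_route(stair_time_s, esc_ride_time_s, esc_queue_len, service_rate_pps):
    """Pick stair vs escalator by expected travel time, where the
    escalator's expected time = queuing delay at its base + ride time."""
    esc_time = esc_queue_len / service_rate_pps + esc_ride_time_s
    return "escalator" if esc_time < stair_time_s else "stair"

# Hypothetical agent: 45 s to walk the stair, 30 s escalator ride,
# 20 people queued at the escalator served at 1.5 persons/s
print(choose_route(stair_time_s=45.0, esc_ride_time_s=30.0,
                   esc_queue_len=20, service_rate_pps=1.5))
```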