944 results for mixed verification methods


Relevance: 30.00%

Abstract:

Aims The effects of a system based on minimally trained first responders (FR) dispatched simultaneously with the emergency medical services (EMS) of the local hospital in a mixed urban and rural area in Northwestern Switzerland were examined. Methods and results In this prospective study, 500 voluntary fire fighters received 4 h of training in basic life support with automated external defibrillation (AED). FR and EMS were simultaneously dispatched in a two-tier rescue system. During the years 2001–2008, response times, resuscitation interventions and outcomes were monitored. 1334 emergencies were included. The FR reached the patients (mean age 60.4 ± 19 years; 65% male) within 6 ± 3 min after emergency calls, compared to 12 ± 5 min for the EMS (p < 0.0001). Seventy-six percent of the 297 out-of-hospital cardiac arrests (OHCAs) occurred at home. Only 3 emergencies with resuscitation attempts occurred at the main railway station, which was equipped with an on-site AED. FR were on the scene before arrival of the EMS in 1166 (87.4%) cases. Of these, the FR used the AED in 611 patients for monitoring or defibrillation. CPR was initiated by the FR in 164 of 238 resuscitated patients (68.9%). 124 patients were defibrillated, of whom 93 (75.0%) were defibrillated first by the FR. Eighteen patients (of whom 13 were defibrillated by the FR) were discharged from hospital in good neurological condition. Conclusions Minimally trained fire fighters integrated into an EMS as FR contributed substantially to an increase in the survival rate of OHCAs in a mixed urban and rural area.

Relevance: 30.00%

Abstract:

Purpose: To investigate the dosimetric properties of an electronic portal imaging device (EPID) for electron beam detection and to evaluate its potential for quality assurance (QA) of modulated electron radiotherapy (MERT). Methods: A commercially available EPID was used to detect electron beams shaped by a photon multileaf collimator (MLC) at a source-surface distance of 70 cm. The fundamental dosimetric properties, such as reproducibility, dose linearity, field size response, energy response, and saturation, were investigated for electron beams. A new method to acquire the flood field for the EPID calibration was tested. For validation purposes, profiles of open fields and various MLC fields (square and irregular) were measured with a diode in water and compared to the EPID measurements. Finally, in order to use the EPID for QA of MERT delivery, a method was developed to reconstruct EPID two-dimensional (2D) dose distributions at a water-equivalent depth of 1.5 cm. Comparisons were performed with film measurements for static and dynamic monoenergy fields as well as for multienergy fields composed of several segments of different electron energies. Results: The advantageous EPID dosimetric properties already known for photons, such as reproducibility and linearity with dose and dose rate, were found to hold equally for electron detection. The flood-field calibration method was proven to be effective, and the EPID was capable of accurately reproducing the dose measured in water at 1.0 cm depth for 6 MeV, 1.3 cm for 9 MeV, and 1.5 cm for 12, 15, and 18 MeV. The deviations between the output factors measured with the EPID and in water at these depths were within ±1.2% for all energies, with a mean deviation of 0.1%. The average gamma pass rate (criteria: 1.5%, 1.5 mm) for profile comparisons between the EPID and measurements in water was better than 99% for all the energies considered in this study. When comparing the reconstructed EPID 2D dose distributions at 1.5 cm depth to film measurements, the gamma pass rate (criteria: 2%, 2 mm) was better than 97% for all the tested cases. Conclusions: This study demonstrates the high potential of the EPID for electron dosimetry and, in particular, confirms the possibility of using it as an efficient verification tool for MERT delivery.
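
The gamma pass rates quoted above (e.g., criteria of 2%, 2 mm) come from a gamma-index comparison between two dose distributions. As a rough, hedged illustration of what such a comparison computes, the Python sketch below evaluates a global 1D gamma index between a reference and an evaluated profile; the profiles, grid spacing, and criteria are invented placeholders rather than data from this study, and clinical gamma analyses are performed in 2D or 3D with interpolation.

```python
import numpy as np

def gamma_1d(ref, ref_x, meas, meas_x, dose_crit=0.02, dist_crit=2.0):
    """Global 1D gamma index: for each reference point, take the minimum
    combined dose-difference / distance-to-agreement deviation over all
    evaluated points."""
    ref, meas = np.asarray(ref, float), np.asarray(meas, float)
    ref_x, meas_x = np.asarray(ref_x, float), np.asarray(meas_x, float)
    norm = ref.max()                      # global normalisation to the reference maximum
    gammas = np.empty_like(ref)
    for i in range(ref.size):
        dose_term = (meas - ref[i]) / (dose_crit * norm)
        dist_term = (meas_x - ref_x[i]) / dist_crit
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

# Toy profiles on a 1 mm grid (illustrative only)
x = np.arange(0.0, 100.0, 1.0)
reference = np.exp(-((x - 50.0) / 20.0) ** 2)
evaluated = 1.01 * np.exp(-((x - 50.5) / 20.0) ** 2)
g = gamma_1d(reference, x, evaluated, x, dose_crit=0.02, dist_crit=2.0)
print(f"gamma pass rate: {100 * np.mean(g <= 1):.1f}%")
```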

Relevance: 30.00%

Abstract:

• Premise of the study: Isometric and allometric scaling of a conserved floral plan could provide a parsimonious mechanism for rapid and reversible transitions between breeding systems. This scaling may occur during transitions between predominant autogamy and xenogamy, contributing to the maintenance of a stable mixed mating system. • Methods: We compared nine disjunct populations of the polytypic, mixed mating species Oenothera flava (Onagraceae) to two parapatric relatives, the obligately xenogamous species O. acutissima and the mixed mating species O. triloba. We compared floral morphology of all taxa using principal component analysis (PCA) and developmental trajectories of floral organs using ANCOVA homogeneity-of-slopes tests. • Key results: The PCA revealed both isometric and allometric scaling of a conserved floral plan. Three principal components (PCs) explained 92.5% of the variation in the three species. PC1 loaded predominantly on measures of floral size and accounted for 36% of the variation. PC2 accounted for 35% of the variation, predominantly in traits that influence pollinator handling. PC3 accounted for 22% of the variation, primarily in anther–stigma distance (herkogamy). During O. flava subsp. taraxacoides development, style elongation was accelerated relative to the anthers, resulting in positive herkogamy. During O. flava subsp. flava development, style elongation was decelerated, resulting in zero or negative herkogamy. In the two populations with intermediate morphology, style elongation was accelerated in one and decelerated in the other. • Conclusions: Isometric and allometric scaling of floral organs in North American Oenothera section Lavauxia drives variation in breeding system. Multiple developmental paths to intermediate phenotypes support the likelihood of multiple mating system transitions.
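
The PCA and herkogamy analysis described above can be illustrated with a minimal sketch. The trait matrix below is simulated, and the trait names are only assumptions standing in for the floral measurements actually used; the point is simply how principal components and their explained-variance shares are obtained from standardized trait data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Simulated stand-in for a floral trait matrix: rows = flowers, columns = traits
# (assumed here to be tube length, petal length, anther height, stigma height).
rng = np.random.default_rng(0)
size = rng.normal(10.0, 2.0, 60)                 # latent "floral size" factor
traits = np.column_stack([
    size + rng.normal(0, 0.5, 60),               # tube length
    1.2 * size + rng.normal(0, 0.5, 60),         # petal length
    0.8 * size + rng.normal(0, 0.5, 60),         # anther height
    0.8 * size + rng.normal(0, 1.0, 60),         # stigma height
])

pca = PCA(n_components=3)
scores = pca.fit_transform(StandardScaler().fit_transform(traits))
print("variance explained per PC:", np.round(pca.explained_variance_ratio_, 3))

# Herkogamy (stigma height minus anther height) examined against the PC scores
herkogamy = traits[:, 3] - traits[:, 2]
print("correlation of herkogamy with PC3:", round(np.corrcoef(herkogamy, scores[:, 2])[0, 1], 2))
```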

Relevance: 30.00%

Abstract:

REASONS FOR PERFORMING STUDY: The diagnosis of equine back disorders is challenging. Objectively determining movement of the vertebral column may therefore be of value in a clinical setting. OBJECTIVES: To determine whether surface-mounted inertial measurement units (IMUs) can be used to establish normal values for range of motion (ROM) of the vertebral column in a uniform population of horses trotting under different conditions. STUDY DESIGN: Vertebral ROM was established in Franches-Montagnes stallions and a general population of horses, and the variability in measurements was compared between the two groups. Repeatability and the influence of specific exercise conditions on ROM were assessed. Finally, attempts were made to explain the findings of the study through the evaluation of factors that might influence ROM. METHODS: Dorsoventral (DV) and mediolateral (ML) vertebral ROM was measured at a trot under different exercise conditions in 27 Franches-Montagnes stallions and six general population horses using IMUs distributed over the vertebral column. RESULTS: Variability in the ROM measurements was significantly higher for general population horses than for Franches-Montagnes stallions (for both DV and ML ROM). Repeatability was strong to very strong for DV measurements and moderate for ML measurements. Trotting under saddle significantly reduced the ROM, with sitting trot resulting in a significantly lower ROM than rising trot. Age is unlikely to explain the low variability in vertebral ROM recorded in the Franches-Montagnes horses; it may instead be associated with conformational factors. CONCLUSIONS: It was possible to establish a normal vertebral ROM for a group of Franches-Montagnes stallions. While within-breed variation was low in this population, further studies are necessary to determine variation in vertebral ROM for other breeds and to assess the utility of such measurements for the diagnosis of equine back disorders.
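
Range of motion from IMU data is, at its core, a per-stride amplitude measure. The sketch below shows one plausible way to summarise dorsoventral ROM from a displacement trace; the sampling rate, stride rate, and synthetic signal are illustrative assumptions, not the processing pipeline used in the study.

```python
import numpy as np

def range_of_motion(displacement_mm, fs_hz, stride_hz):
    """Mean and SD of per-stride range of motion: the trace is split into
    windows of one stride, and ROM is the max-minus-min within each window."""
    displacement_mm = np.asarray(displacement_mm, float)
    samples_per_stride = int(round(fs_hz / stride_hz))
    n_strides = displacement_mm.size // samples_per_stride
    roms = [np.ptp(displacement_mm[k * samples_per_stride:(k + 1) * samples_per_stride])
            for k in range(n_strides)]
    return float(np.mean(roms)), float(np.std(roms))

# Synthetic dorsoventral displacement at trot (illustrative values only)
fs = 200.0                                   # sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(1)
signal = 30.0 * np.sin(2 * np.pi * 2.4 * t) + rng.normal(0, 1, t.size)
mean_rom, sd_rom = range_of_motion(signal, fs, stride_hz=1.2)
print(f"DV ROM: {mean_rom:.1f} ± {sd_rom:.1f} mm")
```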

Relevance: 30.00%

Abstract:

Index tracking has become one of the most common strategies in asset management. The index-tracking problem consists of constructing a portfolio that replicates the future performance of an index by including only a subset of the index constituents in the portfolio. Finding the most representative subset is challenging when the number of stocks in the index is large. We introduce a new three-stage approach that first identifies promising subsets by employing data-mining techniques, then determines the stock weights in the subsets using mixed-binary linear programming, and finally evaluates the subsets based on cross-validation. The best subset is returned as the tracking portfolio. Our approach outperforms state-of-the-art methods in terms of out-of-sample performance and running times.
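
To make the idea concrete, the sketch below implements a deliberately simplified stand-in for the three-stage pipeline: correlation-based pre-selection instead of the paper's data-mining stage, non-negative least squares instead of mixed-binary linear programming, and a single in-sample tracking-error check instead of cross-validation. All data and parameter choices are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

def build_tracking_portfolio(stock_returns, index_returns, k=10):
    """Simplified index tracking: pre-select the k stocks most correlated with
    the index, fit non-negative weights by least squares, normalise to sum to one."""
    corr = np.array([np.corrcoef(stock_returns[:, j], index_returns)[0, 1]
                     for j in range(stock_returns.shape[1])])
    subset = np.argsort(corr)[-k:]
    weights, _ = nnls(stock_returns[:, subset], index_returns)
    weights /= weights.sum()
    return subset, weights

def tracking_error(stock_returns, index_returns, subset, weights):
    """Annualised tracking error of the replicating portfolio (daily data assumed)."""
    diff = stock_returns[:, subset] @ weights - index_returns
    return float(np.std(diff) * np.sqrt(252))

# Toy data: 500 days of returns for 60 stocks; the index is their equal-weighted average.
rng = np.random.default_rng(2)
returns = rng.normal(0.0004, 0.01, size=(500, 60))
index = returns.mean(axis=1)
subset, w = build_tracking_portfolio(returns, index, k=10)
print("selected stocks:", sorted(subset.tolist()))
print("in-sample tracking error:", round(tracking_error(returns, index, subset, w), 4))
```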

Relevance: 30.00%

Abstract:

We prove exponential rates of convergence of hp-version discontinuous Galerkin (dG) interior penalty finite element methods for second-order elliptic problems with mixed Dirichlet-Neumann boundary conditions in axiparallel polyhedra. The dG discretizations are based on axiparallel, σ-geometric anisotropic meshes of mapped hexahedra and anisotropic polynomial degree distributions of μ-bounded variation. We consider piecewise analytic solutions which belong to a larger analytic class than those for the pure Dirichlet problem considered in [11, 12]. For such solutions, we establish the exponential convergence of a nonconforming dG interpolant given by local L²-projections on elements away from corners and edges, and by suitable local low-order quasi-interpolants on elements at corners and edges. Due to the appearance of non-homogeneous, weighted norms in the analytic regularity class, new arguments are introduced to bound the dG consistency errors in elements abutting on Neumann edges. The non-homogeneous norms also entail some crucial modifications of the stability and quasi-optimality proofs, as well as of the analysis for the anisotropic interpolation operators. The exponential convergence bounds for the dG interpolant constructed in this paper generalize the results of [11, 12] for the pure Dirichlet case.
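
For readers unfamiliar with the terminology, "exponential rates of convergence" for hp-methods on geometric meshes in three dimensions are typically stated in terms of the total number of degrees of freedom N. The display below sketches the generic shape of such a bound; the exact norms, constants, and exponent proved in the paper may differ, so this is only an assumed template, not a quotation of the result.

```latex
% Generic template of a 3D hp-dG exponential convergence bound
% (assumed form; b, C > 0 are constants independent of N):
\| u - u_{\mathrm{DG}} \|_{\mathrm{DG}} \;\le\; C \exp\!\bigl( -b\, N^{1/5} \bigr),
\qquad N = \dim V_{\mathrm{DG}} .
```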

Relevance: 30.00%

Abstract:

Efforts to understand and model the dynamics of the upper ocean would be significantly advanced given the ability to rapidly determine mixed layer depths (MLDs) over large regions. Remote sensing technologies are an ideal choice for achieving this goal. This study addresses the feasibility of estimating MLDs from optical properties. These properties are strongly influenced by suspended particle concentrations, which generally reach a maximum at pycnoclines. The premise therefore is to use a gradient in beam attenuation at 660 nm (c660) as a proxy for the depth of a particle-scattering layer. Using a global data set collected during World Ocean Circulation Experiment cruises from 1988 to 1997, six algorithms were employed to compute MLDs from either density or temperature profiles. Given the absence of published optically based MLD algorithms, two new methods were developed that use c660 profiles to estimate the MLD. Intercomparison of the six hydrographically based algorithms revealed some significant disparities among the resulting MLD values. Comparisons between the hydrographical and optical approaches indicated a first-order agreement between the MLDs based on the depths of gradient maxima for density and c660. When comparing various hydrographically based algorithms, other investigators reported that inherent fluctuations of the mixed layer depth limit the accuracy of its determination to 20 m. Using this benchmark, we found approximately 70% agreement between the best hydrographical-optical algorithm pairings.
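
The optical approach described above amounts to locating the depth of the strongest vertical gradient in the c660 profile. The sketch below shows that calculation on an invented profile; the profile shape, depths, and attenuation values are placeholders rather than WOCE data.

```python
import numpy as np

def mld_from_gradient(depth_m, profile):
    """Estimate the mixed layer depth as the depth of the maximum absolute
    vertical gradient of a profile (density, temperature, or c660)."""
    depth_m = np.asarray(depth_m, float)
    grad = np.gradient(np.asarray(profile, float), depth_m)
    return float(depth_m[np.argmax(np.abs(grad))])

# Invented c660 profile with a particle-scattering layer near 45 m depth
z = np.arange(0.0, 200.0, 2.0)
c660 = 0.05 + 0.04 / (1.0 + np.exp((z - 45.0) / 5.0))   # toy attenuation values
print("optical MLD estimate:", mld_from_gradient(z, c660), "m")
```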

Relevance: 30.00%

Abstract:

Despite much research on development in education and psychology, the methodology is not often tested with real data. A major barrier to testing growth models is that such study designs include repeated observations and the growth is nonlinear in nature. Repeated measurements on a nonlinear model require sophisticated statistical methods. In this study, we present a mixed-effects model based on a negative exponential curve to describe the development of children's reading skills. This model can describe the nature of the growth in children's reading skills and account for intra-individual and inter-individual variation. We also apply simple techniques, including cross-validation, regression, and graphical methods, to determine the most appropriate curve for the data, to find efficient initial values of parameters, and to select potential covariates. We illustrate with an example that motivated this research: a longitudinal study of academic skills from grade 1 to grade 12 in Connecticut public schools.
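
One common way to write a negative exponential growth curve with subject-level random effects is shown below; the symbols and parameterisation are illustrative and are not necessarily the exact form fitted in the study.

```latex
% Illustrative negative-exponential mixed-effects growth model:
% alpha_i = asymptote, gamma_i = starting level, beta_i = rate for child i.
y_{ij} = \alpha_i - (\alpha_i - \gamma_i)\, e^{-\beta_i t_{ij}} + \varepsilon_{ij},
\qquad
(\alpha_i, \gamma_i, \beta_i)^{\top} = (\alpha, \gamma, \beta)^{\top} + b_i,
\quad b_i \sim \mathcal{N}(0, \Sigma), \quad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2).
```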

Relevance: 30.00%

Abstract:

The mental health of war-impacted individuals has been an issue of growing concern to many researchers and practitioners internationally (Miller, Kulkarni, & Kushner, 2006). According to the United Nations High Commissioner for Refugees (2006a), Africans are disproportionately impacted by conflict-related displacement. To date, however, much of the research on the mental health of refugees has been based mostly on Western views of health and trauma. The current study is a mixed-methods investigation of stressors, coping strategies, and meaning making of Liberian refugees in the Buduburam Refugee Camp of Ghana. Results from the Brief COPE, focus groups, and semi-structured ethnographic interviews are discussed. Understanding stressors and coping among this population can contribute to culturally informed research and practice.

Relevance: 30.00%

Abstract:

The current standard treatment for head and neck cancer at our institution uses intensity-modulated x-ray therapy (IMRT), which improves target coverage and sparing of critical structures by delivering complex fluence patterns from a variety of beam directions to conform dose distributions to the shape of the target volume. The standard treatment for breast patients is field-in-field forward-planned IMRT, with initial tangential fields and additional reduced-weight tangents with blocking to minimize hot spots. For these treatment sites, the addition of electrons has the potential to improve target coverage and sparing of critical structures due to rapid dose falloff with depth and reduced exit dose. In this work, the use of mixed-beam therapy (MBT), i.e., combined intensity-modulated electron and x-ray beams using the x-ray multi-leaf collimator (MLC), was explored. The hypothesis of this study was that the addition of intensity-modulated electron beams to existing clinical IMRT plans would produce MBT plans that were superior to the original IMRT plans for at least 50% of selected head and neck cases and 50% of breast cases. Dose calculations for electron beams collimated by the MLC were performed with Monte Carlo methods. An automation system was created to facilitate communication between the dose calculation engine and the treatment planning system. Energy and intensity modulation of the electron beams was accomplished by dividing the electron beams into 2 × 2 cm² beamlets, which were then beam-weight optimized along with the intensity-modulated x-ray beams. Treatment plans were optimized to obtain equivalent target dose coverage and then compared with the original treatment plans. MBT treatment plans were evaluated by participating physicians with respect to target coverage, normal structure dose, and overall plan quality in comparison with the original clinical plans. The physician evaluations did not support the hypothesis for either site, with MBT selected as superior in 1 of the 15 head and neck cases (p=1) and 6 of the 18 breast cases (p=0.95). While MBT was not shown to be superior to IMRT, reductions were observed in doses to critical structures distal to the target along the electron beam direction and to non-target tissues, at the expense of target coverage and dose homogeneity.
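
Beam-weight optimization of the kind described above can be illustrated, in a heavily simplified form, as a non-negative least-squares fit of precomputed beamlet dose contributions to a prescription. Everything in the sketch (the dose-influence matrix, voxel count, prescription, and the NNLS objective itself) is an invented stand-in for the study's actual Monte Carlo dose engine and optimizer.

```python
import numpy as np
from scipy.optimize import nnls

# Toy dose-influence matrix: dose_per_weight[i, j] is the dose to voxel i per unit
# weight of beamlet j (electron and photon beamlets stacked column-wise).
rng = np.random.default_rng(3)
n_voxels, n_electron_beamlets, n_photon_beamlets = 400, 30, 50
dose_per_weight = np.abs(
    rng.normal(1.0, 0.3, size=(n_voxels, n_electron_beamlets + n_photon_beamlets)))
prescription = np.full(n_voxels, 60.0)       # uniform 60 Gy target prescription (toy value)

# Non-negative beamlet weights that best reproduce the prescription in a least-squares sense
weights, residual_norm = nnls(dose_per_weight, prescription)
delivered = dose_per_weight @ weights
print(f"mean target dose: {delivered.mean():.1f} Gy, residual norm: {residual_norm:.1f}")
```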

Relevance: 30.00%

Abstract:

Detection of multidrug-resistant tuberculosis (MDR-TB), a frequent cause of treatment failure, takes 2 or more weeks to identify by culture. Rifampin (RIF) resistance is a hallmark of MDR-TB, and detection of mutations in the rpoB gene of Mycobacterium tuberculosis using molecular beacon probes with real-time quantitative polymerase chain reaction (qPCR) is a novel approach that takes ≤2 days. However, qPCR identification of resistant isolates, particularly for isolates with mixed RIF-susceptible and RIF-resistant bacteria, is reader dependent, which limits its clinical use. The aim of this study was to develop an objective, reader-independent method to define rpoB mutants using beacon qPCR. This would facilitate the transition from a research protocol to the clinical setting, where high-throughput methods with objective interpretation are required. For this, DNAs from 107 M. tuberculosis clinical isolates with known susceptibility to RIF by culture-based methods were obtained from 2 regions where isolates had not previously been subjected to evaluation using molecular beacon qPCR: the Texas–Mexico border and Colombia. Using coded DNA specimens, mutations within an 81-bp hot spot region of rpoB were established by qPCR with 5 beacons spanning this region. Visual and mathematical approaches were used to establish whether the qPCR cycle threshold of the experimental isolate was significantly higher (indicating a mutant) than that of a reference wild-type isolate. Visual classification of the beacon qPCR required reader training for strains with a mixture of RIF-susceptible and RIF-resistant bacteria. Only then did the visual interpretation by an experienced reader have 100% sensitivity and 94.6% specificity versus RIF resistance by culture phenotype, and 98.1% sensitivity and 100% specificity versus mutations based on DNA sequence. The mathematical approach was 98% sensitive and 94.5% specific versus culture and 96.2% sensitive and 100% specific versus DNA sequence. Our findings indicate that the mathematical approach has advantages over the visual reading, in that it uses a Microsoft Excel template to eliminate reader bias or inexperience and allows objective interpretation of high-throughput analyses, even in the presence of a mixture of RIF-resistant and RIF-susceptible isolates, without the need for reader training.
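
The mathematical approach described above boils down to asking whether an isolate's cycle threshold for a given beacon is sufficiently delayed relative to a wild-type reference. The sketch below implements a simple ΔCt cutoff rule; the cutoff value, replicate handling, and Ct numbers are illustrative assumptions rather than the study's validated Excel-template procedure.

```python
import numpy as np

def call_rpoB_mutant(ct_isolate, ct_wild_type_ref, delta_ct_cutoff=3.0):
    """Classify one beacon reaction: a mutation under the probe impairs beacon
    hybridisation, so the isolate's Ct rises relative to the wild-type reference.
    The cutoff used here is an illustrative assumption."""
    delta_ct = float(np.mean(ct_isolate) - np.mean(ct_wild_type_ref))
    return delta_ct >= delta_ct_cutoff, delta_ct

# Triplicate Ct values for one beacon (toy numbers)
wild_type_reference = [22.1, 22.3, 22.0]
isolate = [27.8, 28.1, 27.5]
is_mutant, d_ct = call_rpoB_mutant(isolate, wild_type_reference)
print(f"delta Ct = {d_ct:.1f} -> {'mutant' if is_mutant else 'wild type'}")
```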

Relevance: 30.00%

Abstract:

Developing countries are heavily burdened by limited access to safe drinking water and subsequent water-related diseases. Numerous water treatment interventions combat this public health crisis, encompassing both traditional and less common methods. Of these, water disinfection serves as an important means of providing safe drinking water. Existing literature discusses a wide range of traditional treatment options and encourages the use of multi-barrier approaches including coagulation-flocculation, filtration, and disinfection. Most sources do not delve into approaches specifically appropriate for developing countries, nor do they exclusively examine water disinfection methods.

The objective of this review is to focus on an extensive range of chemical, physio-chemical, and physical water disinfection techniques to provide a compilation, description and evaluation of the options available. Such an objective provides further understanding and knowledge to better inform water treatment interventions and explores alternate means of water disinfection appropriate for developing countries. Appropriateness for developing countries corresponds to the effectiveness of an available, easy-to-use disinfection technique at providing safe drinking water at a low cost.

Among chemical disinfectants, SWS sodium hypochlorite solution is preferred over sodium hypochlorite bleach due to its consistent concentration. Tablet forms are highly recommended chemical disinfectants because they are effective and very easy to use, and also because they are stable. Examples include sodium dichloroisocyanurate, calcium hypochlorite, and chlorine dioxide, which vary in cost depending on location and availability. Among physio-chemical disinfection options, electrolysis, which produces mixed oxidants (MIOX), provides a highly effective disinfection option with a higher upfront cost but very low cost over the long term. Among physical disinfection options, solar disinfection (SODIS) applications are effective, but they treat only a fixed volume of water at a time; they come with higher initial costs but very low ongoing costs. Additional effective disinfection techniques may be suitable depending on the location, availability and cost.

Relevance: 30.00%

Abstract:

Mixed longitudinal designs are important study designs for many areas of medical research. Mixed longitudinal studies have several advantages over cross-sectional or purely longitudinal studies, including a shorter study completion time and the ability to separate time and age effects, and are thus an attractive choice. Statistical methodology for general longitudinal studies has developed rapidly within the last few decades. A common approach to statistical modeling in studies with mixed longitudinal designs has been the linear mixed-effects model incorporating an age or time effect. The general linear mixed-effects model is considered an appropriate choice for analyzing repeated measurements in longitudinal studies. However, common use of the linear mixed-effects model in mixed longitudinal studies often incorporates age as the only random effect and fails to take the cohort effect into consideration when conducting statistical inference on age-related trajectories of outcome measurements. We believe special attention should be paid to cohort effects when analyzing data from mixed longitudinal designs with multiple overlapping cohorts. This has therefore become an important statistical issue to address.

This research aims to address statistical issues related to mixed longitudinal studies. The proposed study examined the existing statistical analysis methods for mixed longitudinal designs and developed an alternative analytic method to incorporate effects from multiple overlapping cohorts as well as from subjects of different ages. The proposed study used simulation to evaluate the performance of the proposed analytic method by comparing it with the commonly used model. Finally, the study applied the proposed analytic method to data collected by an existing study, Project HeartBeat!, which had previously been evaluated using traditional analytic techniques. Project HeartBeat! is a longitudinal study of cardiovascular disease (CVD) risk factors in childhood and adolescence using a mixed longitudinal design. The proposed model was used to evaluate four blood lipids, adjusting for age, gender, race/ethnicity, and endocrine hormones. The results of this dissertation suggest the proposed analytic model could be a more flexible and reliable choice than the traditional model in terms of fitting the data and providing more accurate estimates in mixed longitudinal studies. Conceptually, the proposed model described in this study has useful features, including consideration of effects from multiple overlapping cohorts, and is an attractive approach for analyzing data from mixed longitudinal design studies.
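
A mixed model that carries both a cohort effect and a subject effect, in addition to the age trajectory, can be written as below. The notation is illustrative only; the dissertation's actual model (covariates, nesting, and variance structure) may differ.

```latex
% Illustrative mixed model for a mixed longitudinal design with overlapping cohorts:
% subject i nested in cohort j, measurement occasion k.
y_{ijk} = \beta_0 + f(\mathrm{age}_{ijk}) + \mathbf{x}_{ijk}^{\top}\boldsymbol{\beta}
          + c_j + u_{i(j)} + \varepsilon_{ijk},
\qquad
c_j \sim \mathcal{N}(0, \sigma_c^2), \quad
u_{i(j)} \sim \mathcal{N}(0, \sigma_u^2), \quad
\varepsilon_{ijk} \sim \mathcal{N}(0, \sigma^2).
```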

Relevance: 30.00%

Abstract:

Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for PCR data analysis, including the threshold cycle (CT) method and linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each pair of consecutive PCR cycles, we subtracted the fluorescence of the earlier cycle from that of the later cycle, transforming the n cycles of raw data into n−1 cycles of difference data. Linear regression was then applied to the natural logarithm of the transformed data. Finally, amplification efficiencies and the initial DNA molecule numbers were calculated for each PCR run. To evaluate this new method, we compared it in terms of accuracy and precision with the original linear regression method under three background corrections: the mean of cycles 1–3, the mean of cycles 3–7, and the minimum value. Three criteria, namely threshold identification, maximum R², and maximum slope, were employed to search for target data points. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior, as it gave an accurate estimation of the initial DNA amount and a reasonable estimation of the PCR amplification efficiencies. When the criteria of maximum R² and maximum slope were used, the original linear regression method gave an accurate estimation of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error introduced by subtracting an unknown background and is thus theoretically more accurate and reliable. The method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
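
The taking-difference idea can be made concrete with a few lines of algebra and code. If the raw signal follows F_n = F0·E^n + background, then the consecutive-cycle difference F_{n+1} − F_n = F0·(E − 1)·E^n no longer contains the background, so regressing ln(difference) on cycle number gives ln(E) as the slope. The sketch below is one plausible implementation of that description; the exponential-phase selection and the simulated data are assumptions, not the dissertation's exact procedure.

```python
import numpy as np

def taking_difference_fit(fluorescence):
    """Taking-difference linear regression: difference consecutive cycles to cancel
    a constant background, then regress ln(differences) on cycle number.
    slope = ln(E); intercept = ln(F0 * (E - 1))."""
    f = np.asarray(fluorescence, float)
    diffs = np.diff(f)                          # n cycles -> n-1 differences
    cycles = np.arange(diffs.size)
    mask = diffs > 0                            # keep points usable on the log scale
    slope, intercept = np.polyfit(cycles[mask], np.log(diffs[mask]), 1)
    efficiency = float(np.exp(slope))
    initial_amount = float(np.exp(intercept) / (efficiency - 1.0))
    return efficiency, initial_amount

# Simulated raw fluorescence with an unknown constant background (toy values)
true_E, true_F0, background = 1.9, 1e-4, 2.0
cycles = np.arange(40)
raw = true_F0 * true_E ** cycles + background
eff, f0 = taking_difference_fit(raw[:25])       # restrict to the exponential phase
print(f"estimated efficiency {eff:.2f}, estimated initial amount {f0:.1e}")
```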

Relevance: 30.00%

Abstract:

The performance of the Hosmer-Lemeshow global goodness-of-fit statistic for logistic regression models was explored in a wide variety of conditions not previously fully investigated. Computer simulations, each consisting of 500 regression models, were run to assess the statistic in 23 different situations. The items which varied among the situations included the number of observations used in each regression, the number of covariates, the degree of dependence among the covariates, the combinations of continuous and discrete variables, and the generation of the values of the dependent variable for model fit or lack of fit.

The study found that the Cg* statistic was adequate in tests of significance for most situations. However, when testing data which deviate from a logistic model, the statistic has low power to detect such deviation. Although grouping of the estimated probabilities into quantiles from 8 to 30 was studied, the deciles-of-risk approach was generally sufficient. Subdividing the estimated probabilities into more than 10 quantiles when there are many covariates in the model is not necessary, despite theoretical reasons which suggest otherwise. Because it does not follow a χ² distribution, the statistic is not recommended for use in models containing only categorical variables with a limited number of covariate patterns.

The statistic performed adequately when there were at least 10 observations per quantile. Large numbers of observations per quantile did not lead to incorrect conclusions that the model did not fit the data when it actually did. However, the statistic failed to detect lack of fit when it existed and should be supplemented with further tests for the influence of individual observations. Careful examination of the parameter estimates is also essential, since the statistic did not perform as desired when there was moderate to severe collinearity among covariates.

Two methods studied for handling tied values of the estimated probabilities made only a slight difference in conclusions about model fit. Neither method split observations with identical probabilities into different quantiles. Approaches which create equal-size groups by separating ties should be avoided.
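
For reference, the Hosmer-Lemeshow statistic groups observations by quantiles of the estimated probabilities and compares observed with expected event counts within each group. The sketch below computes one standard version of the statistic; the quantile-edge grouping (which keeps tied probabilities together, in line with the caution above), the simulated data, and the G − 2 degrees of freedom are the usual textbook choices, not details taken from this dissertation.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p_hat, n_groups=10):
    """Hosmer-Lemeshow goodness-of-fit statistic using quantile-based groups of the
    estimated probabilities ('deciles of risk' when n_groups=10).  Quantile edges
    keep tied probabilities in the same group rather than splitting them."""
    y, p_hat = np.asarray(y, float), np.asarray(p_hat, float)
    edges = np.quantile(p_hat, np.linspace(0, 1, n_groups + 1)[1:-1])
    group = np.digitize(p_hat, edges)
    stat, n_used = 0.0, 0
    for g in np.unique(group):
        in_g = group == g
        n_g = in_g.sum()
        observed = y[in_g].sum()
        pi_bar = p_hat[in_g].mean()
        stat += (observed - n_g * pi_bar) ** 2 / (n_g * pi_bar * (1.0 - pi_bar))
        n_used += 1
    return float(stat), float(chi2.sf(stat, df=n_used - 2))

# Toy check: data simulated from a true logistic model (fitted probabilities would be
# used in practice; here the true probabilities stand in for them).
rng = np.random.default_rng(4)
x = rng.normal(size=2000)
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * x)))
y = rng.binomial(1, p)
print(hosmer_lemeshow(y, p))
```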