992 results for size accuracy
Abstract:
OBJECTIVE: To assess the change in non-compliant items in prescription orders following the implementation of a computerized physician order entry (CPOE) system named PreDiMed. SETTING: The departments of internal medicine (39 and 38 beds) in two regional hospitals in Canton Vaud, Switzerland. METHOD: The prescription lines in 100 pre- and 100 post-implementation patients' files were classified according to three modes of administration (medicines for oral or other non-parenteral uses; medicines administered parenterally or via nasogastric tube; pro re nata (PRN), as needed) and analyzed for a number of relevant variables constitutive of medical prescriptions. MAIN OUTCOME MEASURE: The monitored variables depended on the pharmaceutical category and included mainly the name of the medicine, pharmaceutical form, posology and route of administration, diluting solution, flow rate, and identification of the prescriber. RESULTS: In 2,099 prescription lines, the total number of non-compliant items was 2,265 before CPOE implementation, or 1.079 non-compliant items per line. Two-thirds of these were due to missing information, and the remaining third to incomplete information. In 2,074 prescription lines post-CPOE implementation, the number of non-compliant items had decreased to 221, or 0.107 non-compliant items per line, a dramatic 10-fold decrease (χ² = 4615; P < 10⁻⁶). Limitations of the computerized system were the risk of erroneous items in some non-prefilled fields and ambiguity due to a field with doses shown on commercial products. CONCLUSION: The deployment of PreDiMed in two departments of internal medicine has led to a major improvement in formal aspects of physicians' prescriptions. Some limitations of the first version of PreDiMed were unveiled and are being corrected.
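The per-line rates and the 10-fold decrease reported above follow directly from the counts given in the abstract; this short sketch merely restates that arithmetic (it is not part of the study itself):

```python
# Counts reported in the abstract: non-compliant items and prescription lines.
before_items, before_lines = 2265, 2099
after_items, after_lines = 221, 2074

rate_before = before_items / before_lines   # non-compliant items per line, pre-CPOE
rate_after = after_items / after_lines      # non-compliant items per line, post-CPOE
fold_change = rate_before / rate_after      # improvement factor

print(f"{rate_before:.3f} -> {rate_after:.3f} items per line "
      f"({fold_change:.1f}-fold decrease)")
```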
Abstract:
PowerPoint presentation by Rob Pridequx from the National Audit Office
Abstract:
The number of physical activity measures and indexes used in the human literature is large and can make it difficult for the average investigator to choose the most appropriate measure. Accordingly, this review is intended to provide information on the utility and limitations of the various measures. Its primary focus is the objective assessment of free-living physical activity in humans based on physiological and biomechanical methods. The physical activity measures have been classified into three categories: (1) measures based on energy expenditure or oxygen uptake, such as activity energy expenditure, activity-related time equivalent, physical activity level, physical activity ratio, metabolic equivalent, and a new index of potential interest, daytime physical activity level; (2) measures based on heart rate monitoring, such as net heart rate, physical activity ratio heart rate, physical activity level heart rate, activity-related time equivalent, and daytime physical activity level heart rate; and (3) measures based on whole-body accelerometry (counts per unit time). Quantification of the velocity and duration of displacement in outdoor conditions by satellites using the Differential Global Positioning System may constitute a surrogate for physical activity, because walking is the primary activity of man in free-living conditions. A general outline of the measures and indexes described above is presented in tabular form, along with their respective definitions, usual applications, advantages, and shortcomings. A practical example is given with typical values in obese and non-obese subjects. The various factors to be considered in the selection of physical activity methods include experimental goals, sample size, budget, cultural and social/environmental factors, physical burden for the subject, and statistical factors such as accuracy and precision.
It is concluded that no single current technique is able to quantify all aspects of physical activity under free-living conditions, so complementary methods must be used. In the future, physical activity sensors that are low-cost, small, and convenient for subjects, investigators, and clinicians are needed to reliably monitor, over extended periods in free-living situations, small changes in movement and grade as well as the duration and intensity of typical physical activities.
Abstract:
Technology (i.e. tools, methods of cultivation and domestication, systems of construction and appropriation, machines) has increased the vital rates of humans, and is one of the defining features of the transition from Malthusian ecological stagnation to potentially perpetual population growth. Maladaptations, on the other hand, encompass behaviours, customs and practices that decrease the vital rates of individuals. Technology and maladaptations are part of the total stock of culture carried by the individuals in a population. Here, we develop a quantitative model for the coevolution of cumulative adaptive technology and maladaptive culture in a 'producer-scrounger' game, which can also usefully be interpreted as an 'individual-social' learner interaction. Producers (individual learners) are assumed to invent new adaptations and maladaptations by trial-and-error learning, insight or deduction, and they pay the cost of innovation. Scroungers (social learners) are assumed to copy or imitate (cultural transmission) both the adaptations and maladaptations generated by producers. We show that the coevolutionary dynamics of producers and scroungers in the presence of cultural transmission can have a variety of effects on population carrying capacity, ranging from stable polymorphism, where scroungers bring an advantage to the population (an increase in carrying capacity), to periodic cycling, where scroungers decrease the carrying capacity. We find that selection-driven cultural innovation and transmission may send a population on the path of indefinite growth or to extinction.
Abstract:
In vertebrates, genome size has been shown to correlate with nuclear and cell sizes, and it influences phenotypic features such as brain complexity. In three different anuran families, the advertisement calls of polyploids exhibit longer notes and intervals than those of diploids, and differences in cellular dimensions have been hypothesized to cause these modifications. We investigated this phenomenon in green toads (Bufo viridis subgroup) of three ploidy levels, in a different call type (release calls) that may evolve independently from advertisement calls, examining 1205 calls from ten species, subspecies, and hybrid forms. Significant differences were found between the pulse rates of six diploid and four polyploid (3n, 4n) green toad forms across a range of temperatures from 7 to 27 °C. Laboratory data supported differences in the pulse rates of triploids vs. tetraploids, but these failed to reach significance when field recordings were included. This study supports the idea that genome size, irrespective of call type, phylogenetic context, and geographical background, might affect call properties in anurans, and it suggests a common principle governing this relationship. The nuclear-cell size ratio, affected by genome size, seems the most plausible explanation. However, we cannot rule out hypotheses under which call-influencing genes from an unexamined diploid ancestral species might also affect call properties in the hybrid-origin polyploids.
Abstract:
It is well known that dichotomizing continuous data decreases statistical power when the goal is to test for a statistical association between two variables. Modern researchers, however, focus not only on statistical significance but also on an estimate of the "effect size" (i.e., the strength of association between the variables) to judge whether a significant association is also clinically relevant. In this article, we are interested in the consequences of dichotomizing continuous data on the value of an effect size in some classical settings. It turns out that the conclusions will not be the same whether one uses a correlation or an odds ratio to summarize the strength of association between the variables: whereas the value of a correlation is typically decreased by a factor of π/2 after each dichotomization, the value of an odds ratio is at the same time raised to the power 2. From a descriptive statistical point of view, it is thus not clear whether dichotomizing continuous data leads to a decrease or to an increase in the effect size, as illustrated using a data set to investigate the relationship between motor and intellectual functions in children and adolescents.
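The attenuation of a correlation under dichotomization is easy to see by simulation. The sketch below median-splits simulated bivariate normal data and compares the resulting correlation (the phi coefficient) with the continuous one; the benchmark (2/π)·arcsin(ρ) is the standard result for a double median split of bivariate normal data, used here only as an illustration, not as this paper's own derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.6          # correlation of the underlying continuous variables
n = 200_000

# Draw bivariate normal data with correlation rho.
cov = [[1.0, rho], [rho, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Median-split both variables (the classic double dichotomization).
xb = (x > np.median(x)).astype(float)
yb = (y > np.median(y)).astype(float)

phi = np.corrcoef(xb, yb)[0, 1]          # correlation after dichotomization
theory = (2 / np.pi) * np.arcsin(rho)    # theoretical value for median splits

print(f"continuous r = {rho:.3f}, dichotomized r = {phi:.3f}, "
      f"theory = {theory:.3f}")
```

For small ρ, (2/π)·arcsin(ρ) ≈ ρ/(π/2), which is the division by π/2 (per pair of dichotomizations) referred to in the abstract.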
Abstract:
Our objective was to establish the age-related 3D size of the maxillary, sphenoid, and frontal sinuses. A total of 179 magnetic resonance imaging (MRI) examinations of children under 17 years (76 females, 103 males) were included, and the sinuses were measured along the three axes. At birth, the maxillary sinuses measured (mean ± standard deviation) 7.3 ± 2.7 mm in length (antero-posterior) / 4.0 ± 0.9 mm in height (cranio-caudal) / 2.7 ± 0.8 mm in width (transverse). At 16 years, the maxillary sinus measured 38.8 ± 3.5 mm / 36.3 ± 6.2 mm / 27.5 ± 4.2 mm. Sphenoid sinus pneumatization starts in the third year of life, after conversion from red to fatty marrow, with mean values of 5.8 ± 1.4 mm / 8.0 ± 2.3 mm / 5.8 ± 1.0 mm. Pneumatization progresses gradually to reach 23.0 ± 4.5 mm / 22.6 ± 5.8 mm / 12.8 ± 3.1 mm at 16 years. Frontal sinuses present a wide variation in size and most of the time cannot be assessed with routine head MRI techniques. They are not aerated before the age of 6 years. Frontal sinus dimensions at 16 years were 12.8 ± 5.0 mm / 21.9 ± 8.4 mm / 24.5 ± 13.3 mm. A sinus volume index (SVI) for the maxillary and sphenoid sinuses was computed using a simplified ellipsoid volume formula, and a table of SVI according to age, with percentile variations, is proposed for easy clinical application. Percentile curves of the maxillary and sphenoid sinuses are presented to provide a basis for the objective determination of sinus size and volume during development. These data are applicable to other techniques such as conventional X-ray and CT scan.
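The abstract does not spell out its "simplified ellipsoid volume formula". A common simplification treats the three measured axes as ellipsoid diameters, giving V = (π/6)·L·H·W; the sketch below applies that assumption to the reported 16-year maxillary dimensions, and may differ from the paper's exact index:

```python
from math import pi

def ellipsoid_volume_mm3(length_mm: float, height_mm: float, width_mm: float) -> float:
    """Ellipsoid volume from its three diameters: V = (pi/6) * L * H * W."""
    return (pi / 6) * length_mm * height_mm * width_mm

# Maxillary sinus mean dimensions at 16 years, from the abstract (mm).
v = ellipsoid_volume_mm3(38.8, 36.3, 27.5)
print(f"approx. maxillary sinus volume at 16 y: {v / 1000:.1f} cm^3")
```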
Abstract:
Background Maternal exposure to air pollution has been related to fetal growth in a number of recent scientific studies. The objective of this study was to assess the association between exposure to air pollution during pregnancy and anthropometric measures at birth in a cohort in Valencia, Spain. Methods Seven hundred and eighty-five pregnant women and their singleton newborns participated in the study. Exposure to ambient nitrogen dioxide (NO₂) was estimated by means of land use regression. NO₂ spatial estimations were adjusted to correspond to relevant pregnancy periods (whole pregnancy and trimesters) for each woman. Outcome variables were birth weight, length, and head circumference (HC), along with being small for gestational age (SGA). The association between exposure to residential outdoor NO₂ and outcomes was assessed controlling for potential confounders and examining the shape of the relationship using generalized additive models (GAM). Results For continuous anthropometric measures, GAM indicated a change in slope at NO₂ concentrations of around 40 μg/m³. NO₂ exposure >40 μg/m³ during the first trimester was associated with a change in birth length of -0.27 cm (95% CI: -0.51 to -0.03) and with a change in birth weight of -40.3 grams (-96.3 to 15.6); the same exposure throughout the whole pregnancy was associated with a change in birth HC of -0.17 cm (-0.34 to -0.003). The shape of the relation was seen to be roughly linear for the risk of being SGA. A 10 μg/m³ increase in NO₂ during the second trimester was associated with being SGA-weight, odds ratio (OR): 1.37 (1.01-1.85). For SGA-length, the estimate for the same comparison was OR: 1.42 (0.89-2.25). Conclusions Prenatal exposure to traffic-related air pollution may reduce fetal growth. Findings from this study provide further evidence of the need to develop strategies to reduce air pollution in order to prevent risks to fetal health and development.
Abstract:
The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure of the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a basis derived from Hermite polynomials. To obtain the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly by using discretized scalar products. We therefore propose a weighted linear regression approach, in which all k-order polynomials are used as predictor variables and the weights are proportional to the reference density. Finally, for the cases of 2-order Hermite polynomials (normal reference) and 1-order Laguerre polynomials (exponential reference), one can also derive the coordinates from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison among different rocks of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, such as their composition.
Abstract:
BACKGROUND Illiteracy, a universal problem, limits the utilization of the most widely used short cognitive tests. Our objective was to assess and compare the effectiveness and cost, for cognitive impairment (CI) and dementia (DEM) screening, of three short cognitive tests applicable to illiterate individuals. METHODS A phase III diagnostic test evaluation study was performed over one year in four primary care centers, prospectively including individuals with suspected CI or DEM. All underwent the Eurotest, the Memory Alteration Test (M@T), and the Phototest, applied in a balanced order. Clinical, functional, and cognitive studies were independently performed in a blinded fashion in a Cognitive Behavioral Neurology Unit, and the gold standard diagnosis was established by consensus of expert neurologists on the basis of these results. Effectiveness of the tests was assessed as the proportion of correct diagnoses (diagnostic accuracy [DA]) and the kappa index of concordance (k) with respect to the gold standard diagnoses. Costs were based on public prices at the time and hospital accounts. RESULTS The study included 139 individuals: 47 with DEM, 36 with CI, and 56 without CI. No significant differences in effectiveness were found among the tests. For DEM screening: Eurotest (k = 0.71 [0.59-0.83], DA = 0.87 [0.80-0.92]), M@T (k = 0.72 [0.60-0.84], DA = 0.87 [0.80-0.92]), Phototest (k = 0.70 [0.57-0.82], DA = 0.86 [0.79-0.91]). For CI screening: Eurotest (k = 0.67 [0.55-0.79], DA = 0.83 [0.76-0.89]), M@T (k = 0.52 [0.37-0.67], DA = 0.80 [0.72-0.86]), Phototest (k = 0.59 [0.46-0.72], DA = 0.79 [0.71-0.86]). There were no differences in the cost of DEM screening, but the cost of CI screening was significantly higher with M@T (330.7 ± 177.1 €, mean ± sd) than with Eurotest (294.1 ± 195.0 €) or Phototest (296.0 ± 196.5 €). Application time was shorter with Phototest (2.8 ± 0.8 min) than with Eurotest (7.1 ± 1.8 min) or M@T (6.8 ± 2.2 min).
CONCLUSIONS Eurotest, M@T, and Phototest are equally effective. Eurotest and Phototest are the less expensive options, but Phototest is the most efficient, requiring the shortest application time.
Abstract:
BACKGROUND Available screening tests for dementia are of limited usefulness because they are influenced by the patient's culture and educational level. The Eurotest, an instrument based on the knowledge and handling of money, was designed to overcome these limitations. The objective of this study was to evaluate the diagnostic accuracy of the Eurotest in identifying dementia in customary clinical practice. METHODS A cross-sectional, multi-center, naturalistic phase II study was conducted. The Eurotest was administered to consecutive patients, older than 60 years, in general neurology clinics. The patients' condition was classified as dementia or no dementia according to DSM-IV diagnostic criteria. We calculated sensitivity (Sn), specificity (Sp) and area under the ROC curves (aROC) with 95% confidence intervals. The influence of social and educational factors on scores was evaluated with multiple linear regression analysis, and the influence of these factors on diagnostic accuracy was evaluated with logistic regression. RESULTS Sixteen neurologists recruited a total of 516 participants: 101 with dementia, 380 without dementia, and 35 who were excluded. Of the 481 participants who took the Eurotest, 38.7% were totally or functionally illiterate and 45.5% had received no formal education. Mean time needed to administer the test was 8.2 ± 2.0 minutes. The best cut-off point was 20/21, with Sn = 0.91 (0.84-0.96), Sp = 0.82 (0.77-0.85), and aROC = 0.93 (0.91-0.95). Neither the scores on the Eurotest nor its diagnostic accuracy were influenced by social or educational factors. CONCLUSION This naturalistic and pragmatic study shows that the Eurotest is a rapid, simple and useful screening instrument, which is free from educational influences, and has appropriate internal and external validity.
Abstract:
The genome size, complexity, and ploidy of the arbuscular mycorrhizal fungus (AMF) Glomus intraradices were determined using flow cytometry, reassociation kinetics, and genomic reconstruction. Nuclei of G. intraradices from in vitro culture were analyzed by flow cytometry. The estimated average length of DNA per nucleus was 14.07 ± 3.52 Mb. Reassociation kinetics on G. intraradices DNA indicated a haploid genome size of approximately 16.54 Mb, comprising 88.36% single-copy DNA, 1.59% repetitive DNA, and 10.05% fold-back DNA. To determine ploidy, the DNA content per nucleus measured by flow cytometry was compared with the genome size estimated by reassociation kinetics. G. intraradices was found to have a DNA index (DNA per nucleus divided by haploid genome size) of approximately 0.9, indicating that it is haploid. Genomic DNA of G. intraradices was also analyzed by genomic reconstruction using four genes (Malate synthase, RecA, Rad32, and Hsp88). Because flow cytometry and reassociation kinetics indicate that G. intraradices is haploid, a similar value for genome size should be found using genomic reconstruction as long as the genes studied are single copy. The average genome size estimate was 15.74 ± 1.69 Mb, indicating that these four genes are single copy per haploid genome and per nucleus of G. intraradices. Our results show that the genome size of G. intraradices is much smaller than estimates for other AMF and that the unusually high within-spore genetic variation seen in this fungus cannot be due to high ploidy.
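The ploidy inference described above reduces to a simple ratio. This sketch reproduces that arithmetic with the reported values; the function name and the rounding rule are illustrative, not taken from the paper:

```python
def dna_index(dna_per_nucleus_mb: float, haploid_genome_mb: float) -> float:
    """Ratio of measured DNA per nucleus to the haploid genome size estimate.

    A value near 1 suggests haploidy, near 2 diploidy, and so on.
    """
    return dna_per_nucleus_mb / haploid_genome_mb

# G. intraradices values from the abstract:
# flow cytometry (DNA per nucleus) vs. reassociation kinetics (haploid genome).
idx = dna_index(14.07, 16.54)
ploidy = max(1, round(idx))
print(f"DNA index = {idx:.2f} -> inferred ploidy ~ {ploidy}n")
```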
Abstract:
The aim of our study was to provide an innovative headspace gas chromatography-mass spectrometry (HS-GC-MS) method applicable to the routine determination of blood CO concentration in forensic toxicology laboratories. The main drawback of the GC-MS methods discussed in the literature for CO measurement is the absence of a specific CO internal standard necessary for performing quantification. Even though a stable isotope of CO is commercially available in the gaseous state, it is essential to develop a safer method that limits the manipulation of gaseous CO and precisely controls the injected amount of CO for spiking and calibration. To avoid the manipulation of a stable isotope-labeled gas, we chose to generate in situ, in a vial, a labeled internal standard gas (¹³CO) formed by the reaction of labeled formic acid (H¹³COOH) with sulfuric acid. As sulfuric acid can also be employed to liberate CO from whole blood, the procedure allows for the liberation of CO simultaneously with the generation of ¹³CO. This method allows for the precise measurement of blood CO concentrations from a small amount of blood (10 μL). Finally, the method was applied to measure the CO concentration of blood samples from autopsies of intoxicated humans.
Abstract:
Intraspecific coalitional aggression between groups of individuals is a widespread trait in the animal world. It occurs in invertebrates and vertebrates, and is prevalent in humans. What are the conditions under which coalitional aggression evolves in natural populations? In this article, I develop a mathematical model delineating conditions under which natural selection can favor the coevolution of belligerence and bravery between small-scale societies. Belligerence increases the probability that an actor's group tries to conquer another group, and bravery increases the probability that the actor's group defeats an attacked group. The model takes into account two different demographic scenarios that may lead to the coevolution of belligerence and bravery. Under the first, the fitness benefits driving the coevolution of belligerence and bravery come through the repopulation of defeated groups by fission of victorious ones. Under the second demographic scenario, the fitness benefits come through a temporary increase in the local carrying capacity of victorious groups, after the transfer of resources from defeated groups to victorious ones. The analysis of the model suggests that the selective pressures on belligerence and bravery are stronger when defeated groups can be repopulated by victorious ones. The analysis also suggests that, depending on the shape of the contest success function, costly bravery can evolve in groups of any size.