941 results for Performance tests
Abstract:
OBJECTIVES: Several guidelines recommend universal screening for hypertension in childhood and adolescence. Targeted screening of children with a parental history of hypertension could be a more efficient strategy than universal screening. We therefore assessed the association between parental history of hypertension and hypertension in children, and estimated the sensitivity, specificity, and negative and positive predictive values of parental history of hypertension for hypertension in children. METHODS: The present study was a school-based cross-sectional study including 5207 children aged 10-14 years from all public 6th grade classes in the Canton of Vaud, Switzerland. Children were considered hypertensive if they had sustained elevated blood pressure over three separate visits. RESULTS: The prevalence of hypertension in children was 2.2%. Some 8.5% of mothers and 12.9% of fathers reported having hypertension. Maternal history of hypertension (odds ratio 2.0, 95% confidence interval 1.2-3.3) and paternal history of hypertension (odds ratio 2.2, 95% confidence interval 1.4-3.6) were independent risk factors for hypertension in children. Nevertheless, the sensitivity of parental history of hypertension for identifying hypertension in children was low (from 4% for a positive history in both parents up to 41% for a positive history in at least one parent). Positive predictive values were also low (between 4 and 5%). CONCLUSION: Children with hypertensive parents were at higher risk of hypertension. Nevertheless, parental history of hypertension helped only marginally to identify hypertension in offspring. Targeting screening only toward children with a parental history of hypertension may not be a substantially better strategy for identifying hypertension in children than universal screening.
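For orientation, a minimal sketch of how the screening metrics reported above are computed from a 2x2 table; the counts are hypothetical, chosen only to roughly mirror the reported prevalence, sensitivity and positive predictive value.

```python
# Minimal sketch: screening metrics from a hypothetical 2x2 table
# (counts are illustrative, not taken from the study).

def screening_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV and NPV for a binary screen."""
    sensitivity = tp / (tp + fn)   # hypertensive children flagged by parental history
    specificity = tn / (tn + fp)   # normotensive children correctly not flagged
    ppv = tp / (tp + fp)           # flagged children who truly have hypertension
    npv = tn / (tn + fn)           # non-flagged children who are truly normotensive
    return sensitivity, specificity, ppv, npv

# Hypothetical cohort of 5000 children, ~2.2% prevalence, a screen of limited sensitivity.
sens, spec, ppv, npv = screening_metrics(tp=45, fp=900, fn=65, tn=3990)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```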
Abstract:
AIM: Longitudinal studies that have examined cognitive performance in children with intellectual disability more than twice over the course of their development are scarce. We assessed population and individual stability of cognitive performance in a clinical sample of children with borderline to mild non-syndromic intellectual disability. METHOD: Thirty-six children (28 males, eight females; age range 3-19y) with borderline to mild intellectual disability (Full-scale IQ [FSIQ] 50-85) of unknown origin were examined in a retrospective clinical case series using linear mixed models, with each child contributing at least three assessments with standardized intelligence tests. RESULTS: Average cognitive performance remained remarkably stable over time (high population stability; a drop of only 0.38 IQ points per year, standard error=0.39, p=0.325), whereas individual stability was at best moderate (intraclass correlation of 0.58), indicating that about 60% of the residual variation in FSIQ scores can be attributed to between-child variability. Neither sex nor socio-economic status had a statistically significant impact on FSIQ. INTERPRETATION: Although intellectual disability during childhood is a relatively stable phenomenon, the individual stability of IQ is only moderate, probably reflecting limited test-to-test reliability (e.g. variations in the child's cooperation, motivation, and attention). Clinical decisions and predictions should therefore not rely on single IQ assessments, but should also consider adaptive functioning and previous developmental history.
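A minimal sketch of the stability analysis described above, assuming long-format data with hypothetical column names (child_id, age_years, fsiq); the random-intercept model and ICC computation illustrate the population vs. individual stability distinction, not the authors' exact specification.

```python
# Sketch: population stability as the fixed age slope, individual stability as
# the intraclass correlation from a random-intercept linear mixed model.
import pandas as pd
import statsmodels.formula.api as smf

def fsiq_stability(df: pd.DataFrame):
    # One row per assessment; random intercept per child, fixed effect of age.
    model = smf.mixedlm("fsiq ~ age_years", df, groups=df["child_id"])
    fit = model.fit()
    var_between = fit.cov_re.iloc[0, 0]   # between-child (random intercept) variance
    var_within = fit.scale                # residual, within-child variance
    icc = var_between / (var_between + var_within)
    return fit.params["age_years"], icc   # IQ-point drift per year, intraclass correlation
```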
Abstract:
Monte Carlo simulations were used to generate data for ABAB designs of different lengths. The points of phase change are randomly determined before the behaviour measurements are gathered, which allows a randomization test to be used as the analytic technique. Data simulation and analysis can be based either on data-division-specific distributions or on a common distribution, and the choice of method affects the results obtained once the randomization test has been applied. The goal of the study was therefore to examine these effects in more detail. The discrepancies between the two approaches are evident when data with zero treatment effect are considered, and they have implications for statistical power studies. Data-division-specific distributions provide more detailed information about the performance of the statistical technique.
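A minimal sketch of a randomization test for an ABAB design, assuming the three phase-change points were drawn at random subject to a minimum phase length; it illustrates the general technique rather than the specific simulation conditions of the study.

```python
# Randomization test for an ABAB phase design: the reference distribution is
# the test statistic over all admissible placements of the phase-change points.
from itertools import combinations
import numpy as np

def abab_stat(y, cps):
    """|mean(B phases) - mean(A phases)| for change points cps = (c1, c2, c3)."""
    c1, c2, c3 = cps
    a = np.concatenate([y[:c1], y[c2:c3]])   # A1 and A2 observations
    b = np.concatenate([y[c1:c2], y[c3:]])   # B1 and B2 observations
    return abs(b.mean() - a.mean())

def randomization_test(y, observed_cps, min_len=3):
    y = np.asarray(y, dtype=float)
    n = len(y)
    admissible = [cps for cps in combinations(range(1, n), 3)
                  if min(cps[0], cps[1] - cps[0], cps[2] - cps[1], n - cps[2]) >= min_len]
    observed = abab_stat(y, observed_cps)
    ref = [abab_stat(y, cps) for cps in admissible]
    return np.mean([s >= observed for s in ref])   # randomization p-value
```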
Abstract:
PURPOSE: We investigated the association of hematological variables with specific fitness performance in elite team-sport players. METHODS: Hemoglobin mass (Hbmass) was measured in 25 elite field hockey players using the optimized (2 min) CO-rebreathing method. Hemoglobin concentration ([Hb]), hematocrit and mean corpuscular hemoglobin concentration (MCHC) were analyzed in venous blood. Fitness performance evaluation included a repeated-sprint ability (RSA) test (8 x 20 m sprints, 20 s of rest) and the Yo-Yo intermittent recovery level 2 (YYIR2). RESULTS: Hbmass was largely correlated (r = 0.62, P<0.01) with YYIR2 total distance covered (YYIR2TD) but not with any RSA-derived parameters (r ranging from -0.06 to -0.32; all P>0.05). [Hb] and MCHC displayed moderate correlations with both YYIR2TD (r = 0.44 and 0.41; both P<0.01) and RSA sprint decrement score (r = -0.41 and -0.44; both P<0.05). YYIR2TD correlated with RSA best and total sprint times (r = -0.46, P<0.05 and -0.60, P<0.01, respectively), but not with RSA sprint decrement score (r = -0.19, P>0.05). CONCLUSION: Hbmass is positively correlated with specific aerobic fitness, but not with RSA, in elite team-sport players. Additionally, the negative relationships between YYIR2 and RSA test performances imply that different hematological mechanisms may be at play. Overall, these results indicate that these two fitness tests should not be used interchangeably, as they reflect different hematological mechanisms.
Abstract:
We have studied the motor abilities and associative learning capabilities of adult mice placed in different enriched environments. Three-month-old animals were maintained for a month alone (AL), alone in a physically enriched environment (PHY), and, finally, in groups in the absence (SO) or presence (SOPHY) of an enriched environment. The animals' capabilities were subsequently checked in the rotarod test, and for classical and instrumental learning. The PHY and SOPHY groups presented better performances in the rotarod test and in the acquisition of the instrumental learning task. In contrast, no significant differences between groups were observed for classical eyeblink conditioning. The four groups presented similar increases in the strength of field EPSPs (fEPSPs) evoked at the hippocampal CA3-CA1 synapse across classical conditioning sessions, with no significant differences between groups. These trained animals were pulse-injected with bromodeoxyuridine (BrdU) to determine hippocampal neurogenesis. No significant differences were found in the number of NeuN/BrdU double-labeled neurons. We repeated the same BrdU study in one-month-old mice raised for an additional month in the above-mentioned four different environments. These animals were not submitted to rotarod or conditioned tests. Non-trained PHY and SOPHY groups presented more neurogenesis than the other two groups. Thus, neurogenesis seems to be related to physical enrichment at early ages, but not to learning acquisition in adult mice.
Abstract:
INTRODUCTION: Two important risk factors for abnormal neurodevelopment are preterm birth and neonatal hypoxic ischemic encephalopathy. The new revisions of the Griffiths Mental Development Scale (Griffiths-II, [1996]) and the Bayley Scales of Infant Development (BSID-II, [1993]) are two of the most frequently used developmental diagnostic tests. The Griffiths-II is divided into five subscales and a global development quotient (QD), and the BSID-II is divided into two scales, the Mental scale (MDI) and the Psychomotor scale (PDI). The main objective of this research was to establish the extent to which developmental diagnoses obtained using the new revisions of these two tests are comparable for a given child. MATERIAL AND METHODS: Retrospective study of 18-month-old high-risk children examined with both tests in the follow-up unit of the Clinic of Neonatology of our tertiary care university hospital between 2011 and 2012. To determine the concurrent validity of the two tests, paired t-tests and Pearson product-moment correlation coefficients were computed. Using the BSID-II as a gold standard, the performance of the Griffiths-II was analyzed with receiver operating characteristic curves. RESULTS: 61 patients (80.3% preterm, 14.7% neonatal asphyxia) were examined. For the BSID-II, the MDI mean was 96.21 (range 67-133) and the PDI mean was 87.72 (range 49-114). For the Griffiths-II, the QD mean was 96.95 (range 60-124) and the locomotor subscale mean was 92.57 (range 49-119). The score of the Griffiths locomotor subscale was significantly higher than the PDI (p<0.001). Between the Griffiths-II QD and the BSID-II MDI no significant difference was found, and the area under the curve was 0.93, showing good validity. All correlations were high and significant, with Pearson product-moment correlation coefficients >0.8. CONCLUSIONS: The meaning of the results for a given child was the same for the two tests. Two scores, the Griffiths-II QD and the BSID-II MDI, were interchangeable.
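A minimal sketch of the concurrent-validity computations named above (paired t-test, Pearson correlation, ROC analysis with the BSID-II MDI as gold standard); the <85 cut-off used to dichotomise the gold standard is an assumption for illustration only.

```python
# Sketch: agreement between two developmental tests scored on the same children.
import numpy as np
from scipy.stats import ttest_rel, pearsonr
from sklearn.metrics import roc_auc_score

def compare_tests(griffiths_qd, bsid_mdi, cutoff=85):
    qd, mdi = np.asarray(griffiths_qd, float), np.asarray(bsid_mdi, float)
    t, p = ttest_rel(qd, mdi)          # systematic difference between the two tests
    r, _ = pearsonr(qd, mdi)           # strength of association
    delayed = mdi < cutoff             # BSID-II MDI dichotomised as the gold standard
    auc = roc_auc_score(delayed, -qd)  # lower Griffiths-II QD should predict delay
    return {"paired_t_p": p, "pearson_r": r, "auc": auc}
```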
Abstract:
BACKGROUND: Despite a low positive predictive value, diagnostic tests such as complete blood count (CBC) and C-reactive protein (CRP) are commonly used to evaluate whether infants with risk factors for early-onset neonatal sepsis (EOS) should be treated with antibiotics. STUDY DESIGN: We investigated the impact of implementing a protocol aiming at reducing the number of diagnostic tests in infants with risk factors for EOS, in order to compare the diagnostic performance of repeated clinical examination with that of CBC and CRP measurement. The primary outcome was the time between birth and the first dose of antibiotics in infants treated for suspected EOS. RESULTS: Among the 11,503 infants born at ≥35 weeks of gestation during the study period, 222 were treated with antibiotics for suspected EOS. The proportion of infants receiving antibiotics for suspected EOS was 2.1% before and 1.7% after the change of protocol (p = 0.09). Reduction of diagnostic tests was associated with earlier antibiotic treatment in infants treated for suspected EOS (hazard ratio 1.58; 95% confidence interval [CI] 1.20-2.07; p <0.001) and in infants with neonatal infection (hazard ratio 2.20; 95% CI 1.19-4.06; p = 0.01). There was no difference in the duration of hospital stay or in the proportion of infants requiring respiratory or cardiovascular support before and after the change of protocol. CONCLUSION: Reduction of diagnostic tests such as CBC and CRP does not delay the initiation of antibiotic treatment in infants with suspected EOS. The importance of clinical examination in infants with risk factors for EOS should be emphasised.
Abstract:
The identification of biomarkers of vascular cognitive impairment is urgent for its early diagnosis. The aim of this study was to detect and monitor changes in brain structure and connectivity, and to correlate them with the decline in executive function. We examined the feasibility of early diagnostic magnetic resonance imaging (MRI) to predict cognitive impairment before onset in an animal model of chronic hypertension, the spontaneously hypertensive rat. Cognitive performance was tested in an operant conditioning paradigm that evaluated learning, memory, and behavioral flexibility skills. Behavioral tests were coupled with longitudinal diffusion weighted imaging acquired with 126 diffusion gradient directions and 0.3 mm³ isotropic resolution at 10, 14, 18, 22, 26, and 40 weeks after birth. Diffusion weighted imaging was analyzed in two different ways: by regional characterization of diffusion tensor imaging (DTI) indices, and by assessing changes in structural brain network organization based on Q-Ball tractography. Already at the first evaluated time points, DTI scalar maps revealed significant differences in many regions, suggesting loss of integrity in the white and gray matter of spontaneously hypertensive rats compared with normotensive control rats. In addition, graph theory analysis of the structural brain network demonstrated a significant decrease in hierarchical modularity and in global and local efficiency, with predictive value as shown by a regional three-fold cross-validation study. Moreover, these decreases were significantly correlated with the behavioral performance deficits observed at subsequent time points, suggesting that diffusion weighted imaging and connectivity studies can reveal neuroimaging alterations even before overt signs of cognitive impairment become apparent.
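A minimal sketch of the graph-theory step, assuming the tractography-derived connectivity matrix is thresholded into an unweighted graph; the metrics shown (global and local efficiency, modularity) approximate, but do not reproduce, the study's weighted and hierarchical variants.

```python
# Sketch: turn a structural connectivity matrix (e.g. streamline counts) into an
# unweighted graph and compute efficiency and modularity.
import numpy as np
import networkx as nx
from networkx.algorithms import community

def network_metrics(connectivity: np.ndarray, threshold: float = 0.0):
    adj = (connectivity > threshold).astype(int)   # binarise the connectome
    np.fill_diagonal(adj, 0)                       # no self-connections
    g = nx.from_numpy_array(adj)
    communities = community.greedy_modularity_communities(g)
    return {
        "global_efficiency": nx.global_efficiency(g),
        "local_efficiency": nx.local_efficiency(g),
        "modularity": community.modularity(g, communities),
    }
```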
Abstract:
In the current study, we evaluated various robust statistical methods for comparing two independent groups. Two simulation scenarios were generated: one with equal population means and another with population mean differences. In each scenario, 33 experimental conditions were used, varying sample size, standard deviation and asymmetry. For each condition, 5000 replications per group were generated. The results show an adequate Type I error rate but not high power for the confidence intervals. In general, across the two scenarios studied (with and without population mean differences) and the different conditions analysed, the Mann-Whitney U-test showed the strongest performance, with the Yuen-Welch t-test performing slightly worse.
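A minimal sketch of the Monte Carlo approach described above, estimating the empirical Type I error of two comparison procedures under skewed data with equal population means; the plain Welch t-test stands in for the Yuen-Welch trimmed variant, and the gamma distribution, sample sizes and replication count are illustrative.

```python
# Sketch: empirical Type I error under asymmetric data with no mean difference.
import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind

def type_I_error(n1=30, n2=30, reps=5000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    rej_u = rej_t = 0
    for _ in range(reps):
        # Skewed samples drawn from the same population (zero effect).
        x = rng.gamma(shape=2.0, scale=1.0, size=n1)
        y = rng.gamma(shape=2.0, scale=1.0, size=n2)
        rej_u += mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha
        rej_t += ttest_ind(x, y, equal_var=False).pvalue < alpha
    return rej_u / reps, rej_t / reps   # rejection rates should stay near alpha
```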
Abstract:
Web application performance testing is an emerging and important field of software engineering. As web applications become more commonplace and complex, the need for performance testing will only increase. This paper discusses common concepts, practices and tools that lie at the heart of web application performance testing. A pragmatic, hands-on approach is assumed where applicable; real-life examples of test tooling, execution and analysis are presented right next to the underpinning theory. At the client-side, web application performance is primarily driven by the amount of data transmitted over the wire. At the server-side, selection of programming language and platform, implementation complexity and configuration are the primary contributors to web application performance. Web application performance testing is an activity that requires delicate coordination between project stakeholders, developers, system administrators and testers in order to produce reliable and useful results. Proper test definition, execution, reporting and repeatable test results are of utmost importance. Open-source performance analysis tools such as Apache JMeter, Firebug and YSlow can be used to realise effective web application performance tests. A sample case study using these tools is presented in this paper. The sample application was found to perform poorly even under the moderate load incurred by the sample tests.
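As a minimal illustration of the client-side measurement idea behind tools such as Apache JMeter (not a substitute for them), the sketch below fires concurrent requests at a placeholder URL and summarises the observed latencies.

```python
# Sketch: crude client-side load test; URL and load parameters are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def timed_request(url: str) -> float:
    start = time.perf_counter()
    with urlopen(url) as resp:
        resp.read()                      # include transfer time, not just first byte
    return time.perf_counter() - start

def load_test(url: str, requests: int = 100, concurrency: int = 10):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_request, [url] * requests))
    p95 = latencies[int(0.95 * len(latencies)) - 1]
    return {"mean_s": sum(latencies) / len(latencies), "p95_s": p95}
```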
Abstract:
Nowadays, much of the most important research is done at the intersection of two or more fields. Such work opens doors to new ideas, helps to find what could not be found before, and explains simple things that were previously missed because of narrow vision. This research investigates the interconnection of strategy studies and knowledge management. Well-known researchers (e.g. Michael Zack, 2003) point out that an organization should align its knowledge management with its strategy in order to succeed, but this area is not yet well developed. This research contributes to the growing body of knowledge on knowledge management-strategy alignment. It tests the relation between the strategic orientation of knowledge management and the performance of the company, and it also investigates how strategy typology influences the strategic orientation of knowledge management. These two points are of critical importance for the development of this area. Moreover, the work has management implications for practitioners who care about the sustainable, knowledge-based success of their company.
Abstract:
The use of renewable fuels such as biodiesel can ease the demand for fossil fuel for power generation and transportation in rural areas. In this work, the performance impact of castor oil biodiesel is evaluated in an automotive and a stationary diesel engine. The application of B10 and B20 biodiesel blends and of pre-heated neat biodiesel is considered. The viability of using B10 and B20 blends for mobility and power generation was demonstrated in dynamometric bench tests, where these blends performed similarly to fossil diesel. With the pre-heated neat biodiesel, however, a loss of brake torque and an increase in specific fuel consumption were observed relative to diesel fuel.
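For reference, a small sketch of the quantities typically derived from such dynamometric bench tests: brake power from measured torque and speed, and brake specific fuel consumption (BSFC); the figures are invented, not results from the study.

```python
# Sketch: brake power P = 2*pi*N*T and BSFC = fuel mass flow / brake power.
import math

def brake_power_kw(torque_nm: float, speed_rpm: float) -> float:
    return 2 * math.pi * (speed_rpm / 60) * torque_nm / 1000.0

def bsfc_g_per_kwh(fuel_flow_kg_h: float, power_kw: float) -> float:
    return fuel_flow_kg_h * 1000.0 / power_kw

power = brake_power_kw(torque_nm=180.0, speed_rpm=2200.0)   # about 41.5 kW
print(f"power = {power:.1f} kW, BSFC = {bsfc_g_per_kwh(10.5, power):.0f} g/kWh")
```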
Abstract:
This thesis attempts to fill gaps in both a theoretical basis and an operational and strategic understanding in the areas of social ventures, social entrepreneurship and nonprofit business models. This study also attempts to bridge the gap in strategic and economic theory between social and commercial ventures. More specifically, this thesis explores sustainable competitive advantage from a resource-based theory perspective and explores how it may be applied to the nonmarket situation of nonprofit organizations and social ventures. It is proposed that a social value-orientation of sustainable competitive advantage, called sustainable contributive advantage, provides a more realistic depiction of what is necessary in order for a social venture to perform better than its competitors over time. In addition to providing this realistic depiction, this research provides a substantial theoretical contribution in the area of economics, social ventures, and strategy research, specifically in regards to resource-based theory. The proposed model for sustainable contributive advantage uses resource-based theory and competitive advantage in order to be applicable to social ventures. This model proposes an explanation of a social venture’s ability to demonstrate consistently superior performance. In order to determine whether sustainable competitive advantage is in fact, appropriate to apply to both social and economic environments, quantitative analyses are conducted on a large sample of nonprofit organizations in a single industry and then compared to similar quantitative analyses conducted on commercial ventures. In comparing the trends and strategies between the two types of entities from a quantitative perspective, propositions are developed regarding a social venture’s resource utilization strategies and their possible impact on performance. Evidence is found to support the necessity of adjusting existing models in resource-based theory in order to apply them to social ventures. Additionally supported is the proposed theory of sustainable contributive advantage. The thesis concludes with recommendations for practitioners, researchers and policy makers as well as suggestions for future research paths.
Abstract:
Objective The objective of this study is to assess the performance of cytopathology laboratories providing services to the Brazilian Unified Health System (Sistema Único de Saúde - SUS) in the State of Minas Gerais, Brazil. Methods This descriptive study uses data obtained from the Cervical Cancer Information System from January to December 2012. Three quality indicators were analyzed to assess the quality of cervical cytopathology tests: the positivity index, the percentage of atypical squamous cells (ASCs) among abnormal tests, and the percentage of tests compatible with high-grade squamous intraepithelial lesions (HSILs). Laboratories were classified according to their production scale in tests per year: ≤5,000; 5,001 to 10,000; 10,001 to 15,000; and ≥15,001. Based on the collected variables and the classification of laboratories according to production scale, we created and analyzed a database using Microsoft Office Excel 97-2003. Results In the Brazilian state of Minas Gerais, 146 laboratories provided services to the SUS in 2012, performing a total of 1,277,018 cervical cytopathology tests. Half of these laboratories had production scales ≤5,000 tests/year and accounted for 13.1% of all tests performed in the entire state; in turn, 13.7% of the laboratories had production scales ≥15,001 tests/year and accounted for 49.2% of the tests performed in the entire state. The positivity indexes of most laboratories providing services to the SUS in 2012, regardless of production scale, were below or well below the recommended limits. Of the 20 laboratories that performed more than 15,001 tests per year, only three presented percentages of tests compatible with HSILs above the lower limit recommended by the Brazilian Ministry of Health. Conclusion The majority of laboratories providing services to the SUS in Minas Gerais presented quality indicators outside the range recommended by the Brazilian Ministry of Health.
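A minimal sketch of the three quality indicators analysed above; the denominators (satisfactory tests for the positivity index and HSIL percentage, abnormal tests for the ASC percentage) follow the usual definitions and should be read as assumptions, as should the example counts.

```python
# Sketch: cervical cytopathology quality indicators from annual laboratory counts.
def cytology_indicators(satisfactory: int, abnormal: int, asc: int, hsil: int):
    return {
        "positivity_index_pct": 100 * abnormal / satisfactory,
        "asc_among_abnormal_pct": 100 * asc / abnormal,
        "hsil_pct": 100 * hsil / satisfactory,
    }

# Hypothetical laboratory with 20,000 satisfactory tests in a year.
print(cytology_indicators(satisfactory=20_000, abnormal=600, asc=300, hsil=90))
```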
Abstract:
During vehicle deceleration due to braking, there is friction between the lining surface and the brake drum or disc. In this process the kinetic energy of the vehicle is turned into thermal energy that raises the temperature of the components. The heating of the brake system during braking is a serious problem because, besides damaging the system, it may also affect the wheel and tire, which can cause accidents. In search of the best configuration that reflects true conditions of use without exceeding safety limits, models and formulations for the brake system are presented, considering different braking conditions and kinds of brakes. Several models were analyzed using well-known methods. The flat plate model based on energy conservation was applied to a bus by means of a computer program. The vehicle is simulated undergoing emergency braking, considering the temperature change at the lining-drum interface. The results include deceleration, braking efficiency, wheel resistance, normal reaction on the tires and the coefficient of adhesion. Some of the results were compared with dynamometer tests performed by FRAS-LE and others with track tests performed by Mercedes-Benz. The agreement between the results and the tests is sufficient to validate the mathematical model. The computer program makes it possible to simulate brake system performance in the vehicle, assisting the designer during the development phase and reducing the need for track tests.
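A lumped-capacitance sketch of the energy-conservation idea behind the flat plate model: the vehicle's kinetic energy is converted into heat absorbed by the drums and linings; all parameters below are illustrative and not taken from the reported tests.

```python
# Sketch: temperature rise of a drum brake during a single emergency stop,
# treating each drum as a lumped thermal mass (a simplification of the model).
def braking_temperature_rise(vehicle_mass_kg, speed_m_s, n_brakes,
                             drum_mass_kg, drum_c_J_kgK,
                             heat_to_drum=0.9):
    kinetic_energy = 0.5 * vehicle_mass_kg * speed_m_s ** 2        # J
    energy_per_brake = heat_to_drum * kinetic_energy / n_brakes    # share absorbed by each drum
    return energy_per_brake / (drum_mass_kg * drum_c_J_kgK)        # delta T in kelvin

# 16 t bus stopping from 60 km/h on 4 drum brakes of 40 kg cast iron (c ~ 500 J/kgK).
dT = braking_temperature_rise(16_000, 60 / 3.6, 4, 40.0, 500.0)
print(f"temperature rise per emergency stop ~ {dT:.0f} K")
```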