930 results for "error performance"
Children's performance estimation in mathematics and science tests over a school year: A pilot study
Abstract:
The metacognitive ability to accurately estimate one's performance in a test is assumed to be of central importance for initializing task-oriented effort, for activating adequate problem-solving strategies, and for engaging in efficient error detection and correction. Although schoolchildren's ability to estimate their own performance has been widely investigated, this has mostly been done in highly controlled experimental set-ups involving only a single test occasion. Method: The aim of this study was to investigate this metacognitive ability in the context of real achievement tests in mathematics. Developed and administered by the teacher of a 5th-grade class over the course of a school year, these tests allowed us to explore the variability of performance estimation accuracy as a function of test difficulty. Results: Mean performance estimations were generally close to actual performance, with somewhat less variability than test performance. When the children were grouped into three achievement levels, performance estimations were more accurate among the high achievers than among the low and average achievers. To explore the generality of these findings, the analyses were repeated for the same children's tests in their science classes, revealing a pattern of results very similar to that in mathematics. Discussion and Conclusion: By and large, the present study, conducted in a natural environment, confirmed previous laboratory findings but also offered additional insights into the generalization and test dependency of students' performance estimations.
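One common way to quantify estimation accuracy of this kind is to separate signed bias (over- vs. underestimation) from mean absolute deviation. A minimal sketch with hypothetical scores; the study's exact accuracy measure is not given in the abstract:

```python
def estimation_accuracy(estimates, scores):
    """Signed bias and mean absolute deviation between estimated and
    actual test scores. Positive bias means overestimation. Illustrative
    metrics, not the study's specific accuracy measure."""
    n = len(scores)
    bias = sum(e - s for e, s in zip(estimates, scores)) / n  # signed error
    mad = sum(abs(e - s) for e, s in zip(estimates, scores)) / n  # unsigned error
    return bias, mad

# hypothetical estimated vs. actual scores for three pupils
print(estimation_accuracy([8, 9, 10], [7, 10, 10]))  # → (0.0, 0.6666666666666666)
```

Note that a zero bias can mask sizeable individual misestimations, which is why both quantities are worth reporting.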
Abstract:
In the present study, we wanted to (1) evaluate whether high-sensitive troponin T levels correlate with the grade of renal insufficiency and (2) test the accuracy of high-sensitive troponin T determination in patients with renal insufficiency for diagnosis of acute myocardial infarction (AMI). In this cross-sectional analysis, all patients who received serial measurements of high-sensitive troponin T from August 1, 2010, to October 31, 2012, at the Department of Emergency Medicine were included. We analyzed data on baseline characteristics, reason for referral, medication, cardiovascular risk factors, and outcome in terms of presence of AMI along with laboratory data (high-sensitive troponin T, creatinine). A total of 1,514 patients (67% male, aged 65 ± 16 years) were included, of which 382 patients (25%) had moderate to severe renal insufficiency and significantly higher levels of high-sensitive troponin T on admission (0.028 vs 0.009, p <0.0001). In patients without AMI, high-sensitive troponin T correlated inversely with the estimated glomerular filtration rate (R = -0.12, p <0.0001). Overall, sensitivity of an elevated high-sensitive troponin for diagnosis of AMI was 0.64 (0.56 to 0.71) and the specificity was 0.48 (0.45 to 0.51). The area under the curve of the receiver operating characteristic for all patients was 0.613 (standard error [SE] 0.023), whereas it was 0.741 (SE 0.029) for patients with a Modification of Diet in Renal Disease estimated glomerular filtration rate >60 ml/min presenting with acute chest pain or dyspnea and 0.535 (SE 0.056) for patients with moderate to severe renal insufficiency presenting with acute chest pain or dyspnea. In conclusion, the diagnostic accuracy for presence of AMI of a baseline measurement of high-sensitive troponin in patients with renal insufficiency was poor and resembles tossing a coin.
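Sensitivity and specificity figures of the kind reported above follow directly from the 2x2 confusion counts. A minimal sketch with hypothetical counts, not the study's patient-level data:

```python
def diagnostic_accuracy(tp, fn, fp, tn):
    """Sensitivity and specificity from 2x2 confusion-table counts."""
    sensitivity = tp / (tp + fn)  # fraction of AMI cases with elevated troponin
    specificity = tn / (tn + fp)  # fraction of non-AMI cases with normal troponin
    return sensitivity, specificity

# hypothetical counts chosen to reproduce the reported point estimates
sens, spec = diagnostic_accuracy(tp=64, fn=36, fp=52, tn=48)
print(round(sens, 2), round(spec, 2))  # → 0.64 0.48
```

A specificity near 0.5, as found here, is what motivates the authors' "tossing a coin" remark for baseline measurements in renal insufficiency.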
Abstract:
This paper introduces an extended hierarchical task analysis (HTA) methodology devised to evaluate and compare user interfaces on volumetric infusion pumps. The pumps were studied along the dimensions of overall usability and propensity for generating human error. With HTA as our framework, we analyzed six pumps on a variety of common tasks using Norman’s Action theory. The introduced method of evaluation divides the problem space between the external world of the device interface and the user’s internal cognitive world, allowing for predictions of potential user errors at the human-device level. In this paper, one detailed analysis is provided as an example, comparing two different pumps on two separate tasks. The results demonstrate the inherent variation, often the cause of usage errors, found with infusion pumps being used in hospitals today. The reported methodology is a useful tool for evaluating human performance and predicting potential user errors with infusion pumps and other simple medical devices.
Abstract:
Impairment of cognitive performance during and after high-altitude climbing has been described in numerous studies and has mostly been attributed to cerebral hypoxia and the resulting functional and structural cerebral alterations. To investigate the hypothesis that high-altitude climbing leads to cognitive impairment, we used neuropsychological tests and measurements of eye movement (EM) performance under different stimulus conditions. The study was conducted in 32 mountaineers participating in an expedition to Muztagh Ata (7,546 m). Neuropsychological tests comprised figural fluency, line bisection, letter and number cancellation, and a modified pegboard task. Saccadic performance was evaluated under three stimulus conditions with varying degrees of cortical involvement: visually guided pro- and anti-saccades, and visuo-visual interaction. Typical saccade parameters (latency, mean sequence, post-saccadic stability, and error rate) were computed off-line. Measurements were taken at a baseline of 440 m and at altitudes of 4,497 m, 5,533 m, and 6,265 m, and again at 440 m. All subjects reached 5,533 m, and 28 reached 6,265 m. The neuropsychological test results did not reveal any cognitive impairment. Complete eye movement recordings for all stimulus conditions were obtained in 24 subjects at baseline and at least two altitudes, and in 10 subjects at baseline and all altitudes. Measurements of saccade performance showed no dependence on any altitude-related parameter and were well within normal limits. Our data indicate that acclimatized climbers do not seem to suffer significant cognitive deficits during or after climbs to altitudes above 7,500 m. We also demonstrated that the investigation of EMs is feasible during high-altitude expeditions.
Abstract:
Growth codes are a subclass of Rateless codes that have found interesting applications in data dissemination problems. Compared to other Rateless and conventional channel codes, Growth codes show improved intermediate performance, which is particularly useful in applications where partial data has some utility. In this paper, we investigate the asymptotic performance of Growth codes using the Wormald method, which was originally proposed for studying the Peeling Decoder of LDPC and LDGM codes. In contrast to previous works, the Wormald differential equations are formulated from the nodes' perspective, which enables a numerical solution for the expected asymptotic decoding performance of Growth codes. Our framework is appropriate for any class of Rateless codes that does not include a precoding step. We further study the performance of Growth codes with moderate and large codeblock sizes through simulations, and we use the generalized logistic function to model the decoding probability. We then exploit the decoding probability model in an illustrative application of Growth codes to error-resilient video transmission. The video transmission problem is cast as a joint source and channel rate allocation problem that is shown to be convex with respect to the channel rate. This illustrative application highlights the main advantage of Growth codes, namely improved performance in the intermediate loss region.
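A generalized logistic curve of the kind used to model the decoding probability can be sketched as follows; the parameter values here are purely illustrative, not fitted values from the paper:

```python
import math

def decoding_probability(overhead, a=0.0, k=1.0, b=20.0, m=1.0):
    """Generalized logistic model of the symbol decoding probability as a
    function of received overhead (received symbols / source symbols).
    a: lower asymptote, k: upper asymptote, b: growth rate, m: inflection
    point. Parameter values are illustrative, not fitted to any code."""
    return a + (k - a) / (1.0 + math.exp(-b * (overhead - m)))
```

Such a closed-form model makes the downstream rate allocation tractable, since the decoding probability (and hence expected distortion) becomes a smooth function of the channel rate.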
Abstract:
BACKGROUND/AIMS Clinical differentiation between organic hypersomnia and non-organic hypersomnia (NOH) is challenging. We aimed to determine the diagnostic value of sleepiness and performance tests in patients with excessive daytime sleepiness (EDS) of organic and non-organic origin. METHODS We conducted a retrospective comparison of the multiple sleep latency test (MSLT), pupillography, and the Steer Clear performance test in three patient groups complaining of EDS: 19 patients with NOH, 23 patients with narcolepsy (NAR), and 46 patients with mild to moderate obstructive sleep apnoea syndrome (OSAS). RESULTS As required by the inclusion criteria, all patients had Epworth Sleepiness Scale (ESS) scores >10. The mean sleep latency in the MSLT indicated mild objective sleepiness in NOH (8.1 ± 4.0 min) and OSAS (7.2 ± 4.1 min), but more severe sleepiness in NAR (2.5 ± 2.0 min). The difference between NAR and the other two groups was significant; the difference between NOH and OSAS was not. In the Steer Clear performance test, NOH patients performed worst (error rate = 10.4%) followed by NAR (8.0%) and OSAS patients (5.9%; p = 0.008). The difference between OSAS and the other two groups was significant, but not between NOH and NAR. The pupillary unrest index was found to be highest in NAR (11.5) followed by NOH (9.2) and OSAS (7.4; n.s.). CONCLUSION A high error rate in the Steer Clear performance test along with mild sleepiness in an objective sleepiness test (MSLT) in a patient with subjective sleepiness (ESS) is suggestive of NOH. This disproportionately high error rate in NOH may be caused by factors unrelated to sleep pressure, such as anergia, reduced attention and motivation affecting performance, but not conventional sleepiness measurements.
Abstract:
Bayesian adaptive randomization (BAR) is an attractive approach that allocates more patients to the putatively superior arm based on interim data while maintaining the good statistical properties attributed to randomization. Under this approach, patients are adaptively assigned to a treatment group based on the probability that the treatment is better. The basic randomization scheme can be modified by introducing a tuning parameter, by replacing the posterior probability with the estimated response rate, or by setting a boundary on the randomization probabilities. For randomization settings comprising the above modifications, operating characteristics, including type I error, power, sample size, imbalance of sample size, interim success rate, and overall success rate, were evaluated through simulation. All randomization settings have low and comparable type I errors. Increasing the tuning parameter decreases power but increases the imbalance of sample size and the interim success rate. Compared with settings using the posterior probability, settings using the estimated response rates have higher power and overall success rate, but less imbalance of sample size and a lower interim success rate. Bounded settings have higher power but less imbalance of sample size than unbounded settings. All settings perform better in the Bayesian design than in the frequentist design. This simulation study provides practical guidance on how to implement the adaptive design.
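A common way to implement the modified scheme is to estimate the posterior probability that one arm is better by Monte Carlo, sharpen or flatten it with the tuning parameter, and clip the result to the boundary. This is a sketch under assumed Beta(1, 1) priors, not the paper's exact settings:

```python
import random

def bar_prob(succ_a, n_a, succ_b, n_b, c=0.5,
             lower=0.1, upper=0.9, n_draws=10_000, seed=0):
    """Probability of assigning the next patient to arm A under BAR.

    Beta(1 + successes, 1 + failures) posteriors (flat priors assumed);
    p = P(pA > pB | data)^c, normalized, then clipped to [lower, upper].
    A sketch of the general scheme, not the paper's exact design."""
    rng = random.Random(seed)
    # Monte Carlo estimate of P(arm A response rate > arm B response rate)
    wins = sum(
        rng.betavariate(1 + succ_a, 1 + n_a - succ_a)
        > rng.betavariate(1 + succ_b, 1 + n_b - succ_b)
        for _ in range(n_draws)
    )
    post = wins / n_draws
    num = post ** c                      # tuning parameter c flattens (c<1)
    p_a = num / (num + (1 - post) ** c)  # or sharpens (c>1) the allocation
    return min(max(p_a, lower), upper)   # boundary on randomization probability
```

With c = 0 this reduces to equal randomization, and with c = 1 it is the raw posterior probability; the clipping step prevents the trial from effectively becoming single-arm.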
Abstract:
Under ocean acidification (OA), the 200 % increase in CO2(aq) and the reduction of pH by 0.3-0.4 units are predicted to affect the carbon physiology and growth of macroalgae. Here we examined how the physiology of the giant kelp Macrocystis pyrifera is affected by elevated pCO2/low pH. Growth and photosynthetic rates, external and internal carbonic anhydrase (CA) activity, and HCO3- versus CO2 use were determined over a 7-day incubation at ambient pCO2 (400 µatm/pH 8.00) and under a future OA treatment (pCO2 1200 µatm/pH 7.59). Neither the photosynthetic nor the growth rates were changed by the elevated CO2 supply in the OA treatment. These results were explained by the greater use of HCO3- compared to CO2 as the inorganic carbon (Ci) source to support photosynthesis. Macrocystis is a mixed HCO3- and CO2 user that exhibits two effective mechanisms for HCO3- utilization; as predicted for species that possess carbon-concentrating mechanisms (CCMs), photosynthesis was not substantially affected by elevated pCO2. The internal CA activity was also unaffected by OA and remained high throughout the experiment; this suggests that HCO3- uptake via an anion exchange protein was not affected by OA. Our results suggest that photosynthetic Ci uptake and the growth of Macrocystis will not be affected by the elevated pCO2/low pH predicted for the future, but the combined effects with other environmental factors such as temperature and nutrient availability could change the physiological response of Macrocystis to OA. Therefore, further studies will be important to elucidate how this species might respond to the global environmental change predicted for the ocean.
Abstract:
Large scale patterns of ecologically relevant traits may help identify drivers of their variability and conditions beneficial or adverse to the expression of these traits. Antimicrofouling defenses in scleractinian corals regulate the establishment of the associated biofilm as well as the risks of infection. The Saudi Arabian Red Sea coast features a pronounced thermal and nutritional gradient including regions and seasons with potentially stressful conditions to corals. Assessing the patterns of antimicrofouling defenses across the Red Sea may hint at the susceptibility of corals to global change. We investigated microfouling pressure as well as the relative strength of 2 alternative antimicrofouling defenses (chemical antisettlement activity, mucus release) along the pronounced environmental gradient along the Saudi Arabian Red Sea coast in 2 successive years. Microfouling pressure was exceptionally low along most of the coast but sharply increased at the southernmost sites. Mucus release correlated with temperature. Chemical defense tended to anti-correlate with mucus release. As a result, the combined action of mucus release and chemical antimicrofouling defense seemed to warrant sufficient defense against microbes along the entire coast. In the future, however, we expect enhanced energetic strain on corals when warming and/or eutrophication lead to higher bacterial fouling pressure and a shift towards putatively more costly defense by mucus release.
Abstract:
Rising CO2 levels in the oceans are predicted to have serious consequences for many marine taxa. Recent studies suggest that non-genetic parental effects may reduce the impact of high CO2 on the growth, survival and routine metabolic rate of marine fishes, but whether the parental environment mitigates behavioural and sensory impairment associated with high CO2 remains unknown. Here, we tested the acute effects of elevated CO2 on the escape responses of juvenile fish and whether such effects were altered by exposure of parents to increased CO2 (transgenerational acclimation). Elevated CO2 negatively affected the reactivity and locomotor performance of juvenile fish, but parental exposure to high CO2 reduced the effects in some traits, indicating the potential for acclimation of behavioural impairment across generations. However, acclimation was not complete in some traits, and absent in others, suggesting that transgenerational acclimation does not completely compensate the effects of high CO2 on escape responses.
Abstract:
We show here that increased variability of temperature and pH synergistically negatively affects the energetics of intertidal zone crabs. Under future climate scenarios, coastal ecosystems are projected to have increased extremes of low tide-associated thermal stress and ocean acidification-associated low pH, the individual or interactive effects of which have yet to be determined. To characterize energetic consequences of exposure to increased variability of pH and temperature, we exposed porcelain crabs, Petrolisthes cinctipes, to conditions that simulated current and future intertidal zone thermal and pH environments. During the daily low tide, specimens were exposed to no, moderate or extreme heating, and during the daily high tide experienced no, moderate or extreme acidification. Respiration rate and cardiac thermal limits were assessed following 2.5 weeks of acclimation. Thermal variation had a larger overall effect than pH variation, though there was an interactive effect between the two environmental drivers. Under the most extreme temperature and pH combination, respiration rate decreased while heat tolerance increased, indicating a smaller overall aerobic energy budget (i.e. a reduced O2 consumption rate) of which a larger portion is devoted to basal maintenance (i.e. greater thermal tolerance indicating induction of the cellular stress response). These results suggest the potential for negative long-term ecological consequences for intertidal ectotherms exposed to increased extremes in pH and temperature due to reduced energy for behavior and reproduction.
Abstract:
Because the metro network market is very cost sensitive, directly modulated schemes appear attractive. In this paper, a CWDM (Coarse Wavelength Division Multiplexing) system is studied in detail by means of optical communication system design software; a detailed study of the modulating current shape (exponential, sine, and Gaussian) for 2.5 Gb/s CWDM Metropolitan Area Networks is performed to evaluate its tolerance to linear impairments such as signal-to-noise-ratio degradation and dispersion. Point-to-point links are investigated and optimum design parameters are obtained. Through extensive sets of simulation results, it is shown that some of these pulse shapes are more tolerant to dispersion than conventional Gaussian pulses. In order to achieve a low Bit Error Rate (BER), different types of optical transmitters are considered, including strongly adiabatic and transient chirp dominated Directly Modulated Lasers (DMLs). We have used fibers with different dispersion characteristics, showing that the system performance depends strongly on the chosen DML-fiber pair.
Abstract:
This work evaluates a spline-based smoothing method applied to the output of a glucose predictor. Methods: Our on-line prediction algorithm is based on a neural network model (NNM). We trained/validated the NNM with a prediction horizon of 30 minutes using 39/54 profiles of patients monitored with the Guardian® Real-Time continuous glucose monitoring system. The NNM output is smoothed by fitting a causal cubic spline. The assessment parameters are the root-mean-square error (RMSE), the mean delay (MD), and the high-frequency noise (HFCrms). The HFCrms is the root-mean-square value of the high-frequency components isolated with a zero-delay non-causal filter; it is 2.90±1.37 mg/dl for the original profiles.
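A causal smoother of this flavor can be sketched by fitting a cubic polynomial to a trailing window at each time step and evaluating it at the newest sample, so only past data is used. This is an illustration of the idea, not the authors' spline-fitting procedure:

```python
def causal_cubic_smooth(values, window=8):
    """Causally smooth a prediction stream: at each step, least-squares
    fit a cubic to the trailing `window` samples and evaluate it at the
    newest one. A sketch of causal cubic smoothing, not the paper's
    exact causal-spline method; window length is an assumption."""

    def solve(a, b):
        # Gaussian elimination with partial pivoting on a small system
        n = len(b)
        m = [row[:] + [bv] for row, bv in zip(a, b)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(m[r][col]))
            m[col], m[piv] = m[piv], m[col]
            for r in range(col + 1, n):
                f = m[r][col] / m[col][col]
                for c in range(col, n + 1):
                    m[r][c] -= f * m[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
        return x

    def fit_last(ys):
        # normal equations for a degree <= 3 polynomial fit on 0..n-1
        n = len(ys)
        deg = min(3, n - 1)
        xs = range(n)
        ata = [[float(sum(x ** (i + j) for x in xs)) for j in range(deg + 1)]
               for i in range(deg + 1)]
        aty = [float(sum((x ** i) * y for x, y in zip(xs, ys)))
               for i in range(deg + 1)]
        coef = solve(ata, aty)
        return sum(c * (n - 1) ** i for i, c in enumerate(coef))  # newest sample

    return [fit_last(values[max(0, i + 1 - window): i + 1])
            for i in range(len(values))]
```

Because each output uses only samples already received, the smoother adds some delay (the MD metric above) in exchange for attenuating high-frequency noise (the HFCrms metric).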
Abstract:
The energetic performance of landfill biogas (LB) and biodigester biogas (BB) from municipal waste was examined in consumption tests. These tests were performed in situ at a gas generation plant associated with a landfill facility in Madrid (Spain), following the standard UNE-EN 30-2-1 (1999). The jets of a domestic cooker commonly used for natural gas (NG) or liquefied petroleum gas (LPG) were modified to operate with the biogases produced at the facility. The working pressures best suited to the tested gases, i.e., those that avoid flashback and flame lift and ensure the stability and correct functioning of the flame during combustion, were determined by trial and error. Both biogases returned optimum energetic performance for the transfer of heat to water in a metallic recipient (as required by the above standard) at a supply pressure of 10 mbar. Domestic cookers are normally supplied with NG at a pressure of 20 mbar, at which the energetic performance of the G20 reference gas was higher than that of either biogas (52.84% compared to 38.06% and 49.77%, respectively). Data on these issues, including as-yet unexplored feedstocks, are required for the correct conversion of domestic cookers in order to avoid the risk of serious personal injury or property damage.
Abstract:
This thesis presents a method to filter errors out of multidimensional databases. The method does not require any a priori information about the nature of the errors. In particular, the errors need not be small, random, or exhibit zero mean; they are only required to be relatively uncorrelated with the clean information contained in the database.
The method is based on an improved extension of the seminal iterative gappy reconstruction method (able to reconstruct lost information at known positions in a database) due to Everson and Sirovich (1995). The improved gappy reconstruction method is turned into a two-step error filtering method: it first (a) identifies the error locations in the database and then (b) reconstructs the information at these locations by treating the associated data as gappy data. The resulting method filters out O(1) errors in an efficient fashion, both when they are random and when they are systematic, and both when they are concentrated in part of the database and when they are spread throughout it. The performance of the method is first illustrated using a two-dimensional toy model database, resulting from discretizing a transcendental function, and then tested on two CFD-calculated, three-dimensional aerodynamic databases containing the pressure coefficient on the surface of a wing for varying values of the angle of attack. A more general performance analysis of the method is presented with the intention of quantifying, first, the degree of randomness the method tolerates while maintaining correct performance and, second, the size of the errors the method can detect. Lastly, some improvements of the method are proposed, with their respective verification.
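The two-step idea (flag suspect entries, then treat them as gaps and reconstruct them from a basis fitted to the trusted entries) can be sketched with a single, user-supplied basis mode; the thesis derives a full basis from the database itself, so this is only an illustration of step (b):

```python
def gappy_fill(data, mask, mode):
    """Step (b) sketch: entries where mask is False are treated as gaps.
    The coefficient of the basis mode is fitted by least squares on the
    trusted entries only, and the gaps are replaced by the reconstruction.
    Single-mode illustration; the actual method uses a basis derived from
    the database itself."""
    num = sum(m * d for m, d, ok in zip(mode, data, mask) if ok)
    den = sum(m * m for m, ok in zip(mode, mask) if ok)
    coef = num / den  # least-squares fit restricted to trusted entries
    return [d if ok else coef * m for d, m, ok in zip(data, mode, mask)]

# a clean database proportional to the mode, with one corrupted entry
data = [2.5, 5.0, 99.0, 10.0]      # entry 2 carries an O(1) error
mask = [True, True, False, True]   # step (a) has flagged entry 2
mode = [1.0, 2.0, 3.0, 4.0]
print(gappy_fill(data, mask, mode))  # → [2.5, 5.0, 7.5, 10.0]
```

Note that the O(1) error in entry 2 does not contaminate the fit at all, because the flagged entry is excluded from the least-squares problem; this is what lets the method handle large, non-zero-mean errors.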