15 results for Performance Estimation

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

100.00%

Publisher:

Abstract:

The metacognitive ability to accurately estimate one's performance in a test is assumed to be of central importance for initializing task-oriented effort, activating adequate problem-solving strategies, and engaging in efficient error detection and correction. Although schoolchildren's ability to estimate their own performance has been widely investigated, this was mostly done under highly controlled experimental setups involving only a single test occasion. Method: The aim of this study was to investigate this metacognitive ability in the context of real achievement tests in mathematics. Developed and applied by a teacher of a 5th-grade class over the course of a school year, these tests allowed the exploration of the variability of performance estimation accuracy as a function of test difficulty. Results: Mean performance estimations were generally close to actual performance, with somewhat less variability compared to test performance. When grouping the children into three achievement levels, results revealed higher accuracy of performance estimations in the high achievers compared to the low and average achievers. In order to explore the generalization of these findings, analyses were also conducted for the same children's tests in their science classes, revealing a very similar pattern of results compared to the domain of mathematics. Discussion and Conclusion: By and large, the present study, conducted in a natural environment, confirmed previous laboratory findings but also offered additional insights into the generalization and test dependency of students' performance estimations.
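
The accuracy analysis above rests on a simple quantity: the signed difference between a child's estimated and actual test score. A minimal sketch with invented scores (the study's data are not reproduced here):

```python
def estimation_bias(estimated, actual):
    """Signed bias per child: positive = overestimation, negative = underestimation."""
    return [e - a for e, a in zip(estimated, actual)]

def mean_absolute_bias(estimated, actual):
    """Average unsigned deviation between estimates and achieved scores."""
    diffs = estimation_bias(estimated, actual)
    return sum(abs(d) for d in diffs) / len(diffs)

# Illustrative data: points estimated before the test vs. points achieved.
estimated = [18, 15, 20, 12, 16]
actual = [17, 13, 20, 15, 16]

bias = estimation_bias(estimated, actual)
mab = mean_absolute_bias(estimated, actual)
```

Averaging absolute biases separately per achievement group would reproduce the kind of accuracy comparison between high, average and low achievers described above.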

Relevance:

60.00%

Publisher:

Abstract:

Children typically hold very optimistic views of their own skills, but so far only a few studies have investigated possible correlates of the ability to predict performance accurately. Therefore, this study examined the role of individual differences in performance estimation accuracy as a global metacognitive index for different monitoring and control skills (item-level judgments of learning [JOLs] and confidence judgments [CJs]), metacognitive control processes (allocation of study time and control of answers), and executive functions (cognitive flexibility, inhibition, working memory) in 6-year-olds (N=93). The three groups of underestimators, realists, and overestimators differed significantly in their monitoring and control abilities: the underestimators outperformed the overestimators by showing a higher discrimination in CJs between correct and incorrect recognition. Also, the underestimators scored higher on the adequate control of incorrectly recognized items. Regarding the interplay of monitoring and control processes, underestimators spent more time studying items with low JOLs and relied more systematically on their monitoring when controlling their recognition compared to overestimators. At the same time, the three groups did not differ significantly from each other in their executive functions. Overall, results indicate that differences in performance estimation accuracy are systematically related to other global and item-level metacognitive monitoring and control abilities in children as young as six years of age, while no meaningful association between performance estimation accuracy and executive functions was found.
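
The CJ discrimination mentioned above can be illustrated as the gap between mean confidence on correctly and incorrectly recognized items. A hedged sketch with invented judgments (the study may well have used a different discrimination index, such as a gamma correlation):

```python
def cj_discrimination(cjs, correct):
    """Mean confidence for correctly recognized items minus
    mean confidence for incorrectly recognized ones; larger values
    indicate better monitoring discrimination."""
    right = [c for c, ok in zip(cjs, correct) if ok]
    wrong = [c for c, ok in zip(cjs, correct) if not ok]
    return sum(right) / len(right) - sum(wrong) / len(wrong)

# Illustrative: confidence ratings on a 1-5 scale for four recognition trials.
score = cj_discrimination([5, 4, 2, 1], [True, True, False, False])
```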

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The objective of this review was to systematically screen the literature for data related to the survival and complication rates observed with dental or implant double crown abutments and removable prostheses under functional loading for at least 3 years. Materials and Methods: A systematic review of the dental literature from January 1966 to December 2009 was performed in electronic databases (PubMed and Embase) as well as by an extensive hand search to investigate the clinical outcomes of double crown reconstructions. Results: From the total of 2412 titles retrieved from the search, 65 were selected for full-text review. Subsequently, 17 papers were included for data extraction. An estimation of the cumulative survival and complication rates was not feasible due to the lack of detailed information. Tooth survival rates for telescopic abutment teeth ranged from 82.5% to 96.5% after an observation period of 3.4 to 6 years, and for tooth-supported double crown retained dentures from 66.7% to 98.6% after an observation period of 6 to 10 years. The survival rates of implants were between 97.9% and 100% and for telescopic-retained removable dental prostheses with two mandibular implants, 100% after 3.0 and 10.4 years. The major biological complications affecting the tooth abutments were gingival inflammation, periodontal disease, and caries. The most frequent technical complications were loss of cementation and loss of facings. Conclusions: The main findings of this review are: (I) Double crown tooth abutments and dentures demonstrated a wide range of survival rates. (II) Implant-supported mandibular overdentures demonstrated a favorable long-term prognosis. (III) Both tooth-supported and implant-supported reconstructions require greater prosthetic maintenance. (IV) Future areas of research would involve designing appropriate longitudinal studies for comparisons of survival and complication rates of different reconstruction designs.

Relevance:

30.00%

Publisher:

Abstract:

We study synaptic plasticity in a complex neuronal cell model where NMDA-spikes can arise in certain dendritic zones. In the context of reinforcement learning, two kinds of plasticity rules are derived, zone reinforcement (ZR) and cell reinforcement (CR), which both optimize the expected reward by stochastic gradient ascent. For ZR, the synaptic plasticity response to the external reward signal is modulated exclusively by quantities which are local to the NMDA-spike initiation zone in which the synapse is situated. CR, in addition, uses nonlocal feedback from the soma of the cell, provided by mechanisms such as the backpropagating action potential. Simulation results show that, compared to ZR, the use of nonlocal feedback in CR can drastically enhance learning performance. We suggest that the availability of nonlocal feedback for learning is a key advantage of complex neurons over networks of simple point neurons, which have previously been found to be largely equivalent with regard to computational capability.
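
Both ZR and CR perform stochastic gradient ascent on the expected reward. As a hedged illustration of the underlying principle (not the paper's dendritic model), here is a REINFORCE-style update for a single stochastic sigmoidal unit, where the reward-modulated eligibility (spike − p)·x plays the role of the local plasticity signal:

```python
import math
import random

def reinforce_step(w, x, reward, lr=0.1):
    """One stochastic-gradient-ascent step on expected reward for a
    single sigmoidal stochastic unit (REINFORCE-style rule).
    All names and the learning rate are illustrative."""
    drive = sum(wi * xi for wi, xi in zip(w, x))
    p = 1.0 / (1.0 + math.exp(-drive))          # firing probability
    spike = 1 if random.random() < p else 0     # stochastic output
    # Eligibility (spike - p) * x, modulated by the global reward signal.
    w_new = [wi + lr * reward * (spike - p) * xi for wi, xi in zip(w, x)]
    return w_new, spike

random.seed(0)  # for reproducibility of this demo
w_new, spike = reinforce_step([0.0, 0.0], [1.0, 1.0], reward=1.0)
```

In the paper's terms, a ZR-like rule would modulate this eligibility only by zone-local quantities, while a CR-like rule would additionally gate it by somatic feedback.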

Relevance:

30.00%

Publisher:

Abstract:

Pulse wave velocity (PWV) is a surrogate of arterial stiffness and represents a non-invasive marker of cardiovascular risk. The non-invasive measurement of PWV requires tracking the arrival time of pressure pulses recorded in vivo, commonly referred to as pulse arrival time (PAT). In the state of the art, PAT is estimated by identifying a characteristic point of the pressure pulse waveform. This paper demonstrates that for ambulatory scenarios, where signal-to-noise ratios are below 10 dB, the repeatability of PAT measurements obtained through characteristic point identification degrades drastically. Hence, we introduce a novel family of PAT estimators based on parametric modeling of the anacrotic phase of a pressure pulse. In particular, we propose a parametric PAT estimator (TANH) that exhibits high correlation with the Complior(R) characteristic point D1 (CC = 0.99), increases noise robustness and reduces the number of heartbeats required to obtain reliable PAT measurements by a factor of five.
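
The idea of a parametric PAT estimator can be sketched with a tanh template for the anacrotic (rising) phase: the fitted center of the transition serves as the arrival time. The grid search below is a toy stand-in for the paper's actual fitting procedure, and all parameter names are illustrative:

```python
import math

def tanh_pulse(t, amplitude, t0, width, baseline):
    """Parametric model of a pulse's rising (anacrotic) edge; the
    arrival time is the center parameter t0 of the tanh transition."""
    return baseline + 0.5 * amplitude * (1.0 + math.tanh((t - t0) / width))

def estimate_pat(times, samples, amplitude, width, baseline, candidates):
    """Pick the candidate t0 minimizing squared error: a toy grid-search
    stand-in for a proper nonlinear least-squares fit."""
    def sse(t0):
        return sum((tanh_pulse(t, amplitude, t0, width, baseline) - s) ** 2
                   for t, s in zip(times, samples))
    return min(candidates, key=sse)

# Noiseless synthetic pulse with a known arrival time of 0.3 s.
times = [i * 0.05 for i in range(21)]
samples = [tanh_pulse(t, 1.0, 0.3, 0.05, 0.0) for t in times]
pat = estimate_pat(times, samples, 1.0, 0.05, 0.0, [0.1, 0.2, 0.3, 0.4, 0.5])
```

Fitting a whole-edge model uses every sample of the rising phase, which is why such estimators tolerate noise better than single characteristic-point detection.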

Relevance:

30.00%

Publisher:

Abstract:

Recently, screening tests for monitoring the prevalence of transmissible spongiform encephalopathies specifically in sheep and goats became available. Although most countries require comprehensive test validation prior to approval, little is known about their performance under normal operating conditions. Switzerland was one of the first countries to implement 2 of these tests, an enzyme-linked immunosorbent assay (ELISA) and a Western blot, in a 1-year active surveillance program. Slaughtered animals (n = 32,777) were analyzed in either of the 2 tests with immunohistochemistry for confirmation of initial reactive results, and fallen stock samples (n = 3,193) were subjected to both screening tests and immunohistochemistry in parallel. Initial reactive and false-positive rates were recorded over time. Both tests revealed an excellent diagnostic specificity (>99.5%). However, initial reactive rates were elevated at the beginning of the program but dropped to levels below 1% with routine and enhanced staff training. Only the initial reactive rates of the ELISA increased again in the second half of the program, and these correlated with the degree of tissue autolysis in the fallen stock samples. It is noteworthy that the Western blot missed 1 of the 3 atypical scrapie cases in the fallen stock, indicating potential differences in the diagnostic sensitivities of the 2 screening tests. However, an estimation of the diagnostic sensitivity of both tests on field samples remained difficult due to the low disease prevalence. Taken together, these results highlight the importance of staff training, sample quality, and interlaboratory comparison trials when such screening tests are implemented in the field.
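
Diagnostic specificity, the headline figure above, is simply the fraction of disease-free samples that test negative. A sketch with invented counts (not the study's):

```python
def specificity(true_negatives, false_positives):
    """Diagnostic specificity: proportion of disease-free samples
    that correctly test negative."""
    return true_negatives / (true_negatives + false_positives)

# Illustrative counts: 995 negatives and 5 initial reactives later
# ruled false positive by confirmatory immunohistochemistry.
spec = specificity(995, 5)
```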

Relevance:

30.00%

Publisher:

Abstract:

We propose a new method for fully-automatic landmark detection and shape segmentation in X-ray images. Our algorithm works by estimating the displacements from image patches to the (unknown) landmark positions and then integrating them via voting. The fundamental contribution is that we jointly estimate the displacements from all patches to multiple landmarks together, considering not only the training data but also geometric constraints on the test image. The various constraints constitute a convex objective function that can be solved efficiently. Validated on three challenging datasets, our method achieves high accuracy in landmark detection and, combined with a statistical shape model, gives a better performance in shape segmentation compared to state-of-the-art methods.

Relevance:

30.00%

Publisher:

Abstract:

The clinical demand for a device to monitor Blood Pressure (BP) in ambulatory scenarios with minimal use of inflation cuffs is increasing. Based on the so-called Pulse Wave Velocity (PWV) principle, this paper introduces and evaluates a novel concept of BP monitor that can be fully integrated within a chest sensor. After a preliminary calibration, the sensor provides non-occlusive beat-by-beat estimations of Mean Arterial Pressure (MAP) by measuring the Pulse Transit Time (PTT) of arterial pressure pulses travelling from the ascending aorta towards the subcutaneous vasculature of the chest. In a cohort of 15 healthy male subjects, a total of 462 simultaneous readings consisting of reference MAP and chest PTT were acquired. Each subject was recorded on three different days: D, D+3 and D+14. Overall, the implemented protocol induced MAP values to range from 80 ± 6 mmHg at baseline to 107 ± 9 mmHg during isometric handgrip maneuvers. Agreement between reference and chest-sensor MAP values was tested using the intraclass correlation coefficient (ICC = 0.78) and Bland-Altman analysis (mean error = 0.7 mmHg, standard deviation = 5.1 mmHg). The cumulative percentage of MAP values provided by the chest sensor falling within ±5 mmHg of the reference MAP readings was 70%; within ±10 mmHg, 91%; and within ±15 mmHg, 98%. These results indicate that the chest sensor complies with the British Hypertension Society (BHS) requirements for Grade A BP monitors when applied to MAP readings. Grade A performance was maintained even two weeks after the initial subject-dependent calibration. In conclusion, this paper introduces a sensor and a calibration strategy to perform MAP measurements at the chest. The encouraging performance of the presented technique paves the way towards an ambulatory-compliant, continuous and non-occlusive BP monitoring system.
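
The agreement statistics quoted above (mean error and SD of the differences) are the core of a Bland-Altman analysis and are straightforward to compute. A minimal sketch with invented MAP readings:

```python
def bland_altman(reference, measured):
    """Return (bias, sd): the mean and sample standard deviation of the
    device-minus-reference differences, as in a Bland-Altman analysis."""
    diffs = [m - r for r, m in zip(reference, measured)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, sd

# Illustrative MAP readings in mmHg, not the study's data.
bias, sd = bland_altman([100, 90, 110], [101, 89, 113])
```

The limits of agreement conventionally follow as bias ± 1.96·sd.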

Relevance:

30.00%

Publisher:

Abstract:

Background Tests for recent infections (TRIs) are important for HIV surveillance. We have shown that a patient's antibody pattern in a confirmatory line immunoassay (Inno-Lia) also yields information on time since infection. We have published algorithms which, with a certain sensitivity and specificity, distinguish between incident (≤12 months) and older infections. In order to use these algorithms like other TRIs, i.e., based on their windows, we now determined their window periods. Methods We classified Inno-Lia results of 527 treatment-naïve patients with HIV-1 infection of ≤12 months according to incidence by 25 algorithms. The time after which all infections were ruled older, i.e. the algorithm's window, was determined by linear regression of the proportion ruled incident as a function of time since infection. Window-based incident infection rates (IIR) were determined utilizing the relationship ‘Prevalence = Incidence x Duration’ in four annual cohorts of HIV-1 notifications. Results were compared to performance-based IIR, also derived from Inno-Lia results but utilizing the relationship ‘incident = true incident + false incident’, and to the IIR derived from the BED incidence assay. Results Window periods varied between 45.8 and 130.1 days and correlated well with the algorithms' diagnostic sensitivity (R2 = 0.962; P<0.0001). Among the 25 algorithms, the mean window-based IIR among the 748 notifications of 2005/06 was 0.457, compared to 0.453 obtained for performance-based IIR with a model not correcting for selection bias. Evaluation of BED results using a window of 153 days yielded an IIR of 0.669. Window-based IIR and performance-based IIR increased by 22.4% and 30.6%, respectively, in 2008, while 2009 and 2010 showed a return to baseline for both methods.
Conclusions IIR estimations by window- and performance-based evaluations of Inno-Lia algorithm results were similar and can be used together to assess IIR changes between annual HIV notification cohorts.
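
The window-based calculation rests on the identity ‘Prevalence = Incidence x Duration’. A sketch with invented notification counts:

```python
def window_based_iir(n_recent, n_notifications, window_days):
    """Annualized incident infection rate from a recency window:
    the proportion classified 'recent' (prevalence of recency)
    divided by the window duration in years."""
    prevalence = n_recent / n_notifications
    duration_years = window_days / 365.25
    return prevalence / duration_years

# Illustrative: 50 of 500 notifications classified recent, 1-year window.
iir = window_based_iir(50, 500, 365.25)
```

A shorter window with the same number of 'recent' classifications implies a proportionally higher incidence, which is why precise window periods matter.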

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Robotics-assisted tilt table technology was introduced for early rehabilitation of neurological patients. It provides cyclical stepping movement and physiological loading of the legs. The aim of the present study was to assess the feasibility of this type of device for peak cardiopulmonary performance testing using able-bodied subjects. METHODS: A robotics-assisted tilt table was augmented with force sensors in the thigh cuffs and a work rate estimation algorithm. A custom visual feedback system was employed to guide the subjects' work rate and to provide real-time feedback of actual work rate. Feasibility assessment focused on: (i) implementation (technical feasibility), and (ii) responsiveness (was there a measurable, high-level cardiopulmonary reaction?). For responsiveness testing, each subject carried out an incremental exercise test to the limit of functional capacity with a work rate increment of 5 W/min in female subjects and 8 W/min in males. RESULTS: 11 able-bodied subjects were included (9 male, 2 female; age 29.6 ± 7.1 years, mean ± SD). Resting oxygen uptake (VO2) was 4.6 ± 0.7 mL/min/kg and VO2peak was 32.4 ± 5.1 mL/min/kg; this mean VO2peak was 81.1% of the predicted peak value for cycle ergometry. Peak heart rate (HRpeak) was 177.5 ± 9.7 beats/min; all subjects reached at least 85% of their predicted HRpeak value. The respiratory exchange ratio (RER) at VO2peak was 1.02 ± 0.07. Peak work rate was 61.3 ± 15.1 W. All subjects reported a Borg CR10 value for exertion and leg fatigue of 7 or more. CONCLUSIONS: The robotics-assisted tilt table is deemed feasible for peak cardiopulmonary performance testing: the approach was found to be technically implementable and substantial cardiopulmonary responses were observed. Further testing in neurologically-impaired subjects is warranted.
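
The incremental protocol ramps the work-rate target at a constant rate (5 or 8 W/min above). A minimal sketch of such a ramp target, with illustrative names:

```python
def target_work_rate(t_seconds, increment_w_per_min, baseline_w=0.0):
    """Work-rate target (W) during an incremental exercise test with a
    constant ramp, e.g. 5 W/min (female) or 8 W/min (male) as above."""
    return baseline_w + increment_w_per_min * (t_seconds / 60.0)

# After 10 minutes of a 5 W/min ramp the target is 50 W.
wr = target_work_rate(600, 5)
```

The visual feedback system described above would then display this moving target against the work rate estimated from the thigh-cuff force sensors.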

Relevance:

30.00%

Publisher:

Abstract:

The International Surface Temperature Initiative (ISTI) is striving towards substantively improving our ability to robustly understand historical land surface air temperature change at all scales. A key recently completed first step has been collating all available records into a comprehensive open access, traceable and version-controlled databank. The crucial next step is to maximise the value of the collated data through a robust international framework of benchmarking and assessment for product intercomparison and uncertainty estimation. We focus on uncertainties arising from the presence of inhomogeneities in monthly mean land surface temperature data and the varied methodological choices made by various groups in building homogeneous temperature products. The central facet of the benchmarking process is the creation of global-scale synthetic analogues to the real-world database where both the "true" series and inhomogeneities are known (a luxury the real-world data do not afford us). Hence, algorithmic strengths and weaknesses can be meaningfully quantified and conditional inferences made about the real-world climate system. Here we discuss the necessary framework for developing an international homogenisation benchmarking system on the global scale for monthly mean temperatures. The value of this framework is critically dependent upon the number of groups taking part and so we strongly advocate involvement in the benchmarking exercise from as many data analyst groups as possible to make the best use of this substantial effort.
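
The benchmarking idea is to plant known inhomogeneities in clean synthetic series so that homogenisation algorithms can be scored against the truth. A toy sketch of injecting a single step change:

```python
def inject_break(series, break_index, shift):
    """Create a benchmark analogue by adding a known step inhomogeneity
    (e.g. a station move or instrument change) to a clean synthetic
    monthly-mean temperature series."""
    return [x + shift if i >= break_index else x
            for i, x in enumerate(series)]

# Illustrative flat anomaly series with a 1.5-degree break at month 3.
clean = [0.0] * 6
broken = inject_break(clean, 3, 1.5)
```

Because both `clean` and the injected break are known, an algorithm's detected breakpoints and adjustments can be scored exactly, which is the luxury real-world data do not afford.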

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a new method for fully-automatic landmark detection and shape segmentation in X-ray images. To detect landmarks, we estimate the displacements from some randomly sampled image patches to the (unknown) landmark positions, and then we integrate these predictions via a voting scheme. Our key contribution is a new algorithm for estimating these displacements. Different from other methods where each image patch independently predicts its displacement, we jointly estimate the displacements from all patches together in a data-driven way, by considering not only the training data but also geometric constraints on the test image. The displacement estimation is formulated as a convex optimization problem that can be solved efficiently. Finally, we use the sparse shape composition model as a priori information to regularize the landmark positions and thus generate the segmented shape contour. We validate our method on X-ray image datasets of three different anatomical structures: complete femur, proximal femur and pelvis. Experiments show that our method is accurate and robust in landmark detection and, combined with the shape model, gives a better or comparable performance in shape segmentation compared to state-of-the-art methods. Finally, a preliminary study using CT data shows the extensibility of our method to 3D data.
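
The voting step can be illustrated in miniature: each patch proposes a landmark position as its own center plus its predicted displacement, and the proposals are aggregated. The simple averaging below is only a stand-in; the paper instead estimates all displacements jointly via convex optimization:

```python
def vote_landmark(patch_centers, displacements):
    """Aggregate per-patch votes for a single 2D landmark: each patch
    votes for its center plus its predicted displacement, and the
    votes are averaged (a simplification of the paper's scheme)."""
    votes = [(cx + dx, cy + dy)
             for (cx, cy), (dx, dy) in zip(patch_centers, displacements)]
    n = len(votes)
    return (sum(v[0] for v in votes) / n, sum(v[1] for v in votes) / n)

# Two patches at (0,0) and (10,0) both point toward the same landmark.
landmark = vote_landmark([(0, 0), (10, 0)], [(5, 5), (-5, 5)])
```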

Relevance:

30.00%

Publisher:

Abstract:

Robotics-assisted tilt table (RATT) technology provides body support, cyclical stepping movement and physiological loading. This technology can potentially be used to facilitate the estimation of peak cardiopulmonary performance parameters in patients who have neurological or other problems that may preclude testing on a treadmill or cycle ergometer. The aim of the study was to compare the magnitude of peak cardiopulmonary performance parameters, including peak oxygen uptake (VO2peak) and peak heart rate (HRpeak), obtained from the RATT, a cycle ergometer and a treadmill. The strength of correlations between the three devices, test-retest reliability and repeatability were also assessed. Eighteen healthy subjects performed six maximal exercise tests, with two tests on each of the three exercise modalities. Data from the second tests were used for the comparative and correlation analyses. For nine subjects, test-retest reliability and repeatability of VO2peak and HRpeak were assessed. Absolute VO2peak from the RATT, the cycle ergometer and the treadmill was (mean (SD)) 2.2 (0.56), 2.8 (0.80) and 3.2 (0.87) L/min, respectively (p < 0.001). HRpeak from the RATT, the cycle ergometer and the treadmill was 168 (9.5), 179 (7.9) and 184 (6.9) beats/min, respectively (p < 0.001). VO2peak and HRpeak from the RATT vs the cycle ergometer and the RATT vs the treadmill showed strong correlations. Test-retest reliability and repeatability were high for VO2peak and HRpeak for all devices. The results demonstrate that the RATT is a valid and reliable device for exercise testing. There is potential for the RATT to be used in severely impaired subjects who cannot use the standard modalities.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Estimation of glomerular filtration rate (eGFR) using a common formula for both adult and pediatric populations is challenging. Using inulin clearances (iGFRs), this study aims to investigate the existence of a precise age cutoff beyond which the Modification of Diet in Renal Disease (MDRD), the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), or the Cockcroft-Gault (CG) formulas can be applied with acceptable precision. The performance of the new Schwartz formula according to age is also evaluated. METHOD We compared 503 iGFRs for 503 children aged between 33 months and 18 years to eGFRs. To define the most precise age cutoff value for each formula, a circular binary segmentation method analyzing the formulas' bias values according to the children's ages was performed. Bias was defined as the difference between iGFRs and eGFRs. To validate the identified cutoffs, the percentage of eGFRs falling within 30% of the iGFRs (30% accuracy) was calculated. RESULTS For MDRD, CKD-EPI and CG, the best age cutoff was ≥14.3, ≥14.2 and ≤10.8 years, respectively. The lowest mean bias and highest accuracy were -17.11 and 64.7% for MDRD, 27.4 and 51% for CKD-EPI, and 8.31 and 77.2% for CG. The Schwartz formula showed the best performance below the age of 10.9 years. CONCLUSION For the MDRD and CKD-EPI formulas, the mean bias values decreased with increasing child age, and these formulas were more accurate beyond age cutoffs of 14.3 and 14.2 years, respectively. For the CG and Schwartz formulas, the lowest mean bias values and the best accuracies were below age cutoffs of 10.8 and 10.9 years, respectively. Nevertheless, the accuracies of the formulas were still below the National Kidney Foundation Kidney Disease Outcomes Quality Initiative target to be validated in these age groups and, therefore, none of these formulas can be used to estimate GFR in children and adolescent populations.
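
For reference, two of the formulas evaluated above have simple closed forms: the bedside Schwartz estimate and the Cockcroft-Gault creatinine clearance (serum creatinine in mg/dL). The sketch below implements the standard published versions, not the study's own fitting:

```python
def schwartz_egfr(height_cm, scr_mg_dl, k=0.413):
    """Bedside Schwartz estimate for children, mL/min/1.73 m^2:
    eGFR = k * height / serum creatinine."""
    return k * height_cm / scr_mg_dl

def cockcroft_gault(age_years, weight_kg, scr_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance, mL/min:
    (140 - age) * weight / (72 * SCr), times 0.85 for females."""
    crcl = (140 - age_years) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl
```

The study's bias is then simply iGFR minus such an estimate, computed per child.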

Relevance:

30.00%

Publisher:

Abstract:

Background: Diabetes mellitus is spreading throughout the world, and diabetic individuals have been shown to often assess their food intake inaccurately; therefore, it is a matter of urgency to develop automated diet assessment tools. The recent availability of mobile phones with enhanced capabilities, together with advances in computer vision, has permitted the development of image analysis apps for the automated assessment of meals. GoCARB is a mobile phone-based system designed to support individuals with type 1 diabetes during daily carbohydrate estimation. In a typical scenario, the user places a reference card next to the dish and acquires two images using a mobile phone. A series of computer vision modules detect the plate and automatically segment and recognize the different food items, while their 3D shape is reconstructed. Finally, the carbohydrate content is calculated by combining the volume of each food item with the nutritional information provided by the USDA Nutrient Database for Standard Reference. Objective: The main objective of this study is to assess the accuracy of the GoCARB prototype when used by individuals with type 1 diabetes and to compare it to their own performance in carbohydrate counting. In addition, the user experience and usability of the system are evaluated by questionnaires. Methods: The study was conducted at the Bern University Hospital, “Inselspital” (Bern, Switzerland), and involved 19 adult volunteers with type 1 diabetes, each participating once. On each study day, a total of six meals of broad diversity were taken from the hospital’s restaurant and presented to the participants. The food items were weighed on a standard balance and the true amount of carbohydrate was calculated from the USDA nutrient database. Participants were asked to count the carbohydrate content of each meal independently and then by using GoCARB. At the end of each session, a questionnaire was completed to assess the user’s experience with GoCARB.
Results: The mean absolute error was 27.89 (SD 38.20) grams of carbohydrate for the participants’ estimations, whereas the corresponding value for the GoCARB system was 12.28 (SD 9.56) grams of carbohydrate, a significantly better performance (P=.001). In 75.4% (86/114) of the meals, the GoCARB automatic segmentation was successful, and 85.1% (291/342) of individual food items were successfully recognized. Most participants found GoCARB easy to use. Conclusions: This study indicates that the system is able to estimate, on average, the carbohydrate content of meals with higher accuracy than individuals with type 1 diabetes can. The participants thought the app was useful and easy to use. GoCARB seems to be a well-accepted supportive mHealth tool for the assessment of served-on-a-plate meals.
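
The carbohydrate calculation described above combines each item's reconstructed volume with nutrient-database values. A hedged sketch (the density and carbohydrate figures are illustrative, not taken from the USDA database):

```python
def carbs_grams(volume_ml, density_g_per_ml, carbs_per_100g):
    """Carbohydrate content (g) of one food item, computed from its
    reconstructed 3D volume, an assumed density, and per-100 g
    nutrient data (in GoCARB's case, from the USDA database)."""
    weight_g = volume_ml * density_g_per_ml
    return weight_g * carbs_per_100g / 100.0

# Illustrative: a 200 mL portion with unit density and 28 g carbs per 100 g.
carbs = carbs_grams(200, 1.0, 28)
```

Summing this quantity over all segmented items on the plate yields the meal-level estimate that was compared against the participants' own counts.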