247 results for ACCURACIES
Abstract:
In sports games, it is often necessary to perceive a large number of moving objects (e.g., the ball and players). In this context, the role of peripheral vision for processing motion information in the periphery is often discussed, especially when motor responses are required. In an attempt to test the basal functionality of peripheral vision in such sports-game situations, a Multiple Object Tracking (MOT) task was chosen, which requires tracking a certain number of targets amidst distractors. Participants' primary task was to recall four targets (out of 10 rectangular stimuli) after six seconds of quasi-random motion. As a secondary task, a button had to be pressed if a target change occurred (Exp 1: stop vs. form change to a diamond for 0.5 s; Exp 2: stop vs. slowdown for 0.5 s). While the eccentricities of the changes (5-10° vs. 15-20°) were manipulated, decision accuracy (recall and button press correct), motor response time, and saccadic reaction time were calculated as dependent variables. The results show that participants indeed used peripheral vision to detect the changes, because either no or only very late saccades to the changed target were executed in correct trials. Moreover, a saccade was executed more often when eccentricities were small. Response accuracies were higher and response times lower in the stop conditions of both experiments, while larger eccentricities led to higher response times in all conditions. Summing up, it could be shown that monitoring targets and detecting changes can be processed by peripheral vision alone and that a monitoring strategy based on peripheral vision may be the optimal one, as saccades may come with certain costs. Further research is planned to address the question of whether this functionality is also evident in sports tasks.
Abstract:
In sports games, it is often necessary to perceive a large number of moving objects (e.g., the ball and players). In this context, the role of peripheral vision for processing motion information in the periphery is often discussed, especially when motor responses are required. In an attempt to test the capability of using peripheral vision in such sports-game situations, a Multiple Object Tracking task, which requires tracking a certain number of targets amidst distractors, was chosen to determine the sensitivity of detecting target changes with peripheral vision only. Participants' primary task was to recall four targets (out of 10 rectangular stimuli) after six seconds of quasi-random motion. As a secondary task, a button had to be pressed if a target change occurred (Exp 1: stop vs. form change to a diamond for 0.5 s; Exp 2: stop vs. slowdown for 0.5 s). Eccentricities of the changes (5-10° vs. 15-20°) were manipulated, decision accuracy (recall and button press correct), motor response time, and saccadic reaction time (change onset to saccade onset) were calculated, and eye movements were recorded. The results show that participants indeed used peripheral vision to detect the changes, because either no or only very late saccades to the changed target were executed in correct trials. Moreover, a saccade was executed more often when eccentricities were small. Response accuracies were higher and response times lower in the stop conditions of both experiments, while larger eccentricities led to higher response times in all conditions. Summing up, it could be shown that monitoring targets and detecting changes can be processed by peripheral vision only and that a monitoring strategy on the basis of peripheral vision may be the optimal one, as saccades may come with certain costs. Further research is planned to address the question of whether this functionality is also evident in sports tasks.
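As an illustration of how the dependent variables described above can be derived from eye-tracking and response logs, the following Python sketch classifies a trial as a peripheral detection when no saccade to the changed target precedes the manual response. It is not the authors' analysis code; the trial fields, units, and the classification rule are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): classifying whether a change was
# detected peripherally, based on whether a saccade to the changed target
# occurred before the manual response. Field names and units are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Trial:
    change_onset_ms: float             # time the target change started
    button_press_ms: Optional[float]   # manual response time (None = no press)
    saccade_onset_ms: Optional[float]  # first saccade to the changed target (None = none)
    recall_correct: bool
    press_correct: bool

def classify(trial: Trial) -> dict:
    """Derive the dependent variables described in the abstract."""
    correct = trial.recall_correct and trial.press_correct
    rt = (trial.button_press_ms - trial.change_onset_ms
          if trial.button_press_ms is not None else None)
    srt = (trial.saccade_onset_ms - trial.change_onset_ms
           if trial.saccade_onset_ms is not None else None)
    # Peripheral detection: correct trial with either no saccade to the target
    # or a saccade that starts only after the manual response was already given.
    peripheral = correct and (srt is None or (rt is not None and srt > rt))
    return {"decision_correct": correct, "response_time_ms": rt,
            "saccadic_rt_ms": srt, "peripheral_detection": peripheral}

print(classify(Trial(3000.0, 3420.0, None, True, True)))
```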
Abstract:
Polymorbid patients, diverse diagnostic and therapeutic options, more complex hospital structures, financial incentives, benchmarking, as well as perceptional and societal changes put pressure on medical doctors, specifically when medical errors surface. This is particularly true for the emergency department setting, where patients face delayed or erroneous initial diagnostic or therapeutic measures and costly hospital stays due to sub-optimal triage. A "biomarker" is any laboratory tool with the potential to better detect and characterise diseases, to simplify complex clinical algorithms, and to improve clinical problem solving in routine care. Biomarkers must be embedded in clinical algorithms to complement, not replace, basic medical skills. Unselected ordering of laboratory tests and shortcomings in test performance and interpretation contribute to diagnostic errors. Test results may be ambiguous, with false-positive or false-negative results, and generate unnecessary harm and costs. Laboratory tests should only be ordered if the results have clinical consequences. In studies, we must move beyond the observational reporting and meta-analysing of diagnostic accuracies for biomarkers. Instead, specific cut-off ranges should be proposed and intervention studies conducted to prove outcome-relevant impacts on patient care. The focus of this review is to exemplify the appropriate use of selected laboratory tests in the emergency setting for which randomised controlled intervention studies have proven clinical benefit. Herein, we focus on initial patient triage and the allocation of treatment opportunities in patients with cardiorespiratory diseases in the emergency department. The following six biomarkers will be discussed: proadrenomedullin for prognostic triage assessment and site-of-care decisions, cardiac troponin for acute myocardial infarction, natriuretic peptides for acute heart failure, D-dimers for venous thromboembolism, C-reactive protein as a marker of inflammation, and procalcitonin for antibiotic stewardship in infections of the respiratory tract and sepsis. For these markers we provide an overview of the pathophysiology, the historical evolution of evidence, and the strengths and limitations for a rational implementation into clinical algorithms. We critically discuss results from key intervention trials that led to their use in clinical routine and potential future indications. The rationale for the use of all these biomarkers is to tackle, first, diagnostic ambiguity and the defensive medicine that follows from it; second, delayed and sub-optimal therapeutic decisions; and third, prognostic uncertainty with misguided triage and site-of-care decisions, all of which contribute to the waste of our limited health care resources. A multifaceted approach to a more targeted management of medical patients from emergency admission to discharge, including biomarkers, will translate into better resource use, shorter length of hospital stay, reduced overall costs, improved patient satisfaction, and better outcomes in terms of mortality and re-hospitalisation. Hopefully, the concepts outlined in this review will help the reader to improve their diagnostic skills and become a more parsimonious requester of laboratory tests.
Abstract:
Diet-related chronic diseases severely affect personal and global health. However, managing or treating these diseases currently requires long training and high personal involvement to succeed. Computer vision systems could assist with the assessment of diet by detecting and recognizing different foods and their portions in images. We propose novel methods for detecting a dish in an image and segmenting its contents, with and without user interaction. All methods were evaluated on a database of over 1600 manually annotated images. The dish detection scored an average of 99% accuracy with a 0.2 s/image run time, while the automatic and semi-automatic dish segmentation methods reached average accuracies of 88% and 91%, respectively, with an average run time of 0.5 s/image, outperforming competing solutions.
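A minimal sketch of how such segmentation accuracies can be computed against manual annotations follows; the label maps and the pixel-wise accuracy definition are assumptions for illustration, not the paper's evaluation code.

```python
# Illustrative sketch (not the paper's code): pixel-wise accuracy of a
# predicted dish-content segmentation against a manual annotation,
# averaged over an image database.
import numpy as np

def pixel_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of pixels whose predicted label matches the annotation."""
    return float(np.mean(pred == truth))

def mean_accuracy(pairs):
    """Average accuracy over (prediction, annotation) pairs."""
    return float(np.mean([pixel_accuracy(p, t) for p, t in pairs]))

# Toy example with two 4x4 label maps (0 = background, 1..n = food items).
truth = np.array([[0, 0, 1, 1]] * 4)
pred  = np.array([[0, 0, 1, 2]] * 4)
print(mean_accuracy([(pred, truth)]))  # 0.75
```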
Abstract:
The Center for Orbit Determination in Europe (CODE) has been contributing as a global analysis center to the International GNSS Service (IGS) for many years. The processing of GPS and GLONASS data is well established in CODE's ultra-rapid, rapid, and final product lines. With the introduction of new signals for the established and new GNSS, new challenges and opportunities are arising for GNSS data management and processing. The IGS started the Multi-GNSS EXperiment (MGEX) in 2012 in order to gain first experience with the new data formats and to develop new strategies for making optimal use of these additional measurements. CODE has started to contribute to IGS MGEX with a consistent, rigorously combined triple-system orbit solution (GPS, GLONASS, and Galileo). SLR residuals for the computed Galileo satellite orbits are of the order of 10 cm. Furthermore, CODE established a GPS and Galileo clock solution. A quality assessment shows that these experimental orbit and clock products allow even a Galileo-only precise point positioning (PPP), with accuracies at the decimeter level (static PPP) to meter level (kinematic PPP) for selected stations.
Abstract:
BACKGROUND Estimating glomerular filtration rate (eGFR) using a common formula for both adult and pediatric populations is challenging. Using inulin clearances (iGFRs), this study aims to investigate whether there is a precise age cutoff beyond which the Modification of Diet in Renal Disease (MDRD), the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), or the Cockcroft-Gault (CG) formulas can be applied with acceptable precision. The performance of the new Schwartz formula according to age is also evaluated. METHOD We compared 503 iGFRs for 503 children aged between 33 months and 18 years to eGFRs. To define the most precise age cutoff value for each formula, a circular binary segmentation method analyzing the formulas' bias values according to the children's ages was performed. Bias was defined as the difference between iGFRs and eGFRs. To validate the identified cutoff, the 30% accuracy (percentage of eGFR values within 30% of the iGFR) was calculated. RESULTS For MDRD, CKD-EPI and CG, the best age cutoffs were ≥14.3, ≥14.2 and ≤10.8 years, respectively. The lowest mean bias and highest accuracy were -17.11 and 64.7% for MDRD, 27.4 and 51% for CKD-EPI, and 8.31 and 77.2% for CG. The Schwartz formula showed the best performance below the age of 10.9 years. CONCLUSION For the MDRD and CKD-EPI formulas, the mean bias values decreased with increasing child age, and these formulas were more accurate beyond age cutoffs of 14.3 and 14.2 years, respectively. For the CG and Schwartz formulas, the lowest mean bias values and the best accuracies were below age cutoffs of 10.8 and 10.9 years, respectively. Nevertheless, the accuracies of the formulas were still below the National Kidney Foundation Kidney Disease Outcomes Quality Initiative target to be validated in these age groups and, therefore, none of these formulas can be used to estimate GFR in children and adolescent populations.
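The two validation statistics used above can be computed as in the following sketch; the GFR values are synthetic and the code only illustrates the bias and 30%-accuracy definitions, not the study's analysis.

```python
# Hedged sketch of the two validation statistics described in the abstract:
# bias (iGFR - eGFR) and 30% accuracy (share of estimates within +/-30% of
# the inulin clearance). Not the authors' code; input arrays are synthetic.
import numpy as np

def bias(igfr: np.ndarray, egfr: np.ndarray) -> float:
    """Mean difference between measured (inulin) and estimated GFR."""
    return float(np.mean(igfr - egfr))

def p30_accuracy(igfr: np.ndarray, egfr: np.ndarray) -> float:
    """Percentage of eGFR values within 30% of the corresponding iGFR."""
    within = np.abs(egfr - igfr) <= 0.30 * igfr
    return 100.0 * float(np.mean(within))

igfr = np.array([95.0, 60.0, 110.0, 45.0])   # ml/min/1.73 m^2, synthetic
egfr = np.array([80.0, 90.0, 100.0, 44.0])
print(bias(igfr, egfr), p30_accuracy(igfr, egfr))
```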
Abstract:
Aims. We present an inversion method based on Bayesian analysis to constrain the interior structure of terrestrial exoplanets, in the form of the chemical composition of the mantle and the core size. Specifically, we identify what parts of the interior structure of terrestrial exoplanets can be determined from observations of mass, radius, and stellar elemental abundances. Methods. We perform a full probabilistic inverse analysis to formally account for observational and model uncertainties and obtain confidence regions of interior structure models. This enables us to characterize how model variability depends on the data and the associated uncertainties. Results. We test our method on the terrestrial solar system planets and find that our model predictions are consistent with independent estimates. Furthermore, we apply our method to synthetic exoplanets up to 10 Earth masses and up to 1.7 Earth radii, and to the exoplanet Kepler-36b. Importantly, the inversion strategy proposed here provides a framework for understanding the level of precision required to characterize the interior of exoplanets. Conclusions. Our main conclusions are (1) observations of mass and radius are sufficient to constrain core size; (2) stellar elemental abundances (Fe, Si, Mg) are principal constraints for reducing degeneracy in interior structure models and constraining mantle composition; and (3) the inherent degeneracy in determining interior structure from mass and radius observations depends not only on measurement accuracies but also on the actual size and density of the exoplanet. We argue that precise observations of stellar elemental abundances are central in order to place constraints on planetary bulk composition and to reduce model degeneracy. We provide a general methodology for analyzing the interior structures of exoplanets that may help to understand how interior models are distributed among star systems. The methodology we propose is sufficiently general to allow its future extension to more complex internal structures, including hydrogen- and water-rich exoplanets.
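As a toy illustration of the probabilistic inverse analysis described above (not the authors' model), the sketch below inverts the core radius fraction of a two-layer, constant-density planet from a mass-radius observation by rejection sampling; the layer densities, observational uncertainty, and uniform prior are all assumptions.

```python
# Toy illustration only: probabilistic inversion of core size from mass and
# radius, using a two-layer planet (constant-density iron core and silicate
# mantle) and simple rejection sampling.
import numpy as np

RHO_CORE, RHO_MANTLE = 10_000.0, 4_500.0      # kg/m^3, assumed mean densities
R_EARTH, M_EARTH = 6.371e6, 5.972e24

def forward_mass(radius_m, core_fraction):
    """Mass of a two-layer planet with core radius = core_fraction * radius."""
    rc = core_fraction * radius_m
    core = 4.0 / 3.0 * np.pi * rc**3 * RHO_CORE
    mantle = 4.0 / 3.0 * np.pi * (radius_m**3 - rc**3) * RHO_MANTLE
    return core + mantle

def invert_core_fraction(mass_obs, radius_obs, sigma_mass, n=200_000, seed=0):
    """Posterior samples of the core radius fraction via rejection sampling."""
    rng = np.random.default_rng(seed)
    prior = rng.uniform(0.0, 1.0, n)                 # uniform prior on r_core/R
    predicted = forward_mass(radius_obs, prior)
    likelihood = np.exp(-0.5 * ((predicted - mass_obs) / sigma_mass) ** 2)
    keep = rng.uniform(0.0, 1.0, n) < likelihood
    return prior[keep]

samples = invert_core_fraction(1.0 * M_EARTH, 1.0 * R_EARTH, 0.05 * M_EARTH)
print(np.percentile(samples, [16, 50, 84]))          # credible interval
```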
Abstract:
A three-level satellite-to-ground monitoring scheme for conservation easement monitoring has been implemented in which high-resolution imagery serves as an intermediate step for inspecting high-priority sites. A digital vertical aerial camera system was developed to fulfill the need for an economical source of imagery for this intermediate step. A method for attaching the camera system to small aircraft was designed, and the camera system was calibrated and tested. To ensure that the images obtained were of suitable quality for use in Level 2 inspections, rectified imagery was required to provide positional accuracy of 5 meters or less, comparable to current commercially available high-resolution satellite imagery. Focal length calibration was performed to determine the infinity focal length at two lens settings (24 mm and 35 mm) with a precision of 0.1 mm. A known focal length is required for the creation of navigation points representing locations to be photographed (waypoints). Photographing an object of known size at distances on a test range allowed estimates of focal lengths of 25.1 mm and 35.4 mm for the 24 mm and 35 mm lens settings, respectively. Constants required for distortion removal procedures were obtained using analytical plumb-line calibration procedures for both lens settings, with mild distortion at the 24 mm setting and virtually no distortion found at the 35 mm setting. The system was designed to operate in a series of stages: mission planning, mission execution, and post-mission processing. During mission planning, waypoints are created using custom tools in geographic information system (GIS) software. During mission execution, the camera is connected to a laptop computer with a global positioning system (GPS) receiver attached. Customized mobile GIS software accepts position information from the GPS receiver, provides information for navigation, and automatically triggers the camera upon reaching the desired location. Post-mission processing (rectification) of imagery for removal of lens distortion effects, correction of imagery for horizontal displacement due to terrain variations (relief displacement), and relating the images to ground coordinates were performed with no more than a second-order polynomial warping function. Accuracy testing was performed to verify the positional accuracy capabilities of the system in an ideal-case scenario as well as a real-world case. Using many well-distributed and highly accurate control points on flat terrain, the rectified images yielded a median positional accuracy of 0.3 meters. Imagery captured over commercial forestland with varying terrain in eastern Maine, rectified to digital orthophoto quadrangles, yielded median positional accuracies of 2.3 meters, with accuracies of 3.1 meters or better in 75 percent of measurements made. These accuracies were well within performance requirements. The images from the digital camera system are of high quality, displaying significant detail at common flying heights. At common flying heights the ground resolution of the camera system ranges between 0.07 meters and 0.67 meters per pixel, satisfying the requirement that the imagery be of comparable resolution to current high-resolution satellite imagery. Due to the high resolution of the imagery, the positional accuracy attainable, and the convenience with which it is operated, the digital aerial camera system developed is a potentially cost-effective solution for use in the intermediate step of a satellite-to-ground conservation easement monitoring scheme.
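The rectification and accuracy-testing steps described above can be sketched as follows: a second-order polynomial warp is fitted to control points by least squares, and the median positional error is reported. This is not the thesis software, and the control point values are invented.

```python
# Minimal sketch: fit a second-order polynomial warp from image (x, y) to
# ground (X, Y) coordinates with least squares, then report the median
# positional error on control points, as in the accuracy tests above.
import numpy as np

def design(xy):
    """Second-order polynomial terms for each (x, y) point."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_poly2(image_xy, ground_xy):
    """Least-squares coefficients mapping image coords to ground coords."""
    coeffs, *_ = np.linalg.lstsq(design(image_xy), ground_xy, rcond=None)
    return coeffs                      # shape (6, 2): one column per ground axis

def median_error(coeffs, image_xy, ground_xy):
    pred = design(image_xy) @ coeffs
    return float(np.median(np.hypot(*(pred - ground_xy).T)))

img = np.random.default_rng(1).uniform(0, 3000, (20, 2))     # pixel coords
gnd = 0.25 * img + 500_000 + np.random.default_rng(2).normal(0, 0.3, (20, 2))
c = fit_poly2(img, gnd)
print(median_error(c, img, gnd))       # metres, toy numbers
```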
Abstract:
The Radarsat-1 Antarctic Mapping Project (RAMP) compiled a mosaic of Antarctica and the adjacent ocean zone from more than 3000 high-resolution Synthetic Aperture Radar (SAR) images acquired in September and October 1997. The mosaic, with a pixel size of 100 m, was used to determine iceberg size distributions around Antarctica, combining automated detection with a visual control of all icebergs larger than 5 km² and correction of recognized false detections. For icebergs below 5 km² in size, the numbers of false detections and the accuracies of size retrievals were analyzed for three test sites. Nearly 7000 icebergs with horizontal areas between 0.3 and 4717.7 km² were identified in a near-coastal zone of varying width between 20 and 300 km. The spatial distributions of icebergs around Antarctica were calculated for zonal segments of 20° angular width and related to the types of calving fronts in the respective section. The results reveal that regional variations of the size distributions cannot be neglected. The highest ice mass accumulations were found at the positions of giant icebergs (> 18.5 km) but also in front of ice shelves from which larger numbers of smaller icebergs calve almost continuously. Although the coastal oceanic zone covered by RAMP is too narrow compared to the spatial coverage needed for oceanographic research, this study nevertheless demonstrates the usefulness of SAR images for iceberg research and the need for repeated data acquisitions extending oceanwards over distances of 500 km and more from the coast to monitor iceberg melt and disintegration and the related freshwater input into the ocean.
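A minimal sketch of how iceberg size distributions can be binned into 20° zonal segments and size classes, as described above, follows; the iceberg list is toy data and this is not the RAMP processing chain.

```python
# Sketch: bin detected icebergs into 20-degree longitude segments and size
# classes to build regional size distributions. Inputs are toy values
# (longitude in degrees, area in km^2).
import numpy as np

def size_distribution(lon_deg, area_km2, size_bins):
    """Counts per (20-degree zonal segment, size class)."""
    seg_edges = np.arange(-180, 181, 20)
    counts, _, _ = np.histogram2d(lon_deg, area_km2, bins=[seg_edges, size_bins])
    return counts, seg_edges

lon = np.array([10.5, 12.0, -65.3, 150.1, 151.0])
area = np.array([0.4, 3.2, 120.0, 0.8, 4717.7])
size_bins = np.array([0.3, 1, 5, 100, 5000])
counts, seg_edges = size_distribution(lon, area, size_bins)
print(counts.shape)   # (18 zonal segments, 4 size classes)
```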
Abstract:
Wildfires are part of the Mediterranean ecosystem; in Israel, however, all wildfires are human-caused, either intentionally or unintentionally. In this study we aimed to develop and test a new method for mapping fire scars from MODIS imagery, to examine the temporal and spatial patterns of wildfires in Israel in the 2000s, and to examine the factors controlling Israel's wildfire regime. To map the fires we used two 'off-the-shelf' MODIS fire products as our basis (the 1 km MODIS Collection 5 fire hotspots and the 500 m MCD45A1 burnt areas), and we created a new set of fire scar maps from the 250 m MOD13Q1 product. We carried out a cross-comparison of the three MODIS-based wildfire scar maps and evaluated them independently against the wildfire scars mapped from 30 m Landsat TM imagery. To examine the factors controlling wildfires we used GIS layers of rainfall and land use, and a Landsat-based national vegetation map. Wildfires occurred in areas where annual rainfall was above 250 mm, mostly in areas with herbaceous vegetation. Wildfire frequency was especially high in the Golan Heights and in the foothills of the Judean mountains, and a high correspondence was found between military training zones and the spatial distribution of fire scars. The use of MODIS satellite images enabled us to map wildfires at a national scale thanks to the high temporal resolution of the sensor. Our MOD13Q1-based mapping of fire scars adequately mapped large (>1 km²) fires with accuracies above 80%. Such large fires account for a large proportion of all fires and pose the greatest threats. This database can aid managers in determining wildfire risks in space and in time.
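A hedged sketch of the kind of pixel-wise map comparison used to evaluate a MODIS-derived fire scar map against Landsat-derived scars is shown below; the rasters are synthetic and the agreement measures are generic, not the study's exact protocol.

```python
# Sketch: compare a MODIS-derived burnt/unburnt map against a Landsat TM
# reference map pixel by pixel. Inputs are toy boolean rasters.
import numpy as np

def map_agreement(modis_burnt: np.ndarray, landsat_burnt: np.ndarray):
    """Overall accuracy plus omission/commission rates for the burnt class."""
    tp = np.sum(modis_burnt & landsat_burnt)
    tn = np.sum(~modis_burnt & ~landsat_burnt)
    fp = np.sum(modis_burnt & ~landsat_burnt)
    fn = np.sum(~modis_burnt & landsat_burnt)
    return {
        "overall_accuracy": (tp + tn) / modis_burnt.size,
        "omission_rate": fn / max(tp + fn, 1),     # burnt pixels missed
        "commission_rate": fp / max(tp + fp, 1),   # falsely mapped burnt pixels
    }

rng = np.random.default_rng(0)
ref = rng.random((100, 100)) < 0.1                 # reference burnt mask
mod = ref ^ (rng.random((100, 100)) < 0.02)        # reference with 2% label noise
print(map_agreement(mod, ref))
```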
Abstract:
In recent years, profiling floats, which form the basis of the successful international Argo observatory, have also been considered as platforms for marine biogeochemical research. This study showcases the utility of floats as a novel tool for combined gas measurements of CO2 partial pressure (pCO2) and O2. These float prototypes were equipped with a small-sized, submersible pCO2 sensor and an optode O2 sensor for high-resolution measurements in the surface ocean layer. Four consecutive deployments were carried out between November 2010 and June 2011 near the Cape Verde Ocean Observatory (CVOO) in the eastern tropical North Atlantic. The profiling float performed upcasts every 31 h while measuring pCO2, O2, salinity, temperature, and hydrostatic pressure in the upper 200 m of the water column. To maintain accuracy, regular pCO2 sensor zeroings at depth and at the surface, as well as optode measurements in air, were performed for each profile. Through the application of data processing procedures (e.g., time-lag correction), the accuracies of float-borne pCO2 measurements were greatly improved (10-15 µatm for the water column and 5 µatm for surface measurements). O2 measurements yielded an accuracy of 2 µmol/kg. The first results of this pilot study show the possibility of using profiling floats as a platform for detailed and unattended observations of marine carbon and oxygen cycle dynamics.
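The time-lag correction mentioned above can be illustrated with a generic first-order response-time correction of the kind commonly applied to slow gas sensors; the time constant, sampling interval, and profile data below are assumptions, not the deployed processing chain.

```python
# Generic sketch: if the sensor behaves as a first-order system with time
# constant tau, the ambient signal can be approximated from the measured
# signal and its time derivative.
import numpy as np

def lag_correct(t_s: np.ndarray, measured: np.ndarray, tau_s: float) -> np.ndarray:
    """First-order inverse filter: x_ambient ~ x_measured + tau * dx/dt."""
    return measured + tau_s * np.gradient(measured, t_s)

t = np.arange(0.0, 600.0, 10.0)                 # 10 s sampling during an upcast
ambient = 380.0 + 30.0 * (t > 300.0)            # step change in pCO2 (uatm), toy
tau = 60.0                                      # assumed sensor time constant (s)

# Simulate the sluggish sensor response, then undo the lag.
meas = np.empty_like(ambient)
meas[0] = ambient[0]
for i in range(1, len(t)):
    meas[i] = meas[i - 1] + (ambient[i] - meas[i - 1]) * (10.0 / tau)
print(np.abs(lag_correct(t, meas, tau) - ambient).mean())
```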
Abstract:
Providing accurate maps of coral reefs where the spatial scale and labels of the mapped features correspond to map units appropriate for examining biological and geomorphic structures and processes is a major challenge for remote sensing. The objective of this work is to assess the accuracy and relevance of the process used to derive geomorphic zone and benthic community zone maps for three western Pacific coral reefs produced from multi-scale, object-based image analysis (OBIA) of high-spatial-resolution multi-spectral images, guided by field survey data. Three Quickbird-2 multi-spectral data sets from reefs in Australia, Palau and Fiji and georeferenced field photographs were used in a multi-scale segmentation and object-based image classification to map geomorphic zones and benthic community zones. A per-pixel approach was also tested for mapping benthic community zones. Validation of the maps and comparison to past approaches indicated the multi-scale OBIA process enabled field data, operator field experience and a conceptual hierarchical model of the coral reef environment to be linked to provide output maps at geomorphic zone and benthic community scales on coral reefs. The OBIA mapping accuracies were comparable with previously published work using other methods; however, the classes mapped were matched to a predetermined set of features on the reef.
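A simplified sketch contrasting the per-pixel and object-based labelling steps mentioned above: pixels are classified individually and each image object is then assigned the majority class of its pixels. The segment and class rasters are toy arrays; no multi-scale segmentation or field data integration is shown.

```python
# Simplified object-based step: relabel every segment (image object) with the
# most frequent per-pixel class inside it.
import numpy as np

def object_majority(per_pixel_class: np.ndarray, segments: np.ndarray) -> np.ndarray:
    """Assign each segment the majority class of its pixels."""
    out = np.empty_like(per_pixel_class)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        classes, counts = np.unique(per_pixel_class[mask], return_counts=True)
        out[mask] = classes[np.argmax(counts)]
    return out

segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1]])
per_pixel = np.array([[2, 3, 5, 5],      # 2 = coral, 3 = algae, 5 = sand (toy codes)
                      [2, 2, 5, 4]])
print(object_majority(per_pixel, segments))
```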
Abstract:
The electroencephalogram (EEG) signal is one of the most widely used signals in the biomedical field because of the rich information it carries about human tasks. This study describes a new approach based on (i) building reference models from a set of time series through the analysis of the events they contain, which is suitable for domains where the relevant information is concentrated in specific regions of the time series, known as events; to deal with them, each event is characterized by a set of attributes; and (ii) applying the discrete wavelet transform to the EEG data in order to extract temporal information in the form of changes in the frequency domain over time, that is, to extract non-stationary signals embedded in the noisy background of the human brain. The performance of the model was evaluated in terms of training performance and classification accuracy, and the results confirmed that the proposed scheme has potential for classifying EEG signals.
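A hedged sketch of the kind of pipeline the abstract describes follows: discrete wavelet decomposition of EEG epochs, simple statistics of the sub-band coefficients as features, and a standard classifier evaluated by cross-validated accuracy. The wavelet choice, decomposition level, features, classifier, and synthetic data are all assumptions, not the study's configuration.

```python
# Sketch: DWT sub-band statistics as features for EEG epoch classification.
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def dwt_features(epoch: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """Mean absolute value and standard deviation of each sub-band's coefficients."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    return np.array([stat(c) for c in coeffs
                     for stat in (lambda v: np.mean(np.abs(v)), np.std)])

rng = np.random.default_rng(0)
# Synthetic two-class data: class 1 epochs carry extra 10 Hz power.
t = np.arange(256) / 128.0
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        epoch = rng.normal(size=t.size) + label * 2.0 * np.sin(2 * np.pi * 10 * t)
        X.append(dwt_features(epoch))
        y.append(label)
print(cross_val_score(SVC(), np.array(X), np.array(y), cv=5).mean())
```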
Abstract:
This dissertation, whose research has been conducted at the Group of Electronic and Microelectronic Design (GDEM) within the framework of the project Power Consumption Control in Multimedia Terminals (PCCMUTE), focuses on the development of an energy estimation model for a battery-powered embedded processor board. The main objectives and contributions of the work are summarized as follows. A model is proposed to obtain accurate energy estimation results based on the linear correlation between performance monitoring counters (PMCs) and energy consumption. Given the uniqueness of the appropriate PMCs for each system, the modeling methodology is improved to obtain stable accuracies with slight variations among multiple scenarios and to be repeatable on other systems. It includes two steps: first, a PMC filter to identify the most appropriate set among the available PMCs of a system, and second, k-fold cross-validation to avoid bias during the model training stage. The methodology is implemented on a commercial embedded board running the 2.6.34 Linux kernel and PAPI, a cross-platform interface to configure and access PMCs. The results show that the methodology is able to maintain good stability in different scenarios and provides robust estimation results, with an average relative error of less than 5%.
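An illustrative sketch (not the GDEM code) of the two-step methodology summarised above: a PMC filter that keeps the counters best correlated with measured energy, followed by a linear model validated with k-fold cross-validation. The counter names and measurements are synthetic.

```python
# Sketch: counter selection + linear energy model, validated with k-fold CV.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
pmc_names = ["INSTRUCTIONS", "L2_MISSES", "BRANCH_MISSES", "BUS_ACCESSES", "CYCLES"]
pmcs = rng.uniform(1e6, 1e9, size=(60, len(pmc_names)))     # counter readings
energy_j = 2e-9 * pmcs[:, 0] + 5e-7 * pmcs[:, 1] + rng.normal(0, 0.2, 60)

# Step 1: "PMC filter" -- keep the counters most correlated with energy.
# Step 2: linear model validated with k-fold CV to avoid training bias.
model = make_pipeline(SelectKBest(f_regression, k=2), LinearRegression())
scores = cross_val_score(model, pmcs, energy_j,
                         cv=KFold(n_splits=5, shuffle=True, random_state=1),
                         scoring="neg_mean_absolute_percentage_error")
print(f"mean relative error: {-scores.mean():.2%}")
```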
Abstract:
This work aims to present the main differences in nuclear data uncertainties among three nuclear data libraries: EAF-2007, EAF-2010 and SCALE-6.0, under different neutron spectra: LWR, ADS and DEMO (fusion). To take the neutron spectrum into account, the uncertainty data are collapsed to one group. This is a simple way to see the differences among libraries for one application, and the effect of the neutron spectrum on different applications can also be observed. These comparisons are presented only for the (n,fission), (n,gamma) and (n,p) reactions, for the main transuranic isotopes (234,235,236,238U, 237Np, 238,239,240,241Pu, 241,242m,243Am, 242,243,244,245,246,247,248Cm, 249Bk, 249,250,251,252Cf). General comparisons among libraries, taking into account all included isotopes, are also presented. In other works, target accuracies for nuclear data uncertainties are presented; here, these targets are compared with the uncertainties in the above libraries. The main results of these comparisons are that EAF-2010 has reduced its uncertainties for many isotopes relative to EAF-2007 for (n,gamma) and (n,fission) but not for (n,p); SCALE-6.0 gives lower uncertainties for (n,fission) reactions for ADS and PWR applications, but higher uncertainties for (n,p) reactions in all applications. For the (n,gamma) reaction, the number of isotopes with higher uncertainties is quite similar to the number with lower uncertainties when SCALE-6.0 and EAF-2010 are compared. When the effect of the neutron spectra is analysed, the ADS neutron spectrum yields the highest uncertainties for the (n,gamma) and (n,fission) reactions in all libraries.
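A sketch of the one-group collapse referred to above: a multigroup uncertainty is weighted by the application's neutron spectrum so that a single value per reaction and library can be compared. The group structure, spectrum, and uncertainty values are invented for illustration and the weighting is a simplification of the actual collapse procedure.

```python
# Sketch: spectrum-weighted one-group collapse of multigroup uncertainties.
import numpy as np

def collapse_one_group(spectrum: np.ndarray, group_values: np.ndarray) -> float:
    """Spectrum-weighted average over energy groups."""
    weights = spectrum / spectrum.sum()
    return float(np.sum(weights * group_values))

# Toy 5-group example: relative (n,gamma) uncertainty per group for two libraries.
ads_spectrum = np.array([0.05, 0.15, 0.30, 0.35, 0.15])   # fast-dominated flux, toy
unc_eaf2010 = np.array([0.08, 0.10, 0.15, 0.25, 0.40])
unc_scale60 = np.array([0.06, 0.12, 0.18, 0.20, 0.35])
for name, unc in (("EAF-2010", unc_eaf2010), ("SCALE-6.0", unc_scale60)):
    print(name, collapse_one_group(ads_spectrum, unc))
```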