996 results for Global errors
Abstract:
We discuss a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors are first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from (global) static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be checked statically are translated into run-time tests. The framework makes the use of assertions optional. It also allows assertions to use very general properties, beyond the predefined set understood by the static analyzer, including properties defined by user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis.
Abstract:
We present a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors are first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be verified statically are translated into run-time tests. The framework makes the use of assertions optional. It also allows assertions to use very general properties, beyond the predefined set understood by the static analyzer, including properties defined by means of user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis. In practice, this modularity makes it possible to statically detect bugs in user programs even if they do not contain any assertions.
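The compile-time/run-time split described in the two abstracts above can be illustrated with a minimal, language-neutral sketch: properties the analyzer can prove are discharged at compile time, and whatever remains is wrapped as a run-time test. The Python names below (analyzer_entails, compile_checks, guarded) are purely illustrative assumptions and do not correspond to the actual Ciao/CLP tool's API.

```python
# Minimal sketch of splitting assertion checking between compile time and run time.
# All names are illustrative; they are not the actual tool's API.

def analyzer_entails(static_facts, prop):
    """Stand-in for static analysis: a property is 'proven' if the analyzer derived it."""
    return prop in static_facts

def compile_checks(assertions, static_facts):
    """Discharge statically whatever the analyzer proves; keep the rest as run-time tests."""
    proven, runtime = [], []
    for prop, test in assertions:
        (proven if analyzer_entails(static_facts, prop) else runtime).append((prop, test))
    return proven, runtime

def guarded(func, runtime_checks):
    """Wrap a function so the unproven assertion parts are checked at run time."""
    def wrapper(*args):
        for prop, test in runtime_checks:
            if not test(*args):
                raise AssertionError(f"run-time check failed: {prop}")
        return func(*args)
    return wrapper

# Example: 'x is an integer' is proven statically, 'x >= 0' is left as a run-time test.
static_facts = {"x is int"}
assertions = [("x is int", lambda x: isinstance(x, int)),
              ("x >= 0", lambda x: x >= 0)]
proven, runtime = compile_checks(assertions, static_facts)
isqrt_checked = guarded(lambda x: int(x ** 0.5), runtime)
print("proven at compile time:", [p for p, _ in proven])
print("isqrt_checked(9) =", isqrt_checked(9))
```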
Abstract:
A minimal hypothesis is proposed concerning the brain processes underlying effortful tasks. It distinguishes two main computational spaces: a unique global workspace composed of distributed and heavily interconnected neurons with long-range axons, and a set of specialized and modular perceptual, motor, memory, evaluative, and attentional processors. Workspace neurons are mobilized in effortful tasks for which the specialized processors do not suffice. They selectively mobilize or suppress, through descending connections, the contribution of specific processor neurons. In the course of task performance, workspace neurons become spontaneously coactivated, forming discrete though variable spatio-temporal patterns subject to modulation by vigilance signals and to selection by reward signals. A computer simulation of the Stroop task shows that workspace activation increases during the acquisition of a novel task, during effortful execution, and after errors. We outline predictions for spatio-temporal activation patterns during brain imaging, particularly concerning the contribution of the dorsolateral prefrontal cortex and anterior cingulate to the workspace.
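As a purely illustrative companion to the workspace/processor architecture sketched above (and emphatically not the authors' actual simulation), the toy Python model below shows the gating idea: a strong, over-learned pathway competes with a weaker, task-relevant pathway, and only a top-down workspace gain lets the weaker pathway win on an incongruent Stroop-like trial. The pathway weights and gain values are assumptions.

```python
def respond(word_input, colour_input, workspace_gain):
    """Two specialized 'processors' compete; workspace gain amplifies the weaker, task-relevant one."""
    word_strength = 1.0 * word_input                              # over-learned word-reading pathway
    colour_strength = 0.4 * colour_input * (1.0 + workspace_gain) # colour-naming pathway, gated top-down
    return "colour" if colour_strength > word_strength else "word"

# Incongruent trial: without workspace involvement the prepotent word response wins (an error);
# with sufficient top-down amplification (effort), colour naming succeeds.
for gain in (0.0, 2.0):
    print(f"workspace gain {gain}: response driven by the {respond(1.0, 1.0, gain)} pathway")
```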
Abstract:
Background: Refractive error is defined as the inability of the eye to bring parallel rays of light into focus on the retina, resulting in nearsightedness (myopia), farsightedness (hyperopia) or astigmatism. Uncorrected refractive error in children is associated with increased morbidity and reduced educational opportunities. Vision screening (VS) is a method for identifying children with visual impairment or eye conditions likely to lead to visual impairment. Objective: To analyze the utility of vision screening conducted by teachers and to contribute to a better estimation of the prevalence of childhood refractive errors in Apurimac, Peru. Design: A pilot vision screening program in preschool (Group I) and elementary school children (Group II) was conducted with the participation of 26 trained teachers. Children whose visual acuity was < 6/9 [20/30] (Group I) and ≤ 6/9 (Group II) in one or both eyes, measured with the Snellen Tumbling E chart at 6 m, were referred for a comprehensive eye exam. Specificity and positive predictive value to detect refractive error were calculated against clinical examination. Program assessment with participants was conducted to evaluate outcomes and procedures. Results: A total of 364 children aged 3–11 years were screened; 45 children were examined at the Centro Oftalmológico Monseñor Enrique Pelach (COMEP) Eye Hospital. Prevalence of refractive error was 6.2% (Group I) and 6.9% (Group II); specificity of teacher vision screening was 95.8% and 93.0%, while positive predictive value was 59.1% and 47.8% for each group, respectively. Aspects highlighted to improve the program included extending training, increasing parental involvement, and helping referred children to attend the hospital. Conclusion: Prevalence of refractive error in children is significant in the region. Vision screening performed by trained teachers is a valid intervention for early detection of refractive error, including screening of preschool children. Program sustainability and improvements in education and quality of life resulting from childhood vision screening require further research.
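Specificity and positive predictive value, as used in the screening evaluation above, follow directly from a 2x2 confusion matrix against the clinical exam. The short sketch below recomputes them for hypothetical counts (the abstract does not report the individual cells), so the numbers are assumptions rather than the study's data.

```python
# Hypothetical confusion-matrix cells for teacher screening vs. clinical exam (not study data).
tp, fp = 13, 9      # referred children with / without true refractive error
tn, fn = 320, 3     # non-referred children without / with true refractive error

specificity = tn / (tn + fp)    # proportion of children without refractive error who pass screening
ppv = tp / (tp + fp)            # proportion of referred children who truly have refractive error
print(f"specificity = {specificity:.1%}, positive predictive value = {ppv:.1%}")
```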
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
The relationship between spot volume and variation for all protein spots observed on large-format 2D gels, using silver-stain technology and a model system based on mammalian NSO cell extracts, is reported. By running multiple gels we have shown that the reproducibility of data generated in this way depends on individual protein spot volumes, which in turn are directly correlated with the coefficient of variation. The coefficients of variation across all observed protein spots were highest for low-abundance proteins, which are the primary contributors to process error, and lowest for more abundant proteins. Using the relationship between spot volume and coefficient of variation, we show that it is necessary to calculate variation for individual protein spot volumes. The inherent limitations of silver staining therefore mean that errors in individual protein spot volumes, rather than a single global error, must be considered when assessing significant changes in protein spot volume.
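The spot-volume/variation relationship described above reduces to computing a coefficient of variation (standard deviation over mean) per protein spot across replicate gels. The sketch below uses synthetic spot volumes, with low-abundance spots given larger multiplicative noise, purely to illustrate the calculation; none of the numbers come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spot volumes: rows = replicate gels, columns = spots from low to high abundance.
true_volumes = np.array([50.0, 500.0, 5000.0])
volumes = true_volumes * rng.lognormal(0.0, [0.4, 0.2, 0.05], size=(6, 3))

mean_vol = volumes.mean(axis=0)
cv = volumes.std(axis=0, ddof=1) / mean_vol     # per-spot coefficient of variation
for m, c in zip(mean_vol, cv):
    print(f"mean spot volume {m:8.1f}  ->  CV {c:.2f}")
```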
Abstract:
The spill-over of the global financial crisis has uncovered the weaknesses in the governance of the EMU. As one of the most open economies in Europe, Hungary has suffered from the ups and downs of the global and European crisis and its mismanagement. Domestic policy blunders have complicated the situation. This paper examines how Hungary has withstood the ups and downs of the eurozone crisis. It also addresses the questions of whether the country has converged with or diverged from the EMU membership, whether joining the EMU is still a good idea for Hungary, and whether the measures to ward off the crisis have actually helped to face the challenge of growth.
Abstract:
The Last Interglacial (LIG, 129-116 thousand years before present, ka) represents a test bed for climate model feedbacks in warmer-than-present high-latitude regions. However, mainly because aligning different palaeoclimatic archives from different parts of the world is not trivial, a spatio-temporal picture of LIG temperature changes is difficult to obtain. Here, we have selected 47 polar ice core and sub-polar marine sediment records and developed a strategy to align them onto the recent AICC2012 ice core chronology. We provide the first compilation of high-latitude temperature changes across the LIG associated with a coherent temporal framework built between ice core and marine sediment records. Our new data synthesis highlights non-synchronous maximum temperature changes between the two hemispheres, with the Southern Ocean and Antarctic records showing an early warming compared to North Atlantic records. We also observe that warmer-than-present-day conditions occur for a longer time period in southern high latitudes than in northern high latitudes. Finally, the amplitude of temperature changes at high northern latitudes is larger than that of high southern latitude temperature changes recorded at the onset and the demise of the LIG. We have also compiled four data-based time slices with temperature anomalies (compared to present-day conditions) at 115 ka, 120 ka, 125 ka and 130 ka and quantitatively estimated temperature uncertainties that include relative dating errors. This provides an improved benchmark for performing more robust model-data comparisons. The surface temperature simulated by two General Circulation Models (CCSM3 and HadCM3) for 130 ka and 125 ka is compared to the corresponding time slice data synthesis. This comparison shows that the models predict warmer-than-present conditions earlier than documented in the North Atlantic, while neither model is able to produce the reconstructed early Southern Ocean and Antarctic warming. Our results highlight the importance of producing a sequence of time slices rather than a single time slice averaging the LIG climate conditions.
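The time-slice construction described above amounts to reading each aligned record at 115, 120, 125 and 130 ka and attaching an uncertainty that includes the relative dating error. The Python sketch below does this for a single synthetic record by Monte Carlo shifting of its age scale; the record shape and the 1.5 ka dating error are assumptions for illustration, not values from the compilation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic temperature-anomaly record on its own age scale (ka); the shape is invented.
ages = np.linspace(110.0, 135.0, 120)
temp_anom = 2.5 * np.exp(-((ages - 127.0) / 4.0) ** 2)   # warm peak near 127 ka

slices = np.array([115.0, 120.0, 125.0, 130.0])
dating_error_ka = 1.5                                     # assumed 1-sigma relative dating error

# Monte Carlo: shift the whole age scale within its dating uncertainty and re-read each slice.
samples = np.array([np.interp(slices, ages + rng.normal(0.0, dating_error_ka), temp_anom)
                    for _ in range(2000)])

for t, m, s in zip(slices, samples.mean(axis=0), samples.std(axis=0)):
    print(f"{t:.0f} ka: anomaly {m:+.2f} K ± {s:.2f} K (dating contribution only)")
```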
Abstract:
In this thesis, research on tsunami remote sensing using the Global Navigation Satellite System-Reflectometry (GNSS-R) delay-Doppler maps (DDMs) is presented.

Firstly, a process for simulating GNSS-R DDMs of a tsunami-dominated sea surface is described. In this method, the bistatic scattering Zavorotny-Voronovich (Z-V) model, the sea surface mean square slope model of Cox and Munk, and the tsunami-induced wind perturbation model are employed. The feasibility of the Cox and Munk model under a tsunami scenario is examined by comparing the Cox and Munk model-based scattering coefficient with the Jason-1 measurement. A good consistency between these two results is obtained, with a correlation coefficient of 0.93. After confirming the applicability of the Cox and Munk model for a tsunami-dominated sea, this work provides simulations of the scattering coefficient distribution and the corresponding DDMs of a fixed region of interest before and during the tsunami. Furthermore, by subtracting the simulation results that are free of tsunami from those with presence of tsunami, the tsunami-induced variations in scattering coefficients and DDMs can be clearly observed.

Secondly, a scheme to detect tsunamis and estimate tsunami parameters from such tsunami-dominant sea surface DDMs is developed. As a first step, a procedure to determine tsunami-induced sea surface height anomalies (SSHAs) from DDMs is demonstrated and a tsunami detection precept is proposed. Subsequently, the tsunami parameters (wave amplitude, direction and speed of propagation, wavelength, and the tsunami source location) are estimated based upon the detected tsunami-induced SSHAs. In application, the sea surface scattering coefficients are unambiguously retrieved by employing the spatial integration approach (SIA) and the dual-antenna technique. Next, the effective wind speed distribution can be restored from the scattering coefficients. Assuming all DDMs are of a tsunami-dominated sea surface, the tsunami-induced SSHAs can be derived with knowledge of the background wind speed distribution. In addition, the SSHA distribution resulting from the tsunami-free DDM (which is supposed to be zero) is considered as an error map introduced during the overall retrieval stage and is utilized to prevent such errors from influencing subsequent SSHA results. In particular, a tsunami detection procedure is conducted to judge whether the SSHAs are truly tsunami-induced or not through a fitting process, which makes it possible to decrease the false alarm rate. After this step, tsunami parameter estimation proceeds based upon the fitted results of the foregoing tsunami detection procedure. Moreover, an additional method is proposed for estimating tsunami propagation velocity and is believed to be more desirable in real-world scenarios.

The above-mentioned tsunami-dominated sea surface DDM simulation, tsunami detection precept and parameter estimation have been tested with simulated data based on the 2004 Sumatra-Andaman tsunami event.
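Two of the building blocks mentioned above are easy to sketch: the Cox and Munk clean-surface mean square slope as a function of wind speed, and the "subtract the tsunami-free simulation" step that isolates tsunami-induced variations. The wind fields and the Gaussian-shaped perturbation below are invented for illustration; only the Cox and Munk slope relation is taken from the literature, and even that is quoted approximately.

```python
import numpy as np

def cox_munk_mss(wind_speed_ms):
    """Approximate total mean square slope of a clean sea surface (Cox & Munk): 0.003 + 5.12e-3 * U."""
    return 0.003 + 5.12e-3 * wind_speed_ms

# Invented background wind and an assumed tsunami-induced perturbation over a small region of interest.
background_wind = np.full((50, 50), 7.0)                              # m/s
perturbation = 0.6 * np.exp(-((np.arange(50) - 25.0) / 8.0) ** 2)
tsunami_wind = background_wind + perturbation[None, :]

# Differencing the 'with tsunami' and 'tsunami-free' cases isolates the tsunami-induced change.
delta_mss = cox_munk_mss(tsunami_wind) - cox_munk_mss(background_wind)
print(f"max tsunami-induced change in mean square slope: {delta_mss.max():.2e}")
```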
Abstract:
Despite its importance in the global climate system, age-calibrated marine geologic records reflecting the evolution of glacial cycles through the Pleistocene are largely absent from the central Arctic Ocean. This is especially true for sediments older than 200 ka. Three sites cored during the Integrated Ocean Drilling Program's Expedition 302, the Arctic Coring Expedition (ACEX), provide a 27 m continuous sedimentary section from the Lomonosov Ridge in the central Arctic Ocean. Two key biostratigraphic datums and constraints from the magnetic inclination data are used to anchor the chronology of these sediments back to the base of the Cobb Mountain subchron (1215 ka). Beyond 1215 ka, two best-fitting geomagnetic models are used to investigate the nature of cyclostratigraphic change. Within this chronology we show that bulk and mineral magnetic properties of the sediments vary on predicted Milankovitch frequencies. These cyclic variations record "glacial" and "interglacial" modes of sediment deposition on the Lomonosov Ridge, as evident in studies of ice-rafted debris and stable isotopic and faunal assemblages for the last two glacial cycles, and were used to tune the age model. Potential errors, which largely arise from uncertainties in the nature of downhole paleomagnetic variability and in the choice of a tuning target, are handled by defining an error envelope that is based on the best-fitting cyclostratigraphic and geomagnetic solutions.
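Checking that a tuned record "varies on predicted Milankovitch frequencies", as stated above, is essentially a spectral-analysis step. The sketch below builds a synthetic magnetic-property series with 41 kyr and 100 kyr components and recovers the dominant periods from its periodogram; the series is invented and is not ACEX data.

```python
import numpy as np

rng = np.random.default_rng(2)

dt = 2.0                                          # sampling interval in kyr
t = np.arange(0.0, 1200.0, dt)
series = (np.sin(2 * np.pi * t / 41.0)            # obliquity-like component
          + 0.7 * np.sin(2 * np.pi * t / 100.0)   # eccentricity-like component
          + rng.normal(0.0, 0.3, t.size))

freqs = np.fft.rfftfreq(t.size, d=dt)             # cycles per kyr
power = np.abs(np.fft.rfft(series - series.mean())) ** 2

# The strongest periods should fall in the Milankovitch bands if the chronology is adequate.
for i in np.argsort(power)[::-1][:3]:
    if freqs[i] > 0:
        print(f"period ≈ {1.0 / freqs[i]:.0f} kyr, relative power {power[i] / power.max():.2f}")
```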
Abstract:
The In Situ Analysis System (ISAS) was developed to produce gridded fields of temperature and salinity that preserve as much as possible the time and space sampling capabilities of the Argo network of profiling floats. Since the first global re-analysis performed in 2009, the system has evolved, and a careful delayed-mode processing of the 2002-2012 dataset has been carried out using version 6 of ISAS and updating the statistics to produce the ISAS13 analysis. This last version is now implemented as the operational analysis tool at the Coriolis data centre. The robustness of the results with respect to the system evolution is explored through global quantities of climatological interest: the Ocean Heat Content and the Steric Height. Estimates of errors consistent with the methodology are computed. This study shows that building reliable statistics on the fields is fundamental to improving the monthly estimates and to determining the absolute error bars. The new mean fields and variances deduced from the ISAS13 re-analysis and dataset show significant changes relative to the previous ISAS estimates, in particular in the Southern Ocean, justifying the iterative procedure. During the decade covered by Argo, the intermediate waters appear warmer and saltier in the North Atlantic and fresher in the Southern Ocean than in the WOA05 long-term mean. At inter-annual scale, the impact of ENSO on the Ocean Heat Content and Steric Height is observed during the 2006-2007 and 2009-2010 events captured by the network.
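The two climatological quantities used above for the robustness check, Ocean Heat Content and (thermo)steric height, can be written down in a few lines. The profile, reference values and the constant expansion coefficient in the sketch below are assumptions for illustration; ISAS itself works on gridded fields and a full equation of state.

```python
import numpy as np

# Illustrative temperature profile (°C) at a few depths (m); not ISAS output.
depth = np.array([0, 10, 20, 50, 100, 200, 500, 1000, 1500, 2000], dtype=float)
temp = np.array([18.0, 17.8, 17.2, 15.5, 13.0, 10.5, 7.0, 4.5, 3.5, 3.0])

rho0, cp = 1025.0, 3990.0        # reference density (kg/m3) and specific heat capacity (J/kg/K)

# Ocean Heat Content of the 0-2000 m column relative to 0 °C, per unit area (J/m2), trapezoidal rule.
ohc = rho0 * cp * np.sum(0.5 * (temp[1:] + temp[:-1]) * np.diff(depth))
print(f"0-2000 m OHC ≈ {ohc:.3e} J/m2")

# Thermosteric height change for a uniform 0.1 °C warming with a constant expansion coefficient.
alpha = 2.0e-4                   # 1/K, assumed constant (a strong simplification)
print(f"steric height change ≈ {alpha * 0.1 * (depth[-1] - depth[0]) * 100:.2f} cm")
```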
Abstract:
Observing system experiments (OSEs) are carried out over a 1-year period to quantify the impact of Argo observations on the Mercator Ocean 0.25° global ocean analysis and forecasting system. The reference simulation assimilates sea surface temperature (SST), SSALTO/DUACS (Segment Sol multi-missions dALTimetrie, d'orbitographie et de localisation précise/Data unification and Altimeter combination system) altimeter data, and Argo and other in situ observations from the Coriolis data center. Two other simulations are carried out in which all Argo data and half of the Argo data, respectively, are withheld. Assimilating Argo observations has a significant impact on analyzed and forecast temperature and salinity fields at different depths. Without Argo data assimilation, large errors occur in analyzed fields, as estimated from the differences when compared with in situ observations. For example, in the 0–300 m layer, RMS (root mean square) differences between analyzed fields and observations reach 0.25 psu and 1.25 °C in the western boundary currents and 0.1 psu and 0.75 °C in the open ocean. The impact of the Argo data in reducing observation–model forecast differences is also significant from the surface down to a depth of 2000 m. Differences between in situ observations and forecast fields are thus reduced by 20 % in the upper layers and by up to 40 % at a depth of 2000 m when Argo data are assimilated. At depth, the most impacted regions in the global ocean are the Mediterranean outflow, the Gulf Stream region and the Labrador Sea. A significant degradation can be observed when only half of the data are assimilated. Therefore, Argo observations matter for constraining the model solution, even for an eddy-permitting model configuration. The impact of the Argo floats' data assimilation on other model variables is briefly assessed: the improvement of the fit to Argo profiles does not globally lead to unphysical corrections to the sea surface temperature and sea surface height. The main conclusion is that the performance of the Mercator Ocean 0.25° global data assimilation system is heavily dependent on the availability of Argo data.
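The impact numbers quoted above (RMS differences against in situ observations, and their percentage reduction when Argo is assimilated) correspond to a simple diagnostic, sketched below on synthetic innovations; the error levels are invented and chosen only so the script runs end to end.

```python
import numpy as np

def rms_diff(model, obs):
    """Root-mean-square difference between model fields and co-located observations."""
    return np.sqrt(np.mean((model - obs) ** 2))

rng = np.random.default_rng(3)
obs = rng.normal(15.0, 1.0, 5000)                          # synthetic 0-300 m temperature observations (°C)
forecast_without_argo = obs + rng.normal(0.0, 0.75, 5000)  # run withholding Argo (assumed error level)
forecast_with_argo = obs + rng.normal(0.0, 0.60, 5000)     # run assimilating Argo (assumed error level)

r0 = rms_diff(forecast_without_argo, obs)
r1 = rms_diff(forecast_with_argo, obs)
print(f"RMS without Argo {r0:.2f} °C, with Argo {r1:.2f} °C, reduction {1 - r1 / r0:.0%}")
```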
Abstract:
Quantifying global patterns of terrestrial nitrogen (N) cycling is central to predicting future patterns of primary productivity, carbon sequestration, nutrient fluxes to aquatic systems, and climate forcing. With limited direct measures of soil N cycling at the global scale, syntheses of the ¹⁵N:¹⁴N ratio of soil organic matter across climate gradients provide key insights into understanding global patterns of N cycling. In synthesizing data from over 6000 soil samples, we show strong global relationships among soil N isotopes, mean annual temperature (MAT), mean annual precipitation (MAP), and the concentrations of organic carbon and clay in soil. In both hot ecosystems and dry ecosystems, soil organic matter was more enriched in ¹⁵N than in corresponding cold ecosystems or wet ecosystems. Below a MAT of 9.8°C, soil δ¹⁵N was invariant with MAT. At the global scale, soil organic C concentrations also declined with increasing MAT and decreasing MAP. After standardizing for variation among mineral soils in soil C and clay concentrations, soil δ¹⁵N showed no consistent trends across global climate and latitudinal gradients. Our analyses could place new constraints on interpretations of patterns of ecosystem N cycling and global budgets of gaseous N loss.
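Soil δ¹⁵N, as used throughout the abstract above, is the per-mil deviation of a sample's ¹⁵N:¹⁴N ratio from that of atmospheric N₂. The short sketch below applies that definition; the sample ratio is an invented example, and the air ratio is the commonly quoted standard value.

```python
R_AIR = 0.0036765          # 15N/14N of atmospheric N2, the conventional standard

def delta_15n(r_sample):
    """delta-15N in per mil relative to atmospheric N2."""
    return (r_sample / R_AIR - 1.0) * 1000.0

# Example: a soil organic matter sample slightly enriched in 15N relative to air.
print(f"δ15N = {delta_15n(0.003695):+.1f} ‰")
```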
Abstract:
In this work, the volatile chromatographic profiles of roasted Arabica coffees, previously analyzed for their sensorial attributes, were explored by principal component analysis. The volatile extraction technique used was solid-phase microextraction. The correlation optimized warping algorithm was used to align the gas chromatographic profiles. Fifty-four compounds were found to be related to the sensorial attributes investigated. The volatiles pyrrole, 1-methyl-pyrrole, cyclopentanone, dihydro-2-methyl-3-furanone, furfural, 2-ethyl-5-methyl-pyrazine, 2-etenyl-n-methyl-pyrazine, and 5-methyl-2-propionyl-furan were important for the differentiation of the coffee beverages according to flavour, cleanliness and overall quality. Two figures of merit, sensitivity and specificity (or selectivity), were used to interpret the sensory attributes studied.
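After alignment, exploring the chromatographic profiles by principal component analysis, as described above, is a standard mean-centre-and-SVD step. The sketch below runs it on a synthetic profile matrix with an artificial difference between two sample groups; it is not the study's data and uses no chemometrics library.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic aligned chromatograms: rows = coffee samples, columns = retention-time points.
profiles = rng.normal(size=(20, 300))
profiles[:10] += np.linspace(0.0, 1.0, 300)       # artificial systematic difference between two groups

# PCA via SVD of the mean-centred matrix.
X = profiles - profiles.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                                    # sample scores on the principal components
explained = s ** 2 / np.sum(s ** 2)

print("variance explained by PC1, PC2:", np.round(explained[:2], 2))
print("PC1 group means:", scores[:10, 0].mean().round(2), "vs", scores[10:, 0].mean().round(2))
```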
Abstract:
Purpose: To establish the prevalence of refractive errors and ocular disorders in preschool and schoolchildren of Ibiporã, Brazil. Methods: A survey of 6- to 12-year-old children from public and private elementary schools was carried out in Ibiporã between 1989 and 1996. Visual acuity measurements were performed by trained teachers using Snellen's chart. Children with visual acuity <0.7 in at least one eye were referred to a complete ophthalmologic examination. Results: 35,936 visual acuity measurements were performed in 13,471 children. 1,966 children (14.59%) were referred to an ophthalmologic examination. Amblyopia was diagnosed in 237 children (1.76%), whereas strabismus was observed in 114 cases (0.84%). Cataract (n=17) (0.12%), chorioretinitis (n=38) (0.28%) and eyelid ptosis (n=6) (0.04%) were also diagnosed. Among the 614 (4.55%) children who were found to have refractive errors, 284 (46.25%) had hyperopia (hyperopia or hyperopic astigmatism), 206 (33.55%) had myopia (myopia or myopic astigmatism) and 124 (20.19%) showed mixed astigmatism. Conclusions: The study determined the local prevalence of amblyopia, refractive errors and eye disorders among preschool and schoolchildren.
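The proportions reported above follow directly from the raw counts in the abstract; the few lines below simply recompute some of them (small rounding differences from the published figures are possible).

```python
# Counts taken from the abstract above.
screened, referred, refractive = 13471, 1966, 614
hyperopia, myopia, mixed = 284, 206, 124

print(f"children referred for examination: {referred / screened:.2%}")
print(f"hyperopia / myopia / mixed astigmatism among refractive errors: "
      f"{hyperopia / refractive:.2%} / {myopia / refractive:.2%} / {mixed / refractive:.2%}")
```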