991 results for spatial error
Abstract:
This paper surveys the literature on the implications of trade liberalisation for intra-national economic geographies. Three results stand out. First, neither urban systems models nor new economic geography models imply a robust prediction for the impact of trade openness on spatial concentration. Whether trade promotes concentration or dispersion depends on subtle modelling choices among which it is impossible to adjudicate a priori. Second, empirical evidence mirrors the theoretical indeterminacy: a majority of cross-country studies find no significant effect of openness on urban concentration or regional inequality. Third, the available models predict that, other things equal, regions with inherently less costly access to foreign markets, such as border or port regions, stand to reap the largest gains from trade liberalisation. This prediction is confirmed by the available evidence. Whether trade liberalisation raises or lowers regional inequality therefore depends on each country's specific geography.
Abstract:
The activation dynamics of hippocampal subregions during spatial learning, and their interplay with neocortical regions, are an important dimension in understanding hippocampal function. Using the [14C]-2-deoxyglucose autoradiographic method, we characterized the metabolic changes occurring in hippocampal subregions of mice learning an eight-arm radial maze task. Autoradiogram densitometry revealed a heterogeneous and evolving pattern of enhanced metabolic activity throughout the hippocampus during the training period and on recall. In the early stages of training, activity was enhanced in the CA1 area from the intermediate portion to the posterior end, as well as in the CA3 area within the intermediate portion of the hippocampus. At later stages, CA1 and CA3 activations spread over the entire longitudinal axis, while dentate gyrus (DG) activation occurred from the anterior to the intermediate zone. Activation of the retrosplenial cortex, but not the amygdala, was also observed during the learning process. On recall, only DG activation was observed, in the same anterior part of the hippocampus. These results suggest a functional segmentation of the hippocampus, with each subregion dynamically but differentially recruited across the acquisition, consolidation, and retrieval processes, in parallel with some neocortical sites.
Abstract:
This study examined the effects of ibotenic acid-induced lesions of the hippocampus, the subiculum, and the hippocampus + subiculum upon the capacity of rats to learn and perform a series of allocentric spatial learning tasks in an open-field water maze. The lesions were made by infusing small volumes of the neurotoxin at a total of 26 (hippocampus) or 20 (subiculum) sites, intended to achieve complete target cell loss but minimal extratarget damage. The regional extent and axon-sparing nature of these lesions were evaluated using both cresyl violet and Fink-Heimer stained sections. The behavioural findings indicated that both the hippocampus and subiculum lesions impaired the initial postoperative acquisition of place navigation but did not prevent eventual learning to levels of performance almost as effective as those of controls. However, overtraining of the hippocampus + subiculum lesioned rats did not result in significant place learning. Qualitative observations of the paths taken to find a hidden escape platform indicated that different strategies were deployed by the hippocampal and subiculum lesioned groups. Subsequent training on a delayed matching-to-place task revealed a deficit in all lesioned groups across a range of sample-choice intervals, but the subiculum lesioned group was less impaired than the group with the hippocampal lesion. Finally, unoperated control rats given both the initial training and overtraining were later given either a hippocampal lesion or sham surgery. The hippocampal lesioned rats were impaired during a subsequent retention/relearning phase. Together, these findings suggest that total hippocampal cell loss may cause a dual deficit: a slower rate of place learning and a separate navigational impairment. The prospect of unravelling dissociable components of allocentric spatial learning is discussed.
Abstract:
The objective of this work was to evaluate the effect of sampling density on the prediction accuracy of soil orders, at high spatial resolution, in a viticultural zone of Serra Gaúcha, Southern Brazil. A digital elevation model (DEM), a cartographic base, a conventional soil map, and the Idrisi software were used. Seven predictor variables were calculated and read, along with soil classes, at randomly distributed points, with sampling densities of 0.5, 1, 1.5, 2, and 4 points per hectare. Data were used to train a decision tree (Gini) and three artificial neural networks: adaptive resonance theory, fuzzy ARTMap; self-organizing map, SOM; and multi-layer perceptron, MLP. Estimated maps were compared with the conventional soil map to calculate omission and commission errors, overall accuracy, and quantity and allocation disagreement. The decision tree was less sensitive to sampling density and had the highest accuracy and consistency. The SOM was the least sensitive and most consistent of the networks. The MLP had a critical minimum and showed high inconsistency, whereas fuzzy ARTMap was the most sensitive and least accurate. Results indicate that the sampling densities used in conventional soil surveys can serve as a reference to predict soil orders in Serra Gaúcha.
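The map-comparison measures named above (omission and commission errors, overall accuracy, and quantity and allocation disagreement) can all be derived from a single confusion matrix. A minimal sketch, assuming predicted classes in rows and reference classes in columns (the function name and layout are illustrative, not taken from the paper):

```python
import numpy as np

def map_agreement(confusion):
    """Accuracy and disagreement measures from a confusion matrix
    (rows = predicted class, columns = reference class), following the
    quantity/allocation decomposition of total disagreement."""
    cm = np.asarray(confusion, dtype=float)
    p = cm / cm.sum()                      # cell proportions
    overall_accuracy = np.trace(p)
    # commission error: share of wrong pixels among those predicted as class i
    commission = 1.0 - np.diag(p) / p.sum(axis=1)
    # omission error: share of wrong pixels among those referenced as class j
    omission = 1.0 - np.diag(p) / p.sum(axis=0)
    # quantity disagreement: mismatch between predicted and reference
    # class proportions, regardless of where pixels sit on the map
    quantity = 0.5 * np.abs(p.sum(axis=1) - p.sum(axis=0)).sum()
    # total disagreement (1 - accuracy) splits into quantity + allocation
    allocation = (1.0 - overall_accuracy) - quantity
    return overall_accuracy, commission, omission, quantity, allocation
```

For a two-class map with 40 and 45 correct pixels and 5 and 10 confusions out of 100, this yields an overall accuracy of 0.85, a quantity disagreement of 0.05, and an allocation disagreement of 0.10.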
Abstract:
This paper presents the general regression neural network (GRNN) as a nonlinear regression method for the interpolation of monthly wind speeds in complex Alpine orography. The GRNN is trained using data from Swiss meteorological networks to learn the statistical relationship between topographic features and wind speed. Terrain convexity, slope, and exposure are considered by extracting features from the digital elevation model at different spatial scales using specialised convolution filters. A database of gridded monthly wind speeds is then constructed by applying the GRNN in prediction mode over the period 1968-2008. This study demonstrates that using topographic features as inputs to the GRNN significantly reduces cross-validation errors with respect to low-dimensional models integrating only geographical coordinates and terrain height for the interpolation of wind speed. The spatial predictability of wind speed is found to be lower in summer than in winter due to more complex and weaker wind-topography relationships. The relevance of these relationships is studied using an adaptive version of the GRNN algorithm, which selects the useful terrain features by eliminating the noisy ones. This research provides a framework for extending low-dimensional interpolation models to high-dimensional spaces by integrating additional features accounting for topographic conditions at multiple spatial scales. Copyright (c) 2012 Royal Meteorological Society.
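A GRNN is essentially Nadaraya-Watson kernel regression: each prediction is a Gaussian-weighted average of the training targets, with a single bandwidth controlling smoothness. A minimal sketch of the prediction mode described above (function name and the scalar bandwidth are illustrative; the study's multi-scale feature pipeline is not reproduced):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """GRNN prediction: Gaussian-kernel-weighted average of the
    training targets, with bandwidth sigma (Nadaraya-Watson form)."""
    # squared Euclidean distances between query and training points
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # kernel weights
    return (w @ y_train) / w.sum(axis=1)   # weighted mean of targets
```

With a small bandwidth the prediction collapses onto the nearest training sample; with a large one it approaches the global mean, which is why bandwidth selection (and, in the adaptive variant, feature selection) drives cross-validation error.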
Abstract:
Orienting attention in space recruits fronto-parietal networks whose damage results in unilateral spatial neglect. Attention orienting may also be governed by emotional and motivational factors, but it remains unknown whether these factors act through a modulation of the fronto-parietal attentional systems or through distinct neural pathways. Here we asked whether attentional orienting is affected by learning about the reward value of targets in a visual search task, in a spatially specific manner, and whether these effects are preserved in right-brain-damaged patients with left spatial neglect. We found that associating rewards with left-sided (but not right-sided) targets during search led to progressive exploration biases towards left space, in both healthy people and neglect patients. Such spatially specific biases occurred even without any conscious awareness of the asymmetric reward contingencies. These results show that reward-induced modulations of space representation are preserved despite a dysfunction of the fronto-parietal networks associated with neglect, and therefore suggest that they may arise through spared subcortical networks acting directly on sensory processing and/or oculomotor circuits. These effects could be usefully exploited to potentiate rehabilitation strategies in neglect patients.
Abstract:
Real-time glycemia is a cornerstone of metabolic research, particularly when performing oral glucose tolerance tests (OGTT) or glucose clamps. From 1965 to 2009, the gold-standard device for real-time plasma glucose assessment was the Beckman glucose analyzer 2 (Beckman Instruments, Fullerton, CA), whose technology couples a glucose oxidase enzymatic assay with oxygen sensors. Since its discontinuation in 2009, researchers are left with few choices that use glucose oxidase technology. The first is the YSI 2300 (Yellow Springs Instruments Corp., Yellow Springs, OH), known to be as accurate as the Beckman [1]. The YSI has been used extensively in clinical research studies and is used to validate other glucose monitoring devices [2]. Its major drawback is that it is relatively slow and requires high maintenance. The Analox GM9 (Analox Instruments, London), more recent and faster, is increasingly used in clinical research [3] as well as in basic science [4] (e.g. 23 papers in Diabetes and 21 in Diabetologia).
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ this method to compute JPEG standard progressive operation mode definition scripts using a quantization approach, so that a trial-and-error procedure is no longer necessary to obtain a desired PSNR and/or definition script, reducing cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship is found between the measured image quality at a given stage of the coding process and a quantization matrix, so the definition script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts can be generated that avoid an excessive number of stages and remove small stages that do not contribute a noticeable image-quality improvement during decoding.
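The PSNR-MSE relationship the method inverts is fixed for 8-bit images, and under the usual high-rate approximation a uniform quantizer of step q contributes an MSE of about q^2/12. A sketch of these two building blocks (the paper's Laplacian source model is more refined; q^2/12 is the simpler uniform-source approximation, assumed here for illustration):

```python
import numpy as np

def psnr_from_mse(mse, peak=255.0):
    """PSNR in dB for 8-bit images: 10*log10(peak^2 / MSE)."""
    return 10.0 * np.log10(peak ** 2 / mse)

def mse_for_target_psnr(psnr_db, peak=255.0):
    """Invert the relation: the MSE budget a target PSNR allows."""
    return peak ** 2 / 10.0 ** (psnr_db / 10.0)

def quant_step_for_mse(mse_budget):
    """High-rate approximation: a uniform quantizer of step q has
    MSE ~ q^2/12, so q = sqrt(12 * MSE) meets a per-coefficient budget."""
    return np.sqrt(12.0 * mse_budget)
```

Distributing the global MSE budget over the 64 DCT coefficients, subject to the standard's per-entry limits and visual weighting, is then what turns a target PSNR into a quantization matrix.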
Abstract:
Understanding how so many different species can coexist in nature is a fundamental and long-standing question in ecology. Community diversity and composition are known to be influenced by heterogeneity in environmental conditions and by disturbance. Although in nature the spatial distribution of environmental conditions is frequently autocorrelated, this aspect is seldom considered in models investigating species coexistence. In this work, we therefore addressed several questions pertaining to species coexistence and composition in spatially autocorrelated environments, using a numerical-simulation approach. To take this spatial aspect into account, we developed a spatially explicit metacommunity model (a set of communities linked by species dispersal). In this model, species are trophically equivalent and compete for a limited number of sites in a heterogeneous environment. Species are characterized by six life-history traits: niche optimum, niche breadth, dispersal ability, competitiveness, reproductive investment, and survival rate.
We were particularly interested in the influence of environmental spatial autocorrelation and disturbance on species diversity and on the traits of the species favoured in the metacommunity. We showed that spatial autocorrelation can have antagonistic effects on diversity depending on the disturbance rate. Similarly, spatial autocorrelation interacted with disturbance rate and survival rate to shape the mean dispersal ability observed in the metacommunity. Our results also revealed that many species with various degrees of specialization (i.e. different niche breadths) can coexist. Specialist species were favoured in the absence of disturbance and when dispersal was unlimited, whereas a high disturbance rate selected for more generalist species associated with low competitive ability. The spatial structure of the environment, together with disturbance and species traits, thus strongly shapes species diversity and, more importantly, species composition. Species composition in turn affects several important metacommunity properties, such as ecosystem functioning and the resistance and reaction to invasion, habitat fragmentation, and climate change. This work provides a better understanding of the mechanisms responsible for species composition, which is crucial for predicting the fate of natural metacommunities in changing environments.
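The specialist-generalist trade-off described above is usually encoded as a niche function mapping local environmental conditions to establishment success. The thesis does not give its exact functional form; a common choice, assumed here for illustration, is a Gaussian niche parameterised by the optimum and breadth traits:

```python
import math

def establishment_probability(env, optimum, breadth):
    """Gaussian niche (an assumed, illustrative form): the chance that a
    species establishes at a site with environmental value `env`, given
    its niche optimum and niche breadth. Specialists (small breadth)
    perform very well near their optimum and poorly elsewhere;
    generalists trade peak performance for environmental range."""
    return math.exp(-((env - optimum) ** 2) / (2.0 * breadth ** 2))
```

In a spatially autocorrelated landscape, neighbouring sites have similar `env` values, so a specialist's favourable patches are clustered, which is one mechanism by which autocorrelation interacts with dispersal and disturbance to shape which trait combinations persist.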
Abstract:
The purpose of this bachelor's thesis was to chart scientific research articles in order to present factors contributing to medication errors made by nurses in a hospital setting, and to introduce methods to prevent medication errors. Additionally, international and Finnish research was combined, and the findings were reflected against the Finnish health care system. A literature review was conducted of 23 scientific articles. Data were searched systematically from the CINAHL, MEDIC, and MEDLINE databases, as well as manually. The literature was analysed and the findings combined using inductive content analysis. Findings revealed that both organisational and individual factors contributed to medication errors. High workload, communication breakdowns, an unsuitable working environment, distractions and interruptions, and similar medication products were identified as organisational factors. Individual factors included nurses' failure to follow protocol, inadequate knowledge of medications, and personal qualities of the nurse. Developing and improving the physical environment, error reporting, and medication-management protocols were emphasised as methods to prevent medication errors. Investing in staff competence and well-being was also identified as a prevention method. The number of Finnish articles was small, and the applicability of the findings to Finland is therefore difficult to assess; however, the findings seem to fit the Finnish health care system relatively well. Further research is needed to identify the factors that contribute to medication errors in Finland, which is a necessity for developing prevention methods that fit the Finnish health care system.
Abstract:
In this paper, an advanced technique for the generation of deformation maps using synthetic aperture radar (SAR) data is presented. The algorithm estimates the linear and nonlinear components of the displacement, the error of the digital elevation model (DEM) used to cancel the topographic terms, and the atmospheric artifacts, from a reduced set of low-spatial-resolution interferograms. The pixel candidates are selected from those presenting a good coherence level in the whole set of interferograms, and the resulting nonuniform mesh is tessellated with a Delaunay triangulation to establish connections among them. The linear component of movement and the DEM error are estimated by adjusting a linear model to the data, but only on the connections. This information, once unwrapped to retrieve the absolute values, is then used to calculate the nonlinear component of movement and the atmospheric artifacts with alternating filtering techniques in both the temporal and spatial domains. The method presents high flexibility with respect to the required number of images and the baseline lengths, although better results are obtained with large datasets of short-baseline interferograms. The technique has been tested with European Remote Sensing SAR data from an area of Catalonia (Spain) and validated with precise on-field leveling measurements.
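The linear model fitted on each connection combines a deformation term proportional to the temporal baseline and a DEM-error term proportional to the perpendicular baseline. A minimal sketch on unwrapped phases (the ERS C-band wavelength is real; the slant-range and incidence-angle product, and the function name, are illustrative assumptions):

```python
import numpy as np

WAVELENGTH = 0.0566  # ERS C-band radar wavelength in metres

def fit_linear_velocity_dem_error(phase, t_span, b_perp,
                                  r_sin_theta=850e3 * 0.38):
    """Least-squares fit of the linear deformation rate v (m/yr) and the
    DEM error eps (m) to unwrapped interferometric phases (rad):
      phase_i = (4*pi/lam)*v*t_i + (4*pi/(lam*R*sin(theta)))*b_perp_i*eps
    r_sin_theta is an assumed slant range * sin(incidence) value."""
    lam = WAVELENGTH
    A = np.column_stack([
        4.0 * np.pi / lam * np.asarray(t_span),                  # velocity
        4.0 * np.pi / (lam * r_sin_theta) * np.asarray(b_perp),  # DEM error
    ])
    (v, eps), *_ = np.linalg.lstsq(A, np.asarray(phase), rcond=None)
    return v, eps
```

In the actual algorithm this fit is performed on phase differences between connected pixels (so atmospheric artifacts largely cancel), and the model is adjusted to wrapped data by maximising coherence rather than by plain least squares; the sketch above only shows the structure of the two-parameter model.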
Abstract:
Voltage fluctuations caused by parasitic impedances in the power-supply rails are a major concern in modern ICs. These fluctuations spread to the various nodes of the internal sections, causing two effects: a degradation of performance, mainly impacting gate delays, and a noisy contamination of the quiescent levels of the logic that drives each node. Both effects are presented together in this paper, showing that both are a cause of errors in modern and future digital circuits. The paper groups both error mechanisms and shows how the global error rate is related to the voltage deviation and to the clock period of the digital system.
Abstract:
This paper presents a probabilistic approach to modelling the problem of power-supply voltage fluctuations. Error-probability calculations are shown for some 90-nm-technology digital circuits. The analysis gives the timing-violation error probability as a new design quality factor, in contrast with conventional techniques that assume the full perfection of the circuit. The evaluation of this error bound can be useful for new design paradigms in which retry and self-recovering techniques are applied to the design of high-performance processors. The method described here makes it possible to evaluate the performance of these techniques by calculating the expected error probability in terms of power-supply distribution quality.
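The core quantity in such an analysis is the probability that a supply-noise-perturbed path delay exceeds the clock period. A minimal sketch, assuming (purely for illustration, not the paper's model) that the voltage fluctuations make the path delay Gaussian:

```python
import math

def timing_error_probability(t_clk, mean_delay, sigma_delay):
    """Probability that a path delay, modelled here as Gaussian with a
    voltage-fluctuation-induced spread sigma_delay, exceeds the clock
    period: P(delay > Tclk) = 0.5 * erfc((Tclk - mu) / (sigma*sqrt(2)))."""
    z = (t_clk - mean_delay) / (sigma_delay * math.sqrt(2.0))
    return 0.5 * math.erfc(z)
```

A clock period sitting three standard deviations above the mean delay, for instance, still leaves a per-cycle violation probability of about 1.3e-3, which is exactly the kind of residual error rate that retry and self-recovery schemes are designed to absorb.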