996 results for Global errors


Relevance:

20.00%

Publisher:

Abstract:

This paper evaluates the global welfare impact of observed levels of migration using a quantitative multi-sector model of the world economy calibrated to aggregate and firm-level data. Our framework features cross-country labor productivity differences, international trade, remittances, and a heterogeneous workforce. We compare welfare under the observed levels of migration to a no-migration counterfactual. In the long run, natives in countries that received a lot of migration, such as Canada or Australia, are better off due to greater product variety available in consumption and as intermediate inputs. In the short run the impact of migration on average welfare in these countries is close to zero, while the skilled and unskilled natives tend to experience welfare changes of opposite signs. The remaining natives in countries with large emigration flows, such as Jamaica or El Salvador, are also better off due to migration, but for a different reason: remittances. The welfare impact of observed levels of migration is substantial, at about 5 to 10% for the main receiving countries and about 10% in countries with large incoming remittances. Our results are robust to accounting for imperfect transferability of skills, selection into migration, and imperfect substitution between natives and immigrants.

Relevance:

20.00%

Publisher:

Abstract:

Until the mid-1990s, gastric cancer was the leading cause of cancer death worldwide, although rates had been declining for several decades and gastric cancer has become a relatively rare cancer in North America and in most of Northern and Western Europe, but not in Eastern Europe, Russia and selected areas of Central and South America or East Asia. We analyzed gastric cancer mortality in Europe and other areas of the world from 1980 to 2005 using joinpoint regression analysis, and provided updated site-specific incidence rates from 51 selected registries. Over the last decade, the annual percent change (APC) in mortality rate was around -3% to -4% for the major European countries. The APCs were similar for the Republic of Korea (APC = -4.3%), Australia (-3.7%), the USA (-3.6%), Japan (-3.5%), Ukraine (-3%) and the Russian Federation (-2.8%). In Latin America, the decline was less marked but constant, with APCs around -1.6% in Chile and Brazil, -2.3% in Argentina and Mexico and -2.6% in Colombia. Cancers of the fundus and pylorus are more common in high-incidence and high-mortality areas and have been declining more than cardia gastric cancer. Steady downward trends persist in gastric cancer mortality worldwide, even in middle-aged populations, and hence further appreciable declines are likely in the near future.
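The annual percent change figures above are derived from log-linear trend models, as in joinpoint regression. As a minimal illustration (not the authors' code), the APC can be recovered from the slope of an ordinary least-squares fit of log mortality rates against calendar year via APC = 100 * (exp(slope) - 1); the rates below are synthetic.

```python
import numpy as np

# Hypothetical age-standardized gastric cancer mortality rates (per 100,000),
# declining by roughly 3% per year; the values are illustrative only.
years = np.arange(1995, 2006)
rates = 12.0 * 0.97 ** (years - years[0])

# Fit log(rate) = a + b * year by ordinary least squares.
b, a = np.polyfit(years, np.log(rates), 1)

# Annual percent change, as reported in joinpoint analyses.
apc = 100.0 * (np.exp(b) - 1.0)
print(f"Estimated APC: {apc:.1f}%")  # about -3.0% for these synthetic data
```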

Relevance:

20.00%

Publisher:

Abstract:

Disasters are often perceived as fast and random events. While the triggers may be sudden, disasters are the result of an accumulation of consequences of inappropriate actions and decisions and of global change. To modify this perception of risk, advocacy tools are needed. Quantitative methods have been developed to identify the distribution and the underlying factors of risk.

Disaster risk results from the intersection of hazards, exposure and vulnerability. The frequency and intensity of hazards can be influenced by climate change or by the decline of ecosystems; population growth increases exposure, while changes in the level of development affect vulnerability. Since each of these components may change, risk is dynamic and should be reassessed periodically by governments, insurance companies or development agencies. At the global level, these analyses are often performed using databases of reported losses. Our results show that these are likely to be biased, in particular by improvements in access to information. International loss databases are not exhaustive and give no information on exposure, intensity or vulnerability. A new approach, independent of reported losses, is therefore necessary.

The research presented here was mandated by the United Nations and by agencies working in development and the environment (UNDP, UNISDR, GTZ, UNEP and IUCN). These organizations needed a quantitative assessment of the underlying factors of risk, to raise awareness among policymakers and to prioritize disaster risk reduction projects.

The method is based on geographic information systems, remote sensing, databases and statistical analysis. It required a large amount of data (1.7 TB covering both the physical environment and socio-economic parameters) and several thousand hours of processing. A global risk model was developed to reveal the distribution of hazards, exposure and risk, and to identify the underlying risk factors for several hazards (floods, tropical cyclones, earthquakes and landslides). Two multiple-hazard risk indexes were generated to compare countries. The results include an evaluation of the roles of hazard intensity, exposure, poverty and governance in the pattern and trends of risk. It appears that vulnerability factors change depending on the type of hazard and that, contrary to exposure, their weight decreases as intensity increases.

At the local level, the method was tested to highlight the influence of climate change and ecosystem decline on the hazard itself. In northern Pakistan, deforestation increases landslide susceptibility. Research in Peru (based on satellite imagery and ground data collection) revealed a rapid glacier retreat and provides an assessment of the remaining ice volume as well as scenarios of possible evolution.

These results were presented to different audiences, including 160 governments. The results and the data generated are available online through an open-source SDI (http://preview.grid.unep.ch). The method is flexible and easily transferable to other scales and issues, with good prospects for adaptation to other research areas. Characterizing risk at the global level and identifying the role of ecosystems in disaster risk are rapidly developing fields. This research revealed many challenges; some were resolved, while others remain limitations. However, it is clear that the level of development, and more specifically unsustainable development, configures a large part of disaster risk, and that the dynamics of risk are governed primarily by global change.
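The risk model described above combines hazard, exposure and vulnerability. Purely as an illustrative sketch (not the actual PREVIEW/global risk model), the snippet below folds a hazard frequency, an exposed population and a vulnerability proxy into a toy per-country index; the functional form, the log transform and the country values are all assumptions.

```python
import math

def risk_index(hazard_frequency, exposed_population, vulnerability):
    """Toy multiplicative risk index: risk ~ hazard x exposure x vulnerability.

    hazard_frequency   -- expected events per year (e.g. tropical cyclones)
    exposed_population -- people living in the hazard-prone area
    vulnerability      -- dimensionless proxy (e.g. derived from poverty, governance)
    """
    # Log-transform exposure so that very populous countries do not dominate.
    return hazard_frequency * math.log10(1 + exposed_population) * vulnerability

# Hypothetical country records (values invented for illustration).
countries = {
    "A": dict(hazard_frequency=2.5, exposed_population=3_000_000, vulnerability=0.8),
    "B": dict(hazard_frequency=0.4, exposed_population=20_000_000, vulnerability=0.3),
}
for name, c in countries.items():
    print(name, round(risk_index(**c), 2))
```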

Relevance:

20.00%

Publisher:

Abstract:

Life cycle analysis (LCA) approaches require adaptation to reflect the increasing delocalization of production to emerging countries. This work addresses this challenge by establishing a country-level, spatially explicit life cycle inventory (LCI). The study comprises three separate dimensions. The first dimension is spatial: processes and emissions are allocated to the country in which they take place and modeled to take into account local factors. The emerging economies China and India are the locations of production, while consumption occurs in Germany, an Organisation for Economic Co-operation and Development (OECD) country. The second dimension is the product level: we consider two distinct textile garments, a cotton T-shirt and a polyester jacket, in order to highlight potential differences in the production and use phases. The third dimension is the inventory composition: we track CO2, SO2, NOx and particulates, four major atmospheric pollutants, as well as energy use. This third dimension enriches the analysis of the spatial differentiation (first dimension) and the distinct products (second dimension). We describe the textile production and use processes and define a functional unit for a garment. We then model the important processes using a hierarchy of preferential data sources, placing special emphasis on the modeling of the principal local energy processes: electricity and transport in emerging countries. The spatially explicit inventory is disaggregated by the country in which the emissions occur and analyzed along the dimensions of the study: location, product and pollutant. The inventory shows striking differences between the two products as well as between the different pollutants. For the T-shirt, over 70% of the energy use and CO2 emissions occur in the consuming country, whereas for the jacket, more than 70% occur in the producing country. This reversal of proportions is due to differences in the use phase of the garments. For SO2, in contrast, over two thirds of the emissions occur in the country of production for both the T-shirt and the jacket. The difference in emission patterns between CO2 and SO2 is due to local electricity processes, justifying our emphasis on local energy infrastructure. The complexity of considering differences in location, product and pollutant is rewarded by a much richer understanding of a global production-consumption chain. The inclusion of two different products in the LCI highlights the importance of the definition of a product's functional unit for the analysis and the implications of the results. Several use-phase scenarios demonstrate the importance of consumer behavior over equipment efficiency. The spatial emission patterns of the different pollutants allow us to understand the role of various elements of the energy infrastructure. The emission patterns furthermore inform the debate on the Environmental Kuznets Curve, which applies only to pollutants that can be easily filtered and does not take into account the effects of production displacement. We also discuss the appropriateness and limitations of applying the LCA methodology in a global context, especially in developing countries. Our spatial LCI method yields important insights into the quantity and pattern of emissions due to different product life cycle stages, depending on the local technology, and emphasizes the importance of consumer behavior. From a life cycle perspective, consumer education promoting air-drying and cool washing is more important than efficient appliances.
Spatial LCI with country-specific data is a promising method, necessary for the challenges of globalized production-consumption chains. We recommend inventory reporting of final energy forms, such as electricity, and modular LCA databases, which would allow the easy modification of underlying energy infrastructure.
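To make the notion of a country-level, spatially explicit inventory concrete, a minimal sketch of such a structure follows: emissions per pollutant are accumulated against the country where the process takes place and then expressed as country shares per functional unit. The stages, pollutants and quantities are invented for illustration and are not the study's data.

```python
from collections import defaultdict

# inventory[pollutant][country] accumulates emissions (kg per functional unit).
inventory = defaultdict(lambda: defaultdict(float))

def add_emission(pollutant, country, amount_kg):
    """Allocate an emission to the country where the process takes place."""
    inventory[pollutant][country] += amount_kg

# Hypothetical entries for a T-shirt functional unit (numbers are illustrative).
add_emission("CO2", "China",   0.9)    # fibre and fabric production
add_emission("SO2", "China",   0.004)  # coal-based electricity for spinning/weaving
add_emission("CO2", "Germany", 2.1)    # use phase: washing and tumble drying

for pollutant, by_country in inventory.items():
    total = sum(by_country.values())
    shares = {c: round(100 * v / total) for c, v in by_country.items()}
    print(pollutant, shares)  # e.g. CO2 dominated by the consuming country
```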

Relevance:

20.00%

Publisher:

Relevance:

20.00%

Publisher:

Abstract:

This article reports on a lossless data hiding scheme for digital images in which the data hiding capacity is determined either by a minimum acceptable subjective quality or by the demanded capacity. In the proposed method, data are hidden within the image prediction errors, and the most well-known prediction algorithms, such as the median edge detector (MED), gradient adjusted prediction (GAP) and Jiang prediction, are tested for this purpose. In this method, the histogram of the prediction errors of the image is first computed; then, based on the required capacity or desired image quality, the prediction error values with frequencies larger than this capacity are shifted. The empty space created by such a shift is used for embedding the data. Experimental results show the distinct superiority of the image prediction error histogram over the conventional image histogram itself, due to the much narrower spectrum of the former. We have also devised an adaptive method for hiding data, in which subjective quality is traded for data hiding capacity. Here the positive and negative error values are chosen such that the sum of their frequencies in the histogram is just above the given capacity or above a certain quality.
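The median edge detector (MED) predictor named above is the JPEG-LS predictor; a minimal sketch of computing MED prediction errors and their histogram, the narrow histogram that the shifting scheme exploits, is shown below. The embedding step itself is omitted, and the array handling is an assumption rather than the authors' implementation.

```python
import numpy as np

def med_predict(a, b, c):
    """Median edge detector: a = left, b = above, c = upper-left neighbour."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def prediction_errors(img):
    """Prediction errors for all pixels with a full causal neighbourhood."""
    img = img.astype(int)
    errors = np.zeros_like(img)
    for y in range(1, img.shape[0]):
        for x in range(1, img.shape[1]):
            pred = med_predict(img[y, x - 1], img[y - 1, x], img[y - 1, x - 1])
            errors[y, x] = img[y, x] - pred
    return errors[1:, 1:]

# The error histogram is sharply peaked around zero, which is what leaves room
# for shifting bins outward and embedding data in the emptied bins.
# img = ...  # load an 8-bit grayscale image here
# hist, _ = np.histogram(prediction_errors(img), bins=np.arange(-256, 257))
```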

Relevance:

20.00%

Publisher:

Abstract:

In a system where tens of thousands of words are made up of a limited number of phonemes, many words are bound to sound alike. This similarity of the words in the lexicon, as characterized by phonological neighbourhood density (PhND), has been shown to affect the speed and accuracy of word comprehension and production. Whereas there is a consensus about the interfering nature of neighbourhood effects in comprehension, the language production literature offers a more contradictory picture, with mainly facilitatory but also interfering effects reported on word production. Here we report both types of effect in the same study. Mixed-model multiple regression analyses were conducted on the effects of PhND on errors produced in a naming task by a group of 21 participants with aphasia. These participants produced more formal errors (an interfering effect) for words in dense phonological neighbourhoods, but produced fewer nonwords and semantic errors (a facilitatory effect) with increasing density. In order to investigate the nature of these opposite effects of PhND, we further analysed a subset of formal errors and nonword errors by distinguishing errors differing from the target by a single phoneme (corresponding to the definition of phonological neighbours) from those differing by two or more phonemes. This analysis confirmed that only formal errors that were phonological neighbours of the target increased in dense neighbourhoods, while all other errors decreased. Based on additional observations favouring a lexical origin of these formal errors (they exceeded the probability of producing a real-word error by chance, were of higher frequency, and preserved the grammatical category of the targets), we suggest that the interfering effect of PhND is due to competition between lexical neighbours and target words in dense neighbourhoods.
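Phonological neighbours are conventionally defined as words differing from the target by exactly one phoneme (substitution, addition or deletion), i.e. a phoneme-level edit distance of 1. A small sketch of that check over phoneme sequences follows; the transcriptions are illustrative and not taken from the study's materials.

```python
def edit_distance(a, b):
    """Levenshtein distance over phoneme sequences (lists of strings)."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, start=1):
        curr = [i]
        for j, pb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (pa != pb)))   # substitution
        prev = curr
    return prev[-1]

def is_phonological_neighbour(target, error):
    """Neighbour = exactly one phoneme added, deleted or substituted."""
    return edit_distance(target, error) == 1

# 'cat' /k ae t/ vs 'bat' /b ae t/: one substitution, hence a neighbour.
print(is_phonological_neighbour(["k", "ae", "t"], ["b", "ae", "t"]))  # True
```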

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this bachelor's thesis was to chart scientific research articles in order to present factors contributing to medication errors made by nurses in a hospital setting, and to introduce methods to prevent medication errors. Additionally, international and Finnish research was combined and the findings were reflected on in relation to the Finnish health care system. A literature review was conducted of 23 scientific articles. Data were searched systematically from the CINAHL, MEDIC and MEDLINE databases, and also manually. The literature was analysed and the findings combined using inductive content analysis. The findings revealed that both organisational and individual factors contributed to medication errors. High workload, communication breakdowns, an unsuitable working environment, distractions and interruptions, and similar medication products were identified as organisational factors. Individual factors included nurses' failure to follow protocol, inadequate knowledge of medications and the personal qualities of the nurse. Developing and improving the physical environment, error reporting and medication management protocols were emphasised as methods to prevent medication errors. Investing in the staff's competence and well-being was also identified as a prevention method. The number of Finnish articles was small, and therefore the applicability of the findings to Finland is difficult to assess. However, the findings seem to fit the Finnish health care system relatively well. Further research is needed to identify the factors that contribute to medication errors in Finland; this is a necessity for developing prevention methods that fit the Finnish health care system.

Relevance:

20.00%

Publisher:

Abstract:

Voltage fluctuations caused by parasitic impedances in the power supply rails are a major concern in modern ICs. These fluctuations spread to the various nodes of the internal sections, causing two effects: a degradation of performance, mainly impacting gate delays, and a noisy contamination of the quiescent levels of the logic that drives the node. Both effects are presented together in this paper, showing that both are a cause of errors in modern and future digital circuits. The paper groups both error mechanisms and shows how the global error rate is related to the voltage deviation and the clock period of the digital system.
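The relation drawn here between error rate, voltage deviation and clock period can be pictured with a first-order model in which gate delay grows as the supply droops and a timing error occurs once the degraded critical-path delay exceeds the clock period. The alpha-power-style scaling and every parameter value below are assumptions for illustration, not the authors' model.

```python
def path_delay(nominal_delay, vdd_nominal, droop, vth=0.35, alpha=1.3):
    """First-order (alpha-power-law style) delay increase under a supply droop."""
    vdd = vdd_nominal - droop
    scale = (vdd_nominal - vth) ** alpha / (vdd - vth) ** alpha
    return nominal_delay * scale

def timing_error(nominal_delay, clock_period, vdd_nominal, droop):
    """Error if the degraded critical-path delay no longer fits in the clock period."""
    return path_delay(nominal_delay, vdd_nominal, droop) > clock_period

# Illustrative numbers: 0.9 ns critical path, 1.0 ns clock, 1.0 V nominal supply.
for droop_mv in (50, 100, 150):
    print(droop_mv, "mV droop ->", timing_error(0.9e-9, 1.0e-9, 1.0, droop_mv / 1000))
```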

Relevance:

20.00%

Publisher:

Abstract:

Vol. 23, No. 5, pp. 1024-1037, 2007.

Relevance:

20.00%

Publisher:

Abstract:

Localization, the ability of a mobile robot to estimate its position within its environment, is a key capability for the autonomous operation of any mobile robot. This thesis presents a system for indoor coarse and global localization of a mobile robot based on visual information. The system is based on image matching and uses SIFT features as natural landmarks. Features extracted from training images are stored in a database for later use in localization. During localization, an image of the scene is captured using the robot's on-board camera, features are extracted from the image, and the best match is searched for in the database. Feature matching is done using the k-d tree algorithm. Experimental results showed that localization accuracy increases with the number of training features in the database, while, on the other hand, an increasing number of features tended to have a negative impact on computation time. For some parts of the environment the error rate was relatively high due to a strong correlation of features taken from those places across the environment.
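As a rough sketch of the pipeline described above (not the thesis implementation), the snippet below extracts SIFT descriptors with OpenCV and matches a query image against stored training descriptors using the FLANN k-d tree index, returning the best-scoring place. The file names, place labels and Lowe-ratio threshold are placeholders.

```python
import cv2

sift = cv2.SIFT_create()

def describe(path):
    """Extract SIFT descriptors from a grayscale image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, descriptors = sift.detectAndCompute(img, None)
    return descriptors

# Training database: descriptors per reference place (file names are placeholders).
database = {"corridor": describe("corridor.png"), "lab": describe("lab.png")}

# FLANN matcher backed by a k-d tree index (algorithm 1 = KDTREE).
matcher = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))

def localize(query_path, ratio=0.75):
    """Return the database place with the most Lowe-ratio-filtered matches."""
    query = describe(query_path)
    scores = {}
    for place, train in database.items():
        matches = matcher.knnMatch(query, train, k=2)
        scores[place] = sum(1 for m, n in matches if m.distance < ratio * n.distance)
    return max(scores, key=scores.get)

# print(localize("snapshot.png"))
```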

Relevance:

20.00%

Publisher:

Abstract:

Handout distributed at the presentation of the poster 'UPCommons', shown at the first COMMUNIA Workshop on Technology and the Public Domain, held in Turin (Italy) on 18 January 2008.