830 results for Global sensitivity analysis
Abstract:
Through a simulation carried out with GTAP, this paper presents a preliminary assessment of the potential impact that the Free Trade Area of the Americas would have on the Andean Community of Nations. Maintained by Purdue University, GTAP is a multi-regional general equilibrium model widely used for the analysis of international economics issues. The experiment takes place in an environment of perfect competition and constant returns to scale, and consists of the complete elimination of tariffs on imports of goods among the countries of the Western Hemisphere. The results show modest but positive net welfare gains for the Andean Community, generated mainly by improvements in resource allocation. Unfavourable movements in the terms of trade and the effect of trade diversion with respect to third countries considerably reduce the potential welfare gains. Likewise, the existence of economic distortions within the Andean Community has a negative effect on welfare. The pattern of trade becomes more concentrated in bilateral trade with the United States, and the real remuneration of productive factors improves with the implementation of the free trade area.
Abstract:
This paper is the result of research conducted under the Behavioural Corporate Finance approach, a discipline that has been relevant in the financial world since 2002 and that has so far received little research attention in Colombia. It departs from the traditional assumption that individuals are rational when making financial decisions, since those decisions can be influenced by cognitive and emotional biases that orthodox theory does not take into account in its assumptions. This study investigates, from a conceptual standpoint and through the analysis of the results of a field study with traders in the Colombian stock market, the possible presence of behavioural elements in investment decisions. The biases evaluated were cognitive dissonance, the availability heuristic, and confirmation bias. To collect primary data, a survey was sent to Colombian traders, categorized as experienced traders and young traders. After filtering, 142 surveys were selected for analysis. The main findings were that young traders are more prone to cognitive dissonance and the availability heuristic, and that for both categories the biases analysed moderately influence investment decision-making.
Abstract:
The aim of this work was to investigate and construct an Intervention Plan for a first-year classroom in the city of Portalegre, with a view to including a pupil with Special Educational Needs (SEN) in the class. The plan combines a theoretical and a practical component: a prior theoretical analysis was carried out in order to understand and scientifically characterize the whole problem situation, and data collection and analysis techniques were then applied (documentary research, interviews, sociometry, and naturalistic observation), following the reference literature for each technique. The whole situation was subsequently analysed and the areas of intervention relevant to the problem under study were identified, in order to draw up an Intervention Plan, implemented over four months in the classroom with the aim of bringing about some change in the problem situation encountered. This action-research project sought to answer the starting question: how can learning be promoted in an inclusive environment in a first-year class? The Intervention Plan was structured in fifteen sessions of approximately 90 minutes each, in which activities were developed with the class to promote, as far as possible, the academic progress and participation of the pupil with SEN. The Plan was evaluated using the same data collection and analysis techniques (interviews and sociometry), as well as the weekly reflections written after each intervention. The Intervention Plan had a positive effect on the inclusion of the pupil with SEN in the class.
Abstract:
Populations of Lesser Scaup (Aythya affinis) have declined markedly in North America since the early 1980s. When considering alternatives for achieving population recovery, it would be useful to understand how the rate of population growth is functionally related to the underlying vital rates and which vital rates affect population growth rate the most if changed (which need not be those that influenced historical population declines). To establish a more quantitative basis for learning about life history and population dynamics of Lesser Scaup, we summarized published and unpublished estimates of vital rates recorded between 1934 and 2005, and developed matrix life-cycle models with these data for females breeding in the boreal forest, prairie-parklands, and both regions combined. We then used perturbation analysis to evaluate the effect of changes in a variety of vital-rate statistics on finite population growth rate and abundance. Similar to Greater Scaup (Aythya marila), our modeled population growth rate for Lesser Scaup was most sensitive to unit and proportional change in adult female survival during the breeding and non-breeding seasons, but much less so to changes in fecundity parameters. Interestingly, population growth rate was also highly sensitive to unit and proportional changes in the mean of nesting success, duckling survival, and juvenile survival. Given the small samples of data for key aspects of the Lesser Scaup life cycle, we recommend additional research on vital rates that demonstrate a strong effect on population growth and size (e.g., adult survival probabilities). Our life-cycle models should be tested and regularly updated in the future to simultaneously guide science and management of Lesser Scaup populations in an adaptive context.
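For matrix life-cycle models of this kind, the perturbation quantities have a standard closed form: the sensitivity of the finite growth rate lambda to matrix entry a_ij is v_i*w_j / <v, w>, where w and v are the right and left dominant eigenvectors, and elasticities are the proportional sensitivities. A minimal sketch, assuming a hypothetical two-stage (juvenile, adult) matrix whose entries are placeholders rather than the Lesser Scaup estimates used in the paper:

```python
import numpy as np

# Hypothetical two-stage (juvenile, adult) female-only projection matrix.
# Entries are illustrative placeholders, not the paper's vital-rate estimates.
A = np.array([[0.35, 0.55],   # fecundity terms
              [0.45, 0.80]])  # survival probabilities

# Finite population growth rate = dominant eigenvalue of A.
eigvals, right = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam = eigvals[k].real
w = np.abs(right[:, k].real)  # stable stage distribution (right eigenvector)

eigvals_T, left = np.linalg.eig(A.T)
v = np.abs(left[:, np.argmin(np.abs(eigvals_T - lam))].real)  # reproductive values

# Sensitivities d(lambda)/d(a_ij) = v_i * w_j / <v, w>; elasticities are
# proportional sensitivities and sum to 1 over the matrix.
S = np.outer(v, w) / (v @ w)
E = S * A / lam

print(f"lambda = {lam:.3f}")
print("sensitivities:\n", np.round(S, 3))
print("elasticities:\n", np.round(E, 3))
```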
Abstract:
We present a new, process-based model of soil and stream water dissolved organic carbon (DOC): the Integrated Catchments Model for Carbon (INCA-C). INCA-C is the first model of DOC cycling to explicitly include effects of different land cover types, hydrological flow paths, in-soil carbon biogeochemistry, and surface water processes on in-stream DOC concentrations. It can be calibrated using only routinely available monitoring data. INCA-C simulates daily DOC concentrations over a period of years to decades. Sources, sinks, and transformation of solid and dissolved organic carbon in peat and forest soils, wetlands, and streams as well as organic carbon mineralization in stream waters are modeled. INCA-C is designed to be applied to natural and seminatural forested and peat-dominated catchments in boreal and temperate regions. Simulations at two forested catchments showed that seasonal and interannual patterns of DOC concentration could be modeled using climate-related parameters alone. A sensitivity analysis showed that model predictions were dependent on the mass of organic carbon in the soil and that in-soil process rates were dependent on soil moisture status. Sensitive rate coefficients in the model included those for organic carbon sorption and desorption and DOC mineralization in the soil. The model was also sensitive to the amount of litter fall. Our results show the importance of climate variability in controlling surface water DOC concentrations and suggest the need for further research on the mechanisms controlling production and consumption of DOC in soils.
Abstract:
Increased atmospheric deposition of inorganic nitrogen (N) may lead to increased leaching of nitrate (NO3-) to surface waters. The mechanisms responsible for, and controls on, this leaching are matters of debate. An experimental N addition has been conducted at Gardsjon, Sweden, to determine the magnitude and identify the mechanisms of N leaching from forested catchments within the EU-funded NITREX project. The ability of INCA-N, a simple process-based model of catchment N dynamics, to simulate catchment-scale inorganic N dynamics in soil and stream water during the course of the experimental addition is evaluated. Simulations were performed for 1990-2002. Experimental N addition began in 1991. INCA-N was able to successfully reproduce stream and soil water dynamics before and during the experiment. While INCA-N did not correctly simulate the lag between the start of N addition and NO3- breakthrough, the model was able to simulate the state change resulting from increased N deposition. Sensitivity analysis showed that model behaviour was controlled primarily by parameters related to hydrology and vegetation dynamics and secondarily by in-soil processes.
Abstract:
Fine sediment delivery to and storage in stream channel reaches can disrupt aquatic habitats, impact river hydromorphology, and transfer adsorbed nutrients and pollutants from catchment slopes to the fluvial system. This paper presents a modelling tool for simulating the time-dependent response of the fine sediment system in catchments, using an integrated approach that incorporates both land phase and in-stream processes of sediment generation, storage and transfer. The performance of the model is demonstrated by applying it to simulate in-stream suspended sediment concentrations in two lowland catchments in southern England, the Enborne and the Lambourn, which exhibit contrasting hydrological and sediment responses due to differences in substrate permeability. The sediment model performs well in the Enborne catchment, where direct runoff events are frequent and peak suspended sediment concentrations can exceed 600 mg l⁻¹. The general trends in the in-stream concentrations in the Lambourn catchment are also reproduced by the model, although the observed concentrations are low (rarely exceeding 50 mg l⁻¹) and the background variability in the concentrations is not fully characterized by the model. Direct runoff events are rare in this highly permeable catchment, resulting in a weak coupling between the sediment delivery system and the catchment hydrology. The generic performance of the model is also assessed using a generalized sensitivity analysis based on the parameter bounds identified in the catchment applications. Results indicate that the hydrological parameters contributing to the sediment response include those controlling (1) the partitioning of runoff between surface and soil zone flows and (2) the fractional loss of direct runoff volume prior to channel delivery. The principal sediment processes controlling model behaviour in the simulations are the transport capacity of direct runoff and the in-stream generation, storage and release of the fine sediment fraction. The in-stream processes appear to be important in maintaining the suspended sediment concentrations during low flows in the River Enborne and throughout much of the year in the River Lambourn. Copyright (c) 2007 John Wiley & Sons, Ltd.
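A minimal sketch of a generalized (regional) sensitivity analysis in the spirit described above: sample parameters within bounds, split the runs into behavioural and non-behavioural sets against an objective threshold, and flag the parameters whose distributions differ most between the two sets. The toy model, parameter names, bounds and threshold below are placeholders, not the sediment model's calibrated values.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
n_runs = 5000

# Illustrative parameter bounds (placeholders, not the paper's ranges).
bounds = {"runoff_split":  (0.1, 0.9),   # surface vs soil-zone partitioning
          "direct_loss":   (0.0, 0.5),   # fractional loss of direct runoff
          "transport_cap": (0.5, 5.0)}   # transport capacity coefficient

samples = {k: rng.uniform(lo, hi, n_runs) for k, (lo, hi) in bounds.items()}

def toy_model_error(p):
    # Stand-in objective function (placeholder, not the sediment model).
    return (abs(p["runoff_split"] - 0.6)
            + 0.2 * abs(p["transport_cap"] - 2.0)
            + 0.05 * abs(p["direct_loss"] - 0.2))

errors = np.array([toy_model_error({k: v[i] for k, v in samples.items()})
                   for i in range(n_runs)])
behavioural = errors < np.percentile(errors, 20)   # best 20% of runs

# A large Kolmogorov-Smirnov distance between the behavioural and
# non-behavioural samples flags parameters that control model behaviour.
for name, values in samples.items():
    d, _ = ks_2samp(values[behavioural], values[~behavioural])
    print(f"{name:14s} KS distance: {d:.2f}")
```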
Abstract:
A quantitative model of wheat root systems is developed that links the size and distribution of the root system to the capture of water and nitrogen (which are assumed to be evenly distributed with depth) during grain filling, and allows estimates of the economic consequences of this capture to be assessed. A particular feature of the model is its use of summarizing concepts, and reliance on only the minimum number of parameters (each with a clear biological meaning). The model is then used to provide an economic sensitivity analysis of possible target characteristics for manipulating root systems. These characteristics were: root distribution with depth, proportional dry matter partitioning to roots, resource capture coefficients, shoot dry weight at anthesis, specific root weight and water use efficiency. From the current estimates of parameters it is concluded that a larger investment by the crop in fine roots at depth in the soil, and less proliferation of roots in surface layers, would improve yields by accessing extra resources. The economic return on investment in roots for water capture was twice that of the same amount invested for nitrogen capture. (C) 2003 Annals of Botany Company.
Abstract:
Critical loads are the basis for policies controlling emissions of acidic substances in Europe. The implementation of these policies involves large expenditures, and it is reasonable for policymakers to ask what degree of certainty can be attached to the underlying critical load and exceedance estimates. This paper is a literature review of studies which attempt to estimate the uncertainty attached to critical loads. Critical load models and uncertainty analysis are briefly outlined. Most studies have used Monte Carlo analysis of some form to investigate the propagation of uncertainties in the definition of the input parameters through to uncertainties in critical loads. Though the input parameters are often poorly known, the critical load uncertainties are typically surprisingly small because of a "compensation of errors" mechanism. These results depend on the quality of the uncertainty estimates of the input parameters, and a "pedigree" classification for these is proposed. Sensitivity analysis shows that some input parameters are more important in influencing critical load uncertainty than others, but there have not been enough studies to form a general picture. Methods used for dealing with spatial variation are briefly discussed. Application of alternative models to the same site or modifications of existing models can lead to widely differing critical loads, indicating that research into the underlying science needs to continue.
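As a concrete illustration of the Monte Carlo approach most of the reviewed studies take, the sketch below propagates assumed input distributions through a generic critical-load-style expression and screens parameter importance with rank correlations. The parameter names, distributions and combination rule are invented for illustration and are not taken from any particular study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 10_000

# Illustrative inputs with assumed uncertainty distributions
# (names, units and values are placeholders, not from the review).
weathering = rng.lognormal(mean=np.log(50), sigma=0.3, size=n)  # base cation weathering
uptake     = rng.normal(loc=20, scale=5, size=n)                # net vegetation uptake
runoff     = rng.lognormal(mean=np.log(0.4), sigma=0.2, size=n) # runoff
anc_limit  = rng.normal(loc=-20, scale=5, size=n)               # critical ANC term

# Generic critical-load-style combination of the inputs (illustrative only).
critical_load = weathering - uptake - runoff * anc_limit

print(f"median critical load: {np.median(critical_load):.1f}")
print("5th-95th percentile:", np.round(np.percentile(critical_load, [5, 95]), 1))

# Simple sensitivity screening: Spearman rank correlation of each input
# with the output identifies the most influential parameters.
for name, x in [("weathering", weathering), ("uptake", uptake),
                ("runoff", runoff), ("anc_limit", anc_limit)]:
    rho, _ = spearmanr(x, critical_load)
    print(f"{name:10s} rank correlation: {rho:+.2f}")
```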
Abstract:
Fifty-nine healthy infants were filmed with their mothers and with a researcher at two, four, six and nine months in face-to-face play, and in toy-play at six and nine months. During toy-play at both ages, two indices of joint attention (JA)—infant bids for attention, and percent of time in shared attention—were assessed, along with other behavioural measures. Global ratings were made at all four ages of infants’ and mothers’ interactive style. The mothers varied in psychiatric history (e.g., half had experienced postpartum depression) and socioeconomic status, so their interactive styles were diverse. Variation in nine-month infant JA — with mother and with researcher — was predicted by variation in maternal behaviour and global ratings at six months, but not at two or four months. Concurrent adult behaviour also influenced nine-month JA, independent of infant ratings. Six-month maternal behaviours that positively predicted later JA (some of which remained important at nine months) included teaching, conjoint action on a toy, and global sensitivity. Other behaviours (e.g., entertaining) negatively predicted later JA. Findings are discussed in terms of social-learning and neurobiological accounts of JA emergence.
Abstract:
A mathematical model describing the uptake of low density lipoprotein (LDL) and very low density lipoprotein (VLDL) particles by a single hepatocyte cell is formulated and solved. The model includes a description of the dynamic change in receptor density on the surface of the cell due to the binding and dissociation of the lipoprotein particles, the subsequent internalisation of bound particles, receptors and unbound receptors, the recycling of receptors to the cell surface, cholesterol dependent de novo receptor formation by the cell and the effect that particle uptake has on the cell's overall cholesterol content. The effect that blocking access to LDL receptors by VLDL, or internalisation of VLDL particles containing different amounts of apolipoprotein E (we will refer to these particles as VLDL-2 and VLDL-3), has on LDL uptake is explored. By comparison with experimental data we find that measures of cell cholesterol content are important in differentiating between the mechanisms by which VLDL is thought to inhibit LDL uptake. We extend our work to show that in the presence of both types of VLDL particle (VLDL-2 and VLDL-3), measuring relative LDL uptake does not allow the effects of blocking and of internalisation of each VLDL particle to be distinguished. Instead, by considering the intracellular cholesterol content it is found that internalisation of VLDL-2 and VLDL-3 leads to the highest intracellular cholesterol concentration. A sensitivity analysis of the model reveals that binding, unbinding and internalisation rates, the fraction of receptors recycled and the rate at which the cholesterol dependent free receptors are created by the cell have important implications for the overall uptake dynamics of either VLDL or LDL particles and subsequent intracellular cholesterol concentration. (C) 2008 Elsevier Ltd. All rights reserved.
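A heavily simplified sketch of the receptor binding, internalisation, recycling and cholesterol bookkeeping described above, for a single lipoprotein species. The rate constants are invented for illustration, and the authors' full model additionally includes cholesterol-dependent de novo receptor synthesis and the two VLDL subclasses.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate constants (made up for demonstration, not the paper's values).
k_on, k_off = 0.1, 0.01   # binding / dissociation of particles to surface receptors
k_int       = 0.05        # internalisation of bound receptor-particle complexes
f_rec       = 0.7         # fraction of internalised receptors recycled to the surface
chol_load   = 1.0         # cholesterol delivered per internalised particle
k_use       = 0.02        # intracellular cholesterol consumption / esterification
L_ext       = 1.0         # extracellular lipoprotein concentration (held constant)

def rhs(t, y):
    R, B, C = y           # free receptors, bound complexes, intracellular cholesterol
    bind   = k_on * L_ext * R
    unbind = k_off * B
    intern = k_int * B
    dR = -bind + unbind + f_rec * intern
    dB =  bind - unbind - intern
    dC =  chol_load * intern - k_use * C
    return [dR, dB, dC]

sol = solve_ivp(rhs, (0.0, 500.0), y0=[1.0, 0.0, 0.0])
print("final free receptors, bound complexes, cholesterol:",
      np.round(sol.y[:, -1], 3))
```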
Abstract:
Cold pitched roofs, with their form of construction situating insulation on a horizontal ceiling, are intrinsically vulnerable to condensation. This study reports the results derived from using a simulation package (Heat, Air and Moisture modelling tool, or HAM-Tools) to investigate the risk of condensation in cold pitched roofs in housing fitted with a vapour-permeable underlay (VPU) of known characteristics. In order to visualize the effect of the VPUs on moisture transfer, several scenarios were modelled, and compared with the results from a conventional bituminous felt with high resistance (200 MNs/g, Sd = 40 m). The results indicate that ventilation is essential in the roof to reduce condensation. However, a sensitivity analysis proved that reducing the overall tightness of the ceiling and using lower-resistance VPUs would help in controlling condensation formation in the roof. To a large extent, the proposed characteristic performance of the VPU as predicted by manufacturers and some researchers may only be realistic if gaps in the ceiling are sealed completely during construction, which may be practically difficult given current construction practice.
Abstract:
The mathematical models that describe the immersion-frying period and the post-frying cooling period of an infinite slab or an infinite cylinder were solved and tested. Results were successfully compared with those found in the literature or obtained experimentally, and were discussed in terms of the hypotheses and simplifications made. The models were used as the basis of a sensitivity analysis. Simulations showed that a decrease in slab thickness and core heat capacity resulted in faster crust development. On the other hand, an increase in oil temperature and boiling heat transfer coefficient between the oil and the surface of the food accelerated crust formation. The model for oil absorption during cooling was analysed using the tested post-frying cooling equation to determine the moment in which a positive pressure driving force, allowing oil suction within the pore, originated. It was found that as crust layer thickness, pore radius and ambient temperature decreased so did the time needed to start the absorption. On the other hand, as the effective convective heat transfer coefficient between the air and the surface of the slab increased the required cooling time decreased. In addition, it was found that the time needed to allow oil absorption during cooling was extremely sensitive to pore radius, indicating the importance of an accurate pore size determination in future studies.
Abstract:
The Danish Eulerian Model (DEM) is a powerful air pollution model, designed to calculate the concentrations of various dangerous species over a large geographical region (e.g. Europe). It takes into account the main physical and chemical processes between these species, the actual meteorological conditions, emissions, etc. This is a huge computational task and requires significant resources of storage and CPU time. Parallel computing is essential for the efficient practical use of the model. Some efficient parallel versions of the model were created over the past several years. A suitable parallel version of DEM using the Message Passing Interface (MPI) library was implemented on two powerful supercomputers of the EPCC in Edinburgh, available via the HPC-Europa programme for transnational access to research infrastructures in the EC: a Sun Fire E15K and an IBM HPCx cluster. Although the implementation is, in principle, the same for both supercomputers, a few modifications had to be made for successful porting of the code to the IBM HPCx cluster. Performance analysis and parallel optimization were done next. Results from benchmarking experiments are presented in this paper. Another set of experiments was carried out in order to investigate the sensitivity of the model to variation of some chemical rate constants in the chemical submodel. Certain modifications of the code were necessary for this task. The obtained results will be used for further sensitivity analysis studies using Monte Carlo simulation.
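A minimal mpi4py sketch of the kind of row-wise domain decomposition and halo exchange a parallel chemistry-transport code relies on. The grid dimensions, species count and advance_chemistry step are placeholders and do not reflect DEM's actual decomposition or chemistry scheme.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Illustrative concentration field split by latitude rows across ranks
# (dimensions and species count are placeholders, not DEM's grid).
n_lat, n_lon, n_species = 96, 96, 35
rows = n_lat // size
local = np.random.default_rng(rank).random((rows, n_lon, n_species))

def advance_chemistry(block, dt):
    # Stand-in for the local chemistry/emission step, which needs no
    # communication because it is independent at each grid column.
    return block + dt * 1e-4 * block

for step in range(10):
    local = advance_chemistry(local, dt=150.0)
    # Exchange one halo row with neighbouring ranks; in a full model the
    # received row would feed the advection stencil of the transport step.
    up, down = (rank - 1) % size, (rank + 1) % size
    halo = np.empty_like(local[0])
    comm.Sendrecv(sendbuf=np.ascontiguousarray(local[0]), dest=up,
                  recvbuf=halo, source=down)

# Gather the decomposed field on rank 0 for output.
blocks = comm.gather(local, root=0)
if rank == 0:
    print("assembled field shape:", np.concatenate(blocks).shape)
```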
Abstract:
We discuss the feasibility of wireless terahertz communications links deployed in a metropolitan area and model the large-scale fading of such channels. The model takes into account reception through direct line of sight, ground and wall reflection, as well as diffraction around a corner. The movement of the receiver is modeled by an autonomous dynamic linear system in state space, whereas the geometric relations involved in the attenuation and multipath propagation of the electric field are described by a static nonlinear mapping. A subspace algorithm in conjunction with polynomial regression is used to identify a single-output Wiener model from time-domain measurements of the field intensity when the receiver motion is simulated using a constant angular speed and an exponentially decaying radius. The identification procedure is validated by using the model to perform q-step-ahead predictions. The sensitivity of the algorithm to small-scale fading, detector noise, and atmospheric changes is discussed. The performance of the algorithm is tested in the diffraction zone assuming a range of emitter frequencies (2, 38, 60, 100, 140, and 400 GHz). Extensions of the simulation results to situations where a more complicated trajectory describes the motion of the receiver are also implemented, providing information on the performance of the algorithm under a worst-case scenario. Finally, a sensitivity analysis to model parameters for the identified Wiener system is proposed.
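A toy sketch of the Wiener structure referred to above: a linear state-space block driven by the receiver-motion input, followed by a static polynomial output nonlinearity, with q-step-ahead prediction performed by running the model forward from the current state. The matrices, polynomial and input signal are invented for illustration; the subspace identification step itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative single-output Wiener model: linear dynamics followed by a
# static polynomial nonlinearity (all values are invented placeholders).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.5],
              [1.0]])
C = np.array([[1.0, 0.3]])

def polynomial_nl(z):
    # Static output nonlinearity, e.g. a mild saturation of field intensity.
    return 1.0 + 0.8 * z - 0.05 * z**3

def simulate(u, x0=None):
    x = np.zeros(2) if x0 is None else x0
    y = []
    for uk in u:
        y.append(polynomial_nl(float(C @ x)))
        x = A @ x + B @ np.atleast_1d(uk)
    return np.array(y), x

# Simulated slowly varying input standing in for the receiver trajectory.
u = np.sin(0.01 * np.arange(2000)) + 0.05 * rng.standard_normal(2000)
y, x_end = simulate(u)

# q-step-ahead prediction: run the identified model forward q steps from
# the current state, assuming the future input is known.
def q_step_prediction(x, u_future):
    y_hat, _ = simulate(u_future, x0=x.copy())
    return y_hat

q = 10
y_hat = q_step_prediction(x_end, u[:q])  # placeholder future input
print("q-step-ahead predictions:", np.round(y_hat, 3))
```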