953 results for "model complexity"


Relevance: 60.00%

Publisher:

Abstract:

This thesis explores how multinational corporations of different sizes create barriers to imitation and thereby sustain competitive advantage in rural and informal Base of the Pyramid economies. These markets require close cooperation with local partners in a dynamic environment that lacks enforceable property rights and follows a different rationale than developed markets. In order to explore how competitive advantage is sustained by multinational corporations of different sizes at the Base of the Pyramid, the natural-resource-based view and the dynamic capabilities perspective are integrated. Based on this integration, the natural-resource-based view is extended by identifying critical dynamic capabilities that are assumed to be sources of competitive advantage at the Base of the Pyramid. Further, a contrasting case study explores how the identified dynamic capabilities are protected, and how their competitive advantage is sustained, by isolating mechanisms that create barriers to imitation for a small to medium-sized and a large multinational corporation. The case study results give grounds to assume that most resource-based isolating mechanisms create barriers to imitation that are fairly high for large, established multinational corporations that operate at the rural Base of the Pyramid and have high product and business model complexity. By contrast, barriers to imitation were found to be lower for young, small to medium-sized multinational corporations with low product and business model complexity, which, according to some authors, represent the majority of rural Base of the Pyramid companies. Particularly for small to medium-sized multinational corporations, the case study finds a relationship- and transaction-based unwillingness of local partners to act opportunistically rather than a resource-based inability to imitate.
By offering an explanation of sustained competitive advantage for small to medium-sized multinational corporations at the rural Base of the Pyramid, this thesis closes an important research gap and recommends including institutional and transaction-based research perspectives.

Relevance: 60.00%

Abstract:

We propose an algorithm that extracts image features that are consistent with the 3D structure of the scene. The features can be robustly tracked over multiple views and serve as vertices of planar patches that suitably represent scene surfaces, while reducing the redundancy in the description of 3D shapes. In other words, the extracted features offer good tracking properties while providing the basis for 3D reconstruction with minimum model complexity.

Relevance: 60.00%

Abstract:

Application of semi-distributed hydrological models to large, heterogeneous watersheds raises several problems. On the one hand, the spatial and temporal variability of catchment features should be adequately represented in the model parameterization, while keeping the model complexity at an acceptable level so that state-of-the-art calibration techniques can be exploited. On the other hand, model complexity increases the uncertainty in adjusted model parameter values, and therefore the uncertainty in the water routing across the watershed. This is critical for water quality applications, where not only streamflow but also a reliable estimate of the surface versus subsurface contributions to runoff is needed. In this study, we show how a regularized inversion procedure combined with a multiobjective calibration strategy successfully solves the parameterization of a complex application of a water-quality-oriented hydrological model. The final values of several optimized parameters showed significant and consistent differences across geological and landscape features. Although the number of optimized parameters was significantly increased by the spatial and temporal discretization of adjustable parameters, the uncertainty in the water routing results remained at reasonable values. In addition, a stepwise numerical analysis showed that the effects on calibration performance of including different data types in the objective function can be inextricably linked. Thus, caution should be taken when adding or removing data from an aggregated objective function.
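The regularized inversion at the heart of such a calibration strategy can be sketched in a deliberately simplified linear form. Everything below is illustrative, not from the study: the forward model `G`, the noise level, and the regularization weight `lam` are invented, and a real application would use a nonlinear hydrological model and iterative solvers.

```python
import numpy as np

# Tikhonov-regularized inversion sketch: fit parameters p to observations
# while penalizing departure from a prior estimate p0, which keeps a
# highly parameterized model identifiable.

rng = np.random.default_rng(0)

n_obs, n_par = 50, 8
G = rng.normal(size=(n_obs, n_par))                 # linearized forward model
p_true = rng.normal(size=n_par)                     # synthetic "true" parameters
obs = G @ p_true + 0.05 * rng.normal(size=n_obs)    # noisy observations

p0 = np.zeros(n_par)   # prior parameter estimate
lam = 1.0              # regularization weight

# Solve  min_p ||G p - obs||^2 + lam * ||p - p0||^2  in closed form:
A = G.T @ G + lam * np.eye(n_par)
b = G.T @ obs + lam * p0
p_hat = np.linalg.solve(A, b)

misfit = float(np.linalg.norm(G @ p_hat - obs))
prior_misfit = float(np.linalg.norm(G @ p0 - obs))
```

In practice `lam` trades data misfit against parameter uncertainty: larger values stabilize the inversion at the cost of a poorer fit, which is exactly the complexity-versus-identifiability tension the abstract describes.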

Relevance: 60.00%

Abstract:

1. The ecological niche is a fundamental biological concept. Modelling species' niches is central to numerous ecological applications, including predicting species invasions, identifying reservoirs for disease, nature reserve design and forecasting the effects of anthropogenic and natural climate change on species' ranges. 2. A computational analogue of Hutchinson's ecological niche concept (the multidimensional hyperspace of species' environmental requirements) is the support of the distribution of environments in which the species persist. Recently developed machine-learning algorithms can estimate the support of such high-dimensional distributions. We show how support vector machines can be used to map ecological niches using only observations of species presence to train distribution models for 106 species of woody plants and trees in a montane environment using up to nine environmental covariates. 3. We compared the accuracy of three methods that differ in their approaches to reducing model complexity. We tested models with independent observations of both species presence and species absence. We found that the simplest procedure, which uses all available variables and no pre-processing to reduce correlation, was best overall. Ecological niche models based on support vector machines are theoretically superior to models that rely on simulating pseudo-absence data and are comparable in empirical tests. 4. Synthesis and applications. Accurate species distribution models are crucial for effective environmental planning, management and conservation, and for unravelling the role of the environment in human health and welfare. Models based on distribution estimation rather than classification overcome theoretical and practical obstacles that pervade species distribution modelling. 
In particular, ecological niche models based on machine-learning algorithms for estimating the support of a statistical distribution provide a promising new approach to identifying species' potential distributions and to projecting changes in these distributions as a result of climate change, land use and landscape alteration.
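The presence-only "support of a distribution" idea can be sketched with a minimal stand-in for the one-class estimator. For brevity this uses a Gaussian kernel density score with a quantile threshold rather than an actual support vector machine, and all presence records, bandwidths and covariates are synthetic assumptions:

```python
import numpy as np

# Presence-only niche envelope: score environments by kernel density
# against known presences, then threshold so ~95% of presences fall
# inside the estimated support.

rng = np.random.default_rng(1)

# Synthetic presence records in a 2-D standardized environmental space
# (e.g. temperature, precipitation).
presence = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))

def density_score(x, train, bandwidth=0.3):
    """Mean Gaussian kernel value of point x against the training presences."""
    d2 = np.sum((train - x) ** 2, axis=1)
    return float(np.mean(np.exp(-d2 / (2 * bandwidth ** 2))))

# Threshold at the 5th percentile of the training scores.
train_scores = np.array([density_score(p, presence) for p in presence])
threshold = float(np.quantile(train_scores, 0.05))

def in_niche(x):
    """True if environment x lies inside the estimated niche support."""
    return density_score(np.asarray(x, float), presence) >= threshold

inside = in_niche([0.1, -0.1])   # near the presence cloud
outside = in_niche([5.0, 5.0])   # far from any presence
```

A one-class SVM replaces the density score with a learned decision function, but the workflow — train on presences only, then classify new environments as inside or outside the support — is the same.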

Relevance: 60.00%

Abstract:

The development of eutrophication in river systems is poorly understood, given the complex relationships between fixed plants, algae, hydrodynamics, water chemistry and solar radiation. However, there is a pressing need to understand the relationship between the ecological status of rivers and the controlling environmental factors, to support the reasoned implementation of the Water Framework Directive and Catchment Sensitive Farming in the UK. This research aims to create a dynamic, process-based, mathematical in-stream model to simulate the growth of, and competition between, different vegetation types (macrophytes, phytoplankton and benthic algae) in rivers. The model, applied to the River Frome (Dorset, UK), captured the seasonality of the simulated vegetation types (suspended algae, macrophytes, epiphytes, sediment biofilm) well. The macrophyte results showed that local knowledge is important for explaining unusual changes in biomass. The fixed-algae simulations indicated the need for a more detailed representation of the various herbivorous grazer groups; however, this would increase the model complexity, the number of model parameters and the observation data required to constrain the model. The model results also highlighted that simulating only phytoplankton is insufficient in river systems, because most of the suspended algae have a benthic origin in rivers with short retention times. There is therefore a need for modelling tools that link the benthic and free-floating habitats.

Relevance: 60.00%

Abstract:

A large number of urban surface energy balance models now exist, with different assumptions about the important features of the surface and the exchange processes that need to be incorporated. To date, no comparison of these models has been conducted; in contrast, models for natural surfaces have been compared extensively as part of the Project for Intercomparison of Land-surface Parameterization Schemes. Here, the methods and first results from an extensive international comparison of 33 models are presented. The overall aim of the comparison is to understand the complexity required to model energy and water exchanges in urban areas. The degree of complexity included in the models is outlined and its impact on model performance is discussed. During the comparison there have been significant developments in the models, with resulting improvements in performance (root-mean-square error falling by up to two-thirds). Evaluation is based on a dataset containing net all-wave radiation, sensible heat, and latent heat flux observations for an industrial area in Vancouver, British Columbia, Canada. The aim of the comparison is twofold: to identify those modeling approaches that minimize the errors in the simulated fluxes of the urban energy balance, and to determine the degree of model complexity required for accurate simulations. There is evidence that some classes of models perform better for individual fluxes, but no model performs best or worst for all fluxes. In general, the simpler models perform as well as the more complex models on all statistical measures. The schemes generally have the best overall capability to model net all-wave radiation and the least capability to model latent heat flux.
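The core of such an evaluation is straightforward: score each model's simulated flux series against the observations with a statistic such as root-mean-square error. The sketch below is generic, with invented scheme names and a deterministic synthetic flux series rather than the Vancouver dataset:

```python
import numpy as np

# Rank flux simulations by RMSE against observations.

obs = 100 + 50 * np.sin(np.linspace(0, 4 * np.pi, 240))  # synthetic flux, W m-2

# Two hypothetical schemes: one with moderate random-like scatter,
# one with a large constant bias of 25 W m-2.
simulations = {
    "simple_scheme":  obs + 10 * np.cos(np.linspace(0, 40 * np.pi, 240)),
    "complex_scheme": obs + 25.0,
}

def rmse(sim, obs):
    """Root-mean-square error of a simulated series against observations."""
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

scores = {name: rmse(sim, obs) for name, sim in simulations.items()}
best = min(scores, key=scores.get)
```

As in the intercomparison, a single statistic can rank schemes differently for different fluxes, which is why the study reports several measures per flux rather than one aggregate score.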

Relevance: 60.00%

Abstract:

Purpose – The purpose of this paper is to investigate the effect of choices of model structure and scale in development viability appraisal. The paper addresses two questions concerning the application of development appraisal techniques to viability modelling within the UK planning system. The first relates to the extent to which, given intrinsic input uncertainty, the choice of model structure significantly affects model outputs. The second concerns the extent to which, given intrinsic input uncertainty, the level of model complexity significantly affects model outputs. Design/methodology/approach – Monte Carlo simulation procedures are applied to a hypothetical development scheme in order to measure the effects of model aggregation and structure on model output variance. Findings – It is concluded that, given the particular scheme modelled and unavoidably subjective assumptions of input variance, simple and simplistic models may produce similar outputs to more robust and disaggregated models. Evidence is found of equifinality in the outputs of a simple, aggregated model of development viability relative to more complex, disaggregated models. Originality/value – Development viability appraisal has become increasingly important in the planning system. Consequently, the theory, application and outputs of development appraisal are under intense scrutiny from a wide range of users. However, there has been very little published evaluation of viability models. This paper contributes to the limited literature in this area.
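A Monte Carlo viability appraisal of this kind can be sketched with a simple residual model: land value is what remains of gross development value after costs and a profit margin, with the uncertain inputs drawn from assumed distributions. All figures and distributions below are invented for illustration, not the paper's scheme:

```python
import numpy as np

# Monte Carlo residual land value appraisal with uncertain inputs.

rng = np.random.default_rng(3)
n = 10_000

gdv   = rng.normal(10_000_000, 500_000, n)   # gross development value (GBP)
costs = rng.normal(6_000_000, 400_000, n)    # construction costs and fees (GBP)
profit_margin = 0.20                          # developer's return on cost

# Residual land value per simulation draw:
residual_land_value = gdv - costs * (1 + profit_margin)

mean_rlv = float(residual_land_value.mean())
std_rlv = float(residual_land_value.std())
prob_unviable = float(np.mean(residual_land_value < 0))
```

The paper's question about aggregation can be posed directly in this framework: replace `costs` with several disaggregated cost components and compare the output variance of the two model structures under the same input uncertainty.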


Relevance: 60.00%

Abstract:

As part of the Atmospheric Model Intercomparison Project (AMIP), the behaviour of 15 general circulation models has been analysed in order to diagnose and compare the ability of the different models to simulate Northern Hemisphere midlatitude atmospheric blocking. In accordance with the established AMIP procedure, the 10-year model integrations were performed using prescribed, time-evolving monthly mean observed SSTs spanning the period January 1979–December 1988. Atmospheric observational data (ECMWF analyses) over the same period have also been used to verify the models' results. The models involved in this comparison represent a wide spectrum of model complexity, with different horizontal and vertical resolutions, numerical techniques and physical parametrizations, and exhibit large differences in blocking behaviour. Nevertheless, a few common features can be found, such as the general tendency to underestimate both blocking frequency and the average duration of blocks. The possible relationship between model blocking and model systematic errors has also been assessed, although without resorting to ad hoc numerical experimentation it is impossible to relate particular model deficiencies in representing blocking to precise parts of the model formulation with certainty.
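A standard way to diagnose blocking from model output is a gradient-reversal index in the spirit of Tibaldi and Molteni: a longitude is flagged as blocked when the 500 hPa geopotential height gradient reverses south of the jet while staying strongly negative to the north. The sketch below is simplified (fixed latitudes, no latitude offsets, gradients over a fixed 20-degree span) and uses a synthetic height field:

```python
import numpy as np

def blocking_flags(z_south, z_mid, z_north, ghgn_limit=-200.0):
    """Flag blocked longitudes from 500 hPa heights (m) at three latitudes.

    ghgs > 0 marks a reversed southern gradient; ghgn below the limit
    (here -200 m per 20 deg lat, i.e. -10 m/deg) keeps a strong northern
    gradient, excluding a simple poleward shift of the jet.
    """
    ghgs = z_mid - z_south
    ghgn = z_north - z_mid
    return (ghgs > 0) & (ghgn < ghgn_limit)

# Synthetic heights at roughly 40N / 60N / 80N for 8 longitudes, with a
# blocking-like ridge at longitudes 3-4:
z40 = np.array([5700, 5700, 5700, 5550, 5550, 5700, 5700, 5700.0])
z60 = np.array([5550, 5550, 5550, 5650, 5650, 5550, 5550, 5550.0])
z80 = np.array([5300, 5300, 5300, 5350, 5350, 5300, 5300, 5300.0])

flags = blocking_flags(z40, z60, z80)
blocking_frequency = float(flags.mean())
```

Applied to each day of a model integration and to the ECMWF analyses, the longitude-by-longitude frequency of such flags is the kind of statistic the models were found to underestimate.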


Relevance: 60.00%

Abstract:

Snow provides large seasonal storage of freshwater, and information about the distribution of snow mass as Snow Water Equivalent (SWE) is important for hydrological planning and detecting climate change impacts. Large regional disagreements remain between estimates from reanalyses, remote sensing and modelling. Assimilating passive microwave information improves SWE estimates in many regions but the assimilation must account for how microwave scattering depends on snow stratigraphy. Physical snow models can estimate snow stratigraphy, but users must consider the computational expense of model complexity versus acceptable errors. Using data from the National Aeronautics and Space Administration Cold Land Processes Experiment (NASA CLPX) and the Helsinki University of Technology (HUT) microwave emission model of layered snowpacks, it is shown that simulations of the brightness temperature difference between 19 GHz and 37 GHz vertically polarised microwaves are consistent with Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) and Special Sensor Microwave Imager (SSM/I) retrievals once known stratigraphic information is used. Simulated brightness temperature differences for an individual snow profile depend on the provided stratigraphic detail. Relative to a profile defined at the 10 cm resolution of density and temperature measurements, the error introduced by simplification to a single layer of average properties increases approximately linearly with snow mass. If this brightness temperature error is converted into SWE using a traditional retrieval method then it is equivalent to ±13 mm SWE (7% of total) at a depth of 100 cm. This error is reduced to ±5.6 mm SWE (3% of total) for a two-layer model.
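The "traditional retrieval method" referred to here takes SWE as proportional to the 19 GHz minus 37 GHz brightness temperature difference, so any error in the simulated difference maps linearly into an SWE error. The sketch below uses the classic Chang-type coefficient of 4.8 mm/K as an illustrative assumption, not the paper's calibration, and invented brightness temperatures:

```python
import numpy as np

# Brightness-temperature-difference SWE retrieval sketch.

COEFF_MM_PER_K = 4.8  # illustrative Chang-type coefficient, mm SWE per K

def swe_retrieval(tb19v, tb37v):
    """Retrieve SWE (mm) from vertically polarised brightness temperatures (K).

    Deeper snow scatters more 37 GHz radiation, widening the 19-37 GHz
    difference; negative differences are clipped to zero (no snow signal).
    """
    return COEFF_MM_PER_K * np.maximum(np.asarray(tb19v) - np.asarray(tb37v), 0.0)

tb19v = np.array([250.0, 245.0, 240.0])
tb37v = np.array([240.0, 225.0, 245.0])   # last case: no scattering signal

swe = swe_retrieval(tb19v, tb37v)

# A stratigraphy-induced error in the simulated Tb difference converts
# into SWE error through the same coefficient:
tb_error_k = 2.7                               # hypothetical 1-layer error, K
swe_error_mm = COEFF_MM_PER_K * tb_error_k     # equivalent SWE error, mm
```

This linear conversion is why the approximately linear growth of the single-layer brightness temperature error with snow mass translates directly into the ±13 mm SWE figure quoted above.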

Relevance: 60.00%

Abstract:

This paper evaluates the current status of global modeling of the organic aerosol (OA) in the troposphere and analyzes the differences between models as well as between models and observations. Thirty-one global chemistry transport models (CTMs) and general circulation models (GCMs) have participated in this intercomparison, in the framework of AeroCom phase II. The simulation of OA varies greatly between models in terms of the magnitude of primary emissions, secondary OA (SOA) formation, the number of OA species used (2 to 62), the complexity of OA parameterizations (gas-particle partitioning, chemical aging, multiphase chemistry, aerosol microphysics), and the OA physical, chemical and optical properties. The diversity of the global OA simulation results has increased since earlier AeroCom experiments, mainly due to the increasing complexity of the SOA parameterization in models, and the implementation of new, highly uncertain, OA sources. Diversity of over one order of magnitude exists in the modeled vertical distribution of OA concentrations that deserves a dedicated future study. Furthermore, although the OA / OC ratio depends on OA sources and atmospheric processing, and is important for model evaluation against OA and OC observations, it is resolved only by a few global models. The median global primary OA (POA) source strength is 56 Tg a−1 (range 34–144 Tg a−1) and the median SOA source strength (natural and anthropogenic) is 19 Tg a−1 (range 13–121 Tg a−1). Among the models that take into account the semi-volatile SOA nature, the median source is calculated to be 51 Tg a−1 (range 16–121 Tg a−1), much larger than the median value of the models that calculate SOA in a more simplistic way (19 Tg a−1; range 13–20 Tg a−1, with one model at 37 Tg a−1). The median atmospheric burden of OA is 1.4 Tg (24 models in the range of 0.6–2.0 Tg and 4 between 2.0 and 3.8 Tg), with a median OA lifetime of 5.4 days (range 3.8–9.6 days). 
In models that reported both OA and sulfate burdens, the median value of the OA/sulfate burden ratio is calculated to be 0.77; 13 models calculate a ratio lower than 1, and 9 models higher than 1. For 26 models that reported OA deposition fluxes, the median wet removal is 70 Tg a−1 (range 28–209 Tg a−1), which is on average 85% of the total OA deposition. Fine aerosol organic carbon (OC) and OA observations from continuous monitoring networks and individual field campaigns have been used for model evaluation. At urban locations, the model–observation comparison indicates missing knowledge on anthropogenic OA sources, both strength and seasonality. The combined model–measurements analysis suggests the existence of increased OA levels during summer due to biogenic SOA formation over large areas of the USA that can be of the same order of magnitude as the POA, even at urban locations, and contribute to the measured urban seasonal pattern. Global models are able to simulate the high secondary character of OA observed in the atmosphere as a result of SOA formation and POA aging, although the amount of OA present in the atmosphere remains largely underestimated, with a mean normalized bias (MNB) equal to −0.62 (−0.51) based on the comparison against OC (OA) urban data of all models at the surface, −0.15 (+0.51) when compared with remote measurements, and −0.30 for marine locations with OC data. The mean temporal correlations across all stations are low when compared with OC (OA) measurements: 0.47 (0.52) for urban stations, 0.39 (0.37) for remote stations, and 0.25 for marine stations with OC data. The combination of high (negative) MNB and higher correlation at urban stations when compared with the low MNB and lower correlation at remote sites suggests that knowledge about the processes that govern aerosol processing, transport and removal, on top of their sources, is important at the remote stations. 
There is no clear change in model skill with increasing model complexity with regard to OC or OA mass concentration. However, the complexity is needed in models in order to distinguish between anthropogenic and natural OA as needed for climate mitigation, and to calculate the impact of OA on climate accurately.
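Two of the evaluation statistics used above, the mean normalized bias (MNB) and the temporal correlation between modelled and observed concentrations at a station, can be sketched directly. The monthly values below are synthetic, chosen so the model underestimates by exactly half while tracking the observed seasonality perfectly:

```python
import numpy as np

def mean_normalized_bias(model, obs):
    """MNB = mean((model - obs) / obs); negative values mean underestimation."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return float(np.mean((model - obs) / obs))

def temporal_correlation(model, obs):
    """Pearson correlation of the modelled and observed time series."""
    return float(np.corrcoef(model, obs)[0, 1])

obs   = np.array([2.0, 4.0, 6.0, 8.0])   # synthetic monthly OC, ug C m-3
model = np.array([1.0, 2.0, 3.0, 4.0])   # model at half the observed level

mnb = mean_normalized_bias(model, obs)
r = temporal_correlation(model, obs)
```

This toy case also illustrates the diagnostic point made in the text: a strongly negative MNB can coexist with a high correlation, pointing to missing source strength rather than mis-timed processes.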

Relevance: 60.00%

Abstract:

The third primary production algorithm round robin (PPARR3) compares output from 24 models that estimate depth-integrated primary production from satellite measurements of ocean color, as well as seven general circulation models (GCMs) coupled with ecosystem or biogeochemical models. Here we compare the global primary production fields corresponding to eight months of 1998 and 1999, as estimated from common input fields of photosynthetically available radiation (PAR), sea-surface temperature (SST), mixed-layer depth, and chlorophyll concentration. We also quantify the sensitivity of the ocean-color-based models to perturbations in their input variables. The pair-wise correlation between ocean-color models was used to cluster them into groups of related output, which reflect the regions and environmental conditions under which they respond differently. The groups do not follow model complexity with regard to wavelength or depth dependence, though they are related to the manner in which temperature is used to parameterize photosynthesis. Global average PP varies by a factor of two between models. The models diverged the most for the Southern Ocean, SST under 10 °C, and chlorophyll concentrations exceeding 1 mg Chl m−3. Based on the conditions under which the model results diverge most, we conclude that current ocean-color-based models are challenged by high-nutrient low-chlorophyll conditions and by extreme temperatures or chlorophyll concentrations. The GCM-based models predict primary production comparable to that of the ocean-color-based models: they estimate higher values in the Southern Ocean, at low SST, and in the equatorial band, while they estimate lower values in eutrophic regions (probably because the area of high chlorophyll concentrations is smaller in the GCMs). Further progress in primary production modeling requires improved understanding of the effect of temperature on photosynthesis and better parameterization of the maximum photosynthetic rate.
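Grouping models by the pairwise correlation of their output fields can be sketched with a greedy single-link pass over a correlation threshold. The model names, toy output vectors and the 0.8 threshold below are all invented; a real analysis would correlate gridded global fields and typically use proper hierarchical clustering:

```python
import numpy as np

# Cluster models whose output fields correlate strongly with each other.

outputs = {
    "model1": np.array([1.0, 2.0, 3.0, 4.0, 5.0]),
    "model2": np.array([2.0, 4.0, 6.0, 8.0, 10.0]),  # proportional to model1
    "model3": np.array([5.0, 3.0, 4.0, 1.0, 2.0]),   # unrelated pattern
}

def corr(x, y):
    """Pearson correlation between two output fields."""
    return float(np.corrcoef(x, y)[0, 1])

# Greedy single-link grouping: join the first group containing any
# member correlated above the threshold, else start a new group.
THRESHOLD = 0.8
groups = []
for name in outputs:
    for g in groups:
        if any(corr(outputs[name], outputs[m]) > THRESHOLD for m in g):
            g.append(name)
            break
    else:
        groups.append([name])
```

The resulting groups identify models that respond similarly across regions and conditions, which is how the study relates the clusters to the temperature parameterization rather than to model complexity.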

Relevance: 60.00%

Abstract:

This study aims to compare and validate two soil-vegetation-atmosphere transfer (SVAT) schemes: TERRA-ML and the Community Land Model (CLM). Both SVAT schemes are run in standalone mode (decoupled from an atmospheric model) and forced with meteorological in-situ measurements obtained at several tropical African sites. Model performance is quantified by comparing simulated sensible and latent heat fluxes with eddy-covariance measurements. Our analysis indicates that the Community Land Model corresponds more closely to the micrometeorological observations, reflecting the advantages of its higher model complexity and physical realism. Deficiencies in TERRA-ML are addressed and its performance is improved: (1) adjusting input data (root depth) to region-specific values (tropical evergreen forest) resolves the dry-season underestimation of evapotranspiration; (2) adjusting the leaf area index and albedo (which depend on hard-coded model constants) resolves overestimations of both latent and sensible heat fluxes; and (3) an unrealistic flux partitioning caused by overestimated superficial water contents is reduced by adjusting the hydraulic conductivity parameterization. CLM is by default more versatile in its global application to different vegetation types and climates. On the other hand, with its lower degree of complexity, TERRA-ML is much less computationally demanding, which leads to faster calculation times in coupled climate simulations.