970 results for model complexity


Relevance: 70.00%

Abstract:

Ocean biogeochemistry (OBGC) models span a wide variety of complexities, including highly simplified nutrient-restoring schemes, nutrient–phytoplankton–zooplankton–detritus (NPZD) models that crudely represent the marine biota, models that represent a broader trophic structure by grouping organisms as plankton functional types (PFTs) based on their biogeochemical role (dynamic green ocean models) and ecosystem models that group organisms by ecological function and trait. OBGC models are now integral components of Earth system models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here we present an intercomparison of six OBGC models that were candidates for implementation within the next UK Earth system model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the ocean general circulation model Nucleus for European Modelling of the Ocean (NEMO) and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform all other models across all metrics. Nonetheless, the simpler models are broadly closer to observations across a number of fields and thus offer a high-efficiency option for ESMs that prioritise high-resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low-resolution climate dynamics and high-complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry–climate interactions.
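The bulk-property skill assessment mentioned above can be pictured with a small sketch. The code below is a minimal, hypothetical example — not the UKESM1 intercomparison tooling — of the kind of conventional statistics (bias, RMSE, correlation and normalised standard deviation, as used in Taylor diagrams) that can be computed for one modelled biogeochemical field against gridded observations.

```python
# Minimal sketch of bulk-property skill metrics for one biogeochemical tracer.
# Hypothetical arrays stand in for a model field and a gridded observational
# climatology (e.g. annual-mean surface nitrate); not the UKESM1 tooling itself.
import numpy as np

def skill_metrics(model, obs):
    """Return bias, RMSE, Pearson correlation and normalised std. dev."""
    model, obs = np.ravel(model), np.ravel(obs)
    valid = np.isfinite(model) & np.isfinite(obs)   # ignore land / missing cells
    m, o = model[valid], obs[valid]
    bias = np.mean(m - o)
    rmse = np.sqrt(np.mean((m - o) ** 2))
    corr = np.corrcoef(m, o)[0, 1]
    norm_std = np.std(m) / np.std(o)                # Taylor-diagram radial axis
    return {"bias": bias, "rmse": rmse, "corr": corr, "norm_std": norm_std}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    obs = rng.gamma(shape=2.0, scale=5.0, size=(180, 360))    # fake observed field
    model = obs * 0.9 + rng.normal(0.0, 2.0, size=obs.shape)  # fake model output
    print(skill_metrics(model, obs))
```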

Relevance: 70.00%

Abstract:

Coastal and estuarine landforms provide a physical template that not only accommodates diverse ecosystem functions and human activities, but also mediates flood and erosion risks that are expected to increase with climate change. In this paper, we explore some of the issues associated with the conceptualisation and modelling of coastal morphological change at time and space scales relevant to managers and policy makers. Firstly, we revisit the question of how to define the most appropriate scales at which to seek quantitative predictions of landform change within an age defined by human interference with natural sediment systems and by the prospect of significant changes in climate and ocean forcing. Secondly, we consider the theoretical bases and conceptual frameworks for determining which processes are most important at a given scale of interest and the related problem of how to translate this understanding into models that are computationally feasible, retain a sound physical basis and demonstrate useful predictive skill. In particular, we explore the limitations of a primary scale approach and the extent to which these can be resolved with reference to the concept of the coastal tract and application of systems theory. Thirdly, we consider the importance of different styles of landform change and the need to resolve not only incremental evolution of morphology but also changes in the qualitative dynamics of a system and/or its gross morphological configuration. The extreme complexity and spatially distributed nature of landform systems means that quantitative prediction of future changes must necessarily be approached through mechanistic modelling of some form or another. Geomorphology has increasingly embraced so-called ‘reduced complexity’ models as a means of moving from an essentially reductionist focus on the mechanics of sediment transport towards a more synthesist view of landform evolution. However, there is little consensus on exactly what constitutes a reduced complexity model and the term itself is both misleading and, arguably, unhelpful. Accordingly, we synthesise a set of requirements for what might be termed ‘appropriate complexity modelling’ of quantitative coastal morphological change at scales commensurate with contemporary management and policy-making requirements: 1) the system being studied must be bounded with reference to the time and space scales at which behaviours of interest emerge and/or scientific or management problems arise; 2) model complexity and comprehensiveness must be appropriate to the problem at hand; 3) modellers should seek a priori insights into what kind of behaviours are likely to be evident at the scale of interest and the extent to which the behavioural validity of a model may be constrained by its underlying assumptions and its comprehensiveness; 4) informed by qualitative insights into likely dynamic behaviour, models should then be formulated with a view to resolving critical state changes; and 5) meso-scale modelling of coastal morphological change should reflect critically on the role of modelling and its relation to the observable world.

Relevance: 70.00%

Abstract:

The Phosphorus Indicators Tool provides a catchment-scale estimation of diffuse phosphorus (P) loss from agricultural land to surface waters using the most appropriate indicators of P loss. The Tool provides a framework that may be applied across the UK to estimate P loss, which is sensitive not only to land use and management but also to environmental factors such as climate, soil type and topography. The model complexity incorporated in the P Indicators Tool has been adapted to the level of detail in the available data and the need to reflect the impact of changes in agriculture. Currently, the Tool runs on an annual timestep and at a 1 km² grid scale. We demonstrate that the P Indicators Tool works in principle and that its modular structure provides a means of accounting for P loss from one layer to the next, and ultimately to receiving waters. Trial runs of the Tool suggest that modelled P delivery to water approximates measured water quality records. The transparency of the structure of the P Indicators Tool means that identification of poorly performing coefficients is possible, and further refinements of the Tool can be made to ensure it is better calibrated and subsequently validated against empirical data, as it becomes available.
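The modular, layer-to-layer accounting that the Tool's structure implies can be sketched as follows; the layer names, coefficients and values are invented for illustration and are not the P Indicators Tool's calibrated quantities.

```python
# Illustrative sketch of layered annual P-loss accounting per 1 km² grid cell.
# Coefficients and layer names are hypothetical, not the Tool's calibrated values.
from dataclasses import dataclass

@dataclass
class GridCell:
    soil_p_surplus_kg: float      # annual P surplus from land use / management (kg/km²)
    mobilisation_coeff: float     # fraction of surplus mobilised (soil type, climate)
    delivery_coeff: float         # fraction of mobilised P delivered to water (topography)

def annual_p_delivery(cell: GridCell) -> float:
    """P delivered to receiving waters, kg per km² per year."""
    mobilised = cell.soil_p_surplus_kg * cell.mobilisation_coeff
    return mobilised * cell.delivery_coeff

def catchment_load(cells: list[GridCell]) -> float:
    """Sum cell-level deliveries to give a catchment-scale annual load (kg/yr)."""
    return sum(annual_p_delivery(c) for c in cells)

if __name__ == "__main__":
    cells = [GridCell(12.0, 0.15, 0.4), GridCell(8.0, 0.25, 0.6), GridCell(20.0, 0.10, 0.3)]
    print(f"Catchment P load: {catchment_load(cells):.2f} kg/yr")
```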

Relevance: 70.00%

Abstract:

Increased atmospheric concentrations of carbon dioxide (CO2) will benefit the yield of most crops. Two free air CO2 enrichment (FACE) meta-analyses have shown increases in yield of between 0 and 73% for C3 crops. Despite this large range, few crop modelling studies quantify the uncertainty inherent in the parameterisation of crop growth and development. We present a novel perturbed-parameter method of crop model simulation, which uses some constraints from observations, that does this. The model used is the groundnut (i.e. peanut; Arachis hypogaea L.) version of the general large-area model for annual crops (GLAM). The conclusions are of relevance to C3 crops in general. The increases in yield simulated by GLAM for doubled CO2 were between 16 and 62%. The difference in mean percentage increase between well-watered and water-stressed simulations was 6.8 percentage points. These results were compared to FACE and controlled environment studies, and to sensitivity tests on two other crop models of differing levels of complexity: CROPGRO, and the groundnut model of Hammer et al. [Hammer, G.L., Sinclair, T.R., Boote, K.J., Wright, G.C., Meinke, H., Bell, M.J., 1995. A peanut simulation model. I. Model development and testing. Agron. J. 87, 1085-1093]. The relationship between CO2 and water stress in the experiments and in the models was examined. From a physiological perspective, water-stressed crops are expected to show greater CO2 stimulation than well-watered crops. This expectation has been cited in the literature. However, this result is not seen consistently in either the FACE studies or in the crop models. In contrast, leaf-level models of assimilation do consistently show this result. An analysis of the evidence from these models and from the data suggests that scale (canopy versus leaf), model calibration, and model complexity are factors in determining the sign and magnitude of the interaction between CO2 and water stress. We conclude from our study that the statement that 'water-stressed crops show greater CO2 stimulation than well-watered crops' cannot be held to be universally true. We also conclude, preliminarily, that the relationship between water stress and assimilation varies with scale. Accordingly, we provide some suggestions on how studies of a similar nature, using crop models of a range of complexity, could contribute further to understanding the roles of model calibration, model complexity and scale. © 2008 Elsevier B.V. All rights reserved.
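The perturbed-parameter approach can be sketched generically: draw parameter sets from plausible ranges, keep only those whose baseline yield satisfies an observational constraint, and report the spread of simulated CO2 responses. The toy_yield response function, parameter names and ranges below are invented stand-ins, not GLAM.

```python
# Generic sketch of a constrained perturbed-parameter ensemble for CO2 response.
# The yield model and parameter ranges are hypothetical stand-ins for GLAM.
import numpy as np

rng = np.random.default_rng(42)

def toy_yield(rue, tt_flowering, co2_factor):
    """Toy yield model (t/ha): radiation-use efficiency x duration x CO2 effect."""
    return rue * (tt_flowering / 800.0) * co2_factor

n_samples = 5000
rue = rng.uniform(1.0, 2.5, n_samples)            # radiation-use efficiency range
tt = rng.uniform(600.0, 1000.0, n_samples)        # thermal time to flowering range
co2_ambient = 1.0
co2_doubled = rng.uniform(1.1, 1.7, n_samples)    # uncertain CO2 multiplier

baseline = toy_yield(rue, tt, co2_ambient)
doubled = toy_yield(rue, tt, co2_doubled)

# Observational constraint: keep only parameter sets with a plausible baseline yield.
keep = (baseline > 1.0) & (baseline < 3.0)
pct_increase = 100.0 * (doubled[keep] - baseline[keep]) / baseline[keep]
print(f"Yield increase under doubled CO2: "
      f"{pct_increase.min():.0f}% to {pct_increase.max():.0f}% "
      f"(mean {pct_increase.mean():.0f}%)")
```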

Relevance: 70.00%

Abstract:

This paper investigates the effect of choices of model structure and scale in development viability appraisal. The paper addresses two questions concerning the application of development appraisal techniques to viability modelling within the UK planning system. The first relates to the extent to which, given intrinsic input uncertainty, the choice of model structure significantly affects model outputs. The second concerns the extent to which, given intrinsic input uncertainty, the level of model complexity significantly affects model outputs. Monte Carlo simulation procedures are applied to a hypothetical development scheme in order to measure the effects of model aggregation and structure on model output variance. It is concluded that, given the particular scheme modelled and unavoidably subjective assumptions of input variance, simple and simplistic models may produce similar outputs to more robust and disaggregated models.
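A toy version of the Monte Carlo comparison might look like the sketch below, in which an aggregated appraisal model draws a single gross development value and cost while a disaggregated model draws unit-level prices and separate cost items, and the resulting output variances are compared. All figures and distributions are hypothetical.

```python
# Toy Monte Carlo residual land value appraisal: aggregated vs disaggregated model.
# All values, costs and variances are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Aggregated model: one GDV distribution, one cost distribution.
gdv_agg = rng.normal(10_000_000, 800_000, n)          # gross development value (£)
cost_agg = rng.normal(6_000_000, 500_000, n)
residual_agg = gdv_agg - cost_agg

# Disaggregated model: 40 units and three cost items, each with its own variance.
unit_prices = rng.normal(250_000, 20_000, (n, 40))    # per-unit sale prices
build = rng.normal(4_500_000, 400_000, n)
fees = rng.normal(900_000, 120_000, n)
finance = rng.normal(600_000, 90_000, n)
residual_dis = unit_prices.sum(axis=1) - (build + fees + finance)

for name, r in [("aggregated", residual_agg), ("disaggregated", residual_dis)]:
    print(f"{name}: mean £{r.mean():,.0f}, std £{r.std():,.0f}")
```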

Relevance: 70.00%

Abstract:

A number of urban land-surface models have been developed in recent years to satisfy the growing requirements for urban weather and climate interactions and prediction. These models vary considerably in their complexity and the processes that they represent. Although the models have been evaluated, the observational datasets have typically been of short duration and so are not suitable to assess the performance over the seasonal cycle. The First International Urban Land-Surface Model comparison used an observational dataset that spanned a period greater than a year, which enables an analysis over the seasonal cycle, whilst the variety of models that took part in the comparison allows the analysis to include a full range of model complexity. The results show that, in general, urban models do capture the seasonal cycle for each of the surface fluxes, but have larger errors in the summer months than in the winter. The net all-wave radiation has the smallest errors at all times of the year but with a negative bias. The latent heat flux and the net storage heat flux are also underestimated, whereas the sensible heat flux generally has a positive bias throughout the seasonal cycle. A representation of vegetation is a necessary, but not sufficient, condition for modelling the latent heat flux and associated sensible heat flux at all times of the year. Models that include a temporal variation in anthropogenic heat flux show some increased skill in the sensible heat flux at night during the winter, although their daytime values are consistently overestimated at all times of the year. Models that use the net all-wave radiation to determine the net storage heat flux have the best agreement with observed values of this flux during the daytime in summer, but perform worse during the winter months. The latter could result from a bias of summer periods in the observational datasets used to derive the relations with net all-wave radiation. Apart from these models, all of the other model categories considered in the analysis result in a mean net storage heat flux that is close to zero throughout the seasonal cycle, which is not seen in the observations. Models with a simple treatment of the physical processes generally perform at least as well as models with greater complexity.
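A seasonal-cycle evaluation of this kind reduces to computing monthly errors for each surface flux; the sketch below does this for a sensible heat flux series, with hypothetical arrays standing in for the observed and modelled fluxes.

```python
# Sketch of a seasonal-cycle flux evaluation: monthly bias and RMSE for one flux.
# The observed/modelled series are hypothetical placeholders, not the comparison data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
times = pd.date_range("2011-01-01", "2012-12-31 23:00", freq="h")
obs_qh = 60 + 40 * np.sin(2 * np.pi * (times.dayofyear / 365)) + rng.normal(0, 15, len(times))
mod_qh = obs_qh + 20 + rng.normal(0, 10, len(times))     # model with a positive bias

df = pd.DataFrame({"obs": obs_qh, "mod": mod_qh}, index=times)
err = df["mod"] - df["obs"]
monthly_bias = err.groupby(df.index.month).mean()
monthly_rmse = np.sqrt((err ** 2).groupby(df.index.month).mean())
print(pd.DataFrame({"bias": monthly_bias, "rmse": monthly_rmse}).round(1))
# Larger summer than winter errors would show up directly in this monthly table.
```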

Relevance: 70.00%

Abstract:

In this paper, a hybrid online learning model that combines the fuzzy min-max (FMM) neural network and the Classification and Regression Tree (CART) for motor fault detection and diagnosis tasks is described. The hybrid model, known as FMM-CART, incorporates the advantages of both FMM and CART for undertaking data classification (with FMM) and rule extraction (with CART) problems. In particular, the CART model is enhanced with an importance predictor-based feature selection measure. To evaluate the effectiveness of the proposed online FMM-CART model, a series of experiments using publicly available data sets containing motor bearing faults is first conducted. The results (primarily prediction accuracy and model complexity) are analyzed and compared with those reported in the literature. Then, an experimental study on detecting imbalanced voltage supply of an induction motor using a laboratory-scale test rig is performed. In addition to producing accurate results, a set of rules in the form of a decision tree is extracted from FMM-CART to provide explanations for its predictions. The results positively demonstrate the usefulness of FMM-CART with online learning capabilities in tackling real-world motor fault detection and diagnosis tasks. © 2014 Springer Science+Business Media New York.
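The two-stage idea — classify first, then extract human-readable rules — can be illustrated with a generic decision tree acting as the rule-extraction stage. The sketch below uses scikit-learn's CART implementation on placeholder vibration features and first-stage labels; it illustrates the concept only and is not the FMM-CART code.

```python
# Conceptual sketch of the rule-extraction stage: fit a CART tree to the
# (feature, predicted-class) pairs produced by a first-stage classifier,
# then print the tree as if-then rules.  Data and feature names are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
# Placeholder "bearing vibration" features and first-stage predicted labels.
X = rng.normal(size=(500, 3))
first_stage_labels = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # stands in for FMM output

cart = DecisionTreeClassifier(max_depth=3, random_state=0)
cart.fit(X, first_stage_labels)

rules = export_text(cart, feature_names=["rms_vibration", "kurtosis", "crest_factor"])
print(rules)   # human-readable if-then rules explaining the first-stage predictions
```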

Relevance: 70.00%

Abstract:

Depth-integrated primary productivity (PP) estimates obtained from satellite ocean color-based models (SatPPMs) and those generated from biogeochemical ocean general circulation models (BOGCMs) represent a key resource for biogeochemical and ecological studies at global as well as regional scales. Calibration and validation of these PP models are not straightforward, however, and comparative studies show large differences between model estimates. The goal of this paper is to compare PP estimates obtained from 30 different models (21 SatPPMs and 9 BOGCMs) to a tropical Pacific PP database consisting of ~1000 C-14 measurements spanning more than a decade (1983-1996). Primary findings include: skill varied significantly between models, but performance was not a function of model complexity or type (i.e. SatPPM vs. BOGCM); nearly all models underestimated the observed variance of PP, specifically yielding too few low PP (< 0.2 g C m⁻² d⁻¹) values; more than half of the total root-mean-squared model-data differences associated with the satellite-based PP models might be accounted for by uncertainties in the input variables and/or the PP data; and the tropical Pacific database captures a broad-scale shift from low biomass-normalized productivity in the 1980s to higher biomass-normalized productivity in the 1990s, which was not successfully captured by any of the models. This latter result suggests that interdecadal and global changes will be a significant challenge for both SatPPMs and BOGCMs. Finally, average root-mean-squared differences between in situ PP data on the equator at 140°W and PP estimates from the satellite-based productivity models were 58% lower than analogous values computed in a previous PP model comparison 6 years ago. The success of these types of comparison exercises is illustrated by the continual modification and improvement of the participating models and the resulting increase in model skill. © 2008 Elsevier B.V. All rights reserved.
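A minimal form of the model–data comparison is sketched below: root-mean-squared differences computed on log-transformed PP, a common choice given the skewed distribution of productivity data. The arrays are hypothetical placeholders, not the tropical Pacific database.

```python
# Sketch of a PP model-data comparison: RMSD on log10-transformed productivity.
# The "in situ" and "model" arrays are hypothetical placeholders.
import numpy as np

def log_rmsd(pp_model, pp_insitu):
    """Root-mean-squared difference of log10(PP), ignoring non-positive values."""
    pp_model, pp_insitu = np.asarray(pp_model), np.asarray(pp_insitu)
    valid = (pp_model > 0) & (pp_insitu > 0)
    diff = np.log10(pp_model[valid]) - np.log10(pp_insitu[valid])
    return np.sqrt(np.mean(diff ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    pp_insitu = rng.lognormal(mean=-0.5, sigma=0.6, size=1000)   # g C m^-2 d^-1
    pp_model = pp_insitu * rng.lognormal(mean=0.1, sigma=0.3, size=1000)
    print(f"RMSD(log10 PP) = {log_rmsd(pp_model, pp_insitu):.3f}")
```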

Relevance: 70.00%

Abstract:

In this work we explore optimising parameters of a physical circuit model relative to input/output measurements, using the Dallas Rangemaster Treble Booster as a case study. A hybrid metaheuristic/gradient descent algorithm is implemented, where the initial parameter sets for the optimisation are informed by nominal values from schematics and datasheets. Sensitivity analysis is used to screen parameters, which informs a study of the optimisation algorithm against model complexity by fixing parameters. The results of the optimisation show a significant increase in the accuracy of model behaviour, but also highlight several key issues regarding the recovery of parameters.
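The hybrid global/local strategy can be sketched generically with SciPy: a population-based global search (differential evolution, standing in here for the metaheuristic) provides a starting point that a gradient-based local optimiser then refines. The placeholder model, error function and bounds below are not the Rangemaster circuit.

```python
# Generic sketch of a hybrid metaheuristic + gradient-descent parameter fit.
# The "circuit model" and measured response are placeholders, not the Rangemaster.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def circuit_model(params, x):
    """Placeholder nonlinear input/output model with two component parameters."""
    r, c = params
    return np.tanh(r * x) * np.exp(-c * x)

x = np.linspace(0.0, 1.0, 200)
true_params = np.array([2.2, 0.7])
measured = circuit_model(true_params, x) + np.random.default_rng(2).normal(0, 0.01, x.size)

def error(params):
    """Mean squared error between modelled and measured response."""
    return np.mean((circuit_model(params, x) - measured) ** 2)

bounds = [(0.1, 10.0), (0.01, 5.0)]              # nominal ranges, as from datasheets
global_fit = differential_evolution(error, bounds, seed=0, maxiter=200)
local_fit = minimize(error, global_fit.x, method="L-BFGS-B", bounds=bounds)
print("recovered parameters:", np.round(local_fit.x, 3))
```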

Relevance: 70.00%

Abstract:

The ability to predict the properties of magnetic materials in a device is essential to ensuring the correct operation and optimization of the design as well as the device behavior over a wide range of input frequencies. Typically, development and simulation of wide-bandwidth models requires detailed, physics-based simulations that utilize significant computational resources. Balancing the trade-offs between model computational overhead and accuracy can be cumbersome, especially when the nonlinear effects of saturation and hysteresis are included in the model. This study focuses on the development of a system for analyzing magnetic devices in cases where model accuracy and computational intensity must be carefully and easily balanced by the engineer. A method for adjusting model complexity and corresponding level of detail while incorporating the nonlinear effects of hysteresis is presented that builds upon recent work in loss analysis and magnetic equivalent circuit (MEC) modeling. The approach utilizes MEC models in conjunction with linearization and model-order reduction techniques to process magnetic devices based on geometry and core type. The validity of steady-state permeability approximations is also discussed.
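One way to picture the linearisation step is to take a nonlinear B–H curve, evaluate the incremental (differential) permeability at a chosen operating point, and form a small-signal reluctance for one branch of the magnetic equivalent circuit. The tanh saturation curve and the dimensions below are hypothetical, purely to illustrate trading detail for speed around an operating point.

```python
# Sketch of linearising a saturating core around an operating point:
# incremental permeability dB/dH at H0, then a small-signal reluctance.
# The B-H curve and core dimensions are hypothetical.
import numpy as np

MU0 = 4e-7 * np.pi
B_SAT, H_KNEE = 1.5, 200.0                      # tesla, A/m (illustrative values)

def b_of_h(h):
    """Hypothetical saturating magnetisation curve."""
    return B_SAT * np.tanh(h / H_KNEE)

def incremental_permeability(h0, dh=1e-3):
    """Numerical dB/dH at the operating point h0."""
    return (b_of_h(h0 + dh) - b_of_h(h0 - dh)) / (2 * dh)

def small_signal_reluctance(h0, path_length, area):
    """Reluctance of one MEC branch linearised about h0 (ampere-turns per weber)."""
    mu_inc = incremental_permeability(h0)
    return path_length / (mu_inc * area)

for h0 in (50.0, 200.0, 600.0):                 # below, at, and beyond the knee
    r = small_signal_reluctance(h0, path_length=0.1, area=1e-4)
    print(f"H0={h0:6.1f} A/m  mu_inc/mu0={incremental_permeability(h0)/MU0:8.1f}  R={r:.3e}")
```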

Relevance: 70.00%

Abstract:

A two-stage hybrid model for data classification and rule extraction is proposed. The first stage uses a Fuzzy ARTMAP (FAM) classifier with Q-learning (known as QFAM) for incremental learning of data samples, while the second stage uses a Genetic Algorithm (GA) for rule extraction from QFAM. Given a new data sample, the resulting hybrid model, known as QFAM-GA, is able to provide a prediction pertaining to the target class of the data sample as well as a fuzzy if-then rule to explain the prediction. To reduce the network complexity, a pruning scheme using Q-values is applied to reduce the number of prototypes generated by QFAM. A 'don't care' technique is employed to minimize the number of input features using the GA. A number of benchmark problems are used to evaluate the effectiveness of QFAM-GA in terms of test accuracy, noise tolerance, and model complexity (number of rules and total rule length). The results are comparable to, if not better than, those of many other models reported in the literature. The main significance of this research is a usable and useful intelligent model (i.e., QFAM-GA) for data classification in noisy conditions with the capability of yielding a set of explanatory rules with minimum antecedents. In addition, QFAM-GA is able to maximize accuracy and minimize model complexity simultaneously. The empirical outcomes positively demonstrate the potential impact of QFAM-GA in the practical environment, i.e., providing domain users with an accurate prediction and a concise justification for it, thereby allowing them to adopt QFAM-GA as a useful decision support tool to assist their decision-making processes.

Relevance: 60.00%

Abstract:

Aim: Biodiversity outcomes under global change will be influenced by a range of ecological processes, and these processes are increasingly being considered in models of biodiversity change. However, the level of model complexity required to adequately account for important ecological processes often remains unclear. Here we assess how considering realistically complex frugivore-mediated seed dispersal influences the projected climate change outcomes for plant diversity in the Australian Wet Tropics (all 4313 species). Location: The Australian Wet Tropics, Queensland, Australia. Methods: We applied a metacommunity model (M-SET) to project biodiversity outcomes using seed dispersal models that varied in complexity, combined with alternative climate change scenarios and habitat restoration scenarios. Results: We found that the complexity of the dispersal model had a larger effect on projected biodiversity outcomes than did dramatically different climate change scenarios. Applying a simple dispersal model that ignored spatial, temporal and taxonomic variation due to frugivore-mediated seed dispersal underestimated the reduction in the area of occurrence of plant species under climate change and overestimated the loss of diversity in fragmented tropical forest remnants. The complexity of the dispersal model also changed the habitat restoration approach identified as the best for promoting persistence of biodiversity under climate change. Main conclusions: The consideration of complex processes such as frugivore-mediated seed dispersal can make an important difference in how we understand and respond to the influence of climate change on biodiversity.
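The sensitivity to dispersal-model complexity can be illustrated with a toy comparison of two seed dispersal kernels: a short-distance negative-exponential kernel and a fatter-tailed kernel standing in for frugivore-mediated dispersal. The kernel forms, parameters and gap distance are invented; the M-SET model itself is not reproduced here.

```python
# Toy comparison of dispersal kernels: the fraction of seeds expected to cross a
# habitat gap of a given width.  Kernel forms and parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(11)
n_seeds = 1_000_000
gap_m = 2000.0                                    # hypothetical distance between fragments

# Simple kernel: exponential with a short mean dispersal distance (e.g. gravity/wind).
simple = rng.exponential(scale=150.0, size=n_seeds)

# Frugivore-mediated stand-in: lognormal with a fat tail (occasional long-distance moves).
frugivore = rng.lognormal(mean=np.log(150.0), sigma=1.8, size=n_seeds)

for name, d in [("simple", simple), ("frugivore-mediated", frugivore)]:
    frac = np.mean(d > gap_m)
    print(f"{name:20s} P(dispersal > {gap_m:.0f} m) = {frac:.2e}")
```

Even with the same median dispersal distance, the fat-tailed kernel lets a non-trivial fraction of seeds cross the gap, which is why the choice of dispersal model can matter more than the climate scenario for fragmented landscapes.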

Relevance: 60.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
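A stripped-down, noiseless version of the adaptive test-selection loop can be sketched as follows: maintain a posterior over candidate theories, score each remaining test by the posterior mass of theory pairs whose predictions it separates (an EC2-style "edges cut" objective under deterministic predictions), run the best test, and condition the posterior on the observed choice. The theories, tests and simulated subject are toy placeholders; this is an illustrative skeleton under simplifying assumptions, not the BROAD implementation.

```python
# Skeleton of greedy adaptive test selection over candidate theories (noiseless,
# deterministic-prediction simplification of an EC2-style objective).
# Theories, tests and the simulated subject are toy placeholders, not BROAD.
import numpy as np

rng = np.random.default_rng(0)
n_theories, n_tests = 5, 60

# predictions[h, t] in {0, 1}: which of two lotteries theory h predicts for test t.
predictions = rng.integers(0, 2, size=(n_theories, n_tests))
posterior = np.full(n_theories, 1.0 / n_theories)
true_theory = 3                                     # the simulated subject's theory
remaining = set(range(n_tests))

while len(remaining) > 1 and np.count_nonzero(posterior > 1e-12) > 1:
    def score(t):
        # Posterior mass of disagreeing theory pairs that test t would separate.
        p1 = posterior[predictions[:, t] == 1].sum()
        return p1 * (1.0 - p1)
    best = max(remaining, key=score)
    if score(best) == 0.0:
        break                                       # no test can distinguish survivors
    remaining.discard(best)
    observed = predictions[true_theory, best]       # simulated (noise-free) response
    posterior[predictions[:, best] != observed] = 0.0
    posterior /= posterior.sum()

print("posterior over theories:", np.round(posterior, 3))
```

In the actual method, noisy responses are handled by down-weighting rather than eliminating inconsistent theories; hard elimination is used above only for brevity.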

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from those the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
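The demand prediction can be made concrete with a toy discrete-choice sketch: a multinomial logit in which each product's utility includes a reference-dependent term that penalises prices above the reference (losses) more heavily than it rewards equivalent discounts (gains). All prices, reference prices and coefficients are invented for illustration.

```python
# Toy loss-averse multinomial logit: choice probabilities for two substitutes
# plus an outside option, with reference-dependent price utility.
# All prices, reference prices and coefficients are hypothetical.
import numpy as np

def choice_probs(prices, ref_prices, intercept=5.0, beta_price=1.0, loss_aversion=2.25):
    """Logit probabilities over [item A, item B, no purchase]."""
    gains = np.maximum(ref_prices - prices, 0.0)          # price below reference
    losses = np.maximum(prices - ref_prices, 0.0)         # price above reference
    utilities = (intercept - beta_price * prices + beta_price * gains
                 - loss_aversion * beta_price * losses)
    utilities = np.append(utilities, 0.0)                 # outside option, utility 0
    expu = np.exp(utilities - utilities.max())
    return expu / expu.sum()

ref = np.array([5.0, 5.0])
print("both at reference:   ", np.round(choice_probs(np.array([5.0, 5.0]), ref), 3))
print("A discounted to 4.0: ", np.round(choice_probs(np.array([4.0, 5.0]), ref), 3))
print("A back to 5.0, ref 4:", np.round(choice_probs(np.array([5.0, 5.0]), np.array([4.0, 5.0])), 3))
```

In the third case the item has returned to its pre-discount price while the reference has adapted to the discount, so the loss term shifts choice probability towards the close substitute and the outside option — the qualitative pattern the prediction describes.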

In future work, BROAD could be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 60.00%

Abstract:

The most common approach to decision making in multi-objective optimisation with metaheuristics is a posteriori preference articulation. Increased model complexity and a gradual increase of optimisation problems with three or more objectives have revived an interest in progressively interactive decision making, where a human decision maker interacts with the algorithm at regular intervals. This paper presents an interactive approach to multi-objective particle swarm optimisation (MOPSO) using a novel technique to preference articulation based on decision space interaction and visual preference articulation. The approach is tested on a 2D aerofoil design case study and comparisons are drawn to non-interactive MOPSO. © 2013 IEEE.

Relevance: 60.00%

Abstract:

A method is proposed for directly measuring the pose (position and orientation) accuracy of the moving platform of a parallel kinematic machine tool by means of an additional measurement mechanism. The basic idea is to install, according to the motion characteristics of the moving platform, an auxiliary measurement mechanism between the fixed platform and the moving platform. As the moving platform moves, it drives the measurement mechanism, and sensors mounted on the mechanism measure its generalized coordinates; the pose of the moving platform is then obtained through kinematic modelling. When the forward kinematic solution of the measurement mechanism can be computed fast enough for real-time control, this feedback can be used for real-time accuracy compensation and control of the machine tool. A pose measurement system built on this idea can partially exclude errors such as deformation of the machine under cutting forces and clearance in the kinematic joints, thereby improving the pose measurement accuracy of the machine tool. Taking a five-axis parallel machine tool as an example, the modelling method for directly measuring the pose accuracy of the moving platform with an additional measurement mechanism is presented. The synthesis of the measurement mechanism is of particular importance: its composition determines the complexity of the kinematic model and hence the computational efficiency of the model.