920 results for Ecosystem-level models


Relevance: 30.00%

Abstract:

Land-surface processes comprise a broad class of models that operate at a landscape scale. Current modelling approaches tend to be specialised towards one type of process, yet it is the interaction of processes that is increasingly seen as important for a more integrated approach to land management. This paper presents a technique and a tool that may be applied generically to landscape processes. The technique tracks moving interfaces across landscapes for processes such as water flow, biochemical diffusion, and plant dispersal. Its theoretical development applies a Lagrangian approach to motion over a Eulerian grid space by tracking quantities across a landscape as an evolving front. An algorithm for this technique, called the level set method, is implemented in a geographical information system (GIS). It fits the field data model in GIS and is implemented as operators in map algebra. The paper describes an implementation of the level set method in a map algebra programming language called MapScript, and gives example program scripts for applications in ecology and hydrology.
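The core idea, tracking a front as the zero contour of a function defined over a Eulerian grid, can be sketched as follows. This is a minimal first-order upwind scheme in plain NumPy, not the paper's MapScript operators; the function and parameter names are illustrative.

```python
import numpy as np

def evolve_front(phi, speed, dt=0.5, steps=20):
    """Advance the zero level set of phi outward at the given speed
    field using a first-order upwind scheme (unit cell size assumed)."""
    for _ in range(steps):
        # One-sided differences in each grid direction
        dxm = phi - np.roll(phi, 1, axis=0)
        dxp = np.roll(phi, -1, axis=0) - phi
        dym = phi - np.roll(phi, 1, axis=1)
        dyp = np.roll(phi, -1, axis=1) - phi
        # Upwind gradient magnitude for outward motion (speed >= 0)
        grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                       np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
        phi = phi - dt * speed * grad
    return phi

# Signed-distance initialisation: a circular front at the grid centre,
# expanding at uniform speed (e.g. a simple dispersal front)
n = 64
y, x = np.mgrid[0:n, 0:n]
phi0 = np.sqrt((x - n // 2) ** 2 + (y - n // 2) ** 2) - 5.0
phi = evolve_front(phi0, speed=np.ones((n, n)))
# The region inside the front (phi < 0) has grown
```

In a map-algebra setting, each of the difference and combination steps above corresponds to a local or focal operator applied to a raster layer, which is what makes the method a natural fit for GIS.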

Relevance: 30.00%

Abstract:

Even when data repositories exhibit near-perfect data quality, users may formulate queries that do not correspond to the information requested. Users' poor information-retrieval performance may arise either from problems understanding the data models that represent the real-world systems, or from their query skills. This research focuses on users' understanding of the data structures, i.e., their ability to map the information request onto the data model. The Bunge-Wand-Weber ontology was used to formulate three sets of hypotheses. Two laboratory experiments (one using a small data model and one using a larger data model) tested the effect of ontological clarity on users' performance when undertaking component-, record-, and aggregate-level tasks. For the hypotheses concerning different representations with equivalent semantics, the results indicate that participants using the parsimonious data model performed better on component-level tasks, whereas participants using the ontologically clearer data model performed better on record- and aggregate-level tasks.

Relevance: 30.00%

Abstract:

While classic intergroup theories have specified the processes explaining situational shifts in social identification, the processes whereby social identities change more profoundly and become integrated within the self have yet to be proposed. To this aim, the present studies investigate the processes by which group members integrate a new social identity as they join a new group. Combining a social identity approach with stress and coping models, this research tests whether social factors (i.e., needs satisfied by fellow group members, social support) have an impact on the adaptation strategies group members use to deal with the novelty of the situation and to fit into their new group (seeking information and adopting group norms vs. disengaging). These strategies, in turn, should predict changes in level of identification with the new social group over time, as well as enhanced psychological adjustment. These associations are tested among university students over the course of their first academic year (Study 1), and among online gamers joining a newly established online community (Study 2). Path analyses provide support for the hypothesised associations. The results are discussed in light of recent theoretical developments pertaining to intraindividual changes in social identities and their integration in the self.

Relevance: 30.00%

Abstract:

The principal objective of food safety policy is to guarantee consumer health through specific safety rules and protocols. To meet the requirements of food safety and quality standardisation, in 2002 the European Parliament and the Council of the EU (Regulation (EC) 178/2002 (EC, 2002)) sought to harmonise concepts, principles and procedures so as to provide a common basis for the regulation of food and feed originating from Member States at the Community level. The formalisation of rules and standardisation protocols should, however, proceed through a more detailed and accurate understanding and harmonisation of the global (macroscopic), pseudo-local (mesoscopic) and, where relevant, local (microscopic) properties of food products. The main objective of this doctoral thesis is to illustrate how computational techniques can provide valuable support for such analysis, through (i) the application of established protocols and (ii) the improvement of widely applied techniques. A direct demonstration of the potential already offered by computational approaches is given in the first study, in which docking-based virtual screening was applied to assess the preliminary xeno-androgenicity of selected food contaminants. The second and third studies concern the development and validation of new physico-chemical descriptors in a 3D-QSAR context. Named HyPhar (Hydrophobic Pharmacophore), the new methodology was used to explore selectivity among structurally related molecular targets, and thereby demonstrated the applicability and adaptability required in a food context.
Overall, the results give us confidence in the potential impact that in silico techniques may have on the identification and clarification of molecular events involved in the toxicological and nutritional aspects of food.

Relevance: 30.00%

Abstract:

Over a number of years, as the Higher Education Funding Council for England (HEFCE)'s funding models became more transparent, Aston University was able to discover how its funding for teaching and research was calculated. This enabled calculations to be made of the funds earned by each school in the University, and Aston Business School (ABS) in turn developed models to calculate the funds earned by its programmes and academic groups. These models were a 'load' model and a 'contribution' model. The 'load' model records the weighting of activities undertaken by individual members of staff; the 'contribution' model is the means by which funds are allocated to academic units. The 'contribution' model is informed by the 'load' model in determining the volume of activity for which each academic unit is to be funded.
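The relationship between the two models can be sketched in a few lines. This is purely illustrative: the abstract does not give the actual HEFCE or ABS formulas, and the group names and figures below are hypothetical.

```python
# Hypothetical load weightings per academic group, in units of weighted
# activity aggregated from individual staff members (the 'load' model)
load = {"Marketing": 120.0, "Finance": 95.0, "Operations": 85.0}

def allocate(total_funds, load_by_unit):
    """'Contribution' model sketch: split the funding pot across units
    in proportion to their total weighted load."""
    total_load = sum(load_by_unit.values())
    return {unit: total_funds * weight / total_load
            for unit, weight in load_by_unit.items()}

shares = allocate(300_000.0, load)
# The allocations sum to the total funding pot by construction
```

The design point is simply that the load model supplies the volume measure, and the contribution model turns that volume into a funding share.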

Relevance: 30.00%

Abstract:

An interactive hierarchical Generative Topographic Mapping (HGTM) has been developed to visualise complex data sets. In this paper, we build a more general visualisation system by extending the HGTM visualisation system in three directions: (1) we generalise HGTM to noise models from the exponential family of distributions, with the Latent Trait Model (LTM) as the basic building block; (2) we give the user a choice of initialising the child plots of the current plot in either interactive or automatic mode, where in the interactive mode the user interactively selects 'regions of interest' as in HGTM, whereas in the automatic mode an unsupervised minimum message length (MML)-driven construction of a mixture of LTMs is employed; (3) we derive general formulas for magnification factors in latent trait models. Magnification factors are a useful tool for improving our understanding of the visualisation plots, since they can highlight the boundaries between data clusters. The unsupervised construction is particularly useful when high-level plots are covered with dense clusters of highly overlapping data projections, making it difficult to use the interactive mode. Such a situation often arises when visualising large data sets. We illustrate our approach on a toy example and apply our system to three more complex real data sets.
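For a GTM/LTM-style smooth mapping y(x) = W φ(x) from a 2-D latent space to data space, the magnification factor at a latent point is the local area expansion of the mapping, computable from the Jacobian as sqrt(det(JᵀJ)). The sketch below uses toy Gaussian basis functions and random weights; it is not the paper's derivation for general latent trait models, only the underlying geometric quantity.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy setup: 9 Gaussian RBF basis functions on a 3x3 latent grid,
# mapping into a 5-dimensional data space (all values illustrative)
centres = np.array([[i, j] for i in (-1, 0, 1) for j in (-1, 0, 1)], float)
W = rng.normal(size=(5, len(centres)))
width = 1.0

def phi_grad(x):
    """Gradient of each Gaussian basis function at latent point x: (9, 2)."""
    diff = x - centres
    act = np.exp(-np.sum(diff ** 2, axis=1) / (2 * width ** 2))
    return -(diff / width ** 2) * act[:, None]

def magnification(x):
    """Local area magnification sqrt(det(J^T J)) of the mapping at x."""
    J = W @ phi_grad(x)            # (5, 2) Jacobian of y(x) = W @ phi(x)
    return float(np.sqrt(np.linalg.det(J.T @ J)))

m = magnification(np.array([0.2, -0.3]))
```

Large values of this quantity mark latent regions that are stretched in data space, which is why plotting it highlights boundaries between data clusters.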

Relevance: 30.00%

Abstract:

In this paper, the exchange-rate forecasting performance of neural network models is evaluated against the random walk, autoregressive moving average and generalised autoregressive conditional heteroskedasticity models. There are no guidelines available that can be used to choose the parameters of neural network models, and therefore the parameters are chosen according to what the researcher considers to be the best. Such an approach, however, implies that the risk of making bad decisions is extremely high, which could explain why in many studies neural network models do not consistently perform better than their time series counterparts. In this paper, through extensive experimentation, the level of subjectivity in building neural network models is considerably reduced, thereby giving them a better chance of performing well. The results show that, in general, neural network models perform better than the traditionally used time series models in forecasting exchange rates.
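The baseline comparison at the heart of such studies can be sketched as follows: compute out-of-sample RMSE for a random-walk forecast and for a simple fitted linear model on the same series. This uses simulated data and a least-squares AR(1) stand-in for the linear benchmarks; it is not the paper's models or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Simulate an exchange-rate-like level series with autocorrelated increments
returns = np.zeros(n)
for t in range(1, n):
    returns[t] = 0.5 * returns[t - 1] + rng.normal(scale=0.01)
price = 1.0 + np.cumsum(returns)

train, test = price[:400], price[400:]

# Random-walk benchmark: tomorrow's forecast is today's value
rw_forecast = test[:-1]

# AR(1) on levels, fitted by ordinary least squares on the training set
X = np.column_stack([np.ones(len(train) - 1), train[:-1]])
beta, *_ = np.linalg.lstsq(X, train[1:], rcond=None)
ar_forecast = beta[0] + beta[1] * test[:-1]

def rmse(forecast):
    """Out-of-sample root mean squared error against the realised series."""
    return float(np.sqrt(np.mean((test[1:] - forecast) ** 2)))

rw_rmse, ar_rmse = rmse(rw_forecast), rmse(ar_forecast)
```

A neural network entry in the comparison would be evaluated with exactly the same train/test split and error measure; the paper's contribution is in systematising how its parameters are chosen.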

Relevance: 30.00%

Abstract:

Linear models reach their limitations in applications with nonlinearities in the data. In this paper, new empirical evidence is provided on the relative Euro inflation forecasting performance of linear and non-linear models. The well-established and widely used univariate ARIMA and multivariate VAR models are used as linear forecasting models, whereas neural networks (NN) are used as non-linear forecasting models. The level of subjectivity in the NN building process is kept to a minimum in an attempt to exploit the full potential of the NN. It is also investigated whether the historically poor performance of the theoretically superior measure of the monetary services flow, Divisia, relative to the traditional Simple Sum measure could be attributed to some extent to the evaluation of these indices within a linear framework. The results obtained suggest that non-linear models provide better within-sample and out-of-sample forecasts, and that linear models are simply a subset of them. The Divisia index also outperforms the Simple Sum index when evaluated in a non-linear framework. © 2005 Taylor & Francis Group Ltd.

Relevance: 30.00%

Abstract:

Signal integration determines cell fate at the cellular level, affects cognitive processes and affective responses at the behavioural level, and is likely to be involved in the psychoneurobiological processes underlying mood disorders. Interactions between stimuli may be subject to time effects. Time-dependencies of interactions between stimuli typically lead to complex cell responses and complex responses at the behavioural level. We show that both three-factor models and time series models can be used to uncover such time-dependencies. However, we argue that for short longitudinal data the three-factor modelling approach is more suitable. To illustrate both approaches, we re-analysed previously published short longitudinal data sets. We found that in human embryonic kidney 293 (HEK293) cells the interaction effect in the regulation of extracellular signal-regulated kinase (ERK) 1 signalling activation by insulin and epidermal growth factor is subject to a time effect and decays dramatically at peak values of ERK activation. In contrast, we found that the interaction effect induced by hypoxia and tumour necrosis factor-alpha on the transcriptional activity of the human cyclo-oxygenase-2 promoter in HEK293 cells is time-invariant, at least in the first 12-h window after stimulation. Furthermore, we applied the three-factor model to previously reported animal studies. In these studies, memory storage was found to be subject to an interaction effect of the beta-adrenoceptor agonist clenbuterol and certain antagonists acting on the alpha-1-adrenoceptor / glucocorticoid-receptor system. Our model-based analysis suggests that the interaction effect is relevant only if the antagonist drug is administered within a critical time window.
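The idea of a time-dependent interaction between two stimuli can be sketched with a factorial linear model fitted at each time point: y = b0 + b1·A + b2·B + b3·A·B, where a decaying b3 over time corresponds to the decaying interaction in the ERK example. The data below are simulated, not the paper's measurements, and the effect sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
# 2x2 factorial design, 10 replicates per cell: stimulus A and B on/off
A = np.array([0, 0, 1, 1] * 10)
B = np.array([0, 1, 0, 1] * 10)
X = np.column_stack([np.ones_like(A), A, B, A * B]).astype(float)

def interaction_coeff(true_b3):
    """Simulate responses at one time point and return the fitted
    interaction coefficient b3 from an ordinary least-squares fit."""
    y = (1.0 + 0.5 * A + 0.8 * B + true_b3 * A * B
         + rng.normal(scale=0.05, size=A.size))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(beta[3])

# A decaying interaction across three time points, echoing the ERK case
b3_over_time = [interaction_coeff(b) for b in (2.0, 1.0, 0.1)]
```

A time-invariant interaction, as in the cyclo-oxygenase-2 example, would show an approximately constant b3 across the time points instead.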

Relevance: 30.00%

Abstract:

In this paper, the exchange-rate forecasting performance of neural network models is evaluated against a random walk and a range of time series models. There are no guidelines available that can be used to choose the parameters of neural network models, and therefore the parameters are chosen according to what the researcher considers to be the best. Such an approach, however, implies that the risk of making bad decisions is extremely high, which could explain why in many studies neural network models do not consistently perform better than their time series counterparts. In this paper, through extensive experimentation, the level of subjectivity in building neural network models is considerably reduced, thereby giving them a better chance of performing well. Our results show that, in general, neural network models perform better than traditionally used time series models in forecasting exchange rates.

Relevance: 30.00%

Abstract:

The proliferation of data throughout the strategic, tactical and operational areas of many organisations has created a need for the decision maker to be presented with structured information appropriate for achieving allocated tasks. However, despite this abundance of data, managers at all levels in the organisation commonly encounter a condition of 'information overload' that results in a paucity of the correct information. Specifically, this thesis focuses on the tactical domain within the organisation and the information needs of the management who reside at that level. In doing so, it argues that the link between decision making at the tactical level and low-level transaction-processing data should be a common object model that uses a framework based on knowledge leveraged from co-ordination theory. To achieve this, the Co-ordinated Business Object Model (CBOM) was created. The CBOM details a two-tier framework: the first tier models data using four interacting object models, namely processes, activities, resources and actors; the second tier analyses the data captured by the four object models and returns information that can be used to support tactical decision making. In addition, the Co-ordinated Business Object Support System (CBOSS) is a prototype tool developed both to support the CBOM implementation and to demonstrate the functionality of the CBOM as a modelling approach for supporting tactical management decision making. Through a graphical user interface, the system allows the user to create and explore alternative implementations of an identified tactical-level process. To validate the CBOM, three verification tests were completed. The results provide evidence that the CBOM framework helps bridge the gap between low-level transaction data and the information used to support tactical-level decision making.
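The first tier's four object models, and the second tier's analysis over them, can be sketched as follows. The class and attribute names are hypothetical illustrations; the thesis's actual schema is not given in the abstract.

```python
from dataclasses import dataclass, field

# First tier: the four interacting object models
@dataclass
class Actor:
    name: str

@dataclass
class Resource:
    name: str

@dataclass
class Activity:
    name: str
    actor: Actor
    resources: list = field(default_factory=list)

@dataclass
class Process:
    name: str
    activities: list = field(default_factory=list)

    # Second-tier style query: derive decision-support information
    # from the data captured by the first-tier objects
    def actors(self):
        """Which actors participate in this process?"""
        return {a.actor.name for a in self.activities}

# A toy tactical-level process built from the object models
p = Process("order fulfilment", [
    Activity("pick stock", Actor("warehouse clerk"), [Resource("forklift")]),
    Activity("dispatch", Actor("logistics manager")),
])
```

Exploring an alternative implementation of the process, as CBOSS allows, amounts to constructing a second `Process` with different activities, actors or resources and comparing the derived information.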

Relevance: 30.00%

Abstract:

Influential models of edge detection have generally supposed that an edge is detected at peaks in the 1st derivative of the luminance profile, or at zero-crossings in the 2nd derivative. However, when presented with blurred triangle-wave images, observers consistently marked edges not at these locations, but at peaks in the 3rd derivative. This new phenomenon, termed 'Mach edges', persisted when a luminance ramp was added to the blurred triangle-wave. Modelling these Mach edge detection data required the addition of a physiologically plausible filter prior to the 3rd-derivative computation. A viable alternative model was examined on the basis of data obtained with short-duration, high spatial-frequency stimuli. Detection and feature-marking methods were used to examine the perception of Mach bands in an image set that spanned a range of Mach band detectabilities. A scale-space model that computed edge and bar features in parallel provided a better fit to the data than four competing models that either combined information across scale in a different manner or computed edge or bar features at a single scale. The perception of luminance bars was examined in two experiments. Data for one image set suggested a simple rule for the perception of a small Gaussian bar on a larger inverted Gaussian bar background. In previous research, discriminability (d′) has typically been reported to be a power function of contrast, with an exponent (p) of 2 to 3. However, using bar, grating, and Gaussian edge stimuli with several methodologies, values of p were obtained that ranged from 1 to 1.7 across six experiments. This novel finding was explained by appealing to low stimulus uncertainty, or a near-linear transducer.
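The 3rd-derivative account of Mach edges can be illustrated numerically: blur a triangle-wave luminance profile, differentiate three times, and look for peaks. The blur width and wave parameters below are arbitrary, and no physiologically plausible pre-filter is included; this only shows where 3rd-derivative peaks fall on such a stimulus.

```python
import numpy as np

# Triangle-wave luminance profile in [0, 1]
x = np.linspace(0, 4, 4000)
period = 1.0
triangle = 2 * np.abs((x / period) % 1 - 0.5)

# Gaussian blur by direct convolution (arbitrary sigma)
sigma = 0.05
dx = x[1] - x[0]
k = np.arange(-200, 201) * dx
kernel = np.exp(-k ** 2 / (2 * sigma ** 2))
kernel /= kernel.sum()
blurred = np.convolve(triangle, kernel, mode="same")

# Numerical 3rd derivative of the blurred profile
d3 = np.gradient(np.gradient(np.gradient(blurred, dx), dx), dx)

# Largest-magnitude 3rd-derivative peak away from the array ends:
# a candidate 'Mach edge' location
core = slice(300, -300)
peak_idx = int(np.argmax(np.abs(d3[core]))) + 300
```

On this stimulus the 3rd derivative has paired positive and negative peaks flanking each corner of the triangle wave, which is where 1st-derivative peaks and 2nd-derivative zero-crossings do not coincide with the locations observers mark.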