395 results for Generative Modelling
Abstract:
Business process modelling can help an organisation better understand and improve its business processes. Most business process modelling methods adopt a task- or activity-based approach to identifying business processes. Within our work, we use activity theory to categorise elements within organisations as human beings, activities or artefacts. Because of the direct relationship between these three elements, an artefact-oriented approach to organisation analysis emerges. Organisational semiotics highlights the ontological dependency between affordances within an organisation. We analyse the ontological dependency between organisational elements and from this produce the ontology chart for artefact-oriented business process modelling, clarifying the relationships between the elements of an organisation. Furthermore, we adopt the techniques of semantic analysis and norm analysis, from organisational semiotics, to develop the artefact-oriented method for business process modelling. The proposed method provides a novel perspective for identifying and analysing business processes, as well as agents and artefacts, since the artefact-oriented perspective captures the fundamental flow of work in an organisation. The modelling results enable an organisation to understand and model its processes from an artefact perspective, viewing the organisation as a network of artefacts. The information and practice captured and stored in artefacts can also be shared and reused between organisations that produce similar artefacts.
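As a concrete illustration of the artefact-oriented view, the sketch below encodes agents, activities and artefacts with their ontological dependencies and extracts the organisation's artefact network. It is a minimal sketch under assumed representations; the class and field names (`Element`, `depends_on`) are hypothetical and not taken from the paper.

```python
# Illustrative sketch: an organisation as a network of artefacts, built from
# ontological dependencies between agents, activities and artefacts.
# All class and field names are hypothetical, not from the paper.
from dataclasses import dataclass, field


@dataclass
class Element:
    name: str
    kind: str                                        # "agent", "activity" or "artefact"
    depends_on: list = field(default_factory=list)   # ontological antecedents


def artefact_network(elements):
    """Return (antecedent artefact, dependent artefact) edges, i.e. the view
    of the organisation as a network of artefacts."""
    edges = []
    for e in elements:
        if e.kind != "artefact":
            continue
        for d in e.depends_on:
            if d.kind == "artefact":
                edges.append((d.name, e.name))
    return edges


order = Element("purchase order", "artefact")
approve = Element("approve order", "activity", [order])
invoice = Element("invoice", "artefact", [order, approve])
print(artefact_network([order, approve, invoice]))   # [('purchase order', 'invoice')]
```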
Abstract:
Anaerobic digestion (AD) technologies convert organic wastes and crops into methane-rich biogas for heating, electricity generation and vehicle fuel. Farm-based AD has proliferated in some EU countries, driven by favourable policies promoting sustainable energy generation and GHG mitigation. Despite increased state support, there are still few AD plants on UK farms, leading to a lack of normative data on the viability of AD in the whole-farm context. Farmers and lenders are therefore reluctant to fund AD projects, and policy makers are hampered in their attempts to design policies that adequately support the industry. Existing AD studies and modelling tools do not adequately capture the whole-farm context within which AD operates. This paper demonstrates a whole-farm, optimisation modelling approach that assesses the viability of AD more holistically, accounting for issues such as AD scale, synergies and conflicts with other farm enterprises, choice of feedstocks, digestate use and impact on farm Net Margin. This modelling approach demonstrates, for example, that AD is complementary to dairy enterprises but competes with arable enterprises for farm resources, that reduced nutrient purchases significantly improve Net Margin on arable farms, and that AD scale is constrained by the capacity of farmland to absorb the nutrients in AD digestate.
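To make the optimisation idea concrete, here is a toy whole-farm linear programme in the spirit of the approach described: enterprise areas and AD throughput are chosen to maximise net margin, with AD scale capped by the land's capacity to absorb digestate nutrients. All coefficients are invented placeholders, not figures from the study; the solver is `scipy.optimize.linprog`.

```python
# Toy whole-farm LP sketch. Margins, land areas and the nutrient-absorption
# coefficients are illustrative placeholders, not results from the study.
from scipy.optimize import linprog

# Decision variables: x[0] arable ha, x[1] dairy ha, x[2] AD feedstock (t/yr).
c = [-350.0, -420.0, -12.0]          # negated net margins per unit (maximise)

A_ub = [
    [1.0, 1.0, 0.0],                 # land: arable + dairy ha <= farm area
    [0.0, 0.0, 1.0],                 # AD throughput limited by feedstock supply
    [-0.05, -0.04, 0.02],            # digestate N produced <= N the land absorbs
]
b_ub = [200.0, 5000.0, 0.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, -res.fun)               # enterprise mix and whole-farm net margin
```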
Abstract:
A dynamic size-structured model is developed for phytoplankton and nutrients in the oceanic mixed layer and applied to extract phytoplankton biomass at discrete size fractions from remotely sensed ocean-colour data. General relationships between cell size and biophysical processes of phytoplankton (such as sinking, grazing and primary production) were included in the model through a bottom-up approach. Time-dependent mixed-layer depth was used as a forcing variable, and a sequential data-assimilation scheme was implemented to derive model trajectories. From a given time series, the method produces estimates of size-structured biomass at every observation, and hence estimates the seasonal succession of individual phytoplankton size fractions, derived here from remote sensing for the first time. From these estimates, normalised phytoplankton biomass size spectra over a period of 9 years were calculated for one location in the North Atlantic. Further analysis demonstrated that strong relationships exist between the seasonal trends of the estimated size spectra and the mixed-layer depth, nutrient concentration and total chlorophyll. The results contain useful information on the time-dependent biomass flux in the pelagic ecosystem.
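A minimal sketch of sequential data assimilation for a size-structured state is shown below: a forecast step forced by mixed-layer depth, followed by a Kalman-style update against observed total chlorophyll. The dynamics, size classes and error statistics are illustrative assumptions, not the model of the paper.

```python
# Sketch: sequential assimilation of total-chlorophyll observations into a
# size-structured biomass state. All numbers are illustrative placeholders.
import numpy as np

n = 4                                   # number of size classes
x = np.full(n, 0.2)                     # biomass per size class (mg m^-3)
P = np.eye(n) * 0.05                    # state error covariance
Q = np.eye(n) * 0.01                    # process noise added each forecast
H = np.ones((1, n))                     # observation operator: total chlorophyll
R = np.array([[0.02]])                  # observation error variance

def step(x, mld):
    """One forecast step: small classes grow faster, large classes sink more."""
    growth = np.linspace(0.20, 0.05, n) * (50.0 / mld)
    sinking = np.linspace(0.01, 0.10, n)
    return np.clip(x * (1 + growth - sinking), 1e-6, None)

for chl_obs, mld in [(0.9, 60.0), (1.4, 35.0), (1.1, 25.0)]:
    x, P = step(x, mld), P + Q          # forecast, forced by mixed-layer depth
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    x = x + (K @ (np.array([chl_obs]) - H @ x)).ravel()
    P = (np.eye(n) - K @ H) @ P
    print(x)                            # size-fractionated biomass estimates
```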
Abstract:
Four CO2 concentration inversions and the Global Fire Emissions Database (GFED) versions 2.1 and 3 are used to provide benchmarks for climate-driven modeling of the global land-atmosphere CO2 flux and the contribution of wildfire to this flux. The Land surface Processes and eXchanges (LPX) model is introduced. LPX is based on the Lund-Potsdam-Jena Spread and Intensity of FIRE (LPJ-SPITFIRE) model with amended fire probability calculations. LPX omits human ignition sources yet simulates many aspects of global fire adequately. It captures the major features of observed geographic pattern in burnt area and its seasonal timing and the unimodal relationship of burnt area to precipitation. It simulates features of geographic variation in the sign of the interannual correlations of burnt area with antecedent dryness and precipitation. It simulates well the interannual variability of the global total land-atmosphere CO2 flux. There are differences among the global burnt area time series from GFED2.1, GFED3 and LPX, but some features are common to all. GFED3 fire CO2 fluxes account for only about 1/3 of the variation in total CO2 flux during 1997–2005. This relationship appears to be dominated by the strong climatic dependence of deforestation fires. The relationship of LPX-modeled fire CO2 fluxes to total CO2 fluxes is weak. Observed and modeled total CO2 fluxes track the El Niño–Southern Oscillation (ENSO) closely; GFED3 burnt area and global fire CO2 flux track ENSO much less closely. The GFED3 fire CO2 flux-ENSO connection is most prominent for the El Niño of 1997–1998, which produced exceptional burning conditions in several regions, especially equatorial Asia. The sign of the observed relationship between ENSO and fire varies regionally, and LPX captures the broad features of this variation. These complexities underscore the need for process-based modeling to assess the consequences of global change for fire and its implications for the carbon cycle.
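The kind of diagnostic described here, correlating an annual fire CO2-flux series with an ENSO index, can be sketched as follows. The series below are invented for illustration; the real analysis uses GFED and inversion products over 1997–2005.

```python
# Sketch: interannual correlation between a fire CO2-flux series and an ENSO
# index. Both series are made-up placeholders, not GFED or inversion data.
import numpy as np

years = np.arange(1997, 2006)
nino34 = np.array([2.2, -1.2, -1.4, -0.6, -0.1, 1.1, 0.4, 0.7, -0.7])  # illustrative index
fire_flux = np.array([2.9, 1.6, 1.5, 1.7, 1.8, 2.1, 1.9, 2.0, 1.6])    # Pg C/yr, illustrative

r = np.corrcoef(nino34, fire_flux)[0, 1]
print(f"interannual correlation with ENSO: r = {r:.2f}")
```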
Abstract:
Understanding how species and ecosystems respond to climate change has become a major focus of ecology and conservation biology. Modelling approaches provide important tools for making future projections, but current models of the climate-biosphere interface remain overly simplistic, undermining the credibility of projections. We identify five ways in which substantial advances could be made in the next few years: (i) improving the accessibility and efficiency of biodiversity monitoring data, (ii) quantifying the main determinants of the sensitivity of species to climate change, (iii) incorporating community dynamics into projections of biodiversity responses, (iv) accounting for the influence of evolutionary processes on the response of species to climate change, and (v) improving the biophysical rule sets that define functional groupings of species in global models.
Abstract:
It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
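A hedged sketch of the "independent scene points" variant is given below: each landmark's predicted location contributes an isotropic Gaussian, and summing the log terms (multiplying the Gaussians) yields a likelihood map over candidate navigation endpoints. The error model and all parameter values are assumptions for illustration, not the paper's fitted model.

```python
# Sketch: likelihood map for navigation endpoints from independent, Gaussian
# landmark-location predictions. Landmark positions, offsets and sigma are
# illustrative placeholders.
import numpy as np

landmarks = np.array([[0.0, 2.0], [1.5, -1.0], [-2.0, 0.5]])  # predicted (x, y), metres
sigma = 0.6                                                    # per-landmark error (m)

xs, ys = np.meshgrid(np.linspace(-4, 4, 200), np.linspace(-4, 4, 200))

def endpoint_loglik(offsets):
    """Log-likelihood of ending at (xs, ys), given the remembered offset of
    the goal from each landmark, with each landmark treated independently."""
    ll = np.zeros_like(xs)
    for (lx, ly), (ox, oy) in zip(landmarks, offsets):
        d2 = (xs - (lx + ox)) ** 2 + (ys - (ly + oy)) ** 2
        ll += -d2 / (2 * sigma**2)
    return ll

# e.g. the goal was remembered as 1 m "south" of each landmark
loglik = endpoint_loglik([(0, -1), (0, -1), (0, -1)])
best = np.unravel_index(np.argmax(loglik), loglik.shape)
print(xs[best], ys[best])   # modal predicted navigation endpoint
```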
Abstract:
Objective: To model the overall and income specific effect of a 20% tax on sugar sweetened drinks on the prevalence of overweight and obesity in the UK. Design: Econometric and comparative risk assessment modelling study. Setting: United Kingdom. Population: Adults aged 16 and over. Intervention: A 20% tax on sugar sweetened drinks. Main outcome measures: The primary outcomes were the overall and income specific changes in the number and percentage of overweight (body mass index ≥25) and obese (≥30) adults in the UK following the implementation of the tax. Secondary outcomes were the effect by age group (16-29, 30-49, and ≥50 years) and by UK constituent country. The revenue generated from the tax and the income specific changes in weekly expenditure on drinks were also estimated. Results: A 20% tax on sugar sweetened drinks was estimated to reduce the number of obese adults in the UK by 1.3% (95% credible interval 0.8% to 1.7%) or 180 000 (110 000 to 247 000) people and the number who are overweight by 0.9% (0.6% to 1.1%) or 285 000 (201 000 to 364 000) people. The predicted reductions in prevalence of obesity for income thirds 1 (lowest income), 2, and 3 (highest income) were 1.3% (0.3% to 2.0%), 0.9% (0.1% to 1.6%), and 2.1% (1.3% to 2.9%). The effect on obesity declined with age. Predicted annual revenue was £276m (£272m to £279m), with estimated increases in total expenditure on drinks for income thirds 1, 2, and 3 of 2.1% (1.4% to 3.0%), 1.7% (1.2% to 2.2%), and 0.8% (0.4% to 1.2%). Conclusions: A 20% tax on sugar sweetened drinks would lead to a reduction in the prevalence of obesity in the UK of 1.3% (around 180 000 people). The greatest effects may occur in young people, with no significant differences between income groups. Both effects warrant further exploration. Taxation of sugar sweetened drinks is a promising population measure to target population obesity, particularly among younger adults.
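The modelling chain (tax → price → consumption → energy intake → steady-state weight) can be sketched with back-of-envelope arithmetic. Every parameter below (pass-through, elasticity, baseline intake, the kcal-per-kg rule) is an illustrative assumption, not an estimate from the study.

```python
# Back-of-envelope sketch of the tax-to-weight modelling chain.
# All parameter values are illustrative assumptions, not the paper's estimates.
pass_through = 1.0          # fraction of the tax passed on to shelf prices
own_price_elasticity = -0.9 # % change in demand per % change in price
baseline_ssd_kcal = 80.0    # kcal/person/day from sugar sweetened drinks
kcal_per_kg_per_day = 27.0  # steady-state kcal/day per kg body weight (Hall-style rule)

price_rise = 0.20 * pass_through
consumption_change = own_price_elasticity * price_rise          # relative change
kcal_change = baseline_ssd_kcal * consumption_change            # kcal/day
weight_change_kg = kcal_change / kcal_per_kg_per_day            # at steady state

print(f"{consumption_change:+.0%} consumption, "
      f"{kcal_change:+.1f} kcal/day, {weight_change_kg:+.2f} kg at steady state")
```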
Abstract:
Communication signal processing applications often involve complex-valued (CV) functional representations for signals and systems. CV artificial neural networks have been studied theoretically and applied widely in nonlinear signal and data processing [1–11]. Note that most artificial neural networks cannot be automatically extended from the real-valued (RV) domain to the CV domain, because the resulting model would in general violate the Cauchy-Riemann conditions, which renders the training algorithms unusable. A number of analytic functions were introduced for fully CV multilayer perceptrons (MLPs) [4]. A fully CV radial basis function (RBF) network was introduced in [8] for regression and classification applications. Alternatively, the problem can be avoided by using two RV artificial neural networks, one processing the real part and the other processing the imaginary part of the CV signal/system. An even more challenging problem is the inverse of a CV
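The "two real-valued networks" alternative mentioned above can be sketched as follows: one RV network produces the real part of the output and another the imaginary part, sidestepping the Cauchy-Riemann issue entirely. The tiny one-hidden-layer MLP here is illustrative, not an implementation from the cited work.

```python
# Sketch of the split real/imaginary approach: two real-valued networks
# jointly model a complex-valued mapping. Untrained, illustrative weights.
import numpy as np

rng = np.random.default_rng(0)

class TinyRVMLP:
    """One-hidden-layer real-valued MLP with a scalar output."""
    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(0, 0.5, (n_hidden, n_in))
        self.w2 = rng.normal(0, 0.5, n_hidden)

    def __call__(self, x):                      # x: real feature vector
        return np.tanh(self.W1 @ x) @ self.w2   # scalar real output

net_re, net_im = TinyRVMLP(4, 8), TinyRVMLP(4, 8)

def cv_model(z):
    """Complex output assembled from two real-valued networks."""
    x = np.concatenate([z.real, z.imag])        # stack parts into an RV vector
    return complex(net_re(x), net_im(x))

z = rng.normal(size=2) + 1j * rng.normal(size=2)
print(cv_model(z))
```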
The capability-affordance model: a method for analysis and modelling of capabilities and affordances
Abstract:
Existing capability models lack qualitative and quantitative means to compare business capabilities. This paper extends previous work and uses affordance theories to model and analyse capabilities consistently. We use the concept of objective and subjective affordances to model a capability as a tuple of a set of resource affordance system mechanisms and action paths, dependent on one or more critical affordance factors. We identify an affordance chain of subjective affordances, by which affordances work together to enable an action, and an affordance path that links action affordances to create a capability system. We define the mechanism and path underlying a capability. We show how affordance modelling notation (AMN) can represent the affordances comprising a capability. We propose a method to compare capabilities quantitatively and qualitatively using efficiency, effectiveness and quality metrics. The method is demonstrated by a medical example comparing the capability of syringe and needle-free anaesthetic systems.
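One possible encoding of a capability as a tuple of resource affordance mechanisms, an action path and critical affordance factors, together with the efficiency/effectiveness/quality comparison the method proposes, is sketched below. Field names and all example numbers are hypothetical, not taken from the paper.

```python
# Hypothetical encoding of capabilities for metric-based comparison.
# Structure and values are illustrative assumptions, not the paper's notation.
from dataclasses import dataclass


@dataclass
class Capability:
    name: str
    mechanisms: tuple          # resource affordance system mechanisms
    action_path: tuple         # ordered action affordances forming the path
    critical_factors: dict     # critical affordance factor -> required condition
    efficiency: float          # e.g. output per unit resource
    effectiveness: float       # e.g. fraction of attempts achieving the goal
    quality: float             # e.g. patient-reported comfort score


def compare(a, b):
    """Side-by-side metric comparison of two capabilities."""
    return {m: (getattr(a, m), getattr(b, m))
            for m in ("efficiency", "effectiveness", "quality")}


syringe = Capability("needle syringe", ("needle", "plunger"),
                     ("pierce", "inject"), {"needle intact": True},
                     efficiency=0.8, effectiveness=0.95, quality=0.6)
jet = Capability("needle-free injector", ("nozzle", "gas spring"),
                 ("press", "inject"), {"pressure above threshold": True},
                 efficiency=0.7, effectiveness=0.90, quality=0.9)
print(compare(syringe, jet))
```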
Abstract:
The complexity of current and emerging high performance architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven performance modelling approach is outlined that is appropriate for modern multicore architectures. The approach is demonstrated by constructing a model of a simple shallow water code on a Cray XE6 system, from application-specific benchmarks that illustrate precisely how architectural characteristics impact performance. The model is found to recreate observed scaling behaviour up to 16K cores, and is used to predict optimal rank-core affinity strategies, exemplifying the type of problem such a model can be used for.
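The flavour of such a benchmark-driven model can be sketched as a per-step cost built from benchmarked constants: a compute term that scales with cells per core and a halo-exchange communication term. All constants below are placeholders standing in for application-specific benchmark results; the cost breakdown is an assumed form, not the paper's model.

```python
# Sketch: benchmark-driven performance model for a 2D-decomposed stencil code.
# All constants are illustrative stand-ins for measured benchmark values.
import math

t_grind = 2.4e-7      # seconds per cell update, from a compute benchmark
cells = 4096 * 4096   # global problem size (cells)
t_lat, t_byte = 2.0e-6, 2.5e-10   # network latency / per-byte cost, benchmarked
halo_bytes = 4 * 4096 * 8         # full-edge halo traffic per step (doubles)

def step_time(p):
    """Predicted time per step on p cores: compute + 4 halo exchanges,
    with halo size shrinking as the 2D decomposition refines."""
    compute = t_grind * cells / p
    comms = 4 * (t_lat + t_byte * halo_bytes / math.isqrt(p))
    return compute + comms

for p in (64, 256, 1024, 4096, 16384):
    print(f"{p:6d} cores: {step_time(p)*1e3:7.3f} ms/step, "
          f"speedup x{step_time(64)/step_time(p):5.1f}")
```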