876 results for 2447: modelling and forecasting


Relevance:

100.00%

Publisher:

Abstract:

We demonstrate a portable process for developing a triple bottom line model to measure the knowledge production performance of individual research centres. For the first time, this study also empirically illustrates how a fully units-invariant model of Data Envelopment Analysis (DEA) can be used to measure the relative efficiency of research centres by capturing the interaction amongst a common set of multiple inputs and outputs. This study is particularly timely given the increasing transparency required by governments and industries that fund research activities. The process highlights the links between organisational objectives, desired outcomes and outputs while the emerging performance model represents an executive managerial view. This study brings consistency to current measures that often rely on ratios and univariate analyses that are not otherwise conducive to relative performance analysis.
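
For readers unfamiliar with DEA, a minimal sketch of the idea follows: an input-oriented CCR model in multiplier form, a standard units-invariant DEA formulation standing in for whichever fully units-invariant variant the study uses, solved as one linear programme per research centre. The centres, inputs and outputs below are invented placeholders, not the study's data.

```python
# Hedged sketch: CCR DEA in multiplier form, one LP per decision-making unit.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Efficiency of unit o, given inputs X (n x m) and outputs Y (n x s)."""
    n, m = X.shape
    _, s = Y.shape
    # Decision variables: output weights u (s values), input weights v (m values).
    c = np.concatenate([-Y[o], np.zeros(m)])           # maximise u.y_o
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]   # normalisation v.x_o == 1
    A_ub = np.hstack([Y, -X])                          # u.y_j - v.x_j <= 0 for all j
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun                                    # efficiency score in (0, 1]

# Toy data: 4 centres, 2 inputs (staff FTE, funding $M), 2 outputs (papers, PhDs).
X = np.array([[5, 1.2], [8, 2.0], [6, 1.5], [4, 0.9]])
Y = np.array([[30, 3], [42, 5], [30, 2], [28, 4]])
print([round(dea_efficiency(X, Y, o), 3) for o in range(len(X))])
```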

Relevance:

100.00%

Publisher:

Abstract:

A complete workflow specification requires careful integration of many different process characteristics. Decisions must be made as to the definitions of individual activities, their scope, the order of execution that maintains the overall business process logic, the rules governing the discipline of work list scheduling to performers, the identification of time constraints, and more. The goal of this paper is to address an important issue in workflow modelling and specification: data flow, its modelling, specification and validation. Researchers have neglected this dimension of process analysis for some time, mainly focussing on structural considerations with limited verification checks. In this paper, we identify and justify the importance of data modelling in overall workflow specification and verification. We illustrate and define several potential data flow problems that, if not detected prior to workflow deployment, may prevent the process from executing correctly, cause it to execute on inconsistent data, or even lead to process suspension. A discussion of the essential requirements of the workflow data model to support data validation is also given.
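
As an illustration of the kind of check the paper argues for, here is a minimal sketch, assuming a purely sequential workflow with invented activity names, that detects one class of data flow problem: an activity reading a data item that no earlier activity has written.

```python
# Hedged sketch of a "missing data" check for a sequential workflow.
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    reads: set = field(default_factory=set)
    writes: set = field(default_factory=set)

def missing_data_errors(sequence):
    """Report (activity, item) pairs where a read has no earlier write."""
    available, errors = set(), []
    for act in sequence:
        errors += [(act.name, item) for item in sorted(act.reads - available)]
        available |= act.writes
    return errors

wf = [Activity("receive_order", writes={"order"}),
      Activity("check_credit", reads={"order", "credit_report"}, writes={"approval"}),
      Activity("ship", reads={"order", "approval"})]
print(missing_data_errors(wf))   # [('check_credit', 'credit_report')]
```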

Relevance:

100.00%

Publisher:

Abstract:

New tools derived from advances in molecular biology have not been widely adopted in plant breeding because of the inability to connect information at the gene level to the phenotype in a manner that is useful for selection. We explore whether a crop growth and development modelling framework can link phenotype complexity to underlying genetic systems in a way that strengthens molecular breeding strategies. We use gene-to-phenotype simulation studies on sorghum to consider the value to marker-assisted selection of intrinsically stable QTLs that might be generated by physiological dissection of complex traits. The consequences for grain yield of genetic variation in four key adaptive traits – phenology, osmotic adjustment, transpiration efficiency, and staygreen – were simulated for a diverse set of environments by placing the known extent of genetic variation in the context of the physiological determinants framework of a crop growth and development model. It was assumed that the three to five genes associated with each trait had two alleles per locus acting in an additive manner. The effects on average simulated yield, generated by differing combinations of positive alleles for the traits incorporated, varied with environment type. The full matrix of simulated phenotypes, which consisted of 547 location-season combinations and 4235 genotypic expression states, was analysed for genetic and environmental effects. The analysis was conducted in stages with gradually increasing understanding of gene-to-phenotype relationships, such as would arise from physiological dissection and modelling. It was found that environmental characterisation and physiological knowledge helped to explain and unravel gene and environment context dependencies. We simulated a marker-assisted selection (MAS) breeding strategy based on the analyses of gene effects. When marker scores were allocated based on the contribution of gene effects to yield in a single environment, there was a wide divergence in the rate of yield gain over all environments with breeding cycle, depending on the environment chosen for the QTL analysis. It was suggested that knowledge resulting from trait physiology and modelling would overcome this dependency by identifying stable QTLs. The improved predictive power would increase the utility of the QTLs in MAS. Developing and implementing this gene-to-phenotype capability in crop improvement requires enhanced attention to phenotyping, ecophysiological modelling, and validation studies to test the stability of candidate QTLs.
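
A minimal sketch of the additive genetic assumption described above, as one might set it up in code; the per-trait locus counts and the unit effect size are illustrative assumptions, and the crop model's mapping from trait values to yield is not reproduced here.

```python
# Hedged sketch: enumerate genotypic expression states under an additive model.
import itertools

# Assumed locus counts per trait (the paper states three to five genes each).
LOCI_PER_TRAIT = {"phenology": 4, "osmotic_adjustment": 3,
                  "transpiration_efficiency": 3, "staygreen": 5}

def trait_value(alleles, effect=1.0):
    """Additive model: trait value = count of positive alleles x effect size."""
    return sum(alleles) * effect

def enumerate_states(n_loci):
    """All 2^n expression states; 0 = negative allele, 1 = positive allele."""
    return list(itertools.product([0, 1], repeat=n_loci))

states = enumerate_states(LOCI_PER_TRAIT["phenology"])
print(len(states), trait_value(states[-1]))   # 16 states; all-positive value 4.0
# Crossing the states of all four traits yields the large genotype space from
# which the study's 4235 analysed expression states are drawn.
```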

Relevance:

100.00%

Publisher:

Abstract:

In a deregulated electricity market, modelling and forecasting the spot price present a number of challenges. By applying wavelet and support vector machine techniques, a new time series model for short-term electricity price forecasting is developed in this paper. The model employs both historical prices and other important information, such as load capacity and weather (temperature), to forecast the price one or more time steps ahead. The developed model has been evaluated with actual data from the Australian National Electricity Market. The simulation results demonstrate that the model is capable of forecasting the electricity price with reasonable accuracy.
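
A hedged sketch of such a pipeline, not the paper's exact model: a discrete wavelet transform conditions the price series, and a support vector regressor maps lagged values plus exogenous load and temperature features to the next price. The wavelet, lag depth and SVR settings are assumptions, and note the caveat that smoothing the whole series in one pass leaks future information; a faithful implementation would transform each training window separately.

```python
# Hedged sketch: wavelet pre-processing + SVR for one-step price forecasting.
import numpy as np
import pywt
from sklearn.svm import SVR

rng = np.random.default_rng(0)
price = np.sin(np.linspace(0, 20, 512)) + 0.1 * rng.standard_normal(512)
load = 0.5 * price + 0.1 * rng.standard_normal(512)   # stand-in exogenous series
temp = rng.standard_normal(512)                        # stand-in temperature

coeffs = pywt.wavedec(price, "db4", level=2)           # level-2 DWT of the price
coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]    # drop detail bands (denoise)
smooth = pywt.waverec(coeffs, "db4")[: len(price)]

LAGS = 24                                              # assumed lag depth
X = np.array([np.r_[smooth[t - LAGS:t], load[t - 1], temp[t - 1]]
              for t in range(LAGS, len(price))])
y = price[LAGS:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:-48], y[:-48])
pred = model.predict(X[-48:])                          # forecasts for held-out tail
```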

Relevance:

100.00%

Publisher:

Abstract:

Today, portable devices have become the driving force of the consumer market, and new challenges are emerging to increase their performance while maintaining a reasonable battery life. The digital domain is the best solution for implementing signal-processing functions, thanks to the scalability of CMOS technology, which pushes towards sub-micrometre integration. Indeed, the reduction of the supply voltage introduces severe limitations on achieving an acceptable dynamic range in the analogue domain. Lower cost, lower power consumption, higher yield and greater reconfigurability are the main advantages of signal processing in the digital domain. For more than a decade, several purely analogue functions have been moved into the digital domain. This means that analogue-to-digital converters (ADCs) are becoming the key components in many electronic systems. They are, in fact, the bridge between the analogue and digital worlds, and consequently their efficiency and accuracy often determine the overall performance of the system. Sigma-Delta converters are the key building block for interfaces in high-resolution, low-power mixed-signal circuits. Modelling and simulation tools are effective and essential instruments in the design flow. Although transistor-level simulations give more precise and accurate results, this method is extremely time-consuming because of the oversampling nature of this type of converter. For this reason, high-level behavioural models of the modulator are essential for the designer to run fast simulations that identify the specifications the converter needs in order to achieve the required performance. The objective of this thesis is the behavioural modelling of the Sigma-Delta modulator, taking into account several non-idealities such as the integrator dynamics and its thermal noise. Transistor-level simulation results and experimental data demonstrate that the proposed model is precise and accurate with respect to behavioural simulations.
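
A minimal behavioural sketch in the spirit of such a model: a first-order, 1-bit Sigma-Delta modulator with two of the cited non-idealities folded in, finite integrator DC gain (modelled as leakage) and input-referred thermal noise. The thesis models a specific modulator; every parameter value here is illustrative.

```python
# Hedged sketch: first-order Sigma-Delta behavioural model with non-idealities.
import numpy as np

def sigma_delta(x, leak=0.999, noise_rms=1e-4, seed=0):
    """Leaky integrator + 1-bit quantiser in a feedback loop."""
    rng = np.random.default_rng(seed)
    acc, prev = 0.0, 0.0
    out = np.empty_like(x)
    for n, xn in enumerate(x):
        # leak < 1 models finite integrator DC gain; the noise term stands in
        # for input-referred thermal (kT/C) noise sampled each clock cycle.
        acc = leak * acc + xn + rng.normal(0.0, noise_rms) - prev
        out[n] = prev = 1.0 if acc > 0 else -1.0
    return out

t = np.arange(8192)
bits = sigma_delta(0.5 * np.sin(2 * np.pi * t / 256))  # oversampled sine input
# An FFT of `bits` would show the shaped quantisation-noise floor degraded by
# the leak and thermal noise, which is what such behavioural models quantify.
```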

Relevance:

100.00%

Publisher:

Abstract:

This thesis is a study of three techniques to improve the performance of some standard forecasting models, applied to energy demand and prices. We focus on forecasting demand and price one day ahead. First, the wavelet transform was used as a pre-processing procedure with two approaches: multicomponent forecasts and direct forecasts. We empirically compared these approaches and found that the former consistently outperformed the latter. Second, adaptive models were introduced to continuously update model parameters in the testing period by combining filters with standard forecasting methods. Among these adaptive models, the adaptive LR-GARCH model is proposed for the first time in this thesis. Third, with regard to the noise distributions of the dependent variables in the forecasting models, we used either Gaussian or Student-t distributions. The thesis proposes a novel algorithm to infer the parameters of Student-t noise models. The method is an extension of earlier work for models that are linear in their parameters to the non-linear multilayer perceptron, and therefore broadens the range of models that can use a Student-t noise distribution. Because these techniques cannot stand alone, they must be combined with prediction models to improve performance. We combined them with several standard forecasting models: the multilayer perceptron, radial basis functions, linear regression, and linear regression with GARCH. These techniques and forecasting models were applied to two datasets from the UK energy markets: daily electricity demand (which is stationary) and gas forward prices (non-stationary). The results showed that these techniques provide a good improvement in prediction performance.
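
To make the third technique concrete, here is a hedged sketch, not the thesis's algorithm: fitting the scale and degrees of freedom of a Student-t noise model to forecast residuals by maximum likelihood. The thesis infers these jointly with the weights of a non-linear model; here the residuals of a fixed model are fitted with scipy.

```python
# Hedged sketch: maximum-likelihood fit of Student-t noise to residuals.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(1)
residuals = stats.t.rvs(df=4, scale=0.3, size=500, random_state=rng)  # stand-in

def neg_log_lik(theta):
    log_df, log_scale = theta          # log-parametrise to keep both positive
    return -stats.t.logpdf(residuals, df=np.exp(log_df),
                           scale=np.exp(log_scale)).sum()

fit = minimize(neg_log_lik, x0=[np.log(10.0), 0.0], method="Nelder-Mead")
df_hat, scale_hat = np.exp(fit.x)
# A small fitted df signals heavy tails, where Gaussian noise would under-weight
# the spikes typical of non-stationary series such as gas forward prices.
```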

Relevance:

100.00%

Publisher:

Abstract:

Multiscale systems that are characterized by a great range of spatial–temporal scales arise widely in many scientific domains. These range from the study of protein conformational dynamics to multiphase processes in, for example, granular media or haemodynamics, and from nuclear reactor physics to astrophysics. Despite the diversity in subject areas and terminology, there are many common challenges in multiscale modelling, including validation and design of tools for programming and executing multiscale simulations. This Theme Issue seeks to establish common frameworks for theoretical modelling, computing and validation, and to help practical applications to benefit from the modelling results. This Theme Issue has been inspired by discussions held during two recent workshops in 2013: ‘Multiscale modelling and simulation’ at the Lorentz Center, Leiden (http://www.lorentzcenter.nl/lc/web/2013/569/info.php3?wsid=569&venue=Snellius), and ‘Multiscale systems: linking quantum chemistry, molecular dynamics and microfluidic hydrodynamics’ at the Royal Society Kavli Centre. The objective of both meetings was to identify common approaches for dealing with multiscale problems across different applications in fluid and soft matter systems. This was achieved by bringing together experts from several diverse communities.

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, due to regulation and internal motivations, financial institutions attend more closely to their risks. Besides the previously dominant market and credit risks, the new trend is to handle operational risk systematically. Operational risk is the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events. We first present the basic features of operational risk and its modelling and regulatory approaches, and then analyse operational risk within a simulation model framework of our own development. Our approach is based on the analysis of a latent risk process instead of the manifest risk process that is widely popular in the risk literature. In our model the latent risk process is a stochastic process, the so-called Ornstein-Uhlenbeck process, which is a mean-reverting process. In the model framework we define a catastrophe as a breach of a critical barrier by the process. We analyse the distributions of catastrophe frequency, severity and first hitting time, not only for a single process but for a dual process as well. Based on our first results we could not falsify the Poisson character of the frequency or the long-tailed character of the severity; the distribution of the first hitting time requires more sophisticated analysis. At the end of the paper we examine the advantages of simulation-based forecasting, and we conclude with possible directions for further research.
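
A minimal sketch of the latent-risk construction: an Ornstein-Uhlenbeck path simulated by the Euler-Maruyama scheme, with a catastrophe recorded whenever the path breaches a critical barrier. All parameter values, and the reset-after-breach rule, are illustrative assumptions rather than the paper's calibration.

```python
# Hedged sketch: OU latent risk process with barrier-breach "catastrophes".
import numpy as np

def ou_catastrophes(theta=1.0, mu=0.0, sigma=0.5, barrier=1.0,
                    T=100.0, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x, hits, first_hit = mu, 0, None
    for i in range(int(T / dt)):
        # dX = theta*(mu - X) dt + sigma dW: mean-reverting latent risk.
        x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if x >= barrier:
            hits += 1
            first_hit = i * dt if first_hit is None else first_hit
            x = mu                       # assumed reset after a catastrophe
    return hits, first_hit

freqs = [ou_catastrophes(seed=s)[0] for s in range(200)]
# The empirical distribution of `freqs` can be tested against a Poisson fit,
# mirroring the paper's (un-falsified) Poisson hypothesis for frequency.
```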

Relevance:

100.00%

Publisher:

Abstract:

The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components consisting of an atmospheric model, an ocean model and a land-ice model. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address shortcomings, with respect to parallel scalability, numerical accuracy and physical consistency, of global models on regular grids and of limited-area models nested in a forcing data set. This concept allows one to include the feedback of regional land use information on weather and climate at local and global scales in a consistent way, which is impossible to achieve with traditional limited-area modelling approaches. Here, we present an in-depth evaluation of MPAS with regard to the technical aspects of performing model runs and scalability for three medium-size meshes on four different high-performance computing (HPC) sites with different architectures and compilers. We uncover model limitations and identify new aspects of model optimisation that are introduced by the use of unstructured Voronoi meshes. We further demonstrate the performance of MPAS in terms of its capability to reproduce the dynamics of the West African monsoon (WAM) and its associated precipitation in a pilot study. Constrained by available computational resources, we compare 11-month runs for two meshes with observations and a reference simulation from the Weather Research and Forecasting (WRF) model. We show that MPAS can reproduce the atmospheric dynamics on global and local scales in this experiment, but identify a precipitation excess for the West African region. Finally, we conduct extreme scaling tests on a global 3 km mesh with more than 65 million horizontal grid cells on up to half a million cores. We discuss necessary modifications of the model code to improve its parallel performance in general and specifically for the HPC environment. We confirm good scaling (70% parallel efficiency or better) of the MPAS model and provide numbers on the computational requirements for experiments with the 3 km mesh. In doing so, we show that global, convection-resolving atmospheric simulations with MPAS are within reach of current and next generations of high-end computing facilities.
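
For reference, the efficiency figure quoted above follows the usual strong-scaling definition; a minimal sketch with invented timings, not the MPAS benchmark numbers:

```python
# Hedged sketch: parallel efficiency relative to the smallest core count.
def parallel_efficiency(cores, runtimes):
    """E(p) = (p0 * t0) / (p * t_p), with (p0, t0) the baseline run."""
    p0, t0 = cores[0], runtimes[0]
    return [p0 * t0 / (p * t) for p, t in zip(cores, runtimes)]

cores = [16384, 65536, 262144, 524288]            # illustrative core counts
runtimes = [1000.0, 260.0, 70.0, 40.0]            # invented wall-clock seconds
for p, e in zip(cores, parallel_efficiency(cores, runtimes)):
    print(f"{p:>7} cores: {e:.1%}")               # all stay above the 70% mark
```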

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, making digital 3D models and replicas of cultural heritage assets plays an important role in preservation and provides a highly detailed source for future research and intervention. This dissertation assesses different methods for digitally surveying and making 3D replicas of cultural heritage assets at different scales of size. The methodologies vary in devices, software, workflow, and the amount of skill required. The three phases of the 3D modelling process are data acquisition, modelling, and model presentation. Each of these phases is divided into sub-sections, and there are several approaches, methods, devices, and software packages that may be employed; the selection should be based on the goal of the operation, the available facilities, the scale and properties of the object or structure to be modelled, and the operators' expertise and experience. The key point to remember is that the 3D modelling operation should be suitably accurate, precise, and reliable, and there is extensive guidance on how to perform it effectively. The various options for each phase are compared and evaluated to explain and demonstrate their differences, benefits, and drawbacks, so as to serve as a simple guide for new and/or inexperienced users.

Relevance:

100.00%

Publisher:

Abstract:

Thin hard coatings on components and tools are used increasingly due to rapid developments in deposition techniques, tribological performance and application skills. The residual stresses in a coated surface are crucial for its tribological performance. Compressive residual stresses in PVD-deposited TiN and DLC coatings were measured to be in the range of 0.03-4 GPa on steel substrates and 0.1-1.3 GPa on silicon. MoS2 coatings had tensile stresses in the range of 0.8-1.3 GPa on steel and compressive stresses of 0.16 GPa on silicon. The fracture patterns of coatings deposited on steel substrates were analysed both in bend testing and scratch testing. Micro-scale finite element method (FEM) modelling and stress simulation of a 2 µm TiN-coated steel surface were carried out and showed a reduction of the generated tensile buckling stresses in front of the sliding tip when compressive residual stresses of 1 GPa were included in the model. However, this reduction is not similarly observed in the scratch groove behind the tip, possibly due to sliding contact-induced stress relaxation. Scratch and bending tests allowed calculation of the fracture toughness of the three coated surfaces, based on both empirical crack pattern observations and FEM stress calculations; this gave the highest values for the TiN coating, followed by the MoS2 and DLC coatings, with K_C = 4-11, about 2, and 1-2 MPa·m^1/2, respectively. Higher compressive residual stresses in the coating and a higher elastic modulus of the coating correlated with increased fracture toughness of the coated surface.
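
As a rough cross-check of the magnitudes reported, a first-order fracture mechanics estimate, not the paper's FEM-based method, treats a through-thickness crack under the fracture stress, K_C ≈ σ_f·sqrt(π·a), with the crack depth a taken as the coating thickness; the values below are illustrative.

```python
# Hedged sketch: order-of-magnitude K_C estimate for a thin coating.
import math

def k_c(sigma_f_gpa, crack_depth_um):
    """K_C in MPa*m^0.5 from fracture stress (GPa) and crack depth (um)."""
    sigma_mpa = sigma_f_gpa * 1e3                 # GPa -> MPa
    a_m = crack_depth_um * 1e-6                   # um -> m
    return sigma_mpa * math.sqrt(math.pi * a_m)

# A 2 um coating fracturing at an assumed 2 GPa gives ~5 MPa*m^0.5, inside the
# 4-11 MPa*m^0.5 range reported for TiN.
print(f"{k_c(2.0, 2.0):.1f} MPa*m^0.5")
```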

Relevance:

100.00%

Publisher:

Abstract:

Modelling and simulation studies were carried out on 26 cement clinker grinding circuits, including tube mills, air separators and high-pressure grinding rolls, in 8 plants. The results reported earlier showed that tube mills can be modelled as several mills in series, and that the internal partition in tube mills can be modelled as a screen which must retain coarse particles in the first compartment but not impede the flow of drying air. In this work the modelling has been extended to show that the Tromp curve, which describes separator (classifier) performance, can be modelled in terms of d50(corr), by-pass, the fish-hook, and the sharpness of the curve. Also, the high-pressure grinding rolls model developed at the Julius Kruttschnitt Mineral Research Centre gives satisfactory predictions using a breakage function derived from impact and compressed-bed tests. Simulation studies of a full plant incorporating a tube mill, HPGR and separators showed that the models could successfully predict the performance of another mill working under different conditions. The simulation capability can therefore be used for process optimisation and design.
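
A minimal sketch of a Tromp-curve model with most of the features named above, the corrected cut size d50(corr), by-pass and sharpness, using the common exponential (Plitt-type) form; the fish-hook would need an extra term (e.g. Whiten's beta parameter) and is omitted here. All parameter values are illustrative.

```python
# Hedged sketch: Tromp curve = by-pass + (1 - by-pass) * corrected curve.
import numpy as np

def tromp(d, d50c=30.0, bypass=0.35, sharpness=2.5):
    """Fraction of size-d feed (microns) reporting to the coarse stream."""
    # The corrected curve passes through 0.5 exactly at d = d50c.
    corrected = 1.0 - np.exp(-np.log(2.0) * (d / d50c) ** sharpness)
    return bypass + (1.0 - bypass) * corrected

sizes = np.array([1.0, 5.0, 10.0, 30.0, 60.0, 120.0])   # microns
print(np.round(tromp(sizes), 3))   # rises from ~bypass towards 1 with size
```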

Relevance:

100.00%

Publisher:

Abstract:

Objectives: To compare the population modelling programs NONMEM and P-PHARM during an investigation of the pharmacokinetics of tacrolimus in paediatric liver-transplant recipients. Methods: Population pharmacokinetic analysis was performed using NONMEM and P-PHARM on retrospective data from 35 paediatric liver-transplant patients receiving tacrolimus therapy. The same data were presented to both programs. Maximum likelihood estimates were sought for apparent clearance (CL/F) and apparent volume of distribution (V/F). Covariates screened for influence on these parameters were weight, age, gender, post-operative day, days of tacrolimus therapy, transplant type, biliary reconstructive procedure, liver function tests, creatinine clearance, haematocrit, corticosteroid dose, and potentially interacting drugs. Results: A satisfactory model was developed in both programs with a single categorical covariate, transplant type, providing stable parameter estimates and small, normally distributed (weighted) residuals. In NONMEM, the continuous covariates age and liver function tests improved the model further. Mean parameter estimates were CL/F (whole liver) = 16.3 l/h, CL/F (cut-down liver) = 8.5 l/h and V/F = 565 l in NONMEM, and CL/F = 8.3 l/h and V/F = 155 l in P-PHARM. Individual Bayesian parameter estimates were CL/F (whole liver) = 17.9 +/- 8.8 l/h, CL/F (cut-down liver) = 11.6 +/- 18.8 l/h and V/F = 712 +/- 792 l in NONMEM, and CL/F (whole liver) = 12.8 +/- 3.5 l/h, CL/F (cut-down liver) = 8.2 +/- 3.4 l/h and V/F = 221 +/- 164 l in P-PHARM. Marked interindividual kinetic variability (38-108%) and residual random error (approximately 3 ng/ml) were observed. P-PHARM was more user-friendly and readily provided informative graphical presentation of results. NONMEM allowed a wider choice of error models for statistical modelling and coped better with complex covariate data sets. Conclusion: Results from parametric modelling programs can vary because of the different algorithms employed to estimate parameters, alternative methods of covariate analysis, and variations and limitations in the software itself.
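
To connect the reported parameters to a prediction, a hedged sketch follows: a one-compartment model with first-order absorption (the Bateman equation), using CL/F and V/F values of the kind estimated above. The absorption rate constant, bioavailability and dosing scenario are invented for illustration, and this is not either program's fitted model.

```python
# Hedged sketch: one-compartment oral PK profile from CL/F and V/F.
import numpy as np

def conc_oral(t_h, dose_mg, cl_f=16.3, v_f=565.0, ka=1.0, f_bio=1.0):
    """Bateman equation; concentrations returned in mg/l."""
    ke = cl_f / v_f                                # elimination rate constant, 1/h
    return (f_bio * dose_mg * ka) / (v_f * (ka - ke)) * \
           (np.exp(-ke * t_h) - np.exp(-ka * t_h))

t = np.linspace(0.0, 12.0, 49)                     # hours after an assumed 5 mg dose
c_ng_ml = conc_oral(t, dose_mg=5.0) * 1000.0       # mg/l -> ng/ml
print(f"peak ~{c_ng_ml.max():.1f} ng/ml at {t[c_ng_ml.argmax()]:.1f} h")
```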