897 results for 2447: modelling and forecasting
Abstract:
Using vector autoregressive (VAR) models and Monte Carlo simulation methods, we investigate the potential gains for forecasting accuracy and estimation uncertainty of two commonly used restrictions arising from economic relationships. The first reduces the parameter space by imposing long-term restrictions on the behavior of economic variables, as discussed by the literature on cointegration, and the second reduces the parameter space by imposing short-term restrictions, as discussed by the literature on serial-correlation common features (SCCF). Our simulations cover three important issues in model building, estimation, and forecasting. First, we examine the performance of standard and modified information criteria in choosing the lag length for cointegrated VARs with SCCF restrictions. Second, we compare the forecasting accuracy of fitted VARs when only cointegration restrictions are imposed and when cointegration and SCCF restrictions are jointly imposed. Third, we propose a new estimation algorithm in which short- and long-term restrictions interact to estimate the cointegrating and the cofeature spaces, respectively. We have three basic results. First, ignoring SCCF restrictions has a high cost in terms of model selection, because standard information criteria too frequently choose inconsistent models with too small a lag length. Criteria that select lag and rank simultaneously have a superior performance in this case. Second, this translates into a superior forecasting performance of the restricted VECM over the VECM, with important improvements in forecasting accuracy reaching more than 100% in extreme cases. Third, the new algorithm proposed here fares very well in terms of parameter estimation, even when we consider the estimation of long-term parameters, opening up the discussion of joint estimation of short- and long-term parameters in VAR models.
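The comparison described above can be prototyped with off-the-shelf tools. The sketch below is purely illustrative and is not the authors' code: it fits an unrestricted VAR in levels and a cointegration-restricted VECM on simulated data and compares their out-of-sample accuracy. SCCF restrictions are not available in statsmodels and are omitted here, and the simulated series, lag choices and sample split are placeholder assumptions.

```python
# Hedged sketch: compare out-of-sample forecasts of a VAR in levels
# against a cointegration-restricted VECM (SCCF restrictions are not
# available in statsmodels and are therefore omitted).
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM, select_order, select_coint_rank

rng = np.random.default_rng(0)
# Placeholder data: a random walk plus two noisy observations of it,
# so the three series share one common stochastic trend.
trend = np.cumsum(rng.normal(size=400))
data = np.column_stack([trend + rng.normal(scale=0.5, size=400) for _ in range(3)])
train, test = data[:380], data[380:]

# Lag selection for the VECM (lags in differences) via information criteria.
lags = select_order(train, maxlags=8, deterministic="co")
rank = select_coint_rank(train, det_order=0, k_ar_diff=lags.aic)

# Restricted model: VECM with the selected cointegrating rank.
vecm_res = VECM(train, k_ar_diff=lags.aic, coint_rank=rank.rank,
                deterministic="co").fit()
vecm_fc = vecm_res.predict(steps=len(test))

# Unrestricted benchmark: VAR in levels with a comparable lag length.
var_res = VAR(train).fit(lags.aic + 1)
var_fc = var_res.forecast(train[-var_res.k_ar:], steps=len(test))

for name, fc in [("VECM", vecm_fc), ("VAR", var_fc)]:
    print(name, "RMSE:", np.sqrt(np.mean((fc - test) ** 2)))
```

The `coint_rank` argument is where the long-term restriction enters the restricted model; the benchmark VAR in levels ignores it.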
Abstract:
This work presents and discusses the main topics involved in the design of a mobile robot system, focusing on the control and navigation systems for autonomous mobile robots. It introduces the main aspects of robot design, offering a holistic view of all steps of the development process of an autonomous mobile robot, and discusses the problems related to the conceptualization of the mobile robot's physical structure and its relation to the world. It then presents the dynamic and control analysis for navigation robots, with kinematic and dynamic models, and finally presents applications for a robotic platform for Automation, Simulation, Control and Supervision of Mobile Robots Navigation, with studies of dynamic and kinematic modelling, control algorithms, mechanisms for mapping and localization, trajectory planning and the platform simulator. © 2012 Praise Worthy Prize S.r.l. - All rights reserved.
Abstract:
The multi-scale synoptic circulation system in the southeastern Brazil (SEBRA) region is presented using a feature-oriented approach. Prevalent synoptic circulation structures, or "features," are identified from previous observational studies. These features include the southward-flowing Brazil Current (BC), the eddies off Cabo Sao Tome (CST, 22 degrees S) and off Cabo Frio (CF, 23 degrees S), and the upwelling region off CF and CST. Their synoptic water-mass structures are characterized and parameterized to develop temperature-salinity (T-S) feature models. Following the methodology of [Gangopadhyay, A., Robinson, A.R., Haley, P.J., Leslie, W.J., Lozano, C.J., Bisagni, J., Yu, Z., 2003. Feature-oriented regional modeling and simulation (FORMS) in the Gulf of Maine and Georges Bank. Cont. Shelf Res. 23 (3-4), 317-353], a synoptic initialization scheme for feature-oriented regional modeling and simulation (FORMS) of the circulation in this region is then developed. First, the temperature and salinity feature-model profiles are placed on a regional circulation template and objectively analyzed with available background climatology in the deep region. These initialization fields are then used for dynamical simulations via the Princeton Ocean Model (POM). A few first applications of this methodology are presented in this paper. These include the BC meandering, the BC-eddy interaction and the meander-eddy-upwelling system (MEUS) simulations. Preliminary validation results include realistic wave growth, eddy formation and sustained upwelling. Our future plans include the application of these feature models with satellite and in-situ data and advanced data-assimilation schemes for nowcasting and forecasting in the SEBRA region. (c) 2008 Elsevier Ltd. All rights reserved.
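To make the initialization step concrete, the sketch below (a toy example, not the FORMS code) places a warm-core eddy feature-model temperature section on a template and blends it with a background climatology using a simple Gaussian weighting; the profile shapes, eddy parameters and decay scales are all invented placeholders.

```python
# Hedged sketch: blend a synthetic feature-model temperature section with a
# background climatology using a Gaussian-weighted, objective-analysis-style blend.
import numpy as np

depth = np.linspace(0, 1000, 51)          # m
lon = np.linspace(-45, -35, 101)          # degrees, placeholder section

# Background climatology: smooth exponential stratification (assumption).
clim = 4.0 + 20.0 * np.exp(-depth / 300.0)
background = np.tile(clim[:, None], (1, lon.size))

# Feature model: a warm-core eddy parameterised by centre, radius and anomaly.
eddy_lon, eddy_radius, eddy_anom = -40.0, 1.0, 3.0   # placeholder values
anomaly = eddy_anom * np.exp(-((lon - eddy_lon) / eddy_radius) ** 2)
feature = background + anomaly[None, :] * np.exp(-depth[:, None] / 500.0)

# Blend: the feature model dominates near the eddy, the climatology far away.
weight = np.exp(-((lon - eddy_lon) / (2 * eddy_radius)) ** 2)
initial_T = weight[None, :] * feature + (1 - weight[None, :]) * background

print(initial_T.shape, initial_T.max().round(2), initial_T.min().round(2))
```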
Abstract:
This poster shows the first attempt to model the Gran Canaria Island wake, an obstacle of almost conical shape (60 km diameter and about 2000 m height). The leeside circulation was simulated for two well-defined vortex street cases during June 2010 and March 2011. Numerical simulations of these events were carried out using version 3.1.1 of the Weather Research and Forecasting (WRF-ARW) Model. Three domains with 4.5-km, 1.5-km and 0.5-km horizontal grid spacing and 70 vertical sigma levels were defined. The simulations were performed using two-way interactive nesting between the first and the second and third domains, with different land surface model parameterizations (thermal diffusion, Noah LSM and RUC) used for comparison. Initial conditions were provided by the NCAR Dataset analysis from April 2007. The poster focuses on both episodes using the Noah LSM parameterization.
Abstract:
Simulating surface wind over complex terrain is a challenge in regional climate modelling. Therefore, this study aims at identifying a set-up of the Weather Research and Forecasting (WRF) model that minimises systematic errors of surface winds in hindcast simulations. Major factors of the model configuration are tested to find a suitable set-up: the horizontal resolution, the planetary boundary layer (PBL) parameterisation scheme and the way WRF is nested to the driving data set. Hence, a number of sensitivity simulations at a spatial resolution of 2 km are carried out and compared to observations. Given the importance of wind storms, the analysis is based on case studies of 24 historical wind storms that caused great economic damage in Switzerland. Each of these events is downscaled using eight different model set-ups, all sharing the same driving data set. The results show that the lack of representation of the unresolved topography leads to a general overestimation of wind speed in WRF. However, this bias can be substantially reduced by using a PBL scheme that explicitly considers the effects of non-resolved topography, which also improves the spatial structure of wind speed over Switzerland. The wind direction, although generally well reproduced, is not very sensitive to the PBL scheme. Further sensitivity tests include four types of nesting methods: nesting only at the boundaries of the outermost domain, analysis nudging, spectral nudging, and the so-called re-forecast method, where the simulation is frequently restarted. These simulations show that restricting the freedom of the model to develop large-scale disturbances slightly increases the temporal agreement with the observations, while further reducing the overestimation of wind speed, especially for maximum wind peaks. The model performance is also evaluated in the outermost domains, where the resolution is coarser. The results demonstrate the important role of horizontal resolution, where the step from 6 to 2 km significantly improves model performance. In summary, the combination of a grid size of 2 km, the non-local PBL scheme modified to explicitly account for non-resolved orography, and analysis or spectral nudging is a superior combination when dynamical downscaling is aimed at reproducing real wind fields.
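For illustration only, the kind of verification statistics used above to rank set-ups against station observations (systematic bias, RMSE, temporal correlation, peak error) can be computed as in the sketch below; the observation and simulation arrays are placeholders, not data from the study.

```python
# Hedged sketch: wind-speed bias and temporal agreement between a downscaled
# simulation and station observations for one storm case (invented data).
import numpy as np

rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=6.0, size=240)            # hourly observed wind speed (m/s)
sim = obs * 1.15 + rng.normal(scale=2.0, size=obs.size)    # simulation with an assumed positive bias

bias = np.mean(sim - obs)                                  # systematic overestimation
rmse = np.sqrt(np.mean((sim - obs) ** 2))
corr = np.corrcoef(sim, obs)[0, 1]                         # temporal agreement
peak_error = sim.max() - obs.max()                         # error in the maximum wind peak

print(f"bias={bias:.2f} m/s  rmse={rmse:.2f} m/s  r={corr:.2f}  peak error={peak_error:.2f} m/s")
```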
Abstract:
The modelling of critical infrastructures (CIs) is an important issue that needs to be properly addressed, for several reasons. It is a basic support for making decisions about operation and risk reduction. It might help in understanding high-level states at the system-of-systems layer, which are not readily evident to the organisations that manage the lower-level technical systems. Moreover, it is also indispensable for setting a common reference between operators and authorities, and for agreeing on the incident scenarios that might affect those infrastructures. So far, critical infrastructures have been modelled ad hoc, on the basis of knowledge and practice derived from less complex systems. As there is no theoretical framework, most of these efforts proceed without clear guides and goals, using informally defined schemas based mostly on boxes and arrows. Different CIs (electricity grid, telecommunications networks, emergency support, etc.) have been modelled using particular schemas that were not directly translatable from one CI to another. If there is a desire to build a science of CIs, it is because there are observable commonalities that different CIs share. Up until now, however, those commonalities have not been adequately compiled or categorized, so building models of CIs rooted in such commonalities was not possible. This report explores which elements underlie every CI and how those elements can be used to develop a modelling language that will enable CI modelling and, subsequently, analysis of CI interactions, with a special focus on resilience.
Abstract:
In this paper we present a solution for building a better strategy for taking part in external electricity markets. For an optimal strategy development, both the internal system costs and the future values of the series of electricity prices in external markets need to be known. In practice, however, both future electricity prices and costs are unknown: the prices must be modeled and forecast, and the costs must be calculated. Our methodology for building an optimal strategy consists of three steps. The first step is modeling and forecasting market prices in external systems. The second step is the cost calculation in the internal system, taking into account the prices expected in the first step. The third step builds on the results of the previous steps and consists of preparing the bids for external markets. The main goal is to reduce consumers' costs, unlike many other approaches that are oriented towards increasing GenCos' profits.
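A minimal sketch of such a three-step pipeline is given below. It is not the authors' method: the AR(1)-style price forecaster, the quadratic internal cost function and the fixed-block bid rule are placeholder assumptions meant only to show how the three steps feed into one another.

```python
# Hedged sketch of the three-step pipeline: (1) forecast external market
# prices, (2) compute internal supply costs, (3) prepare bids where the
# external price is expected to beat the internal marginal cost.
import numpy as np

rng = np.random.default_rng(2)
prices = 50 + np.cumsum(rng.normal(scale=1.5, size=168))   # past hourly prices, placeholder

# Step 1: naive AR(1)-style forecast of the next 24 hourly prices.
phi = np.corrcoef(prices[:-1], prices[1:])[0, 1]
mean = prices.mean()
forecast = [mean + phi * (prices[-1] - mean)]
for _ in range(23):
    forecast.append(mean + phi * (forecast[-1] - mean))
forecast = np.array(forecast)

# Step 2: internal cost of the energy that could be exported
# (assumed quadratic in the exported quantity, placeholder parameters).
def internal_cost(q_mwh: float) -> float:
    return 30.0 * q_mwh + 0.05 * q_mwh ** 2

# Step 3: bid a fixed block in the hours where the forecast external price
# exceeds the internal marginal cost of that block.
block = 10.0  # MWh, placeholder
marginal_cost = (internal_cost(block) - internal_cost(0.0)) / block
bids = [(hour, block, round(p, 2)) for hour, p in enumerate(forecast) if p > marginal_cost]
print(f"marginal cost={marginal_cost:.2f}, bidding in {len(bids)} of 24 hours")
```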
Abstract:
The present thesis is focused on the development of a thorough mathematical modelling and computational solution framework aimed at the numerical simulation of journal and sliding bearing systems operating under a wide range of lubrication regimes (mixed, elastohydrodynamic and full film lubrication) and working conditions (static, quasi-static and transient). The fluid flow effects have been considered in terms of the Isothermal Generalized Equation of the Mechanics of Viscous Thin Films (Reynolds equation), along with the mass-conserving p-θ Elrod-Adams cavitation model that accordingly ensures the so-called JFO complementary boundary conditions for fluid film rupture. The variation of the lubricant rheological properties due to the viscosity-pressure (Barus and Roelands equations), shear-thinning (Eyring and Carreau-Yasuda equations) and density-pressure (Dowson-Higginson equation) relationships has also been taken into account in the overall modelling. Generic models have been derived for the aforementioned bearing components in order to enable their application in general multibody dynamic systems (MDS), including the effects of angular misalignments, superficial geometric defects (form/waviness deviations, EHL deformations, etc.) and axial motion. The bearing flexibility (conformal EHL) has been incorporated by means of FEM model reduction (or condensation) techniques. The macroscopic influence of mixed-lubrication phenomena has been included in the modelling through the stochastic Patir and Cheng average flow model and the Greenwood-Williamson/Greenwood-Tripp formulations for rough contacts. Furthermore, a deterministic mixed-lubrication model with inter-asperity cavitation has also been proposed for full-scale simulations at the microscopic (roughness) level. Building on the extensive mathematical modelling background established, three significant contributions have been accomplished. Firstly, a general numerical solution for the Reynolds lubrication equation with the mass-conserving p-θ cavitation model has been developed based on the hybrid-type Element-Based Finite Volume Method (EbFVM). This new solution scheme allows lubrication problems with complex geometries, discretized by unstructured grids, to be solved. The numerical method was validated against several example cases from the literature, and further used in numerical experiments to explore its flexibility in coping with irregular meshes for reducing the number of nodes required in the solution of textured sliding bearings. Secondly, novel robust partitioned techniques, namely the Fixed Point Gauss-Seidel Method (PGMF), the Point Gauss-Seidel Method with Aitken Acceleration (PGMA) and the Interface Quasi-Newton Method with Inverse Jacobian from Least-Squares approximation (IQN-ILS), commonly adopted for solving fluid-structure interaction problems, have been introduced in the context of tribological simulations, particularly for the coupled calculation of dynamic conformal EHL contacts. The performance of these partitioned methods was evaluated in simulations of dynamically loaded connecting-rod big-end bearings of both heavy-duty and high-speed engines. Finally, the proposed deterministic mixed-lubrication modelling was applied to investigate the influence of cylinder liner wear after a 100 h dynamometer engine test on the hydrodynamic pressure generation and friction of Twin-Land Oil Control Rings.
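For illustration, the sketch below shows Aitken dynamic relaxation applied to a partitioned fixed-point coupling iteration, the idea behind the PGMA scheme named above. It is not the thesis implementation: the linear toy operator stands in for the actual pressure-deformation coupling.

```python
# Hedged sketch: fixed-point coupling with Aitken dynamic relaxation, the kind
# of partitioned scheme used for dynamic conformal EHL contacts.
import numpy as np

def coupled_operator(x: np.ndarray) -> np.ndarray:
    """Toy stand-in for 'solve fluid, then structure' within one time step."""
    A = np.array([[0.6, 0.3], [0.2, 0.5]])      # placeholder coupling matrix
    b = np.array([1.0, 2.0])
    return A @ x + b

def aitken_fixed_point(x0, tol=1e-10, max_iter=100, omega0=0.5):
    x = x0.copy()
    omega = omega0
    r_old = None
    for k in range(max_iter):
        r = coupled_operator(x) - x                     # interface residual
        if np.linalg.norm(r) < tol:
            return x, k
        if r_old is not None:
            dr = r - r_old
            omega = -omega * (r_old @ dr) / (dr @ dr)   # Aitken update of the relaxation factor
        x = x + omega * r                               # relaxed fixed-point update
        r_old = r
    return x, max_iter

solution, iters = aitken_fixed_point(np.zeros(2))
print(f"converged in {iters} iterations to {solution}")
```

The relaxation factor is recomputed from successive interface residuals, which is what typically lets the plain fixed-point (PGMF-style) iteration converge in far fewer coupling steps.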
Abstract:
Molecular modelling of human CYP1B1 based on homology with the mammalian P450 CYP2C5, of known three-dimensional structure, is reported. The enzyme model has been used to investigate the likely mode of binding for selected CYP1B1 substrates, particularly with regard to the possible effects of allelic variants of CYP1B1 on metabolism. In general, it appears that the CYP1B1 model is consistent with known substrate selectivity for the enzyme, and the sites of metabolism can be rationalized in terms of specific contacts with key amino acid residues within the CYP1B1 heme locus. Furthermore, a mode of binding interaction for the inhibitor α-naphthoflavone is presented which accords with currently available information. The current paper shows that a combination of molecular modelling and experimental determinations of substrate metabolism for CYP1B1 allelic variants can aid in the understanding of structure-function relationships within P450 enzymes. (C) 2003 Elsevier Science Ireland Ltd. All rights reserved.
Abstract:
The aim of this review is to analyse critically the recent literature on the clinical pharmacokinetics and pharmacodynamics of tacrolimus in solid organ transplant recipients. Dosage and target concentration recommendations for tacrolimus vary from centre to centre, and large pharmacokinetic variability makes it difficult to predict what concentration will be achieved with a particular dose or dosage change. Therapeutic ranges have not been based on statistical approaches. The majority of pharmacokinetic studies have involved intense blood sampling in small homogeneous groups in the immediate post-transplant period. Most have used nonspecific immunoassays and provide little information on pharmacokinetic variability. Demographic investigations seeking correlations between pharmacokinetic parameters and patient factors have generally looked at one covariate at a time and have involved small patient numbers. Factors reported to influence the pharmacokinetics of tacrolimus include the patient group studied, hepatic dysfunction, hepatitis C status, time after transplantation, patient age, donor liver characteristics, recipient race, haematocrit and albumin concentrations, diurnal rhythm, food administration, corticosteroid dosage, diarrhoea, and cytochrome P450 (CYP) isoenzyme and P-glycoprotein expression. Population analyses are adding to our understanding of the pharmacokinetics of tacrolimus, but such investigations are still in their infancy. A significant proportion of model variability remains unexplained. Population modelling and Bayesian forecasting may be improved if CYP isoenzyme and/or P-glycoprotein expression could be considered as covariates. Reports have been conflicting as to whether low tacrolimus trough concentrations are related to rejection. Several studies have demonstrated a correlation between high trough concentrations and toxicity, particularly nephrotoxicity. The best predictor of pharmacological effect may be drug concentrations in the transplanted organ itself. Researchers have started to question current reliance on trough measurement during therapeutic drug monitoring, with instances of toxicity and rejection occurring when trough concentrations are within 'acceptable' ranges. The correlation between blood concentration and drug exposure can be improved by use of non-trough timepoints. However, controversy exists as to whether this will provide any great benefit, given the added complexity in monitoring. Investigators are now attempting to quantify the pharmacological effects of tacrolimus on immune cells through assays that measure in vivo calcineurin inhibition and markers of immunosuppression such as cytokine concentrations. To date, no studies have correlated pharmacodynamic marker assay results with immunosuppressive efficacy, as determined by allograft outcome, or investigated the relationship between calcineurin inhibition and drug adverse effects. Little is known about the magnitude of the pharmacodynamic variability of tacrolimus.
Abstract:
New tools derived from advances in molecular biology have not been widely adopted in plant breeding for complex traits because of the inability to connect information at gene level to the phenotype in a manner that is useful for selection. In this study, we explored whether physiological dissection and integrative modelling of complex traits could link phenotype complexity to underlying genetic systems in a way that enhanced the power of molecular breeding strategies. A crop and breeding system simulation study on sorghum, which involved variation in 4 key adaptive traits (phenology, osmotic adjustment, transpiration efficiency, stay-green) and a broad range of production environments in north-eastern Australia, was used. The full matrix of simulated phenotypes, which consisted of 547 location-season combinations and 4235 genotypic expression states, was analysed for genetic and environmental effects. The analysis was conducted in stages assuming gradually increased understanding of gene-to-phenotype relationships, which would arise from physiological dissection and modelling. It was found that environmental characterisation and physiological knowledge helped to explain and unravel gene and environment context dependencies in the data. Based on the analyses of gene effects, a range of marker-assisted selection breeding strategies was simulated. It was shown that the inclusion of knowledge resulting from trait physiology and modelling generated an enhanced rate of yield advance over cycles of selection. This occurred because the knowledge associated with component trait physiology and extrapolation to the target population of environments by modelling removed confounding effects associated with environment and gene context dependencies for the markers used. Developing and implementing this gene-to-phenotype capability in crop improvement requires enhanced attention to phenotyping, ecophysiological modelling, and validation studies to test the stability of candidate genetic regions.
Abstract:
This paper presents a new method for producing a functional-structural plant model that simulates response to different growth conditions, yet does not require detailed knowledge of underlying physiology. The example used to present this method is the modelling of the mountain birch tree. This new functional-structural modelling approach is based on linking an L-system representation of the dynamic structure of the plant with a canonical mathematical model of plant function. Growth indicated by the canonical model is allocated to the structural model according to probabilistic growth rules, such as rules for the placement and length of new shoots, which were derived from an analysis of architectural data. The main advantage of the approach is that it is relatively simple compared to the prevalent process-based functional-structural plant models and does not require a detailed understanding of underlying physiological processes, yet it is able to capture important aspects of plant function and adaptability, unlike simple empirical models. This approach, combining canonical modelling, architectural analysis and L-systems, thus fills the important role of providing an intermediate level of abstraction between the two extremes of deeply mechanistic process-based modelling and purely empirical modelling. We also investigated the relative importance of various aspects of this integrated modelling approach by analysing the sensitivity of the standard birch model to a number of variations in its parameters, functions and algorithms. The results show that using light as the sole factor determining the structural location of new growth gives satisfactory results. Including the influence of additional regulating factors made little difference to global characteristics of the emergent architecture. Changing the form of the probability functions and using alternative methods for choosing the sites of new growth also had little effect. (c) 2004 Elsevier B.V. All rights reserved.
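As a toy illustration of the structural side of such a model (not the birch model itself), the sketch below implements a stochastic L-system whose apices branch, elongate or stall according to assumed probabilities; in the approach described above these probabilities would be derived from architectural data and the amount of growth would come from the canonical model.

```python
# Hedged sketch: a stochastic L-system in which new shoots are produced with
# given probabilities, mimicking probabilistic allocation of growth to structure.
import random

random.seed(42)

# Stochastic productions: an apex 'A' either branches, elongates, or stalls.
def rewrite(symbol: str) -> str:
    if symbol != "A":
        return symbol
    r = random.random()
    if r < 0.5:
        return "F[+A][-A]"   # branch into two new shoots
    if r < 0.85:
        return "FA"          # elongate the current shoot
    return "A"               # no growth this season

def grow(axiom: str, seasons: int) -> str:
    s = axiom
    for _ in range(seasons):
        s = "".join(rewrite(c) for c in s)
    return s

structure = grow("A", seasons=5)
shoots = structure.count("F")            # structural growth units produced
print(f"string length: {len(structure)}, shoots produced: {shoots}")
```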
Abstract:
We demonstrate a portable process for developing a triple bottom line model to measure the knowledge production performance of individual research centres. For the first time, this study also empirically illustrates how a fully units-invariant model of Data Envelopment Analysis (DEA) can be used to measure the relative efficiency of research centres by capturing the interaction amongst a common set of multiple inputs and outputs. This study is particularly timely given the increasing transparency required by governments and industries that fund research activities. The process highlights the links between organisational objectives, desired outcomes and outputs while the emerging performance model represents an executive managerial view. This study brings consistency to current measures that often rely on ratios and univariate analyses that are not otherwise conducive to relative performance analysis.
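For illustration only, a standard input-oriented CCR formulation of DEA can be solved with one small linear programme per research centre, as sketched below; the inputs, outputs and data are invented placeholders, and the study's own triple bottom line, units-invariant model is not reproduced here.

```python
# Hedged sketch: input-oriented CCR DEA efficiency scores via linear programming.
# Decision variables per DMU: [theta, lambda_1 .. lambda_n]; minimise theta.
import numpy as np
from scipy.optimize import linprog

# Placeholder data: 5 research centres, 2 inputs (staff, funding) and
# 2 outputs (publications, graduates).
X = np.array([[20, 1.5], [35, 2.0], [15, 1.0], [40, 3.5], [25, 2.5]], dtype=float)
Y = np.array([[60, 10], [80, 12], [50, 9], [70, 15], [65, 11]], dtype=float)
n, m = X.shape
_, s = Y.shape

for o in range(n):
    c = np.r_[1.0, np.zeros(n)]                      # minimise theta
    # Input constraints: sum_j lambda_j x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    # Output constraints: outputs of the composite unit must reach y_ro
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(None, None)] + [(0, None)] * n        # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    print(f"centre {o}: efficiency = {res.x[0]:.3f}")
```

Each centre's score is the factor by which its inputs could be radially contracted while a nonnegative combination of peer centres still matches its outputs; a score of 1 marks a relatively efficient centre.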