897 results for "estimation and filtering"


Relevance: 90.00%

Abstract:

The agricultural and energy industries are closely related, both biologically and financially. The paper discusses the relationship and the interactions in prices and volatility, with special focus on the covolatility spillover effects between these two industries. The primary emphasis of the paper is on the interaction and covolatility spillovers, that is, the delayed effect of a returns shock in one asset on the subsequent volatility or covolatility of another asset, between the energy and agricultural industries. Although there has already been significant research on biofuels and biofuel-related crops, much of the previous research has sought to find a relationship among commodity prices, and only a few published papers have been concerned with volatility spillovers. However, it must be emphasized that there have been numerous technical errors in the theoretical and empirical research, which need to be corrected. The paper considers not only futures prices, a widely used hedging instrument, but also an interesting new hedging instrument, the ETF. ETFs are regarded as index futures when investors manage their portfolios, so it is possible to calculate an optimal dynamic hedge ratio. This is a very useful and interesting application for the estimation and testing of volatility spillovers. In the empirical analysis, multivariate conditional volatility diagonal BEKK models are estimated to compare patterns of covolatility spillovers. The paper provides a new way of analyzing and describing the patterns of covolatility spillovers, which should be useful for future empirical analyses estimating and testing covolatility spillover effects.
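
As a concrete illustration of the hedging application mentioned above, the optimal dynamic hedge ratio at time t is commonly taken as the ratio of the conditional covariance between the spot return and the hedging-instrument return to the conditional variance of the hedging instrument. The minimal sketch below assumes those conditional (co)variances have already been produced by a fitted diagonal BEKK model; the function name and numerical values are illustrative, not from the paper.

```python
import numpy as np

def optimal_hedge_ratio(cov_sf, var_f):
    """Time-varying optimal hedge ratio beta_t = h_{sf,t} / h_{f,t}.

    cov_sf : conditional covariance between the spot (agricultural) return and
             the hedging-instrument (energy futures or ETF) return, as estimated
             by a fitted diagonal BEKK model.
    var_f  : conditional variance of the hedging-instrument return.
    """
    return np.asarray(cov_sf, dtype=float) / np.asarray(var_f, dtype=float)

# Hypothetical conditional (co)variances for three consecutive days.
h_sf = np.array([0.8e-4, 1.1e-4, 0.9e-4])
h_f = np.array([2.0e-4, 2.4e-4, 2.1e-4])
print(optimal_hedge_ratio(h_sf, h_f))   # roughly [0.40, 0.46, 0.43]
```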

Relevance: 90.00%

Abstract:

Increasing economic competition drives industry to implement tools that improve the efficiency of its processes. Process automation is one of these tools, and Real Time Optimization (RTO) is an automation methodology that considers economic aspects to update the process control in accordance with market prices and disturbances. Basically, RTO uses a steady-state phenomenological model to predict the process behavior and then optimizes an economic objective function subject to this model. Although largely implemented in industry, there is no general agreement about the benefits of implementing RTO, owing to some limitations discussed in the present work: structural plant/model mismatch, identifiability issues, and the low frequency of set-point updates. Some alternative RTO approaches have been proposed in the literature to handle the problem of structural plant/model mismatch. However, there is no thorough comparison evaluating the scope and limitations of these RTO approaches under different aspects. For this reason, the classical two-step method is compared with more recent derivative-based methods (Modifier Adaptation, Integrated System Optimization and Parameter Estimation, and Sufficient Conditions of Feasibility and Optimality) using a Monte Carlo methodology. The results of this comparison show that the classical RTO method is consistent, provided that the model is flexible enough to represent the process topology, the parameter estimation method is appropriate to handle the measurement noise characteristics, and a method to improve the quality of the sample information is used. At each iteration, the RTO methodology updates some key parameters of the model, and it is possible to observe identifiability issues caused by the lack of measurements and by measurement noise, resulting in poor prediction ability. Therefore, four different parameter estimation approaches (Rotational Discrimination, Automatic Selection and Parameter Estimation, Reparametrization via Differential Geometry, and classical nonlinear Least Squares) are evaluated with respect to their prediction accuracy, robustness and speed. The results show that the Rotational Discrimination method is the most suitable to be implemented in an RTO framework, since it requires less a priori information, is simple to implement, and avoids the overfitting caused by the Least Squares method. The third RTO drawback discussed in the present thesis is the low frequency of set-point updates, which increases the period in which the process operates at suboptimal conditions. An alternative to handle this problem is proposed here, integrating the classical RTO and Self-Optimizing Control (SOC) using a new Model Predictive Control strategy. The new approach demonstrates that it is possible to reduce the problem of the low frequency of set-point updates, improving the economic performance. Finally, the practical aspects of the RTO implementation are examined in an industrial case study, a Vapor Recompression Distillation (VRD) process located in the Paulínia refinery of Petrobras. The conclusions of this study suggest that the model parameters are successfully estimated by the Rotational Discrimination method; that the RTO is able to improve the process profit by about 3%, equivalent to 2 million dollars per year; and that the integration of SOC and RTO may be an interesting control alternative for the VRD process.
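
To make the classical two-step idea concrete, the following minimal sketch runs one RTO iteration on a hypothetical one-parameter steady-state model: the model parameter is first re-estimated from noisy plant measurements by nonlinear least squares, and an economic objective is then optimised subject to the updated model. The model, profit function, variable names and numbers are illustrative assumptions, not the thesis's case study.

```python
import numpy as np
from scipy.optimize import least_squares, minimize_scalar

# Toy steady-state model y = theta * u / (1 + u): one parameter, one input.
def model(u, theta):
    return theta * u / (1.0 + u)

def rto_iteration(u_meas, y_meas, theta0, price=10.0, cost=2.0):
    """One classical two-step RTO iteration (illustrative sketch only).

    Step 1: update the model parameter from noisy plant measurements.
    Step 2: optimise an economic objective (profit) subject to the model.
    """
    # Step 1: parameter estimation by nonlinear least squares.
    res = least_squares(lambda th: model(u_meas, th[0]) - y_meas, x0=[theta0])
    theta_hat = res.x[0]

    # Step 2: economic optimisation of the input u on [0, 10].
    profit = lambda u: -(price * model(u, theta_hat) - cost * u)
    u_opt = minimize_scalar(profit, bounds=(0.0, 10.0), method="bounded").x
    return theta_hat, u_opt

# Hypothetical plant data around the current operating point.
u_meas = np.array([1.0, 2.0, 3.0])
y_meas = np.array([1.1, 1.4, 1.6])          # noisy steady-state measurements
print(rto_iteration(u_meas, y_meas, theta0=2.0))
```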

Relevance: 90.00%

Abstract:

Among Small and Medium-Sized Enterprises (SMEs) in particular, the UK Government’s ambitions regarding BIM uptake and diffusion across the construction sector may be tempered by a realpolitik shaped in part by interactions between the industry, Higher Education (HE) and professional practice. That premise also has a global perspective. Building on the previous two papers, Architectural technology and the BIM Acronym 1 and 2, this third iteration is a synthesis of research and investigations carried out over a number of years directly related to the practical implementation of BIM and its impact upon BE SMEs. First, the challenges, risks and potential benefits for SMEs and micro-enterprises in facing up to the necessity of engaging with digital tools in a competitive and volatile marketplace are discussed, including tailoring BIM to suit business models and filtering out achievable BIM outcomes from generic and bespoke aspects of practice. Second, the focus is on setting up and managing teams engaging with BIM scenarios, including the role of clients, and a range of paradigms including 'lonely BIM' and collaborative working is addressed. The significance of taking a whole-life view with BIM is investigated, including embedding soft landings principles into project planning and realisation. Third, procedures for setting up and managing common data environments are identified, and the value of achieving smooth information flow is addressed. The overall objective of this paper is to provide SMEs with a practical strategy for developing a toolkit for BIM implementation.

Relevance: 90.00%

Abstract:

Implant failures and postoperative complications are often associated with bone drilling. Estimation and control of the drilling parameters are critical to prevent mechanical damage to the bone tissues. For better performance of drilling procedures, it is essential to understand the mechanical behaviour of bones that leads to their failure and, consequently, to improve the cutting conditions. This paper investigates the effect of drill speed and feed-rate on mechanical damage during drilling of solid rigid foam materials with mechanical properties similar to those of human bone. Experimental tests were conducted on biomechanical blocks instrumented with strain gauges to assess the influence of drill speed and feed-rate. A three-dimensional dynamic finite element model was developed to predict the bone stresses as a function of the drilling conditions, drill geometry and bone model. These simulations incorporate the dynamic characteristics involved in the drilling process. An element removal scheme is taken into account, allowing advanced simulation of tool penetration and material removal. Experimental and numerical results show that the stresses generated in the material tend to increase with tool penetration. A higher drill speed leads to an increase of the von Mises stresses and strains in the solid rigid foams, whereas a higher feed-rate lowers the stresses and strains. The numerical normal stresses and strains are in good agreement with the experimental results. The models could be an accurate analysis tool for simulating the stress distribution in the bone during the drilling process.
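
The von Mises criterion referred to above is the standard equivalent-stress measure used to compare simulated stress states. A minimal sketch of the calculation, with a purely hypothetical stress tensor at an element near the drill tip:

```python
import numpy as np

def von_mises(sigma):
    """von Mises equivalent stress from a symmetric 3x3 Cauchy stress tensor."""
    s11, s22, s33 = sigma[0, 0], sigma[1, 1], sigma[2, 2]
    s12, s23, s31 = sigma[0, 1], sigma[1, 2], sigma[2, 0]
    return np.sqrt(0.5 * ((s11 - s22) ** 2 + (s22 - s33) ** 2 + (s33 - s11) ** 2)
                   + 3.0 * (s12 ** 2 + s23 ** 2 + s31 ** 2))

# Hypothetical stress state (MPa) at one element of the foam block.
sigma = np.array([[12.0, 3.0, 0.5],
                  [3.0, -4.0, 1.0],
                  [0.5, 1.0, 2.0]])
print(von_mises(sigma))   # equivalent stress in MPa
```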

Relevance: 90.00%

Abstract:

We address two issues in the determination of particulate carbon and nitrogen in the suspended matter of aquatic environments. One is the adsorption of dissolved organic matter on filters, which leads to an overestimation of particulate matter. The second is the loss of material during filtration due to fragile algal cells breaking up. Examples from both laboratory cultures and natural samples are presented. We recommend using stacked filters to estimate the first and filtering different volumes of water to evaluate the second.
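
A minimal sketch of how a stacked-filter correction could be applied, under the assumption that the carbon collected on the bottom filter (which sees only dissolved material) approximates the adsorption blank of the top filter; the function name and all numbers are hypothetical, not values from the paper.

```python
def adsorption_corrected_poc(top_filter_ugC, bottom_filter_ugC, volume_L):
    """Particulate organic carbon corrected for DOM adsorption (sketch).

    The bottom filter of the stacked pair receives only dissolved material,
    so its carbon load is taken as the adsorption blank of the top filter.
    Returns a POC concentration in micrograms of C per litre filtered.
    """
    return (top_filter_ugC - bottom_filter_ugC) / volume_L

print(adsorption_corrected_poc(top_filter_ugC=85.0,    # hypothetical values
                               bottom_filter_ugC=15.0,
                               volume_L=2.0))           # -> 35.0 ug C per litre
```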

Relevance: 90.00%

Abstract:

"Final report for the period July 1985 to July 1987."

Relevance: 90.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance: 90.00%

Abstract:

Modern toxicology investigates a wide array of both old and new health hazards. Priority setting is needed to select agents for research from the plethora of exposure circumstances. Changing societies and a growing fraction of elderly people have to be taken into consideration. Precise exposure assessment is important for risk estimation and regulation. Toxicology contributes to the exploration of pathomechanisms in order to specify the exposure metrics for risk estimation. Combined effects of co-existing agents are not yet sufficiently understood. Animal experiments allow the separate administration of agents that cannot be disentangled by epidemiological means, but their value is limited for the low exposure levels of many of today's settings. As an experimental science, toxicology has to keep pace with the rapidly growing knowledge about the language of the genome and the changing paradigms in cancer development. During the pioneering era of assembling a working draft of the human genome, toxicogenomics was developed. Gene and pathway complexity have to be considered when investigating gene-environment interactions. For the best conduct of studies, modern toxicology needs a close liaison with many other disciplines, such as epidemiology and bioinformatics. (C) 2004 Elsevier Ireland Ltd. All rights reserved.

Relevance: 90.00%

Abstract:

On-site wastewater treatment and dispersal systems (OWTS) are used in non-sewered populated areas of Australia to treat and dispose of household wastewater. The most common OWTS in Australia is the septic tank-soil absorption system (SAS), which relies on the soil to treat and disperse effluent. The mechanisms governing the purification and hydraulic performance of a SAS are complex and have been shown to be highly influenced by the biological zone (biomat) that develops on the soil surface within the trench or bed. Studies suggest that removal mechanisms in the biomat zone, primarily adsorption and filtering, are important processes in the overall purification abilities of a SAS. There is growing concern that poorly functioning OWTS are impacting the environment, although to date only a few investigations have been able to demonstrate pollution of waterways by on-site systems. In this paper we review some key hydrological and biogeochemical mechanisms in SAS, and the processes leading to hydraulic failure. The nutrient and pathogen removal efficiencies in soil absorption systems are also reviewed, and a critical discussion of the evidence of failure and of the environmental and public health impacts arising from SAS operation is presented. Future research areas identified in the review include the interactions between hydraulic and treatment mechanisms, and the gas composition of the biomat and sub-biomat zone and its role in effluent treatment.

Relevance: 90.00%

Abstract:

Background: Nurses play a key role in the prevention of cardiovascular disease (CVD), and one would therefore expect them to have a heightened awareness of the need for systematic screening and of their own CVD risk profile. The aim of this study was to examine personal awareness of CVD risk among a cohort of cardiovascular nurses attending a European conference.

Methods: Of the 340 delegates attending the 5th annual Spring Meeting on Cardiovascular Nursing (Basel, Switzerland, 2005), 287 (83%) completed a self-report questionnaire to assess their own risk factors for CVD. Delegates were also asked to give an estimate of their absolute total risk of experiencing a fatal CVD event in the next 10 years. The level of agreement between the self-reported CVD risk estimate and actual risk according to the SCORE risk assessment system was assessed by calculating the weighted kappa (κw).

Results: Overall, 109 responders (38%) self-reported having either pre-existing CVD (only 2%), one or more markedly raised CVD risk factors, a high total risk of fatal CVD (≥ 5% in 10 years) or a strong family history of CVD. About half of this cohort (53%) did not know their own total cholesterol level. Less than half (45%) reported having a 10-year risk of fatal CVD of < 1%, while 13% reported having a risk ≥ 5%. Based on the SCORE risk function, the estimated 10-year risk of a fatal CVD event was < 1% for 96% of responders; only 2% had a ≥ 5% risk of such an event. Overall, less than half (46%) of this cohort's self-reported CVD risk corresponded with that calculated using the SCORE risk function (κw = 0.27).

Conclusion: Most cardiovascular nurses attending a European conference in 2005 understood their own CVD risk profile poorly, and the agreement between their self-reported 10-year risk of fatal CVD and their risk according to SCORE was only fair. Given the specialist nature of this conference, our findings clearly demonstrate a need to improve overall nursing awareness of the role and importance of systematic CVD risk assessment.
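
The weighted kappa (κw) used in the study measures agreement between two ordinal classifications while penalising larger disagreements more heavily. A minimal sketch of such a calculation on hypothetical risk categories (not the study data), using scikit-learn's cohen_kappa_score with linear weights:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal 10-year fatal-CVD risk categories for 10 nurses:
# 0 = "<1%", 1 = "1-4%", 2 = ">=5%".
self_reported = [0, 1, 0, 2, 1, 0, 0, 1, 2, 0]
score_derived = [0, 0, 0, 1, 0, 0, 0, 0, 1, 0]

# Linearly weighted kappa penalises two-category disagreements twice as much
# as one-category disagreements.
kappa_w = cohen_kappa_score(self_reported, score_derived, weights="linear")
print(f"weighted kappa = {kappa_w:.2f}")
```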

Relevance: 90.00%

Abstract:

Most traditional methods for extracting the relationships between two time series are based on cross-correlation. In a non-linear non-stationary environment, these techniques are not sufficient. We show in this paper how to use hidden Markov models (HMMs) to identify the lag (or delay) between different variables for such data. We first present a method using maximum likelihood estimation and propose a simple algorithm which is capable of identifying associations between variables. We also adopt an information-theoretic approach and develop a novel procedure for training HMMs to maximise the mutual information between delayed time series. Both methods are successfully applied to real data. We model the oil drilling process with HMMs and estimate a crucial parameter, namely the lag for return.
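
A minimal sketch of the maximum-likelihood idea: scan candidate lags and keep the one for which a Gaussian HMM fitted to the aligned pair of series attains the highest per-sample log-likelihood. This uses the hmmlearn package and synthetic data, and is a simplified illustration rather than the authors' exact algorithm.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def estimate_lag(x, y, max_lag=20, n_states=2):
    """Pick the delay d maximising the HMM log-likelihood of pairs (x_t, y_{t+d})."""
    best_lag, best_ll = 0, -np.inf
    for d in range(max_lag + 1):
        paired = np.column_stack([x[: len(x) - d], y[d:]])
        model = GaussianHMM(n_components=n_states, covariance_type="full",
                            n_iter=50, random_state=0).fit(paired)
        ll = model.score(paired) / len(paired)   # per-sample log-likelihood
        if ll > best_ll:
            best_lag, best_ll = d, ll
    return best_lag

# Synthetic example: y is x delayed by 5 samples plus noise.
rng = np.random.default_rng(0)
x = rng.normal(size=600)
y = np.concatenate([np.zeros(5), x[:-5]]) + 0.05 * rng.normal(size=600)
print(estimate_lag(x, y))   # expected to recover a lag of about 5
```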

Relevance: 90.00%

Abstract:

A sieve plate distillation column has been constructed and interfaced to a minicomputer with the necessary instrumentation for dynamic, estimation and control studies, with special bearing on low-cost and noise-free instrumentation. A dynamic simulation of the column with a binary liquid system has been compiled using deterministic models that include fluid dynamics via Brambilla's equation for tray liquid holdup calculations. The simulation predictions have been tested experimentally under steady-state and transient conditions. The simulator's predictions of the tray temperatures have shown reasonably close agreement with the measured values under steady-state conditions and in the face of a step change in the feed rate. A method of extending linear filtering theory to highly nonlinear systems with very nonlinear measurement functional relationships has been proposed and tested by simulation on binary distillation. The simulation results have proved that the proposed methodology can overcome the typical instability problems associated with Kalman filters. Three extended Kalman filters have been formulated and tested by simulation. The filters have been used to refine a much simplified model sequentially and to estimate parameters such as the unmeasured feed composition using information from the column simulation. It is first assumed that corrupted tray composition measurements are made available to the filter, and then corrupted tray temperature measurements are accessed instead. The simulation results have demonstrated the powerful capability of the Kalman filters to overcome the typical hardware problems associated with the operation of on-line analyzers in relation to distillation dynamics and control by, in effect, replacing them. A method of implementing estimator-aided feedforward (EAFF) control schemes has been proposed and tested by simulation on binary distillation. The results have shown that the EAFF scheme provides much better control and energy conservation than conventional feedback temperature control in the face of a sustained step change in the feed rate or multiple changes in the feed rate, composition and temperature. Further extensions of this work are recommended as regards simulation, estimation and EAFF control.
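
For reference, a generic extended Kalman filter predict/update step of the kind applied here, with Jacobians obtained by finite differences, might look as follows. The state-transition and measurement functions are placeholders (in the thesis they would be the simplified column model and the tray composition or temperature measurements), and this is the textbook EKF rather than the specific stabilised formulation the thesis proposes.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, Q, R, eps=1e-6):
    """One extended Kalman filter predict/update step (generic sketch).

    x, P : current state estimate and covariance
    u, z : known input and new measurement vector
    f, h : nonlinear state-transition f(x, u) and measurement h(x) functions
    Q, R : process and measurement noise covariances
    Jacobians are approximated by forward finite differences.
    """
    n = x.size

    def jac(fun, x0, *args):
        f0 = np.atleast_1d(fun(x0, *args))
        J = np.zeros((f0.size, n))
        for i in range(n):
            dx = np.zeros(n); dx[i] = eps
            J[:, i] = (np.atleast_1d(fun(x0 + dx, *args)) - f0) / eps
        return J

    # Predict: propagate the state and covariance through the model.
    F = jac(f, x, u)
    x_pred = np.atleast_1d(f(x, u))
    P_pred = F @ P @ F.T + Q

    # Update: correct the prediction with the new measurement.
    H = jac(h, x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - np.atleast_1d(h(x_pred)))
    P_new = (np.eye(n) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: estimate a slowly drifting scalar (e.g. an unmeasured feed
# composition) from noisy measurements of a quantity proportional to it.
f = lambda x, u: x                      # placeholder random-walk state model
h = lambda x: 2.0 * x                   # placeholder measurement model
x, P = np.array([0.5]), np.eye(1)
for z in (1.1, 1.3, 1.2):
    x, P = ekf_step(x, P, None, np.array([z]), f, h,
                    Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
print(x)
```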

Relevance: 90.00%

Abstract:

Very large spatially referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over short time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly in the case where maximum likelihood estimation is used. Although the storage requirements scale only linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively. Most modern commodity hardware has at least two processor cores, if not more, and other mechanisms for parallel computation, such as Grid-based systems, are also becoming increasingly commonly available. However, there currently seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets, and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
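
As an illustration of the kind of parallelism that can be exploited, the sketch below evaluates a simple block-independence approximation to the Gaussian log-likelihood, distributing the per-block evaluations over a process pool. This is in the spirit of, though much cruder than, the likelihood approximations of Vecchia [1988] and Tresp [2000]; the exponential covariance model, function names and parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from functools import partial
from multiprocessing import Pool
from scipy.spatial.distance import cdist

def block_loglik(block, sill, rang, nugget):
    """Exact Gaussian log-likelihood of one spatial block (exponential model)."""
    coords, z = block
    C = sill * np.exp(-cdist(coords, coords) / rang) + nugget * np.eye(len(z))
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, z)
    return (-0.5 * (alpha @ alpha) - np.log(np.diag(L)).sum()
            - 0.5 * len(z) * np.log(2 * np.pi))

def approx_loglik(blocks, sill, rang, nugget, processes=4):
    """Block-independence approximation to the full likelihood, in parallel."""
    with Pool(processes) as pool:
        parts = pool.map(partial(block_loglik, sill=sill, rang=rang,
                                 nugget=nugget), blocks)
    return sum(parts)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical data split into 8 spatial blocks of 200 points each.
    blocks = [(rng.uniform(0, 100, size=(200, 2)), rng.normal(size=200))
              for _ in range(8)]
    print(approx_loglik(blocks, sill=1.0, rang=10.0, nugget=0.1))
```

Wrapping approx_loglik in a numerical optimiser over (sill, rang, nugget) would then give an approximate parallel maximum likelihood variogram fit.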

Relevance: 90.00%

Abstract:

OBJECTIVE: To assess the effect of using different risk calculation tools on how general practitioners and practice nurses evaluate the risk of coronary heart disease with the clinical data routinely available in patients' records. DESIGN: Subjective estimates of the risk of coronary heart disease and the results of four different methods of risk calculation were compared with each other and with a reference standard calculated with the Framingham equation; calculations were based on a sample of patients' records randomly selected from groups at risk of coronary heart disease. SETTING: General practices in central England. PARTICIPANTS: 18 general practitioners and 18 practice nurses. MAIN OUTCOME MEASURES: Agreement of the results of risk estimation and risk calculation with the reference calculation; agreement of general practitioners with practice nurses; sensitivity and specificity of the different methods of risk calculation in detecting patients at high or low risk of coronary heart disease. RESULTS: Only a minority of patients' records contained all of the risk factors required for the formal calculation of the risk of coronary heart disease (concentrations of high density lipoprotein (HDL) cholesterol were present in only 21%). Agreement of risk calculations with the reference standard was moderate (kappa = 0.33 to 0.65 for practice nurses and 0.33 to 0.65 for general practitioners, depending on the calculation tool), showing a trend towards underestimation of risk. Moderate agreement was seen between the risks calculated by general practitioners and practice nurses for the same patients (kappa = 0.47 to 0.58). The British charts gave the most sensitive results for risk of coronary heart disease (practice nurses 79%, general practitioners 80%), and they also gave the most specific results for practice nurses (100%), whereas the Sheffield table was the most specific method for general practitioners (89%). CONCLUSIONS: Routine calculation of the risk of coronary heart disease in primary care is hampered by the poor availability of data on risk factors. General practitioners and practice nurses are able to evaluate the risk of coronary heart disease with only moderate accuracy. Data on risk factors need to be collected systematically to allow the use of the most appropriate calculation tools.
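
The sensitivity and specificity outcome measures amount to comparing each tool's "high risk" classifications against the Framingham-based reference. A minimal sketch with hypothetical classifications (not the study data):

```python
def sens_spec(tool_high, reference_high):
    """Sensitivity and specificity of a risk tool's 'high CHD risk' calls
    against the Framingham-based reference classification (sketch)."""
    pairs = list(zip(tool_high, reference_high))
    tp = sum(t and r for t, r in pairs)
    tn = sum((not t) and (not r) for t, r in pairs)
    fn = sum((not t) and r for t, r in pairs)
    fp = sum(t and (not r) for t, r in pairs)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical classifications for 8 patient records (True = high risk).
tool = [True, False, True, False, False, True, False, False]
ref  = [True, True, True, False, False, True, False, False]
print(sens_spec(tool, ref))   # -> (0.75, 1.0)
```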

Relevance: 90.00%

Abstract:

We report the impact of cascaded reconfigurable optical add-drop multiplexer (ROADM) induced penalties on coherently detected 28 Gbaud polarization-multiplexed m-ary quadrature amplitude modulation (PM m-ary QAM) WDM channels. We investigate the interplay between different higher-order modulation channels and the effect of the filter shapes and bandwidths of the (de)multiplexers on the transmission performance, in a segment of a pan-European optical network with a maximum optical path of 4,560 km (57 spans of 80 km). We verify that if the link capacities are assigned assuming that digital back-propagation (DBP) is available, 25% of the network connections fail when electronic dispersion compensation alone is used. However, the majority of such links can be restored by employing single-channel digital back-propagation with fewer than 15 steps for the whole link, facilitating the practical application of DBP. We report that higher-order channels are the most sensitive to nonlinear fiber impairments and filtering effects; however, these formats are less prone to ROADM-induced penalties owing to the reduced maximum number of hops. Furthermore, we demonstrate that a minimum filter Gaussian order of 3 and a bandwidth of 35 GHz give negligible excess penalty for any modulation order.
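
The (de)multiplexer shapes discussed above are commonly modelled as super-Gaussian (Gaussian order n) responses. The sketch below uses that assumed model, not the paper's simulation setup, to show how cascading an order-3, 35 GHz filter over successive ROADM hops narrows the effective 3-dB bandwidth seen by a channel.

```python
import numpy as np

def super_gaussian(f_ghz, bw_3db_ghz=35.0, order=3):
    """Amplitude response of an order-n (super-)Gaussian filter with the given
    3-dB power bandwidth, an assumed model for the (de)multiplexer shape."""
    return np.exp(-0.5 * np.log(2) * (2.0 * f_ghz / bw_3db_ghz) ** (2 * order))

# Effective power response after cascading the filter over n_hops ROADM nodes.
f = np.linspace(-25, 25, 501)                   # offset from carrier, GHz
for n_hops in (1, 5, 10):
    power_db = 20 * n_hops * np.log10(super_gaussian(f))
    passband = f[power_db >= -3.0]
    print(f"{n_hops:2d} hops: ~{passband[-1] - passband[0]:.1f} GHz 3-dB bandwidth")
```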