34 results for Estimation Methods
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Quality of Service (QoS) support in IEEE 802.11-based ad hoc networks relies on the networks' ability to estimate the available bandwidth on a given link. However, no mechanism has been standardized to accurately evaluate this resource, and it remains one of the main open research issues in this field. This paper proposes an available bandwidth estimation approach that achieves more accurate estimation than existing research. The proposed approach differentiates channel busy time caused by transmitting or receiving from that caused by carrier sensing, and thus improves the accuracy of estimating the overlap probability of two adjacent nodes' idle time. Simulation results confirm the improvement of this approach when compared with well-known bandwidth estimation methods in the literature.
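As a rough illustration of the independence-based overlap estimate that approaches of this kind start from (the refinement described above is not reproduced here), a minimal Python sketch; all names and numbers are hypothetical:

```python
# Illustrative sketch: independence-based available-bandwidth estimate
# on a link (s, r). All names and numbers are hypothetical.

def available_bandwidth(idle_frac_sender: float,
                        idle_frac_receiver: float,
                        capacity_mbps: float) -> float:
    """Estimate available bandwidth on link (s, r).

    A common baseline assumes the sender's and receiver's idle
    periods are independent, so the probability that both are idle
    simultaneously is the product of their idle fractions. The paper
    above refines this overlap probability by separating busy time
    due to tx/rx from busy time due to carrier sensing.
    """
    p_overlap = idle_frac_sender * idle_frac_receiver  # independence assumption
    return p_overlap * capacity_mbps

# Example: both nodes idle 60% of the time on a 54 Mb/s channel.
print(available_bandwidth(0.6, 0.6, 54.0))  # ~19.4 Mb/s
```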
An integrated approach for real-time model-based state-of-charge estimation of lithium-ion batteries
Abstract:
Lithium-ion batteries have been widely adopted in electric vehicles (EVs), and accurate state of charge (SOC) estimation is of paramount importance for the EV battery management system. Although a number of methods have been proposed, SOC estimation for lithium-ion batteries such as the LiFePO4 battery faces two key challenges: the flat open circuit voltage (OCV) versus SOC relationship over some SOC ranges, and the hysteresis effect. To address these problems, an integrated approach for real-time model-based SOC estimation of lithium-ion batteries is proposed in this paper. Firstly, an autoregressive model is adopted to reproduce the battery terminal behaviour, combined with a non-linear complementary model to capture the hysteresis effect. The model parameters, including linear parameters and non-linear parameters, are optimized off-line using a hybrid optimization method that combines a meta-heuristic method (i.e., the teaching-learning-based optimization method) and the least squares method. Secondly, using the trained model, two real-time model-based SOC estimation methods are presented: one based on a real-time battery OCV regression model obtained through the weighted recursive least squares method, and the other based on state estimation using the extended Kalman filter (EKF). To tackle the problem caused by the flat OCV-versus-SOC segments when the OCV-based SOC estimation method is adopted, a method combining coulomb counting and the OCV-based method is proposed. Finally, modelling results and SOC estimation results are presented and analysed using data collected from a LiFePO4 battery cell. The results confirm the effectiveness of the proposed approach, in particular the joint-EKF method.
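A minimal sketch of EKF-based SOC tracking with a one-state model may help fix ideas; the OCV curve, resistance, capacity and noise levels are hypothetical placeholders, and the paper's full model (autoregressive part, hysteresis term) is not reproduced:

```python
import numpy as np

# Minimal sketch of EKF-based SOC tracking with a one-state model:
#   SOC_k = SOC_{k-1} - i_k * dt / Q          (coulomb counting as process model)
#   v_k   = ocv(SOC_k) - R0 * i_k + noise     (simplified terminal-voltage model)

Q_AS, R0, DT = 2.3 * 3600, 0.01, 1.0      # capacity [As], resistance [ohm], step [s]

def ocv(soc):                              # toy monotone OCV(SOC) curve
    return 3.2 + 0.5 * soc

def docv(soc):                             # its derivative d OCV / d SOC
    return 0.5

def ekf_step(soc, P, i, v_meas, q=1e-7, r=1e-3):
    # Predict: coulomb counting (discharge current positive)
    soc_pred = soc - i * DT / Q_AS
    P_pred = P + q
    # Update: correct with the terminal-voltage measurement
    H = docv(soc_pred)
    K = P_pred * H / (H * P_pred * H + r)  # Kalman gain (scalar case)
    soc_new = soc_pred + K * (v_meas - (ocv(soc_pred) - R0 * i))
    return soc_new, (1 - K * H) * P_pred

soc, P = 0.9, 1e-2                         # deliberately wrong initial SOC
true_soc = 0.8
for _ in range(200):                       # constant 1 A discharge
    true_soc -= 1.0 * DT / Q_AS
    v = ocv(true_soc) - R0 * 1.0 + np.random.normal(0, 0.005)
    soc, P = ekf_step(soc, P, 1.0, v)
print(f"estimated SOC {soc:.3f} vs true {true_soc:.3f}")
```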
Abstract:
Nova V458 Vul erupted on 2007 August 8 and reached a visual magnitude of 8.1 a few days later. Hα images obtained 6 weeks before the outburst as part of the IPHAS Galactic plane survey reveal an 18th-magnitude progenitor surrounded by an extended nebula. Subsequent images and spectroscopy of the nebula reveal an inner nebular knot increasing rapidly in brightness due to flash ionization by the nova event. We derive a distance of 13 kpc based on light travel time considerations, which is supported by two other distance estimation methods. The nebula has an ionized mass of 0.2 M☉ and a low expansion velocity: this rules it out as ejecta from a previous nova eruption, and is consistent with it being a ~14,000-year-old planetary nebula, probably the product of a prior common envelope (CE) phase of evolution of the binary system. The large derived distance means that the mass of the erupting white dwarf (WD) component of the binary is high. We identify two possible evolutionary scenarios, in at least one of which the system is massive enough to produce a Type Ia supernova upon merging.
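For readers unfamiliar with the light-travel-time argument, the following illustrative relation (assuming the knot lies near the plane of the sky, so the re-brightening delay Δt is set by the light path across the physical separation r) shows how a delay and an angular separation θ yield a distance; this is a simplified form, not the paper's full calculation:

```latex
% Illustrative light-travel-time distance estimate (simplified geometry):
\[
  r \simeq c\,\Delta t, \qquad
  d \simeq \frac{r}{\theta} \approx 173\,
  \frac{\Delta t\,[\mathrm{days}]}{\theta\,[\mathrm{arcsec}]}\ \mathrm{pc},
\]
% since c x 1 day = 173 AU and 1 AU subtends 1 arcsec at 1 pc.
```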
Abstract:
Background
Inferring gene regulatory networks from large-scale expression data is an important problem that has received much attention in recent years. These networks have the potential to provide insights into the causal molecular interactions underlying biological processes. Hence, from a methodological point of view, reliable estimation methods based on observational data are needed to approach this problem practically.
Results
In this paper, we introduce a novel gene regulatory network inference (GRNI) algorithm, called C3NET. We compare C3NET with four well-known methods, ARACNE, CLR, MRNET and RN, conducting in-depth numerical ensemble simulations, and also demonstrate for biological expression data from E. coli that C3NET performs consistently better than the best known GRNI methods in the literature. In addition, it also has low computational complexity. Since C3NET is based on estimates of mutual information values in conjunction with a maximization step, our numerical investigations demonstrate that our inference algorithm exploits causal structural information in the data efficiently.
Conclusions
For systems biology to succeed in the long run, it is of crucial importance to establish methods that extract large-scale gene networks from high-throughput data that reflect the underlying causal interactions among genes or gene products. Our method can contribute to this endeavor by demonstrating that an inference algorithm with a neat design permits not only a more intuitive and possibly biological interpretation of its working mechanism but can also yield superior results.
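A rough sketch of the mutual-information-plus-maximization core step described in the abstract above; the Gaussian MI estimator and fixed threshold are simplified stand-ins for the estimators and significance testing used in the paper:

```python
import numpy as np

# Simplified sketch of the C3NET-style core step: estimate pairwise mutual
# information (MI), zero out weak pairs, then keep for each gene only the
# single edge with maximal MI.

def c3net_sketch(expr: np.ndarray, mi_threshold: float = 0.1) -> np.ndarray:
    """expr: samples x genes matrix; returns a symmetric adjacency matrix."""
    rho = np.corrcoef(expr, rowvar=False)
    np.fill_diagonal(rho, 0.0)
    mi = -0.5 * np.log1p(-np.clip(rho**2, 0.0, 0.999))  # Gaussian MI estimate
    mi[mi < mi_threshold] = 0.0                          # crude significance filter
    n = mi.shape[0]
    adj = np.zeros((n, n))
    for g in range(n):                                   # maximization step
        j = int(np.argmax(mi[g]))
        if mi[g, j] > 0.0:
            adj[g, j] = adj[j, g] = 1.0
    return adj

expr = np.random.default_rng(0).normal(size=(100, 20))   # toy expression data
print(c3net_sketch(expr).sum() / 2, "edges inferred")
```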
Abstract:
The validity of load estimates from intermittent, instantaneous grab sampling depends on adequate spatial coverage by monitoring networks and a sampling frequency that reflects the variability in the system under study. Catchments with a flashy hydrology due to surface runoff pose a particular challenge, as intense short-duration rainfall events may account for a significant portion of the total diffuse transfer of pollution from soil to water in any hydrological year. This can also be exacerbated by the presence of strong background pollution signals from point sources during low flows. In this paper, a range of sampling methodologies and load estimation techniques are applied to phosphorus data from such a surface water dominated river system, instrumented at three sub-catchments (ranging from 3 to 5 km² in area) with near-continuous monitoring stations. Systematic and Monte Carlo approaches were applied to simulate grab sampling using multiple strategies and to calculate an estimated load, L_e, based on established load estimation methods. Comparison with the actual load, L_t, revealed significant average underestimation, of up to 60%, and high variability for all feasible sampling approaches. Further analysis of the time series provides an insight into these observations, revealing peak frequencies and power-law scaling in the distributions of P concentration, discharge and load associated with surface runoff and background transfers. Results indicate that only near-continuous monitoring that reflects the rapid temporal changes in these river systems is adequate for comparative monitoring and evaluation purposes. While the implications of this analysis may be most applicable to small-scale flashy systems, this represents an appropriate scale for evaluating catchment mitigation strategies such as agri-environmental policies for managing diffuse P transfers in complex landscapes.
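A minimal Python sketch of the simulated grab-sampling experiment: given a near-continuous record, simulate sparse systematic sampling and compare L_e (here from a standard flow-weighted ratio estimator, one of several established methods) with the true load L_t. The synthetic flashy series and all numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(365 * 24)                       # hourly record, one year
q = 0.05 + np.zeros_like(t, dtype=float)      # baseflow discharge [m3/s]
c = 0.02 + np.zeros_like(t, dtype=float)      # background P conc. [mg/l]
for start in rng.choice(t[:-12], size=40, replace=False):
    q[start:start + 12] += rng.uniform(0.5, 2.0)   # storm discharge pulse
    c[start:start + 12] += rng.uniform(0.2, 1.0)   # associated P pulse

dt = 3600.0
L_t = np.sum(c * q * dt) / 1000.0             # true load [kg] (mg/l * m3/s * s -> g)

# Systematic weekly grab sampling; L_e from a flow-weighted ratio estimator:
# mean(C*Q at samples) / mean(Q at samples) * total flow volume.
idx = t[::24 * 7]
L_e = (np.mean(c[idx] * q[idx]) / np.mean(q[idx])) * np.sum(q * dt) / 1000.0
print(f"L_t = {L_t:.0f} kg, L_e = {L_e:.0f} kg, "
      f"bias = {100 * (L_e - L_t) / L_t:+.0f}%")
```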
Abstract:
A parametric regression model for right-censored data with a log-linear median regression function and a transformation in both response and regression parts, named the parametric Transform-Both-Sides (TBS) model, is presented. The TBS model has a parameter that handles data asymmetry while allowing various different distributions for the error, as long as they are unimodal symmetric distributions centered at zero. The discussion focuses on the estimation procedure with five important error distributions (normal, double-exponential, Student's t, Cauchy and logistic) and presents properties, associated functions (that is, survival and hazard functions) and estimation methods based on maximum likelihood and on the Bayesian paradigm. These procedures are implemented in TBSSurvival, an open-source, fully documented R package. The use of the package is illustrated and the performance of the model is analyzed using both simulated and real data sets.
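For orientation, one common transform-both-sides formulation is shown below with a Box-Cox-type power transform (illustrative; the exact transform used in TBSSurvival may differ), together with why it yields a log-linear median regression:

```latex
% A common transform-both-sides form for right-censored survival data:
\[
  g_\lambda\!\bigl(\log T_i\bigr)
    = g_\lambda\!\bigl(\mathbf{x}_i^{\top}\boldsymbol\beta\bigr) + \varepsilon_i,
  \qquad
  g_\lambda(u) = \frac{\operatorname{sign}(u)\,|u|^{\lambda}}{\lambda}.
\]
% With \varepsilon_i unimodal, symmetric and centred at zero, and g_\lambda
% strictly increasing, the median of \log T_i equals
% \mathbf{x}_i^{\top}\boldsymbol\beta: a log-linear median regression.
```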
Abstract:
An algorithm is presented which generates pairs of oscillatory random time series that have identical periodograms but differ in the number of oscillations. This result indicates the intrinsic limitations of spectral methods for the task of measuring frequencies. Other examples, one from medicine and one from bifurcation theory, are given which also exhibit these limitations of spectral methods. For two methods of spectral estimation, it is verified that the method-specific treatment of end points is, for long enough time series, not relevant to the main result.
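The phenomenon can be illustrated with generic phase randomisation, which preserves the periodogram exactly while changing the waveform; this is a standard construction, not necessarily the paper's algorithm:

```python
import numpy as np

# Two real time series with identical periodograms (same Fourier magnitudes)
# but different numbers of oscillations, counted crudely via zero crossings.
rng = np.random.default_rng(2)
n = 1024
x = np.sin(2 * np.pi * 37 * np.arange(n) / n) + 0.3 * rng.normal(size=n)

X = np.fft.rfft(x)
phases = rng.uniform(0, 2 * np.pi, size=X.shape)
phases[0] = 0.0                       # keep the mean term real
if n % 2 == 0:
    phases[-1] = 0.0                  # keep the Nyquist bin real
y = np.fft.irfft(np.abs(X) * np.exp(1j * phases), n)

def zero_crossings(s):
    return int(np.sum(np.signbit(s[:-1]) != np.signbit(s[1:])))

assert np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(y)))
print(zero_crossings(x), "vs", zero_crossings(y), "zero crossings")
```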
Abstract:
Stiffness values in geotechnical structures can range over many orders of magnitude for relatively small operational strains. The typical strain levels at which soil stiffness changes most dramatically are in the range 0.01-0.1%; however, soils do not exhibit linear stress-strain behaviour at small strains. Knowledge of the in situ stiffness at small strain is important in geotechnical numerical modelling and design. The stress-strain regime of cut slopes is complex, as the principal stress directions differ at different positions along the potential failure plane. For example, loading may be primarily in extension near the toe of the slope, while compressive loading is predominant at the crest. Cuttings in heavily overconsolidated clays are known to be susceptible to progressive failure and subsequent strain softening, in which progressive yielding propagates from the toe towards the crest of the slope over time. In order to gain a better understanding of the rate of softening, it would be advantageous to measure changes in small-strain stiffness in the field.
Abstract:
The problem of measuring high frequency variations in temperature is described, and the need for some form of reconstruction introduced. One method of reconstructing temperature measurements is to use the signals from two thermocouples of differing diameter. Two existing methods for processing such measurements and reconstructing the higher frequency components are described. These are compared to a novel reconstruction algorithm based on a nonlinear extended Kalman filter. The performance of this filter is found to compare favorably, in a number of ways, with the existing techniques, and it is suggested that such a technique would be viable for the online reconstruction of temperatures in real time.
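A sketch of the first-order compensation idea underlying such reconstructions follows; the time constant is taken as known for clarity, whereas the two-thermocouple methods and the EKF above exist precisely because it is unknown and must be inferred from the pair of signals. All numbers are illustrative:

```python
import numpy as np

# A fine-wire thermocouple acts roughly as a first-order lag:
#   tau * dT_m/dt + T_m = T_gas   =>   T_gas = T_m + tau * dT_m/dt.
dt, n = 1e-3, 2000
t = np.arange(n) * dt
T_gas = 500.0 + 50.0 * np.sin(2 * np.pi * 20.0 * t)   # 20 Hz fluctuation [K]

def thermocouple(T_in, tau):
    """Simulate a first-order-lag junction response (explicit Euler)."""
    T_m = np.empty_like(T_in)
    T_m[0] = T_in[0]
    for k in range(1, len(T_in)):
        T_m[k] = T_m[k - 1] + dt * (T_in[k] - T_m[k - 1]) / tau
    return T_m

T1 = thermocouple(T_gas, tau=0.020)     # thin wire, fast
T2 = thermocouple(T_gas, tau=0.080)     # thick wire, slow

# Reconstruct from the fast junction by inverting the lag (noise-free case).
T_rec = T1 + 0.020 * np.gradient(T1, dt)
print("attenuated amplitude:", (T1.max() - T1.min()) / 2)
print("reconstructed amplitude:", (T_rec.max() - T_rec.min()) / 2)
```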
Abstract:
A generic, hierarchical, and multifidelity methodology for estimating the unit cost of acquisition of outside-production machined parts is presented. The originality of the work lies in the method's inherent capability to generate multilevel and multifidelity cost relations for large volumes of parts utilizing process and supply chain costing data and varying degrees of part design definition information. Estimates can be generated throughout the life cycle of a part using different grades of the combined information available. As design development proceeds for a given part, additional design definition may be used within the developed method as it becomes available to improve the quality of the resulting estimate. Via a process of analogous classification, parts are classified into groups of increasing similarity using design-based descriptors. A parametric estimating method is then applied to each subgroup of the machined part commodity, refined as classification improves, from which a relationship linking design variables to manufacturing cycle time may be generated. A rate cost reflective of the supply chain is then applied to the cycle time estimate for a given part to arrive at an estimate of make cost, which is then totalled with the material and treatments cost components to give an overall estimate of unit acquisition cost. Both the rate charge applied and the treatments cost calculated for a given procured part are derived via ratio analysis.
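A toy roll-up of the estimate structure described above; the parametric cycle-time relation, shop rate and treatments ratio are hypothetical placeholders, not values from the paper:

```python
# Illustrative roll-up: make cost from a parametric cycle-time relation
# and a supply-chain rate, plus material and ratio-derived treatments cost.

def cycle_time_hours(volume_cm3: float, n_features: int) -> float:
    """Toy parametric relation linking design variables to cycle time."""
    return 0.2 + 0.002 * volume_cm3 + 0.05 * n_features

def unit_acquisition_cost(volume_cm3, n_features, rate_per_hour,
                          material_cost, treatments_ratio):
    make = cycle_time_hours(volume_cm3, n_features) * rate_per_hour
    treatments = treatments_ratio * make        # ratio analysis, as in the text
    return make + material_cost + treatments

# Example part: 150 cm^3, 12 machined features, 60 GBP/h shop rate,
# 18 GBP material, treatments at 10% of make cost.
print(f"{unit_acquisition_cost(150, 12, 60.0, 18.0, 0.10):.2f} GBP")
```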
Abstract:
This paper presents a novel approach to epipolar geometry estimation based on evolutionary agents. In contrast to conventional nonlinear optimization methods, the proposed technique uses each agent to denote a minimal subset from which the fundamental matrix is computed, and considers the data set of correspondences as a 1D cellular environment which the agents inhabit and in which they evolve. The agents execute evolutionary behaviours and evolve autonomously in a vast solution space to reach the optimal (or near-optimal) result. Three further techniques are then proposed to improve the searching ability and computational efficiency of the original agents. A subset template enables agents to collaborate more efficiently with each other and to inherit accurate information from the whole agent set. The competitive evolutionary agent (CEA) and finite multiple evolutionary agent (FMEA) apply better evolutionary strategies or decision rules, and focus on different aspects of the evolutionary process. Experimental results with both synthetic data and real images show that the proposed agent-based approaches perform better than other typical methods in terms of accuracy and speed, and are more robust to noise and outliers.
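For context, the minimal-subset computation that each agent would perform can be sketched with the standard 8-point algorithm (unnormalised here for brevity; Hartley normalisation is recommended in practice). The evolutionary search itself is not reproduced:

```python
import numpy as np

def eight_point(x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """x1, x2: 8x2 arrays of matched pixel coordinates; returns 3x3 F."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1)),
    ])                                   # rows encode x2^T F x1 = 0
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)          # enforce the rank-2 constraint
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt

def residuals(F, x1, x2):
    """Algebraic epipolar residuals |x2^T F x1| for scoring a subset."""
    h1 = np.column_stack([x1, np.ones(len(x1))])
    h2 = np.column_stack([x2, np.ones(len(x2))])
    return np.abs(np.sum(h2 * (h1 @ F.T), axis=1))
```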
Abstract:
Background. Kidney Disease Outcomes Quality Initiative (KDOQI) chronic kidney disease (CKD) guidelines have focused on the utility of the modified four-variable MDRD equation (now traceable by isotope dilution mass spectrometry, IDMS) in calculating estimated glomerular filtration rates (eGFRs). This study assesses the practical implications of eGFR correction equations for the range of creatinine assays currently used in the UK and further investigates the effect of these equations on the calculated prevalence of CKD in one UK region. Methods. Using simulation, a range of creatinine data (30–300 µmol/l) was generated for male and female patients aged 20–100 years. The maximum differences between the IDMS and MDRD equations for all 14 UK laboratory techniques for serum creatinine measurement were explored, with an average of individual eGFRs calculated according to MDRD and IDMS 30 ml/min/1.73 m². Observed data for 93,870 patients yielded a first MDRD eGFR 3 months later, of which 47,093 (71%) continued to have an eGFR
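For reference, a sketch of the four-variable MDRD calculation in its commonly quoted form (constant 186 for conventional creatinine calibration, 175 for IDMS-traceable assays); the constants are indicative only, and laboratory guidance should be consulted for the exact calibrated equation:

```python
def egfr_mdrd(creat_umol_l: float, age: float, female: bool,
              black: bool, idms: bool = True) -> float:
    """Estimated GFR in ml/min/1.73 m^2 from serum creatinine in umol/l."""
    scr_mg_dl = creat_umol_l / 88.4          # convert umol/l -> mg/dl
    egfr = (175.0 if idms else 186.0) * scr_mg_dl**-1.154 * age**-0.203
    if female:
        egfr *= 0.742                        # sex adjustment factor
    if black:
        egfr *= 1.212                        # ethnicity factor (1.210 in some sources)
    return egfr

# Example: the same patient under both calibrations.
for idms in (False, True):
    print(f"IDMS={idms}: "
          f"{egfr_mdrd(120.0, 60, female=True, black=False, idms=idms):.1f}")
```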