328 results for Glomerular filtration rate estimation
Analytical modeling and sensitivity analysis for travel time estimation on signalized urban networks
Abstract:
This paper presents a model for estimating average travel time and its variability on signalized urban networks using cumulative plots. The plots are generated according to the availability of data: a) case-D, detector data only; b) case-DS, detector data and signal timings; and c) case-DSS, detector data, signal timings and saturation flow rate. The performance of the model across different degrees of saturation and different detector detection intervals is consistent for case-DSS and case-DS, whereas for case-D it is inconsistent. Sensitivity analysis for case-D indicates that the model is sensitive to the detection interval and to the signal timings within that interval. When the detection interval is an integral multiple of the signal cycle, both accuracy and reliability are low, whereas for a detection interval of around 1.5 times the signal cycle, both are high.
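The cumulative-plot idea above can be illustrated with a minimal sketch: given cumulative arrival counts upstream and cumulative departure counts downstream, the area between the two curves divided by the total vehicle count gives the average travel time. All names and the toy data below are illustrative, not the paper's model.

```python
# Illustrative sketch: average travel time from cumulative plots.
# arrivals[i] = cumulative count at the upstream detector at times[i];
# departures[i] = cumulative count downstream. Under FIFO, the area
# between the curves / total vehicles = average travel time.

def average_travel_time(times, arrivals, departures):
    """Trapezoidal area between cumulative curves / total vehicles."""
    area = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        gap_prev = arrivals[i - 1] - departures[i - 1]
        gap_curr = arrivals[i] - departures[i]
        area += 0.5 * (gap_prev + gap_curr) * dt
    total_vehicles = departures[-1]
    return area / total_vehicles

# Toy example: 10 vehicles, each taking 5 s to traverse the link.
times      = [0, 5, 10, 15]
arrivals   = [0, 10, 10, 10]   # all arrive between t=0 and t=5
departures = [0, 0, 10, 10]    # all depart between t=5 and t=10
```

With this data the function returns 5.0 s, matching the uniform 5 s delay built into the example.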
Abstract:
The success rate of carrier phase ambiguity resolution (AR) is the probability that the ambiguities are successfully fixed to their correct integer values. In existing works, an exact success rate formula for the integer bootstrapping estimator has been used as a sharp lower bound for the integer least squares (ILS) success rate. Rigorous computation of the success rate for the more general ILS solutions has been considered difficult because of the complexity of the ILS ambiguity pull-in region and the computational load of integrating the multivariate probability density function. The contributions of this work are twofold. First, the pull-in region, mathematically expressed as the vertices of a polyhedron, is represented by a multi-dimensional grid, at which the cumulative probability can be integrated with the multivariate normal cumulative density function (mvncdf) available in Matlab. The bivariate case is studied, where the pull-in region is usually defined as a hexagon and the probability is easily obtained using mvncdf at all the grid points within the convex polygon. Second, the paper compares the computed integer rounding and integer bootstrapping success rates, and the lower and upper bounds of the ILS success rates, with the actual ILS AR success rates obtained from a 24 h GPS data set for a 21 km baseline. The results demonstrate that the upper bound on the ILS AR probability given in the existing literature agrees well with the actual ILS success rate, although the success rate computed with the integer bootstrapping method is also quite a sharp approximation to it. The results also show that epoch-to-epoch variations or uncertainty in the unit-weight variance estimates significantly affect the success rates computed with the different methods, and thus deserve more attention in order to obtain useful success probability predictions.
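For context, the exact integer bootstrapping success rate mentioned above has a well-known closed form, P = Π_i (2Φ(1/(2σ_i)) − 1), where the σ_i are the conditional standard deviations of the (decorrelated) ambiguities. A minimal sketch, with purely illustrative σ values:

```python
# Exact integer bootstrapping success rate:
#   P = prod_i ( 2*Phi(1/(2*sigma_i)) - 1 )
# sigma_i: conditional standard deviations of the decorrelated
# ambiguities (values used here are illustrative only).
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bootstrapping_success_rate(cond_stds):
    p = 1.0
    for sigma in cond_stds:
        p *= 2.0 * phi(1.0 / (2.0 * sigma)) - 1.0
    return p
```

Smaller conditional standard deviations (a stronger model) push the success rate toward 1, which is why this quantity serves as a sharp lower bound for the ILS success rate.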
Abstract:
We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
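The selection rule described above, minimizing empirical risk plus a complexity penalty over a nested model sequence, can be sketched in a few lines. The numbers below are illustrative, not from the paper:

```python
# Minimal sketch of complexity-penalized model selection over a
# sequence of nested models: pick the index minimizing
#   empirical_risk + penalty.
# Risks and penalties below are illustrative placeholders.

def select_model(empirical_risks, penalties):
    scores = [r + p for r, p in zip(empirical_risks, penalties)]
    return min(range(len(scores)), key=scores.__getitem__)

# Nested models: empirical risk decreases with model complexity,
# while the penalty grows.
risks     = [0.50, 0.30, 0.22, 0.21]
penalties = [0.01, 0.05, 0.10, 0.20]
```

Here `select_model(risks, penalties)` picks index 2: the marginal risk reduction from the largest model (0.01) no longer pays for its extra penalty (0.10), which is exactly the trade-off the oracle inequality controls.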
Abstract:
This paper provides a fundamental understanding of the use of cumulative plots for travel time estimation on signalized urban networks. Analytical modeling is performed to generate cumulative plots according to the availability of data: a) Case-D, detector data only; b) Case-DS, detector data and signal timings; and c) Case-DSS, detector data, signal timings and saturation flow rate. An empirical study and a sensitivity analysis based on simulation experiments show consistent performance for Case-DS and Case-DSS, whereas for Case-D the performance is inconsistent. Case-D is sensitive to the detection interval and to the signal timings within that interval. When the detection interval is an integral multiple of the signal cycle, both accuracy and reliability are low, whereas for a detection interval of around 1.5 times the signal cycle, both are high.
Abstract:
The estimation of phylogenetic divergence times from sequence data is an important component of many molecular evolutionary studies. There is now a general appreciation that the procedure of divergence dating is considerably more complex than that initially described in the 1960s by Zuckerkandl and Pauling (1962, 1965). In particular, there has been much critical attention toward the assumption of a global molecular clock, resulting in the development of increasingly sophisticated techniques for inferring divergence times from sequence data. In response to the documentation of widespread departures from clocklike behavior, a variety of local- and relaxed-clock methods have been proposed and implemented. Local-clock methods permit different molecular clocks in different parts of the phylogenetic tree, thereby retaining the advantages of the classical molecular clock while casting off the restrictive assumption of a single, global rate of substitution (Rambaut and Bromham 1998; Yoder and Yang 2000).
Abstract:
Despite recent methodological advances in inferring the time-scale of biological evolution from molecular data, the fundamental question of whether our substitution models are sufficiently well specified to accurately estimate branch-lengths has received little attention. I examine this implicit assumption of all molecular dating methods, on a vertebrate mitochondrial protein-coding dataset. Comparison with analyses in which the data are RY-coded (AG → R; CT → Y) suggests that even rates-across-sites maximum likelihood greatly under-compensates for multiple substitutions among the standard (ACGT) NT-coded data, which has been subject to greater phylogenetic signal erosion. Accordingly, the fossil record indicates that branch-lengths inferred from the NT-coded data translate into divergence time overestimates when calibrated from deeper in the tree. Intriguingly, RY-coding led to the opposite result. The underlying NT and RY substitution model misspecifications likely relate respectively to “hidden” rate heterogeneity and changes in substitution processes across the tree, for which I provide simulated examples. Given the magnitude of the inferred molecular dating errors, branch-length estimation biases may partly explain current conflicts with some palaeontological dating estimates.
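The RY-recoding used above (purines A, G → R; pyrimidines C, T → Y) is a simple deterministic mapping; a minimal sketch:

```python
# RY-recoding of nucleotide sequences: purines A,G -> R,
# pyrimidines C,T -> Y (lowercase handled likewise). Characters
# outside ACGT (gaps, ambiguity codes) are left unchanged here.

RY_MAP = str.maketrans("AGCTagct", "RRYYrryy")

def ry_code(seq):
    return seq.translate(RY_MAP)
```

Because transitions (A↔G, C↔T) vanish under this coding, only transversions remain, which is why RY-coded data are less affected by the "hidden" rate heterogeneity the abstract discusses.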
Abstract:
Objectives: To evaluate differences among patients with different clinical features of ALS, we used our Bayesian method of motor unit number estimation (MUNE). Methods: We performed serial MUNE studies on 42 subjects who fulfilled the diagnostic criteria for ALS during the course of their illness. Subjects were classified into three subgroups according to whether they had typical ALS (with both upper motor neurone (UMN) and lower motor neurone (LMN) signs), predominantly upper motor neurone weakness with only minor LMN signs, or predominantly lower motor neurone weakness with only minor UMN signs. In all subjects we calculated the half-life of MUs, defined as the expected time for the number of MUs to halve, in one or more of the abductor digiti minimi (ADM), abductor pollicis brevis (APB) and extensor digitorum brevis (EDB) muscles. Results: The mean half-life of MUs was less in subjects who had typical ALS with both upper and lower motor neurone signs than in those with predominantly upper or predominantly lower motor neurone weakness. In 18 subjects we analysed the estimated size of the MUs and demonstrated the appearance of large MUs in subjects with upper or lower motor neurone predominant weakness. We found that the appearance of large MUs was correlated with the half-life of MUs. Conclusions: Patients with different clinical features of ALS have different rates of loss and different sizes of MUs. Significance: These findings could indicate differences in disease pathogenesis.
Abstract:
This paper describes system identification, estimation and control of translational motion and heading angle for a cost-effective open-source quadcopter, the MikroKopter. The dynamics of its built-in sensors, roll and pitch attitude controller, and system latencies are determined and used to design a computationally inexpensive multi-rate velocity estimator that fuses data from the built-in inertial sensors and a low-rate onboard laser range finder. Control is performed using a nested loop structure that is also computationally inexpensive and incorporates the different sensors. Experimental results for the estimator and for closed-loop positioning are presented and compared with ground truth from a motion capture system.
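One common pattern for this kind of multi-rate fusion is to integrate the high-rate accelerometer between low-rate corrections; the sketch below shows that pattern only, with an assumed blending gain `k`, and is not the paper's actual estimator.

```python
# Hypothetical multi-rate fusion sketch: integrate high-rate
# accelerometer data, then blend in occasional low-rate velocity
# measurements (e.g. from a laser scan matcher). The gain k is an
# assumed tuning constant; the paper's estimator is more elaborate.

class VelocityEstimator:
    def __init__(self, k=0.2):
        self.v = 0.0   # velocity estimate, m/s
        self.k = k     # correction gain in (0, 1]

    def predict(self, accel, dt):
        """High-rate IMU step: integrate acceleration."""
        self.v += accel * dt

    def correct(self, v_meas):
        """Low-rate step: pull the estimate toward the measurement."""
        self.v += self.k * (v_meas - self.v)
```

Between corrections the estimate tracks the IMU; each correction removes a fraction `k` of the accumulated drift, which keeps the estimator cheap enough for an onboard loop.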
Abstract:
Most unsignalised intersection capacity calculation procedures are based on gap acceptance models, and the accuracy of critical gap estimation affects the accuracy of capacity and delay estimation. Several methods have been published to estimate drivers' sample mean critical gap, with the Maximum Likelihood Estimation (MLE) technique regarded as the most accurate. This study assesses three novel methods, the Average Central Gap (ACG) method, the Strength Weighted Central Gap (SWCG) method, and the Mode Central Gap (MCG) method, against MLE for their fidelity in rendering true sample mean critical gaps. A Monte Carlo event-based simulation model was used to draw the maximum rejected gap and the accepted gap for each of a sample of 300 drivers across 32 simulation runs. The simulation mean critical gap was varied between 3 s and 8 s, while the offered gap rate was varied between 0.05 veh/s and 0.55 veh/s. This study affirms that MLE provides a close to perfect fit to simulation mean critical gaps across a broad range of conditions. The MCG method also provides an almost perfect fit and has superior computational simplicity and efficiency to MLE. The SWCG method performs robustly under high flows but poorly under low to moderate flows. Further research using field traffic data is recommended, under a variety of minor- and major-stream flow conditions and for a variety of minor stream movement types, to compare critical gap estimates from MLE and MCG. Should the MCG method prove as robust as MLE, serious consideration should be given to its adoption for estimating critical gap parameters in guidelines.
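The event-based draw described above, each simulated driver rejects offered gaps below their critical gap and accepts the first one at or above it, can be sketched as follows. The exponential headway model and parameter values are assumptions for illustration:

```python
# Illustrative Monte Carlo draw of one driver's maximum rejected gap
# and accepted gap. Offered gaps are drawn from an exponential
# headway distribution with the given offered gap rate q (veh/s);
# the driver's critical gap is fixed and known to the simulator.
import random

def draw_gaps(critical_gap, q, rng):
    max_rejected = 0.0
    while True:
        gap = rng.expovariate(q)          # next offered gap, seconds
        if gap >= critical_gap:
            return max_rejected, gap      # (max rejected, accepted)
        max_rejected = max(max_rejected, gap)

rng = random.Random(42)
mr, acc = draw_gaps(critical_gap=4.0, q=0.3, rng=rng)
```

Estimators such as MLE or MCG then work backwards from many such (maximum rejected, accepted) pairs, since the critical gap is bracketed between them.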
Abstract:
Objective: To use our Bayesian method of motor unit number estimation (MUNE) to evaluate lower motor neuron degeneration in ALS. Methods: In subjects with ALS we performed serial MUNE studies. We examined the repeatability of the test and then determined whether the loss of MUs was better fitted by an exponential or a Weibull distribution. Results: The decline in motor unit (MU) numbers was well fitted by an exponential decay curve. We calculated the half-life of MUs in the abductor digiti minimi (ADM), abductor pollicis brevis (APB) and/or extensor digitorum brevis (EDB) muscles. The mean half-life of the MUs of the ADM muscle was greater than those of the APB or EDB muscles. The half-life of MUs was less in the ADM muscle of subjects with upper limb onset than in those with lower limb onset. Conclusions: The rate of loss of lower motor neurons in ALS is exponential, the motor units of the APB decay more quickly than those of the ADM muscle, and the rate of loss of motor units is greater at the site of onset of disease. Significance: This shows that the Bayesian MUNE method is useful in following the course and exploring the clinical features of ALS.
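The half-life computation from an exponential decay fit is standard: if N(t) = N0·exp(−λt), then t½ = ln 2 / λ. A minimal log-linear least-squares sketch (the counts below are synthetic, not patient data):

```python
# Half-life from an exponential decay fit N(t) = N0 * exp(-lam * t):
#   t_half = ln(2) / lam.
# lam is obtained by ordinary least squares on log(N) vs t.
import math

def half_life(times, counts):
    n = len(times)
    xbar = sum(times) / n
    ybar = sum(math.log(c) for c in counts) / n
    sxy = sum((t - xbar) * (math.log(c) - ybar)
              for t, c in zip(times, counts))
    sxx = sum((t - xbar) ** 2 for t in times)
    lam = -sxy / sxx                  # decay rate from the slope
    return math.log(2.0) / lam

# Synthetic MU counts decaying at lam = 0.1 per month.
times  = [0, 6, 12, 18]
counts = [100.0 * math.exp(-0.1 * t) for t in times]
```

For these synthetic data the fit recovers λ = 0.1 exactly, giving a half-life of ln 2 / 0.1 ≈ 6.93 months.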
Abstract:
In practical active noise control (ANC) systems, the secondary path usually has time-varying behavior. In such cases, an online secondary path modeling method that uses white noise as a training signal is required to ensure convergence of the system. Modeling accuracy and convergence rate increase when white noise with a larger variance is used. However, the larger variance also increases the residual noise, which degrades system performance and additionally causes instability problems in feedback structures. Moreover, a sudden change in the secondary path can lead to divergence of the online secondary path modeling filter. To overcome these problems, this paper proposes a new approach for online secondary path modeling in feedback ANC systems. The proposed algorithm exploits the advantages of white noise with a larger variance to model the secondary path, but stops the injection at the optimum point to increase performance and to prevent the destabilizing effect of the white noise. Instead of injecting the white noise continuously, the algorithm reactivates the injection when a sudden change in the secondary path occurs during operation, in order to correct the secondary path estimate. In addition, the proposed method models the secondary path without needing an offline estimate of it. These features increase the convergence rate and modeling accuracy, resulting in high system performance. Computer simulation results presented in this paper indicate the effectiveness of the proposed method.
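The core of white-noise secondary path modeling is a standard LMS system identification loop; the sketch below shows only that core (FIR path, assumed step size, no measurement noise), not the paper's stop/reactivate logic.

```python
# Minimal LMS sketch of secondary path modeling: inject white noise,
# observe it filtered through the (unknown) secondary path, and adapt
# an FIR estimate s_hat. Step size mu and path taps are illustrative.
import random

def identify_secondary_path(s_true, n_samples=20000, mu=0.01, seed=1):
    rng = random.Random(seed)
    L = len(s_true)
    s_hat = [0.0] * L
    buf = [0.0] * L                                # recent noise samples
    for _ in range(n_samples):
        buf = [rng.gauss(0.0, 1.0)] + buf[:-1]     # new white-noise sample
        d = sum(h * x for h, x in zip(s_true, buf))   # true path output
        y = sum(h * x for h, x in zip(s_hat, buf))    # model output
        e = d - y                                     # modeling error
        s_hat = [h + mu * e * x for h, x in zip(s_hat, buf)]
    return s_hat

s_hat = identify_secondary_path([0.5, -0.3, 0.1])
```

With a larger noise variance the error gradient is stronger and convergence faster, which is exactly the accuracy/residual-noise trade-off the abstract describes.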
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in large positioning offsets of up to several meters without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is often determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk: depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user requirement. Undetected incorrect integers lead to hazardous results and should be strictly controlled; in ambiguity resolution, this missed-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied in the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. A criteria table for the ratio test is computed from extensive data simulations, and real-time users can determine the ratio test criterion by looking it up. The method has previously been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis testing theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined.
Finally, the factors that influence the fixed failure rate ratio test threshold are discussed on the basis of extensive data simulation. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method when a proper stochastic model is used.
Abstract:
This study aimed to quantify the efficiency of deep bag and electrostatic filters, and to assess the influence of ventilation systems using these filters on indoor fine (<2.5 µm) and ultrafine particle concentrations in commercial office buildings. Measurements and modelling were conducted for different indoor and outdoor particle source scenarios at three office buildings in Brisbane, Australia. Overall, the in-situ efficiency of the deep bag filters, measured for particles in the 6 to 3000 nm size range, ranged from 26.3 to 46.9% across the three buildings, while the in-situ efficiency of the electrostatic filter in one building was 60.2%. The highest particle number (PN) and PM2.5 concentrations in one of the office buildings (up to 131% and 31% higher than the other two buildings, respectively) were due to the proximity of the building's HVAC air intakes to a nearby bus-only roadway, as well as its higher outdoor ventilation rate. The lowest PN and PM2.5 concentrations (up to 57% and 24% lower than the other two buildings, respectively) were measured in a building that utilised both outdoor and mixing air filters in its HVAC system. Indoor PN concentrations were strongly influenced by outdoor levels and were significantly higher during rush hours (up to 41%) and nucleation events (up to 57%) than during working hours, for all three buildings. This is the first time that the influence of new particle formation on indoor particle concentrations has been identified and quantified. A dynamic model for indoor PN concentration, which performed adequately in this study, also revealed that using mixing/outdoor air filters can significantly reduce indoor particle concentrations in buildings where indoor air is strongly influenced by outdoor particle levels. This work provides a scientific basis for the selection and location of appropriate filters and outdoor air intakes during the design of new, or the upgrade of existing, building HVAC systems.
The results also serve to provide a better understanding of indoor particle dynamics and behaviours under different ventilation and particle source scenarios, and highlight effective methods to reduce exposure to particles in commercial office buildings.
Abstract:
In this paper, we propose a novel online hidden Markov model (HMM) parameter estimator based on Kerridge inaccuracy rate (KIR) concepts. Under mild identifiability conditions, we prove that our online KIR-based estimator is strongly consistent. In simulation studies, we illustrate the convergence behaviour of our proposed online KIR-based estimator and provide a counter-example illustrating the local convergence properties of the well known recursive maximum likelihood estimator (arguably the best existing solution).
Abstract:
The application of Bluetooth (BT) technology to transportation has enabled researchers to make accurate travel time observations on freeway and arterial roads. Bluetooth traffic data are generally incomplete, as they relate only to those vehicles that are equipped with Bluetooth devices and are detected by the Bluetooth sensors of the road network. The fraction of detected vehicles relative to the total number of transiting vehicles is often referred to as the Bluetooth Penetration Rate (BTPR). The aim of this study is to precisely define the spatio-temporal relationship between the quantities that become available through the partial, noisy BT observations and the hidden variables that describe the actual dynamics of vehicular traffic. To do so, we propose to incorporate a multi-class traffic model into a Sequential Monte Carlo estimation algorithm. Our framework has been applied to empirical travel time investigations in the Brisbane metropolitan region.
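As a minimal illustration of the BTPR definition above: if the penetration rate is known (or estimated), total flow can be recovered by scaling the Bluetooth counts. The function and values are illustrative only; the paper's estimator handles this within a particle-filter framework rather than by simple scaling.

```python
# Sketch of scaling Bluetooth detections by an assumed penetration
# rate (BTPR = detected / total) to estimate total flow.

def estimate_total_flow(bt_detections, btpr):
    if not 0.0 < btpr <= 1.0:
        raise ValueError("BTPR must be in (0, 1]")
    return bt_detections / btpr
```

For example, 50 BT-matched vehicles at a 25% penetration rate imply roughly 200 transiting vehicles; the noise in both quantities is what motivates the Sequential Monte Carlo treatment.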