992 results for "Fitting parameters"
Abstract:
A new online method is presented for estimating the angular random walk and rate random walk coefficients of inertial measurement unit gyros and accelerometers. In the online method, a state-space model is proposed, together with recursive parameter estimators for quantities previously measured by offline data techniques such as the Allan variance method, which has large offline computational effort and data storage requirements. The technique proposed here requires no data storage and a computational effort of approximately 100 calculations per data sample.
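For orientation, the offline Allan variance baseline that the abstract contrasts against can be sketched in a few lines. This is a generic illustration, not the paper's method; the white-noise model, ARW coefficient N = 0.05 and sample period are invented values:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of rate samples y, cluster size m."""
    K = len(y) // m                               # number of clusters
    means = y[:K * m].reshape(K, m).mean(axis=1)  # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)

# For pure angular random walk (white rate noise with ARW coefficient N),
# theory gives AVAR(tau) = N**2 / tau, i.e. slope -1/2 on a log-log plot.
rng = np.random.default_rng(0)
dt, N = 0.01, 0.05                  # sample period (s), ARW coefficient
y = (N / np.sqrt(dt)) * rng.standard_normal(200_000)
tau = 100 * dt
avar = allan_variance(y, 100)       # should be close to N**2 / tau
```

The online method of the abstract replaces exactly this kind of batch computation with a recursive estimator, so the full sample array never needs to be stored.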
Abstract:
In this paper we propose and study low-complexity algorithms for on-line estimation of hidden Markov model (HMM) parameters. The estimates approach the true model parameters as the measurement noise approaches zero, but otherwise give improved estimates, albeit with bias. On a finite data set in the high-noise case, the bias may not be significantly more severe than for a higher-complexity asymptotically optimal scheme. Our algorithms require O(N³) calculations per time instant, where N is the number of states. Previous algorithms based on earlier hidden Markov model signal processing methods, including the expectation-maximisation (EM) algorithm, require O(N⁴) calculations per time instant.
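The per-sample recursions in question build on the standard normalized HMM forward filter, which itself costs O(N²) per sample. A minimal sketch with a Gaussian-output two-state chain (all numbers illustrative, not from the paper):

```python
import numpy as np

def hmm_filter(y, A, means, sigma):
    """Normalized forward (filtering) recursion for a Gaussian-output HMM;
    A[i, j] = P(x_{t+1} = j | x_t = i).  O(N^2) work per sample."""
    alpha = np.full(A.shape[0], 1.0 / A.shape[0])        # uniform prior
    for obs in y:
        b = np.exp(-0.5 * ((obs - means) / sigma) ** 2)  # state likelihoods
        alpha = b * (A.T @ alpha)                        # predict + update
        alpha /= alpha.sum()                             # keep it a distribution
    return alpha

A = np.array([[0.95, 0.05], [0.10, 0.90]])
means, sigma = np.array([0.0, 1.0]), 0.3
rng = np.random.default_rng(1)
x, states = 0, []
for _ in range(500):                     # simulate the hidden chain
    states.append(x)
    x = rng.choice(2, p=A[x])
y = means[np.array(states)] + sigma * rng.standard_normal(500)
alpha = hmm_filter(y, A, means, sigma)   # filtered state distribution at t=500
```

On-line parameter estimators then update model quantities from such filtered statistics at each sample rather than re-running EM over the whole record.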
Abstract:
Gaining invariance to camera and illumination variations has been a well-investigated topic in the Active Appearance Model (AAM) fitting literature. The major problem lies in the inability of the appearance parameters of the AAM to generalize to unseen conditions. An attractive approach for gaining invariance is to fit an AAM to a multiple filter response (e.g. Gabor) representation of the input image. Naively applying this concept with a traditional AAM is computationally prohibitive, especially as the number of filter responses increases. In this paper, we present a computationally efficient AAM fitting algorithm based on the Lucas-Kanade (LK) algorithm posed in the Fourier domain that affords invariance to both expression and illumination. We refer to this as a Fourier AAM (FAAM), and show that this method gives substantial improvement in person-specific AAM fitting performance over traditional AAM fitting methods.
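The computational trick behind such Fourier-domain fitting is Parseval's theorem: the sum-of-squared-differences cost summed over every filter response equals a single weighted SSD in the frequency domain, so the cost stops growing with the size of the filter bank. A numerical check of that identity with random stand-ins (not the paper's actual Gabor bank or AAM cost):

```python
import numpy as np

rng = np.random.default_rng(7)
I = rng.standard_normal((32, 32))            # input image (stand-in)
T = rng.standard_normal((32, 32))            # template (stand-in)
filters = rng.standard_normal((4, 32, 32))   # stand-ins for a Gabor bank
Ff = np.fft.fft2(filters)                    # filter spectra (last two axes)

# spatial-domain cost: one SSD per filter response (cost grows with the bank)
E = np.fft.fft2(I - T)
spatial = sum(np.sum(np.abs(np.fft.ifft2(E * Ff[k])) ** 2) for k in range(4))

# Fourier-domain cost: ONE weighted SSD, weight = summed filter power spectrum
W = np.sum(np.abs(Ff) ** 2, axis=0)
fourier = np.sum(W * np.abs(E) ** 2) / I.size   # 1/N from Parseval's theorem
```

The two costs agree to machine precision, which is why fitting against many filter responses can be folded into a single weighted Fourier-domain objective.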
Abstract:
Whether the first daily disposable soft contact lens to enter the market in 1994 was the Premier lens (Award Technology, Scotland, UK; subsequently purchased by Bausch & Lomb, Rochester, New York, USA) or the 1-Day Acuvue lens (Johnson and Johnson Vision Care, Jacksonville, Florida, USA) has long been a matter of bitter dispute [1], but whatever the answer, this year marks the 20th anniversary of the launch of this modality of lens wear...
Abstract:
Most of the existing algorithms for approximate Bayesian computation (ABC) assume that it is feasible to simulate pseudo-data from the model at each iteration. However, the computational cost of these simulations can be prohibitive for high-dimensional data. An important example is the Potts model, which is commonly used in image analysis. Images encountered in real-world applications can have millions of pixels, so scalability is a major concern. We apply ABC with a synthetic likelihood to the hidden Potts model with additive Gaussian noise. Using a pre-processing step, we fit a binding function to model the relationship between the model parameters and the synthetic likelihood parameters. Our numerical experiments demonstrate that the precomputed binding function dramatically improves the scalability of ABC, reducing the average runtime required for model fitting from 71 hours to only 7 minutes. We also illustrate the method by estimating the smoothing parameter for remotely sensed satellite imagery. Without precomputation, Bayesian inference is impractical for datasets of that scale.
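The binding-function idea can be illustrated on a deliberately trivial problem: simulate on a parameter grid once, fit a regression from parameter to summary-statistic moments, then evaluate a Gaussian synthetic likelihood with no further simulation. Everything below (one parameter, a sample-mean summary, a stand-in simulator) is a toy, not the paper's Potts model:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_summary(theta, n=100):
    """Stand-in simulator: summary statistic (mean) of n pseudo-data points."""
    return rng.normal(theta, 1.0, size=n).mean()

# pre-processing: fit the binding function theta -> (mean, sd) of the summary
grid = np.linspace(-3.0, 3.0, 31)
sims = np.array([[simulate_summary(th) for _ in range(50)] for th in grid])
mu_fit = np.polyfit(grid, sims.mean(axis=1), 1)   # linear binding function
sd_hat = sims.std(axis=1).mean()                  # roughly constant here

def synthetic_loglik(theta, s_obs):
    """Gaussian synthetic log-likelihood via the precomputed binding
    function -- no fresh simulation at evaluation time."""
    mu = np.polyval(mu_fit, theta)
    return -0.5 * ((s_obs - mu) / sd_hat) ** 2 - np.log(sd_hat)

s_obs = simulate_summary(1.5)                     # "observed" summary
thetas = np.linspace(-3.0, 3.0, 601)
theta_map = thetas[np.argmax([synthetic_loglik(th, s_obs) for th in thetas])]
```

All the simulation cost sits in the pre-processing step; afterwards each likelihood evaluation is a couple of arithmetic operations, which is the source of the runtime reduction the abstract reports.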
Abstract:
The preservation technique of drying offers a significant increase in the shelf life of food materials, along with the modification of quality attributes due to simultaneous heat and mass transfer. Variations in porosity are just one of the microstructural changes that take place during the drying of most food materials. Some studies found that there may be a relationship between porosity and the properties of dried foods. However, no conclusive relationship has yet been established in the literature. This paper presents an overview of the factors that influence porosity, as well as the effects of porosity on dried food quality attributes. The effect of heat and mass transfer on porosity is also discussed along with porosity development in various drying methods. After an extensive review of the literature concerning the study of porosity, it emerges that a relationship between process parameters, food qualities, and sample properties can be established. Therefore, we propose a hypothesis of relationships between process parameters, product quality attributes, and porosity.
Abstract:
In vitro studies and mathematical models are now being widely used to study the underlying mechanisms driving the expansion of cell colonies. This can improve our understanding of cancer formation and progression. Although much progress has been made in terms of developing and analysing mathematical models, far less progress has been made in terms of understanding how to estimate model parameters using experimental in vitro image-based data. To address this issue, a new approximate Bayesian computation (ABC) algorithm is proposed to estimate key parameters governing the expansion of melanoma cell (MM127) colonies, including cell diffusivity, D, cell proliferation rate, λ, and cell-to-cell adhesion, q, in two experimental scenarios, namely with and without a chemical treatment to suppress cell proliferation. Even when little prior biological knowledge about the parameters is assumed, all parameters are precisely inferred with a small posterior coefficient of variation, approximately 2–12%. The ABC analyses reveal that the posterior distributions of D and q depend on the experimental elapsed time, whereas the posterior distribution of λ does not. The posterior mean values of D are in the ranges 226–268 µm²h⁻¹ and 311–351 µm²h⁻¹, and those of q in the ranges 0.23–0.39 and 0.32–0.61, for the experimental periods of 0–24 h and 24–48 h, respectively. Furthermore, we found that the posterior distribution of q also depends on the initial cell density, whereas the posterior distributions of D and λ do not. The ABC approach also enables information from the two experiments to be combined, resulting in greater precision for all estimates of D and λ.
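The flavor of such an analysis can be shown with plain rejection ABC on a one-parameter stand-in problem: draw from a vague prior, simulate, and keep draws whose summary statistic lands within a tolerance of the observed one. The simulator, prior range, tolerance and "observed" value below are all hypothetical, not the paper's cell-colony model:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(D, n=200):
    """Stand-in simulator: noisy summary statistic centred on parameter D."""
    return rng.normal(D, 25.0, size=n).mean()

s_obs = 250.0                                  # hypothetical observed summary
prior_draws = rng.uniform(0.0, 500.0, 20_000)  # vague prior on D
accepted = np.array([d for d in prior_draws
                     if abs(simulate(d) - s_obs) < 5.0])  # ABC tolerance
post_mean = accepted.mean()
post_cv = accepted.std() / post_mean           # posterior coeff. of variation
```

Even with a vague prior, the accepted draws concentrate tightly around the value consistent with the data, giving a small posterior coefficient of variation of the kind the abstract reports.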
Abstract:
Pilot and industrial scale dilute acid pretreatment data can be difficult to obtain due to the significant infrastructure investment required. Consequently, models of dilute acid pretreatment by necessity use laboratory scale data to determine kinetic parameters and make predictions about optimal pretreatment conditions at larger scales. In order for these recommendations to be meaningful, the ability of laboratory scale models to predict pilot and industrial scale yields must be investigated. A mathematical model of the dilute acid pretreatment of sugarcane bagasse has previously been developed by the authors. This model was able to successfully reproduce the experimental yields of xylose and short chain xylooligomers obtained at the laboratory scale. In this paper, the ability of the model to reproduce pilot scale yield and composition data is examined. It was found that in general the model over-predicted the pilot scale reactor yields by a significant margin. Models that appear very promising at the laboratory scale may have limitations when predicting yields on a pilot or industrial scale. It is difficult to comment on whether there are any consistent trends in optimal operating conditions between reactor scale and laboratory scale hydrolysis due to the limited reactor datasets available. Further investigation is needed to determine whether the model has some efficacy when the kinetic parameters are re-evaluated by parameter fitting to reactor-scale data; however, this requires the compilation of larger datasets. Alternatively, laboratory scale mathematical models may have enhanced utility for predicting larger scale reactor performance if bulk mass transport and fluid flow considerations are incorporated into the fibre scale equations. This work reinforces the need for appropriate attention to be paid to pilot scale experimental development when moving from laboratory to pilot and industrial scales for new technologies.
Abstract:
Traditionally, it is not easy to carry out tests to identify modal parameters from existing railway bridges because of the testing conditions and the complicated nature of civil structures. A six-year (2007-2012) research program was conducted to monitor a group of 25 railway bridges. One of the tasks was to devise guidelines for identifying their modal parameters. This paper presents the experience acquired from such identification. The modal analysis of four representative bridges of this group (B5, B15, B20 and B58A, on the Carajás railway in northern Brazil) is reported, using three different excitation sources: drop weight, free vibration after train passage, and ambient conditions. To extract the dynamic parameters from the recorded data, Stochastic Subspace Identification and Frequency Domain Decomposition methods were used. Finite-element models were constructed to facilitate the dynamic measurements. The results show good agreement between the measured and computed natural frequencies and mode shapes. The findings provide some guidelines on methods of excitation, record length, and methods of modal analysis, including the use of projected channels and harmonic detection, helping researchers and maintenance teams obtain good dynamic characteristics from measurement data.
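Of the two identification methods, Frequency Domain Decomposition is the easier to sketch: take the SVD of the cross-spectral density matrix at each frequency line and read modes off the peaks of the first singular value. A minimal two-channel illustration with a synthetic 5 Hz mode (simulated signals, not bridge data):

```python
import numpy as np
from scipy.signal import csd

def fdd_first_singular_value(Y, fs, nperseg=1024):
    """FDD sketch: SVD of the cross-spectral density matrix G(f) at every
    frequency line; peaks of the first singular value indicate modes."""
    n_ch = Y.shape[0]
    f, _ = csd(Y[0], Y[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(Y[i], Y[j], fs=fs, nperseg=nperseg)
    s1 = np.array([np.linalg.svd(Gk, compute_uv=False)[0] for Gk in G])
    return f, s1

fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(4)
mode = np.sin(2 * np.pi * 5.0 * t)                  # dominant 5 Hz "mode"
Y = np.vstack([mode + 0.1 * rng.standard_normal(t.size),
               0.8 * mode + 0.1 * rng.standard_normal(t.size)])
f, s1 = fdd_first_singular_value(Y, fs)
f_peak = f[np.argmax(s1)]                           # identified frequency
```

At the peak frequency, the first singular vector of G approximates the mode shape; the record-length and harmonic-detection guidelines of the paper address when such peaks are trustworthy on real bridge data.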
Abstract:
Modal flexibility is a widely accepted technique to detect structural damage using vibration characteristics. Its application to detect damage in long span large diameter cables, such as those used as suspension bridge main cables, has not received much attention. This paper uses the modal flexibility method incorporating two damage indices (DIs), based on lateral and vertical modes, to localize damage in such cables. The competence of these DIs in damage detection is tested using the numerically obtained vibration characteristics of a suspended cable in both intact and damaged states. Three single damage cases and one multiple damage case are considered. The impact of random measurement noise in the modal data on the damage localization capability of these two DIs is next examined. Long span large diameter cables are characterized by two critical cable parameters, namely bending stiffness and sag-extensibility. The influence of these parameters on the damage localization capability of the two DIs is evaluated by a parametric study with two single damage cases. Results confirm that the damage index based on lateral vibration modes has the ability to successfully detect and locate damage in suspended cables with 5% noise in modal data for a range of cable parameters. This simple approach can therefore be extended for timely damage detection in cables of suspension bridges and thereby enhance their serviceability during their life spans.
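As a reminder of the underlying quantity, the modal flexibility matrix is F = Σᵢ φᵢφᵢᵀ/ωᵢ² assembled from mass-normalized mode shapes, and a simple damage index is the change in its diagonal between intact and damaged states. A sketch on a spring-mass chain, used here only as a stand-in for a cable model; the 30% stiffness loss is an invented example:

```python
import numpy as np
from scipy.linalg import eigh

def chain_stiffness(k):
    """Stiffness matrix of a fixed-free spring-mass chain, spring values k."""
    n = len(k)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += k[i]
        if i + 1 < n:
            K[i, i] += k[i + 1]
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    return K

def modal_flexibility(K, M):
    """F = sum_i phi_i phi_i^T / w_i^2 with mass-normalized modes
    (scipy's generalized eigh returns M-orthonormal eigenvectors)."""
    w2, phi = eigh(K, M)
    return sum(np.outer(phi[:, i], phi[:, i]) / w2[i] for i in range(len(w2)))

M = np.eye(5)
k_intact = np.full(5, 1000.0)
k_damaged = k_intact.copy()
k_damaged[2] *= 0.7                            # 30% stiffness loss in spring 3
F0 = modal_flexibility(chain_stiffness(k_intact), M)
Fd = modal_flexibility(chain_stiffness(k_damaged), M)
di = np.abs(np.diag(Fd - F0))                  # flexibility-change damage index
```

The index stays near zero up to the damaged spring and jumps from there on, which localizes the damage; the paper's lateral/vertical DIs refine this idea for sagged cables with measurement noise.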
Abstract:
Aims: The aim of the study was to evaluate the significance of total bilirubin, aspartate transaminase (AST), alanine transaminase and gamma-glutamyltransferase (GGT) for predicting outcome in sepsis-associated cholestasis. Methods: A retrospective cohort review of hospital records was performed for 181 neonates admitted to the Neonatal Care Unit. A comparison was performed between subjects with low and high liver values based on cut-off values from ROC analysis. We defined poor prognosis as prolonged cholestasis of more than 3.5 months, development of severe sepsis or septic shock, or a fatal outcome. Results: The majority of the subjects were male (56%), preterm (56%) and had early-onset sepsis (73%). The poor-prognosis group had lower initial values of GGT compared with the good-prognosis group (P = 0.003). Serum GGT (cut-off value of 85.5 U/L) and AST (cut-off value of 51 U/L) showed significant correlation with the outcome following multivariate analysis. The odds ratios (OR) of low GGT and high AST were OR 4.3 (95% CI: 1.6 to 11.8) and OR 2.9 (95% CI: 1.1 to 8), respectively, for poor prognosis. In subjects with normal AST values, those with a low GGT value had a relative risk of 2.52 (95% CI: 1.4 to 3.5) for poorer prognosis compared with those with normal or high GGT. Conclusion: Serum GGT and AST values can be used to predict the prognosis of patients with sepsis-associated cholestasis.
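A common way to derive such cut-offs from ROC analysis is to maximize Youden's J statistic over candidate thresholds. A sketch with synthetic marker values; the group means below are invented and shown for a higher-is-worse marker such as AST (for GGT in this study the direction is reversed, with low values carrying the risk):

```python
import numpy as np

def best_cutoff(values, outcome):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1.
    outcome: 1 = poor prognosis, 0 = good prognosis."""
    values, outcome = np.asarray(values, float), np.asarray(outcome)
    best_j, best_c = -1.0, None
    for c in np.unique(values):
        pred = values >= c                               # "high" marker
        sens = (pred & (outcome == 1)).sum() / (outcome == 1).sum()
        spec = (~pred & (outcome == 0)).sum() / (outcome == 0).sum()
        if sens + spec - 1 > best_j:
            best_j, best_c = sens + spec - 1, c
    return best_c, best_j

# hypothetical AST-like marker: poor-prognosis group runs higher
rng = np.random.default_rng(5)
good = rng.normal(40.0, 8.0, 100)
poor = rng.normal(65.0, 10.0, 80)
values = np.concatenate([good, poor])
outcome = np.concatenate([np.zeros(100, int), np.ones(80, int)])
cutoff, j = best_cutoff(values, outcome)
```

The chosen cut-off then dichotomizes the cohort into "low" and "high" groups for the odds-ratio and relative-risk comparisons of the kind reported above.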
Abstract:
Drivers behave in different ways, and these different behaviors are a cause of traffic disturbances. A key objective for simulation tools is to correctly reproduce this variability, in particular for car-following models. From data collection to the sampling of realistic behaviors, a chain of key issues must be addressed. This paper discusses data filtering, robustness of calibration, correlation between parameters, and sampling techniques of acceleration-time continuous car-following models. The robustness of calibration is systematically investigated with an objective function that allows confidence regions around the minimum to be obtained. Then, the correlation between sets of calibrated parameters and the validity of the joint distributions sampling techniques are discussed. This paper confirms the need for adapted calibration and sampling techniques to obtain realistic sets of car-following parameters, which can be used later for simulation purposes.
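A minimal calibration example in that spirit: fit a two-parameter Newell-type car-following model to a noisy synthetic trajectory by minimizing an RMSE objective. The leader trajectory, true parameter values and noise level are invented; the paper's models and objective functions are richer:

```python
import numpy as np
from scipy.optimize import minimize

dt = 0.1
t = np.arange(0.0, 60.0, dt)
v_lead = 15.0 - 5.0 * np.exp(-(((t - 30.0) / 5.0) ** 2))  # cruise, dip, recover
x_lead = np.cumsum(v_lead) * dt                            # leader position (m)

def follower(tau, d):
    """Newell's simplified model: the follower repeats the leader's
    trajectory shifted back by time tau (s) and space d (m)."""
    return np.interp(t - tau, t, x_lead) - d

rng = np.random.default_rng(6)
x_obs = follower(1.2, 8.0) + 0.05 * rng.standard_normal(t.size)  # "observed"

def rmse(p):
    return np.sqrt(np.mean((follower(*p) - x_obs) ** 2))

fit = minimize(rmse, x0=[0.8, 5.0], method="Nelder-Mead")
tau_hat, d_hat = fit.x          # recovered parameters, near (1.2, 8.0)
```

Note that during steady cruising only the combination v·tau + d is identified; the speed dip is what separates the two parameters, which is one concrete reason calibration robustness and parameter correlation need the careful treatment the abstract describes.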
Abstract:
Electronic cigarette-generated mainstream aerosols were characterized in terms of particle number concentrations and size distributions through a Condensation Particle Counter and a Fast Mobility Particle Sizer spectrometer, respectively. A thermodilution system was also used to properly sample and dilute the mainstream aerosol. Different types of electronic cigarettes, liquid flavors, liquid nicotine contents, as well as different puffing times were tested. Conventional tobacco cigarettes were also investigated. The total particle number concentration peak (for a 2-s puff), averaged across the different electronic cigarette types and liquids, was measured at 4.39 ± 0.42 × 10⁹ part. cm⁻³, comparable to that of the conventional cigarette (3.14 ± 0.61 × 10⁹ part. cm⁻³). Puffing times and nicotine contents were found to influence the particle concentration, whereas no significant differences were recognized in terms of flavors and types of cigarettes used. Particle number distribution modes of the electronic cigarette-generated aerosol were in the 120–165 nm range, similar to that of the conventional cigarette.