924 results for Gruneisen parameters
Abstract:
In this paper we propose and study low-complexity algorithms for on-line estimation of hidden Markov model (HMM) parameters. The estimates approach the true model parameters as the measurement noise approaches zero, but otherwise give improved estimates, albeit with bias. On a finite data set in the high-noise case, the bias may not be significantly more severe than for a higher-complexity asymptotically optimal scheme. Our algorithms require O(N³) calculations per time instant, where N is the number of states. Previous algorithms based on earlier hidden Markov model signal processing methods, including the expectation-maximisation (EM) algorithm, require O(N⁴) calculations per time instant.
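The abstract does not spell out the proposed estimators, so the sketch below is not the paper's algorithm; it only shows the standard HMM forward-filtering recursion on which such online schemes are typically built, with an invented two-state Gaussian-observation model. The filtering update costs O(N²) per sample; schemes that additionally propagate parameter sensitivities usually pay an extra factor of N, which is where per-step costs like O(N³) arise.

```python
# Minimal sketch (not the paper's method): HMM forward filtering with assumed parameters.
import numpy as np

def hmm_filter_step(alpha, A, b_y):
    """One filtering update: alpha_t(j) ∝ b_y(j) * sum_i alpha_{t-1}(i) A[i, j]."""
    alpha_new = b_y * (alpha @ A)          # O(N^2) per time step
    return alpha_new / alpha_new.sum()     # normalise to a probability vector

# Toy 2-state Gaussian-observation HMM (illustrative values only).
A = np.array([[0.95, 0.05],
              [0.10, 0.90]])               # transition matrix
means, sigma = np.array([0.0, 1.0]), 0.5   # emission means and common noise std

rng = np.random.default_rng(0)
alpha = np.array([0.5, 0.5])               # initial state distribution
for y in rng.normal(0.0, 1.0, size=100):   # stand-in measurement stream
    b_y = np.exp(-0.5 * ((y - means) / sigma) ** 2)   # emission likelihoods
    alpha = hmm_filter_step(alpha, A, b_y)
print("filtered state probabilities:", alpha)
```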
Abstract:
A new online method is presented for estimating the angular random walk and rate random walk coefficients of IMU (inertial measurement unit) gyros and accelerometers. The method is based on a state-space model and provides parameter estimators for quantities previously measured with off-line data techniques such as the Allan variance graph. Allan variance graphs require substantial off-line computational effort and data storage. The technique proposed here requires no data storage and only O(100) calculations per data sample.
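For reference, the off-line baseline mentioned in the abstract can be sketched as a cluster-average Allan variance computation; the synthetic white-noise gyro signal below is an assumption. Reading the angular random walk and rate random walk coefficients from the −1/2 and +1/2 slope regions of the resulting log-log curve follows the usual convention and is not taken from the paper.

```python
# Minimal off-line Allan variance sketch (the computation the online estimator avoids).
import numpy as np

def allan_variance(rate, m):
    """Allan variance of a rate signal for a cluster size of m samples (tau = m*dt)."""
    n_clusters = len(rate) // m
    means = rate[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)  # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)

# Synthetic gyro signal: white noise (angular random walk) only, 100 Hz for 1000 s (assumed).
dt, rng = 0.01, np.random.default_rng(1)
rate = 0.05 * rng.standard_normal(100_000) / np.sqrt(dt)
for m in (10, 100, 1000):
    tau = m * dt
    print(f"tau = {tau:7.2f} s  Allan deviation = {np.sqrt(allan_variance(rate, m)):.4f}")
```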
Abstract:
Drying, as a preservation technique, offers a significant increase in the shelf life of food materials, along with modification of quality attributes due to simultaneous heat and mass transfer. Variations in porosity are just one of the microstructural changes that take place during the drying of most food materials. Some studies suggest that there may be a relationship between porosity and the properties of dried foods; however, no conclusive relationship has yet been established in the literature. This paper presents an overview of the factors that influence porosity, as well as the effects of porosity on dried food quality attributes. The effect of heat and mass transfer on porosity is also discussed, along with porosity development in various drying methods. An extensive review of the literature on porosity indicates that a relationship between process parameters, food qualities, and sample properties can be established. We therefore propose a hypothesis relating process parameters, product quality attributes, and porosity.
Abstract:
In vitro studies and mathematical models are now being widely used to study the underlying mechanisms driving the expansion of cell colonies. This can improve our understanding of cancer formation and progression. Although much progress has been made in developing and analysing mathematical models, far less progress has been made in understanding how to estimate model parameters using experimental in vitro image-based data. To address this issue, a new approximate Bayesian computation (ABC) algorithm is proposed to estimate key parameters governing the expansion of melanoma cell (MM127) colonies, including cell diffusivity, D, cell proliferation rate, λ, and cell-to-cell adhesion, q, in two experimental scenarios, namely with and without a chemical treatment to suppress cell proliferation. Even when little prior biological knowledge about the parameters is assumed, all parameters are precisely inferred, with a small posterior coefficient of variation of approximately 2–12%. The ABC analyses reveal that the posterior distributions of D and q depend on the experimental elapsed time, whereas the posterior distribution of λ does not. The posterior mean values of D are in the ranges 226–268 µm² h⁻¹ and 311–351 µm² h⁻¹, and those of q are in the ranges 0.23–0.39 and 0.32–0.61, for the experimental periods of 0–24 h and 24–48 h, respectively. Furthermore, we found that the posterior distribution of q also depends on the initial cell density, whereas the posterior distributions of D and λ do not. The ABC approach also enables information from the two experiments to be combined, resulting in greater precision for all estimates of D and λ.
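The abstract does not specify the ABC variant used; the sketch below is a generic ABC rejection scheme with a placeholder simulator and invented priors, summary statistic, and tolerance, shown only to illustrate how posterior samples of D, λ and q would be drawn.

```python
# Minimal ABC rejection sketch (not the paper's algorithm or colony model).
import numpy as np

rng = np.random.default_rng(2)

def simulate_colony_summary(D, lam, q):
    """Placeholder simulator: returns a scalar summary statistic (assumed form)."""
    return D * 0.01 + lam * 50.0 + q * 5.0 + rng.normal(0.0, 0.5)

observed_summary = 30.0              # assumed observed summary statistic
tolerance, n_draws = 1.0, 20_000
accepted = []
for _ in range(n_draws):
    D   = rng.uniform(100.0, 500.0)  # cell diffusivity prior (um^2/h, assumed)
    lam = rng.uniform(0.0, 1.0)      # proliferation rate prior (1/h, assumed)
    q   = rng.uniform(0.0, 1.0)      # cell-to-cell adhesion prior (assumed)
    if abs(simulate_colony_summary(D, lam, q) - observed_summary) < tolerance:
        accepted.append((D, lam, q))

post = np.array(accepted)
print("accepted draws:", len(post))
print("posterior means (D, lambda, q):", post.mean(axis=0))
```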
Abstract:
Traditionally, it is not easy to carry out tests to identify the modal parameters of existing railway bridges because of the testing conditions and the complicated nature of civil structures. A six-year (2007-2012) research program was conducted to monitor a group of 25 railway bridges. One of the tasks was to devise guidelines for identifying their modal parameters. This paper presents the experience acquired from such identification. The modal analysis of four representative bridges of this group, B5, B15, B20 and B58A, crossing the Carajás railway in northern Brazil, is reported, using three different excitation sources: drop weight, free vibration after train passage, and ambient conditions. To extract the dynamic parameters from the recorded data, the Stochastic Subspace Identification and Frequency Domain Decomposition methods were used. Finite-element models were constructed to facilitate the dynamic measurements. The results show good agreement between the measured and computed natural frequencies and mode shapes. The findings provide some guidelines on methods of excitation, record duration, and methods of modal analysis, including the use of projected channels and harmonic detection, helping researchers and maintenance teams obtain good dynamic characteristics from measurement data.
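Of the two output-only methods named, Frequency Domain Decomposition is easy to sketch: form the cross power spectral density matrix of the measured channels, take a singular value decomposition at each frequency line, and read natural frequencies from peaks of the first singular value. The two-channel synthetic signals and the peak picking below are assumptions for illustration, not the bridge records.

```python
# Minimal Frequency Domain Decomposition sketch on synthetic two-channel data.
import numpy as np
from scipy.signal import csd

fs, T = 200.0, 60.0                       # sampling rate (Hz), duration (s), assumed
t = np.arange(0, T, 1.0 / fs)
rng = np.random.default_rng(3)
# Two synthetic "sensor" channels dominated by a 2.5 Hz mode plus noise.
mode = np.sin(2 * np.pi * 2.5 * t)
Y = np.vstack([mode + 0.3 * rng.standard_normal(t.size),
               0.8 * mode + 0.3 * rng.standard_normal(t.size)])

n_ch = Y.shape[0]
freqs, _ = csd(Y[0], Y[0], fs=fs, nperseg=1024)
G = np.zeros((freqs.size, n_ch, n_ch), dtype=complex)   # CSD matrix per frequency line
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = csd(Y[i], Y[j], fs=fs, nperseg=1024)

# First singular value spectrum; its peaks indicate natural frequencies.
s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(freqs.size)])
print("first singular value peaks near %.2f Hz" % freqs[np.argmax(s1)])
```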
Abstract:
Modal flexibility is a widely accepted technique for detecting structural damage using vibration characteristics. Its application to detecting damage in long-span, large-diameter cables, such as those used as suspension bridge main cables, has not received much attention. This paper uses the modal flexibility method, incorporating two damage indices (DIs) based on lateral and vertical modes, to localize damage in such cables. The capability of these DIs for damage detection is tested using numerically obtained vibration characteristics of a suspended cable in both the intact and damaged states. Three single-damage cases and one multiple-damage case are considered. The impact of random measurement noise in the modal data on the damage localization capability of the two DIs is then examined. Long-span, large-diameter cables are characterized by two critical cable parameters, namely bending stiffness and sag-extensibility. The influence of these parameters on the damage localization capability of the two DIs is evaluated in a parametric study with two single-damage cases. Results confirm that the damage index based on lateral vibration modes can successfully detect and locate damage in suspended cables with 5% noise in the modal data over a range of cable parameters. This simple approach can therefore be extended to timely damage detection in the cables of suspension bridges, thereby enhancing their serviceability over their life spans.
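The paper's specific lateral and vertical damage indices are not reproduced here; the sketch below only illustrates the underlying modal flexibility idea, assembling F = Σ φ_i φ_iᵀ / ω_i² from mode shapes and natural frequencies and flagging locations where the diagonal flexibility grows after damage. The mode data are invented.

```python
# Minimal modal flexibility sketch with invented mode shapes and frequencies.
import numpy as np

def modal_flexibility(phi, omega):
    """F = sum_i phi_i phi_i^T / omega_i^2; phi: (n_dof, n_modes), omega: (n_modes,) in rad/s."""
    return (phi / omega**2) @ phi.T

n_dof = 10
x = np.linspace(0, np.pi, n_dof)
phi = np.column_stack([np.sin(k * x) for k in (1, 2, 3)])    # toy shapes, treated as mass-normalised
omega_intact = np.array([2.0, 4.1, 6.3]) * 2 * np.pi         # intact natural frequencies (assumed)
omega_damaged = omega_intact * np.array([0.95, 0.99, 1.0])   # stiffness loss lowers frequencies

F_i = modal_flexibility(phi, omega_intact)
F_d = modal_flexibility(phi, omega_damaged)      # same shapes, softened frequencies
DI = np.diag(F_d) - np.diag(F_i)                 # increase in diagonal flexibility suggests damage
print("suspected damaged DOF:", int(np.argmax(DI)))
```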
Abstract:
Aims: The aim of the study was to evaluate the significance of total bilirubin, aspartate transaminase (AST), alanine transaminase and gamma-glutamyltransferase (GGT) for predicting outcome in sepsis-associated cholestasis. Methods: A retrospective cohort review of hospital records was performed for 181 neonates admitted to the Neonatal Care Unit. A comparison was performed between subjects with low and high liver values based on cut-off values from ROC analysis. We defined poor prognosis as prolonged cholestasis of more than 3.5 months, development of severe sepsis or septic shock, or a fatal outcome. Results: The majority of the subjects were male (56%), preterm (56%) and had early-onset sepsis (73%). The poor prognosis group had lower initial values of GGT compared with the good prognosis group (P = 0.003). Serum GGT (cut-off value of 85.5 U/L) and AST (cut-off value of 51 U/L) showed significant correlation with the outcome in multivariate analysis. The odds ratios (OR) of low GGT and high AST for poor prognosis were OR 4.3 (95% CI: 1.6 to 11.8) and OR 2.9 (95% CI: 1.1 to 8), respectively. In subjects with normal AST values, those with a low GGT value had a relative risk of 2.52 (95% CI: 1.4 to 3.5) for poorer prognosis compared with those with a normal or high GGT. Conclusion: Serum GGT and AST values can be used to predict the prognosis of patients with sepsis-associated cholestasis.
Abstract:
Drivers behave in different ways, and these different behaviors are a cause of traffic disturbances. A key objective for simulation tools is to correctly reproduce this variability, in particular for car-following models. From data collection to the sampling of realistic behaviors, a chain of key issues must be addressed. This paper discusses data filtering, robustness of calibration, correlation between parameters, and sampling techniques for acceleration-time continuous car-following models. The robustness of calibration is systematically investigated with an objective function that allows confidence regions around the minimum to be obtained. Then, the correlation between sets of calibrated parameters and the validity of joint-distribution sampling techniques are discussed. This paper confirms the need for adapted calibration and sampling techniques to obtain realistic sets of car-following parameters, which can be used later for simulation purposes.
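A minimal calibration sketch in the spirit of the paper, using the Intelligent Driver Model as a stand-in acceleration-time continuous car-following model (the abstract does not say which model is calibrated): a follower trajectory is simulated against a leader profile and the parameters are fitted by minimising a trajectory error. The leader profile, the "observed" data and the starting guesses are synthetic assumptions.

```python
# Minimal car-following calibration sketch (IDM used as an illustrative model).
import numpy as np
from scipy.optimize import minimize

dt, n = 0.1, 600                                   # time step (s), number of samples
t = np.arange(n) * dt
v_lead = 15.0 + 2.0 * np.sin(0.05 * t)             # leader speed profile (m/s), assumed
x_lead = 30.0 + np.concatenate([[0.0], np.cumsum(v_lead[:-1] * dt)])

def idm_accel(s, v, dv, v0, T, a, b, s0):
    """IDM acceleration; s = gap, v = follower speed, dv = approach rate."""
    s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a * b))
    return a * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)

def simulate(params):
    v0, T, a, b, s0 = np.maximum(np.asarray(params, float), 1e-3)  # keep parameters positive
    x, v = np.empty(n), np.empty(n)
    x[0], v[0] = 0.0, 14.0
    for k in range(n - 1):
        acc = idm_accel(x_lead[k] - x[k], v[k], v[k] - v_lead[k], v0, T, a, b, s0)
        v[k + 1] = max(v[k] + acc * dt, 0.0)
        x[k + 1] = x[k] + v[k] * dt
    return x

true_params = np.array([18.0, 1.4, 1.0, 1.5, 2.0])   # "observed" driver (assumed)
x_obs = simulate(true_params)
objective = lambda p: np.mean((simulate(p) - x_obs) ** 2)   # trajectory error to minimise
res = minimize(objective, x0=[20.0, 1.0, 1.2, 2.0, 2.5], method="Nelder-Mead")
print("calibrated (v0, T, a, b, s0):", np.round(res.x, 2))
```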
Abstract:
Electronic cigarette-generated mainstream aerosols were characterized in terms of particle number concentrations and size distributions through a Condensation Particle Counter and a Fast Mobility Particle Sizer spectrometer, respectively. A thermodilution system was also used to properly sample and dilute the mainstream aerosol. Different types of electronic cigarettes, liquid flavors, liquid nicotine contents, as well as different puffing times were tested. Conventional tobacco cigarettes were also investigated. The total particle number concentration peak (for a 2-s puff), averaged across the different electronic cigarette types and liquids, was measured to be 4.39 ± 0.42 × 10⁹ part. cm⁻³, comparable to that of conventional cigarettes (3.14 ± 0.61 × 10⁹ part. cm⁻³). Puffing times and nicotine contents were found to influence the particle concentration, whereas no significant differences were recognized in terms of flavors and types of cigarettes used. Particle number distribution modes of the electronic cigarette-generated aerosol were in the 120–165 nm range, similar to that of conventional cigarettes.
Abstract:
Graphene films were produced by chemical vapor deposition (CVD) of pyridine on copper substrates. Pyridine-CVD is expected to lead to doped graphene by the insertion of nitrogen atoms in the growing sp2 carbon lattice, possibly improving the properties of graphene as a transparent conductive film. We here report on the influence that the CVD parameters (i.e., temperature and gas flow) have on the morphology, transmittance, and electrical conductivity of the graphene films grown with pyridine. A temperature range between 930 and 1070 °C was explored and the results were compared to those of pristine graphene grown by ethanol-CVD under the same process conditions. The films were characterized by atomic force microscopy, Raman and X-ray photoemission spectroscopy. The optical transmittance and electrical conductivity of the films were measured to evaluate their performance as transparent conductive electrodes. Graphene films grown by pyridine reached an electrical conductivity of 14.3 × 10⁵ S/m. Such a high conductivity seems to be associated with the electronic doping induced by substitutional nitrogen atoms. In particular, at 930 °C the nitrogen/carbon ratio of pyridine-grown graphene reaches 3%, and its electrical conductivity is 40% higher than that of pristine graphene grown from ethanol-CVD.
Abstract:
This article describes a maximum likelihood method for estimating the parameters of the standard square-root stochastic volatility model and a variant of the model that includes jumps in equity prices. The model is fitted to data on the S&P 500 Index and the prices of vanilla options written on the index, for the period 1990 to 2011. The method is able to estimate both the parameters of the physical measure (associated with the index) and the parameters of the risk-neutral measure (associated with the options), including the volatility and jump risk premia. The estimation is implemented using a particle filter whose efficacy is demonstrated under simulation. The computational load of this estimation method, which previously has been prohibitive, is managed by the effective use of parallel computing using graphics processing units (GPUs). The empirical results indicate that the parameters of the models are reliably estimated and consistent with values reported in previous work. In particular, both the volatility risk premium and the jump risk premium are found to be significant.
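The paper's joint index/option estimation, jumps and risk premia go well beyond a short example; the sketch below only shows a bootstrap particle filter evaluating the return likelihood of an Euler-discretised square-root volatility model, the basic building block such likelihood estimations rely on. The parameter values and the simulated data are assumptions.

```python
# Minimal bootstrap particle filter sketch for a square-root volatility model (illustrative only).
import numpy as np

rng = np.random.default_rng(4)

def simulate_returns(kappa, theta, sigma, n, dt=1/252):
    """Euler-discretised square-root variance process; returns daily log-returns."""
    v, r = theta, np.empty(n)
    for t in range(n):
        r[t] = np.sqrt(max(v, 1e-12) * dt) * rng.standard_normal()
        v += kappa * (theta - v) * dt + sigma * np.sqrt(max(v, 1e-12) * dt) * rng.standard_normal()
    return r

def pf_loglik(returns, kappa, theta, sigma, n_particles=2000, dt=1/252):
    """Particle-filter estimate of the log-likelihood of the returns under the model."""
    v = np.full(n_particles, theta)                      # particle variances
    loglik = 0.0
    for r in returns:
        var = np.maximum(v, 1e-12) * dt
        w = np.exp(-0.5 * r**2 / var) / np.sqrt(2 * np.pi * var)   # observation density
        loglik += np.log(w.mean() + 1e-300)
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())   # resample
        v = v[idx]
        v = v + kappa * (theta - v) * dt \
              + sigma * np.sqrt(np.maximum(v, 1e-12) * dt) * rng.standard_normal(n_particles)
    return loglik

returns = simulate_returns(kappa=3.0, theta=0.04, sigma=0.4, n=500)
print("log-likelihood at true parameters:", round(pf_loglik(returns, 3.0, 0.04, 0.4), 1))
```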
Abstract:
The characterisation of cracks is usually done using the three well-known basic fracture modes, namely the opening, shearing and tearing modes. In isotropic materials these modes are uncoupled and provide a convenient way to define the fracture parameters. It is well known that these fracture modes are coupled in anisotropic materials. In orthotropic materials, too, coupling exists between the fracture modes unless the crack plane coincides with one of the axes of orthotropy. The strength of coupling depends upon the orientation of the axes of orthotropy with respect to the crack plane, and so the energy release rate components associated with each of the modes vary with crack orientation. The variation of these energy release rate components with crack orientation relative to the orthotropic axes is analyzed in this paper. Results indicate that, in addition to the orthotropic planes, there exist other planes with respect to which the fracture modes are uncoupled.
Abstract:
Diffusion parameters, such as the integrated diffusion coefficient of the phase, the tracer diffusion coefficients of the species at different temperatures and the activation energy for diffusion, are determined in the V3Si phase with the A15 crystal structure. The tracer diffusion coefficient of Si was found to be negligible compared to the tracer diffusion coefficient of V. The calculated diffusion parameters will help to validate the theoretical analysis of the defect structure of the phase, which plays an important role in the superconductivity.
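For context, an activation energy of this kind is conventionally extracted from tracer diffusivities at several temperatures via the Arrhenius relation D = D0 exp(−Q/RT); the sketch below fits it from invented diffusivity values, not the V3Si data of the paper.

```python
# Minimal Arrhenius fit sketch with invented diffusivity data.
import numpy as np

R = 8.314                                   # gas constant, J/(mol K)
T = np.array([1400.0, 1500.0, 1600.0])      # temperatures (K), assumed
D = np.array([2.0e-17, 1.5e-16, 8.0e-16])   # tracer diffusivities (m^2/s), assumed

# ln D = ln D0 - Q/(R T): fit a straight line in (1/T, ln D).
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Q, D0 = -slope * R, np.exp(intercept)
print(f"activation energy Q = {Q / 1000:.0f} kJ/mol, pre-factor D0 = {D0:.2e} m^2/s")
```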
Abstract:
The random early detection (RED) technique has seen a lot of research over the years. However, the functional relationship between RED performance and its parameters, viz., the queue weight (ω_q), marking probability (max_p), minimum threshold (min_th) and maximum threshold (max_th), is not analytically available. In this paper, we formulate a probabilistic constrained optimization problem by assuming a nonlinear relationship between the RED average queue length and its parameters. This problem involves all the RED parameters as the variables of the optimization problem. We use the barrier and the penalty function approaches for its solution. However (as above), the exact functional relationship between the barrier and penalty objective functions and the optimization variables is not known, but noisy samples of these are available for different parameter values. Thus, for obtaining the gradient and Hessian of the objective, we use certain recently developed simultaneous perturbation stochastic approximation (SPSA) based estimates of these. We propose two four-timescale stochastic approximation algorithms based on certain modified second-order SPSA updates for finding the optimum RED parameters. We present the results of detailed simulation experiments conducted over different network topologies and network/traffic conditions/settings, comparing the performance of our algorithms with variants of RED and a few other well-known active queue management (AQM) techniques discussed in the literature.
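The four-timescale, second-order algorithms of the paper are not reproduced here; the sketch below only shows the first-order SPSA gradient estimate they build on, applied to a placeholder noisy objective over rescaled RED parameters. The target values, scales and gain sequences are assumptions.

```python
# Minimal first-order SPSA sketch with a placeholder noisy objective (not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(5)

# Work with RED parameters rescaled to comparable magnitudes, a practical necessity
# when a single SPSA gain sequence is used for all components (scales assumed).
scale  = np.array([0.001, 0.1, 5.0, 15.0])               # rough magnitudes of w_q, max_p, min_th, max_th
target = np.array([0.002, 0.1, 5.0, 15.0]) / scale       # assumed "good" parameters (scaled)
theta  = np.array([0.010, 0.5, 10.0, 30.0]) / scale      # assumed initial parameters (scaled)

def noisy_objective(theta_scaled):
    """Stand-in for a noisy RED performance measure observed from simulation."""
    return np.sum((theta_scaled - target) ** 2) + 0.01 * rng.standard_normal()

for k in range(1, 2001):
    a_k, c_k = 0.1 / k ** 0.602, 0.1 / k ** 0.101         # standard SPSA gain sequences
    delta = rng.choice([-1.0, 1.0], size=theta.size)      # Rademacher perturbation
    g_hat = (noisy_objective(theta + c_k * delta) -
             noisy_objective(theta - c_k * delta)) / (2 * c_k * delta)   # gradient estimate
    theta -= a_k * g_hat
print("RED parameters after SPSA (w_q, max_p, min_th, max_th):", np.round(theta * scale, 4))
```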