182 results for Parameters kinetic
Abstract:
In this paper we propose and study low-complexity algorithms for on-line estimation of hidden Markov model (HMM) parameters. The estimates approach the true model parameters as the measurement noise approaches zero, but otherwise give improved estimates, albeit with bias. On a finite data set in the high-noise case, the bias may not be significantly more severe than for a higher-complexity asymptotically optimal scheme. Our algorithms require O(N³) calculations per time instant, where N is the number of states. Previous algorithms based on earlier hidden Markov model signal processing methods, including the expectation-maximisation (EM) algorithm, require O(N⁴) calculations per time instant.
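For context, the building block underlying such on-line schemes is the normalised HMM forward (filtering) recursion, which costs O(N²) per measurement; parameter estimators layer gradient or sufficient-statistic recursions on top of it, which is where the higher per-step costs arise. Below is a minimal sketch of one filter step, assuming a discrete-state HMM; this is not the paper's specific estimator, and all names are illustrative.

```python
import numpy as np

def hmm_filter_step(alpha, A, b_y):
    """One step of the normalised HMM forward (filtering) recursion.

    alpha : current filtered state distribution, shape (N,)
    A     : N x N transition matrix, A[i, j] = P(x_{k+1} = j | x_k = i)
    b_y   : likelihood of the new measurement under each state, shape (N,)
    Cost is O(N^2) per step; on-line parameter estimators add recursions
    for derivatives or sufficient statistics on top of this update.
    """
    alpha = (alpha @ A) * b_y          # predict, then correct with the likelihood
    return alpha / alpha.sum()         # normalise to keep a probability vector
```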
Abstract:
A new online method is presented for estimating the angular random walk and rate random walk coefficients of IMU (inertial measurement unit) gyros and accelerometers. The method proposes a state-space model and parameter estimators for quantities previously obtained from off-line techniques such as the Allan variance graph. Allan variance graphs demand substantial off-line computational effort and data storage. The technique proposed here requires no data storage and a computational effort of O(100) calculations per data sample.
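For contrast with the online estimator, the conventional off-line Allan variance computation it replaces looks roughly as follows: a minimal sketch assuming non-overlapping clusters and a uniformly sampled rate signal (function and variable names are ours, not the paper's).

```python
import numpy as np

def allan_variance(rate, dt, m_list):
    """Non-overlapping Allan variance of a gyro rate signal.

    rate   : 1-D array of rate samples (e.g. deg/s)
    dt     : sample period (s)
    m_list : cluster sizes (number of samples per averaging window)
    """
    avar = []
    for m in m_list:
        n_clusters = len(rate) // m
        # average the rate over consecutive clusters of length m
        means = rate[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        # half the mean squared difference of successive cluster means
        avar.append(0.5 * np.mean(np.diff(means) ** 2))
    return np.array(m_list) * dt, np.array(avar)

# White rate noise yields the characteristic -1/2 log-log slope (angular
# random walk), for which sigma(tau) = N / sqrt(tau).
taus, avar = allan_variance(0.01 * np.random.randn(100_000), dt=0.01,
                            m_list=[1, 2, 4, 8, 16, 32, 64, 128])
```

Note that this computation needs the whole record in memory, which is exactly the storage burden the online method avoids.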
Abstract:
Drying, as a preservation technique, significantly increases the shelf life of food materials while also modifying quality attributes through simultaneous heat and mass transfer. Variations in porosity are just one of the microstructural changes that take place during the drying of most food materials. Some studies suggest a relationship between porosity and the properties of dried foods; however, no conclusive relationship has yet been established in the literature. This paper presents an overview of the factors that influence porosity, as well as the effects of porosity on dried food quality attributes. The effect of heat and mass transfer on porosity is also discussed, along with porosity development in various drying methods. An extensive review of the porosity literature indicates that a relationship between process parameters, food qualities, and sample properties can be established. We therefore propose hypothesised relationships between process parameters, product quality attributes, and porosity.
Abstract:
The mathematical model of a steadily propagating Saffman-Taylor finger in a Hele-Shaw channel has applications to two-dimensional interacting streamer discharges which are aligned in a periodic array. In the streamer context, the relevant regularisation on the interface is not provided by surface tension, but instead has been postulated to involve a mechanism equivalent to kinetic undercooling, which acts to penalise high velocities and prevent blow-up of the unregularised solution. Previous asymptotic results for the Hele-Shaw finger problem with kinetic undercooling suggest that for a given value of the kinetic undercooling parameter, there is a discrete set of possible finger shapes, each analytic at the nose and occupying a different fraction of the channel width. In the limit in which the kinetic undercooling parameter vanishes, the fraction for each family approaches 1/2, suggesting that this selection of 1/2 by kinetic undercooling is qualitatively similar to the well-known analogue with surface tension. We treat the numerical problem of computing these Saffman-Taylor fingers with kinetic undercooling, which turns out to be more subtle than the analogue with surface tension, since kinetic undercooling permits finger shapes which are corner-free but not analytic. We provide numerical evidence for the selection mechanism by setting up a problem with both kinetic undercooling and surface tension, and numerically taking the limit in which the surface tension vanishes.
Abstract:
Acid hydrolysis is a popular pretreatment for removing hemicellulose from lignocelluloses in order to produce a digestible substrate for enzymatic saccharification. In this work, a novel model for the dilute acid hydrolysis of hemicellulose within sugarcane bagasse is presented and calibrated against experimental oligomer profiles. The efficacy of mathematical models as hydrolysis yield predictors and as vehicles for investigating the mechanisms of acid hydrolysis is also examined. Experimental xylose, oligomer (degree of polymerisation 2 to 6) and furfural yield profiles were obtained for bagasse under dilute acid hydrolysis conditions at temperatures ranging from 110 °C to 170 °C. Population balance kinetics, diffusion and porosity evolution were incorporated into a mathematical model of the acid hydrolysis of sugarcane bagasse. This model was able to produce a good fit to experimental xylose yield data with only three unknown kinetic parameters, ka, kb and kd. However, fitting this same model to an expanded data set of oligomeric and furfural yield profiles did not successfully reproduce the experimental results. It was found that a “hard-to-hydrolyse” parameter, α, was required in the model to ensure reproducibility of the experimental oligomer profiles at 110 °C, 125 °C and 140 °C. The parameters obtained through the fitting exercises at lower temperatures could then be used to predict the oligomer profiles at 155 °C and 170 °C with promising results. The interpretation of kinetic parameters obtained by fitting a model to only a single set of data may be ambiguous: although these parameters may correctly reproduce the data, they may not be indicative of the actual rate parameters unless some care has been taken to ensure that the model describes the true mechanisms of acid hydrolysis. It is possible to challenge the robustness of the model by expanding the experimental data set and hence limiting the parameter space for the fitting parameters. The novel combination of “hard-to-hydrolyse” and population balance dynamics in the model presented here appears to stand up to such rigorous fitting constraints.
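To illustrate what population balance kinetics of depolymerisation look like in the simplest case, the sketch below implements a textbook random-scission scheme with a monomer degradation step as an ODE system. It is not the authors' calibrated bagasse model, which also includes diffusion, porosity evolution and the hard-to-hydrolyse fraction α; the names ka and kd merely echo the abstract's notation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def population_balance(t, c, ka, kd):
    """Random-scission population balance for chains of length 1..N.

    c[n-1] : concentration of chains with n xylose units
    ka     : scission rate per internal glycosidic bond
    kd     : degradation of xylose (n = 1) to furfural
    """
    N = len(c)
    dc = np.zeros(N)
    for n in range(1, N + 1):
        loss = ka * (n - 1) * c[n - 1]       # a chain of length n has n-1 bonds
        # a chain of length m > n yields a length-n fragment in two ways
        gain = 2.0 * ka * sum(c[m - 1] for m in range(n + 1, N + 1))
        dc[n - 1] = gain - loss
    dc[0] -= kd * c[0]                        # xylose -> furfural
    return dc

c0 = np.zeros(20)
c0[-1] = 1.0                                  # start from chains of length 20
sol = solve_ivp(population_balance, (0.0, 50.0), c0, args=(0.05, 0.01))
```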
Abstract:
The desire to solve problems caused by socket prostheses in transfemoral amputees (TFA) and the success of osseointegration in dental applications have led to the introduction of osseointegration in orthopedic surgery. Since its first introduction in 1990 in Gothenburg, Sweden, the osseointegrated (OI) orthopedic fixation has demonstrated several benefits [1]. The surgery consists of two surgical procedures followed by a lengthy rehabilitation program. The rehabilitation program after an OI implant includes a specific training period with a short training prosthesis. Since mechanical loading is considered to be one of the key factors that influence bone mass and the osseointegration of bone-anchored implants, the rehabilitation program also needs to include some form of load bearing exercises (LBE). To date, there are two frequently used commercially available human implants, and the literature confirms that load bearing exercises are performed by patients with both types of OI implants. We refer to two articles: the first, written by Dr. Aschoff et al., was published in 2010 in the Journal of Bone and Joint Surgery [2]; the second, presented by Hagberg et al. in 2009, gives a very thorough description of the rehabilitation program of TFA fitted with an OPRA implant. The progression of the load, however, is determined individually according to the quality of the residual skeleton, the pain level and the body weight of the participant [1]. Patients use a classical bathroom weighing scale to control the load on the implant during the course of their rehabilitation. The bathroom scale is an affordable and easy-to-use device, but it has some important shortcomings: it provides instantaneous feedback to the patient only on the magnitude of the vertical component of the applied force, while the forces and moments applied along and around the three axes of the implant remain unknown. Although there are different ways to assess the load on the implant, for instance through inverse dynamics in a motion analysis laboratory [3-6], this assessment is challenging. A recent proof-of-concept study by Frossard et al. (2009) showed that the shortcomings of the weighing scale can be overcome by a portable kinetic system based on a commercial transducer [7].
Abstract:
This study presents the intra-tester reliability of the static load bearing exercises (LBEs) performed by individuals with transfemoral amputation (TFA) fitted with an osseointegrated implant to stimulate the bone remodelling process. There is a need for a better understanding of the implementation of these exercises, particularly their reliability. The intra-tester reliability is discussed with particular emphasis on inter-load-prescribed, inter-axis and inter-component reliabilities, as well as the effect of body weight normalisation. Eleven unilateral TFAs fitted with an OPRA implant performed five trials in four loading conditions. The forces and moments on the three axes of the implant were measured directly with an instrumented pylon including a six-channel transducer. Reliability of loading variables was assessed using intraclass correlation coefficients (ICCs) and percentage standard error of measurement values (%SEMs). The ICCs of all variables were above 0.9 and the %SEM values ranged between 0 and 87%. This study showed a high between-participants variance, highlighting the lack of loading consistency typical of a symptomatic population, as well as a high reliability between the loading sessions, suggesting correct repetition of the LBEs by the participants. However, these outcomes must be understood within the framework of the proposed experimental protocol.
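For readers unfamiliar with these reliability metrics, the sketch below computes one common ICC form and the derived %SEM from a subjects-by-trials matrix. The ICC(3,1) consistency formulation is an assumption on our part; the abstract does not state which ICC form was used.

```python
import numpy as np

def icc31_and_sem(X):
    """ICC(3,1) and %SEM for a subjects-by-trials matrix X.

    Uses the two-way ANOVA mean squares: subjects (MSR) and residual (MSE).
    """
    n, k = X.shape
    grand = X.mean()
    ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()
    ss_trial = n * ((X.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((X - grand) ** 2).sum()
    mse = (ss_total - ss_subj - ss_trial) / ((n - 1) * (k - 1))
    msr = ss_subj / (n - 1)
    icc = (msr - mse) / (msr + (k - 1) * mse)
    sem = np.sqrt(mse)                      # standard error of measurement
    return icc, 100.0 * sem / grand         # %SEM relative to the grand mean

# Example: 11 participants x 5 trials of a loading variable
icc, pct_sem = icc31_and_sem(np.random.rand(11, 5) + np.arange(11)[:, None])
```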
Abstract:
In vitro studies and mathematical models are now being widely used to study the underlying mechanisms driving the expansion of cell colonies. This can improve our understanding of cancer formation and progression. Although much progress has been made in terms of developing and analysing mathematical models, far less progress has been made in terms of understanding how to estimate model parameters using experimental in vitro image-based data. To address this issue, a new approximate Bayesian computation (ABC) algorithm is proposed to estimate key parameters governing the expansion of melanoma cell (MM127) colonies, including cell diffusivity, D, cell proliferation rate, λ, and cell-to-cell adhesion, q, in two experimental scenarios, namely with and without a chemical treatment to suppress cell proliferation. Even when little prior biological knowledge about the parameters is assumed, all parameters are precisely inferred with a small posterior coefficient of variation, approximately 2–12%. The ABC analyses reveal that the posterior distributions of D and q depend on the experimental elapsed time, whereas the posterior distribution of λ does not. The posterior mean values of D lie in the ranges 226–268 µm²h⁻¹ and 311–351 µm²h⁻¹, and those of q in the ranges 0.23–0.39 and 0.32–0.61, for the experimental periods of 0–24 h and 24–48 h, respectively. Furthermore, we found that the posterior distribution of q also depends on the initial cell density, whereas the posterior distributions of D and λ do not. The ABC approach also enables information from the two experiments to be combined, resulting in greater precision for all estimates of D and λ.
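The core idea is easiest to see in the simplest member of the ABC family, plain rejection sampling: draw parameters from the prior, simulate, and keep the draws whose simulated summary statistics fall closest to the observed ones. The sketch below is generic rather than the paper's new algorithm; `simulate`, the priors and the summary statistics are placeholders.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, n_draws=10_000, keep_frac=0.01):
    """Generic ABC rejection sampler.

    simulate(theta) -> summary statistics for parameter vector theta
    prior_sample()  -> one draw of theta from the prior
    Keeps the fraction of draws whose summaries are closest to `observed`.
    """
    draws = [prior_sample() for _ in range(n_draws)]
    dists = [np.linalg.norm(simulate(th) - observed) for th in draws]
    cutoff = np.quantile(dists, keep_frac)
    return np.array([th for th, d in zip(draws, dists) if d <= cutoff])

# Toy usage with theta = (D, lambda, q) and uniform priors; a real `simulate`
# would run the colony-expansion model and return image-based summaries.
posterior = abc_rejection(
    observed=np.array([0.3, 1.2]),
    simulate=lambda th: np.array([th[0] * 1e-3, th[1] + th[2]]),  # stand-in model
    prior_sample=lambda: np.array([np.random.uniform(100, 500),
                                   np.random.uniform(0.0, 0.1),
                                   np.random.uniform(0.0, 1.0)]),
)
```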
Abstract:
Pilot and industrial scale dilute acid pretreatment data can be difficult to obtain due to the significant infrastructure investment required. Consequently, models of dilute acid pretreatment by necessity use laboratory scale data to determine kinetic parameters and make predictions about optimal pretreatment conditions at larger scales. For these recommendations to be meaningful, the ability of laboratory scale models to predict pilot and industrial scale yields must be investigated. A mathematical model of the dilute acid pretreatment of sugarcane bagasse has previously been developed by the authors. This model was able to successfully reproduce the experimental yields of xylose and short chain xylooligomers obtained at the laboratory scale. In this paper, the ability of the model to reproduce pilot scale yield and composition data is examined. It was found that, in general, the model over-predicted the pilot scale reactor yields by a significant margin. Models that appear very promising at the laboratory scale may therefore have limitations when predicting yields at pilot or industrial scale. It is difficult to say whether there are any consistent trends in optimal operating conditions between reactor scale and laboratory scale hydrolysis due to the limited reactor datasets available. Further investigation is needed to determine whether the model has some efficacy when the kinetic parameters are re-evaluated by parameter fitting to reactor scale data; however, this requires the compilation of larger datasets. Alternatively, laboratory scale mathematical models may have enhanced utility for predicting larger scale reactor performance if bulk mass transport and fluid flow considerations are incorporated into the fibre scale equations. This work reinforces the need for appropriate attention to be paid to pilot scale experimental development when moving from laboratory to pilot and industrial scales for new technologies.
Abstract:
Traditionally, it is not easy to carry out tests to identify modal parameters of existing railway bridges because of the testing conditions and the complicated nature of civil structures. A six-year (2007–2012) research program was conducted to monitor a group of 25 railway bridges, and one of its tasks was to devise guidelines for identifying their modal parameters. This paper presents the experience acquired from such identification. The modal analysis of four representative bridges of this group (B5, B15, B20 and B58A, on the Carajás railway in northern Brazil) is reported, using three different excitation sources: drop weight, free vibration after train passage, and ambient conditions. To extract the dynamic parameters from the recorded data, the Stochastic Subspace Identification and Frequency Domain Decomposition methods were used. Finite-element models were constructed to facilitate the dynamic measurements. The results show good agreement between the measured and computed natural frequencies and mode shapes. The findings provide some guidelines on methods of excitation, record length, and methods of modal analysis, including the use of projected channels and harmonic detection, helping researchers and maintenance teams obtain good dynamic characteristics from measurement data.
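Of the two identification methods, Frequency Domain Decomposition is compact enough to sketch: form the cross-spectral density matrix of the measured channels and take a singular value decomposition at each frequency line; peaks of the first singular value indicate natural frequencies, and the corresponding singular vectors estimate the mode shapes. A minimal version follows, which assumes nothing about the study's exact processing chain.

```python
import numpy as np
from scipy.signal import csd

def fdd(acc, fs, nperseg=2048):
    """Frequency Domain Decomposition of multi-channel acceleration data.

    acc : (n_channels, n_samples) array of synchronous measurements
    Returns frequency lines, the first singular value at each line, and the
    corresponding singular vectors (mode-shape estimates at the peaks).
    """
    n_ch = acc.shape[0]
    f, _ = csd(acc[0], acc[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)   # cross-PSD matrix G(f)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nperseg)
    s1 = np.zeros(len(f))
    vecs = np.zeros((len(f), n_ch), dtype=complex)
    for k in range(len(f)):
        U, S, _ = np.linalg.svd(G[k])
        s1[k] = S[0]          # peaks of s1 locate the natural frequencies
        vecs[k] = U[:, 0]     # first singular vector approximates the mode shape
    return f, s1, vecs
```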
Abstract:
Modal flexibility is a widely accepted technique for detecting structural damage using vibration characteristics. Its application to damage detection in long span, large diameter cables, such as suspension bridge main cables, has not received much attention. This paper uses the modal flexibility method, incorporating two damage indices (DIs) based on lateral and vertical modes, to localize damage in such cables. The competence of these DIs in damage detection is tested using the numerically obtained vibration characteristics of a suspended cable in both intact and damaged states. Three single damage cases and one multiple damage case are considered. The impact of random measurement noise in the modal data on the damage localization capability of the two DIs is then examined. Long span, large diameter cables are characterized by two critical cable parameters, namely bending stiffness and sag-extensibility. The influence of these parameters on the damage localization capability of the two DIs is evaluated by a parametric study with two single damage cases. Results confirm that the damage index based on lateral vibration modes can successfully detect and locate damage in suspended cables with 5% noise in the modal data, for a range of cable parameters. This simple approach can therefore be extended to timely damage detection in the cables of suspension bridges, thereby enhancing their service during their life spans.
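The modal flexibility idea itself is compact: with mass-normalised mode shapes, the flexibility matrix is approximated from the measured modes, and damage appears as a local increase in flexibility. A generic sketch of this construction is given below; the paper's specific lateral- and vertical-mode DIs are not reproduced here.

```python
import numpy as np

def modal_flexibility(phi, omega):
    """Modal flexibility matrix from mass-normalised mode shapes.

    phi   : (n_dof, n_modes) mode-shape matrix
    omega : (n_modes,) natural circular frequencies (rad/s)
    F ~ sum_i phi_i phi_i^T / omega_i^2, so the lowest measured modes dominate.
    """
    return (phi / omega**2) @ phi.T

def flexibility_damage_index(phi_0, om_0, phi_d, om_d):
    """Change in the diagonal of the flexibility matrix (deflection under a
    unit load at each DOF) between intact (0) and damaged (d) states; a
    local increase flags the damaged region."""
    return np.abs(np.diag(modal_flexibility(phi_d, om_d))
                  - np.diag(modal_flexibility(phi_0, om_0)))
```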
Abstract:
Aims: The aim of the study was to evaluate the significance of total bilirubin, aspartate transaminase (AST), alanine transaminase and gamma-glutamyltransferase (GGT) for predicting outcome in sepsis-associated cholestasis. Methods: A retrospective cohort review of the hospital records was performed for 181 neonates admitted to the Neonatal Care Unit. A comparison was performed between subjects with low and high liver values based on cut-off values from ROC analysis. We defined poor prognosis as prolonged cholestasis of more than 3.5 months, development of severe sepsis or septic shock, or a fatal outcome. Results: The majority of the subjects were male (56%), preterm (56%) and had early onset sepsis (73%). The poor prognosis group had lower initial values of GGT compared with the good prognosis group (P = 0.003). Serum GGT (cut-off value of 85.5 U/L) and AST (cut-off value of 51 U/L) showed significant correlation with the outcome following multivariate analysis. The odds ratios (OR) of low GGT and high AST for poor prognosis were 4.3 (95% CI: 1.6 to 11.8) and 2.9 (95% CI: 1.1 to 8.0), respectively. In subjects with normal AST values, those with a low GGT value had a relative risk of 2.52 (95% CI: 1.4 to 3.5) for poorer prognosis compared with those with normal or high GGT. Conclusion: Serum GGT and AST values can be used to predict the prognosis of patients with sepsis-associated cholestasis.
Abstract:
Drivers behave in different ways, and these different behaviors are a cause of traffic disturbances. A key objective for simulation tools is to correctly reproduce this variability, in particular for car-following models. From data collection to the sampling of realistic behaviors, a chain of key issues must be addressed. This paper discusses data filtering, robustness of calibration, correlation between parameters, and sampling techniques for acceleration-time continuous car-following models. The robustness of calibration is systematically investigated with an objective function that allows confidence regions around the minimum to be obtained. Then, the correlation between sets of calibrated parameters and the validity of joint-distribution sampling techniques are discussed. This paper confirms the need for adapted calibration and sampling techniques to obtain realistic sets of car-following parameters, which can then be used for simulation purposes.
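As a concrete illustration of the calibration step, one can simulate a follower behind a recorded leader trajectory and minimise a position or spacing error; the curvature of this objective around its minimum is what yields confidence regions. The Intelligent Driver Model below is purely a stand-in (the paper does not commit to a specific model), and all data in the usage example are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_follower(params, lead_x, lead_v, x0, v_init, dt):
    """Propagate an IDM follower behind a recorded leader trajectory."""
    v0, T, a, b, s0 = params
    x, v = x0, v_init
    xs = np.empty(len(lead_x))
    for k in range(len(lead_x)):
        s = max(lead_x[k] - x, 0.1)     # gap to leader (vehicle length ignored)
        s_star = s0 + max(0.0, v * T + v * (v - lead_v[k]) / (2 * np.sqrt(a * b)))
        acc = a * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)
        v = max(v + acc * dt, 0.0)      # no reversing
        x += v * dt
        xs[k] = x
    return xs

def objective(params, lead_x, lead_v, obs_x, dt):
    """Position RMSE between simulated and observed follower trajectories."""
    v_init = (obs_x[1] - obs_x[0]) / dt
    sim_x = simulate_follower(params, lead_x, lead_v, obs_x[0], v_init, dt)
    return np.sqrt(np.mean((sim_x - obs_x) ** 2))

# Synthetic calibration: leader at a steady 20 m/s, "observed" follower
# generated with known parameters, then recovered by Nelder-Mead.
dt = 0.1
lead_x = np.cumsum(np.full(300, 20.0 * dt))
lead_v = np.full(300, 20.0)
obs_x = simulate_follower([32.0, 1.4, 1.2, 2.0, 2.0], lead_x, lead_v, -25.0, 20.0, dt)
res = minimize(objective, x0=[30.0, 1.5, 1.0, 2.0, 2.0],
               args=(lead_x, lead_v, obs_x, dt), method="Nelder-Mead")
```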
Abstract:
Electronic cigarette-generated mainstream aerosols were characterized in terms of particle number concentrations and size distributions through a Condensation Particle Counter and a Fast Mobility Particle Sizer spectrometer, respectively. A thermodilution system was also used to properly sample and dilute the mainstream aerosol. Different types of electronic cigarettes, liquid flavors, liquid nicotine contents, as well as different puffing times were tested. Conventional tobacco cigarettes were also investigated. The total particle number concentration peak (for a 2-s puff), averaged across the different electronic cigarette types and liquids, was measured as 4.39 ± 0.42 × 10⁹ part. cm⁻³, comparable to that of conventional cigarettes (3.14 ± 0.61 × 10⁹ part. cm⁻³). Puffing times and nicotine contents were found to influence the particle concentration, whereas no significant differences were recognized in terms of flavors and types of cigarettes used. Particle number distribution modes of the electronic cigarette-generated aerosol were in the 120–165 nm range, similar to that of conventional cigarettes.