58 results for kernel estimation
Abstract:
The aim of this study was to use DSC and X-ray diffraction measurements to determine the pore size and pore wall thickness of highly ordered SBA-15 materials. The DSC curves showed two endothermic events during the heating cycle, due to the presence of water inside and outside the mesopores. The pore radius, wall thickness and pore volume results were in good agreement with those obtained by nitrogen adsorption measurements, XRD and transmission electron microscopy.
Abstract:
The objective of this investigation was to examine in a systematic manner the influence of plasma protein binding on in vivo pharmacodynamics. Comparative pharmacokinetic-pharmacodynamic studies with four beta blockers were performed in conscious rats, using heart rate under isoprenaline-induced tachycardia as a pharmacodynamic endpoint. A recently proposed mechanism-based agonist-antagonist interaction model was used to obtain in vivo estimates of receptor affinities (K_B,vivo). These values were compared with in vitro affinities (K_B,vitro) on the basis of both total and free drug concentrations. For the total drug concentrations, the K_B,vivo estimates were 26, 13, 6.5 and 0.89 nM for S(-)-atenolol, S(-)-propranolol, S(-)-metoprolol and timolol. The K_B,vivo estimates on the basis of the free concentrations were 25, 2.0, 5.2 and 0.56 nM, respectively. The K_B,vivo-K_B,vitro correlation for total drug concentrations clearly deviated from the line of identity, especially for the most highly bound drug, S(-)-propranolol (ratio K_B,vivo/K_B,vitro of about 6.8). For the free drug, the correlation approximated the line of identity. Using this model, the free plasma concentration appears to be the best predictor of in vivo pharmacodynamics for beta blockers. J Pharm Sci 98:3816-3828, 2009
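Since the free concentration equals the unbound fraction f_u times the total concentration, the ratio of the two reported affinity estimates implies each drug's unbound fraction. A minimal arithmetic check using only the values quoted above (the f_u values are implied, not reported in the abstract):

```python
# Implied unbound fractions f_u = K_B,free / K_B,total from the reported values.
kb_total = {"S(-)-atenolol": 26.0, "S(-)-propranolol": 13.0,
            "S(-)-metoprolol": 6.5, "timolol": 0.89}   # nM, total-concentration basis
kb_free = {"S(-)-atenolol": 25.0, "S(-)-propranolol": 2.0,
           "S(-)-metoprolol": 5.2, "timolol": 0.56}    # nM, free-concentration basis
for drug, kb_t in kb_total.items():
    fu = kb_free[drug] / kb_t
    print(f"{drug}: implied f_u ~ {fu:.2f}")   # propranolol comes out most highly bound
```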
Abstract:
ArtinM is a D-mannose-binding lectin that has been arousing increasing interest because of its biomedical properties, especially those involving the stimulation of the Th1 immune response, which confers protection against intracellular pathogens. The potential pharmaceutical applications of ArtinM have motivated the production of its recombinant form (rArtinM), so it is important to compare the sugar-binding properties of jArtinM and rArtinM in order to take better advantage of the potential applications of the recombinant lectin. In this work, a biosensor framework based on a Quartz Crystal Microbalance (QCM) was established with the purpose of making a comparative study of the activity of the native and recombinant ArtinM proteins. The QCM transducer was strategically functionalized to use a simple model of protein binding kinetics. This approach allowed for the determination of the binding/dissociation kinetic rates and affinity equilibrium constant of both forms of ArtinM with horseradish peroxidase glycoprotein (HRP), an N-glycosylated protein that contains the trimannoside Manα1-3[Manα1-6]Man, which is a known ligand for jArtinM (Jeyaprakash et al., 2004). Monitoring of the real-time binding of rArtinM showed that it was able to bind HRP, leading to an analytical curve similar to that of jArtinM, with statistically equivalent kinetic rates and affinity equilibrium constants for both forms of ArtinM. The lower reactivity of rArtinM with HRP compared to jArtinM was considered to be due to a difference in the number of Carbohydrate Recognition Domains (CRDs) per molecule of each lectin form rather than to a difference in the energy of binding per CRD.
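The abstract does not specify the binding model; a common choice for QCM (and SPR) sensorgrams is 1:1 Langmuir kinetics, in which the association rate k_a, dissociation rate k_d and affinity K_D = k_d/k_a are fitted to the response curve. A minimal sketch under that assumption, with purely hypothetical parameter values:

```python
import numpy as np

def langmuir_response(t, C, ka, kd, Rmax):
    """1:1 Langmuir binding response R(t) at analyte concentration C.

    dR/dt = ka*C*(Rmax - R) - kd*R, with R(0) = 0, has the closed form below.
    """
    kobs = ka * C + kd                 # observed rate constant
    Req = Rmax * ka * C / kobs         # steady-state response
    return Req * (1.0 - np.exp(-kobs * t))

# Hypothetical values for illustration only (not from the paper).
t = np.linspace(0.0, 600.0, 200)       # seconds
R = langmuir_response(t, C=1e-7, ka=1e5, kd=1e-3, Rmax=50.0)
KD = 1e-3 / 1e5                        # dissociation constant kd/ka, here 10 nM
print(R[-1], KD)
```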
Abstract:
To present a novel algorithm for estimating recruitable alveolar collapse and hyperdistension based on electrical impedance tomography (EIT) during a decremental positive end-expiratory pressure (PEEP) titration. Technical note with illustrative case reports. Respiratory intensive care unit. Patients with acute respiratory distress syndrome. Lung recruitment and PEEP titration maneuver. Simultaneous acquisition of EIT and X-ray computerized tomography (CT) data. We found good agreement (in terms of amount and spatial location) between the collapse estimated by EIT and CT for all levels of PEEP. The optimal PEEP values detected by EIT for patients 1 and 2 (keeping lung collapse < 10%) were 19 and 17 cmH2O, respectively. Although pointing to the same non-dependent lung regions, EIT estimates of hyperdistension represent the functional deterioration of lung units, instead of their anatomical changes, and could not be compared directly with static CT estimates for hyperinflation. We described an EIT-based method for estimating recruitable alveolar collapse at the bedside, pointing out its regional distribution. Additionally, we proposed a measure of lung hyperdistension based on regional lung mechanics.
Abstract:
Fuzzy Bayesian tests were performed to evaluate whether the mother's seroprevalence and children's seroconversion to measles vaccine could be considered "high" or "low". The results of the tests were aggregated into a fuzzy rule-based model structure, which allows an expert to influence the model results. The linguistic model was developed considering four input variables. As the model output, we obtain the recommended age-specific vaccine coverage. The inputs of the fuzzy rules are fuzzy sets and the outputs are constant functions, forming the simplest Takagi-Sugeno-Kang model. This fuzzy approach is compared to a classical one, where the classical Bayes test was performed. Although the fuzzy and classical performances were similar, the fuzzy approach was more detailed and revealed important differences. In addition to taking into account subjective information in the form of fuzzy hypotheses, it can be intuitively grasped by the decision maker. Finally, we show that the Bayesian test of fuzzy hypotheses is an interesting approach from the theoretical point of view, in the sense that it combines two complementary areas of investigation, normally seen as competitive.
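For readers unfamiliar with the model class: a zero-order Takagi-Sugeno-Kang system combines fuzzy-set antecedents with constant consequents through a weighted average. A minimal sketch with one input instead of the paper's four; the membership functions, rule outputs and numbers are entirely hypothetical:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def tsk_zero_order(x, rules):
    """Zero-order TSK output: firing-strength-weighted average of constants."""
    w = np.array([mf(x) for mf, _ in rules])   # rule firing strengths
    z = np.array([out for _, out in rules])    # constant consequents
    return float(np.dot(w, z) / w.sum())

# Hypothetical rules: input = seroprevalence (%), output = recommended coverage (%).
rules = [
    (lambda x: tri(x, 0, 20, 50), 95.0),    # "low" seroprevalence -> high coverage
    (lambda x: tri(x, 30, 60, 90), 85.0),   # "medium"
    (lambda x: tri(x, 70, 100, 130), 75.0), # "high" -> lower coverage
]
print(tsk_zero_order(40.0, rules))          # blends the "low" and "medium" rules
```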
Abstract:
Parenteral anticoagulation is a cornerstone in the management of venous and arterial thrombosis. Unfractionated heparin has a wide dose/response relationship, requiring frequent and troublesome laboratory follow-up. Because of these factors, low-molecular-weight heparin use has been increasing. Inadequate dosage has been pointed out as a potential problem, because the use of subjectively estimated weight instead of measured weight is common practice in the emergency department (ED). To evaluate the impact of inadequate weight estimation on enoxaparin dosage, we investigated the adequacy of anticoagulation of patients in a tertiary ED where subjective weight estimation is common practice. We obtained the estimated, informed, and measured weight of 28 patients in need of parenteral anticoagulation. Basal and steady-state (after the second subcutaneous injection of enoxaparin) anti-Xa activity was obtained as a measure of adequate anticoagulation. The patients were divided into 2 groups according to anticoagulation adequacy. Of the 28 patients enrolled, 75% (group 1, n = 21) received at least 0.9 mg/kg per dose BID and 25% (group 2, n = 7) received less than 0.9 mg/kg per dose BID of enoxaparin. Only 4 (14.3%) of all patients had anti-Xa activity below the inferior limit of the therapeutic range (<0.5 IU/mL), all of them from group 2. In conclusion, when weight estimation was used to determine the enoxaparin dosage, 25% of the patients were inadequately anticoagulated (anti-Xa activity <0.5 IU/mL) during the initial, crucial phase of treatment.
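The study's adequacy cutoff is a simple per-kilogram check; the sketch below illustrates how a dose chosen from an underestimated weight can fail it. The patient numbers are hypothetical:

```python
def enoxaparin_dose_adequate(dose_mg, measured_weight_kg):
    """Flag a BID enoxaparin dose as adequate if it reaches 0.9 mg/kg per dose,
    the cutoff used to split the two groups in this study."""
    return dose_mg / measured_weight_kg >= 0.9

# Hypothetical patient: dose computed from an estimated weight of 70 kg,
# while the measured weight is actually 85 kg.
dose = 0.9 * 70                                   # 63 mg per dose
print(enoxaparin_dose_adequate(dose, 85))          # False: underdosed
```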
Abstract:
Background: Food portion size estimation involves a complex mental process that may influence food consumption evaluation. Knowing the variables that influence this process can improve the accuracy of dietary assessment. The present study aimed to evaluate the ability of nutrition students to estimate food portions in usual meals and to relate food energy content with errors in food portion size estimation. Methods: Seventy-eight nutrition students, who had already studied food energy content, participated in this cross-sectional study on the estimation of food portions, organised into four meals. The participants estimated the quantity of each food, in grams or millilitres, with the food in view. Estimation errors were quantified and their magnitudes evaluated. Estimated quantities (EQ) lower than 90% and higher than 110% of the weighed quantity (WQ) were considered to represent underestimation and overestimation, respectively. The correlation between food energy content and estimation error was analysed by Spearman's correlation, and the comparison between the mean EQ and WQ was carried out by means of the Wilcoxon signed rank test (P < 0.05). Results: A low percentage of estimates (18.5%) were considered accurate (+/- 10% of the actual weight). The most frequently underestimated food items were cauliflower, lettuce, apple and papaya; the most often overestimated items were milk, margarine and sugar. A significant positive correlation between food energy density and estimation error was found (r = 0.8166; P = 0.0002). Conclusions: The results obtained in the present study revealed a low percentage of acceptable estimations of food portion size by nutrition students, with trends toward overestimation of high-energy food items and underestimation of low-energy items.
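The accuracy rule described above (estimates within +/-10% of the weighed quantity count as accurate) is easy to state in code. A minimal sketch with hypothetical portion data:

```python
def classify_estimate(estimated_g, weighed_g):
    """Classify a portion-size estimate against the weighed quantity (WQ).

    Accurate: within +/-10% of WQ; below 90% underestimation; above 110% overestimation.
    """
    ratio = estimated_g / weighed_g
    if ratio < 0.90:
        return "underestimation"
    if ratio > 1.10:
        return "overestimation"
    return "accurate"

# Hypothetical (food, estimated grams, weighed grams) triples.
data = [("lettuce", 15, 25), ("milk", 260, 200), ("rice", 95, 100)]
for food, eq, wq in data:
    print(food, classify_estimate(eq, wq))
```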
Abstract:
The magnitude of the basic reproduction ratio R0 of an epidemic can be estimated in several ways, namely, from the final size of the epidemic, from the average age at first infection, or from the initial growth phase of the outbreak. In this paper, we discuss this last method for estimating R0 for vector-borne infections. Implicit in these models is the assumption that there is an exponential phase of the outbreak, which implies that in all cases R0 > 1. We demonstrate that an outbreak is possible even in cases where R0 is less than one, provided that the vector-to-human component of R0 is greater than one and that a certain number of infected vectors are introduced into the affected population. This theory is applied to two real epidemiological dengue situations in the southeastern part of Brazil, one where R0 is less than one and another where R0 is greater than one. In both cases, the model mirrors the real situations with reasonable accuracy.
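As an aside on the growth-phase method: for a simple SIR-type (directly transmitted) model, the initial exponential growth rate r of case counts gives R0 ≈ 1 + rT, where T is the mean infectious period; the vector-borne expression used in the paper is more involved. A minimal sketch of the simple version, with hypothetical case counts and an assumed T:

```python
import numpy as np

def growth_rate(cases, dt=1.0):
    """Least-squares slope of log(cases) over the early exponential phase."""
    t = np.arange(len(cases)) * dt
    slope, _intercept = np.polyfit(t, np.log(cases), 1)
    return slope

# Hypothetical weekly case counts during the initial phase of an outbreak.
cases = np.array([3, 5, 9, 14, 24, 40])
r = growth_rate(cases)            # per week
T = 1.5                           # assumed mean infectious period, weeks (illustrative)
R0 = 1.0 + r * T                  # simple SIR-type approximation, not the paper's formula
print(round(r, 3), round(R0, 2))
```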
Abstract:
We estimate the conditions for detectability of two planets in a 2/1 mean-motion resonance from radial velocity data, as a function of their masses, the number of observations and the signal-to-noise ratio. Even for a data set of the order of 100 observations and standard deviations of the order of a few meters per second, we find that Jovian-size resonant planets are difficult to detect if the masses of the planets differ by a factor larger than about 4. This is consistent with the present population of real exosystems in the 2/1 commensurability, most of which have resonant pairs with similar minimum masses, and could indicate that many other resonant systems exist but are currently beyond the detectability limit. Furthermore, we analyze the error distribution in the masses and orbital elements of orbital fits from synthetic data sets for resonant planets in the 2/1 commensurability. For various mass ratios and numbers of data points we find that the eccentricity of the outer planet is systematically overestimated, although the inner planet's eccentricity suffers a much smaller effect. If the initial conditions correspond to small-amplitude oscillations around stable apsidal corotation resonances, the amplitudes estimated from the orbital fits are biased toward larger amplitudes, in accordance with results found in real resonant extrasolar systems.
Abstract:
A particle filter method is presented for the discrete-time filtering problem with nonlinear Itô stochastic ordinary differential equations (SODEs) with additive noise, supposed to be analytically integrable as a function of the underlying vector Wiener process and time. The Diffusion Kernel Filter is arrived at by a parametrization of small noise-driven state fluctuations within branches of prediction and a local use of this parametrization in the Bootstrap Filter. The method applies for small noise and short prediction steps. With explicit numerical integrators, the operations count in the Diffusion Kernel Filter is shown to be smaller than in the Bootstrap Filter whenever the initial state for the prediction step has sufficiently few moments. The established parametrization is a dual formula for the analysis of sensitivity to Gaussian initial perturbations and the analysis of sensitivity to noise perturbations in deterministic models, showing in particular how the stability of a deterministic dynamics is modeled by noise on short times and how the diffusion matrix of an SODE should be modeled (i.e. defined) for a Gaussian-initial deterministic problem to be cast into an SODE problem. From it, a novel definition of prediction may be proposed that coincides with the deterministic path within the branch of prediction whose information entropy at the end of the prediction step is closest to the average information entropy over all branches. Tests are made with the Lorenz-63 equations, showing good results both for the filter and the definition of prediction.
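For context, the baseline against which the Diffusion Kernel Filter is compared is the standard Bootstrap Filter: propagate particles through the model, weight them by the observation likelihood, resample. A minimal sketch for a scalar toy model with an Euler-Maruyama prediction step; the dynamics, noise levels and observations are all hypothetical, and this is the plain Bootstrap Filter, not the paper's Diffusion Kernel Filter:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter(y, n_particles=500, dt=0.05, q=0.1, r=0.2):
    """Standard bootstrap particle filter for a scalar toy model.

    State:       dx = sin(x) dt + q dW   (one Euler-Maruyama step per observation)
    Observation: y = x + Gaussian noise with standard deviation r
    """
    x = rng.normal(0.0, 1.0, n_particles)          # initial particle cloud
    means = []
    for yk in y:
        # Predict: propagate each particle through the SODE one step.
        x = x + np.sin(x) * dt + q * np.sqrt(dt) * rng.normal(size=n_particles)
        # Update: weight particles by the observation likelihood.
        w = np.exp(-0.5 * ((yk - x) / r) ** 2)
        w /= w.sum()
        means.append(np.dot(w, x))                 # posterior-mean estimate
        # Resample (multinomial) to avoid weight degeneracy.
        x = rng.choice(x, size=n_particles, p=w)
    return np.array(means)

# Hypothetical observations of a latent trajectory.
y_obs = np.array([0.1, 0.3, 0.5, 0.6, 0.9])
print(bootstrap_filter(y_obs))
```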
Abstract:
Sensitivity and specificity are measures that allow us to evaluate the performance of a diagnostic test. In practice, it is common to have situations where a proportion of the selected individuals cannot have their true disease state verified, since verification may require an invasive procedure, as occurs with biopsy. This happens, as a special case, in the diagnosis of prostate cancer, and in any other situation where verification carries risk, is impracticable or unethical, or has a high cost. In such cases, it is common to evaluate diagnostic tests based only on the information from verified individuals. This procedure can lead to biased results, known as verification (workup) bias. In this paper, we introduce a Bayesian approach to estimate the sensitivity and specificity of two diagnostic tests considering both verified and unverified individuals, a result that generalizes the usual situation based on only one diagnostic test.
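In the simplest fully verified, single-test setting, the Bayesian estimates are conjugate: with Beta priors, the posteriors of sensitivity and specificity are again Beta distributions. A minimal sketch of that baseline (the paper's two-test, partially verified model requires more machinery, e.g. MCMC); the counts are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical verified counts: true/false positives and negatives.
tp, fn, tn, fp = 45, 5, 80, 10

# With Beta(1, 1) priors, the posteriors are conjugate Betas.
sens_draws = rng.beta(1 + tp, 1 + fn, size=10_000)   # posterior of sensitivity
spec_draws = rng.beta(1 + tn, 1 + fp, size=10_000)   # posterior of specificity

for name, d in [("sensitivity", sens_draws), ("specificity", spec_draws)]:
    lo, hi = np.percentile(d, [2.5, 97.5])
    print(f"{name}: mean={d.mean():.3f}, 95% CrI=({lo:.3f}, {hi:.3f})")
```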
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usually large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of linear dynamics generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
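The simplest such basis is the Laguerre family, parameterized by a single real pole. The sketch below generates the outputs of a discrete Laguerre filter bank driven by an input signal, i.e., the linear-dynamic stage of an OBF Volterra model; the pole is fixed here, whereas the paper's contribution is precisely to optimize it (via exact gradients and Levenberg-Marquardt), which is not shown:

```python
import numpy as np

def laguerre_outputs(u, pole, n_filters):
    """Outputs of a discrete Laguerre orthonormal filter bank driven by input u.

    First section:  L1(z) = sqrt(1 - a^2) / (1 - a z^-1)
    Later sections: cascade of all-pass blocks (z^-1 - a) / (1 - a z^-1)
    """
    a = pole
    N = len(u)
    out = np.zeros((n_filters, N))
    # First-order low-pass section.
    for n in range(N):
        prev = out[0, n - 1] if n > 0 else 0.0
        out[0, n] = a * prev + np.sqrt(1.0 - a**2) * u[n]
    # Each further section applies the all-pass block to the previous output.
    for k in range(1, n_filters):
        x = out[k - 1]
        for n in range(N):
            y_prev = out[k, n - 1] if n > 0 else 0.0
            x_prev = x[n - 1] if n > 0 else 0.0
            out[k, n] = a * y_prev + x_prev - a * x[n]
    return out

# Hypothetical pole and input signal.
u = np.random.default_rng(2).normal(size=200)
phi = laguerre_outputs(u, pole=0.7, n_filters=3)   # regressors for the static Volterra map
print(phi.shape)
```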
Abstract:
In this article, we evaluate different parameter estimation strategies for a multiple linear regression model. The model parameters were estimated using data from a clinical trial whose aim was to verify whether the mechanical testing of the maximum force property (EM-FM) is associated with femoral mass, femoral diameter and the experimental group of ovariectomized rats of the species Rattus norvegicus albinus, Wistar variety. Three methodologies are compared for estimating the model parameters: the classical methodology, based on the least squares method; the Bayesian methodology, based on Bayes' theorem; and the bootstrap method, based on resampling procedures.
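Of the three estimation strategies compared, least squares and the bootstrap are easy to illustrate side by side. A minimal sketch with synthetic data standing in for the clinical measurements (design, coefficients and noise level are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical design: two covariates standing in for femoral mass and diameter.
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([2.0, 1.5, -0.7])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Classical estimate: ordinary least squares.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Bootstrap: resample (x_i, y_i) pairs with replacement and refit.
B = 2000
draws = np.empty((B, X.shape[1]))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    draws[b], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)

print("OLS estimates:", np.round(beta_ols, 3))
print("bootstrap SEs:", np.round(draws.std(axis=0), 3))
```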
Abstract:
A positive summability trigonometric kernel {K_n(θ)}, n = 1, 2, ..., is generated through a sequence of univalent polynomials constructed by Suffridge. We prove that the convolution {K_n * f} approximates every continuous 2π-periodic function f with the rate ω(f, 1/n), where ω(f, δ) denotes the modulus of continuity, and this provides a new proof of the classical Jackson theorem. Although it turns out that the K_n(θ) coincide with positive cosine polynomials generated by Fejér, our proof differs from others known in the literature.
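To make the convolution-approximation setup concrete, the sketch below computes the classical Fejér (Cesàro) means of a continuous 2π-periodic function from numerically approximated Fourier coefficients. Note this uses the standard Fejér kernel as a stand-in for illustration; the paper's kernel is a different family of positive cosine polynomials:

```python
import numpy as np

def fejer_mean(f, n, theta):
    """Fejer (Cesaro) mean sigma_n f at points theta, via numerically
    approximated Fourier coefficients c_k of the 2*pi-periodic function f."""
    M = 4096                                    # quadrature points on [0, 2*pi)
    t = np.linspace(0.0, 2 * np.pi, M, endpoint=False)
    ft = f(t)
    k = np.arange(-n, n + 1)
    # c_k = (1/2pi) * integral of f(t) e^{-ikt} dt, as a Riemann sum.
    c = (ft[None, :] * np.exp(-1j * np.outer(k, t))).mean(axis=1)
    w = 1.0 - np.abs(k) / (n + 1)               # Fejer summability weights
    return np.real(np.exp(1j * np.outer(theta, k)) @ (w * c))

# Example: f is continuous but not smooth; the sup-error shrinks as n grows.
f = lambda t: np.abs(np.sin(t))
theta = np.linspace(0.0, 2 * np.pi, 7)
print(np.max(np.abs(fejer_mean(f, 50, theta) - f(theta))))
```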
Abstract:
The purpose of this paper is to develop a Bayesian analysis for nonlinear regression models under scale mixtures of skew-normal distributions. This novel class of models provides a useful generalization of symmetrical nonlinear regression models, since the error distributions cover both skewed and heavy-tailed distributions such as the skew-t, skew-slash and skew-contaminated normal distributions. The main advantage of this class of distributions is that they have a convenient hierarchical representation that allows the implementation of Markov chain Monte Carlo (MCMC) methods to simulate samples from the joint posterior distribution. In order to examine the robustness of this flexible class against outlying and influential observations, we present Bayesian case-deletion influence diagnostics based on the Kullback-Leibler divergence. Further, some discussions on model selection criteria are given. The newly developed procedures are illustrated with two simulation studies and a real data set previously analyzed under normal and skew-normal nonlinear regression models.
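The hierarchical representation alluded to above is, in the skew-normal base case, a convolution of a half-normal and a normal variable; conditioning on the half-normal latent variable is what makes Gibbs-type MCMC straightforward. A minimal sketch of sampling from that representation (parameter values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

def skew_normal_draws(mu, sigma, lam, size):
    """Draws from SN(mu, sigma^2, lam) via its stochastic (hierarchical)
    representation: Z = mu + sigma * (delta*|U| + sqrt(1 - delta^2)*V),
    with U, V independent standard normals and delta = lam / sqrt(1 + lam^2).
    Conditioning on the latent |U| enables simple MCMC data augmentation.
    """
    delta = lam / np.sqrt(1.0 + lam**2)
    h = np.abs(rng.normal(size=size))           # half-normal latent variable
    v = rng.normal(size=size)
    return mu + sigma * (delta * h + np.sqrt(1.0 - delta**2) * v)

z = skew_normal_draws(mu=0.0, sigma=1.0, lam=4.0, size=100_000)
print(z.mean(), ((z - z.mean())**3).mean())     # positive skew for lam > 0
```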