165 results for Numerical Models


Relevance: 30.00%

Publisher:

Abstract:

A procedure is proposed for the determination of the residence time distribution (RTD) of curved tubes, taking into account the non-ideal detection of the tracer. The procedure was applied to two holding tubes used for milk pasteurization at laboratory scale. Experimental data were obtained using an ionic tracer. The signal distortion caused by the detection system was considerable because of the short residence time. Four RTD models, namely axial dispersion, extended tanks in series, generalized convection and PFR + CSTR association, were adjusted after convolution with the E-curve of the detection system. The generalized convection model provided the best fit because it could better represent the tail of the tracer concentration curve that is caused by the laminar velocity profile and the recirculation regions. Adjusted model parameters were well correlated with the flow rate. (C) 2010 Elsevier Ltd. All rights reserved.
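The fitting procedure above, convolving each candidate RTD model with the detection system's E-curve before comparing it with the measured signal, can be sketched as follows. This is a minimal illustration with a synthetic tanks-in-series model and a hypothetical first-order detector E-curve, not the paper's data:

```python
import numpy as np
from math import gamma
from scipy.optimize import curve_fit

dt = 0.05
t = np.arange(dt, 20.0, dt)

# Hypothetical detector E-curve (first-order lag), normalized to unit area
detector = np.exp(-t / 0.5)
detector /= detector.sum() * dt

def tanks_in_series(t, tau, n):
    """E-curve of the (extended) tanks-in-series RTD model."""
    return (n / tau) * (n * t / tau) ** (n - 1) * np.exp(-n * t / tau) / gamma(n)

def model_response(t, tau, n):
    # Convolve the ideal RTD with the detection-system E-curve
    return np.convolve(tanks_in_series(t, tau, n), detector)[: len(t)] * dt

measured = model_response(t, 4.0, 3.0)     # synthetic "measured" tracer signal
(tau_fit, n_fit), _ = curve_fit(model_response, t, measured, p0=[3.0, 2.0])
```

With real data, `measured` would be the detector signal, and the fitted `tau` and `n` characterize the holding tube despite the detector distortion.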

Relevance: 30.00%

Publisher:

Abstract:

Background and objective: Tuberculosis (TB) and cancer are two of the main causes of pleural effusions, which frequently share similar clinical features and pleural fluid profiles. This study aimed to identify diagnostic models based on clinical and laboratory variables to differentiate tuberculous from malignant pleural effusions. Methods: A retrospective study of 403 patients (200 with TB; 203 with cancer) was undertaken. Univariate analysis was used to select the clinical variables relevant to the composition of the models. The variables' beta coefficients were used to define a numerical score of practical use. The performance of the most efficient models was tested in a sample of pleural exudates (64 new cases). Results: Two models are proposed for the diagnosis of effusions associated with each disease. For TB: (i) adenosine deaminase (ADA), globulins and the absence of malignant cells in the pleural fluid; and (ii) ADA, globulins and fluid appearance. For cancer: (i) patient age, fluid appearance, macrophage percentage and presence of atypical cells in the pleural fluid; and (ii) as for (i) excluding atypical cells. Application of the models to the 64 pleural effusions showed accuracy higher than 85% for all models. Conclusions: The proposed models were effective in suggesting pleural tuberculosis or cancer.
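Turning fitted beta coefficients into a practical numerical score, as described above, can be sketched as below. The coefficient values, variable names and scale are hypothetical placeholders for illustration, not the study's estimates:

```python
import math

# Hypothetical coefficients standing in for the fitted beta coefficients
# of TB model (i): ADA, globulins, absence of malignant cells.
COEFS = {"intercept": -4.0, "ada": 0.08, "globulins": 0.6, "no_malignant_cells": 2.0}

def tb_probability(ada, globulins, no_malignant_cells):
    """Logistic probability of a tuberculous effusion from the linear score."""
    score = (COEFS["intercept"]
             + COEFS["ada"] * ada
             + COEFS["globulins"] * globulins
             + COEFS["no_malignant_cells"] * float(no_malignant_cells))
    return 1.0 / (1.0 + math.exp(-score))

high = tb_probability(ada=60.0, globulins=4.0, no_malignant_cells=True)
low = tb_probability(ada=15.0, globulins=3.0, no_malignant_cells=False)
```

A high-ADA exudate without malignant cells scores much higher than a low-ADA one with them, which is the kind of separation the study's scores exploit.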

Relevance: 30.00%

Publisher:

Abstract:

Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both of these two scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects two (and not more) of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M x 2M non-linear system with arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems which range from academic test cases to realistic cardiovascular applications.
A specific variant of Broyden's algorithm is identified and recommended on the basis of its computer cost and reliability. (C) 2010 Elsevier B.V. All rights reserved.
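A minimal sketch of the matrix-free spirit of Broyden's ("good") method, only residual evaluations and rank-one Jacobian updates, applied to a toy two-unknown system standing in for the 2M x 2M interface system (not an actual vascular network):

```python
import numpy as np

def broyden_good(F, x0, tol=1e-10, maxit=50):
    """Broyden's 'good' method: residual evaluations only, rank-one updates."""
    x = np.asarray(x0, dtype=float)
    f = F(x)
    J = np.eye(len(x))                  # crude initial Jacobian approximation
    for _ in range(maxit):
        dx = np.linalg.solve(J, -f)
        x = x + dx
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        # Rank-one update; (f_new - f_old - J dx) reduces to f_new since J dx = -f_old
        J += np.outer(f, dx) / (dx @ dx)
    return x

# Toy stand-in for the interface system: two coupling unknowns
F = lambda x: np.array([x[0] + 0.1 * x[1] ** 2 - 1.4,
                        x[1] + 0.1 * x[0] ** 2 - 2.1])
sol = broyden_good(F, [0.0, 0.0])       # exact solution is (1, 2)
```

The point of the construction is that `F` is a black box: each evaluation could internally run one time step of two independently discretized sub-networks.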

Relevance: 30.00%

Publisher:

Abstract:

In this article we address decomposition strategies especially tailored to perform strong coupling of dimensionally heterogeneous models, under the hypothesis that one wants to solve each submodel separately and implement the interaction between subdomains by boundary conditions alone. The novel methodology takes full advantage of the small number of interface unknowns in this kind of problem. Existing algorithms can be viewed as variants of the 'natural' staggered algorithm in which each domain transfers function values to the other, and receives fluxes (or forces), and vice versa. This natural algorithm is known as Dirichlet-to-Neumann in the Domain Decomposition literature. Essentially, we propose a framework in which this algorithm is equivalent to applying Gauss-Seidel iterations to a suitably defined (linear or nonlinear) system of equations. It is then immediate to switch to other iterative solvers such as GMRES or other Krylov-based methods, which we assess through numerical experiments showing the significant gain that can be achieved. Indeed, the benefit is that an extremely flexible, automatic coupling strategy can be developed, which in addition leads to iterative procedures that are parameter-free and rapidly converging. Further, in linear problems they have the finite termination property. Copyright (C) 2009 John Wiley & Sons, Ltd.
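In the linear case the recasting is easy to demonstrate: the staggered (Gauss-Seidel) iteration and a Krylov solver (GMRES) are just two solvers for the same small interface system. A toy sketch with a 2 x 2 system standing in for the interface equations:

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Toy linear interface system A x = b standing in for the coupled-domain equations
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([6.0, 9.0])

def gauss_seidel(A, b, x0, iters):
    """Staggered-style sweep: update each unknown using the latest values."""
    x = x0.copy()
    for _ in range(iters):
        for i in range(len(b)):
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

x_gs = gauss_seidel(A, b, np.zeros(2), 25)
x_kr, info = gmres(A, b)                # Krylov alternative for the same system
```

For a linear system of size n, GMRES terminates in at most n iterations, which is the finite termination property mentioned above.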

Relevance: 30.00%

Publisher:

Abstract:

In many data sets from clinical studies there are patients insusceptible to the occurrence of the event of interest. Survival models which ignore this fact are generally inadequate. The main goal of this paper is to describe an application of the generalized additive models for location, scale, and shape (GAMLSS) framework to the fitting of long-term survival models. In this work the number of competing causes of the event of interest follows the negative binomial distribution. In this way, some well known models found in the literature are characterized as particular cases of our proposal. The model is conveniently parameterized in terms of the cured fraction, which is then linked to covariates. We explore the use of the gamlss package in R as a powerful tool for inference in long-term survival models. The procedure is illustrated with a numerical example. (C) 2009 Elsevier Ireland Ltd. All rights reserved.
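The latent-causes construction behind such models is compact: if the number of competing causes N has probability generating function G and each cause's time-to-event has survival S(t), the population survival is S_pop(t) = G(S(t)) and the cured fraction is G(0). A sketch for negative binomial N; the exponential latent times are an illustrative choice, and the paper's GAMLSS parameterization in terms of the cured fraction differs:

```python
import numpy as np

def nb_pgf(s, r, p):
    """PGF of N ~ NegBin(r, p) with P(N=k) = C(k+r-1, k) p^r (1-p)^k."""
    return (p / (1.0 - (1.0 - p) * s)) ** r

def pop_survival(t, r, p, rate=1.0):
    """S_pop(t) = G_N(S(t)); here S(t) = exp(-rate * t) for the latent times."""
    return nb_pgf(np.exp(-rate * t), r, p)

cured_fraction = nb_pgf(0.0, r=2, p=0.6)   # long-term survivors: p**r = 0.36
```

As t grows, S_pop(t) levels off at the cured fraction instead of decaying to zero, which is exactly the behaviour standard survival models cannot reproduce.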

Relevance: 30.00%

Publisher:

Abstract:

In this work we propose and analyze nonlinear elliptical models for longitudinal data, which represent an alternative to Gaussian models in the case of heavy tails, for instance. The elliptical distributions may help to control the influence of the observations on the parameter estimates by naturally attributing different weights to each case. We consider random effects to introduce the within-group correlation and work with the marginal model without requiring numerical integration. An iterative algorithm to obtain maximum likelihood estimates for the parameters is presented, as well as diagnostic results based on residual distances and local influence [Cook, D., 1986. Assessment of local influence. Journal of the Royal Statistical Society, Series B 48 (2), 133-169; Cook, D., 1987. Influence assessment. Journal of Applied Statistics 14 (2), 117-131; Escobar, L.A., Meeker, W.Q., 1992. Assessing influence in regression analysis with censored data. Biometrics 48, 507-528]. As numerical illustration, we apply the obtained results to a kinetics longitudinal data set presented in [Vonesh, E.F., Carter, R.L., 1992. Mixed-effects nonlinear regression for unbalanced repeated measures. Biometrics 48, 1-17], which was analyzed under the assumption of normality. (C) 2009 Elsevier B.V. All rights reserved.
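The "natural weights" idea can be illustrated with the Student-t member of the elliptical family, whose ML/EM fixed point downweights large residuals by w_i = (nu + 1)/(nu + d_i^2). A deliberately reduced univariate sketch with the scale held fixed (the paper's longitudinal models are far more general):

```python
import numpy as np

def t_location(x, nu=4.0, sigma=1.0, iters=50):
    """EM/IRLS location estimate under a Student-t model (scale held fixed)."""
    mu = np.median(x)
    for _ in range(iters):
        d2 = ((x - mu) / sigma) ** 2
        w = (nu + 1.0) / (nu + d2)      # outliers receive small weights
        mu = np.sum(w * x) / np.sum(w)
    return mu

x = np.array([0.1, -0.2, 0.05, 0.15, -0.1, 25.0])   # one gross outlier
mu_t = t_location(x)        # stays near the bulk of the data
mu_gauss = x.mean()         # dragged toward the outlier
```

The heavy-tailed fit assigns the outlier a near-zero weight automatically, which is the influence-control property the abstract refers to.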

Relevance: 30.00%

Publisher:

Abstract:

Likelihood ratio tests can be substantially size distorted in small- and moderate-sized samples. In this paper, we apply Skovgaard's [Skovgaard, I.M., 2001. Likelihood asymptotics. Scandinavian Journal of Statistics 28, 3-32] adjusted likelihood ratio statistic to exponential family nonlinear models. We show that the adjustment term has a simple compact form that can be easily implemented from standard statistical software. The adjusted statistic is approximately distributed as chi-squared with a high degree of accuracy. It is applicable in wide generality since it allows both the parameter of interest and the nuisance parameter to be vector-valued. Unlike the modified profile likelihood ratio statistic obtained from Cox and Reid [Cox, D.R., Reid, N., 1987. Parameter orthogonality and approximate conditional inference. Journal of the Royal Statistical Society B 49, 1-39], the adjusted statistic proposed here does not require an orthogonal parameterization. Numerical comparison of likelihood-based tests of varying dispersion favors the test we propose and a Bartlett-corrected version of the modified profile likelihood ratio test recently obtained by Cysneiros and Ferrari [Cysneiros, A.H.M.A., Ferrari, S.L.P., 2006. An improved likelihood ratio test for varying dispersion in exponential family nonlinear models. Statistics and Probability Letters 76 (3), 255-265]. (C) 2008 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Publisher:

Abstract:

This article presents important properties of standard discrete distributions and their conjugate densities. The Bernoulli and Poisson processes are described as generators of such discrete models. A characterization of distributions by mixtures is also introduced. This article adopts a novel singular notation and representation. Singular representations are unusual in statistical texts. Nevertheless, the singular notation makes it simpler to extend and generalize theoretical results and greatly facilitates numerical and computational implementation.
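Two of the standard conjugate pairs generated by the Bernoulli and Poisson processes reduce to simple parameter updates. A minimal sketch (the notation is generic, not the article's singular representation):

```python
def beta_bernoulli_update(a, b, successes, failures):
    """Beta(a, b) prior + Bernoulli data -> Beta(a + s, b + f) posterior."""
    return a + successes, b + failures

def gamma_poisson_update(shape, rate, counts):
    """Gamma(shape, rate) prior + Poisson counts -> Gamma(shape + sum, rate + n)."""
    return shape + sum(counts), rate + len(counts)

a, b = beta_bernoulli_update(1.0, 1.0, successes=7, failures=3)
posterior_mean = a / (a + b)                         # 8 / 12
shape, rate = gamma_poisson_update(2.0, 1.0, counts=[3, 1, 4])
```

Conjugacy is what makes these updates closed-form: the posterior stays in the prior's family, so inference is pure bookkeeping on the parameters.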

Relevance: 30.00%

Publisher:

Abstract:

We present an efficient numerical methodology for the 3D computation of incompressible multi-phase flows described by conservative phase-field models. We focus here on the case of density-matched fluids with different viscosity (Model H). The numerical method employs adaptive mesh refinement (AMR) in concert with an efficient semi-implicit time discretization strategy and a linear, multi-level multigrid to relax high order stability constraints and to capture the flow's disparate scales at optimal cost. Only five linear solvers are needed per time-step. Moreover, all the adaptive methodology is constructed from scratch to allow a systematic investigation of the key aspects of AMR in a conservative, phase-field setting. We validate the method and demonstrate its capabilities and efficacy with important examples of drop deformation, Kelvin-Helmholtz instability, and flow-induced drop coalescence. (C) 2010 Elsevier Inc. All rights reserved.
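The semi-implicit idea (stiff diffusion treated implicitly, nonlinearity explicitly) can be shown on a much simpler relative of the model above: a 1D Allen-Cahn equation advanced with a Fourier spectral discretization. This is a deliberately reduced sketch, not the paper's conservative Model-H scheme or its AMR machinery:

```python
import numpy as np

N, L, eps, dt = 128, 2.0 * np.pi, 0.2, 0.1
x = np.linspace(0.0, L, N, endpoint=False)
k2 = (2.0 * np.pi * np.fft.fftfreq(N, d=L / N)) ** 2   # squared wavenumbers

u = np.tanh(np.cos(x) / eps)            # initial phase-field interface profile
for _ in range(200):
    nonlinear = u ** 3 - u              # treated explicitly
    # diffusion treated implicitly: divide by (1 + dt * eps^2 * k^2) in Fourier space
    u_hat = (np.fft.fft(u) - dt * np.fft.fft(nonlinear)) / (1.0 + dt * eps ** 2 * k2)
    u = np.real(np.fft.ifft(u_hat))
```

Treating the highest-order term implicitly is what relaxes the severe explicit stability constraint, allowing the comparatively large `dt` used here.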

Relevance: 30.00%

Publisher:

Abstract:

In this paper we discuss bias-corrected estimators for the regression and the dispersion parameters in an extended class of dispersion models (Jorgensen, 1997b). This class extends the regular dispersion models by letting the dispersion parameter vary throughout the observations, and contains the dispersion models as a particular case. General formulae for the O(n^-1) bias are obtained explicitly in dispersion models with dispersion covariates, which generalize previous results obtained by Botter and Cordeiro (1998), Cordeiro and McCullagh (1991), Cordeiro and Vasconcellos (1999), and Paula (1992). The practical use of the formulae is that we can derive closed-form expressions for the O(n^-1) biases of the maximum likelihood estimators of the regression and dispersion parameters when the information matrix has closed form. Various expressions for the O(n^-1) biases are given for special models. The formulae have advantages for numerical purposes because they require only a supplementary weighted linear regression. We also compare these bias-corrected estimators with two different estimators, also bias-free to order O(n^-1), that are based on bootstrap methods. These estimators are compared by simulation. (C) 2011 Elsevier B.V. All rights reserved.
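The bootstrap competitors mentioned above remove O(n^-1) bias generically: estimate the bias by resampling and subtract it. A sketch with the downward-biased plug-in variance as the estimator (synthetic data, not the paper's simulation design):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_bias_corrected(x, estimator, n_boot=2000):
    """theta_hat - estimated bias = 2 * theta_hat - mean of bootstrap replicates."""
    theta_hat = estimator(x)
    boots = [estimator(rng.choice(x, size=len(x), replace=True))
             for _ in range(n_boot)]
    return 2.0 * theta_hat - np.mean(boots)

x = rng.normal(0.0, 2.0, size=20)
plug_in = np.var(x)                     # divides by n: biased downward
corrected = bootstrap_bias_corrected(x, np.var)
```

The analytic corrections of the paper achieve the same order of bias removal with only a supplementary weighted linear regression, i.e. without the resampling cost.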

Relevance: 30.00%

Publisher:

Abstract:

The main purpose of this work is to study the behaviour of Skovgaard's [Skovgaard, I.M., 2001. Likelihood asymptotics. Scandinavian Journal of Statistics 28, 3-32] adjusted likelihood ratio statistic in testing a simple hypothesis in a new class of regression models proposed here. The proposed class of regression models considers Dirichlet distributed observations, and the parameters that index the Dirichlet distributions are related to covariates and unknown regression coefficients. This class is useful for modelling data consisting of multivariate positive observations summing to one and generalizes the beta regression model described in Vasconcellos and Cribari-Neto [Vasconcellos, K.L.P., Cribari-Neto, F., 2005. Improved maximum likelihood estimation in a new class of beta regression models. Brazilian Journal of Probability and Statistics 19, 13-31]. We show that, for our model, Skovgaard's adjusted likelihood ratio statistics have a simple compact form that can be easily implemented in standard statistical software. The adjusted statistic is approximately chi-squared distributed with a high degree of accuracy. Some numerical simulations show that the modified test is more reliable in finite samples than the usual likelihood ratio procedure. An empirical application is also presented and discussed. (C) 2009 Elsevier B.V. All rights reserved.
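A sketch of the likelihood machinery for such a model: Dirichlet parameters tied to covariates through a log link (an assumed link for illustration; the paper's exact parameterization may differ), with the log-likelihood maximized numerically on synthetic data:

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p, k = 100, 2, 3                      # observations, covariates, components
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([[0.5, 1.0, 0.2],   # intercepts (arbitrary illustrative values)
                      [0.3, -0.4, 0.6]]) # slopes
alpha = np.exp(X @ beta_true)            # log link: one alpha per component
Y = np.vstack([rng.dirichlet(a) for a in alpha])
Y = np.clip(Y, 1e-12, None)              # guard against underflow before log
Y /= Y.sum(axis=1, keepdims=True)

def neg_loglik(beta_flat):
    a = np.exp(X @ beta_flat.reshape(p, k))
    return -np.sum(gammaln(a.sum(axis=1)) - gammaln(a).sum(axis=1)
                   + ((a - 1.0) * np.log(Y)).sum(axis=1))

fit = minimize(neg_loglik, np.zeros(p * k), method="BFGS")
```

The likelihood ratio statistics the paper studies are differences of such maximized log-likelihoods under the null and the alternative.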

Relevance: 20.00%

Publisher:

Abstract:

The aim of this study was to comparatively assess dental arch width, in the canine and molar regions, by means of direct measurements from plaster models, photocopies and digitized images of the models. The sample consisted of 130 pairs of plaster models, photocopies and digitized images of the models of white patients (n = 65), both genders, with Class I and Class II Division 1 malocclusions, treated by standard Edgewise mechanics and extraction of the four first premolars. Maxillary and mandibular intercanine and intermolar widths were measured by a calibrated examiner, prior to and after orthodontic treatment, using the three modes of reproduction of the dental arches. Dispersion of the data relative to pre- and posttreatment intra-arch linear measurements (mm) was represented as box plots. The three measuring methods were compared by one-way ANOVA for repeated measurements (α = 0.05). Initial / final mean values varied as follows: 33.94 to 34.29 mm / 34.49 to 34.66 mm (maxillary intercanine width); 26.23 to 26.26 mm / 26.77 to 26.84 mm (mandibular intercanine width); 49.55 to 49.66 mm / 47.28 to 47.45 mm (maxillary intermolar width) and 43.28 to 43.41 mm / 40.29 to 40.46 mm (mandibular intermolar width). There were no statistically significant differences between mean dental arch widths estimated by the three studied methods, prior to and after orthodontic treatment. It may be concluded that photocopies and digitized images of the plaster models provided reliable reproductions of the dental arches for obtaining transversal intra-arch measurements.
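The comparison logic can be sketched with a plain one-way ANOVA on simulated widths. The study used repeated-measures ANOVA on the same arches; here the groups are synthetic, with a deliberate bias injected into one "method" so the test has something to detect:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(7)
true_width = 34.0                       # mm, a plausible intercanine width
plaster = true_width + rng.normal(0.0, 0.15, size=30)
photocopy = true_width + rng.normal(0.0, 0.15, size=30)
digitized = true_width + 0.5 + rng.normal(0.0, 0.15, size=30)  # biased on purpose

stat, p = f_oneway(plaster, photocopy, digitized)   # small p flags the bias
```

In the actual study no significant differences were found between the three methods; removing the injected 0.5 mm offset here would typically yield p above 0.05.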

Relevance: 20.00%

Publisher:

Abstract:

Dental impression is an important step in the preparation of prostheses since it reproduces the anatomic and surface details of teeth and adjacent structures. The objective of this study was to evaluate the linear dimensional alterations in gypsum dies obtained with different elastomeric materials, using a resin coping impression technique with individual shells. A master cast made of stainless steel with fixed prosthesis characteristics and two prepared abutment teeth was used to obtain the impressions. Reference points (A, B, C, D, E, F, G and H) were recorded on the occlusal and buccal surfaces of the abutments to register the distances. The impressions were obtained using the following materials: polyether, mercaptan-polysulfide, addition silicone, and condensation silicone. The transfer impressions were made with custom trays and an irreversible hydrocolloid material and were poured with type IV gypsum. The distances between identified points in the gypsum dies were measured using an optical microscope and the results were statistically analyzed by ANOVA (p < 0.05) and Tukey's test. The mean distances were registered as follows: addition silicone (AB = 13.6 µm, CD = 15.0 µm, EF = 14.6 µm, GH = 15.2 µm), mercaptan-polysulfide (AB = 36.0 µm, CD = 36.0 µm, EF = 39.6 µm, GH = 40.6 µm), polyether (AB = 35.2 µm, CD = 35.6 µm, EF = 39.4 µm, GH = 41.4 µm) and condensation silicone (AB = 69.2 µm, CD = 71.0 µm, EF = 80.6 µm, GH = 81.2 µm). All of the measurements found in the gypsum dies were compared to those of the master cast. The results demonstrated that the addition silicone provided the best stability of the compounds tested, followed by polyether, polysulfide and condensation silicone. No statistical differences were obtained between the polyether and mercaptan-polysulfide materials.

Relevance: 20.00%

Publisher:

Abstract:

Below cloud scavenging processes have been investigated by combining a numerical simulation with local atmospheric conditions and particulate matter (PM) concentrations at different sites in Germany. The below cloud scavenging model was coupled with a bulk particulate matter counter (TSI Portacounter) dataset to predict the variability of particulate air concentrations during chosen rain events. The TSI samples and meteorological parameters were obtained during three winter campaigns: at Deuselbach, March 1994, consisting of three different events; Sylt, April 1994; and Freiburg, March 1995. The results show a good agreement between modeled and observed air concentrations, emphasizing the quality of the conceptual model used in the below cloud scavenging numerical modeling. Squared Pearson correlation coefficients between modeled and observed data were above 0.7 and significant, except for the Freiburg campaign event. The differences between numerical simulations and the observed dataset are explained by wind direction changes and, perhaps, the absence of mass advection terms in the model. These results validate previous works based on the same conceptual model.
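The conceptual core of below-cloud scavenging is first-order washout, dC/dt = -lambda * C, with the scavenging coefficient often parameterized as a power law in rain rate. A minimal sketch; the power-law constants here are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def washout(c0, lam, t):
    """Concentration decay under a constant scavenging coefficient lam [1/s]."""
    return c0 * np.exp(-lam * t)

# Hypothetical power-law scavenging coefficient lam = a * R**b (R in mm/h);
# a and b are illustrative constants, not the paper's fitted values.
a, b = 8.0e-5, 0.6
R = 5.0                                   # rain rate, mm/h
lam = a * R ** b
c = washout(100.0, lam, t=np.arange(0.0, 3600.0, 60.0))   # one hour of rain
```

Comparing such decay curves against counter measurements during individual rain events is the kind of model-observation check the campaigns above performed.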

Relevance: 20.00%

Publisher:

Abstract:

The Low Level Jet (LLJ) observed in the Porto Alegre metropolitan region, Rio Grande do Sul State, Brazil, was analyzed using upper-air observations at 00:00 and 12:00 UTC from 1989-2003. The LLJ classification criteria proposed by Bonner (1968) and modified by Whiteman et al. (1997) were applied to determine LLJ occurrence. Afterwards, an LLJ event was selected that was one of the most intense observed in summer (01/27/2002 at 12:00 UTC) during the study period. The tools used in this study were atmospheric soundings, GOES-8 satellite images, and wind, temperature and specific humidity fields from the GLOBAL, ETA and BRAMS models. Based on the numerical analysis it was possible to verify that the three models overestimated the specific humidity and potential temperature values at the time of LLJ occurrence. The wind speed was underestimated by the models. At 12:00 UTC (the hour the LLJ was detected in the Porto Alegre region), the three models showed warm and wet air from the north over the study region, generating conditions for Mesoscale Convective System (MCS) formation and intensification.
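The Bonner/Whiteman classification applied above can be sketched as a simple profile check. The thresholds below are the commonly cited ones (Bonner categories 1-3: maximum of at least 12/16/20 m/s with a decrease of at least 6/8/10 m/s above it; Whiteman et al. add category 0 at 10/5 m/s); the vertical search is simplified to the lowest 3 km rather than the full "next minimum" rule:

```python
def classify_llj(speeds_ms, heights_m, top=3000.0):
    """Strongest LLJ category met (0 per Whiteman et al., 1-3 per Bonner), or None.
    Assumes heights are sorted in ascending order."""
    criteria = [(3, 20.0, 10.0), (2, 16.0, 8.0), (1, 12.0, 6.0), (0, 10.0, 5.0)]
    low = [(s, h) for s, h in zip(speeds_ms, heights_m) if h <= top]
    if not low:
        return None
    speeds = [s for s, _ in low]
    i = speeds.index(max(speeds))           # jet core
    drop = speeds[i] - min(speeds[i:])      # decrease above the core
    for cat, smax, required_drop in criteria:
        if speeds[i] >= smax and drop >= required_drop:
            return cat
    return None

profile = classify_llj([6.0, 18.0, 10.0, 6.0], [100.0, 500.0, 1500.0, 2500.0])
```

An 18 m/s core at 500 m falling to 6 m/s aloft meets the 16/8 thresholds but not 20/10, so it classifies as category 2.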