963 results for GENERALIZED ESTIMATING EQUATIONS
Abstract:
The study of non-Newtonian flow in plate heat exchangers (PHEs) is of great importance for the food industry. The objective of this work was to study the pressure drop of pineapple juice in a PHE with 50 degrees chevron plates. Density and flow properties of pineapple juice were determined and correlated with temperature (17.4 <= T <= 85.8 degrees C) and soluble solids content (11.0 <= X(s) <= 52.4 degrees Brix). The Ostwald-de Waele (power law) model described the rheological behavior well. The friction factor for non-isothermal flow of pineapple juice in the PHE was obtained for diagonal and parallel/side flow. Experimental results were well correlated with the generalized Reynolds number (20 <= Re(g) <= 1230) and were compared with predictions from equations from the literature. The mean absolute error for pressure drop prediction was 4% for the diagonal plate and 10% for the parallel plate.
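As a sketch of the quantity the abstract correlates against, the generalized Reynolds number of a power-law (Ostwald-de Waele) fluid can be computed from the Metzner-Reed pipe-flow definition. The numbers below are illustrative placeholders, not the paper's measured properties, and the PHE-specific definition in the paper may use an equivalent diameter and different geometric constants.

```python
def generalized_reynolds(rho, v, d, k, n):
    """Metzner-Reed generalized Reynolds number for a power-law fluid
    (tau = k * shear_rate**n) at mean velocity v in a duct of
    characteristic diameter d; rho is the fluid density."""
    return (rho * v**(2 - n) * d**n) / (k * ((3*n + 1) / (4*n))**n * 8**(n - 1))

# Illustrative values only: a shear-thinning juice has n < 1.
re_g = generalized_reynolds(rho=1100.0, v=0.5, d=0.005, k=0.2, n=0.6)
```

For n = 1 the expression collapses to the Newtonian Reynolds number rho*v*d/mu with mu = k, which is a convenient sanity check.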
Abstract:
A procedure is proposed for the determination of the residence time distribution (RTD) of curved tubes, taking into account the non-ideal detection of the tracer. The procedure was applied to two holding tubes used for milk pasteurization at laboratory scale. Experimental data were obtained using an ionic tracer. The signal distortion caused by the detection system was considerable because of the short residence time. Four RTD models, namely axial dispersion, extended tanks in series, generalized convection and PFR + CSTR association, were adjusted after convolution with the E-curve of the detection system. The generalized convection model provided the best fit because it could better represent the tail of the tracer concentration curve that is caused by the laminar velocity profile and the recirculation regions. Adjusted model parameters were well correlated with the flow rate. (C) 2010 Elsevier Ltd. All rights reserved.
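The convolution step described above can be illustrated with the tanks-in-series model: the model E-curve is convolved with the E-curve of the detection system to give the signal actually observed. The parameter values below are arbitrary illustrations, not fitted values from the paper.

```python
import math

def tanks_in_series_E(t, tau, N):
    """E-curve of N ideal stirred tanks in series with total mean
    residence time tau (each tank has tau_i = tau / N)."""
    ti = tau / N
    return t**(N - 1) * math.exp(-t / ti) / (math.factorial(N - 1) * ti**N)

def convolve(e1, e2, dt):
    """Discrete (Riemann-sum) convolution of two sampled E-curves, as
    used to combine a model RTD with the detection-system E-curve."""
    n = len(e1)
    return [dt * sum(e1[j] * e2[k - j] for j in range(k + 1)) for k in range(n)]

dt = 0.05
ts = [i * dt for i in range(400)]
model = [tanks_in_series_E(t, tau=3.0, N=4) for t in ts]      # process RTD
probe = [tanks_in_series_E(t, tau=1.0, N=2) for t in ts]      # detector E-curve
observed = convolve(model, probe, dt)                         # what is measured
```

The convolved curve keeps unit area and its mean residence time is the sum of the two means, which is why fitting must be done after convolution rather than on the raw model.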
Abstract:
The inverse Weibull distribution has the ability to model failure rates which are quite common in reliability and biological studies. A three-parameter generalized inverse Weibull distribution with decreasing and unimodal failure rate is introduced and studied. We provide a comprehensive treatment of the mathematical properties of the new distribution including expressions for the moment generating function and the rth generalized moment. The mixture model of two generalized inverse Weibull distributions is investigated. The identifiability property of the mixture model is demonstrated. For the first time, we propose a location-scale regression model based on the log-generalized inverse Weibull distribution for modeling lifetime data. In addition, we develop some diagnostic tools for sensitivity analysis. Two applications of real data are given to illustrate the potentiality of the proposed regression model.
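A minimal sketch of one common three-parameter generalized inverse Weibull form, F(x) = exp(-gamma * (alpha/x)**beta), together with its closed-form quantile function for simulation; the paper's exact parameterization may differ.

```python
import math, random

def giw_cdf(x, alpha, beta, gamma):
    """CDF of a generalized inverse Weibull distribution,
    F(x) = exp(-gamma * (alpha / x)**beta), x > 0
    (assumed parameterization, for illustration)."""
    return math.exp(-gamma * (alpha / x)**beta)

def giw_quantile(u, alpha, beta, gamma):
    """Inverse CDF: solves F(x) = u, handy for inverse-transform sampling."""
    return alpha * (gamma / (-math.log(u)))**(1.0 / beta)

random.seed(0)
sample = [giw_quantile(random.random(), 1.0, 2.0, 1.5) for _ in range(10000)]
```

The closed-form quantile makes simulation of lifetimes from this family a one-liner, which is convenient when studying the mixture of two such distributions.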
Abstract:
A five-parameter distribution so-called the beta modified Weibull distribution is defined and studied. The new distribution contains, as special submodels, several important distributions discussed in the literature, such as the generalized modified Weibull, beta Weibull, exponentiated Weibull, beta exponential, modified Weibull and Weibull distributions, among others. The new distribution can be used effectively in the analysis of survival data since it accommodates monotone, unimodal and bathtub-shaped hazard functions. We derive the moments and examine the order statistics and their moments. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. A real data set is used to illustrate the importance and flexibility of the new distribution.
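The construction above can be sketched numerically: a beta-generated distribution evaluates the regularized incomplete beta function I_z(a, b) at the baseline CDF, here the modified Weibull z = 1 - exp(-alpha * x**gamma * exp(lambda*x)). The incomplete beta is computed below by plain Simpson integration (adequate for a, b >= 1); all parameter values are illustrative.

```python
import math

def reg_inc_beta(z, a, b, steps=2000):
    """Regularized incomplete beta I_z(a, b) via composite Simpson
    integration (fine for a, b >= 1; no endpoint singularity)."""
    if z <= 0.0:
        return 0.0
    if z >= 1.0:
        return 1.0
    f = lambda t: t**(a - 1) * (1.0 - t)**(b - 1)
    h = z / steps
    s = f(0.0) + f(z)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(i * h)
    integral = s * h / 3.0
    return integral / (math.gamma(a) * math.gamma(b) / math.gamma(a + b))

def modified_weibull_cdf(x, alpha, gamma_, lam):
    """Baseline modified Weibull CDF, F(x) = 1 - exp(-alpha x^gamma e^(lam x))."""
    return 1.0 - math.exp(-alpha * x**gamma_ * math.exp(lam * x))

def beta_modified_weibull_cdf(x, a, b, alpha, gamma_, lam):
    """Beta modified Weibull CDF: I applied at the baseline CDF."""
    return reg_inc_beta(modified_weibull_cdf(x, alpha, gamma_, lam), a, b)
```

Setting a = b = 1 makes I_z(1, 1) = z, recovering the modified Weibull submodel, which mirrors how the other special submodels in the abstract arise from parameter choices.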
Abstract:
In a sample of censored survival times, the presence of an immune proportion of individuals who are not subject to death, failure or relapse may be indicated by a relatively high number of individuals with large censored survival times. In this paper the generalized log-gamma model is modified for the possibility that long-term survivors may be present in the data. The model attempts to separately estimate the effects of covariates on the surviving fraction, that is, the proportion of the population for which the event never occurs. The logistic function is used for the regression model of the surviving fraction. Inference for the model parameters is considered via maximum likelihood. Some influence methods, such as the local influence and the total local influence of an individual, are derived, analyzed and discussed. Finally, a data set from the medical area is analyzed under the generalized log-gamma mixture model. A residual analysis is performed in order to select an appropriate model.
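The mixture formulation described above can be sketched as S_pop(t) = p(x) + (1 - p(x)) * S0(t), with a logistic model for the cured fraction p(x). In the sketch a Weibull baseline stands in for the generalized log-gamma distribution, purely for illustration.

```python
import math

def cure_fraction(x, beta0, beta1):
    """Logistic regression for the surviving (cured) fraction p(x)."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

def population_survival(t, x, beta0, beta1, shape, scale):
    """Mixture cure model S_pop(t) = p + (1 - p) * S0(t).
    A Weibull baseline S0 replaces the paper's generalized log-gamma
    (an illustrative assumption)."""
    p = cure_fraction(x, beta0, beta1)
    s0 = math.exp(-(t / scale)**shape)
    return p + (1.0 - p) * s0
```

The defining feature of the model is visible in the limits: S_pop(0) = 1 while S_pop(t) flattens out at p(x) instead of decaying to zero, which is exactly the plateau produced by long-term survivors.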
Abstract:
Based on physical laws of similarity, an analytic solution of the soil water potential form of the Richards equation was derived for water infiltration into a homogeneous sand. The derivation assumes a similarity between the soil water retention function and that of the soil water content profiles taken at fixed times. The new solution successfully described soil water content profiles experimentally measured for water infiltrating downward, upward, and horizontally into a homogeneous sand and agrees with that presented by Philip in 1957. The utility of this analysis is still to be verified, but it is expected to hold for soils that have a narrow pore-size distribution before wetting and that manifest a sharp increase of water content at the wetting front during infiltration. The effect of van Genuchten's parameters alpha and n on the application of the solution to other porous media was investigated. The solution also improves and provides a more realistic description of the infiltration process than that pioneered by Green and Ampt in 1911.
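For comparison, the Green-Ampt (1911) description mentioned above can be sketched: cumulative infiltration F(t) solves the implicit equation F - psi*dtheta*ln(1 + F/(psi*dtheta)) = K*t, which a fixed-point iteration handles easily. Parameter values are illustrative.

```python
import math

def green_ampt_F(t, K, psi, dtheta, tol=1e-10):
    """Cumulative infiltration F(t) from the Green-Ampt model, via the
    contraction F <- K*t + s*ln(1 + F/s) with s = psi * dtheta
    (K: saturated conductivity, psi: wetting-front suction head,
    dtheta: moisture deficit)."""
    s = psi * dtheta
    F = max(K * t, 1e-9)
    for _ in range(200):
        F_new = K * t + s * math.log(1.0 + F / s)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

def green_ampt_rate(F, K, psi, dtheta):
    """Infiltration rate f = K * (1 + psi*dtheta / F), decreasing in F."""
    return K * (1.0 + psi * dtheta / F)
```

The sharp (piston-like) wetting front assumed by Green-Ampt is precisely what the similarity-based solution in the abstract refines for soils with a narrow pore-size distribution.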
Abstract:
Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently exhibit constant variance, will under-represent variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models underestimates the variability and, consequently, may incorrectly indicate significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The posterior joint density function was sampled using Markov chain Monte Carlo algorithms, allowing inferences over the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
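The overdispersion effect described above can be illustrated by simulation: letting the binomial success probability vary between experimental units (a random effect) inflates the variance of proportion data beyond the plain binomial value. This is a generic beta-binomial illustration, not the paper's model.

```python
import random, statistics

random.seed(42)
n, trials = 20, 5000

def binomial(n, p):
    """Binomial draw as a sum of Bernoulli trials (stdlib only)."""
    return sum(random.random() < p for _ in range(n))

# Plain binomial counts with fixed p = 0.3.
binom_counts = [binomial(n, 0.3) for _ in range(trials)]

# Beta-binomial: p varies between units via a Beta(3, 7) random effect,
# chosen so that E[p] = 0.3 matches the fixed-p case.
bb_counts = [binomial(n, random.betavariate(3, 7)) for _ in range(trials)]

var_binom = statistics.pvariance(binom_counts)   # near n*p*(1-p) = 4.2
var_bb = statistics.pvariance(bb_counts)         # inflated by the random effect
```

A binomial GLM fitted to the second data set would understate the variance, which is exactly the mechanism by which overdispersion produces spuriously significant effects.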
Abstract:
Leaf wetness duration (LWD) is related to plant disease occurrence and is therefore a key parameter in agrometeorology. As LWD is seldom measured at standard weather stations, it must be estimated in order to ensure the effectiveness of warning systems and the scheduling of chemical disease control. Among the models used to estimate LWD, those that use physical principles of dew formation and dew and/or rain evaporation have shown good portability and sufficiently accurate results for operational use. However, the requirement of net radiation (Rn) is a disadvantage for operational physical models, since this variable is usually not measured over crops or even at standard weather stations. With the objective of proposing a solution for this problem, this study evaluated the ability of four models to estimate hourly Rn and their impact on LWD estimates using a Penman-Monteith approach. A field experiment was carried out in Elora, Ontario, Canada, with measurements of LWD, Rn and other meteorological variables over mowed turfgrass for a 58-day period during the growing season of 2003. Four models for estimating hourly Rn, based on different combinations of incoming solar radiation (Rg), air temperature (T), relative humidity (RH), cloud cover (CC) and cloud height (CH), were evaluated. Measured and estimated hourly Rn values were applied in a Penman-Monteith model to estimate LWD. Correlating measured and estimated Rn, we observed that all models performed well in terms of estimating hourly Rn. However, when cloud data were used the models overestimated positive Rn and underestimated negative Rn. When only Rg and T were used to estimate hourly Rn, the model underestimated positive Rn and no tendency was observed for negative Rn. The best performance was obtained with Model I, which presented, in general, the smallest mean absolute error (MAE) and the highest C-index.
When measured LWD was compared to the Penman-Monteith LWD, calculated with measured and estimated Rn, few differences were observed. Both precision and accuracy were high, with the slopes of the relationships ranging from 0.96 to 1.02 and R^2 from 0.85 to 0.92, resulting in C-indices between 0.87 and 0.93. The LWD mean absolute errors associated with Rn estimates were between 1.0 and 1.5 h, an accuracy sufficient for use in plant disease management schemes.
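As a hedged sketch of how hourly Rn can be estimated from Rg, T and RH alone, the snippet below combines a net-shortwave term with an FAO-56-style net-longwave term. This is a generic formulation for illustration, not necessarily any of the paper's four models.

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4

def net_radiation(rg, t_air_c, rh, albedo=0.23, cloud_factor=1.0):
    """Rough hourly net radiation (W m-2) from incoming solar radiation
    rg (W m-2), air temperature (deg C) and relative humidity (%).
    cloud_factor ~1 for clear sky, <1 under cloud; in practice it is
    derived from rg relative to clear-sky radiation (assumed here)."""
    t_k = t_air_c + 273.15
    # Saturation and actual vapour pressure (kPa), Tetens-type formula.
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
    ea = rh / 100.0 * es
    rns = (1.0 - albedo) * rg                                   # net shortwave gain
    rnl = SIGMA * t_k**4 * (0.34 - 0.14 * math.sqrt(ea)) * cloud_factor
    return rns - rnl                                            # longwave is a loss
```

The sign behaviour matches the abstract's discussion: daytime Rn is positive (shortwave dominates), while at night rg = 0 and Rn goes negative, the regime in which dew forms.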
Abstract:
Information on the nutritional requirements of some Brazilian farmed fish species, especially essential amino acid (EAA) requirements, is scarce. The estimation of amino acid requirements based on the amino acid composition of fish is a fast and reliable alternative. Matrinxa, Brycon amazonicus, and curimbata, Prochilodus lineatus, are two important Brazilian fish with potential for aquaculture. The objective of the present study was to estimate the amino acid requirements of these species and to analyze similarities among the amino acid compositions of different fish species by cluster analysis. To estimate amino acid requirements, the following formula was used: amino acid requirement = [(amount of an individual amino acid in fish muscle tissue) x (average total EAA requirement among channel catfish, Ictalurus punctatus, Nile tilapia, Oreochromis niloticus, and common carp, Cyprinus carpio)]/(average fish muscle total EAA). Most values found lie within the range of requirements determined for other omnivorous fish species, with the exception of the leucine requirement estimated for both species and the arginine requirement estimated for matrinxa alone. Rather than writing off the need for regular dose-response assays under the ideal protein concept to determine the EAA requirements of curimbata and matrinxa, the results set a solid basis for the study of the dietary amino acid requirements of tropical species.
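The formula above translates directly into code. All numbers below are hypothetical placeholders for illustration, not the paper's measured compositions.

```python
# Hypothetical muscle EAA amounts (illustrative units, e.g. g/100 g protein).
muscle_aa = {"lysine": 8.5, "leucine": 7.9, "arginine": 6.3}

avg_total_eaa_requirement = 14.2   # assumed average total EAA requirement of the
                                   # three reference species (catfish, tilapia, carp)
avg_muscle_total_eaa = 42.0        # assumed average total EAA in fish muscle

def aa_requirement(muscle_amount):
    """Requirement of one EAA = its muscle amount scaled by the ratio of
    the average total EAA requirement to the average muscle total EAA."""
    return muscle_amount * avg_total_eaa_requirement / avg_muscle_total_eaa

requirements = {aa: aa_requirement(v) for aa, v in muscle_aa.items()}
```

Because the scaling factor is the same for every amino acid, the estimated requirements inherit the muscle tissue's EAA profile, which is the core idea behind the ideal protein concept mentioned in the abstract.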
Abstract:
We analyze the quantum dynamics of radiation propagating in a single-mode optical fiber with dispersion, nonlinearity, and Raman coupling to thermal phonons. We start from a fundamental Hamiltonian that includes the principal known nonlinear effects and quantum-noise sources, including linear gain and loss. Both Markovian and frequency-dependent, non-Markovian reservoirs are treated. This treatment allows quantum Langevin equations, which have a classical form except for additional quantum-noise terms, to be calculated. In practical calculations, it is more useful to transform to Wigner or +P quasi-probability operator representations. These transformations result in stochastic equations that can be analyzed by use of perturbation theory or exact numerical techniques. The results have applications to fiber-optics communications, networking, and sensor technology.
Abstract:
The Gaudin models based on the face-type elliptic quantum groups and the XYZ Gaudin models are studied. The Gaudin model Hamiltonians are constructed and are diagonalized by using the algebraic Bethe ansatz method. The corresponding face-type Knizhnik–Zamolodchikov equations and their solutions are given.
Abstract:
The generalized Gibbs sampler (GGS) is a recently developed Markov chain Monte Carlo (MCMC) technique that enables Gibbs-like sampling of state spaces that lack a convenient representation in terms of a fixed coordinate system. This paper describes a new sampler, called the tree sampler, which uses the GGS to sample from a state space consisting of phylogenetic trees. The tree sampler is useful for a wide range of phylogenetic applications, including Bayesian, maximum likelihood, and maximum parsimony methods. A fast new algorithm to search for a maximum parsimony phylogeny is presented, using the tree sampler in the context of simulated annealing. The mathematics underlying the algorithm is explained and its time complexity is analyzed. The method is tested on two large data sets consisting of 123 sequences and 500 sequences, respectively. The new algorithm is shown to compare very favorably in terms of speed and accuracy to the program DNAPARS from the PHYLIP package.
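The simulated-annealing search described above follows the standard accept/reject-with-cooling skeleton. The sketch below demonstrates that skeleton on a toy combinatorial problem; the GGS tree proposals and parsimony scoring themselves are beyond a short example and are not reproduced here.

```python
import math, random

def simulated_annealing(initial, score, propose, t0=1.0, cooling=0.995,
                        steps=5000, seed=1):
    """Generic simulated annealing (lower score is better): accept any
    improvement, accept a worsening move with probability
    exp(-(delta)/T), and cool T geometrically. In the paper the
    proposals come from the generalized Gibbs sampler over tree space."""
    rng = random.Random(seed)
    state, s = initial, score(initial)
    best, best_s = state, s
    t = t0
    for _ in range(steps):
        cand = propose(state, rng)
        cs = score(cand)
        if cs <= s or rng.random() < math.exp((s - cs) / t):
            state, s = cand, cs
            if s < best_s:
                best, best_s = state, s
        t *= cooling
    return best, best_s

# Toy demo: order numbers to minimize the sum of adjacent differences
# (minimum is 19 for 0..19, attained by sorted order).
def toy_score(perm):
    return sum(abs(a - b) for a, b in zip(perm, perm[1:]))

def toy_propose(perm, rng):
    i, j = rng.sample(range(len(perm)), 2)
    p = list(perm)
    p[i], p[j] = p[j], p[i]
    return p

start = random.Random(0).sample(range(20), 20)
best, best_score = simulated_annealing(start, toy_score, toy_propose)
```

The cooling schedule is the usual trade-off: a high initial temperature lets the chain escape local optima early, while geometric cooling freezes it into a near-optimal state, mirroring the maximum-parsimony search in the abstract.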
Abstract:
In this paper, we propose a fast adaptive importance sampling method for the efficient simulation of buffer overflow probabilities in queueing networks. The method comprises three stages. First, we estimate the minimum cross-entropy tilting parameter for a small buffer level; next, we use this as a starting value for the estimation of the optimal tilting parameter for the actual (large) buffer level. Finally, the tilting parameter just found is used to estimate the overflow probability of interest. We study various properties of the method in more detail for the M/M/1 queue and conjecture that similar properties also hold for quite general queueing networks. Numerical results support this conjecture and demonstrate the high efficiency of the proposed algorithm.
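The change-of-measure idea can be sketched for the simplest case mentioned above, the M/M/1 queue, where the well-known asymptotically optimal exponential tilt simply swaps the up and down transition probabilities of the embedded walk. The adaptive cross-entropy stages of the paper are not reproduced; this is the fixed-tilt estimator only.

```python
import random

def overflow_prob_is(p_up, level_b, n_runs=20000, seed=7):
    """Importance-sampling estimate of the probability that a birth-death
    walk started at 1 hits level_b before 0 (the busy-cycle overflow
    event of an M/M/1 queue with p_up = lambda/(lambda+mu) < 1/2).
    The change of measure swaps up/down probabilities; each step
    multiplies the likelihood ratio by nominal/tilted probability."""
    rng = random.Random(seed)
    q_up = 1.0 - p_up              # tilted up-probability (the swap)
    total = 0.0
    for _ in range(n_runs):
        x, lr = 1, 1.0
        while 0 < x < level_b:
            if rng.random() < q_up:
                x += 1
                lr *= p_up / q_up
            else:
                x -= 1
                lr *= (1.0 - p_up) / (1.0 - q_up)
        if x == level_b:           # overflow reached: count the weight
            total += lr
    return total / n_runs

est = overflow_prob_is(p_up=0.3, level_b=15)
# Exact gambler's-ruin value for comparison:
r = (1 - 0.3) / 0.3
exact = (r - 1) / (r**15 - 1)
```

Under the swapped measure every overflow path carries the same likelihood ratio, so the estimator's only randomness is whether a cycle overflows, which is why this tilt is so efficient compared with crude Monte Carlo for rare overflow levels.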
Abstract:
In this paper we extend the guiding function approach to show that there are periodic or bounded solutions for first order systems of ordinary differential equations of the form x' = f(t, x), a.e. t in [a, b], where f satisfies the Caratheodory conditions. Our results generalize recent ones of Mawhin and Ward.
Abstract:
The artificial dissipation effects in some solutions obtained with Navier-Stokes flow solvers are demonstrated. The solvers were used to calculate the flow of an artificially dissipative fluid, that is, a fluid whose dissipative properties arise entirely from the solution method itself. This was done by setting the viscosity and heat conduction coefficients in the Navier-Stokes solvers to zero everywhere inside the flow, while at the same time applying the usual no-slip and thermally conducting boundary conditions at solid boundaries. An artificially dissipative flow solution is found in which the dissipation depends entirely on the solver itself. If the difference between the solutions obtained with the viscosity and thermal conductivity set to zero and set to their correct values is small, it is clear that the artificial dissipation is dominating and the solutions are unreliable.