45 results for diffusive viscoelastic model, global weak solution, error estimate


Relevance: 100.00%

Abstract:

In this work we study the problem of model identification for a population, employing a discrete dynamic model based on the Richards growth model. The population is subjected to interventions due to consumption, such as hunting or farming of animals. The model identification allows us to estimate the probability of, or the average time for, the population to reach a given level. Parameter inference for these models is obtained with the likelihood profile technique, as developed in this paper. The identification method developed here can be applied to evaluate the productivity of animal husbandry or the risk of extinction of autochthonous populations. It is applied to data on the Brazilian beef cattle herd, and the time for the population to reach a certain goal level is investigated.
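To make the setup concrete, the sketch below simulates a discrete-time Richards-type update with a constant consumption (harvest) term and estimates the average time to reach a goal level by Monte Carlo. The update form, parameter names, and values are illustrative assumptions; they are not the paper's identified model or its likelihood-profile inference.

import numpy as np

def richards_step(n, r, k, q, harvest):
    # One discrete-time Richards growth step with a consumption/harvest term:
    # n' = n + r*n*(1 - (n/k)**q) - harvest (an assumed form of the update).
    return max(n + r * n * (1.0 - (n / k) ** q) - harvest, 0.0)

def simulate(n0, r, k, q, harvest, steps, noise_sd, rng):
    # One stochastic trajectory with multiplicative lognormal noise.
    path = [n0]
    for _ in range(steps):
        n = richards_step(path[-1], r, k, q, harvest)
        path.append(n * np.exp(rng.normal(0.0, noise_sd)))
    return np.array(path)

def mean_time_to_level(goal, n_runs=1000, **kwargs):
    # Monte Carlo estimate of the average time for the population to reach
    # the goal level; runs that never reach it are excluded.
    times = []
    for seed in range(n_runs):
        path = simulate(rng=np.random.default_rng(seed), **kwargs)
        hits = np.nonzero(path >= goal)[0]
        if hits.size:
            times.append(hits[0])
    return np.mean(times) if times else float("inf")

print(mean_time_to_level(goal=1.7e8, n0=1.5e8, r=0.3, k=2.2e8, q=1.0,
                         harvest=1.0e7, steps=50, noise_sd=0.05))

In the paper's setting the parameters (r, k, q and the offtake) would be estimated from the herd time series via likelihood profiles rather than fixed by hand.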

Relevance: 100.00%

Abstract:

Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type to produce in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce the smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. Setups that are typically present in lot sizing problems are relaxed, together with the integer frequencies of cutting patterns in the cutting problem. A large-scale linear optimization problem therefore arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still captures the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present several sets of computational tests, analyzed over three different scenarios. The results show that, by combining the problems and using an exact method, significant gains can be obtained compared to the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
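The cutting side of the combined model is classically handled by Gilmore-Gomory column generation, which the column generation technique mentioned above builds on. Below is a minimal sketch of that ingredient alone, applied to the cutting stock LP relaxation; the coupling with lot sizing variables is not reproduced, integer item widths are assumed so the pricing problem is a dynamic-programming knapsack, and all names are illustrative.

import numpy as np
from scipy.optimize import linprog

def knapsack_pricing(duals, widths, roll_width):
    # Pricing step: unbounded integer knapsack maximizing the dual value
    # packed into one roll; returns the best new cutting pattern.
    best = np.zeros(roll_width + 1)
    choice = [None] * (roll_width + 1)
    for w in range(1, roll_width + 1):
        for i, wi in enumerate(widths):
            if wi <= w and best[w - wi] + duals[i] > best[w]:
                best[w], choice[w] = best[w - wi] + duals[i], i
    pattern = np.zeros(len(widths), dtype=int)
    w = roll_width
    while choice[w] is not None:
        pattern[choice[w]] += 1
        w -= widths[choice[w]]
    return pattern, best[roll_width]

def cutting_stock_lp(widths, demand, roll_width, max_iters=50):
    # Restricted master problem: minimize rolls used subject to covering
    # demand; columns (cutting patterns) are generated on the fly.
    patterns = [np.eye(len(widths), dtype=int)[i] * (roll_width // w)
                for i, w in enumerate(widths)]       # trivial starting columns
    for _ in range(max_iters):
        A = np.column_stack(patterns)
        res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-np.asarray(demand),
                      bounds=(0, None), method="highs")
        duals = -res.ineqlin.marginals               # prices of the demand rows
        pattern, value = knapsack_pricing(duals, widths, roll_width)
        if value <= 1.0 + 1e-9:                      # no improving column left
            break
        patterns.append(pattern)
    return res.fun, res.x, patterns

print(cutting_stock_lp(widths=[3, 4, 5], demand=[30, 20, 15], roll_width=10)[0])

In the combined problem of the paper, the master LP would also carry the lot sizing (production and storage) variables and the period-linking constraints.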

Relevance: 100.00%

Abstract:

Model trees are a particular case of decision trees employed to solve regression problems. They have the advantage of presenting an interpretable output, which helps the end user gain confidence in the prediction and provides a basis for new insight about the data, confirming or rejecting previously formed hypotheses. Moreover, model trees present an acceptable level of predictive performance in comparison to most techniques used for solving regression problems. Since generating the optimal model tree is an NP-complete problem, traditional model tree induction algorithms use a greedy top-down divide-and-conquer strategy, which may not converge to the globally optimal solution. In this paper, we propose a novel algorithm based on the evolutionary algorithms paradigm as an alternative heuristic for generating model trees, in order to improve convergence to globally near-optimal solutions. We call our new approach evolutionary model tree induction (E-Motion). We test its predictive performance using public UCI data sets, and compare the results to traditional greedy regression/model tree induction algorithms, as well as to other evolutionary approaches. Results show that our method presents a good trade-off between predictive performance and model comprehensibility, which may be crucial in many machine learning applications. (C) 2010 Elsevier Inc. All rights reserved.
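The abstract does not detail E-Motion's operators, so the skeleton below shows only the generic evolutionary loop such methods build on. fitness_fn, crossover, and mutate are assumed callables over candidate model trees, not E-Motion's API; a fitness of the form prediction error plus a penalty proportional to tree size captures the accuracy/comprehensibility trade-off discussed in the results.

import random

def evolve(population, fitness_fn, crossover, mutate, generations=100, elite=2):
    # Generic elitist evolutionary loop (lower fitness is better), i.e. the
    # population-based global search that replaces greedy top-down induction.
    for _ in range(generations):
        ranked = sorted(population, key=fitness_fn)
        parents = ranked[: max(2, len(ranked) // 2)]      # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(len(population) - elite)]
        population = ranked[:elite] + children            # keep the elite
    return min(population, key=fitness_fn)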

Relevance: 100.00%

Abstract:

Rheological properties of adherent cells are essential for their physiological functions, and microrheological measurements on living cells have shown that their viscoelastic responses follow a weak power law over a wide range of time scales. This power law is also influenced by the mechanical prestress borne by the cytoskeleton, suggesting that cytoskeletal prestress determines the cell's viscoelasticity, but the biophysical origins of this behavior are largely unknown. We have recently developed a stochastic two-dimensional model of an elastically jointed chain that links the power-law rheology to the prestress. Here we use a similar approach to study the creep response of a prestressed three-dimensional elastically jointed chain as a viscoelastic model of the semiflexible polymers that comprise the prestressed cytoskeletal lattice. Using a Monte Carlo based algorithm, we show that numerical simulations of the chain's creep behavior closely correspond to the behavior observed experimentally in living cells. The power-law creep behavior results from a finite-speed propagation of free energy from the chain's end points toward the center of the chain in response to an externally applied stretching force. The property that links the power law to the prestress is the chain's stiffening with increasing prestress, which originates from entropic and enthalpic contributions. These results indicate that the essential features of cellular rheology can be explained by the viscoelastic behaviors of individual semiflexible polymers of the cytoskeleton.
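As a hedged sketch of the Monte Carlo ingredient, the code below runs Metropolis sampling for a three-dimensional chain of rigid unit segments with elastic joints under a stretching force applied along x at one end. The energy function, move set, and parameters are illustrative assumptions rather than the paper's algorithm or its prestress protocol.

import numpy as np

rng = np.random.default_rng(1)

def rotation_matrix(axis, angle):
    # Rodrigues' formula: rotation by `angle` about the normalized `axis`.
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def energy(t, kappa, force):
    # Elastic joint energy (favoring aligned neighboring segments) minus the
    # work of the external force on the end-to-end extension along x.
    bend = -kappa * np.sum(np.einsum("ij,ij->i", t[:-1], t[1:]))
    return bend - force * np.sum(t[:, 0])

def metropolis_step(t, kappa, force, beta=1.0, max_angle=0.3):
    # Pivot move: rotate the chain's tail about a random axis at a random
    # joint; accept or reject with the Metropolis rule.
    j = rng.integers(1, len(t))
    R = rotation_matrix(rng.normal(size=3), rng.uniform(-max_angle, max_angle))
    trial = t.copy()
    trial[j:] = trial[j:] @ R.T
    dE = energy(trial, kappa, force) - energy(t, kappa, force)
    return trial if dE < 0 or rng.random() < np.exp(-beta * dE) else t

# Creep protocol: equilibrate at zero force, then switch the force on and
# record extension versus Monte Carlo time (power-law creep shows up as an
# approximately straight line on log-log axes).
tangents = np.tile([1.0, 0.0, 0.0], (50, 1))
extension = []
for step in range(20000):
    tangents = metropolis_step(tangents, kappa=5.0,
                               force=0.5 if step > 5000 else 0.0)
    extension.append(np.sum(tangents[:, 0]))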

Relevance: 100.00%

Abstract:

There is a positive correlation between the intensity of use of a given antibiotic and the prevalence of resistant strains. The more you treat, the more patients infected with resistant strains appear and, as a consequence, the higher the mortality due to the infection and the longer the hospitalization time. In contrast, the less you treat, the higher the mortality rates and the longer the hospitalization times of patients infected with sensitive strains that could have been successfully treated. The hypothesis proposed in this paper is an attempt to resolve this conflict: there must be an optimum treatment intensity that minimizes both the additional mortality and the hospitalization time due to infection by both sensitive and resistant bacterial strains. To test this hypothesis we applied a simple mathematical model that allowed us to estimate the optimum proportion of patients to be treated in order to minimize the total number of deaths and the hospitalization time due to the infection in a hospital setting. (C) 2007 Elsevier Inc. All rights reserved.
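The hypothesis can be illustrated with a few lines of arithmetic: if deaths among untreated sensitive-strain patients fall linearly with the fraction treated p, while resistant-strain deaths rise as resistance prevalence saturates with use, the total burden has an interior minimum. The functional forms and rates below are assumptions for illustration, not the paper's model.

import numpy as np
from scipy.optimize import minimize_scalar

def total_burden(p, d_s=0.2, d_r=0.2, k=3.0):
    # Burden = deaths among untreated sensitive-strain patients (falls with
    # coverage p) + deaths from resistant strains (whose prevalence saturates
    # as treatment intensity grows). All rates are made-up constants.
    return d_s * (1.0 - p) + d_r * (1.0 - np.exp(-k * p))

res = minimize_scalar(total_burden, bounds=(0.0, 1.0), method="bounded")
print(f"optimal fraction treated ~ {res.x:.2f}")  # ln(d_r*k/d_s)/k ~ 0.37 here

Treating everyone or no one is worse than the interior optimum in this toy setting, which is exactly the conflict the paper formalizes.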

Relevance: 100.00%

Abstract:

Knowing the best 1D model of the crustal and upper mantle structure is useful not only for routine hypocenter determination, but also for linearized joint inversions of hypocenters and 3D crustal structure, where a good choice of the initial model can be very important. Here, we tested the combination of a simple GA inversion with the widely used HYPO71 program to find the best three-layer model (upper crust, lower crust, and upper mantle) by minimizing the overall P- and S-arrival residuals, using local and regional earthquakes in two areas of the Brazilian shield. Results from the Tocantins Province (Central Brazil) and the southern border of the São Francisco craton (SE Brazil) indicated average crustal thicknesses of 38 and 43 km, respectively, consistent with previous estimates from receiver functions and seismic refraction lines. The GA + HYPO71 inversion produced correct Vp/Vs ratios (1.73 and 1.71, respectively), as expected from Wadati diagrams. Tests with synthetic data showed that the method is robust for the crustal thickness, Pn velocity, and Vp/Vs ratio when using events at distances up to about 400 km, despite the small number of events available (7 and 22, respectively). The velocities of the upper and lower crust, however, are less well constrained. Interestingly, in the Tocantins Province the GA + HYPO71 inversion showed a secondary solution (a local minimum) for the average crustal thickness, besides the global minimum solution, caused by the existence of two distinct domains in Central Brazil with very different crustal thicknesses. (C) 2010 Elsevier Ltd. All rights reserved.
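A minimal sketch of the GA wrapper: misfit(model) is assumed to run the forward location step (HYPO71 in the paper, or any locator) on a candidate three-layer model and return the overall RMS of the P- and S-arrival residuals. The bounds and operators below are illustrative, not the paper's GA settings.

import numpy as np

rng = np.random.default_rng(42)

# Assumed search box: upper-crust Vp, lower-crust Vp (km/s), crustal
# thickness (km), Pn velocity (km/s), and Vp/Vs.
BOUNDS = np.array([[5.5, 6.5], [6.3, 7.2], [30.0, 50.0],
                   [7.6, 8.4], [1.65, 1.80]])

def ga_minimize(misfit, pop_size=50, generations=100, sigma=0.05):
    # Stripped-down real-coded GA: binary tournaments + Gaussian mutation
    # (no crossover). Each individual is a 5-vector model.
    lo, hi = BOUNDS[:, 0], BOUNDS[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(BOUNDS)))
    for _ in range(generations):
        scores = np.array([misfit(m) for m in pop])
        i, j = rng.integers(pop_size, size=(2, pop_size))  # random tournaments
        parents = np.where((scores[i] < scores[j])[:, None], pop[i], pop[j])
        pop = np.clip(parents + rng.normal(0.0, sigma, parents.shape) * (hi - lo),
                      lo, hi)
    scores = np.array([misfit(m) for m in pop])
    return pop[scores.argmin()], scores.min()

Rerunning such a search from different random seeds is one simple way to expose secondary minima like the one reported for the Tocantins Province.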

Relevance: 100.00%

Abstract:

Let $a > 0$, let $\Omega \subset \mathbb{R}^N$ be a bounded smooth domain, and let $A$ denote the Laplace operator with Dirichlet boundary condition in $L^2(\Omega)$. We study the damped wave problem $$u_{tt} + au_t + Au = f(u), \quad t > 0, \qquad u(0) = u_0 \in H^1_0(\Omega), \quad u_t(0) = v_0 \in L^2(\Omega),$$ where $f: \mathbb{R} \to \mathbb{R}$ is a continuously differentiable function satisfying the growth condition $|f(s) - f(t)| \le C|s - t|(1 + |s|^{\rho-1} + |t|^{\rho-1})$, $1 < \rho < (N+2)/(N-2)$ ($N \ge 3$), and the dissipativeness condition $\limsup_{|s| \to \infty} f(s)/s < \lambda_1$, with $\lambda_1$ being the first eigenvalue of $A$. We construct the global weak solutions of this problem as the limits as $\eta \to 0^+$ of the solutions of wave equations involving the strong damping term $2\eta A^{1/2}u_t$ with $\eta > 0$. We define a subclass $LS \subset C([0,\infty), L^2(\Omega) \times H^{-1}(\Omega)) \cap L^\infty([0,\infty), H^1_0(\Omega) \times L^2(\Omega))$ of the 'limit' solutions such that through each initial condition from $H^1_0(\Omega) \times L^2(\Omega)$ passes at least one solution of the class $LS$. We show that the class $LS$ has a bounded dissipativeness property in $H^1_0(\Omega) \times L^2(\Omega)$ and we construct a closed bounded invariant subset $\mathcal{A}$ of $H^1_0(\Omega) \times L^2(\Omega)$, which is weakly compact in $H^1_0(\Omega) \times L^2(\Omega)$ and compact in $H^s(\Omega) \times H^{s-1}(\Omega)$, $s \in [0, 1)$. Furthermore, $\mathcal{A}$ attracts bounded subsets of $H^1_0(\Omega) \times L^2(\Omega)$ in $H^s(\Omega) \times H^{s-1}(\Omega)$ for each $s \in [0, 1)$. For $N = 3, 4, 5$ we also prove a local uniqueness result for the case of smooth initial data.
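Written out, the construction is a vanishing strong damping limit. Assuming, as is standard for strongly damped wave equations, that the extra damping acts on $u_t$, the approximating problems read $$u^\eta_{tt} + a u^\eta_t + 2\eta A^{1/2} u^\eta_t + A u^\eta = f(u^\eta), \qquad u^\eta(0) = u_0, \quad u^\eta_t(0) = v_0, \qquad \eta > 0,$$ and the global weak solutions are obtained as limits $u = \lim_{\eta \to 0^+} u^\eta$ along subsequences.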

Relevance: 100.00%

Abstract:

This study investigates the numerical simulation of three-dimensional time-dependent viscoelastic free surface flows using the Upper-Convected Maxwell (UCM) constitutive equation and an explicit algebraic model. The investigation was carried out to develop a simplified approach that can be applied to the extrudate swell problem. The relevant physics of this flow phenomenon is discussed in the paper, and an algebraic model to predict extrudate swell is presented. It is based on an explicit algebraic representation of the non-Newtonian extra-stress through a kinematic tensor formed with the scaled dyadic product of the velocity field. The elasticity of the fluid is governed by a single transport equation for a scalar quantity which has the dimension of strain rate. Mass and momentum conservation and the constitutive equation (UCM or algebraic model) were solved by a three-dimensional time-dependent finite difference method. The free surface of the fluid was modeled using a marker-and-cell approach. The algebraic model was validated by comparing the numerical predictions with analytic solutions for pipe flow. In comparison with the classical UCM model, one advantage of this approach is that the computational workload is substantially reduced: the UCM model employs six differential equations while the algebraic model uses only one. The results showed stable flows with very large extrudate growths, beyond those usually obtained with standard differential viscoelastic models. (C) 2010 Elsevier Ltd. All rights reserved.
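For reference, the UCM model can be exercised in a homogeneous flow where its analytic steady state is known; the sketch below integrates the UCM component equations in start-up of simple shear. This is a standard constitutive check under assumed parameters, not the paper's three-dimensional free-surface solver.

def ucm_simple_shear(gdot=1.0, lam=1.0, eta=1.0, dt=1e-3, t_end=10.0):
    # Explicit Euler integration of the UCM equations
    #   tau + lam*(d(tau)/dt - L.tau - tau.L^T) = 2*eta*D
    # in homogeneous simple shear u = (gdot*y, 0, 0), where only the
    # xx, xy, and yy stress components evolve.
    txx = txy = tyy = 0.0
    for _ in range(int(t_end / dt)):
        dtxx = 2.0 * gdot * txy - txx / lam
        dtxy = gdot * tyy + eta * gdot / lam - txy / lam
        dtyy = -tyy / lam
        txx, txy, tyy = txx + dt * dtxx, txy + dt * dtxy, tyy + dt * dtyy
    return txx, txy

# Analytic steady state: txx = 2*lam*eta*gdot**2, txy = eta*gdot.
print(ucm_simple_shear())  # ~ (2.0, 1.0)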

Relevance: 100.00%

Abstract:

We review some issues related to the implications of different missing-data mechanisms for statistical inference in contingency tables, and consider simulation studies to compare the results obtained under such models with those obtained when the units with missing data are disregarded. We confirm that, although analyses under the correct missing at random (MAR) and missing completely at random (MCAR) models are in general more efficient even for small sample sizes, there are exceptions in which they may not improve on the results obtained by ignoring the partially classified data. We show that under the missing not at random (MNAR) model, estimates on the boundary of the parameter space, as well as lack of identifiability of the parameters of saturated models, may be associated with undesirable asymptotic properties of maximum likelihood estimators and likelihood ratio tests; even in standard cases the bias of the estimators may be low only for very large samples. We also show that the probability of a boundary solution obtained under the correct MNAR model may be large even for large samples and that, consequently, one may not always conclude that an MNAR model is misspecified because the estimate lies on the boundary of the parameter space.
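A small simulation in the spirit of that comparison: under MCAR, both the complete-case estimator of P(Y=1) and an estimator that also uses the units whose Y is missing (through the fully observed X) are unbiased, but the latter typically has smaller variance. All data-generating values below are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def compare(n=200, p_x=0.5, p_y=(0.3, 0.8), p_obs=0.6, reps=5000):
    cc, full = [], []
    for _ in range(reps):
        x = rng.random(n) < p_x
        y = rng.random(n) < np.where(x, p_y[1], p_y[0])
        seen = rng.random(n) < p_obs        # MCAR missingness indicator for Y
        cc.append(y[seen].mean())           # complete-case estimator
        # Estimator using all units: P(Y=1) = sum_x P(X=x)*P(Y=1 | X=x, seen)
        full.append(sum((x == v).mean() * y[(x == v) & seen].mean()
                        for v in (0, 1)))
    return np.std(cc), np.std(full)

print(compare())  # the second (full-data) standard deviation is smaller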

Relevance: 100.00%

Abstract:

In chemical analyses performed by laboratories, one faces the problem of determining the concentration of a chemical element in a sample. In practice, this is handled with the so-called linear calibration model, which assumes that the errors associated with the independent variables are negligible compared with those of the response variable. In this work, a new linear calibration model is proposed in which the independent variables are subject to heteroscedastic measurement errors. A simulation study is carried out to verify some properties of the estimators derived for the new model, and the usual calibration model is also considered for comparison with the new approach. Three applications are presented to verify the performance of the new approach. Copyright (C) 2010 John Wiley & Sons, Ltd.
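The issue being addressed can be seen in a few lines: when the independent variable carries heteroscedastic measurement error, the naive least-squares slope is attenuated, and a method-of-moments correction using the (here assumed known) error variances recovers it. This illustrates the phenomenon only; it is not the estimator derived in the paper.

import numpy as np

rng = np.random.default_rng(3)

n, a, b = 5000, 1.0, 2.0
xi = rng.uniform(0.0, 10.0, n)              # true concentrations
var_u = 0.5 + 0.3 * xi                      # heteroscedastic error variances
x = xi + rng.normal(0.0, np.sqrt(var_u))    # observed independent variable
y = a + b * xi + rng.normal(0.0, 0.5, n)    # measured response

b_naive = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
b_corrected = np.cov(x, y)[0, 1] / (np.var(x, ddof=1) - var_u.mean())
print(b_naive, b_corrected)                 # attenuated (~1.6) vs ~2.0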

Relevance: 100.00%

Abstract:

In this paper we present a Bayesian analysis for the stochastic volatility (SV) model and a generalized form of it, with the goal of estimating the volatility of financial time series. Considering some special cases of SV models, we use Markov chain Monte Carlo algorithms and the WinBUGS software to obtain posterior summaries for the different forms of SV models. We introduce some Bayesian discrimination techniques for choosing the best model to be used for estimating volatilities and forecasting financial series. An empirical example applying the methodology is presented using the IBOVESPA financial series.
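For reference, the basic SV model the paper builds on can be written and simulated directly; the parameter values below are illustrative, and the Bayesian fitting itself (done with MCMC in WinBUGS in the paper) is not reproduced here.

import numpy as np

def simulate_sv(n=1000, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=0):
    # Basic stochastic volatility model:
    #   h[t] = mu + phi*(h[t-1] - mu) + sigma_eta*eta[t]  (log-volatility AR(1))
    #   y[t] = exp(h[t]/2) * eps[t]                       (observed returns)
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    h[0] = rng.normal(mu, sigma_eta / np.sqrt(1.0 - phi**2))  # stationary draw
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
    y = np.exp(h / 2.0) * rng.normal(size=n)
    return y, h

y, h = simulate_sv()  # y mimics returns such as IBOVESPA log-returns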

Relevance: 100.00%

Abstract:

BACKGROUND: Knowledge of trends in cardiovascular mortality is important for raising hypotheses about its occurrence and for supporting prevention and control measures. OBJECTIVES: To compare mortality from cardiovascular diseases as a whole and from their main subgroups, ischemic heart disease and cerebrovascular disease (IHD and CBVD), in the city of São Paulo, by sex and age, between 1996-1998 and 2003-2005. METHODS: Death records from the Program for the Improvement of Mortality Information for the Municipality (PROAIM) and population estimates from the São Paulo State Data Analysis System Foundation (SEADE) were used. The magnitude of mortality and the changes between the two triennia were measured by describing rates and relative percentage variation. A Poisson regression model was also used to estimate the change in mortality between the periods. RESULTS: An important reduction in cardiovascular mortality was observed. Rates increase with age in both sexes and are higher in the male population from 70 years of age onward. Mortality rates from IHD are higher than those from CBVD in both men and women aged 50 years or more. The decline in mortality from cardiovascular diseases as a whole was greatest in women aged 20 to 29 years (-30%) and in men aged 30 to 39 years (-26%). CONCLUSION: The intensity of cardiovascular mortality decreased from 1996-1998 to 2003-2005; even so, there are differences between groups. This reduction may partly reflect greater access to diagnostic and therapeutic methods.
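The rate comparison described in the methods can be reproduced mechanically with a Poisson regression using log person-time as an offset; the counts below are made up for illustration and are not the São Paulo data.

import numpy as np
import statsmodels.api as sm

deaths = np.array([5200, 4100])          # hypothetical deaths per triennium
person_years = np.array([3.1e6, 3.3e6])  # hypothetical person-time at risk
period = np.array([0, 1])                # 0: 1996-1998, 1: 2003-2005

X = sm.add_constant(period)
fit = sm.GLM(deaths, X, family=sm.families.Poisson(),
             offset=np.log(person_years)).fit()
rate_ratio = np.exp(fit.params[1])       # exp(beta) = mortality rate ratio
print(f"rate ratio {rate_ratio:.3f} ({100 * (rate_ratio - 1):+.1f}% change)")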

Relevance: 100.00%

Abstract:

Pitzer's equation for the excess Gibbs energy of aqueous solutions of low-molecular-weight electrolytes is extended to aqueous solutions of polyelectrolytes. The model retains the original form of Pitzer's model (combining a long-range term, based on the Debye-Hückel equation, with a short-range term similar to the virial equation, where the second osmotic virial coefficient depends on the ionic strength). The extension consists of two parts: first, it is assumed that a constant fraction of the monomer units of the polyelectrolyte is dissociated, i.e., that this fraction does not depend on the concentration of the polyelectrolyte; and second, a modified expression for the ionic strength (wherein each charged monomer group is taken into account individually) is introduced. This modification accounts for the presence of charged polyelectrolyte chains, which cannot be regarded as point charges. The resulting equation was used to correlate osmotic coefficient data of aqueous solutions of a single polyelectrolyte as well as of binary mixtures of a single polyelectrolyte and a low-molecular-weight salt. It was additionally applied to correlate liquid-liquid equilibrium data of some aqueous two-phase systems that may form when a polyelectrolyte and another hydrophilic but neutral polymer are simultaneously dissolved in water. Good agreement between the experimental data and the correlation results is observed for all investigated systems. (c) 2008 Elsevier B.V. All rights reserved.
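The second part of the extension can be made concrete: in the modified ionic strength, each charged monomer group is counted individually. The sketch below paraphrases that idea under stated assumptions (degree of polymerization n_p, constant dissociated fraction alpha, monovalent groups and counterions); it is not the paper's exact expression.

def ionic_strength(molalities, charges):
    # Standard definition: I = (1/2) * sum_i m_i * z_i**2.
    return 0.5 * sum(m * z**2 for m, z in zip(molalities, charges))

def polyelectrolyte_contribution(m_poly, n_p, alpha, z_m=-1, z_c=+1):
    # Each chain of n_p monomer units contributes n_p*alpha charged monomer
    # groups, counted individually, plus the matching counterions.
    m_groups = m_poly * n_p * alpha
    return [m_groups, m_groups], [z_m, z_c]

ms, zs = polyelectrolyte_contribution(m_poly=0.01, n_p=100, alpha=0.3)
print(ionic_strength(ms, zs))  # 0.3, far larger than for a point-charge ion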

Relevance: 100.00%

Abstract:

Iron is hypothesized to be an important micronutrient for ocean biota, modulating carbon dioxide uptake by the ocean biological pump. Studies have assumed that atmospheric deposition of iron to the open ocean is predominantly from mineral aerosols. For the first time we model the source, transport, and deposition of iron from combustion sources. Iron is produced in small quantities during fossil fuel burning, incinerator use, and biomass burning. The sources of combustion iron are concentrated in the industrialized regions and in biomass burning regions, largely in the tropics. Model results suggest that combustion iron can represent up to 50% of the total iron deposited, but over open ocean regions it is usually less than 5% of the total iron, with the highest values (<30%) close to the East Asian continent in the North Pacific. For ocean biogeochemistry the bioavailability of the iron is important, and this is often estimated by the fraction which is soluble (Fe(II)). Previous studies have argued that atmospheric processing of the relatively insoluble Fe(III) occurs to make it more soluble (Fe(II)). Modeled estimates of soluble iron amounts based solely on atmospheric processing as simulated here cannot match the variability in daily averaged in situ concentration measurements in Korea, which is located close to both combustion and dust sources. The best match to the observations is obtained if there are substantial direct emissions of soluble iron from combustion processes. If we assume that the soluble Fe/black carbon ratios observed in Korea are representative of the whole globe, we obtain the result that deposition of soluble iron from combustion contributes 20-100% of the soluble iron deposition over many ocean regions. This implies that more work should be done refining the emissions and deposition of combustion sources of soluble iron globally.

Relevance: 100.00%

Abstract:

This study develops a simplified model describing the evolutionary dynamics of a population composed of obligately sexually and asexually reproducing unicellular organisms. The model assumes that the organisms have diploid genomes consisting of two chromosomes, and that the sexual organisms replicate by first dividing into haploid intermediates, which then combine with other haploids, followed by the normal mitotic division of the resulting diploid into two new daughter cells. We assume that the fitness landscape of the diploids is analogous to the single-fitness-peak approach often used in single-chromosome studies. That is, we assume a master chromosome that becomes defective with just one point mutation. The diploid fitness then depends on whether the genome has zero, one, or two copies of the master chromosome. We also assume that only pairs of haploids carrying a master chromosome are capable of combining to produce sexual diploid cells, and that this process is described by second-order kinetics. We find that, in a range of intermediate values of the replication fidelity, sexually reproducing cells can outcompete asexual ones, provided the initial abundance of sexual cells is above some threshold value. The range of values over which sexual reproduction outcompetes asexual reproduction increases with decreasing replication rate and increasing population density. We critically evaluate a common approach, based on a group selection perspective, used to study the competition between populations, and show its flaws in addressing the problem of the evolution of sex.