990 results for modeling algorithms


Relevance: 20.00%

Abstract:

The Earth's tectonic plates are strong, viscoelastic shells which make up the outermost part of a thermally convecting, predominantly viscous layer. Brittle failure of the lithosphere occurs when stresses are high. In order to build a realistic simulation of the planet's evolution, the complete viscoelastic/brittle convection system needs to be considered. A particle-in-cell finite element method is demonstrated which can simulate very large-deformation viscoelasticity with a strain-dependent yield stress. This is applied to a plate-deformation problem. Numerical accuracy is demonstrated relative to analytic benchmarks, and the characteristics of the method are discussed.
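
As a concrete illustration of the kind of stress update such a particle-in-cell viscoelastic/brittle scheme performs, the scalar sketch below applies a Maxwell viscoelastic update followed by a strain-dependent yield cap; the moduli, time step and weakening law are illustrative assumptions, not values from the paper.

    import math

    def update_particle_stress(tau_old, strain_rate, plastic_strain,
                               shear_modulus=1e10, viscosity=1e21, dt=1e11,
                               tau_yield0=1e8, weakening=0.5, strain_ref=0.1):
        """Return the updated deviatoric stress and accumulated plastic strain."""
        # Maxwell viscoelastic update of the stress stored on the particle
        tau_trial = (tau_old + 2.0 * shear_modulus * dt * strain_rate) / \
                    (1.0 + shear_modulus * dt / viscosity)
        # Strain-dependent yield stress: weakens as plastic strain accumulates
        tau_yield = tau_yield0 * (1.0 - weakening * min(plastic_strain / strain_ref, 1.0))
        if abs(tau_trial) > tau_yield:
            # Return the stress to the yield surface and record the plastic increment
            plastic_strain += (abs(tau_trial) - tau_yield) / (2.0 * shear_modulus)
            tau_trial = math.copysign(tau_yield, tau_trial)
        return tau_trial, plastic_strain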

Relevance: 20.00%

Abstract:

In this paper, a genetic algorithm (GA) is applied to the optimum design of reinforced concrete liquid-retaining structures, a problem involving three discrete design variables: slab thickness, reinforcement diameter and reinforcement spacing. GA, a search technique based on the mechanics of natural genetics, couples a Darwinian survival-of-the-fittest principle with a random yet structured information exchange among a population of artificial chromosomes. As a first step, a penalty-based strategy is employed to transform the constrained design problem into an unconstrained one, which is appropriate for GA application. A numerical example is then used to demonstrate the strength and capability of the GA in this problem domain. It is shown that near-optimal solutions are obtained with extremely rapid convergence after exploring only a minute portion of the search space. The method can be extended to even more complex optimization problems in other domains.
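
A minimal sketch of the penalty-based transformation described above, assuming a hypothetical cost function, constraint check and candidate values (the paper's structural design checks are not reproduced); the GA then simply minimizes the penalized fitness over index chromosomes.

    import random

    THICKNESS = [200, 250, 300, 350, 400]   # slab thickness, mm (assumed candidates)
    DIAMETER  = [10, 12, 16, 20, 25]        # reinforcement diameter, mm
    SPACING   = [100, 125, 150, 200, 250]   # reinforcement spacing, mm

    def cost(t, d, s):
        # Hypothetical cost: thicker slabs and denser, larger bars cost more
        return t + 50.0 * d ** 2 / s

    def constraint_violation(t, d, s):
        # Hypothetical strength/serviceability check returning a non-negative violation
        capacity = t * d / s
        return max(0.0, 25.0 - capacity)

    def penalised_fitness(chrom, penalty=1e3):
        t, d, s = THICKNESS[chrom[0]], DIAMETER[chrom[1]], SPACING[chrom[2]]
        # Constrained problem -> unconstrained: add a large penalty per unit violation
        return cost(t, d, s) + penalty * constraint_violation(t, d, s)

    # A GA evolves index chromosomes such as these; here we only rank a random population
    population = [[random.randrange(5) for _ in range(3)] for _ in range(20)]
    best = min(population, key=penalised_fitness)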

Relevance: 20.00%

Abstract:

This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that we can solve the model either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these models are based is solved recursively, and it is necessary to simplify some aspects of it to make inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths, among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path, and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy sector and CO2 price behavior are similar in both versions (in the recursive version of the model we force the inter-temporal theoretical efficiency result that abatement through time should be allocated such that the CO2 price rises at the interest rate). The main difference that arises is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required for solving the model as an optimization problem, such as dropping the full vintaging of the capital stock and fewer explicit technological options, likely have effects on the results. Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy, where agents face high levels of uncertainty that likely lead to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented. (C) 2009 Elsevier B.V. All rights reserved.
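
The intertemporal efficiency condition mentioned above (the CO2 price rising at the interest rate) amounts to p_t = p_0 * (1 + r)^t. The short sketch below evaluates such a price path for an assumed starting price and interest rate, which are not values from the paper.

    # p_0 = 25 $/tCO2 and r = 4 %/yr are illustrative assumptions only
    p0, r = 25.0, 0.04
    co2_price_path = [round(p0 * (1.0 + r) ** t, 2) for t in range(0, 55, 5)]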

Relevance: 20.00%

Abstract:

Modeling volatile organic compound (VOC) adsorption onto cup-stacked carbon nanotubes (CSCNT) using the linear driving force model. Volatile organic compounds (VOCs) are an important category of air pollutants, and adsorption has been employed in the treatment (or simply the concentration) of these compounds. The current study used an ordinary analytical methodology to evaluate the properties of a cup-stacked carbon nanotube (CSCNT), a stacking morphology of truncated conical graphene with large amounts of open edges on the outer surface and empty central channels. This work used a Carbotrap bearing a cup-stacked structure (composite); for comparison, Carbotrap was used as a reference (without the nanotube). The retention and saturation capacities of both adsorbents at each concentration used (1, 5, 20 and 35 ppm of toluene and phenol) were evaluated. The composite performance was greater than that of Carbotrap; the saturation capacity of the composite was 67% higher than that of Carbotrap (average values). The Langmuir isotherm model was used to fit equilibrium data for both adsorbents, and a linear driving force (LDF) model was used to quantify intraparticle adsorption kinetics. The LDF model was suitable for describing the curves.
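
A minimal sketch of the two models named above, the Langmuir isotherm and the linear driving force (LDF) approximation, with assumed parameter values rather than the fitted ones reported in the study.

    def langmuir(C, q_max, K):
        """Equilibrium loading q* at gas-phase concentration C."""
        return q_max * K * C / (1.0 + K * C)

    def ldf_uptake(C, q_max=100.0, K=0.2, k_ldf=0.05, dt=1.0, n_steps=500):
        """Uptake curve q(t) from the LDF approximation dq/dt = k_ldf * (q* - q)."""
        q_eq, q, history = langmuir(C, q_max, K), 0.0, []
        for _ in range(n_steps):
            q += dt * k_ldf * (q_eq - q)   # forward-Euler linear-driving-force step
            history.append(q)
        return history

    uptake_20ppm = ldf_uptake(C=20.0)   # e.g. the 20 ppm experiments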

Relevance: 20.00%

Abstract:

Aims: It has long been demonstrated that epidermal growth factor (EGF) has catabolic effects on bone. Thus, we examined the role of EGF in regulating mechanically induced bone modeling in a rat model of orthodontic tooth movement. Main methods: The maxillary first molars of rats were moved mesially using an orthodontic appliance attached to the maxillary incisor teeth. Rats were randomly divided into 4 groups: (G1) administration of PBS (phosphate-buffered saline) solution (n = 24); (G2) administration of empty liposomes (n = 24); (G3) administration of 20 ng of EGF solution (n = 24); and (G4) 20 ng of EGF-liposome solution (n = 24). Each solution was injected into the mucosa of the left first molar adjacent to the appliance. At days 5, 10, 14 and 21 after drug administration, 6 animals of each group were sacrificed. Histomorphometric analysis was used to quantify osteoclasts (tartrate-resistant acid phosphatase (TRAP)-positive cells) and tooth movement. Using an immunohistochemistry assay, we evaluated RANKL (receptor activator of nuclear factor kappa B ligand) and epidermal growth factor receptor (EGFR) expression. Key findings: EGF-liposome administration resulted in increased tooth movement and osteoclast numbers compared to controls (p < 0.05). This was correlated with intense RANKL expression. Both osteoblasts and osteoclasts expressed EGFR. Significance: Local delivery of EGF-liposomes stimulates osteoclastogenesis and tooth movement. (C) 2009 Elsevier Inc. All rights reserved.

Relevance: 20.00%

Abstract:

A Cellular-Automaton Finite-Volume-Method (CAFVM) algorithm has been developed, coupling a macroscopic model for heat-transfer calculation with microscopic models for nucleation and growth. The solution equations have been solved to determine the time-dependent constitutional undercooling and interface retardation during solidification. The constitutional undercooling is then coupled into the CAFVM algorithm to investigate both the effects of thermal and constitutional undercooling on columnar growth and crystal selection in the columnar zone, and the formation of equiaxed crystals in the bulk liquid. The model can not only simulate the microstructures of alloys but also investigate nucleation mechanisms and growth kinetics of alloys solidified with various solute concentrations and solidification morphologies.
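
The sketch below is not the authors' CAFVM, but a minimal cellular-automaton capture step of the kind such coupled models build on: liquid cells next to a solid neighbour are captured with a probability that grows with the local undercooling (the linear velocity law and its coefficient are assumptions).

    import numpy as np

    def ca_growth_step(solid, undercooling, rng, v_coeff=0.1):
        """solid: 2-D bool array; undercooling: same-shape array of local undercooling (K)."""
        new_solid = solid.copy()
        ny, nx = solid.shape
        for j in range(ny):
            for i in range(nx):
                if solid[j, i]:
                    continue
                neighbours = [solid[(j - 1) % ny, i], solid[(j + 1) % ny, i],
                              solid[j, (i - 1) % nx], solid[j, (i + 1) % nx]]
                if any(neighbours):
                    capture_prob = min(1.0, v_coeff * max(undercooling[j, i], 0.0))
                    if rng.random() < capture_prob:
                        new_solid[j, i] = True   # cell joins the growing grain
        return new_solid

    # Usage: seed a nucleus, then iterate ca_growth_step with the evolving undercooling field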

Relevance: 20.00%

Abstract:

This paper proposes the use of the q-Gaussian mutation with self-adaptation of the shape of the mutation distribution in evolutionary algorithms. The shape of the q-Gaussian mutation distribution is controlled by a real parameter q. In the proposed method, the real parameter q of the q-Gaussian mutation is encoded in the chromosome of individuals and hence is allowed to evolve during the evolutionary process. In order to test the new mutation operator, evolution strategy and evolutionary programming algorithms with self-adapted q-Gaussian mutation generated from anisotropic and isotropic distributions are presented. The theoretical analysis of the q-Gaussian mutation is also provided. In the experimental study, the q-Gaussian mutation is compared to Gaussian and Cauchy mutations in the optimization of a set of test functions. Experimental results show the efficiency of the proposed method of self-adapting the mutation distribution in evolutionary algorithms.
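
A compact sketch of the self-adaptation mechanism described above: the shape parameter q is carried with the individual, perturbed first, and then used to draw the mutation. The q-Gaussian deviate is generated here with the generalized Box-Muller method; the step size, learning rate and bounds on q are illustrative assumptions rather than the paper's settings.

    import math, random

    def q_log(x, q):
        """q-logarithm; reduces to log(x) as q -> 1."""
        if abs(q - 1.0) < 1e-9:
            return math.log(x)
        return (x ** (1.0 - q) - 1.0) / (1.0 - q)

    def q_gaussian(q):
        """One q-Gaussian deviate via the generalized Box-Muller method (q < 3)."""
        q_prime = (1.0 + q) / (3.0 - q)
        u1, u2 = 1.0 - random.random(), random.random()   # u1 in (0, 1]
        return math.sqrt(max(-2.0 * q_log(u1, q_prime), 0.0)) * math.cos(2.0 * math.pi * u2)

    def mutate(x, q, sigma=0.1, tau=0.3, q_min=0.5, q_max=2.5):
        """Self-adaptation: mutate the shape parameter q first, then use it for the draw."""
        q_new = min(q_max, max(q_min, q + tau * random.gauss(0.0, 1.0)))
        x_new = [xi + sigma * q_gaussian(q_new) for xi in x]   # isotropic-style mutation
        return x_new, q_new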

Relevance: 20.00%

Abstract:

The purpose of this study was to investigate the effects of a specific cognitive race plan on 100 m sprint performance. Twelve elite sprinters (11 male and 1 female) performed 100 m time trials under normal (control) conditions and then under experimental conditions (use of race cues). In the experimental condition, participants were asked to think about specific thought content in each of three segments of the 100 m. A multiple baseline design was employed. A mean improvement of 0.26 s was found. Eleven of the 12 participants showed improvement using the specific cognitive race plan (p < .005). Participants also produced more consistent sprint performances when using the cues (p < .01). Subjective evaluations made by the participants unanimously supported the use of the race plan for optimizing sprint performance. Environmental conditions, effort, and practice effects were considered as possible influences on the results.

Relevance: 20.00%

Abstract:

A robust semi-implicit central partial difference algorithm for the numerical solution of coupled stochastic parabolic partial differential equations (PDEs) is described. This can be used for calculating correlation functions of systems of interacting stochastic fields. Such field equations can arise in the description of Hamiltonian and open systems in the physics of nonlinear processes, and may include multiplicative noise sources. The algorithm can be used for studying the properties of nonlinear quantum or classical field theories. The general approach is outlined and applied to a specific example, namely the quantum statistical fluctuations of ultra-short optical pulses in chi(2) parametric waveguides. This example uses a non-diagonal coherent state representation, and correctly predicts the sub-shot-noise level spectral fluctuations observed in homodyne detection measurements. It is expected that the methods used will be applicable to higher-order correlation functions and other physical problems as well. A stochastic differencing technique for reducing sampling errors is also introduced. This involves solving nonlinear stochastic parabolic PDEs in combination with a reference process, which uses the Wigner representation in the example presented here. A computer implementation on MIMD parallel architectures is discussed. (C) 1997 Academic Press.
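
A minimal sketch of a semi-implicit (midpoint) central-difference step for a single real stochastic parabolic PDE on a periodic grid; the coupled phase-space equations and the chi(2) waveguide application of the paper are far richer, and every parameter below is an illustrative assumption.

    import numpy as np

    def semi_implicit_step(u, dt, dx, D, f, g, rng, n_iter=4):
        """One semi-implicit step of du/dt = D u_xx + f(u) + g(u) xi(x, t)."""
        noise = rng.standard_normal(u.size) * np.sqrt(dt / dx)   # discretised white noise
        u_mid = u.copy()
        for _ in range(n_iter):                                   # fixed-point midpoint iteration
            lap = (np.roll(u_mid, 1) - 2.0 * u_mid + np.roll(u_mid, -1)) / dx ** 2
            u_mid = u + 0.5 * (dt * (D * lap + f(u_mid)) + g(u_mid) * noise)
        return 2.0 * u_mid - u                                    # advance by the full step

    # Example use (illustrative): a noisy reaction-diffusion field on a ring
    rng = np.random.default_rng(1)
    u = np.zeros(128)
    for _ in range(100):
        u = semi_implicit_step(u, dt=1e-3, dx=0.1, D=0.5,
                               f=lambda v: v - v ** 3, g=lambda v: 0.1, rng=rng)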

Relevance: 20.00%

Abstract:

Acceptance-probability-controlled simulated annealing with an adaptive move generation procedure, an optimization technique derived from the simulated annealing algorithm, is presented. The adaptive move generation procedure was compared against the random move generation procedure on seven multiminima test functions, as well as on synthetic data resembling the optical constants of a metal. In all cases the algorithm proved to have faster convergence and superior escape from local minima. This algorithm was then applied to fit the model dielectric function to data for platinum and aluminum.
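
A sketch of the two ideas named above under stated assumptions: the temperature of each block is chosen so that an uphill move of average size is accepted with a prescribed target probability (lowered geometrically), and the move amplitude adapts to the observed acceptance ratio. The schedules and constants are illustrative, not those of the paper.

    import math, random

    def anneal(cost, x0, step0=1.0, p0=0.9, p_decay=0.97, n_outer=100, n_inner=100):
        x, fx = list(x0), cost(x0)
        best, fbest = list(x), fx
        step, p_target, mean_uphill = step0, p0, 1.0
        for _ in range(n_outer):
            # Acceptance-probability control: pick T so an average uphill move is
            # accepted with probability p_target
            temperature = mean_uphill / max(-math.log(p_target), 1e-12)
            accepted, uphill = 0, []
            for _ in range(n_inner):
                cand = [xi + step * random.uniform(-1.0, 1.0) for xi in x]
                fc = cost(cand)
                delta = fc - fx
                if delta > 0:
                    uphill.append(delta)
                if delta <= 0 or random.random() < math.exp(-delta / temperature):
                    x, fx = cand, fc
                    accepted += 1
                    if fx < fbest:
                        best, fbest = list(x), fx
            if uphill:
                mean_uphill = sum(uphill) / len(uphill)
            ratio = accepted / n_inner
            step *= 1.5 if ratio > 0.6 else (0.5 if ratio < 0.3 else 1.0)  # adaptive moves
            p_target *= p_decay
        return best, fbest

    # Usage: best, fbest = anneal(lambda v: sum(vi * vi for vi in v), [3.0, -2.0])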

Relevance: 20.00%

Abstract:

Objective: In this study we assessed how often patients manifesting a myocardial infarction (MI) would not be considered candidates for intensive lipid-lowering therapy based on the current guidelines. Methods: In 355 consecutive patients manifesting ST-elevation MI (STEMI), admission plasma C-reactive protein (CRP) was measured, and the Framingham risk score (FRS), PROCAM risk score, Reynolds risk score, ASSIGN risk score, QRISK, and SCORE algorithms were applied. Cardiac computed tomography and carotid ultrasound were performed to assess the coronary artery calcium score (CAC), carotid intima-media thickness (cIMT) and the presence of carotid plaques. Results: Less than 50% of STEMI patients would be identified as having high risk before the event by any of these algorithms. With the exception of FRS (9%), all other algorithms would assign low risk to about half of the enrolled patients. Plasma CRP was <1.0 mg/L in 70% and >2 mg/L in 14% of the patients. The average cIMT was 0.8 +/- 0.2 mm and was >= 1.0 mm in only 24% of patients. Carotid plaques were found in 74% of patients. CAC > 100 was found in 66% of patients. Adding CAC > 100 plus the presence of carotid plaque, a high-risk condition would be identified in 100% of the patients using any of the above-mentioned algorithms. Conclusion: More than half of patients manifesting STEMI would not be considered candidates for intensive preventive therapy by the current clinical algorithms. The addition of anatomical parameters such as CAC and the presence of carotid plaques can substantially reduce the CVD risk underestimation. (C) 2010 Elsevier Ireland Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

The absorption kinetics of solutes given with subcutaneous administration of fluids is ill-defined. The gamma emitter technetium pertechnetate enabled the absorption rate to be estimated independently using two approaches. In the first approach, the counts remaining at the site were estimated by imaging above the subcutaneous administration site, whereas in the second approach, the plasma technetium concentration-time profiles were monitored for up to 8 hr after technetium administration. Boluses of technetium pertechnetate were given both intravenously and subcutaneously on separate occasions with a multiple dosing regimen using three doses on each occasion. The disposition of technetium after iv administration was best described by biexponential kinetics with a V-ss of 0.30 +/- 0.11 L/kg and a clearance of 30.0 +/- 13.1 ml/min. The subcutaneous absorption kinetics was best described as a single exponential process with a half-life of 18.16 +/- 3.97 min by image analysis and a half-life of 11.58 +/- 2.48 min using plasma technetium-time data. The bioavailability of technetium by the subcutaneous route was estimated to be 0.96 +/- 0.12. The absorption half-life showed no consistent change with the duration of the subcutaneous infusion. The amount remaining at the absorption site with time was similar when analyzed using image analysis and using plasma concentrations assuming multiexponential disposition kinetics and a first-order absorption process. Profiles of the fraction remaining at the absorption site generated by deconvolution analysis, image analysis, and assumption of a constant first-order absorption process were similar. Slowing of absorption from the subcutaneous administration site was apparent after the last bolus dose in three of the subjects and can be associated with the stopping of the infusion. In a fourth subject, the retention of technetium at the subcutaneous site is more consistent with accumulation of technetium near the absorption site as a result of systemic recirculation.
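
As a worked check of the first-order absorption parameters quoted above, the rate constant follows from k = ln 2 / t_half and the fraction remaining at the site from exp(-k t):

    import math

    t_half_image, t_half_plasma = 18.16, 11.58          # min, values reported above
    k_image  = math.log(2) / t_half_image               # ~0.038 per min
    k_plasma = math.log(2) / t_half_plasma               # ~0.060 per min
    fraction_left_30min = math.exp(-k_image * 30.0)      # ~0.32 remaining by image analysis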

Relevance: 20.00%

Abstract:

The concept of parameter-space size adjustment is proposed in order to enable the successful application of genetic algorithms to continuous optimization problems. The performance of genetic algorithms with six different combinations of selection and reproduction mechanisms, with and without parameter-space size adjustment, was rigorously tested on eleven multiminima test functions. The algorithm with the best performance was employed for the determination of the model parameters of the optical constants of Pt, Ni and Cr.
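
The abstract does not spell out the adjustment rule, so the sketch below only illustrates the general idea of parameter-space size adjustment under assumed settings: after some generations the encoded range of each parameter is re-centred on the best solution found so far and contracted, so that a fixed-length encoding resolves ever finer detail.

    def adjust_bounds(bounds, best, shrink=0.7):
        """Re-centre each parameter interval on the best solution and contract it."""
        new_bounds = []
        for (lo, hi), b in zip(bounds, best):
            half = 0.5 * shrink * (hi - lo)
            new_bounds.append((max(lo, b - half), min(hi, b + half)))
        return new_bounds

    # Usage: every k generations, bounds = adjust_bounds(bounds, best_individual)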

Relevance: 20.00%

Abstract:

In this paper we present a new neuroeconomics model of decision-making applied to Attention-Deficit/Hyperactivity Disorder (ADHD). The model is based on the hypothesis that decision-making depends on the evaluation of expected rewards and risks, assessed simultaneously in two decision spaces: the personal (PDS) and the interpersonal emotional (IDS) decision spaces. Motivation to act is triggered by necessities identified in the PDS or IDS. The adequacy of an action in fulfilling a given necessity is assumed to depend on the expected reward and risk evaluated in the decision spaces. Conflict generated by expected reward and risk influences the easiness (cognitive effort) and the future perspective of the decision-making. Finally, the willingness (not) to act is proposed to be a function of the expected reward (or risk), adequacy, easiness and future perspective. The two most frequent clinical forms are ADHD hyperactive (AD/HDhyp) and ADHD inattentive (AD/HDin). AD/HDhyp behavior is hypothesized to be a consequence of experiencing high rewarding expectancies for short periods of time, low risk evaluation, and a short future perspective for decision-making. AD/HDin is hypothesized to be a consequence of experiencing high rewarding expectancies for long periods of time, low risk evaluation, and a long future perspective for decision-making.