915 results for deterministic fractals
Abstract:
This paper presents results of research on multicriteria decision making under information uncertainty. The Bellman-Zadeh approach to decision making in a fuzzy environment is used to analyze multicriteria optimization models (< X, M > models) under deterministic information. Its application conforms to the principle of the guaranteed result and provides constructive lines for obtaining harmonious solutions through the analysis of associated maxmin problems. This permits one to generalize the classic approach to handling uncertainty of quantitative information in monocriteria decision making (based on constructing and analyzing payoff matrices that reflect the effects obtainable for different combinations of solution alternatives and the so-called states of nature) to multicriteria problems. Because the uncertainty of information can produce considerable decision uncertainty regions, the resolving capacity of this generalization does not always yield unique solutions. Taking this into account, the proposed general scheme of multicriteria decision making under information uncertainty also includes the construction and analysis of so-called < X, R > models (which contain fuzzy preference relations as criteria of optimality) as a means for the subsequent contraction of the decision uncertainty regions. The results are of a universal character and are illustrated by a simple example. (c) 2007 Elsevier Inc. All rights reserved.
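As a concrete and deliberately tiny illustration of the max-min aggregation underlying the Bellman-Zadeh approach, the sketch below scores each alternative by the minimum of its criterion memberships and picks the alternative that maximizes that minimum. All membership values are hypothetical, not data from the paper.

```python
# Hedged sketch of the Bellman-Zadeh max-min rule; the membership
# degrees below are made-up illustrations.

def bellman_zadeh(memberships):
    """memberships[i][j]: membership of alternative i under criterion j."""
    scores = [min(mu) for mu in memberships]                 # fuzzy intersection of criteria
    best = max(range(len(scores)), key=lambda i: scores[i])  # guaranteed-result choice
    return best, scores

# three alternatives, two criteria (hypothetical degrees in [0, 1])
best, scores = bellman_zadeh([[0.9, 0.4], [0.7, 0.6], [0.5, 0.8]])
```

Alternative 1 wins here because its worst criterion value (0.6) beats the worst criterion values of the others, which is exactly the guaranteed-result flavor the abstract refers to.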
Abstract:
A methodology for rock-excavation structural-reliability analysis that uses Distinct Element Method numerical models is presented. The methodology addresses the shortcoming of conventional numerical models, which supply only point estimates and use fixed input parameters without considering their statistical errors. The analysis of rock-excavation stability must consider uncertainties arising from geological variability, from the choice of mechanical-behaviour hypothesis, and from the parameters adopted in numerical model construction. These uncertainties can be analyzed with simple deterministic models, but a new methodology was developed for numerical models whose results are of several natures. The methodology is based on Monte Carlo simulations and uses principles of Paraconsistent Logic. It is presented through the analysis of a final slope of a large surface mine.
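The Monte Carlo core of such a reliability analysis can be sketched with a toy limit state (normally distributed strength and load, both with assumed parameters); the paper's Distinct Element Method models and Paraconsistent Logic treatment are far beyond this illustration.

```python
import random

# Toy Monte Carlo reliability estimate: probability that a random load
# exceeds a random strength.  Distributions and parameters are assumed
# for illustration only.

def failure_probability(n_samples, seed=0):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        strength = rng.gauss(10.0, 1.0)   # hypothetical resistance
        load = rng.gauss(7.0, 1.5)        # hypothetical demand
        if load >= strength:
            failures += 1
    return failures / n_samples

pf = failure_probability(20000)
```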
Abstract:
This work deals with the problem of minimizing the waste of space that occurs in the rotational placement of a set of irregular two-dimensional polygons inside a two-dimensional container. The problem is approached with a heuristic based on simulated annealing. Traditional "external penalization" techniques are avoided through the application of the no-fit polygon, which determines the collision-free area for each polygon before its placement. The simulated annealing controls the rotation applied, the placement, and the sequence of placement of the polygons. For each unplaced polygon, a limited-depth binary search is performed to find a scale factor that, when applied to the polygon, would allow it to fit in the container. A crystallization heuristic is proposed in order to increase the number of accepted solutions. The bottom-left and larger-first deterministic heuristics were also studied. The proposed process is suited to non-convex polygons and containers, and the containers can have holes inside. (C) 2009 Elsevier Ltd. All rights reserved.
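A minimal simulated-annealing loop, shown here on a one-dimensional stand-in objective (the actual waste-of-space cost and the no-fit-polygon collision test are omitted), makes the accept/reject mechanics concrete:

```python
import math
import random

# Simulated annealing on a toy objective; worse moves are accepted with
# probability exp(-delta / t), and the temperature t decays geometrically.

def anneal(cost, x0, t0=1.0, cooling=0.95, steps=500, seed=1):
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)          # local perturbation
        delta = cost(cand) - cost(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand                               # accept (always if better)
            if cost(x) < cost(best):
                best = x
        t *= cooling                               # cool down
    return best

best = anneal(lambda x: (x - 2.0) ** 2, x0=10.0)   # minimum is at x = 2
```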
Abstract:
Mechanical blocking of the columnar front during the columnar to equiaxed transition (CET) is studied by quantitatively comparing the CET positions obtained with one stochastic model and two deterministic models for the unidirectional solidification of an Al-7 (wt pct) Si alloy. One of the deterministic models is based on the solutal blocking of the columnar front, whereas the other model is based on the mechanical blocking. The solutal-blocking model and the mechanical-blocking model with the traditional blocking fraction of 0.49 give columnar zones larger than those predicted with the stochastic model. When a blocking fraction of 0.2 is adopted, however, the agreement is very good for a range of nucleation undercoolings and number density of equiaxed grains. Therefore, changing the mechanical-blocking fraction in deterministic models from 0.49 to 0.2 seems to model more accurately the mechanical-blocking process that can lead to the CET.
Abstract:
In this work, a broad analysis of local-search multiuser detection (LS-MUD) for direct-sequence code-division multiple-access (DS/CDMA) systems under multipath channels is carried out, considering the performance-complexity trade-off. The robustness of the LS-MUD to variations in loading, E(b)/N(0), the near-far effect, the number of fingers of the Rake receiver, and errors in the channel-coefficient estimates is verified. A comparative analysis of the bit-error-rate (BER) and complexity trade-off is carried out among LS, the genetic algorithm (GA), and particle swarm optimization (PSO). Based on the deterministic behavior of the LS algorithm, simplifications of the cost-function calculation are also proposed, yielding more efficient algorithms (simplified and combined LS-MUD versions) and creating new perspectives for MUD implementation. The computational complexity is expressed in terms of the number of operations required to converge. Our conclusions point out that the simplified LS (s-LS) method is always more efficient, independent of the system conditions, achieving better performance with lower complexity than the other heuristic detectors. In addition, the deterministic strategy and the absence of input parameters make the s-LS algorithm the most appropriate for the MUD problem. (C) 2008 Elsevier GmbH. All rights reserved.
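The local-search idea can be sketched as a greedy bit-flip loop over BPSK symbol vectors; the toy quadratic cost below is a stand-in for the DS/CDMA detection cost the paper actually optimizes.

```python
# Greedy 1-flip local search over BPSK symbols: repeatedly flip the first
# symbol whose flip lowers the cost, until no single flip helps.

def local_search(cost, s):
    s = list(s)
    improved = True
    while improved:
        improved = False
        for i in range(len(s)):
            cand = list(s)
            cand[i] = -cand[i]           # flip one BPSK symbol (+1 <-> -1)
            if cost(cand) < cost(s):
                s, improved = cand, True
    return s

# toy objective: distance to a known target vector (illustration only)
target = [1, -1, 1, -1]
cost = lambda s: sum((si - ti) ** 2 for si, ti in zip(s, target))
best = local_search(cost, [-1, -1, -1, -1])
```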
Abstract:
The most popular algorithms for blind equalization are the constant-modulus algorithm (CMA) and the Shalvi-Weinstein algorithm (SWA). It is well-known that SWA presents a higher convergence rate than CMA, at the expense of higher computational complexity. If the forgetting factor is not sufficiently close to one, if the initialization is distant from the optimal solution, or if the signal-to-noise ratio is low, SWA can converge to undesirable local minima or even diverge. In this paper, we show that divergence can be caused by an inconsistency in the nonlinear estimate of the transmitted signal, or (when the algorithm is implemented in finite precision) by the loss of positiveness of the estimate of the autocorrelation matrix, or by a combination of both. In order to avoid the first cause of divergence, we propose a dual-mode SWA. In the first mode of operation, the new algorithm works as SWA; in the second mode, it rejects inconsistent estimates of the transmitted signal. Assuming the persistence of excitation condition, we present a deterministic stability analysis of the new algorithm. To avoid the second cause of divergence, we propose a dual-mode lattice SWA, which is stable even in finite-precision arithmetic, and has a computational complexity that increases linearly with the number of adjustable equalizer coefficients. The good performance of the proposed algorithms is confirmed through numerical simulations.
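For readers unfamiliar with CMA, here is a minimal single-tap sketch of its stochastic-gradient update for real-valued signals over an identity channel; the SWA machinery and the proposed dual-mode safeguards are not reproduced here.

```python
# One CMA tap update: drive |y|^2 toward the dispersion constant r2.

def cma_step(w, x, mu, r2):
    """w: equalizer taps, x: regressor (same length), mu: step size."""
    y = sum(wi * xi for wi, xi in zip(w, x))   # equalizer output
    e = (y * y - r2) * y                       # constant-modulus error term
    return [wi - mu * e * xi for wi, xi in zip(w, x)], y

# toy run: single tap, identity channel, BPSK (r2 = 1); |w| should approach 1
w = [0.5]
for _ in range(200):
    w, y = cma_step(w, [1.0], mu=0.1, r2=1.0)
```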
Abstract:
We derive the Cramer-Rao Lower Bound (CRLB) for the estimation of initial conditions of noise-embedded orbits produced by general one-dimensional maps. We relate this bound's asymptotic behavior to the attractor's Lyapunov number and show numerical examples. These results pave the way for more suitable choices for the chaotic signal generator in some chaotic digital communication systems. (c) 2006 Published by Elsevier Ltd.
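As a companion to the Lyapunov connection, the sketch below estimates the Lyapunov exponent of the fully chaotic logistic map x -> 4x(1-x), whose exact value is ln 2 (the Lyapunov number is the exponential of this); the choice of map is illustrative and not taken from the paper.

```python
import math

# Time-average of log|f'(x)| along an orbit of the logistic map at r = 4;
# the exact Lyapunov exponent there is ln 2 ~= 0.6931.

def lyapunov_exponent(x0=0.3, n=50000, burn=100):
    x = x0
    for _ in range(burn):                      # discard transient
        x = 4.0 * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(4.0 - 8.0 * x))    # |f'(x)| with f(x) = 4x(1-x)
        x = 4.0 * x * (1.0 - x)
    return acc / n

lam = lyapunov_exponent()
```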
Abstract:
A deterministic mathematical model for steady-state unidirectional solidification is proposed to predict the columnar-to-equiaxed transition. In the model, which is an extension of the classic model proposed by Hunt [Hunt JD. Mater Sci Eng 1984;65:75], equiaxed grains nucleate according to either a normal or a log-normal distribution of nucleation undercoolings. Growth maps are constructed, indicating either columnar or equiaxed solidification as a function of the velocity of the isotherms and the temperature gradient. The fields of columnar and equiaxed growth change significantly with the spread of the nucleation undercooling distribution. Increasing the spread favors columnar solidification if the dimensionless velocity of the isotherms is larger than 1. For a velocity less than 1, however, equiaxed solidification is initially favored, but columnar solidification is enhanced for a larger increase in the spread. This behavior was confirmed by a stochastic model, which showed that an increase in the distribution spread could change the grain structure from completely columnar to 50% columnar grains. (c) 2008 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Abstract:
The economic occupation of a 500 ha area in Piracicaba was studied for the irrigated crops of maize, tomato, sugarcane, and beans, using deterministic linear-programming models and linear programming with risk included through the Target-MOTAD model, in which two situations were analyzed. In the deterministic model the area was the restrictive factor, and water was not restrictive in any of the tested situations. In the first situation the maximum income obtained was R$ 1,883,372.87, and in the second situation it was R$ 1,821,772.40. In the model including risk, a risk-accepting producer can obtain in the first situation the maximum income of R$ 1,883,372.87 with a minimum risk of R$ 350 year(-1), and in the second situation R$ 1,821,772.40 with a minimum risk of R$ 40 year(-1). A risk-averse producer can obtain in the first situation a maximum income of R$ 1,775,974.81 with null risk, and in the second situation R$ 1,707,706.26 with null risk, both without water restriction. These results highlight the importance of including risk when offering alternative occupations to the producer, allowing decision making that considers both risk aversion and the desired income.
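A deterministic land-allocation LP of the kind described can be sketched with two hypothetical crops and vertex enumeration; a real study would use an LP solver and the actual income and resource coefficients.

```python
# Maximize income = 3a + 2b (hypothetical income per ha) subject to
# a + b <= 500 ha of land and a <= 300 ha (an assumed extra constraint).
# With two variables, checking the feasible region's corner points suffices,
# since an LP optimum always lies at a vertex.

def best_allocation():
    vertices = [(0, 0), (300, 0), (0, 500), (300, 200)]   # feasible corners
    income = lambda a, b: 3 * a + 2 * b
    return max(vertices, key=lambda v: income(*v))

a, b = best_allocation()
```

The optimum (300, 200) uses all 500 ha, which loosely mirrors the abstract's observation that land, not water, was the binding constraint in the deterministic model.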
Abstract:
Successful fertilization in free-spawning marine organisms depends on the interactions between genes expressed on the surfaces of eggs and sperm. Positive selection frequently characterizes the molecular evolution of such genes, raising the possibility that some common deterministic process drives the evolution of gamete recognition genes and may even be important for understanding the evolution of prezygotic isolation and speciation in the marine realm. One hypothesis is that gamete recognition genes are subject to selection for prezygotic isolation, namely reinforcement. In a previous study, positive selection on the gene coding for the acrosomal sperm protein M7 lysin was demonstrated among allopatric populations of mussels in the Mytilus edulis species group (M. edulis, M. galloprovincialis, and M. trossulus). Here, we expand sampling to include M7 lysin haplotypes from populations where mussel species are sympatric and hybridize to determine whether there is a pattern of reproductive character displacement, which would be consistent with reinforcement driving selection on this gene. We do not detect a strong pattern of reproductive character displacement; there are no unique haplotypes in sympatry nor is there consistently greater population structure in comparisons involving sympatric populations. One distinct group of haplotypes, however, is strongly affected by natural selection and this group of haplotypes is found within M. galloprovincialis populations throughout the Northern Hemisphere concurrent with haplotypes common to M. galloprovincialis and M. edulis. We suggest that balancing selection, perhaps resulting from sexual conflicts between sperm and eggs, maintains old allelic diversity within M. galloprovincialis.
Abstract:
Cytoplasmic incompatibility is known to occur between strains of both Drosophila simulans and D. melanogaster. Incompatibility is associated with the infection of Drosophila with microorganismal endosymbionts. This paper reports survey work conducted on strains of D. simulans and D. melanogaster from diverse geographical locations, finding that infected populations are relatively rare and scattered in their distribution. The distribution of infected populations of D. simulans appears to be at odds with deterministic models predicting the rapid spread of the infection through uninfected populations. Examination of isofemale lines from four localities in California where populations appear to be polymorphic for the infection failed to find evidence for consistent assortative mating preferences between infected and uninfected populations that might explain the basis of the observed polymorphism.
Abstract:
Ex vivo hematopoiesis is increasingly used for clinical applications. Models of ex vivo hematopoiesis are required to better understand the complex dynamics and to optimize hematopoietic culture processes. A general mathematical modeling framework is developed which uses traditional chemical engineering metaphors to describe the complex hematopoietic dynamics. Tanks and tubular reactors are used to describe the (pseudo-) stochastic and deterministic elements of hematopoiesis, respectively. Cells at any point in the differentiation process can belong to either an immobilized, inert phase (quiescent cells) or a mobile, active phase (cycling cells). The model describes five processes: (1) flow (differentiation), (2) autocatalytic formation (growth), (3) degradation (death), (4) phase transition from immobilized to mobile phase (quiescent to cycling transition), and (5) phase transition from mobile to immobilized phase (cycling to quiescent transition). The modeling framework is illustrated with an example concerning the effect of TGF-beta 1 on erythropoiesis. (C) 1998 Published by Elsevier Science Ltd. All rights reserved.
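In its simplest deterministic form, the tank/tubular-reactor metaphor reduces to a two-compartment ODE for quiescent (Q) and cycling (C) cells; every rate constant below is an assumption for illustration, not a fitted value from the paper.

```python
# Explicit-Euler integration of a minimal quiescent/cycling model:
#   Q' = k_cq*C - k_qc*Q                      (phase transitions)
#   C' = k_qc*Q - k_cq*C + (growth - death)*C (transitions + net growth)

def simulate(q0=100.0, c0=10.0, k_qc=0.05, k_cq=0.02,
             growth=0.10, death=0.03, dt=0.01, steps=10000):
    q, c = q0, c0
    for _ in range(steps):
        dq = k_cq * c - k_qc * q
        dc = k_qc * q - k_cq * c + (growth - death) * c
        q, c = q + dt * dq, c + dt * dc
    return q, c

q, c = simulate()   # with net growth > 0, the cycling pool expands
```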
Abstract:
Realistic time frames in which management decisions are made often preclude the completion of the detailed analyses necessary for conservation planning. Under these circumstances, efficient alternatives may assist in approximating the results of more thorough studies that require extensive resources and time. We outline a set of concepts and formulas that may be used in lieu of detailed population viability analyses and habitat modeling exercises to estimate the protected areas required to provide desirable conservation outcomes for a suite of threatened plant species. We used expert judgment of parameters and assessment of a population size that results in a specified quasiextinction risk based on simple dynamic models. The area required to support a population of this size is adjusted to take into account deterministic and stochastic human influences, including small-scale disturbance, deterministic trends such as habitat loss, and changes in population density through processes such as predation and competition. We set targets for different disturbance regimes and geographic regions. We applied our methods to Banksia cuneata, Boronia keysii, and Parsonsia dorrigoensis, resulting in target areas for conservation of 1102, 733, and 1084 ha, respectively. These results provide guidance on target areas and priorities for conservation strategies.
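The area-target arithmetic alluded to above can be sketched as a base area (population size over density) inflated by adjustment factors; the functional form and every number below are hypothetical stand-ins, not the authors' actual formulas or data.

```python
# Hypothetical version of the adjustment: base habitat area scaled up for
# stochastic disturbance and deterministic habitat loss.

def target_area(pop_size, density, disturbance_factor, loss_factor):
    base = pop_size / density                  # ha needed at the given density
    return base * disturbance_factor * loss_factor

area = target_area(pop_size=5000, density=10.0,
                   disturbance_factor=1.5, loss_factor=1.2)
```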
Abstract:
The majority of past and current individual-tree growth modelling methodologies have failed to characterise and incorporate structured stochastic components. Rather, they have relied on deterministic predictions or have added an unstructured random component to predictions. In particular, spatial stochastic structure has been neglected, despite being present in most applications of individual-tree growth models. Spatial stochastic structure (also called spatial dependence or spatial autocorrelation) eventuates when spatial influences such as competition and micro-site effects are not fully captured in models. Temporal stochastic structure (also called temporal dependence or temporal autocorrelation) eventuates when a sequence of measurements is taken on an individual-tree over time, and variables explaining temporal variation in these measurements are not included in the model. Nested stochastic structure eventuates when measurements are combined across sampling units and differences among the sampling units are not fully captured in the model. This review examines spatial, temporal, and nested stochastic structure and instances where each has been characterised in the forest biometry and statistical literature. Methodologies for incorporating stochastic structure in growth model estimation and prediction are described. Benefits from incorporation of stochastic structure include valid statistical inference, improved estimation efficiency, and more realistic and theoretically sound predictions. It is proposed in this review that individual-tree modelling methodologies need to characterise and include structured stochasticity. Possibilities for future research are discussed. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
The traditional theory of price index numbers is based on the law of one price. But in the real world, we frequently observe an equilibrium price dispersion instead of a single equilibrium price. This article discusses the effects of price dispersion on two price indexes: the cost-of-living index and the consumer price index. With price dispersion and consumers searching for the lowest price, these indexes cannot be interpreted as deterministic indicators but as stochastic indicators, and they can be biased if price dispersion is not taken into account. A measure of the bias of the consumer price index is proposed, and the article ends with an estimation of the bias based on data from the consumer price index calculated for the city of Sao Paulo, Brazil, from January 1988 through December 2004. The period analysed is very interesting because it exhibits different inflationary environments: high levels and high volatility of the rates of inflation with great price dispersion until July 1994, and low and relatively stable rates of inflation with prices less dispersed after August 1994.
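The dispersion bias can be illustrated in a few lines: an index built from average posted prices versus the price actually paid by a consumer who searches out the minimum. The prices below are hypothetical.

```python
# If shoppers buy at the lowest posted price, an average-price index
# overstates their cost for this good by (average - minimum).

posted = [10.0, 12.0, 9.0, 11.0]   # hypothetical posted prices for one good
average_price = sum(posted) / len(posted)
searched_price = min(posted)
bias = average_price - searched_price
```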