895 results for Markov chains. Convergence. Evolutionary Strategy. Large Deviations
Abstract:
The objective is to analyze the relationship between risk and the number of stocks in a portfolio for an individual investor when stocks are chosen by a "naive strategy". To this end, we carried out an experiment in which individuals selected stocks so as to reproduce this relationship. The 126 participants were informed that the risk of the first choice would be the average of the standard deviations of all portfolios consisting of a single asset, and that the same procedure would be applied to portfolios composed of two, three, and so on, up to 30 stocks. They selected the assets they wanted in their portfolios without the support of financial analysis. For comparison, we also ran a hypothetical simulation of 126 investors who selected shares from the same universe through a random number generator. Thus, each real participant is compared with a random hypothetical investor facing the same opportunity. Patterns were observed in the portfolios of individual participants, characterizing the curves for the components of the sample. Because these groupings are somewhat arbitrary, a more objective measure of behavior was used: a simple linear regression for each participant, predicting the variance of the portfolio as a function of the number of assets. In addition, we conducted a pooled regression on all observations in a cross-section analysis. The expected pattern occurs on average but not for most individuals, many of whom effectively "de-diversify" when adding seemingly random securities. Furthermore, the results are slightly worse when a random number generator is used. This finding challenges the belief that only a small number of securities is necessary for diversification and shows that it is applicable only to a large sample. The implications are important, since many individual investors hold few stocks in their portfolios.
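As an illustration of the procedure described above, the sketch below simulates hypothetical investors who pick stocks at random and fits a per-investor linear regression of portfolio risk on portfolio size. The synthetic return data, the numpy-based helpers and all parameter values are illustrative assumptions, not the study's actual data or protocol.

```python
# Minimal sketch, assuming synthetic return data: hypothetical investors pick
# stocks at random, portfolio standard deviation is recorded as a function of
# portfolio size, and a simple linear regression is fitted per investor.
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_periods, n_investors, max_size = 100, 60, 126, 30

# Synthetic monthly returns for a universe of stocks (stand-in for real data).
returns = rng.normal(0.01, 0.08, size=(n_periods, n_stocks))

slopes = []
for _ in range(n_investors):
    picks = rng.choice(n_stocks, size=max_size, replace=False)
    sizes = np.arange(1, max_size + 1)
    risks = []
    for k in sizes:
        # Equal-weighted portfolio of the first k picked stocks.
        port = returns[:, picks[:k]].mean(axis=1)
        risks.append(port.std(ddof=1))
    # Simple linear regression: portfolio risk as a function of portfolio size.
    slope, _intercept = np.polyfit(sizes, risks, 1)
    slopes.append(slope)

print("average slope of risk vs. portfolio size:", np.mean(slopes))
print("share of investors whose fitted risk rises with size:",
      float(np.mean(np.array(slopes) > 0)))
```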
Abstract:
In this work, Markov chains are the tool used for modeling and analyzing the convergence of the genetic algorithm, both in its standard version and in the other versions the genetic algorithm allows. In addition, we intend to compare the performance of the standard version with the fuzzy version, believing that the latter gives the genetic algorithm a strong ability to find a global optimum, a property of global optimization algorithms. The choice of this algorithm is due to the fact that it has become, over the past thirty years, one of the most important tools for solving optimization problems. This choice is due to its effectiveness in finding a good-quality solution, considering that a good-quality solution is acceptable given that there may not be any algorithm able to obtain the optimal solution for many of these problems. The algorithm can be configured in many ways, since it depends not only on how the problem is represented but also on how its operators are defined, ranging from the standard version, in which the parameters are kept fixed, to versions with variable parameters. Therefore, to achieve good performance with this algorithm it is necessary to have an adequate criterion for choosing its parameters, especially the mutation rate and the crossover rate, or even the population size. It is important to remember that in implementations where the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain results in a homogeneous chain, whereas when the parameters are allowed to vary during the execution the modeling Markov chain becomes non-homogeneous. Therefore, in an attempt to improve the algorithm's performance, some studies have tried to set the parameters through strategies that capture intrinsic characteristics of the problem. These characteristics are extracted from the current state of the execution, in order to identify and preserve patterns related to good-quality solutions while, at the same time, discarding low-quality ones. Strategies for feature extraction can use either crisp techniques or fuzzy techniques, the latter implemented through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm, both in its standard version and in the others. In order to evaluate the performance of the non-homogeneous algorithm, tests will be applied comparing the standard genetic algorithm with the fuzzy genetic algorithm, in which the mutation rate is adjusted by a fuzzy controller. To do so, we choose optimization problems whose number of solutions grows exponentially with the number of variables.
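A minimal sketch of the kind of algorithm discussed above is given below: a genetic algorithm whose mutation rate is recomputed each generation, so the induced Markov chain over populations is non-homogeneous. The diversity-based update rule is a simple stand-in for the fuzzy controller, and all names and parameter values are assumptions for illustration.

```python
# Sketch of a genetic algorithm whose mutation rate varies during execution.
# With a fixed rate the induced Markov chain over populations is homogeneous;
# letting the rate vary, as below, makes it non-homogeneous. The diversity-based
# update is only a stand-in for a fuzzy controller.
import random

GENES, POP, GENERATIONS = 30, 40, 200

def fitness(ind):
    # OneMax: count of 1-bits. The number of candidate solutions (2**GENES)
    # grows exponentially with the number of variables.
    return sum(ind)

def adapt_mutation_rate(population, base=1.0 / GENES):
    # Stand-in controller: raise the rate when population diversity is low.
    distinct = len({tuple(ind) for ind in population})
    diversity = distinct / len(population)
    return base * (2.0 - diversity)   # between base and 2 * base

def evolve(seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        rate = adapt_mutation_rate(pop)          # time-varying parameter
        new_pop = []
        for _ in range(POP):
            # Binary tournament selection and one-point crossover.
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            cut = random.randrange(1, GENES)
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with the current rate.
            child = [1 - g if random.random() < rate else g for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "out of", GENES)
```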
Abstract:
Background: The tectum is a structure located in the roof of the midbrain in vertebrates, and is taken to be highly conserved in evolution. The present article assessed three hypotheses concerning the evolution of lamination and cytoarchitecture of the tectum of nontetrapod animals: 1) There is a significant degree of phylogenetic inertia in both traits studied (number of cellular layers and number of cell classes in the tectum); 2) Both traits are positively correlated across evolution after correction for phylogeny; and 3) Different developmental pathways should generate different patterns of lamination and cytoarchitecture. Methodology/Principal Findings: The hypotheses were tested using analytical-computational tools for phylogenetic hypothesis testing. Both traits presented a considerably large phylogenetic signal and were positively associated. However, no difference was found between two clades classified according to the general developmental pathways of their brains. Conclusions/Significance: The evidence amassed points to more variation in the tectum than would be expected by phylogeny in three species from the taxa analysed; this variation is not better explained by differences in the main course of development, as would be predicted by the developmental clade hypothesis. These findings shed new light on the evolution of a functionally important structure in nontetrapods, the most basal radiations of vertebrates.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This study aimed to analyze the composition and the ecological attributes of the zooplankton assemblages (Cladocera and Copepoda) in four marginal lagoons and in the main channel of Rosana Reservoir (SE Brazil). Fieldwork was carried out in September and November 2004 and in January, March, May and August 2005. A total of 72 taxa were identified (55 cladocerans and 17 copepods). Seasonally, a significantly higher richness was observed during the rainy period. The lateral lagoons, compared to the reservoir, and the rainy period, compared to the dry one, showed higher zooplankton abundance. Copepods exhibited higher abundance than cladocerans. Among the copepods, there was a higher abundance of nauplii forms in the lateral lagoons and in the dry period. Calanoida dominated in relation to Cyclopoida. The most numerous cladoceran family was Bosminidae, followed by Daphniidae. The results showed that the zooplankton assemblages are influenced by meteorological factors, by some important nutrients (indirectly) and by phytoplankton abundance. This pattern indicated that in the lateral lagoon system the communities are controlled by bottom-up mechanisms. The results validate the hypothesis that lateral lagoons have a prominent ecological role for the zooplankton of Rosana Reservoir and also evidence the main driving forces influencing the composition and ecological attributes of the assemblages. The incorporation of the reservoir's lateral lagoons in regional environmental programs should be a target strategy for the conservation of the aquatic biota, mitigating the negative impact of the dam.
Abstract:
We show that by introducing appropriate local Z(N) (N greater than or equal to 13) symmetries in electroweak models it is possible to implement an automatic Peccei-Quinn symmetry, at the same time keeping the axion protected against gravitational effects. Although we consider here only an extension of the standard model and a particular 3-3-1 model, the strategy can be used in any kind of electroweak model. An interesting feature of this 3-3-1 model is that if we add (i) right-handed neutrinos, (ii) the conservation of the total lepton number, and (iii) a Z(2) symmetry, the Z(13) and the chiral Peccei-Quinn U(1)P-Q symmetries are both accidental symmetries in the sense that they are not imposed on the Lagrangian but are just a consequence of the particle content of the model, its gauge invariance, renormalizability, and Lorentz invariance. In addition, this model has no domain wall problem.
Abstract:
We analyze the potential of the CERN Large Hadron Collider to probe the Higgs boson couplings to the electroweak gauge bosons. We parametrize the possible deviations of these couplings due to new physics in a model-independent way, using the most general dimension-six effective Lagrangian in which the SU(2)(L) x U(1)(Y) symmetry is realized linearly. For intermediate Higgs masses, the decay channel into two photons is the most important one for Higgs searches at the LHC. We study the effects of these new interactions on the Higgs production mechanism and its subsequent decay into two photons. We show that the LHC will be sensitive to new physics scales beyond the present limits extracted from the LEP and Tevatron physics. (C) 2000 Elsevier B.V. All rights reserved.
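Schematically, a linearly realized dimension-six parametrization of this kind takes the form below; the sample operators shown (O_WW, O_BB) follow a commonly used convention and are given here only for illustration, not necessarily as the exact basis adopted in the paper.

```latex
% Generic form of a linearly realized dimension-six effective Lagrangian
% (illustrative convention; the coefficients f_n/\Lambda^2 parametrize new physics).
\mathcal{L}_{\mathrm{eff}} = \mathcal{L}_{\mathrm{SM}}
  + \sum_{n} \frac{f_n}{\Lambda^{2}}\, \mathcal{O}_n ,
\qquad \text{e.g.}\quad
\mathcal{O}_{WW} = \Phi^{\dagger} \hat{W}_{\mu\nu} \hat{W}^{\mu\nu} \Phi ,
\quad
\mathcal{O}_{BB} = \Phi^{\dagger} \hat{B}_{\mu\nu} \hat{B}^{\mu\nu} \Phi .
```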
Abstract:
CMS is a general purpose experiment, designed to study the physics of pp collisions at 14 TeV at the Large Hadron Collider (LHC). It currently involves more than 2000 physicists from more than 150 institutes and 37 countries. The LHC will provide extraordinary opportunities for particle physics based on its unprecedented collision energy and luminosity when it begins operation in 2007. The principal aim of this report is to present the strategy of CMS to explore the rich physics programme offered by the LHC. This volume demonstrates the physics capability of the CMS experiment. The prime goals of CMS are to explore physics at the TeV scale and to study the mechanism of electroweak symmetry breaking - through the discovery of the Higgs particle or otherwise. To carry out this task, CMS must be prepared to search for new particles, such as the Higgs boson or supersymmetric partners of the Standard Model particles, from the start-up of the LHC, since new physics at the TeV scale may manifest itself with modest data samples of the order of a few fb(-1) or less. The analysis tools that have been developed are applied, in great detail and with all the methodology of performing an analysis on CMS data, to study specific benchmark processes upon which to gauge the performance of CMS. These processes cover several Higgs boson decay channels, the production and decay of new particles such as Z' and supersymmetric particles, B-s production and processes in heavy ion collisions. The simulation of these benchmark processes includes subtle effects such as possible detector miscalibration and misalignment. Besides these benchmark processes, the physics reach of CMS is studied for a large number of signatures arising in the Standard Model and also in theories beyond the Standard Model for integrated luminosities ranging from 1 fb(-1) to 30 fb(-1). The Standard Model processes include QCD, B-physics, diffraction, detailed studies of the top quark properties, and electroweak physics topics such as the W and Z(0) boson properties. The production and decay of the Higgs particle is studied for many observable decays, and the precision with which the Higgs boson properties can be derived is determined. About ten different supersymmetry benchmark points are analysed using full simulation. The CMS discovery reach is evaluated in the SUSY parameter space covering a large variety of decay signatures. Furthermore, the discovery reach for a plethora of alternative models for new physics is explored, notably extra dimensions, new vector boson high mass states, little Higgs models, technicolour and others. Methods to discriminate between models have been investigated. This report is organized as follows. Chapter 1, the Introduction, describes the context of this document. Chapters 2-6 describe examples of full analyses, with photons, electrons, muons, jets, missing E-T, B-mesons and taus, and for quarkonia in heavy ion collisions. Chapters 7-15 describe the physics reach for Standard Model processes, Higgs discovery and searches for new physics beyond the Standard Model.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The present study shows how nature combined a small number of chemical building blocks to synthesize the acylpolyamine toxins in the venoms of Nephilinae orb-web spiders. Considering these structures in four parts, it was possible to rationalize a way to represent the natural combinatorial chemistry involved in the synthesis of these toxins: an aromatic moiety is connected through a linker amino acid to a polyamine chain, which in turn may be connected to an optional tail. The polyamine chains were classified into seven subtypes (from A to G) depending on the way the small chemical blocks are combined. These polyamine chains may be connected to one of three possible chromophore moieties: 2,4-dihydroxyphenyl acetic acid, 4-hydroxyindole acetic acid, or the indole acetic group. The connectivity between the aryl moiety and the polyamine chain is usually made through an asparagine residue; optionally a tail may be attached to the polyamine chain; nine different types of tails were identified among the 72 known acylpolyamine toxin structures. The combinations of three chromophores, two types of amino acid linkers, seven subtypes of polyamine backbone, and nine tail options result in 378 different structural possibilities. However, we detected only 91 different toxin structures, which may represent the most successful structural trials in terms of efficiency of prey paralysis/death.
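The 378 possibilities quoted above are simply the product of the independent structural choices:

```latex
% Count of structural combinations from the building blocks listed above.
3~(\text{chromophores}) \times 2~(\text{linkers}) \times 7~(\text{polyamine subtypes})
\times 9~(\text{tails}) = 378 .
```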
Abstract:
The simulated annealing optimization technique has been successfully applied to a number of electrical engineering problems, including transmission system expansion planning. The method is general in the sense that it does not assume any particular property of the problem being solved, such as linearity or convexity. Moreover, it has the ability to provide solutions arbitrarily close to an optimum (i.e. it is asymptotically convergent) as the cooling process slows down. The drawback of the approach is the computational burden: finding optimal solutions may be extremely expensive in some cases. This paper presents a Parallel Simulated Annealing (PSA) algorithm for solving the long-term transmission network expansion planning problem. A strategy that does not affect the basic convergence properties of the Sequential Simulated Annealing algorithm has been implemented and tested. The paper investigates the conditions under which the parallel algorithm is most efficient. The parallel implementations have been tested on three example networks: a small 6-bus network, and two complex real-life networks. Excellent results are reported in the test section of the paper: in addition to reductions in computing times, the Parallel Simulated Annealing algorithm proposed in the paper has shown significant improvements in solution quality for the largest of the test networks.
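For reference, the sequential loop that the parallel strategy builds on looks roughly like the sketch below. The cost function, neighbourhood move and cooling parameters are generic placeholders rather than the paper's transmission expansion planning formulation; a parallel version would, for example, distribute candidate moves or run cooperating chains without changing the acceptance rule, so the basic convergence properties are preserved.

```python
# Minimal sketch of the sequential simulated annealing loop. The cost function
# and the neighbourhood move are placeholders, not the paper's network model.
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=100.0, alpha=0.95,
                        steps_per_temp=50, t_min=1e-3):
    x, best = x0, x0
    t = t0
    while t > t_min:                       # geometric cooling schedule
        for _ in range(steps_per_temp):
            y = neighbor(x)
            delta = cost(y) - cost(x)
            # Accept improvements always; accept uphill moves with
            # probability exp(-delta / t), which shrinks as t decreases.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x = y
                if cost(x) < cost(best):
                    best = x
        t *= alpha
    return best

# Toy usage: minimize a quadratic over the integers as a stand-in problem.
cost = lambda x: (x - 7) ** 2
neighbor = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(cost, neighbor, x0=0))
```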
Abstract:
Background: The World Health Organization (WHO) advises treatment of Mycobacterium ulcerans disease, also called "Buruli ulcer" (BU), with a combination of the antibiotics rifampicin and streptomycin (R+S), whether followed by surgery or not. In endemic areas, a clinical case definition is recommended. We evaluated the effectiveness of this strategy in a series of patients with large ulcers of >= 10 cm in longest diameter in a rural health zone of the Democratic Republic of Congo (DRC). Methods: A cohort of 92 patients with large ulcerated lesions suspected to be BU was enrolled between October 2006 and September 2007 and treated according to WHO recommendations. The following microbiologic data were obtained: Ziehl-Neelsen (ZN) stained smear, culture and PCR. Histopathology was performed on a sub-sample. Directly observed treatment with R+S was administered daily for 12 weeks and surgery was performed after 4 weeks. Patients were followed up for two years after treatment. Findings: Out of 92 treated patients, 61 tested positive for M. ulcerans by PCR. PCR negative patients had better clinical improvement than PCR positive patients after 4 weeks of antibiotics (54.8% versus 14.8%). For PCR positive patients, the outcome after 4 weeks of antibiotic treatment was related to the ZN positivity at the start. Deterioration of the ulcers was observed in 87.8% (36/41) of the ZN positive and in 12.2% (5/41) of the ZN negative patients. Deterioration due to paradoxical reaction seemed unlikely. After surgery and an additional 8 weeks of antibiotics, 98.4% of PCR positive patients and 83.3% of PCR negative patients were considered cured. The overall recurrence rate was very low (1.1%). Interpretation: The positive predictive value of the WHO clinical case definition was low. The low relapse rate confirms the efficacy of antibiotics. However, the need for and the best time for surgery for large Buruli ulcers require clarification. We recommend confirmation by ZN stain at the rural health centers, since surgical intervention without delay may be necessary in the ZN positive cases to avoid progression of the disease. PCR negative patients were most likely not BU cases. Correct diagnosis and specific management of these non-BU ulcer cases are urgently needed.