8 results for Scaling Strategies
at Universidade do Minho
Abstract:
A high-resolution mtDNA phylogenetic tree allowed us to look backward in time to investigate purifying selection. Purifying selection was very strong in the last 2,500 years, continuously eliminating pathogenic mutations back to the end of the Younger Dryas (∼11,000 years ago), when a large population expansion likely relaxed selection pressure. This was preceded by a phase of stable selection until another relaxation occurred during the out-of-Africa migration. Demography and selection are closely related: expansions led to relaxation of selection, and mutations of higher pathogenicity significantly decreased the growth of descendant lineages. The only detectable positive selection was the recurrence of highly pathogenic nonsynonymous mutations (m.3394T>C, m.3397A>G, m.3398T>C) at interior branches of the tree, preventing the formation of a dinucleotide STR (TATATA) in the MT-ND1 gene. At the most recent time scale, in 124 mother-child transmissions, purifying selection was detectable through the loss of mtDNA variants with high predicted pathogenicity. A few haplogroup-defining sites were also heteroplasmic, in agreement with a significant propensity of 349 positions in the phylogenetic tree to revert to the ancestral variant. This nonrandom mutational property explains the observation of heteroplasmic mutations at some haplogroup-defining sites in sequencing datasets, which need not indicate poor quality, as has been claimed.
Abstract:
Traffic Engineering (TE) approaches are increasingly important in network management, allowing optimized configuration and resource allocation. In link-state routing, setting appropriate weights on the links is both an important and a challenging optimization task. A number of different approaches have been put forward towards this aim, including the successful use of Evolutionary Algorithms (EAs). In this context, this work addresses the evaluation of three distinct EAs, one single-objective and two multi-objective, in two tasks related to weight-setting optimization towards optimal intra-domain routing, knowing the network topology and aggregated traffic demands and seeking to minimize network congestion. In both tasks, the optimization considers scenarios where there is a dynamic alteration in the state of the system: the first considers changes in the traffic demand matrices, and the second considers the possibility of link failures. The methods thus need to simultaneously optimize for both conditions, the normal and the altered one, following a preventive TE approach towards robust configurations. Since this can be formulated as a bi-objective function, the use of multi-objective EAs such as SPEA2 and NSGA-II came naturally, and these were compared to a single-objective EA. The results show a remarkable behavior of NSGA-II in all proposed tasks, scaling well for harder instances and thus presenting itself as the most promising option for TE in these scenarios.
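As an illustration of the bi-objective formulation described above, the sketch below evaluates one candidate weight setting against both the normal state and a link-failure state. The topology, demand matrix, congestion proxy (maximum link utilization), and all function names are illustrative assumptions, not the paper's actual instances or its full routing model; a multi-objective EA such as NSGA-II would minimize the returned pair as the two objectives of an individual.

```python
# Minimal sketch, not the paper's implementation: evaluate a candidate
# link-weight assignment under the normal state and under one link failure.
import heapq
import random

def link_loads(adj, weights, demands):
    """Route each demand on its single shortest path (ECMP splitting,
    used in real link-state routing, is ignored for brevity) and
    accumulate the traffic load placed on every directed link."""
    load = {e: 0.0 for e in weights}
    for (src, dst), vol in demands.items():
        dist, prev = {src: 0.0}, {}
        pq = [(0.0, src)]
        while pq:                      # Dijkstra under current weights
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v in adj[u]:
                nd = d + weights[(u, v)]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        node = dst                     # walk the path back, adding the demand
        while node != src:             # assumes dst is reachable from src
            load[(prev[node], node)] += vol
            node = prev[node]
    return load

def congestion(adj, weights, demands, capacity):
    """Congestion proxy: maximum link utilization over all links."""
    load = link_loads(adj, weights, demands)
    return max(load[e] / capacity[e] for e in load)

def bi_objective(adj, weights, demands, capacity, failed):
    """f1: congestion in the normal state; f2: congestion after the
    (bidirectional) link `failed` is removed from the topology."""
    f1 = congestion(adj, weights, demands, capacity)
    keep = {e: w for e, w in weights.items()
            if e != failed and e != (failed[1], failed[0])}
    adj2 = {u: [v for v in vs if (u, v) in keep] for u, vs in adj.items()}
    f2 = congestion(adj2, keep, demands, capacity)
    return f1, f2

if __name__ == "__main__":
    random.seed(1)
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # 4-node ring
    weights = {(u, v): random.randint(1, 20) for u in adj for v in adj[u]}
    capacity = {e: 10.0 for e in weights}
    demands = {(0, 2): 4.0, (1, 3): 3.0}
    print(bi_objective(adj, weights, demands, capacity, failed=(0, 1)))
```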
Abstract:
We present a study of human mobility at small spatial scales. Unlike large-scale mobility, recently studied through dollar-bill tracking and mobile-phone data sets within one large country or continent, we report Brownian features of human mobility at smaller scales. In particular, the scaling exponent found at the smallest scales is typically close to one-half, differing from the larger values of the exponent characterizing mobility at larger scales. We carefully analyze 12 months of data from the Eduroam database within the Portuguese University of Minho. A full procedure is introduced with the aim of properly characterizing human mobility within the network of access points composing the university's wireless system. In particular, measures of flux are introduced for estimating a distance between access points. This distance is typically non-Euclidean, since the spatial constraints at such small scales distort the continuum space on which human mobility occurs. Since two different exponents are found depending on the scale at which human motion takes place, we raise the question of at which scale the transition from Brownian to non-Brownian motion occurs. In this context, we discuss how the numerical approach can be extended to larger scales, using the full Eduroam network in Europe and in Asia, to uncover the transition between the two dynamical regimes.
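To make the exponent estimate concrete, here is a minimal, self-contained sketch (an assumption, not the paper's actual Eduroam pipeline or its flux-based distance) that fits the scaling exponent of the mean-squared displacement on a log-log scale; for Brownian motion the displacement grows as t^(1/2), so the MSD slope is close to 1.

```python
# Minimal sketch: estimate a mobility scaling exponent from a trajectory
# by fitting log(MSD) vs log(lag). Synthetic data, illustrative only.
import math
import random

def msd(traj, max_lag):
    """Mean-squared displacement of a 1-D trajectory over time lags."""
    out = []
    for lag in range(1, max_lag + 1):
        diffs = [(traj[i + lag] - traj[i]) ** 2
                 for i in range(len(traj) - lag)]
        out.append(sum(diffs) / len(diffs))
    return out

def loglog_slope(ys):
    """Least-squares slope of log(MSD) against log(lag)."""
    xs = [math.log(i + 1) for i in range(len(ys))]
    ls = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ls) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ls))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

if __name__ == "__main__":
    random.seed(0)
    # Synthetic Brownian walk: displacement ~ t^(1/2), so MSD slope ~ 1.
    walk = [0.0]
    for _ in range(10000):
        walk.append(walk[-1] + random.gauss(0, 1))
    print(loglog_slope(msd(walk, 100)))  # expect a value close to 1
```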
Abstract:
A measurement of W boson production in lead-lead collisions at √sNN = 2.76 TeV is presented. It is based on the analysis of data collected with the ATLAS detector at the LHC in 2011, corresponding to an integrated luminosity of 0.14 nb⁻¹ and 0.15 nb⁻¹ in the muon and electron decay channels, respectively. The differential production cross-sections and the lepton charge asymmetry are each measured as a function of the average number of participating nucleons ⟨Npart⟩ and of the absolute pseudorapidity of the charged lepton. The results are compared to predictions based on next-to-leading-order QCD calculations. These measurements are, in principle, sensitive to possible nuclear modifications of the parton distribution functions and also provide information on the scaling of W boson production in multi-nucleon systems.
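For reference, the lepton charge asymmetry mentioned above is conventionally built from the differential W⁺ and W⁻ yields; a standard form of the definition (shown as an illustration, binned here in the charged-lepton pseudorapidity) is:

```latex
% Lepton charge asymmetry as a function of the charged-lepton
% pseudorapidity \eta_\ell (conventional definition; the
% \langle N_\mathrm{part}\rangle binning is analogous):
A_\ell(\eta_\ell) =
  \frac{\mathrm{d}N_{W^{+}}/\mathrm{d}\eta_\ell - \mathrm{d}N_{W^{-}}/\mathrm{d}\eta_\ell}
       {\mathrm{d}N_{W^{+}}/\mathrm{d}\eta_\ell + \mathrm{d}N_{W^{-}}/\mathrm{d}\eta_\ell}
```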
Abstract:
The ATLAS experiment at the LHC has measured the Higgs boson couplings and mass, and searched for invisible Higgs boson decays, using multiple production and decay channels with up to 4.7 fb⁻¹ of pp collision data at √s = 7 TeV and 20.3 fb⁻¹ at √s = 8 TeV. In the current study, the measured production and decay rates of the observed Higgs boson in the γγ, ZZ, WW, Zγ, bb, ττ, and μμ decay channels, along with results from the associated production of a Higgs boson with a top-quark pair, are used to probe the scaling of the couplings with mass. Limits are set on parameters in extensions of the Standard Model, including a composite Higgs boson, an additional electroweak singlet, and two-Higgs-doublet models. Together with the measured mass of the scalar Higgs boson in the γγ and ZZ decay modes, a lower limit is set on the pseudoscalar Higgs boson mass of mA > 370 GeV in the "hMSSM" simplified Minimal Supersymmetric Standard Model. Results from direct searches for heavy Higgs bosons are also interpreted in the hMSSM. Direct searches for invisible Higgs boson decays in the vector-boson fusion mode and in associated production of a Higgs boson with W/Z (Z → ℓℓ, W/Z → jj) are statistically combined to set an upper limit of 0.25 on the Higgs boson invisible branching ratio. The use of the measured visible decay rates in a more general coupling fit improves the upper limit to 0.23, constraining a Higgs portal model of dark matter.
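As a hint of what "probing the scaling of the couplings with mass" involves, one widely used two-parameter fit (shown here as an illustrative assumption, not necessarily the exact parameterization of this study) rescales the fermion and vector-boson coupling modifiers as:

```latex
% (M, \epsilon) parameterisation of coupling scaling with particle mass;
% the Standard Model is recovered for \epsilon = 0 and M = v \simeq 246~\mathrm{GeV}.
\kappa_F = v\,\frac{m_F^{\epsilon}}{M^{1+\epsilon}},
\qquad
\kappa_V = v\,\frac{m_V^{2\epsilon}}{M^{1+2\epsilon}}
```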
Abstract:
A search for Higgs boson production in association with a W or Z boson, in the H → WW* decay channel, is performed with a data sample collected with the ATLAS detector at the LHC in proton-proton collisions at centre-of-mass energies √s = 7 TeV and 8 TeV, corresponding to integrated luminosities of 4.5 fb⁻¹ and 20.3 fb⁻¹, respectively. The WH production mode is studied in two-lepton and three-lepton final states, while two-lepton and four-lepton final states are used to search for the ZH production mode. The observed significance for the combined WH and ZH production is 2.5 standard deviations, while a significance of 0.9 standard deviations is expected under the Standard Model Higgs boson hypothesis. The ratio of the combined WH and ZH signal yield to the Standard Model expectation, μVH, is found to be μVH = 3.0 +1.3/−1.1 (stat.) +1.0/−0.7 (sys.) for a Higgs boson mass of 125.36 GeV. The WH and ZH production modes are also combined with the gluon-fusion and vector-boson-fusion production modes studied in the H → WW* → ℓνℓν decay channel, resulting in an overall observed significance of 6.5 standard deviations and μggF+VBF+VH = 1.16 +0.16/−0.15 (stat.) +0.18/−0.15 (sys.). The results are interpreted in terms of scaling factors of the Higgs boson couplings to vector bosons (κV) and fermions (κF); the combined results are |κV| = 1.06 +0.10/−0.10 and |κF| = 0.85 +0.26/−0.20.
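To make the role of the scaling factors explicit: in the standard κ framework, a signal strength such as μVH factorizes into production and decay pieces. The decomposition below is a generic illustration of that framework, not a statement of this analysis' full fit model:

```latex
% \kappa-framework decomposition of the VH, H -> WW* signal strength:
% production scales as \kappa_V^2, the H -> WW* partial width as
% \kappa_V^2, and the total width as \kappa_H^2(\kappa_V, \kappa_F),
% a branching-ratio-weighted sum of \kappa_V^2 and \kappa_F^2 terms.
\mu_{VH} \;=\; \frac{\sigma_{VH}}{\sigma_{VH}^{\mathrm{SM}}}\cdot
               \frac{\mathrm{BR}_{WW^{*}}}{\mathrm{BR}_{WW^{*}}^{\mathrm{SM}}}
        \;=\; \frac{\kappa_V^{2}\,\kappa_V^{2}}{\kappa_H^{2}(\kappa_V,\kappa_F)}
```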
Abstract:
Doctoral thesis in Science and Engineering of Polymers and Composites
Abstract:
Current data mining engines are difficult to use, requiring tuning by data mining experts in order to deliver optimal results. To address this problem, a new concept was devised: maintaining the functionality of current data mining tools while adding pervasive characteristics, such as invisibility and ubiquity, that focus on the users, improving ease of use and usefulness through autonomous and intelligent data mining processes. This article introduces an architecture for a data mining engine composed of four major components: the database, the control middleware, the processing middleware, and the interface. These components are interlinked but scale independently, allowing the system to adapt to the user's needs. A prototype has been developed to test the architecture. The results are very promising, demonstrating its functionality as well as the need for further improvements.
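A minimal sketch of the four-component split described above, with hypothetical names and interfaces (the article does not specify these APIs); the point is only that each component sits behind its own boundary and can therefore be scaled independently:

```python
# Hypothetical sketch of the four-component architecture: database,
# control middleware, processing middleware, and interface. All names
# and methods are illustrative assumptions, not the article's API.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Database:
    """Stores datasets and mining results; could be replicated or
    sharded independently of the other components."""
    tables: Dict[str, List[float]] = field(default_factory=dict)

@dataclass
class ProcessingMiddleware:
    """Runs the actual mining jobs; scaled out by adding workers."""
    def run(self, dataset: List[float]) -> float:
        return sum(dataset) / len(dataset)   # stand-in for a real model

@dataclass
class ControlMiddleware:
    """Decides what to mine and where, hiding that choice from the user
    (the 'invisibility' characteristic)."""
    workers: List[ProcessingMiddleware]
    def submit(self, db: Database, table: str) -> float:
        worker = self.workers[hash(table) % len(self.workers)]
        return worker.run(db.tables[table])

class Interface:
    """User-facing entry point: the user asks a question and the stack
    below selects and runs the mining process autonomously."""
    def __init__(self, control: ControlMiddleware, db: Database):
        self.control, self.db = control, db
    def ask(self, table: str) -> float:
        return self.control.submit(self.db, table)

if __name__ == "__main__":
    db = Database(tables={"sales": [3.0, 5.0, 7.0]})
    ui = Interface(ControlMiddleware([ProcessingMiddleware()]), db)
    print(ui.ask("sales"))   # -> 5.0
```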