80 results for Kinematic Data
Abstract:
Rockburst is characterized by a violent explosion of a block of rock causing a sudden rupture, and is quite common in deep tunnels. It is critical to understand the phenomenon of rockburst, focusing on the patterns of occurrence, so that these events can be avoided and/or managed, saving costs and possibly lives. The failure mechanism of rockburst needs to be better understood. Laboratory experiments are under way at the Laboratory for Geomechanics and Deep Underground Engineering (SKLGDUE) in Beijing, and the test system is described. A large number of rockburst tests were performed and their results collected, stored in a database and analyzed. Data Mining (DM) techniques were applied to the database in order to develop predictive models for the rockburst maximum stress (σRB) and the rockburst risk index (IRB), parameters that otherwise require the results of such tests to be determined. With the developed models it is possible to predict these parameters with high accuracy using data from the rock mass and the specific project.
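As a loose illustration of the kind of DM regression model the abstract describes, the sketch below fits a random-forest regressor to predict a target such as σRB; the feature names and the synthetic data are hypothetical placeholders, not the actual SKLGDUE database.

```python
# Minimal sketch of a data-mining regression model for a rockburst parameter.
# The feature names and synthetic data are hypothetical placeholders standing in
# for the laboratory test database described in the abstract.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ucs": rng.uniform(50, 250, 300),              # uniaxial compressive strength (MPa), placeholder
    "tensile_strength": rng.uniform(2, 20, 300),   # MPa, placeholder
    "elastic_modulus": rng.uniform(10, 80, 300),   # GPa, placeholder
    "depth": rng.uniform(200, 2000, 300),          # m, placeholder
})
df["sigma_rb"] = 0.5 * df["ucs"] + 0.01 * df["depth"] + rng.normal(0, 5, 300)  # synthetic target

X, y = df.drop(columns="sigma_rb"), df["sigma_rb"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out R2:", r2_score(y_test, model.predict(X_test)))
```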
Abstract:
Various differential cross-sections are measured in top-quark pair (tt¯) events produced in proton--proton collisions at a centre-of-mass energy of √s = 7 TeV at the LHC with the ATLAS detector. These differential cross-sections are measured in a data set corresponding to an integrated luminosity of 4.6 fb−1. The differential cross-sections are presented in terms of kinematic variables of a top-quark proxy referred to as the pseudo-top-quark, whose dependence on theoretical models is minimal. The pseudo-top-quark can be defined in terms of either reconstructed detector objects or stable particles in an analogous way. The measurements are performed on tt¯ events in the lepton+jets channel, requiring exactly one charged lepton and at least four jets, with at least two of them tagged as originating from a b-quark. The hadronic and leptonic pseudo-top-quarks are defined via the leptonic or hadronic decay mode of the W boson produced by the top-quark decay in events with a single charged lepton. The cross-section is measured as a function of the transverse momentum and rapidity of both the hadronic and leptonic pseudo-top-quark as well as the transverse momentum, rapidity and invariant mass of the pseudo-top-quark pair system. The measurements are corrected for detector effects and are presented within a kinematic range that closely matches the detector acceptance. Differential cross-section measurements of the pseudo-top-quark variables are compared with several Monte Carlo models that implement next-to-leading order or leading-order multi-leg matrix-element calculations.
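For reference, the kinematic variables quoted in the abstract (transverse momentum, rapidity, and invariant mass of a pair system) can be computed from four-momenta as in the generic sketch below; this is an illustration of the standard definitions, not the ATLAS pseudo-top-quark reconstruction, and the example four-momenta are made up.

```python
# Generic kinematics from four-momenta (E, px, py, pz), units GeV.
import math

def pt(px, py):
    """Transverse momentum."""
    return math.hypot(px, py)

def rapidity(E, pz):
    """Rapidity y = 0.5 * ln((E + pz) / (E - pz))."""
    return 0.5 * math.log((E + pz) / (E - pz))

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system; each p = (E, px, py, pz)."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in range(1, 4))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Illustrative (made-up) hadronic and leptonic top-quark candidate four-momenta.
t_had = (250.0, 100.0, 50.0, 150.0)
t_lep = (300.0, -80.0, 60.0, 200.0)
print(pt(t_had[1], t_had[2]), rapidity(t_had[0], t_had[3]), invariant_mass(t_had, t_lep))
```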
Abstract:
A search is performed for Higgs bosons produced in association with top quarks using the diphoton decay mode of the Higgs boson. Selection requirements are optimized separately for leptonic and fully hadronic final states from the top quark decays. The dataset used corresponds to an integrated luminosity of 4.5 fb−1 of proton--proton collisions at a center-of-mass energy of 7 TeV and 20.3 fb−1 at 8 TeV recorded by the ATLAS detector at the CERN Large Hadron Collider. No significant excess over the background prediction is observed and upper limits are set on the tt¯H production cross section. The observed exclusion upper limit at 95% confidence level is 6.7 times the predicted Standard Model cross section value. In addition, limits are set on the strength of the Yukawa coupling between the top quark and the Higgs boson, taking into account the dependence of the tt¯H and tH cross sections as well as the H→γγ branching fraction on the Yukawa coupling. Lower and upper limits at 95% confidence level are set at −1.3 and +8.0 times the Yukawa coupling strength in the Standard Model.
Abstract:
Searches are performed for resonant and non-resonant Higgs boson pair production in the hh→γγbb¯ final state using 20 fb−1 of proton--proton collisions at a center-of-mass energy of 8 TeV recorded with the ATLAS detector at the CERN Large Hadron Collider. A 95% confidence level upper limit on the cross section times branching ratio of non-resonant production is set at 2.2 pb, while the expected limit is 1.0 pb. The corresponding limit observed for a narrow resonance ranges between 0.8 and 3.5 pb as a function of its mass.
Abstract:
The MAP-i Doctoral Program of the Universities of Minho, Aveiro and Porto
Abstract:
The study of the interaction between hair filaments and formulations or peptides is of utmost importance in fields like cosmetic research. The structure of keratin intermediate filaments is not fully described, which limits molecular dynamics (MD) studies in this field despite their high potential to improve the area. We developed a computational model of a truncated protofibril and simulated its behavior in alcohol-based formulations and with one peptide. The simulations showed a strong interaction between the benzyl alcohol molecules of the formulations and the model, leading to the disorganization of the keratin chains, which regresses upon removal of the alcohol molecules. This behavior can explain the increase in peptide uptake in hair shafts evidenced in fluorescence microscopy images. The model developed is valid for computationally reproducing the interaction between hair and alcoholic formulations and provides a robust base for new MD studies of hair properties. It is shown that MD simulations can improve hair cosmetic research, for instance by improving the uptake of a compound of interest.
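As a loose illustration of how such an interaction might be quantified when analysing an MD trajectory, the sketch below counts atomic contacts between benzyl alcohol molecules and keratin chains within a distance cutoff; the coordinate arrays are random placeholders and this is not the authors' actual analysis protocol.

```python
# Sketch: count benzyl-alcohol/keratin atomic contacts within a cutoff (e.g. 0.35 nm)
# for one trajectory frame. The coordinate arrays below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
keratin_xyz = rng.uniform(0.0, 5.0, size=(500, 3))   # placeholder keratin atom positions (nm)
alcohol_xyz = rng.uniform(0.0, 5.0, size=(100, 3))   # placeholder benzyl alcohol atom positions (nm)

def contact_count(a, b, cutoff=0.35):
    """Number of atom pairs closer than `cutoff` (periodic boundaries ignored)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return int((d < cutoff).sum())

print("contacts:", contact_count(alcohol_xyz, keratin_xyz))
```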
Abstract:
Propolis is a chemically complex biomass produced by honeybees (Apis mellifera) from plant resins with the addition of salivary enzymes, beeswax, and pollen. The biological activities described for propolis have also been identified for the donor plants' resins, but a major challenge for the standardization of the chemical composition and biological effects of propolis remains a better understanding of the influence of seasonality on the chemical constituents of that raw material. Since propolis quality depends, among other variables, on the local flora, which is strongly influenced by (a)biotic factors over the seasons, unravelling the effect of harvest season on the propolis chemical profile is an issue of recognized importance. For that purpose, fast, cheap, and robust analytical techniques seem to be the best choice for large-scale quality control processes in the most demanding markets, e.g., human health applications. Accordingly, UV-Visible (UV-Vis) scanning spectrophotometry of hydroalcoholic extracts (HE) of seventy-three propolis samples, collected over the seasons in 2014 (summer, spring, autumn, and winter) and 2015 (summer and autumn) in Southern Brazil, was adopted. Machine learning and chemometric techniques were then applied to the UV-Vis dataset, aiming to gain insight into the effect of seasonality on the claimed chemical heterogeneity of propolis samples determined by changes in the flora of the geographic region under study. Descriptive and classification models were built following a chemometric approach, i.e., principal component analysis (PCA) and hierarchical clustering analysis (HCA), supported by scripts written in the R language. The UV-Vis profiles associated with chemometric analysis allowed the identification of a typical pattern in propolis samples collected in the summer. Importantly, the discrimination based on PCA could be improved by using the dataset of the fingerprint region of phenolic compounds (λ = 280-400 nm), suggesting that besides the biological activities of those secondary metabolites, they also play a relevant role in the discrimination and classification of that complex matrix through bioinformatics tools. Finally, a series of machine learning approaches, e.g., partial least squares-discriminant analysis (PLS-DA), k-Nearest Neighbors (kNN), and Decision Trees, proved to be complementary to PCA and HCA, allowing relevant information on sample discrimination to be obtained.
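A minimal sketch of the kind of chemometric step described above (the abstract mentions R scripts; Python is used here purely for illustration): PCA on a UV-Vis absorbance matrix restricted to the 280-400 nm phenolic fingerprint window, followed by a simple kNN classification by harvest season. The data shapes, scan range, and labels are assumptions, with random numbers standing in for the real spectra.

```python
# Sketch: PCA + kNN on UV-Vis spectra of propolis extracts, grouped by harvest season.
# `spectra` is a hypothetical (n_samples, n_wavelengths) absorbance matrix sampled on
# `wavelengths` (nm); `season` holds one placeholder label per sample.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
wavelengths = np.arange(200, 601)                       # assumed scan range, 200-600 nm
spectra = rng.random((73, wavelengths.size))            # placeholder for the 73 HE samples
season = rng.choice(["summer", "autumn", "winter", "spring"], size=73)

# Restrict to the phenolic fingerprint region (280-400 nm) before PCA.
mask = (wavelengths >= 280) & (wavelengths <= 400)
scores = PCA(n_components=3).fit_transform(spectra[:, mask])

knn = KNeighborsClassifier(n_neighbors=5)
print("CV accuracy:", cross_val_score(knn, scores, season, cv=5).mean())
```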
Abstract:
DNA microarrays are one of the most widely used technologies for gene expression measurement. However, there are several distinct microarray platforms, from different manufacturers, each with its own measurement protocol, resulting in data that can hardly be compared or directly integrated. Data integration from multiple sources aims to improve the reliability of statistical tests, mitigating the data dimensionality problem. The integration of heterogeneous DNA microarray platforms comprises a set of tasks that range from the re-annotation of the features used for gene expression to data normalization and batch effect elimination. In this work, a complete methodology for gene expression data integration and application is proposed, comprising a transcript-based re-annotation process and several methods for batch effect attenuation. The integrated data will be used to select the best feature set and learning algorithm for a brain tumor classification case study. The integration will consider data from heterogeneous Agilent and Affymetrix platforms, collected from public gene expression databases such as The Cancer Genome Atlas and Gene Expression Omnibus.
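As a hedged sketch of one simple batch-effect attenuation strategy (per-batch, gene-wise standardization, not the re-annotation pipeline described above and not a substitute for dedicated methods such as ComBat), followed by a classifier on the integrated matrix; all variable names, data shapes, and labels below are assumptions.

```python
# Sketch: crude batch-effect attenuation by per-batch, per-gene z-scoring,
# then a classifier on the integrated expression matrix.
# `expr` is a hypothetical (n_samples, n_genes) matrix mixing two platforms/batches.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
expr = rng.random((120, 2000))                          # placeholder expression matrix
batch = np.array(["agilent"] * 60 + ["affymetrix"] * 60)
tumor_class = rng.choice(["glioblastoma", "astrocytoma"], size=120)

adjusted = expr.copy()
for b in np.unique(batch):
    idx = batch == b
    mu = adjusted[idx].mean(axis=0)
    sd = adjusted[idx].std(axis=0) + 1e-8               # avoid division by zero
    adjusted[idx] = (adjusted[idx] - mu) / sd           # gene-wise z-score within each batch

print("CV accuracy:", cross_val_score(SVC(), adjusted, tumor_class, cv=5).mean())
```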
Abstract:
Transcriptional Regulatory Networks (TRNs) are a powerful tool for representing the interactions that occur within a cell. Recent studies have provided information to help researchers in the tasks of building and understanding these networks. One of the major sources of information for building TRNs is the biomedical literature. However, due to the rapidly increasing number of scientific papers, it is quite difficult to analyse the large number of papers that have been published on this subject. This fact has heightened the importance of Biomedical Text Mining approaches in this task. Also, owing to the lack of adequate standards, as the number of databases increases, inconsistencies concerning gene and protein names and identifiers are common. In this work, we developed an integrated approach for the reconstruction of TRNs that retrieves the relevant information from important biological databases and inserts it into a unique repository, named KREN. We also applied text mining techniques over this integrated repository to build TRNs. For this, it was necessary to create a dictionary of names and synonyms associated with these entities, and also to develop an approach that retrieves all the abstracts of the related scientific papers stored in PubMed, in order to create a corpus of data about genes. Furthermore, these tasks were integrated into @Note, a software system that provides methods from the Biomedical Text Mining field, including algorithms for Named Entity Recognition (NER), extraction of relevant terms from publication abstracts, and extraction of relationships between biological entities (genes, proteins and transcription factors). Finally, this tool was extended to allow the reconstruction of Transcriptional Regulatory Networks from the scientific literature.
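A minimal sketch of the dictionary-based idea described above: a synonym dictionary maps gene/protein surface forms to canonical identifiers, entities are matched in abstracts, and co-occurring pairs become candidate TRN edges. The dictionary, abstracts, identifiers, and matching rule are illustrative placeholders, not the @Note implementation.

```python
# Sketch: dictionary-based NER over abstracts and naive co-occurrence edges for a TRN.
# The synonym dictionary and abstracts below are hypothetical placeholders.
import re
from itertools import combinations

synonyms = {                      # surface form -> canonical identifier (placeholders)
    "lexA": "b4043", "recA": "b2699", "sos regulator": "b4043",
}

abstracts = [
    "lexA represses recA as part of the SOS response.",
    "Expression of recA increases when the SOS regulator is inactivated.",
]

edges = set()
for text in abstracts:
    found = {canon for surface, canon in synonyms.items()
             if re.search(r"\b" + re.escape(surface) + r"\b", text, re.IGNORECASE)}
    # Naive rule: every pair of entities co-occurring in an abstract is a candidate edge.
    edges.update(combinations(sorted(found), 2))

print(edges)   # candidate regulator/target pairs for the TRN
```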
Abstract:
Dijet events produced in LHC proton--proton collisions at a center-of-mass energy of √s = 8 TeV are studied with the ATLAS detector using the full 2012 data set, with an integrated luminosity of 20.3 fb−1. Dijet masses up to about 4.5 TeV are probed. No resonance-like features are observed in the dijet mass spectrum. Limits on the cross section times acceptance are set at the 95% credibility level for various hypotheses of new phenomena in terms of mass or energy scale, as appropriate. This analysis excludes excited quarks with a mass below 4.09 TeV, color-octet scalars with a mass below 2.72 TeV, heavy W′ bosons with a mass below 2.45 TeV, chiral W∗ bosons with a mass below 1.75 TeV, and quantum black holes with six extra space-time dimensions with threshold mass below 5.82 TeV.
Abstract:
Two searches for supersymmetric particles in final states containing a same-flavour opposite-sign lepton pair, jets and large missing transverse momentum are presented. The proton--proton collision data used in these searches were collected at a centre-of-mass energy of √s = 8 TeV by the ATLAS detector at the Large Hadron Collider and correspond to an integrated luminosity of 20.3 fb−1. Two leptonic production mechanisms are considered: decays of squarks and gluinos with Z bosons in the final state, resulting in a peak in the dilepton invariant mass distribution around the Z-boson mass; and decays of neutralinos (e.g. χ~02→ℓ+ℓ−χ~01), resulting in a kinematic endpoint in the dilepton invariant mass distribution. For the former, an excess of events above the expected Standard Model background is observed, with a significance of 3 standard deviations. In the latter case, the data are well described by the expected Standard Model background. The results from each channel are interpreted in the context of several supersymmetric models involving the production of squarks and gluinos.
Abstract:
Doctoral Programme in Mathematics and Applications.
Abstract:
The results of a search for charged Higgs bosons decaying to a τ lepton and a neutrino, H±→τ±ν, are presented. The analysis is based on 19.5 fb−1 of proton--proton collision data at √s = 8 TeV collected by the ATLAS experiment at the Large Hadron Collider. Charged Higgs bosons are searched for in events consistent with top-quark pair production or in associated production with a top quark. The final state is characterised by the presence of a hadronic τ decay, missing transverse momentum, b-tagged jets, a hadronically decaying W boson, and the absence of any isolated electrons or muons with high transverse momenta. The data are consistent with the expected background from Standard Model processes. A statistical analysis leads to 95% confidence-level upper limits on the product of branching ratios B(t→bH±)×B(H±→τ±ν), between 0.23% and 1.3%, for charged Higgs boson masses in the range 80--160 GeV. It also leads to 95% confidence-level upper limits on the production cross section times branching ratio, σ(pp→tH±+X)×B(H±→τ±ν), between 0.76 pb and 4.5 fb, for charged Higgs boson masses ranging from 180 GeV to 1000 GeV. In the context of different scenarios of the Minimal Supersymmetric Standard Model, these results exclude nearly all values of tanβ above one for charged Higgs boson masses between 80 GeV and 160 GeV, and exclude a region of parameter space with high tanβ for H± masses between 200 GeV and 250 GeV.
Abstract:
The inclusive jet cross-section is measured in proton--proton collisions at a centre-of-mass energy of 7 TeV using a data set corresponding to an integrated luminosity of 4.5 fb−1 collected with the ATLAS detector at the Large Hadron Collider in 2011. Jets are identified using the anti-kt algorithm with radius parameter values of 0.4 and 0.6. The double-differential cross-sections are presented as a function of the jet transverse momentum and the jet rapidity, covering jet transverse momenta from 100 GeV to 2 TeV. Next-to-leading-order QCD calculations corrected for non-perturbative effects and electroweak effects, as well as Monte Carlo simulations with next-to-leading-order matrix elements interfaced to parton showering, are compared to the measured cross-sections. A quantitative comparison of the measured cross-sections to the QCD calculations using several sets of parton distribution functions is performed.
Abstract:
The tt¯ production cross-section dependence on jet multiplicity and jet transverse momentum is reported for proton--proton collisions at a centre-of-mass energy of 7 TeV in the single-lepton channel. The data were collected with the ATLAS detector at the CERN Large Hadron Collider and comprise the full 2011 data sample corresponding to an integrated luminosity of 4.6 fb−1. Differential cross-sections are presented as a function of the jet multiplicity for up to eight jets using jet transverse momentum thresholds of 25, 40, 60, and 80 GeV, and as a function of jet transverse momentum up to the fifth jet. The results are shown after background subtraction and corrections for all detector effects, within a kinematic range closely matched to the experimental acceptance. Several QCD-based Monte Carlo models are compared with the results. Sensitivity to the parton shower modelling is found at the higher jet multiplicities, at high transverse momentum of the leading jet and in the transverse momentum spectrum of the fifth leading jet. The MC@NLO+HERWIG MC is found to predict too few events at higher jet multiplicities.