990 results for Binary data
Abstract:
In this study, we concentrate on modelling gross primary productivity using two simple approaches to simulate canopy photosynthesis: "big leaf" and "sun/shade" models. Two approaches to calibration are used: scaling up of canopy photosynthetic parameters from the leaf to the canopy level, and fitting canopy biochemistry to eddy covariance fluxes. The models are validated against eddy covariance data from the LBA site C14. Comparing the performance of the two models, we conclude that both numerically (in terms of goodness of fit) and qualitatively (in terms of residual response to different environmental variables) the sun/shade model does a better job. Compared to the sun/shade model, the big leaf model shows a lower goodness of fit, fails to respond to variations in the diffuse fraction, and has skewed responses to temperature and VPD. The separate treatment of sun and shade leaves, combined with the separation of the incoming light into direct beam and diffuse components, makes sun/shade a strong modelling tool that captures more of the observed variability in canopy fluxes as measured by eddy covariance. In conclusion, the sun/shade approach is a relatively simple and effective tool for modelling photosynthetic carbon uptake that could easily be included in many terrestrial carbon models.
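As a rough illustration of the distinction discussed above (and not the calibrated models used in the study), the sketch below contrasts a big-leaf and a sun/shade scheme in Python: incoming PAR is split into direct-beam and diffuse components, the canopy into sunlit and shaded leaf area, and a simple saturating light-response curve stands in for the full leaf biochemistry. All parameter values are illustrative assumptions.

import numpy as np

def light_response(apar, amax=20.0, alpha=0.05):
    # Saturating (rectangular hyperbola) light response, umol CO2 m-2 s-1;
    # stands in for the full leaf biochemistry.
    return amax * alpha * apar / (alpha * apar + amax)

def big_leaf_gpp(par, lai, k=0.5):
    # Big leaf: one average leaf scaled by leaf area index (LAI);
    # note that the diffuse fraction never enters.
    apar = par * (1.0 - np.exp(-k * lai))       # canopy-absorbed PAR
    return light_response(apar / lai) * lai

def sun_shade_gpp(par, diffuse_fraction, lai, k_beam=0.5, k_diff=0.7):
    # Sun/shade: sunlit leaves receive direct beam plus diffuse light,
    # shaded leaves receive diffuse light only.
    par_beam = par * (1.0 - diffuse_fraction)
    par_diff = par * diffuse_fraction
    lai_sun = (1.0 - np.exp(-k_beam * lai)) / k_beam
    lai_shade = lai - lai_sun
    diff_at_midcanopy = par_diff * np.exp(-k_diff * lai / 2.0)
    apar_sun = par_beam + diff_at_midcanopy
    apar_shade = diff_at_midcanopy
    return (light_response(apar_sun) * lai_sun
            + light_response(apar_shade) * lai_shade)

par, lai = 1500.0, 5.0   # umol photons m-2 s-1; leaf area index
for f_diff in (0.1, 0.5, 0.9):
    print(f_diff, round(big_leaf_gpp(par, lai), 1),
          round(sun_shade_gpp(par, f_diff, lai), 1))

Because the big-leaf estimate never sees the diffuse fraction, it stays constant across the three cases, while the sun/shade estimate rises as the diffuse fraction grows, mirroring the residual behaviour reported above.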
Abstract:
A search is performed for Higgs bosons produced in association with top quarks using the diphoton decay mode of the Higgs boson. Selection requirements are optimized separately for leptonic and fully hadronic final states from the top quark decays. The dataset used corresponds to an integrated luminosity of 4.5 fb−1 of proton-proton collisions at a center-of-mass energy of 7 TeV and 20.3 fb−1 at 8 TeV recorded by the ATLAS detector at the CERN Large Hadron Collider. No significant excess over the background prediction is observed and upper limits are set on the tt̄H production cross section. The observed exclusion upper limit at 95% confidence level is 6.7 times the predicted Standard Model cross section value. In addition, limits are set on the strength of the Yukawa coupling between the top quark and the Higgs boson, taking into account the dependence of the tt̄H and tH cross sections as well as the H→γγ branching fraction on the Yukawa coupling. Lower and upper limits at 95% confidence level are set at −1.3 and +8.0 times the Yukawa coupling strength in the Standard Model.
Abstract:
Searches are performed for resonant and non-resonant Higgs boson pair production in the hh→γγbb̄ final state using 20 fb−1 of proton-proton collisions at a center-of-mass energy of 8 TeV recorded with the ATLAS detector at the CERN Large Hadron Collider. A 95% confidence level upper limit on the cross section times branching ratio of non-resonant production is set at 2.2 pb, while the expected limit is 1.0 pb. The corresponding limit observed for a narrow resonance ranges between 0.8 and 3.5 pb as a function of its mass.
Abstract:
The MAP-i Doctoral Program of the Universities of Minho, Aveiro and Porto
Abstract:
The study of the interaction between hair filaments and formulations or peptides is of utmost importance in fields like cosmetic research. The structure of keratin intermediate filaments is not fully described, which limits molecular dynamics (MD) studies in this field despite their high potential to advance the area. We developed a computational model of a truncated protofibril and simulated its behavior in alcohol-based formulations and with one peptide. The simulations showed a strong interaction between the benzyl alcohol molecules of the formulations and the model, leading to a disorganization of the keratin chains that regresses when the alcohol molecules are removed. This behavior can explain the increased peptide uptake in hair shafts evidenced in fluorescence microscopy images. The model developed is suitable for computationally reproducing the interaction between hair and alcoholic formulations and provides a robust basis for new MD studies of hair properties. It is shown that MD simulations can advance hair cosmetic research, for instance by helping to improve the uptake of a compound of interest.
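As a purely illustrative sketch of how such disorganization could be quantified from an MD trajectory (not the protocol of this study; the file names, atom selection and use of MDAnalysis are assumptions), one could track the backbone RMSD of the protofibril model over the simulation:

import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Hypothetical topology/trajectory of the truncated keratin protofibril
# during exposure to the benzyl-alcohol formulation.
u = mda.Universe("protofibril.gro", "protofibril_in_formulation.xtc")

# Backbone RMSD relative to the first frame as a simple proxy for the
# progressive disorganization of the keratin chains.
rmsd = rms.RMSD(u, select="protein and backbone")
rmsd.run()

for frame, time_ps, value in rmsd.results.rmsd[::100]:
    print(f"t = {time_ps:8.1f} ps   backbone RMSD = {value:5.2f} A")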
Abstract:
Poly(vinylidene fluoride), PVDF, films and membranes were prepared by solvent casting from dimethylformamide, DMF, by systematically varying the polymer/solvent ratio and the solvent evaporation temperature. The effect of the processing conditions on the morphology, degree of porosity, mechanical and thermal properties and crystalline phase of the polymer was evaluated. The obtained microstructure is explained by the Flory-Huggins theory. For the binary system, the porous membrane formation is attributed to spinodal decomposition of the liquid-liquid phase separation. The morphological features were simulated through the correlation between the total Gibbs free energy and the Flory-Huggins theory. This correlation allowed the calculation of the PVDF/DMF phase diagram and of the evolution of the microstructure in different regions of the phase diagram. Varying the preparation conditions allows tailoring the polymer microstructure while maintaining a high degree of crystallinity and a large β crystalline phase content. Further, the membranes show adequate mechanical properties for applications as filtration or battery separator membranes.
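A hedged illustration of the thermodynamics invoked here, with placeholder parameters rather than the PVDF/DMF values used in the work: for the Flory-Huggins free energy of mixing per lattice site, ΔG_mix/RT = (φ/N) ln φ + (1−φ) ln(1−φ) + χ φ(1−φ), the spinodal curve follows from ∂²(ΔG_mix/RT)/∂φ² = 0, and compositions above it demix spontaneously.

import numpy as np

def chi_spinodal(phi, n_polymer=1000):
    # Flory-Huggins spinodal condition d2(dG_mix/RT)/dphi2 = 0 gives
    # chi_s(phi) = 0.5 * (1/(N*phi) + 1/(1 - phi)) for a polymer/solvent pair.
    return 0.5 * (1.0 / (n_polymer * phi) + 1.0 / (1.0 - phi))

phi = np.linspace(0.01, 0.95, 40)   # polymer volume fraction
chi = chi_spinodal(phi)

# Compositions with chi above this curve are unstable to small fluctuations
# and phase-separate by spinodal decomposition; below it the mixture is
# (meta)stable.
print("critical chi:", round(float(chi.min()), 3))
print("critical phi:", round(float(phi[chi.argmin()]), 3))

Within the spinodal region the mixture decomposes without a nucleation barrier, which is the liquid-liquid demixing route associated with the porous membrane morphology described above.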
Abstract:
Poly(vinylidene fluoride-co-chlorotrifluoroethylene), PVDF-CTFE, membranes were prepared by solvent casting from dimethylformamide, DMF. The preparation conditions involved a systematic variation of the polymer/solvent ratio and the solvent evaporation temperature. The microstructural variations of the PVDF-CTFE membranes depend on the different regions of the PVDF-CTFE/DMF phase diagram, as explained by the Flory-Huggins theory. The effect of the polymer/solvent ratio and solvent evaporation temperature on the morphology, degree of porosity, β-phase content, degree of crystallinity, and mechanical, dielectric and piezoelectric properties of the PVDF-CTFE polymer was evaluated. In this binary system, the porous microstructure is attributed to spinodal decomposition of the liquid-liquid phase separation. For a polymer/solvent ratio of 20 wt% and the higher solvent evaporation temperature, the β-phase content is around 82% and the piezoelectric coefficient, d33, is −4 pC/N.
Abstract:
In an underwater environment it is difficult to implement solutions for wireless communications. The existing technologies using electromagnetic waves or lasers are not very efficient due to the large attenuation in the aquatic environment. Ultrasound shows a lower attenuation and has therefore been used in underwater long-distance communications. The much slower speed of acoustic propagation in water (about 1500 m/s), compared with that of electromagnetic and optical waves, is another limiting factor for efficient communication and networking. For high data rates and real-time applications it is necessary to use frequencies in the MHz range, allowing communication distances of hundreds of meters with delays of milliseconds. To achieve this goal, it is necessary to develop ultrasound transducers able to work at high frequencies over a wide band, with suitable responses to digital modulations. This work shows how the acoustic impedance influences the performance of an ultrasonic emitter transducer operating at frequencies between 100 kHz and 1 MHz when digital modulations are used. The study includes a Finite Element Method (FEM) model and a MATLAB/Simulink simulation, with experimental validation, to evaluate two types of piezoelectric materials: one based on ceramics (high acoustic impedance) with a resonance design and the other based on a polymer (low acoustic impedance) designed to optimize performance when digital modulations are used. The transducers' performance for Binary Amplitude Shift Keying (BASK), On-Off Keying (OOK), Binary Phase Shift Keying (BPSK) and Binary Frequency Shift Keying (BFSK) modulations with a 1 MHz carrier at a 125 kbps bit rate is compared.
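To make the compared modulation schemes concrete, the sketch below generates BASK/OOK, BPSK and BFSK waveforms for the 1 MHz carrier and 125 kbps bit rate mentioned above; the sampling rate and the BFSK frequency deviation are illustrative choices, and this is not the MATLAB/Simulink model used in the work.

import numpy as np

FS = 32e6                 # sample rate (Hz), illustrative
FC = 1e6                  # carrier frequency (Hz)
BITRATE = 125e3           # bit rate (bps)
SPB = int(FS / BITRATE)   # samples per bit

def modulate(bits, scheme, f_dev=250e3):
    # Return the passband waveform for a bit sequence under one scheme.
    symbols = np.repeat(np.asarray(bits, dtype=float), SPB)
    t = np.arange(symbols.size) / FS
    if scheme in ("BASK", "OOK"):                 # carrier amplitude keyed on/off
        return symbols * np.sin(2 * np.pi * FC * t)
    if scheme == "BPSK":                          # 180-degree phase jumps
        return np.sin(2 * np.pi * FC * t + np.pi * symbols)
    if scheme == "BFSK":                          # two carrier frequencies
        freq = FC + f_dev * (2 * symbols - 1)
        phase = 2 * np.pi * np.cumsum(freq) / FS  # continuous-phase FSK
        return np.sin(phase)
    raise ValueError(scheme)

bits = [1, 0, 1, 1, 0]
for scheme in ("OOK", "BPSK", "BFSK"):
    wave = modulate(bits, scheme)
    print(scheme, wave.shape, round(float(np.abs(wave).max()), 2))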
Abstract:
Propolis is a chemically complex biomass produced by honeybees (Apis mellifera) from plant resins with the addition of salivary enzymes, beeswax, and pollen. The biological activities described for propolis have also been identified in the resin of the donor plants, but a major challenge for the standardization of the chemical composition and biological effects of propolis remains a better understanding of the influence of seasonality on the chemical constituents of that raw material. Since propolis quality depends, among other variables, on the local flora, which is strongly influenced by (a)biotic factors over the seasons, unravelling the effect of the harvest season on the propolis chemical profile is an issue of recognized importance. For that, fast, cheap, and robust analytical techniques seem to be the best choice for large-scale quality control processes in the most demanding markets, e.g., human health applications. Accordingly, UV-Visible (UV-Vis) scanning spectrophotometry of hydroalcoholic extracts (HE) of seventy-three propolis samples, collected over the seasons in 2014 (summer, spring, autumn, and winter) and 2015 (summer and autumn) in Southern Brazil, was adopted. Machine learning and chemometric techniques were then applied to the UV-Vis dataset to gain insights into the effect of seasonality on the claimed chemical heterogeneity of propolis samples determined by changes in the flora of the geographic region under study. Descriptive and classification models were built following a chemometric approach, i.e., principal component analysis (PCA) and hierarchical clustering analysis (HCA), supported by scripts written in the R language. The UV-Vis profiles associated with chemometric analysis allowed the identification of a typical pattern in propolis samples collected in the summer. Importantly, the discrimination based on PCA could be improved by using the dataset of the fingerprint region of phenolic compounds (λ = 280-400 nm), suggesting that, besides their biological activities, those secondary metabolites also play a relevant role in the discrimination and classification of that complex matrix through bioinformatics tools. Finally, a series of machine learning approaches, e.g., partial least squares-discriminant analysis (PLS-DA), k-Nearest Neighbors (kNN), and Decision Trees, proved to be complementary to PCA and HCA, providing relevant information on sample discrimination.
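The study relied on R scripts; as a language-neutral illustration of the same kind of pipeline (with synthetic data standing in for the 73 UV-Vis spectra, and the data shapes and wavelength grid assumed), PCA followed by hierarchical clustering on the phenolic fingerprint region could look like this in Python:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical dataset: 73 propolis extracts x absorbances at 1-nm steps, 250-700 nm.
rng = np.random.default_rng(0)
wavelengths = np.arange(250, 701)
spectra = rng.random((73, wavelengths.size))

# Restrict to the phenolic fingerprint region (280-400 nm) before modelling.
mask = (wavelengths >= 280) & (wavelengths <= 400)
X = StandardScaler().fit_transform(spectra[:, mask])

# PCA for descriptive exploration of seasonal patterns.
pca = PCA(n_components=3)
scores = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))

# HCA (Ward linkage) on the PCA scores to look for season-related groupings.
clusters = fcluster(linkage(scores, method="ward"), t=4, criterion="maxclust")
print("samples per cluster:", np.bincount(clusters)[1:])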
Abstract:
DNA microarrays are one of the most widely used technologies for gene expression measurement. However, there are several distinct microarray platforms, from different manufacturers, each with its own measurement protocol, resulting in data that can hardly be compared or directly integrated. Data integration from multiple sources aims to improve the reliability of statistical tests and to mitigate the data dimensionality problem. The integration of heterogeneous DNA microarray platforms comprises a set of tasks that range from the re-annotation of the features used for gene expression to data normalization and batch effect elimination. In this work, a complete methodology for gene expression data integration and application is proposed, which comprises a transcript-based re-annotation process and several methods for batch effect attenuation. The integrated data will be used to select the best feature set and learning algorithm for a brain tumor classification case study. The integration will consider data from heterogeneous Agilent and Affymetrix platforms, collected from public gene expression databases such as The Cancer Genome Atlas and Gene Expression Omnibus.
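A minimal sketch, not the proposed methodology itself, of the kind of workflow described: synthetic data stand in for the integrated Agilent/Affymetrix expression matrix, a simple per-batch standardization stands in for batch-effect attenuation (ComBat-style adjustment is a common alternative), and feature selection plus a classifier are evaluated by cross-validation.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in for integrated expression data from two platforms:
# rows = samples, columns = transcript-level features shared after re-annotation.
n_agilent, n_affy, n_genes = 60, 80, 500
X = rng.normal(size=(n_agilent + n_affy, n_genes))
batch = np.array([0] * n_agilent + [1] * n_affy)
y = rng.integers(0, 2, size=X.shape[0])          # tumor subtype labels (placeholder)

# Very simple batch-effect attenuation: center and scale each batch separately.
for b in np.unique(batch):
    rows = batch == b
    X[rows] = (X[rows] - X[rows].mean(axis=0)) / X[rows].std(axis=0)

# Feature selection + classifier, evaluated with cross-validation.
model = make_pipeline(SelectKBest(f_classif, k=50), SVC(kernel="linear"))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean().round(2))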
Abstract:
Transcriptional Regulatory Networks (TRNs) are a powerful tool for representing several of the interactions that occur within a cell. Recent studies have provided information to help researchers in the tasks of building and understanding these networks. One of the major sources of information for building TRNs is the biomedical literature. However, due to the rapidly increasing number of scientific papers, it is quite difficult to analyse the large number of papers that have been published on this subject. This fact has heightened the importance of Biomedical Text Mining approaches in this task. Also, owing to the lack of adequate standards, as the number of databases increases, inconsistencies concerning gene and protein names and identifiers are common. In this work, we developed an integrated approach for the reconstruction of TRNs that retrieves the relevant information from important biological databases and inserts it into a unique repository, named KREN. We also applied text mining techniques over this integrated repository to build TRNs. For this, it was necessary to create a dictionary of names and synonyms associated with these entities and to develop an approach that retrieves all the abstracts of the related scientific papers stored in PubMed, in order to create a corpus of data about genes. Furthermore, these tasks were integrated into @Note, a software system that provides methods from the Biomedical Text Mining field, including algorithms for Named Entity Recognition (NER), the extraction of all relevant terms from publication abstracts, and the extraction of relationships between biological entities (genes, proteins and transcription factors). Finally, this tool was extended to allow the reconstruction of Transcriptional Regulatory Networks from the scientific literature.
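An illustrative sketch (not the @Note implementation) of two of the steps described: retrieving abstracts from PubMed with Biopython's Entrez module and running a simple dictionary-based entity lookup over them. The e-mail address, query and synonym dictionary are placeholders.

import re
from Bio import Entrez

Entrez.email = "user@example.org"   # placeholder; NCBI requires a contact address

# Retrieve PubMed IDs and abstracts for a small toy gene-regulation query.
handle = Entrez.esearch(db="pubmed", term="transcription factor regulation E. coli", retmax=5)
ids = Entrez.read(handle)["IdList"]
handle = Entrez.efetch(db="pubmed", id=",".join(ids), rettype="abstract", retmode="text")
abstracts = handle.read()

# Toy dictionary of gene/protein names and synonyms (stand-in for the KREN repository).
synonyms = {"crp": "CRP", "cyclic amp receptor protein": "CRP", "fnr": "FNR"}
pattern = re.compile("|".join(re.escape(k) for k in synonyms), re.IGNORECASE)

# Dictionary-based named-entity lookup: map every mention to its canonical identifier.
mentions = [synonyms[m.group(0).lower()] for m in pattern.finditer(abstracts)]
print("entity mentions found:", mentions[:10])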
Abstract:
Dijet events produced in LHC proton-proton collisions at a center-of-mass energy √s = 8 TeV are studied with the ATLAS detector using the full 2012 data set, with an integrated luminosity of 20.3 fb−1. Dijet masses up to about 4.5 TeV are probed. No resonance-like features are observed in the dijet mass spectrum. Limits on the cross section times acceptance are set at the 95% credibility level for various hypotheses of new phenomena, in terms of mass or energy scale as appropriate. This analysis excludes excited quarks with a mass below 4.09 TeV, color-octet scalars with a mass below 2.72 TeV, heavy W′ bosons with a mass below 2.45 TeV, chiral W∗ bosons with a mass below 1.75 TeV, and quantum black holes with six extra space-time dimensions with threshold mass below 5.82 TeV.
Abstract:
Doctoral Program in Mathematics and Applications.
Abstract:
The results of a search for charged Higgs bosons decaying to a τ lepton and a neutrino, H±→τ±ν, are presented. The analysis is based on 19.5 fb−1 of proton-proton collision data at √s = 8 TeV collected by the ATLAS experiment at the Large Hadron Collider. Charged Higgs bosons are searched for in events consistent with top-quark pair production or in associated production with a top quark. The final state is characterised by the presence of a hadronic τ decay, missing transverse momentum, b-tagged jets, a hadronically decaying W boson, and the absence of any isolated electrons or muons with high transverse momenta. The data are consistent with the expected background from Standard Model processes. A statistical analysis leads to 95% confidence-level upper limits on the product of branching ratios B(t→bH±)×B(H±→τ±ν), between 0.23% and 1.3%, for charged Higgs boson masses in the range 80-160 GeV. It also leads to 95% confidence-level upper limits on the production cross section times branching ratio, σ(pp→tH±+X)×B(H±→τ±ν), between 0.76 pb and 4.5 fb, for charged Higgs boson masses ranging from 180 GeV to 1000 GeV. In the context of different scenarios of the Minimal Supersymmetric Standard Model, these results exclude nearly all values of tanβ above one for charged Higgs boson masses between 80 GeV and 160 GeV, and exclude a region of parameter space with high tanβ for H± masses between 200 GeV and 250 GeV.
Abstract:
The inclusive jet cross-section is measured in proton-proton collisions at a centre-of-mass energy of 7 TeV using a data set corresponding to an integrated luminosity of 4.5 fb−1 collected with the ATLAS detector at the Large Hadron Collider in 2011. Jets are identified using the anti-kt algorithm with radius parameter values of 0.4 and 0.6. The double-differential cross-sections are presented as a function of the jet transverse momentum and the jet rapidity, covering jet transverse momenta from 100 GeV to 2 TeV. Next-to-leading-order QCD calculations corrected for non-perturbative effects and electroweak effects, as well as Monte Carlo simulations with next-to-leading-order matrix elements interfaced to parton showering, are compared to the measured cross-sections. A quantitative comparison of the measured cross-sections to the QCD calculations using several sets of parton distribution functions is performed.