23 results for FLOW PHANTOM EXPERIMENT
at Universidade do Minho
Abstract:
Correlations between the elliptic or triangular flow coefficients vm (m = 2 or 3) and other flow harmonics vn (n = 2 to 5) are measured using √sNN = 2.76 TeV Pb+Pb collision data collected in 2010 by the ATLAS experiment at the LHC, corresponding to an integrated luminosity of 7 μb−1. The vm-vn correlations are measured at midrapidity as a function of centrality and, for events within the same centrality interval, as a function of event ellipticity or triangularity defined in a forward rapidity region. For events within the same centrality interval, v3 is found to be anticorrelated with v2, and this anticorrelation is consistent with similar anticorrelations between the corresponding eccentricities ϵ2 and ϵ3. On the other hand, v4 is observed to increase strongly with v2, and v5 to increase strongly with both v2 and v3. The trend and strength of the vm-vn correlations for n = 4 and 5 are found to disagree with the ϵm-ϵn correlations predicted by initial-geometry models. Instead, these correlations are consistent with the combined effects of a linear contribution to vn and a nonlinear term that is a function of v2² or of v2v3, as predicted by hydrodynamic models. A simple two-component fit is used to separate these two contributions. The extracted linear and nonlinear contributions to v4 and v5 are found to be consistent with previously measured event-plane correlations.
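The two-component decomposition referred to above is conventionally written in terms of the complex flow vectors Vn = vn e^{inΦn}; the sketch below is the standard hydrodynamic form from the literature, with χ4 and χ5 denoting the nonlinear response coefficients and V4L, V5L the linear parts (the paper's exact fit parametrization may differ):

\[
V_4 = V_{4\mathrm{L}} + \chi_4 V_2^2, \qquad
V_5 = V_{5\mathrm{L}} + \chi_5 V_2 V_3,
\]

so that, if the linear part is uncorrelated with the lower-order harmonics, the magnitudes follow

\[
v_4 \approx \sqrt{v_{4\mathrm{L}}^2 + \chi_4^2\, v_2^4}, \qquad
v_5 \approx \sqrt{v_{5\mathrm{L}}^2 + \chi_5^2\, v_2^2 v_3^2}.
\]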
Abstract:
The kinetics of GnP dispersion in polypropylene melt was studied using a prototype small-scale modular extensional mixer. Its modular nature enabled the sequential application of a mixing step, melt relaxation, and a second mixing step. The latter could either reproduce the flow conditions of the first mixing step or generate milder flow conditions. The effect of these sequences of flow constraints upon GnP dispersion along the mixer length was studied for composites with 2 and 10 wt.% GnP. The samples collected along the first mixing zone showed a gradual decrease in the number and size of GnP agglomerates, at a rate that was independent of the flow conditions imposed on the melt but dependent on composition. The relaxation zone induced GnP re-agglomeration, and the application of a second mixing step yielded dispersion results that were largely dependent on the hydrodynamic stresses generated.
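The decrease in agglomerate number and size along the first mixing zone lends itself to a simple kinetic fit. Below is a minimal sketch, assuming first-order dispersion kinetics A(L) = A∞ + (A0 − A∞)e^(−kL) for the agglomerate area ratio; the model choice, variable names, and data points are hypothetical illustrations, not the paper's results:

# Minimal sketch: fitting first-order dispersion kinetics to agglomerate
# area-ratio data sampled along the mixer length. All numbers and names
# are hypothetical; the paper's actual model and data may differ.
import numpy as np
from scipy.optimize import curve_fit

def dispersion_kinetics(L, A0, Ainf, k):
    """Agglomerate area ratio vs. axial position L (first-order decay)."""
    return Ainf + (A0 - Ainf) * np.exp(-k * L)

# hypothetical samples collected along the first mixing zone (mm, %)
L_mm = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
A_pct = np.array([12.0, 7.1, 4.3, 2.9, 2.2, 1.9])

popt, pcov = curve_fit(dispersion_kinetics, L_mm, A_pct, p0=(12.0, 1.5, 0.05))
A0, Ainf, k = popt
print(f"rate constant k = {k:.3f} 1/mm, plateau = {Ainf:.2f} %")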
Abstract:
We are living in the era of Big Data, a time characterized by the continuous creation of vast amounts of data, originating from different sources and in different formats. First with the rise of social networks and, more recently, with the advent of the Internet of Things (IoT), in which everyone and (eventually) everything is linked to the Internet, data with enormous potential for organizations is being continuously generated. In order to be more competitive, organizations want to access and explore all the richness present in those data. Indeed, Big Data is only as valuable as the insights organizations gather from it to make better decisions, which is the main goal of Business Intelligence. In this paper we describe an experiment in which data obtained from a NoSQL data source (a database technology explicitly developed to deal with the specificities of Big Data) is used to feed a Business Intelligence solution.
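The kind of pipeline described, feeding a BI solution from a NoSQL source, amounts to extracting schemaless documents and loading them into a fixed relational schema. Below is a minimal sketch of that step; the abstract does not name its technologies, so MongoDB as the source, the "orders" collection, and the SQLite staging schema are assumptions made purely for illustration:

# Minimal ETL sketch: read documents from a NoSQL source and load them
# into a relational staging table for a BI tool. MongoDB, the collection
# name, and the SQLite schema below are hypothetical choices.
import sqlite3
from pymongo import MongoClient

mongo = MongoClient("mongodb://localhost:27017")
source = mongo["shop"]["orders"]          # hypothetical NoSQL collection

dw = sqlite3.connect("staging.db")        # hypothetical BI staging store
dw.execute("""CREATE TABLE IF NOT EXISTS fact_orders (
                  order_id TEXT PRIMARY KEY,
                  customer TEXT,
                  total REAL,
                  order_date TEXT)""")

# flatten each schemaless document into the fixed relational schema
for doc in source.find({}, {"_id": 1, "customer.name": 1, "total": 1, "date": 1}):
    dw.execute("INSERT OR REPLACE INTO fact_orders VALUES (?, ?, ?, ?)",
               (str(doc["_id"]),
                doc.get("customer", {}).get("name"),
                doc.get("total"),
                doc.get("date")))
dw.commit()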
Abstract:
In this work we present semi-analytical solutions for the electro-osmotic annular flow of viscoelastic fluids modeled by the linear and exponential PTT models. The viscoelastic fluid flows in the axial direction between two concentric cylinders under the combined influence of electrokinetic and pressure forcings. The analysis invokes the Debye-Hückel approximation and includes the limiting case of pure electro-osmotic flow. The solution is valid for both no-slip and slip velocity at the walls, the chosen slip boundary condition being the linear Navier slip velocity model. The combined effects of fluid rheology, electro-osmotic forcing and pressure-gradient forcing on the fluid velocity distribution are also discussed.
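Under the Debye-Hückel approximation invoked above, the electric double-layer potential ψ(r) obeys the linearized Poisson-Boltzmann equation, whose axisymmetric solution in an annulus is a combination of modified Bessel functions. A sketch of the standard form (the constants A and B follow from the wall zeta potentials, which the abstract does not specify):

\[
\frac{1}{r}\frac{d}{dr}\!\left(r\,\frac{d\psi}{dr}\right) = \kappa^2 \psi
\quad\Longrightarrow\quad
\psi(r) = A\, I_0(\kappa r) + B\, K_0(\kappa r),
\]

where κ is the inverse Debye length and I_0, K_0 are the modified Bessel functions of the first and second kind.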
Abstract:
This work provides analytical and numerical solutions for the linear, quadratic and exponential Phan–Thien–Tanner (PTT) viscoelastic models, for axial and helical annular fully-developed flows under no-slip and slip boundary conditions, the latter given by the linear and nonlinear Navier slip laws. The rheology of the three PTT model functions is discussed, together with the influence of the slip velocity upon the flow velocity and stress fields. For the linear PTT model, full analytical solutions for the inverse problem (unknown velocity) are devised for the linear Navier slip law and two different slip exponents. For the linear PTT model with other values of the slip exponent, and for the quadratic PTT model, the polynomial equation for the radial location (β) of the null shear stress must be solved numerically. For both models, the solution of the direct problem is given by an iterative procedure involving three nonlinear equations: one for β, one for the pressure gradient, and one for the torque per unit length. For the exponential PTT model we devise a numerical procedure that readily computes the solution of the pure axial flow problem.
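For reference, the three PTT stress-coefficient functions and the two Navier slip laws discussed above take the standard forms below; the notation (ε and λ for the PTT extensibility and relaxation-time parameters, η for the viscosity coefficient, k and m for the slip coefficient and exponent) is assumed here, since the abstract does not fix it:

\[
f(\operatorname{tr}\boldsymbol{\tau}) = 1 + \frac{\varepsilon\lambda}{\eta}\operatorname{tr}\boldsymbol{\tau} \;\;\text{(linear)}, \qquad
f(\operatorname{tr}\boldsymbol{\tau}) = 1 + \frac{\varepsilon\lambda}{\eta}\operatorname{tr}\boldsymbol{\tau} + \frac{1}{2}\!\left(\frac{\varepsilon\lambda}{\eta}\operatorname{tr}\boldsymbol{\tau}\right)^{2} \;\;\text{(quadratic)},
\]
\[
f(\operatorname{tr}\boldsymbol{\tau}) = \exp\!\left(\frac{\varepsilon\lambda}{\eta}\operatorname{tr}\boldsymbol{\tau}\right) \;\;\text{(exponential)}; \qquad
u_{\mathrm{slip}} = k\,\tau_{w} \;\;\text{(linear Navier)}, \qquad
u_{\mathrm{slip}} = k\,\tau_{w}^{\,m} \;\;\text{(nonlinear Navier)}.
\]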
Abstract:
This work reports the implementation and verification of a new solver in the OpenFOAM® open source computational library, able to cope with integral viscoelastic models based on the integral upper-convected Maxwell model. The code is verified through the comparison of its predictions with analytical solutions and with numerical results obtained with the differential upper-convected Maxwell model.
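The integral upper-convected Maxwell model mentioned above is commonly written in the Lodge rubber-like-liquid form sketched below, where B(t′, t) is the Finger strain tensor between a past time t′ and the current time t, and λ and η are the relaxation time and viscosity; this standard single-mode form is assumed here, since the abstract does not state it:

\[
\boldsymbol{\tau}(t) = \int_{-\infty}^{t} \frac{\eta}{\lambda^{2}}\,
e^{-(t-t')/\lambda} \left[\mathbf{B}(t',t) - \mathbf{I}\right] \mathrm{d}t'.
\]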
Abstract:
The usual high cost of commercial codes, together with some technical limitations, clearly limits the employment of numerical modelling tools in both industry and academia. Consequently, the number of companies that use numerical codes is limited, and a lot of effort is put into the development and maintenance of in-house, academia-based codes. Having in mind the potential of numerical modelling tools as a design aid for both products and processes, different research teams have been contributing to the development of open source codes/libraries. In this framework, any individual can take advantage of the available code capabilities and/or implement additional features based on his or her specific needs. These types of codes are usually developed by large communities, which provide improvements and new features in their specific fields of research, thus significantly speeding up the code development process. Among others, the OpenFOAM® multi-physics computational library, developed by a very large and dynamic community, nowadays comprises several features usually only available in its commercial counterparts, e.g. dynamic meshes, a large diversity of complex physical models, parallelization, and multiphase models, to name just a few. This computational library is developed in C++ and makes full use of the language's capabilities to facilitate the implementation of new functionalities. Concerning the field of computational rheology, OpenFOAM® solvers were recently developed to deal with the most relevant differential viscoelastic rheological models, and stabilization techniques are currently being verified. This work describes the implementation of a new solver in the OpenFOAM® library, able to cope with integral viscoelastic models based on the deformation field method. The implemented solver is verified through the comparison of the predicted results with analytical solutions, with results published in the literature, and by using the Method of Manufactured Solutions.
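The Method of Manufactured Solutions mentioned above verifies a solver by postulating an exact solution, deriving the source term it implies, and checking that the numerical error shrinks at the expected rate under mesh refinement. A minimal self-contained sketch of the idea for a 1D diffusion operator (the manufactured solution and second-order scheme are illustrative choices, not the paper's actual viscoelastic test case):

# Method of Manufactured Solutions sketch for u'' = s on [0, 1]:
# manufacture u_exact = sin(pi x), derive s = -pi^2 sin(pi x), solve the
# discrete problem, and confirm second-order convergence of the error.
import numpy as np

def solve_poisson(n):
    """Second-order finite differences for u'' = s, u(0) = u(1) = 0."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    s = -np.pi**2 * np.sin(np.pi * x[1:-1])      # manufactured source
    A = (np.diag(-2.0 * np.ones(n - 1))
         + np.diag(np.ones(n - 2), 1)
         + np.diag(np.ones(n - 2), -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, s)
    return x, u

errors, meshes = [], [16, 32, 64]
for n in meshes:
    x, u = solve_poisson(n)
    errors.append(np.max(np.abs(u - np.sin(np.pi * x))))

# observed order of accuracy: p = log(e1/e2) / log(h1/h2)
for i in range(1, len(meshes)):
    p = np.log(errors[i-1] / errors[i]) / np.log(meshes[i] / meshes[i-1])
    print(f"n = {meshes[i]:3d}: error = {errors[i]:.3e}, observed order = {p:.2f}")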
Abstract:
A search for a charged Higgs boson, H±, decaying to a W± boson and a Z boson is presented. The search is based on 20.3 fb−1 of proton-proton collision data at a center-of-mass energy of 8 TeV recorded with the ATLAS detector at the LHC. The H± boson is assumed to be produced via vector-boson fusion and the decays W± → qq̄′ and Z → e+e−/μ+μ− are considered. The search is performed in a range of charged Higgs boson masses from 200 to 1000 GeV. No evidence for the production of an H± boson is observed. Upper limits of 31 to 1020 fb at 95% CL are placed on the cross section for vector-boson fusion production of an H± boson times its branching fraction to W±Z. The limits are compared with predictions from the Georgi-Machacek Higgs Triplet Model.
Abstract:
A search for the decay to a pair of new particles of either the 125 GeV Higgs boson (h) or a second CP-even Higgs boson (H) is presented. The dataset corresponds to an integrated luminosity of 20.3 fb−1 of pp collisions at √s = 8 TeV recorded by the ATLAS experiment at the LHC in 2012. The search was performed in the context of the next-to-minimal supersymmetric standard model, in which the new particles are the lightest neutral pseudoscalar Higgs bosons (a). One of the two a bosons is required to decay to two muons while the other is required to decay to two τ-leptons. No significant excess is observed above the expected backgrounds in the dimuon invariant mass range from 3.7 GeV to 50 GeV. Upper limits are placed on the production of h→aa relative to the Standard Model gg→h production, assuming no coupling of the a boson to quarks. The most stringent limit is placed at 3.5% for ma = 3.75 GeV. Upper limits are also placed on the production cross section of H→aa from 2.33 pb to 0.72 pb, for fixed ma = 5 GeV with mH ranging from 100 GeV to 500 GeV.
Abstract:
This paper describes the trigger and offline reconstruction, identification and energy calibration algorithms for hadronic decays of tau leptons, employed for the data collected from pp collisions in 2012 with the ATLAS detector at the LHC at a center-of-mass energy of √s = 8 TeV. The performance of these algorithms is measured in most cases with Z decays to tau leptons, using the full 2012 dataset, corresponding to an integrated luminosity of 20.3 fb−1. An uncertainty on the offline reconstructed tau energy scale of 2% to 4%, depending on transverse energy and pseudorapidity, is achieved using two independent methods. The offline tau identification efficiency is measured with a precision of 2.5% for hadronically decaying tau leptons with one associated track, and of 4% for the case of three associated tracks, inclusive in pseudorapidity and for a visible transverse energy greater than 20 GeV. For hadronic tau lepton decays selected by offline algorithms, the tau trigger identification efficiency is measured with a precision of 2% to 8%, depending on the transverse energy. The performance of the tau algorithms, both offline and at the trigger level, is found to be stable with respect to the number of concurrent proton-proton interactions and has supported a variety of physics results using hadronically decaying tau leptons at ATLAS.
Abstract:
Biofilm research is growing more diverse and dependent on high-throughput technologies, and the large-scale production of results makes data substantiation harder. In particular, it is often the case that experimental protocols are adapted to meet the needs of a particular laboratory and no statistical validation of the modified method is provided. This paper discusses the impact of intra-laboratory adaptation and non-rigorous documentation of experimental protocols on biofilm data interchange and validation. The case study is a non-standard, but widely used, workflow for Pseudomonas aeruginosa biofilm development, considering three analysis assays: the crystal violet (CV) assay for biomass quantification, the XTT assay for respiratory activity assessment, and the colony forming units (CFU) assay for determination of cell viability. The ruggedness of the protocol was assessed by introducing small changes in the biofilm growth conditions, which simulate minor protocol adaptations and non-rigorous protocol documentation. Results show that even minor variations in the biofilm growth conditions may affect the results considerably, and that the biofilm analysis assays lack repeatability. Intra-laboratory validation of non-standard protocols is found critical to ensure data quality and enable the comparison of results within and among laboratories.
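Ruggedness testing of the kind described above is typically carried out by perturbing one growth condition at a time and testing whether the assay readout shifts significantly. A minimal sketch using a one-way ANOVA across conditions, plus a within-condition repeatability check; the condition labels and OD570 values are hypothetical, and the paper's actual statistical design may differ:

# Ruggedness sketch: compare crystal violet (CV) biomass readings obtained
# under small deliberate variations of the biofilm growth conditions.
# Condition labels and OD570 values are hypothetical illustrations.
import statistics
from scipy import stats

cv_od570 = {
    "reference":       [1.02, 0.98, 1.05, 1.00, 0.97],
    "temp +1 degC":    [1.21, 1.18, 1.25, 1.19, 1.22],
    "shaking 150 rpm": [0.84, 0.88, 0.81, 0.86, 0.83],
}

# one-way ANOVA: does any growth-condition change shift the readout?
f_stat, p_value = stats.f_oneway(*cv_od570.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("growth-condition changes significantly affect the CV readout")

# repeatability within each condition: coefficient of variation (%)
for cond, vals in cv_od570.items():
    cov = statistics.stdev(vals) / statistics.mean(vals) * 100
    print(f"{cond}: coefficient of variation = {cov:.1f} %")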
Abstract:
The normalized differential cross section for top-quark pair production in association with at least one jet is studied as a function of the inverse of the invariant mass of the tt̄+1-jet system. This distribution can be used for a precise determination of the top-quark mass, since gluon radiation depends on the mass of the quarks. The experimental analysis is based on proton-proton collision data collected by the ATLAS detector at the LHC with a centre-of-mass energy of 7 TeV, corresponding to an integrated luminosity of 4.6 fb−1. The selected events were identified using the lepton+jets top-quark-pair decay channel, where lepton refers to either an electron or a muon. The observed distribution is compared to a theoretical prediction at next-to-leading-order accuracy in quantum chromodynamics using the pole-mass scheme. With this method, the measured value of the top-quark pole mass, m_t^pole, is m_t^pole = 173.7 ± 1.5 (stat.) ± 1.4 (syst.) +1.0/−0.5 (theory) GeV. This result represents the most precise measurement of the top-quark pole mass to date.
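The observable referred to above is conventionally defined as a normalized differential cross section in a dimensionless variable ρs; the sketch below is the standard form proposed for this method, with m0 an arbitrary reference scale of the order of the top-quark mass, assumed here since the abstract does not spell it out:

\[
\mathcal{R}(m_t^{\mathrm{pole}}, \rho_s) =
\frac{1}{\sigma_{t\bar{t}+1\text{-jet}}}\,
\frac{\mathrm{d}\sigma_{t\bar{t}+1\text{-jet}}}{\mathrm{d}\rho_s},
\qquad
\rho_s = \frac{2 m_0}{\sqrt{s_{t\bar{t}+1\text{-jet}}}},
\]

where √(s_{tt̄+1-jet}) is the invariant mass of the tt̄+1-jet system.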
Abstract:
A summary of the constraints from the ATLAS experiment on R-parity-conserving supersymmetry is presented. Results from 22 separate ATLAS searches are considered, each based on the analysis of up to 20.3 fb−1 of proton-proton collision data at centre-of-mass energies of √s = 7 and 8 TeV at the Large Hadron Collider. The results are interpreted in the context of the 19-parameter phenomenological minimal supersymmetric standard model, in which the lightest supersymmetric particle is a neutralino, taking into account constraints from previous precision electroweak and flavour measurements as well as from dark-matter-related measurements. The results are presented in terms of constraints on supersymmetric particle masses and are compared to limits from simplified models. The impact of the ATLAS searches on parameters such as the dark matter relic density, the couplings of the observed Higgs boson, and the degree of electroweak fine-tuning is also shown. Spectra for surviving supersymmetry model points with low fine-tuning are presented.