988 results for Complex vector fields
Abstract:
The moisture content in concrete structures has an important influence on their behavior and performance. Several validated numerical approaches adopt the governing equation for relative humidity fields proposed in Model Code 1990/2010. Nevertheless, there is no integrative study addressing the choice of parameters for the simulation of the humidity diffusion phenomenon, particularly with regard to the range of parameters put forward by Model Code 1990/2010. A software tool based on a finite difference method algorithm (1D and axisymmetric cases) is used to perform sensitivity analyses on the main parameters in a normal strength concrete. Then, based on the conclusions of the sensitivity analyses, experimental results from nine different concrete compositions are analyzed. The software is used to identify the material parameters that best fit the experimental data. In general, the model was able to fit the experimental results satisfactorily, and new correlations are proposed, particularly focusing on the boundary transfer coefficient.
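As a rough illustration of the kind of computation involved, the sketch below marches a Model Code-type nonlinear humidity diffusion equation with an explicit 1D finite-difference scheme, including a surface transfer (Robin) boundary condition governed by a boundary transfer coefficient. All parameter values (D1, alpha, Hc, n, slab size, hf) are placeholder assumptions for illustration, not values from the study.

```python
import numpy as np

def diffusivity(H, D1=1e-10, alpha=0.05, Hc=0.80, n=15):
    # Humidity-dependent diffusion coefficient (m^2/s) in the form adopted by
    # Model Code 1990/2010; all parameter values here are assumed placeholders.
    return D1 * (alpha + (1.0 - alpha) / (1.0 + ((1.0 - H) / (1.0 - Hc)) ** n))

def dry_slab(L=0.10, nx=51, t_end=3.6e6, H0=1.0, H_env=0.50, hf=1e-8):
    # Explicit finite-difference march of dH/dt = d/dx( D(H) dH/dx ) on a slab
    # saturated at t = 0, sealed at x = L and drying at x = 0 through a surface
    # transfer condition with boundary transfer coefficient hf (m/s).
    dx = L / (nx - 1)
    H = np.full(nx, H0)
    t = 0.0
    while t_end - t > 1e-6:
        D = diffusivity(H)
        dt = min(0.25 * dx**2 / D.max(), t_end - t)  # explicit stability limit
        Df = 0.5 * (D[1:] + D[:-1])                  # diffusivities at cell faces
        q = Df * np.diff(H) / dx                     # fluxes at cell faces
        Hn = H.copy()
        Hn[1:-1] += dt * np.diff(q) / dx
        Hn[0] += dt * (q[0] - hf * (H[0] - H_env)) / (dx / 2)  # drying surface
        Hn[-1] += dt * (-q[-1]) / (dx / 2)                     # sealed face
        H, t = Hn, t + dt
    return H
```

With these placeholder values the humidity profile stays monotone, dropping near the drying face while the sealed face remains close to saturation, which is the qualitative behavior a sensitivity analysis on D1, Hc, n and hf would probe.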
Abstract:
It is difficult to avoid the "smart systems" topic when discussing smart prevention and, similarly, it is difficult to address smart systems without focusing on their ability to learn. Along the same line of thought, in the current reality it seems a Herculean task (or an irreparable omission) to approach the topic of certified occupational health and safety management systems (OHSMS) without discussing integrated management systems (IMSs). The available data suggest that OHSMS seldom operate as the single management system (MS) in a company, so any statement concerning OHSMS should mainly be interpreted from an integrated perspective. A major distinction can be drawn between generic systems that learn, i.e., systems that have "memory", and those that do not. The former are often depicted as adaptive, since they take past events into account to deal with novel, similar and future events, modifying their structure to enable success in their environment. Often, these systems present nonlinear behavior and huge uncertainty in the forecasting of some events. This paper seeks to portray, for the first time as far as we could determine, IMSs as complex adaptive systems (CASs) by listing their properties and dissecting the features that enable them to evolve and self-organize in order to, holistically, fulfil the requirements of different stakeholders and thus thrive by assuring the successful sustainability of a company. Based on the literature review carried out, this is the first time that IMSs are identified as CASs, which may develop fruitful synergies for both the MS and CAS communities. By performing a thorough literature review, and based on some concepts embedded in the "DNA" of the subsystem implementation standards, the specific aim is to identify, determine and discuss the properties of a generic IMS that should be considered in order to classify it as a CAS.
Abstract:
In the present work, the benefits of using graphics processing units (GPUs) to aid the design of complex-geometry profile extrusion dies are studied. For that purpose, a 3D finite-volume code that employs unstructured meshes to solve the coupled continuity, momentum and energy conservation equations governing the fluid flow, together with a constitutive equation, was used. To evaluate the possibility of reducing the time spent on the numerical calculations, the code was parallelized on the GPU using a simple programming approach without complex memory manipulations. For verification purposes, simulations were performed for three benchmark problems: Poiseuille flow, lid-driven cavity flow and flow around a cylinder. Subsequently, the code was used in the design of two real-life extrusion dies, for the production of a medical catheter and of a wood-plastic composite decking profile. To evaluate the benefits, the results obtained with the GPU-parallelized code were compared, in terms of speedup, with a serial implementation of the same code that traditionally runs on the central processing unit (CPU). The results show that, even with the simple parallelization approach employed, a significant reduction of the computation times was obtained.
Abstract:
Anaerobic digestion (AD) is a well-established technology used for the treatment of wastes and wastewaters with high organic content. During AD, organic matter is converted stepwise to methane-containing biogas, a renewable energy carrier. Methane production occurs in the last AD step and relies on methanogens, which are rather sensitive to some contaminants commonly found in wastewaters (e.g. heavy metals), or are easily outcompeted by other groups of microorganisms (e.g. sulphate-reducing bacteria, SRB). This review gives an overview of previous research and pilot-scale studies that shed some light on the effects of sulphate and heavy metals on methanogenesis. Despite the numerous studies on this subject, comparison is not always possible due to differences in the experimental conditions used and the parameters examined. An overview of the possible benefits of methanogen and SRB co-habitation is also covered. Small amounts of sulphide produced by SRB can precipitate with metals, neutralising the negative effects of sulphide accumulation and free heavy metals on methanogenesis. Knowledge of how to untangle and balance sulphate reduction and methanogenesis is crucial to exploit biogenic sulphide as a metal detoxification agent with minimal loss of methane production in anaerobic digesters.
Abstract:
The usual high cost of commercial codes, together with some technical limitations, clearly limits the use of numerical modelling tools in both industry and academia. Consequently, the number of companies that use numerical codes is limited, and a lot of effort is put into the development and maintenance of in-house academic codes. Having in mind the potential of numerical modelling tools as a design aid for both products and processes, different research teams have been contributing to the development of open-source codes/libraries. In this framework, any individual can take advantage of the available code capabilities and/or implement additional features based on specific needs. These types of codes are usually developed by large communities, which provide improvements and new features in their specific fields of research, thus significantly speeding up the code development process. Among others, the OpenFOAM® multi-physics computational library, developed by a very large and dynamic community, nowadays comprises several features usually only available in its commercial counterparts: dynamic meshes, a large diversity of complex physical models, parallelization and multiphase models, to name just a few. This computational library is developed in C++ and makes use of most of the language's capabilities to facilitate the implementation of new functionalities. In the field of computational rheology, OpenFOAM® solvers were recently developed to deal with the most relevant differential viscoelastic rheological models, and stabilization techniques are currently being verified. This work describes the implementation of a new solver in the OpenFOAM® library, able to cope with integral viscoelastic models based on the deformation field method. The implemented solver is verified through comparison of the predicted results with analytical solutions, with results published in the literature, and by using the Method of Manufactured Solutions.
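The Method of Manufactured Solutions mentioned above can be illustrated independently of OpenFOAM®: pick an analytical field, derive the source term that makes it an exact solution of the governing equation, feed that source to the solver, and check that the error decays at the scheme's formal order. A minimal NumPy sketch for a second-order Poisson solver follows (a generic illustration of the technique, not the paper's C++ viscoelastic solver):

```python
import numpy as np

def mms_error(n):
    # Solve -u'' = f on (0,1) with u(0) = u(1) = 0 using second-order central
    # differences. The source f is *manufactured* so that the exact solution
    # is u*(x) = sin(pi x), i.e. f = pi^2 sin(pi x).
    x = np.linspace(0.0, 1.0, n + 2)         # n interior nodes
    dx = x[1] - x[0]
    f = np.pi**2 * np.sin(np.pi * x[1:-1])   # manufactured source term
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2
    u = np.linalg.solve(A, f)
    return np.abs(u - np.sin(np.pi * x[1:-1])).max()

# Halving dx should divide the error by ~4 for a second-order scheme.
e_coarse, e_fine = mms_error(21), mms_error(43)   # dx = 1/22 and 1/44
order = np.log2(e_coarse / e_fine)
```

An observed order close to the scheme's formal order is the pass criterion; a mismatch flags an implementation bug even when no analytical solution of the original problem is available.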
Abstract:
[Excerpt] Thermoplastic profiles are very attractive due to their inherent design freedom. However, the usual methodologies employed to design extrusion forming tools, based on experimental trial-and-error procedures, are highly dependent on the designer's experience and lead to high resource consumption. Despite the relatively low cost of the raw materials employed in the production of this type of profile, the resources involved in the die design process significantly increase their cost. These difficulties are even more evident when a complex-geometry profile has to be produced and there is no previous experience with similar geometries. Therefore, novel design approaches are required in order to reduce the required resources and guarantee a good performance of the produced profile. (...)
Abstract:
The final ATLAS Run 1 measurements of Higgs boson production and couplings in the decay channel H→ZZ∗→ℓ+ℓ−ℓ′+ℓ′−, where ℓ,ℓ′=e or μ, are presented. These measurements were performed using pp collision data corresponding to integrated luminosities of 4.5 fb−1 and 20.3 fb−1 at center-of-mass energies of 7 TeV and 8 TeV, respectively, recorded with the ATLAS detector at the LHC. The H→ZZ∗→4ℓ signal is observed with a significance of 8.1 standard deviations at 125.36 GeV, the combined ATLAS measurement of the Higgs boson mass from the H→γγ and H→ZZ∗→4ℓ channels. The production rate relative to the Standard Model expectation, the signal strength, is measured in four different production categories in the H→ZZ∗→4ℓ channel. The measured signal strength, at this mass, and with all categories combined, is 1.44 +0.40/−0.33. The signal strength for Higgs boson production in gluon fusion or in association with tt¯ or bb¯ pairs is found to be 1.7 +0.5/−0.4, while the signal strength for vector-boson fusion combined with WH/ZH associated production is found to be 0.3 +1.6/−0.9.
Abstract:
We report the observation of Higgs boson decays to WW∗ based on an excess over background of 6.1 standard deviations in the dilepton final state, where the Standard Model expectation is 5.8 standard deviations. Evidence for the vector-boson fusion (VBF) production process is obtained with a significance of 3.2 standard deviations. The results are obtained from a data sample corresponding to an integrated luminosity of 25 fb−1 from √s = 7 and 8 TeV pp collisions recorded by the ATLAS detector at the LHC. For a Higgs boson mass of 125.36 GeV, the ratio of the measured value to the expected value of the total production cross section times branching fraction is 1.09 +0.16/−0.15 (stat.) +0.17/−0.14 (syst.). The corresponding ratios for the gluon fusion and vector-boson fusion production mechanisms are 1.02 ± 0.19 (stat.) +0.22/−0.18 (syst.) and 1.27 +0.44/−0.40 (stat.) +0.30/−0.21 (syst.), respectively. At √s = 8 TeV, the total production cross sections are measured to be σ(gg→H→WW∗) = 4.6 ± 0.9 (stat.) +0.8/−0.7 (syst.) pb and σ(VBF H→WW∗) = 0.51 +0.17/−0.15 (stat.) +0.13/−0.08 (syst.) pb. The fiducial cross section is determined for the gluon-fusion process in exclusive final states with zero or one associated jet.
Abstract:
A new species of the genus Cerqueirellum Py-Daniel, 1983 (Diptera: Simuliidae) is described. The adults are similar to those of C. oyapockense (Floch & Abonnenc, 1946) and C. roraimense (Nunes de Mello, 1974): the females are very similar, while the males present subtle differences. The main characters distinguishing this new species from others of the genus Cerqueirellum are the larval integument covered with stout spines and the long cephalic trichomes of the pupa. Some females were found infected with Mansonella ozzardii (Manson, 1897) (Nematoda, Onchocercidae) and probably transmit mansonelliasis in the Ituxi river, state of Amazonas, Brazil.
Abstract:
A search has been performed for pair production of heavy vector-like down-type (B) quarks. The analysis explores the lepton-plus-jets final state, characterized by events with one isolated charged lepton (electron or muon), significant missing transverse momentum and multiple jets. One or more jets are required to be tagged as arising from b-quarks, and at least one pair of jets must be tagged as arising from the hadronic decay of an electroweak boson. The analysis uses the full data sample of pp collisions recorded in 2012 by the ATLAS detector at the LHC, operating at a center-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 20.3 fb−1. No significant excess of events is observed above the expected background. Limits are set on vector-like B production, as a function of the B branching ratios, assuming the allowable decay modes are B→Wt/Zb/Hb. In the chiral limit with a branching ratio of 100% for the decay B→Wt, the observed (expected) 95% CL lower limit on the vector-like B mass is 810 GeV (760 GeV). In the case where the vector-like B quark has branching ratio values corresponding to those of an SU(2) singlet state, the observed (expected) 95% CL lower limit on the vector-like B mass is 640 GeV (505 GeV). The same analysis, when used to investigate pair production of a colored, charge 5/3 exotic fermion T5/3, with subsequent decay T5/3→Wt, sets an observed (expected) 95% CL lower limit on the T5/3 mass of 840 GeV (780 GeV).
Abstract:
A search for a charged Higgs boson, H±, decaying to a W± boson and a Z boson is presented. The search is based on 20.3 fb−1 of proton-proton collision data at a center-of-mass energy of 8 TeV recorded with the ATLAS detector at the LHC. The H± boson is assumed to be produced via vector-boson fusion, and the decays W±→qq̄′ and Z→e+e−/μ+μ− are considered. The search is performed in a range of charged Higgs boson masses from 200 to 1000 GeV. No evidence for the production of an H± boson is observed. Upper limits of 31 to 1020 fb at 95% CL are placed on the cross section for vector-boson fusion production of an H± boson times its branching fraction to W±Z. The limits are compared with predictions from the Georgi-Machacek Higgs Triplet Model.
Abstract:
A search for pair production of vector-like quarks, both up-type (T) and down-type (B), as well as for four-top-quark production, is presented. The search is based on pp collisions at √s = 8 TeV recorded in 2012 with the ATLAS detector at the CERN Large Hadron Collider and corresponding to an integrated luminosity of 20.3 fb−1. Data are analysed in the lepton-plus-jets final state, characterised by an isolated electron or muon with high transverse momentum, large missing transverse momentum and multiple jets. Dedicated analyses are performed targeting three cases: a T quark with significant branching ratio to a W boson and a b-quark (TT¯→Wb+X), and both a T quark and a B quark with significant branching ratio to a Higgs boson and a third-generation quark (TT¯→Ht+X and BB¯→Hb+X respectively). No significant excess of events above the Standard Model expectation is observed, and 95% CL lower limits are derived on the masses of the vector-like T and B quarks under several branching ratio hypotheses assuming contributions from T→Wb, Zt, Ht and B→Wt, Zb, Hb decays. The 95% CL observed lower limits on the T quark mass range between 715 GeV and 950 GeV for all possible values of the branching ratios into the three decay modes, and are the most stringent constraints to date. Additionally, the most restrictive upper bounds on four-top-quark production are set in a number of new physics scenarios.
Abstract:
Natural selection favors the survival and reproduction of the organisms best adapted to their environment. The selection mechanism in evolutionary algorithms mimics this process, aiming to create environmental conditions in which artificial organisms can evolve to solve the problem at hand. This paper proposes a new selection scheme for evolutionary multiobjective optimization. A key feature of the proposed selection is the similarity measure that defines the concept of neighborhood. Contrary to commonly used approaches, usually defined on the basis of distances between either individuals or weight vectors, it is suggested to consider similarity and neighborhood based on the angle between individuals in the objective space: the smaller the angle, the more similar the individuals. This notion is exploited during the mating and environmental selections. Convergence is ensured by minimizing distances from individuals to a reference point, whereas diversity is preserved by maximizing angles between neighboring individuals. Experimental results reveal a highly competitive performance and useful characteristics of the proposed selection. Its strong diversity-preserving ability allows it to produce significantly better performance on some problems when compared with state-of-the-art algorithms.
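To make the angle-based neighborhood idea concrete, here is an illustrative sketch, not the authors' actual algorithm: similarity is measured as the angle between objective vectors, and an overfull population is truncated by repeatedly removing, from the most similar pair, the individual farther from the reference point (taken here, as an assumption, to be the origin of a minimization objective space).

```python
import numpy as np

def angle(a, b):
    # Angle (radians) between two objective vectors, measured from the origin.
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def environmental_selection(objs, k):
    # Truncate to k individuals: find the pair with the smallest mutual angle
    # (the two most similar individuals) and drop the one with the larger
    # distance to the origin (worse convergence); repeat until k remain.
    keep = list(range(len(objs)))
    while len(keep) > k:
        ang = np.full((len(keep), len(keep)), np.inf)
        for i in range(len(keep)):
            for j in range(i + 1, len(keep)):
                ang[i, j] = ang[j, i] = angle(objs[keep[i]], objs[keep[j]])
        i, j = np.unravel_index(np.argmin(ang), ang.shape)
        worse = i if np.linalg.norm(objs[keep[i]]) > np.linalg.norm(objs[keep[j]]) else j
        keep.pop(worse)
    return keep

# Four individuals on a bi-objective front; the two nearly collinear ones
# form the most similar pair, and the farther of the two is discarded.
objs = np.array([[1.0, 0.0], [0.9, 0.1], [0.5, 0.5], [0.0, 1.0]])
kept = environmental_selection(objs, 3)
```

This captures the trade-off stated in the abstract: angles drive diversity (dissimilar directions survive), while distance to the reference point drives convergence.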
Abstract:
Wild boar (Sus scrofa) and red deer (Cervus elaphus) are the main maintenance hosts for bovine tuberculosis (bTB) in continental Europe. Understanding Mycobacterium tuberculosis complex (MTC) excretion routes is crucial to define strategies to control bTB in free-ranging populations; nevertheless, the available information is scarce. Aiming to fill this gap, four different MTC excretion routes (oronasal, bronchial-alveolar, fecal and urinary) were investigated by molecular methods in naturally infected, hunter-harvested wild boar and red deer. In addition, MTC concentrations were estimated by the Most Probable Number method. MTC DNA was amplified in all types of excretion route, and in at least one excretion route from 83.0% (CI95 70.8-90.8) of wild ungulates with bTB-like lesions. Oronasal or bronchial-alveolar shedding was detected more frequently than fecal shedding (p < 0.001). The majority of shedders yielded MTC concentrations <10³ CFU/g or mL. However, of those ungulates for which oronasal, bronchial-alveolar and fecal samples were available, 28.2% of wild boar (CI95 16.6-43.8) and 35.7% of red deer (CI95 16.3-61.2) yielded MTC concentrations >10³ CFU/g or mL (referred to here as super-shedders). Red deer had a significantly higher risk of being super-shedders than wild boar (OR = 11.8, CI95 2.3-60.2). The existence of super-shedders among naturally infected populations of wild boar and red deer is thus reported here for the first time, and MTC DNA concentrations greater than the minimum infective doses were estimated in excretion samples from both species.
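An odds ratio with a confidence interval of the kind reported above is, in the simplest case, computed from a 2×2 table; the sketch below uses the standard Woolf (log-normal) approximation for the 95% CI. The counts are purely hypothetical placeholders: the paper's OR = 11.8 presumably comes from its own data or model, which cannot be reconstructed from the abstract.

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    # 2x2 table:  group 1: a super-shedders, b others
    #             group 2: c super-shedders, d others
    # Returns the odds ratio and its Woolf (log-normal) 95% confidence interval.
    oratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(oratio) - z * se)
    hi = math.exp(math.log(oratio) + z * se)
    return oratio, lo, hi

# Hypothetical counts only, for illustration:
orat, lo, hi = odds_ratio(12, 18, 4, 26)
```

An interval whose lower bound stays above 1, as in the abstract's CI95 of 2.3-60.2, is what supports calling the between-species difference significant.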