987 results for Radiometric calibration
Abstract:
Dissertation submitted to obtain the degree of Master in Chemical and Biochemical Engineering
Abstract:
This work is divided into two distinct parts. The first part is a study of the metal organic framework UiO-66Zr, where the aim was to determine the force field that best describes the adsorption equilibrium properties of two different gases, methane and carbon dioxide. The second part focuses on the topology of single-wall carbon nanotubes for ethane adsorption; here the aim was to simplify the solid-fluid force field model as much as possible in order to increase the computational efficiency of the Monte Carlo simulations. Both adsorbents were chosen for their potential use in adsorption processes such as carbon dioxide capture and storage, natural gas storage, separation of biogas components, and olefin/paraffin separations. The adsorption studies on the two porous materials were performed by molecular simulation using the grand canonical Monte Carlo (μ,V,T) method, over the temperature range 298-343 K and the pressure range 0.06-70 bar. Calibration curves of pressure and density as functions of chemical potential and temperature for the three adsorbates under study were obtained by Monte Carlo simulation in the canonical ensemble (N,V,T); polynomial fitting and interpolation of the resulting data made it possible to determine the pressure and gas density at any chemical potential. The adsorption equilibria of methane and carbon dioxide in UiO-66Zr were simulated and compared with the experimental data obtained by Jasmina H. Cavka et al. The results show that the best force field for both gases is a chargeless united-atom force field based on the TraPPE model. Using this validated force field, it was possible to estimate the isosteric heats of adsorption and the Henry constants. From the grand canonical Monte Carlo simulations of carbon nanotubes, we conclude that the fastest runs are obtained with a force field that approximates the nanotube as a smooth cylinder; this approximation gives execution times 1.6 times faster than typical atomistic runs.
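As a rough illustration of the calibration-curve step described above, the sketch below fits pressure as a polynomial in the chemical potential at fixed temperature and evaluates it at an arbitrary point; the data values and the choice of fitting log-pressure are illustrative assumptions, not the (N,V,T) output of the thesis.

    import numpy as np

    # chemical potential (kJ/mol) and simulated pressure (bar) at one temperature
    mu = np.array([-38.0, -36.0, -34.0, -32.0, -30.0])
    p = np.array([0.06, 0.45, 2.9, 14.0, 70.0])

    # fit log-pressure for smoothness over several decades of pressure
    coeffs = np.polyfit(mu, np.log(p), deg=3)
    pressure_of_mu = lambda m: np.exp(np.polyval(coeffs, m))

    print(pressure_of_mu(-33.0))  # pressure at an arbitrary chemical potential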
Abstract:
Enhanced biological phosphorus removal (EBPR) is the most economical and sustainable option used in wastewater treatment plants (WWTPs) for phosphorus removal. In this process it is important to control the competition between polyphosphate accumulating organisms (PAOs) and glycogen accumulating organisms (GAOs), since EBPR deterioration or failure can be related to the proliferation of GAOs over PAOs. This thesis focuses on the effect of operational conditions (volatile fatty acid (VFA) composition, dissolved oxygen (DO) concentration and organic carbon loading) on PAO and GAO metabolism. Knowledge of the effect of these operational conditions on EBPR metabolism is very important, since they are key factors affecting WWTP performance and sustainability. Substrate competition between the anaerobic uptake of acetate and propionate (the main VFAs present in WWTPs) was shown in this work to be a relevant factor affecting PAO metabolism, and a metabolic model was developed that successfully describes this effect. Interestingly, the aerobic metabolism of PAOs was not affected by different VFA compositions, since the aerobic kinetic parameters for phosphorus uptake, polyhydroxyalkanoate (PHA) degradation and glycogen production were relatively independent of the acetate or propionate concentration. This is very relevant for WWTPs, since it simplifies the calibration procedure for metabolic models, facilitating their use in full-scale systems. The DO concentration and the aerobic hydraulic retention time (HRT) affected the PAO-GAO competition, with low DO levels or a shorter aerobic HRT being more favourable to PAOs than to GAOs. Indeed, the oxygen affinity coefficient was significantly higher for GAOs than for PAOs, showing that PAOs are far better at scavenging the often limited oxygen available in WWTPs. Operating WWTPs with low aeration is of high importance for full-scale systems, since it decreases energy costs and can potentially improve WWTP sustainability. Extended periods of low organic carbon load, the most common condition in full-scale WWTPs, also had an impact on PAO and GAO activity. GAOs exhibited a substantially higher biomass decay rate than PAOs under these conditions, revealing a higher survival capacity for PAOs and representing an advantage for PAOs in EBPR processes. This superior survival capacity of PAOs under conditions closely resembling a full-scale environment was linked to their ability to maintain a residual level of PHA reserves for longer than GAOs, providing them with an effective energy source for aerobic maintenance processes. Overall, this work shows that each of these key operational conditions plays an important role in the PAO-GAO competition and should be considered in WWTP models in order to improve EBPR processes.
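The "oxygen affinity coefficient" above is the half-saturation constant of Monod-type uptake kinetics; the following minimal sketch, with purely illustrative K_O values (a higher K_O means a lower affinity), shows why a lower half-saturation constant favours PAOs at low DO.

    def monod_rate(do, r_max=1.0, k_o=0.2):
        """Specific oxygen uptake rate as a function of dissolved oxygen (mg O2/L)."""
        return r_max * do / (k_o + do)

    for do in (0.5, 2.0, 8.0):
        pao = monod_rate(do, k_o=0.1)  # hypothetical PAO half-saturation constant
        gao = monod_rate(do, k_o=0.6)  # hypothetical GAO half-saturation constant
        print(f"DO={do}: PAO rate {pao:.2f}, GAO rate {gao:.2f}")

With these assumed constants, the PAO rate is roughly 1.8 times the GAO rate at DO = 0.5 mg/L, while the two rates are nearly equal at DO = 8.0 mg/L, mirroring the advantage of PAOs under low aeration.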
Abstract:
With technology evolving and our lives becoming more stressful and overloaded, interest in having a fully automated house has been increasing over the years. An automation system provides a way to simplify some daily tasks, allowing us more spare time for the activities where we are really needed. Some systems in this domain try to implement these characteristics, but this kind of technology is still at an early stage of evolution and remains far from empowering the user with the desired control over a habitation. The reason is that such systems lack important features such as adaptability, extensibility and the capacity to evolve. Developed from a bottom-up approach, they are often tailored to programmers and domain experts, and most of the time they disregard the end users, who are left with unfinished interfaces or products they find difficult to control. Moreover, complex behaviours are avoided, since they are extremely difficult to implement, mostly because of the need to handle priorities, conflicts and device calibration. In addition, these solutions are only available at very high cost, and they remain difficult for non-technical people to configure once in operation. As a result, it is necessary to create a tool that allows the execution of several automated actions, with an interface that is easy to use but at the same time supports all the main features of this domain. It is also desirable that this tool be hardware-independent so it can be reused; a Model-Driven Development (MDD) approach is therefore the ideal option, as it is a method that follows those principles. Since the automation domain has some very specific concepts, the use of models should be combined with a Domain-Specific Language (DSL). With these two methods it is possible to create a solution adapted to end users, but also to domain experts and programmers, thanks to the several levels of abstraction that can be added to reduce the complexity of use. The aim of this thesis is to design a Domain-Specific Language (DSL) that follows the Model-Driven Development (MDD) approach and supports Home Automation (HA) concepts. In this implementation, support for the development of both simple and complex scenarios is one of the most important concerns. The DSL should also support other significant features of this domain, such as task scheduling, which is limited in the currently existing solutions.
Abstract:
Since the invention of photography, humans have used images to capture, store and analyse the events they are interested in. With developments in this field, assisted by better computers, it is now possible to use image processing as an accurate method of analysis and measurement. The principal qualities of image processing are its flexibility, its adaptability and its ability to process large amounts of information easily and quickly. Successful examples of applications can be seen in several areas of human life, such as biomedicine, industry, surveillance, the military and mapping; indeed, several Nobel prizes are related to imaging. The accurate measurement of deformations, displacements, strain fields and surface defects is challenging in many Civil Engineering material tests, because these measurements traditionally require complex and expensive equipment as well as time-consuming calibration. Image processing can be an inexpensive and effective tool for load-displacement measurements: with an adequate image acquisition system, and taking advantage of the computational power of modern computers, it is possible to measure very small displacements with high precision. Several commercial software packages are already on the market, but at high cost. In this work, block-matching algorithms are used to compare the results of image processing with the data obtained from physical transducers during laboratory load tests. To test the proposed solutions, several load tests were carried out in partnership with researchers from the Civil Engineering Department at Universidade Nova de Lisboa (UNL).
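A minimal sketch of the sum-of-absolute-differences (SAD) block matching underlying such measurements, assuming two grayscale frames stored as NumPy arrays; the function name and parameters are illustrative, not those of the dissertation.

    import numpy as np

    def match_block(ref, cur, top, left, size=32, search=16):
        """Find the displacement of the block ref[top:top+size, left:left+size]
        in the current frame by exhaustive SAD search within +/- search pixels."""
        block = ref[top:top + size, left:left + size].astype(np.float64)
        best, best_dy, best_dx = np.inf, 0, 0
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + size > cur.shape[0] or x + size > cur.shape[1]:
                    continue
                cand = cur[y:y + size, x:x + size].astype(np.float64)
                sad = np.abs(block - cand).sum()
                if sad < best:
                    best, best_dy, best_dx = sad, dy, dx
        return best_dy, best_dx  # displacement in pixels

Sub-pixel precision, as needed for very small displacements, is typically obtained by interpolating the SAD surface around the integer minimum.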
Abstract:
The “CMS Safety Closing Sensors System” (SCSS, or CSS for brevity) is a remote monitoring system designed to monitor safety clearances and the tight mechanical movements of parts of the CMS detector, especially during CMS assembly phases. We present the different systems that make up the SCSS: its sensor technologies, the readout system, and the data acquisition and control software. We also report on calibration and installation details, which determine the resolution and limits of the system. We further present our experience from operating the system and from the analysis of the data collected since 2008. Special emphasis is given to studying positioning reproducibility during detector assembly and to understanding how magnetic fields influence the detector structure.
Abstract:
This paper investigates whether the market risk capital charge for the trading book increased under Basel III compared with Basel II. I show that the capital charge rises by 232% and 182% under the standardized and internal model approaches, respectively. The varying liquidity horizons, the calibration to a stress period, the introduction of credit spread risk, the restrictions on correlations across risk categories and the incremental default charge all boost the Basel III requirements. Nevertheless, the impact of the expected shortfall at 97.5% is low, and long-term shocks decrease the charge. The standardized approach presents both advantages and disadvantages relative to internal models.
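For reference, a minimal sketch of the 97.5% expected shortfall mentioned above, computed on a simulated loss sample; the heavy-tailed sample is an illustrative stand-in for trading-book P&L, not data from the paper.

    import numpy as np

    def expected_shortfall(losses, alpha=0.975):
        """Mean of the losses at or beyond the alpha-quantile (losses as positive numbers)."""
        losses = np.asarray(losses, dtype=float)
        var = np.quantile(losses, alpha)  # value-at-risk at the alpha level
        return losses[losses >= var].mean()

    rng = np.random.default_rng(0)
    sample = rng.standard_t(df=5, size=10_000)  # heavy-tailed daily-loss proxy
    print(expected_shortfall(sample))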
Abstract:
The main purpose of this dissertation is to simulate the response of fibre-grout-strengthened RC panels subjected to blast effects using the Applied Element Method, in order to validate and verify its applicability. To this end, four experimental models, three of which were strengthened with a cement-based grout, each reinforced with one type of steel reinforcement, were tested against blast effects. After calibration of the experimental set-up, it was possible to obtain and compare the blast response of the model without strengthening (reference model) and of a fibre-grout-strengthened RC panel (strengthened model). Afterwards, a numerical model of the reference model was created in the commercial software Extreme Loading for Structures, which is based on the Applied Element Method, and calibrated against the experimental results, namely the residual displacement obtained by the experimental monitoring system. With the calibration verified, one can assume that the numerical model correctly predicts the response of fibre-grout RC panels subjected to blast effects. To verify this assumption, the strengthened model was modelled and subjected to the blast effects of the corresponding experimental set-up. The comparison between the residual and maximum displacements and the bottom-surface cracking obtained in the experimental and numerical tests yields a difference of 4% for the maximum displacements of the reference model, and differences of 4% and 10% for the residual and maximum displacements of the strengthened model, respectively. Additionally, the cracking on the bottom surface of the models was similar in both methods. One can therefore conclude that the Applied Element Method can correctly predict and simulate the response of fibre-grout-strengthened RC panels subjected to blast effects.
Abstract:
Within the civil engineering field, the Finite Element Method has acquired significant importance, since numerical simulations are employed in a broad range of applications encompassing the design, analysis and prediction of the structural behaviour of constructions and infrastructures. Nevertheless, these mathematical simulations can only be useful if the mechanical properties of the materials, the boundary conditions and any damage are properly modelled. This requires not only experimental data (static and/or dynamic tests) to provide reference parameters, but also robust calibration methods able to model damage and other special structural conditions. The present paper addresses the model calibration of a footbridge tested with static loads and ambient vibrations. Damage assessment was also carried out using a hybrid numerical procedure that combines discrete damage functions with sets of piecewise linear damage functions. The results of the model calibration show that the model reproduces the experimental behaviour of the bridge with good accuracy.
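As a hedged illustration of what a piecewise linear damage function can look like in such a calibration (the control points and the stiffness-reduction form are assumptions, not the paper's values):

    import numpy as np

    x_ctrl = [0.0, 10.0, 20.0, 30.0]  # control points along the deck (m)
    d_ctrl = [0.0, 0.35, 0.10, 0.0]   # relative stiffness loss at each point

    damage = lambda x: np.interp(x, x_ctrl, d_ctrl)       # piecewise linear damage
    EI_eff = lambda x, EI0=1.0: (1.0 - damage(x)) * EI0   # effective bending stiffness

    print(EI_eff(np.array([5.0, 15.0, 25.0])))

In the calibration, the ordinates d_ctrl would be the unknowns, updated until the numerical response matches the measured static and dynamic behaviour.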
Abstract:
The objective of this article is to assess the influence of urban geometry on the intensity of nocturnal heat islands using a computational tool developed as a GIS extension. The method is divided into three main stages: development of the tool, calibration of the model, and simulation of hypothetical scenarios with different urban geometries. A simplified model relating maximum urban heat island intensities (UHImax) to urban geometry was incorporated into the calculation subroutine and subsequently adapted to produce results closer to the reality of two Brazilian cities, which served as the basis for the calibration of the model. The comparison between measured and simulated data showed a difference in the increase of UHImax as a function of the H/W ratio and of the roughness length range (Z0). With the tool calibrated, different urban scenarios were simulated, demonstrating that the original simplified model underestimates UHImax values for urban canyon configurations with Z0 < 2.0 and overestimates UHImax values for configurations with Z0 ≥ 2.0. In addition, this study contributes the finding that urban canyons with larger facade areas and more heterogeneous building heights yield lower UHImax values than more homogeneous canyons with larger average built-up areas, for the same H/W ratio. This difference can be explained by the different effects of urban geometry on wind turbulence and on shaded areas.
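The abstract does not give the form of the simplified model; the classical canyon-geometry relation of this kind is Oke's (1981) empirical formula, sketched below purely as an illustration of how UHImax can be linked to the H/W ratio.

    import math

    def uhi_max_oke(h_over_w):
        """Maximum nocturnal urban heat island intensity (deg C), Oke (1981)."""
        return 7.54 + 3.97 * math.log(h_over_w)

    for hw in (0.5, 1.0, 2.0):
        print(hw, round(uhi_max_oke(hw), 2))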
Abstract:
The Our Lady of Conception church is located in the village of Monforte (Portugal) and is no longer in use. The church presents structural damage and, consequently, a study was carried out. The study involved a damage survey, dynamic identification tests under ambient vibration, and numerical analysis. The church consists of the central nave, the chancel, the sacristy and the corridor giving access to the pulpit. The masonry walls have different thicknesses, namely 0.65 m in the chancel, 0.70 m in the sacristy, 0.92 m in the central nave and 0.65 m in the corridor, and include 8 buttresses of different dimensions. The total longitudinal and transversal dimensions of the church are 21.10 m and 14.26 m, respectively. The damage survey showed that, in general, the masonry walls are in good condition, with the exception of the transversal walls of the nave, which present severe cracks. The arches of the vault also present severe cracks along the central nave; as a consequence, water infiltration has increased the degradation of the vault and of the paintings. Furthermore, the foundations present settlements in the southwest direction. The dynamic identification tests were carried out under ambient wind excitation using 12 high-sensitivity piezoelectric accelerometers, and made it possible to estimate the dynamic properties of the church, namely its frequencies, mode shapes and damping ratios. A FEM numerical model was prepared and calibrated based on the first four experimental modes estimated in the dynamic identification tests. The average error between the experimental and numerical frequencies of the first four modes is equal to 5%. After calibration of the numerical model, pushover analyses with a load pattern proportional to the mass were performed in the transversal and longitudinal directions of the church. The results of the numerical analyses lead to the conclusion that the most vulnerable direction of the church is the transversal one, with a maximum load factor equal to 0.35.
Abstract:
A numerical approach to simulating the behaviour of timber shear walls under both static and dynamic loading is proposed. Because the behaviour of timber shear walls hinges on the behaviour of their nail connections, the force-displacement behaviour of the sheathing-to-framing nail connections is first determined and then used to define the hysteretic properties of finite elements representing these connections. The model nails are subsequently implemented into model walls. The model walls are verified against experimental results for both monotonic and cyclic loading. It is demonstrated that the complex hysteretic behaviour of timber shear walls can be reasonably represented using model shear walls in which nonlinear material failure is concentrated only at the sheathing-to-framing nail connections.
Abstract:
In this study, we concentrate on modelling gross primary productivity using two simple approaches to simulate canopy photosynthesis: "big leaf" and "sun/shade" models. Two approaches to calibration are used: scaling up canopy photosynthetic parameters from the leaf to the canopy level, and fitting the canopy biochemistry to eddy covariance fluxes. The models are validated using eddy covariance data from the LBA site C14. Comparing the performance of the two models, we conclude that both numerically (in terms of goodness of fit) and qualitatively (in terms of the residual response to different environmental variables) the sun/shade model does the better job. Compared with the sun/shade model, the big leaf model shows a lower goodness of fit, fails to respond to variations in the diffuse fraction, and has skewed responses to temperature and VPD. The separate treatment of sun and shade leaves, combined with the separation of the incoming light into direct-beam and diffuse components, makes sun/shade a strong modelling tool that captures more of the observed variability in canopy fluxes as measured by eddy covariance. In conclusion, the sun/shade approach is a relatively simple and effective tool for modelling photosynthetic carbon uptake that could easily be included in many terrestrial carbon models.
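A minimal sketch of the sunlit/shaded canopy split that distinguishes two-leaf ("sun/shade") models from big leaf models, after the standard de Pury and Farquhar formulation; the extinction coefficient and LAI value are illustrative assumptions, not those of the study.

    import numpy as np

    def sun_shade_lai(lai, kb=0.5):
        """Partition leaf area index into sunlit and shaded fractions;
        kb is the extinction coefficient for direct-beam radiation."""
        lai_sun = (1.0 - np.exp(-kb * lai)) / kb  # sunlit leaf area
        lai_shade = lai - lai_sun                 # the remainder is shaded
        return lai_sun, lai_shade

    print(sun_shade_lai(6.0))  # e.g. a dense tropical canopy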
Abstract:
[Excerpt] The advantages resulting from the use of numerical modelling tools to support the design of processing equipment are almost consensual, and the design of calibration systems in profile extrusion is no exception. However, the complex geometries and heat exchange phenomena involved in this process require numerical solvers able to model the heat exchange in more than one domain (calibrator and polymer), to ensure compatibility of the heat transfer at the profile-calibrator interface, and to deal with complex geometries. The combination of all these features is usually hard to find in commercial software. Moreover, the size of the meshes required to obtain accurate results leads to computational times that are prohibitive for industrial application. (...)
Abstract:
The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector using proton–proton collision data at a centre-of-mass energy of √s = 7 TeV, corresponding to an integrated luminosity of 4.7 fb⁻¹. Jets are reconstructed from energy deposits forming topological clusters of calorimeter cells using the anti-kt algorithm with distance parameters R = 0.4 or R = 0.6, and are calibrated using MC simulations. A residual JES correction is applied to account for differences between data and MC simulations. This correction and its systematic uncertainty are estimated using a combination of in situ techniques exploiting the transverse momentum balance between a jet and a reference object such as a photon or a Z boson, for 20 ≤ p_T^jet < 1000 GeV and pseudorapidities |η| < 4.5. The effect of multiple proton–proton interactions is corrected for, and an uncertainty is evaluated using in situ techniques. The smallest JES uncertainty, of less than 1%, is found in the central calorimeter region (|η| < 1.2) for jets with 55 ≤ p_T^jet < 500 GeV. For central jets at lower p_T, the uncertainty is about 3%. A consistent JES estimate is found using measurements of the calorimeter response of single hadrons in proton–proton collisions and test-beam data, which also provide the estimate for p_T^jet > 1 TeV. The calibration of forward jets is derived from dijet p_T balance measurements. The resulting uncertainty reaches its largest value of 6% for low-p_T jets at |η| = 4.5. Additional JES uncertainties due to specific event topologies, such as close-by jets or selections of event samples with an enhanced content of jets originating from light quarks or gluons, are also discussed. The magnitude of these uncertainties depends on the event sample used in a given physics analysis, but typically amounts to 0.5–3%.
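A minimal sketch of the in situ balance idea behind the residual correction, reduced to a single γ+jet ratio; in practice ATLAS uses data/MC double ratios, and the numbers below are illustrative, not ATLAS data.

    import numpy as np

    pt_ref = np.array([45.0, 80.0, 120.0, 250.0])  # reference photon pT (GeV)
    pt_jet = np.array([42.3, 76.1, 115.8, 243.0])  # matched jet pT (GeV)

    response = pt_jet / pt_ref       # jet response relative to the reference object
    residual_scale = 1.0 / response  # correction factor applied to the jet pT
    print(residual_scale)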