980 results for SEMIEMPIRICAL CALCULATIONS
Abstract:
The hospital pharmacy in large and advanced institutions has evolved from a simple storage and distribution unit into a highly specialized manipulation and dispensation center, responsible for handling hundreds of clinical requests, many of them unique and not obtainable from commercial companies. It was therefore quite natural that in many environments a manufacturing service was gradually established to cater to both conventional and extraordinary demands of the medical staff. That was the case at Hospital das Clinicas, where multiple categories of drugs are routinely produced inside the pharmacy. However, cost-containment imperatives dictate that such activities be reassessed in the light of their efficiency and essentiality. METHODS: In a prospective study, the output of the Manufacturing Service of the Central Pharmacy during a 12-month period was documented and classified into three types. Group I comprised drugs similar to commercially distributed products, Group II included exclusive formulations for routine consumption, and Group III dealt with special demands related to clinical investigations. RESULTS: Findings for the three categories indicated that these groups represented 34.4%, 45.3%, and 20.3% of total manufacturing orders, respectively. Costs of production were assessed and compared with market prices for Group I preparations, indicating savings of 63.5%. When applied to the other groups, for which no direct equivalent in market value existed, these results would suggest total yearly savings of over 5,100,000 US dollars. Even considering that these calculations leave out many components of cost, notably those concerning marketing and distribution, it might still be concluded that at least part of the savings achieved was real. CONCLUSIONS: The observed savings, allied with the convenience and reliability with which the Central Pharmacy performed its obligations, support the contention that internal manufacture of pharmaceutical formulations was a cost-effective alternative in the described setting.
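As a rough, hedged illustration of the extrapolation arithmetic described above, the sketch below applies the Group I saving rate to all three groups; only the group shares (34.4%, 45.3%, 20.3%) and the 63.5% saving rate come from the abstract, while the yearly market-value figure is a hypothetical placeholder.

```python
# Hypothetical illustration of the savings extrapolation; only the group shares
# and the 63.5% Group I saving rate come from the text, the rest is assumed.
group_share = {"I": 0.344, "II": 0.453, "III": 0.203}
saving_rate = 0.635                    # measured for Group I against market prices
yearly_market_value = 8_000_000.0      # placeholder: market value of all orders (US$)

# The study extends the Group I saving rate to Groups II and III, which have
# no direct commercial equivalent.
savings_by_group = {g: yearly_market_value * share * saving_rate
                    for g, share in group_share.items()}
total_savings = sum(savings_by_group.values())
print(f"Estimated total yearly savings: US$ {total_savings:,.0f}")
```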
Abstract:
In order to address and resolve the wastewater contamination problem of the Sines refinery, with the main objective of optimizing the quality of this stream and reducing the costs charged to the refinery, a dynamic mass balance was developed and implemented for ammonia and polar oil and grease (O&G) contamination in the wastewater circuit. The inadequate routing of sour gas from the sour water stripping unit and the kerosene caustic washing unit were identified, respectively, as the major sources of the ammonia and of the polar substances present in the industrial wastewater effluent. For the O&G content, a predictive model was developed for the kerosene caustic washing unit, following the Projection to Latent Structures (PLS) approach. Comparison between the analytical data for ammonia and polar O&G concentrations in the refinery wastewater originating from the Dissolved Air Flotation (DAF) effluent and the predictions of the dynamic mass balance calculations shows very good agreement and highlights the dominant impact of the identified streams on the wastewater contamination levels. The ammonia contamination problem was solved by rerouting the sour gas through an existing line that had been clogged with ammonia salts owing to a non-insulated line section, while for the O&G the dynamic mass balance was implemented as an online tool, which allows possible contamination situations to be anticipated and the required preventive actions to be taken, and can also serve as a basis for relating the O&G contamination in the refinery wastewater to the properties of the refined crude oils and to the process operating conditions. The PLS model developed could be a great asset both in optimizing existing refinery wastewater treatment units or reuse schemes and in designing new ones. In order to find a possible treatment solution for the spent caustic problem, on-site pilot plant experiments on NaOH recovery from the refinery kerosene caustic washing unit effluent, using an alkaline-resistant nanofiltration (NF) polymeric membrane, were performed to evaluate its applicability for treating these highly alkaline and contaminated streams. At constant operating pressure and temperature, and under adequate operating conditions, oil and grease rejection of 99.9% and chemical oxygen demand (COD) rejection of 97.7% were observed. No noticeable membrane fouling or flux decrease was registered up to a volume concentration factor of 3. These results allow NF permeate to be reused instead of fresh caustic and the wastewater contamination to be significantly reduced, which can result in savings of 1.5 M€ per year at current prices for the largest Portuguese oil refinery. The capital investment needed to implement the required NF membrane system is less than 10% of that associated with the traditional wet air oxidation solution to the spent caustic problem. The operating costs are very similar, but can be less than half if the NF concentrate is reused in refinery pH control applications. The payback period was estimated to be 1.1 years. Overall, the pilot plant experimental results obtained and the process economic evaluation data indicate a very competitive solution through the proposed NF treatment process, which represents a highly promising alternative to conventional and existing spent caustic treatment units.
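A minimal sketch of the PLS modelling step mentioned above, using scikit-learn's PLSRegression on synthetic data; the features, data, and number of latent variables are assumptions for illustration, not the refinery model.

```python
# Minimal PLS (Projection to Latent Structures) sketch for predicting polar O&G
# content from process variables. Data and number of latent variables are
# illustrative assumptions; the actual refinery data set is not reproduced here.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))              # e.g. flows, temperatures, caustic strength
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=200)   # synthetic O&G response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=3)        # assumed number of latent variables
pls.fit(X_tr, y_tr)
print("Held-out R^2:", pls.score(X_te, y_te))
```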
Abstract:
Joints play a major role in the structural behaviour of old timber frames [1]. Current standards mainly focus on modern dowel-type joints and usually provide little guidance (with the exception of the German and Swiss NAs) to designers regarding traditional joints. With few exceptions, see e.g. [2], [3], [4], most of the research undertaken today is focused mainly on the reinforcement of dowel-type connections. When considering old carpentry joints, it is neither realistic nor useful to try to describe the behaviour of each and every type of joint. The discussion here is not another attempt to classify or compare joint configurations [5], [6], [7]. Although some classification rules defining different types of carpentry joints exist, their application is difficult because of differences in the way joints are fashioned depending on geographical location and age. In view of this, checking the relevance of the calculations is a mandatory first step. A limited number of carpentry joints, along with some calculation rules and possible strengthening techniques, are presented here.
Abstract:
Integrated master's dissertation in Industrial Engineering and Management
Abstract:
In the present work, the benefits of using graphics processing units (GPUs) to aid the design of complex-geometry profile extrusion dies are studied. For that purpose, a 3D finite-volume based code that employs unstructured meshes to solve and couple the continuity, momentum and energy conservation equations governing the fluid flow, together with a constitutive equation, was used. To evaluate the possibility of reducing the time spent on the numerical calculations, the code was parallelized on the GPU, using a simple programming approach without complex memory manipulations. For verification purposes, simulations were performed for three benchmark problems: Poiseuille flow, lid-driven cavity flow and flow around a cylinder. Subsequently, the code was used in the design of two real-life extrusion dies for the production of a medical catheter and a wood-plastic composite decking profile. To evaluate the benefits, the results obtained with the GPU-parallelized code were compared, in terms of speedup, with those of a serial implementation of the same code that traditionally runs on the central processing unit (CPU). The results show that, even with the simple parallelization approach employed, a significant reduction of the computation times was achieved.
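A minimal sketch of the CPU-versus-GPU comparison idea, using a toy Jacobi-style stencil update that runs with NumPy on the CPU and, when available, with CuPy on the GPU; this is an assumed illustration of a simple parallelization approach, not the finite-volume extrusion code itself.

```python
# Toy CPU/GPU comparison: the same array kernel runs with NumPy (CPU) or CuPy
# (GPU) simply by swapping the array module. Illustrative only; CuPy is assumed
# to be installed for the GPU path.
import time
import numpy as np

def jacobi_step(u):
    """One Jacobi relaxation sweep on a 2D grid (works for NumPy or CuPy arrays)."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

def run(xp, n=2048, iters=200):
    u = xp.zeros((n, n), dtype=xp.float64)
    u[0, :] = 1.0                          # fixed boundary condition
    t0 = time.perf_counter()
    for _ in range(iters):
        u = jacobi_step(u)
    if xp is not np:
        xp.cuda.Device().synchronize()     # wait for queued GPU work to finish
    return time.perf_counter() - t0

cpu_time = run(np)
try:
    import cupy as cp
    gpu_time = run(cp)
    print(f"CPU {cpu_time:.2f} s, GPU {gpu_time:.2f} s, "
          f"speedup {cpu_time / gpu_time:.1f}x")
except ImportError:
    print(f"CPU {cpu_time:.2f} s (CuPy not available, GPU run skipped)")
```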
Abstract:
The monitoring data collected during tunnel excavation can be used in inverse analysis procedures in order to identify more realistic geomechanical parameters and thereby increase the knowledge about the formations of interest. These more realistic parameters can be used in real time to adapt the design to the actual in situ behaviour of the structure. However, monitoring plans are normally designed for safety assessment and not specifically for the purpose of inverse analysis. In fact, there is a lack of knowledge about what types and quantities of measurements are needed to succeed in identifying the parameters of interest. The optimisation algorithm chosen for the identification procedure may also be important in this respect. In this work, the problem is addressed using a theoretical case for which a thorough parametric study was carried out with two optimisation algorithms based on different calculation paradigms, namely a conventional gradient-based algorithm and an evolution strategy algorithm. Calculations were carried out for different sets of parameters to be identified and for several combinations of types and amounts of monitoring data. The results clearly show the high importance of the available monitoring data and of the chosen algorithm for the success rate of the inverse analysis process.
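A minimal sketch of an inverse analysis with two calculation paradigms, recovering synthetic parameters from simulated monitoring data with SciPy; L-BFGS-B stands in for the gradient-based algorithm and differential evolution for the evolution strategy, and the toy forward model and parameter names are assumptions, not the study's tunnel model.

```python
# Illustrative inverse analysis: recover two parameters (E, K0) from synthetic
# "monitoring" displacements. L-BFGS-B stands in for the gradient-based
# algorithm and differential evolution for the evolutionary one; the forward
# model below is a made-up stand-in for the real tunnel model.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def forward_model(params, depths):
    """Toy forward model: displacement profile as a function of (E, K0)."""
    E, K0 = params
    return (1000.0 / E) * (1.0 + K0 * np.exp(-depths / 10.0))

depths = np.linspace(0.0, 30.0, 10)
true_params = np.array([50.0, 0.8])                        # assumed "ground truth"
observed = forward_model(true_params, depths)
observed += np.random.default_rng(1).normal(0.0, 0.01, observed.shape)  # noise

def misfit(params):
    return float(np.sum((forward_model(params, depths) - observed) ** 2))

bounds = [(10.0, 200.0), (0.3, 2.0)]
grad_result = minimize(misfit, x0=[100.0, 1.0], method="L-BFGS-B", bounds=bounds)
evo_result = differential_evolution(misfit, bounds, seed=1)
print("gradient-based estimate:", grad_result.x)
print("evolutionary estimate:  ", evo_result.x)
```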
Abstract:
The observational method in tunnel engineering allows the actual conditions of the ground to be evaluated in real time and measures to be taken if its behavior deviates considerably from predictions. However, it lacks a consistent and structured methodology for using the monitoring data to adapt the support system in real time. Limit criteria above which adaptation is required are not defined, and complex inverse analysis procedures (Rechea et al. 2008, Levasseur et al. 2010, Zentar et al. 2001, Lecampion et al. 2002, Finno and Calvello 2005, Goh 1999, Cui and Pan 2012, Deng et al. 2010, Mathew and Lehane 2013, Sharifzadeh et al. 2012, 2013) may be needed to analyze the problem consistently. In this paper, a methodology for the real-time adaptation of the support system during tunneling is presented. In a first step, limit criteria for displacements and stresses are proposed. The methodology uses graphics constructed during the project stage, based on parametric calculations, to assist in the process; when these graphics are not available, since it is not possible to predict every possible scenario, inverse analysis calculations are carried out. The methodology is applied to the “Bois de Peu” tunnel, which is composed of two tubes, each over 500 m long. High uncertainty levels existed concerning the heterogeneity of the soil and, consequently, the geomechanical design parameters. The methodology was applied in four sections and the results focus on two of them. It is shown that the methodology has the potential to be applied in real cases, contributing to a consistent approach to real-time adaptation of the support system, and the results highlight the importance of good-quality, purpose-specific monitoring data for improving the inverse analysis procedure.
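A minimal sketch of the limit-criterion decision step described above, comparing a monitored displacement against assumed warning and alarm thresholds and flagging an inverse analysis when the project-stage parametric graphics do not cover the observed value; all threshold values and names are hypothetical, not the criteria proposed in the paper.

```python
# Illustrative check of a monitored displacement against limit criteria.
# Threshold values and the chart coverage range are placeholders.
WARNING_MM = 20.0   # assumed warning level for crown settlement (mm)
ALARM_MM = 35.0     # assumed alarm level for crown settlement (mm)

def assess_section(measured_mm, chart_range_mm):
    """Suggest an action for one monitored tunnel section."""
    low, high = chart_range_mm            # displacement range covered by the
                                          # project-stage parametric graphics
    if measured_mm >= ALARM_MM:
        return "adapt the support system immediately"
    if measured_mm >= WARNING_MM:
        if low <= measured_mm <= high:
            return "select an adapted support solution from the parametric graphics"
        return "run an inverse analysis to update the geomechanical parameters"
    return "no action: continue monitoring"

print(assess_section(measured_mm=28.0, chart_range_mm=(5.0, 25.0)))
```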
Abstract:
Various differential cross-sections are measured in top-quark pair (tt¯) events produced in proton--proton collisions at a centre-of-mass energy of √s = 7 TeV at the LHC with the ATLAS detector. These differential cross-sections are measured in a data set corresponding to an integrated luminosity of 4.6 fb−1. The differential cross-sections are presented in terms of kinematic variables of a top-quark proxy referred to as the pseudo-top-quark, whose dependence on theoretical models is minimal. The pseudo-top-quark can be defined in terms of either reconstructed detector objects or stable particles in an analogous way. The measurements are performed on tt¯ events in the lepton+jets channel, requiring exactly one charged lepton and at least four jets, with at least two of them tagged as originating from a b-quark. The hadronic and leptonic pseudo-top-quarks are defined via the leptonic or hadronic decay mode of the W boson produced by the top-quark decay in events with a single charged lepton. The cross-section is measured as a function of the transverse momentum and rapidity of both the hadronic and leptonic pseudo-top-quark, as well as the transverse momentum, rapidity and invariant mass of the pseudo-top-quark pair system. The measurements are corrected for detector effects and are presented within a kinematic range that closely matches the detector acceptance. Differential cross-section measurements of the pseudo-top-quark variables are compared with several Monte Carlo models that implement next-to-leading-order or leading-order multi-leg matrix-element calculations.
Abstract:
Zn1−xCoxO films with different Co concentrations (with x = 0.00, 0.10, 0.15, and 0.30) were grown by the pulsed laser deposition (PLD) technique. The structural and optical properties of the films were investigated by grazing incidence X-ray diffraction (GIXRD), Raman spectroscopy and photoluminescence (PL). The magnetic properties were measured by conventional magnetometry using a SQUID and simulated by ab initio calculations using the Korringa–Kohn–Rostoker (KKR) method combined with the coherent potential approximation (CPA). The effect of Co-doping on the GIXRD and Raman peak positions, shapes and intensities is discussed. PL studies demonstrate that Co-doping induces a decrease of the bandgap energy and quenching of the UV emission. They also suggest the presence of Zn interstitials when x ≥ 0.15. The 10% Co-doped ZnO film shows ferromagnetism at 390 K with a spontaneous magnetic moment of ≈4×10−5 emu and a coercive field of ≈0.17 kOe. The origin of the ferromagnetism is explained based on the calculations using the KKR method.
Abstract:
The building sector is one of Europe's main energy consumers, making buildings an important target for wiser energy use, improving indoor comfort conditions and reducing energy consumption. To achieve the European Union targets for energy consumption and carbon reductions, it is crucial to act not only on new buildings but also on existing ones, which constitute the majority of the building stock. In existing buildings, significantly improving efficiency requires substantial investment. Therefore, costs are a major concern in the decision-making process, and analysing the cost-effectiveness of the interventions is an important guide for selecting among the different renovation scenarios. The Portuguese thermal legislation adopts the simple payback method for calculating the time for the return of the investment. However, this method does not take into consideration inflation, cash flows and the cost of capital, nor the future costs of energy and the lifetime of the building elements, as a life cycle cost analysis does. In order to understand the impact of the economic analysis method on the choice of renovation measures, a case study was analysed using both simple payback calculations and life cycle cost analysis. Overall, the results show that less far-reaching renovation measures are indicated when simple payback calculations are used, which may lead to solutions that are less cost-effective in the long run.
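A minimal sketch contrasting the two methods discussed above: simple payback divides the investment by the first-year saving, while a life cycle cost view discounts future savings and lets energy prices escalate over the element's lifetime. All numeric values are illustrative placeholders, not the case-study data.

```python
# Simple payback vs. a discounted life cycle view of one renovation measure.
# Investment, saving, rates and lifetime are illustrative placeholders.
investment = 12_000.0       # initial cost of the renovation measure (EUR)
annual_saving = 900.0       # first-year energy cost saving (EUR/yr)
discount_rate = 0.04        # assumed cost of capital
energy_escalation = 0.03    # assumed yearly growth of energy prices
lifetime = 30               # assumed service life of the building element (years)

simple_payback = investment / annual_saving   # ignores discounting and escalation

# Net present value of the savings over the element's lifetime.
npv_savings = sum(
    annual_saving * (1 + energy_escalation) ** (t - 1) / (1 + discount_rate) ** t
    for t in range(1, lifetime + 1)
)
print(f"Simple payback: {simple_payback:.1f} years")
print(f"NPV of savings over {lifetime} years: {npv_savings:,.0f} EUR "
      f"(vs. investment of {investment:,.0f} EUR)")
```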
Abstract:
The inclusive jet cross-section is measured in proton--proton collisions at a centre-of-mass energy of 7 TeV using a data set corresponding to an integrated luminosity of 4.5 fb−1 collected with the ATLAS detector at the Large Hadron Collider in 2011. Jets are identified using the anti-kt algorithm with radius parameter values of 0.4 and 0.6. The double-differential cross-sections are presented as a function of the jet transverse momentum and the jet rapidity, covering jet transverse momenta from 100 GeV to 2 TeV. Next-to-leading-order QCD calculations corrected for non-perturbative effects and electroweak effects, as well as Monte Carlo simulations with next-to-leading-order matrix elements interfaced to parton showering, are compared to the measured cross-sections. A quantitative comparison of the measured cross-sections to the QCD calculations using several sets of parton distribution functions is performed.
Abstract:
A measurement of W boson production in lead-lead collisions at √s_NN = 2.76 TeV is presented. It is based on the analysis of data collected with the ATLAS detector at the LHC in 2011 corresponding to an integrated luminosity of 0.14 nb−1 and 0.15 nb−1 in the muon and electron decay channels, respectively. The differential production cross-sections and lepton charge asymmetry are each measured as a function of the average number of participating nucleons ⟨Npart⟩ and absolute pseudorapidity of the charged lepton. The results are compared to predictions based on next-to-leading-order QCD calculations. These measurements are, in principle, sensitive to possible nuclear modifications to the parton distribution functions and also provide information on scaling of W boson production in multi-nucleon systems.
Abstract:
Double-differential three-jet production cross-sections are measured in proton--proton collisions at a centre-of-mass energy of √s = 7 TeV using the ATLAS detector at the Large Hadron Collider. The measurements are presented as a function of the three-jet mass (mjjj), in bins of the sum of the absolute rapidity separations between the three leading jets (|Y∗|). Invariant masses extending up to 5 TeV are reached for 8 < |Y∗| < 10. These measurements use a sample of data recorded using the ATLAS detector in 2011, which corresponds to an integrated luminosity of 4.51 fb−1. Jets are identified using the anti-kt algorithm with two different jet radius parameters, R = 0.4 and R = 0.6. The dominant uncertainty in these measurements comes from the jet energy scale. Next-to-leading-order QCD calculations corrected to account for non-perturbative effects are compared to the measurements. Good agreement is found between the data and the theoretical predictions based on most of the available sets of parton distribution functions, over the full kinematic range, covering almost seven orders of magnitude in the measured cross-section values.
Abstract:
This paper presents cross sections for the production of a W boson in association with jets, measured in proton--proton collisions at √s = 7 TeV with the ATLAS experiment at the Large Hadron Collider. With an integrated luminosity of 4.6 fb−1, this data set allows for an exploration of a large kinematic range, including jet production up to a transverse momentum of 1 TeV and multiplicities up to seven associated jets. The production cross sections for W bosons are measured in both the electron and muon decay channels. Differential cross sections for many observables are also presented, including measurements of the jet observables such as the rapidities and the transverse momenta as well as measurements of event observables such as the scalar sums of the transverse momenta of the jets. The measurements are compared to numerous QCD predictions including next-to-leading-order perturbative calculations, resummation calculations and Monte Carlo generators.