996 results for process calculation
Abstract:
The aim of this thesis is to determine the effects of the lignin separation process on the pulp mill chemical balance, especially the sodium/sulphur balance. The objective is to develop a simulation model with the WinGEMS process simulator and to use that model to simulate the chemical balances and process changes. The literature part explains what lignin is and how kraft pulp is produced. It also introduces the methods that can be used to extract lignin from the black liquor stream and how those methods affect the pulping process. In the experimental part, seven different cases are simulated with the created simulation model. The simulations are based on a selected reference mill that produces 500,000 tons of bleached air-dried (90%) pulp per year. The simulations include the chemical balance calculation and the estimated production increase. Based on the simulations, the heat load of the recovery boiler can be reduced and the pulp production increased when lignin is extracted. The simulations showed that decreasing the waste acid intake from the chlorine dioxide plant is an effective method for controlling the sulphidity level when about 10% of the lignin is extracted. At higher lignin removal rates, in-mill sulphuric acid production was found to be the better alternative for sulphidity control.
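For reference, sulphidity (the quantity being controlled above) is usually defined on the white liquor as the ratio of sodium sulphide to sodium hydroxide plus sodium sulphide, with all concentrations expressed as Na2O equivalents. The exact convention used in the thesis is not stated in the abstract, so the following is only the common textbook form:

```latex
\text{Sulphidity (\%)} \;=\; \frac{[\mathrm{Na_2S}]}{[\mathrm{NaOH}] + [\mathrm{Na_2S}]} \times 100
\qquad \text{(concentrations expressed as } \mathrm{Na_2O} \text{ equivalents)}
```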
Abstract:
The authors focus on one of the methods for connection acceptance control (CAC) in an ATM network: the convolution approach. With the aim of reducing the cost in terms of calculation and storage requirements, they propose the use of the multinomial distribution function. This permits direct computation of the associated probabilities of the instantaneous bandwidth requirements, which in turn makes a simple deconvolution process possible. Moreover, under certain conditions, additional improvements may be achieved.
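As a minimal sketch of the convolution approach (not the authors' implementation), the aggregate instantaneous-bandwidth distribution of independent sources can be built by convolving per-source distributions; with identical sources the same distribution follows directly from a binomial/multinomial expression, which is the kind of shortcut the abstract alludes to. All numbers below are hypothetical.

```cpp
#include <cstdio>
#include <vector>

// Convolve two discrete probability distributions over bandwidth units.
std::vector<double> convolve(const std::vector<double>& a, const std::vector<double>& b) {
    std::vector<double> c(a.size() + b.size() - 1, 0.0);
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < b.size(); ++j)
            c[i + j] += a[i] * b[j];
    return c;
}

int main() {
    // Hypothetical on/off source: idle (0 units) with prob. 0.7, active (2 units) with prob. 0.3.
    std::vector<double> source = {0.7, 0.0, 0.3};
    std::vector<double> aggregate = {1.0};            // empty link
    for (int n = 0; n < 10; ++n)
        aggregate = convolve(aggregate, source);      // 10 identical sources

    // With identical sources the same result is a binomial distribution on the
    // number of active sources, avoiding the repeated convolutions entirely.
    double p_overflow = 0.0;
    for (std::size_t k = 16; k < aggregate.size(); ++k)   // assumed link capacity: 15 units
        p_overflow += aggregate[k];
    std::printf("P(instantaneous demand > 15 units) = %.6g\n", p_overflow);
    return 0;
}
```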
Abstract:
The research record on the quantification of sediment transport processes in periglacial mountain environments in Scandinavia dates back to the 1950s. A wide range of measurements is available, especially from the Karkevagge region of northern Sweden. Within this paper, satellite image analysis and tools provided by geographic information systems (GIS) are exploited in order to extend and improve this research and to complement geophysical methods. The processes of interest include mass movements such as solifluction, slope wash, dirty avalanches and rock- and boulder falls. Geomorphic process units have been derived in order to allow quantification via GIS techniques at a catchment scale. Mass movement rates based on existing field measurements are employed in the budget calculation. In the Karkevagge catchment, 80% of the area can be identified either as a source area for sediments or as a zone where sediments are deposited. The overall budget for the slopes beneath the rockwalls in the Karkevagge is approximately 680 t a⁻¹, whilst about 150 t a⁻¹ are transported into the fluvial system.
Abstract:
This work reports the energy transfer process of [Eu(TTA)₂(NO₃)(TPPO)₂] (bis-TTA complex) and [Eu(TTA)₃(TPPO)₂] (tris-TTA complex) based on experimental and theoretical spectroscopic properties, where TTA = 2-thenoyltrifluoroacetone and TPPO = triphenylphosphine oxide. These complexes were synthesized and characterized by elemental analysis, infrared spectroscopy and thermogravimetric analysis. The theoretical geometry of the complexes, obtained with the Sparkle model for the calculation of lanthanide complexes (SMLC), is in agreement with the crystalline structure determined by single-crystal X-ray diffraction analysis. The emission spectra of the [Gd(TTA)₃(TPPO)₂] and [Gd(TTA)₂(NO₃)(TPPO)₂] complexes are associated with T → S₀ transitions centered on the coordinated TTA ligands. Experimental luminescent properties of the bis-TTA complex have been quantified through the emission intensity parameters Ω_λ (λ = 2 and 4), spontaneous emission rates (A_rad), luminescence lifetime (τ), emission quantum efficiency (η) and emission quantum yield (q), which were compared with those for the tris-TTA complex. The experimental data showed that the intensity parameter value for the bis-TTA complex is half that for the tris-TTA complex, indicating a less polarizable chemical environment in the system containing the nitrate ion. A good agreement between the theoretical and experimental quantum yields for both Eu(III) complexes was obtained. The triboluminescence (TL) of the [Eu(TTA)₂(NO₃)(TPPO)₂] complex is discussed in terms of ligand-to-metal energy transfer. (c) 2007 Elsevier B.V. All rights reserved.
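The quantities listed above are linked by standard relations for lanthanide emitters (not specific to this paper): the observed lifetime sets the total decay rate, and the quantum efficiency is the radiative fraction of it. Under the usual definitions:

```latex
A_{\mathrm{tot}} \;=\; \frac{1}{\tau} \;=\; A_{\mathrm{rad}} + A_{\mathrm{nrad}},
\qquad
\eta \;=\; \frac{A_{\mathrm{rad}}}{A_{\mathrm{rad}} + A_{\mathrm{nrad}}},
\qquad
q \le \eta \ \text{(losses in ligand-to-metal transfer lower } q \text{ below } \eta\text{)} .
```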
Abstract:
Density functional calculations at the B3LYP level were employed to study surface oxygen vacancies and the doping of Co, Cu and Zn on SnO₂ (110) surface models. Large clusters, based on (SnO₂)₁₅ models, were selected to simulate the oxidized (Sn₁₅O₃₀), half-reduced (Sn₁₅O₂₉) and reduced (Sn₁₅O₂₈) surfaces. The doping process was considered on the reduced surfaces: Sn₁₃Co₂O₂₈, Sn₁₃Cu₂O₂₈ and Sn₁₃Zn₂O₂₈. The results are analyzed and discussed based on a calculation of the energy levels along the bulk band gap region, determined by a projection of the one-electron level structure onto the atomic basis set and by the density of states. This procedure makes it possible to distinguish the states coming from the bulk, the oxygen vacancies and the doping process. On passing from an oxidized to a reduced surface, missing bridging oxygen atoms generate electronic levels along the band gap region, associated with 5s/5p states of four-/five-fold Sn and 2p states of in-plane O centers located on the exposed surface, which is in agreement with previous theoretical and experimental investigations. The formation energy of one and two oxygen vacancies is 3.0 and 3.9 eV, respectively. (C) 2001 Elsevier B.V. All rights reserved.
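The vacancy formation energies quoted above are typically computed from total-energy differences; one common convention (an assumption here, since the abstract does not give the expression) references the removed oxygen atom to half an O₂ molecule:

```latex
E_f[\mathrm{V_O}] \;=\; E_{\mathrm{tot}}(\text{reduced surface}) \;+\; \tfrac{1}{2}\,E_{\mathrm{tot}}(\mathrm{O_2}) \;-\; E_{\mathrm{tot}}(\text{oxidized surface})
```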
Abstract:
Society's increasing aversion to technological risks requires the development of inherently safer and environmentally friendlier processes, while assuring the economic competitiveness of industrial activities. The different forms of impact (e.g. environmental, economic and societal) are frequently characterized by conflicting reduction strategies and must be taken into account holistically in order to identify the optimal solutions in process design. Although the literature reports an extensive discussion of strategies and specific principles, quantitative assessment tools are required to identify the marginal improvements of alternative design options, to allow trade-offs among contradictory aspects and to prevent “risk shift”. In the present work, a set of integrated quantitative tools for design assessment (i.e. a design support system) was developed. The tools were specifically dedicated to the implementation of sustainability and inherent safety in process and plant design activities, with respect to chemical and industrial processes in which substances dangerous for humans and the environment are used or stored. The tools were mainly devoted to application in the “conceptual” and “basic design” stages, when the project is still open to changes (due to the large number of degrees of freedom) which may include strategies to improve sustainability and inherent safety. The set of developed tools covers different phases of the design activities throughout the lifecycle of a project (inventories, process flow diagrams, preliminary plant layout plans). The development of such tools gives a substantial contribution to filling the present gap in the availability of sound supports for implementing safety and sustainability in the early phases of process design. The proposed decision support system was based on the development of a set of leading key performance indicators (KPIs), which ensure the assessment of the economic, societal and environmental impacts of a process (i.e. its sustainability profile). The KPIs are based on impact models (some of them complex), but are easy and quick to apply in practice. Their full evaluation is possible even from the limited data available during early process design. Innovative reference criteria were developed to compare and aggregate the KPIs on the basis of the actual site-specific impact burden and the sustainability policy. Particular attention was devoted to the development of reliable criteria and tools for the assessment of inherent safety in the different stages of the project lifecycle. The assessment follows an innovative approach to the analysis of inherent safety, based on both the calculation of the expected consequences of potential accidents and the evaluation of the hazards related to equipment. The methodology overcomes several problems present in previous methods proposed for quantitative inherent safety assessment (use of arbitrary indexes, subjective judgement, built-in assumptions, etc.). A specific procedure was defined for the assessment of the hazards related to the formation of undesired substances in chemical systems undergoing “out of control” conditions. In the assessment of layout plans, “ad hoc” tools were developed to account for the hazard of domino escalation and for safety economics.
The effectiveness and value of the tools were demonstrated by the application to a large number of case studies concerning different kinds of design activities (choice of materials, design of the process, of the plant, of the layout) and different types of processes/plants (chemical industry, storage facilities, waste disposal). An experimental survey (analysis of the thermal stability of isomers of nitrobenzaldehyde) provided the input data necessary to demonstrate the method for inherent safety assessment of materials.
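Purely as an illustration of the kind of KPI aggregation described (the actual reference criteria, weights and indicator names are not given in the abstract and are invented here), a normalized, policy-weighted sum might look like this:

```cpp
#include <cstdio>
#include <vector>

// Hypothetical KPI record: a raw impact value, a site-specific reference burden
// used for normalization, and a weight reflecting the sustainability policy.
struct KPI {
    const char* name;
    double value;
    double site_reference;
    double policy_weight;
};

int main() {
    const std::vector<KPI> kpis = {
        {"economic impact",      1.2e6, 2.0e6, 0.3},
        {"societal impact",      0.4,   1.0,   0.3},
        {"environmental impact", 8.5e3, 1.0e4, 0.4},
    };

    double aggregate = 0.0;
    for (const KPI& k : kpis) {
        const double normalized = k.value / k.site_reference;  // dimensionless impact share
        aggregate += k.policy_weight * normalized;
        std::printf("%-22s normalized = %.3f\n", k.name, normalized);
    }
    std::printf("Aggregated impact index (lower is better): %.3f\n", aggregate);
    return 0;
}
```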
Abstract:
The increasing precision of current and future experiments in high-energy physics requires a corresponding increase in the accuracy of the calculation of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to higher accuracy directly translates into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process in higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the necessary tools to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems. Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that having a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
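To make the "symbolic algebra as an ordinary C++ library" idea concrete, here is a small self-contained example using GiNaC's public API (symbols, expressions, expansion, differentiation, substitution); it is a generic illustration, not code from xloops:

```cpp
#include <iostream>
#include <ginac/ginac.h>

int main() {
    using namespace GiNaC;

    symbol x("x"), y("y");

    // Build a symbolic expression and manipulate it like any other C++ object.
    ex e = pow(x + y, 3);

    std::cout << "expanded : " << e.expand() << std::endl;  // x^3+3*x^2*y+3*x*y^2+y^3
    std::cout << "d/dx     : " << e.diff(x) << std::endl;   // 3*(x+y)^2
    std::cout << "at x = 1 : " << e.subs(x == 1) << std::endl;
    return 0;
}
// Typical build, assuming GiNaC and CLN are installed:
//   g++ example.cpp -o example $(pkg-config --cflags --libs ginac)
```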
Abstract:
The Environmental Process and Simulation Center (EPSC) at Michigan Technological University has accommodated laboratories for the senior-level Environmental Engineering class CEE 4509, Environmental Process and Simulation Laboratory, since 2004. Even though the five units in EPSC give students opportunities for hands-on experience with a wide range of water/wastewater treatment technologies, a key module was still missing for students to experience a full treatment cycle. This project fabricated a direct-filtration pilot system in EPSC and generated a laboratory manual for educational purposes. Engineering applications such as clean-bed head loss calculation, backwash flowrate determination, multimedia density calculation and run length prediction are included in the laboratory manual. The system was tested for one semester, and modifications have been made both to the direct-filtration unit and to the laboratory manual. Future work is also proposed to further refine the module.
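For context, clean-bed head loss through a granular filter is commonly estimated with the Kozeny (Carman-Kozeny) equation; the manual's exact method is not stated in the abstract, so the following standard form is only indicative:

```latex
h_L \;=\; \frac{\kappa\,\mu\,(1-\varepsilon)^2}{\rho_w\,g\,\varepsilon^{3}}
\left(\frac{6}{\psi\,d}\right)^{2} L\, v_s
```

Here κ ≈ 5 is the Kozeny constant, ε the bed porosity, ψ the grain sphericity, d the grain diameter, L the bed depth and v_s the superficial (filtration) velocity.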
Abstract:
A particle accelerator is any device that, using electromagnetic fields, is able to impart energy to charged particles (typically electrons or ionized atoms), accelerating and/or energizing them up to the level required for its purpose. The applications of particle accelerators are countless, ranging from a common TV CRT, through medical X-ray devices, to the large ion colliders used to probe the smallest details of matter. Other engineering applications include ion implantation devices used to obtain better semiconductors and materials with remarkable properties. Materials that must withstand irradiation in future nuclear fusion plants also benefit from particle accelerators. Many devices are required for the correct operation of a particle accelerator. The most important are the particle sources, the guiding, focusing and correcting magnets, the radiofrequency accelerating cavities, the fast deflection devices, the beam diagnostic mechanisms and the particle detectors. Historically, most fast particle deflection devices have been built using copper coils and ferrite cores, which could produce a relatively fast magnetic deflection but needed large voltages and currents to counteract the high coil inductance, giving responses in the microsecond range. Beam stability considerations and the new range of energies and sizes of present-day accelerators and their rings require new devices featuring improved wakefield behaviour and faster response (in the nanosecond range). This can only be achieved by an electromagnetic deflection device based on a transmission line. The electromagnetic deflection device (strip-line kicker) produces a transverse displacement of the particle beam travelling close to the speed of light, in order to extract the particles to another experiment or to inject them into a different accelerator. The deflection is carried out by means of two short, opposite-phase pulses. The deflection of the particles is exerted by the integrated Lorentz force of the electromagnetic field travelling along the kicker. This thesis presents a detailed calculation, manufacturing and test methodology for strip-line kicker devices. The methodology is then applied to two real cases, which are fully designed, built, tested and finally installed in the CTF3 accelerator facility at CERN (Geneva). Analytical and numerical calculations, both in 2D and 3D, are detailed, starting from the basic specifications, in order to obtain a conceptual design. Time-domain and frequency-domain calculations are developed in the process using different FDM and FEM codes. Scattering parameters, resonant higher-order modes and wakefields, among other concepts, are analyzed. Several contributions are presented in the calculation process dealing specifically with strip-line kicker devices fed by electromagnetic pulses. Materials and components typically used for the fabrication of these devices are analyzed in the manufacturing section. Mechanical supports and connections of the electrodes are also detailed, with some interesting contributions on these concepts. The electromagnetic and vacuum tests, required to ensure that the manufactured devices fulfil the specifications, are then analyzed. Finally, and only from the analytical point of view, the strip-line kickers are studied together with a pulsed power supply based on solid-state power switches (MOSFETs). Solid-state technology applied to pulsed power supplies is introduced, and several circuit topologies are modelled and simulated to obtain fast pulses with good flat tops.
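In generic form (independent of the specific kicker geometry treated in the thesis), the deflection comes from the Lorentz force integrated over the particle's passage, and the resulting kick angle is the transverse momentum change divided by the longitudinal momentum:

```latex
\vec{F} = q\left(\vec{E} + \vec{v}\times\vec{B}\right),
\qquad
\theta \;\simeq\; \frac{\Delta p_\perp}{p_z}
      \;=\; \frac{q}{p_z}\int \left(\vec{E} + \vec{v}\times\vec{B}\right)_{\!\perp} dt .
```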
Abstract:
A procedure has been proposed for measuring the overheating temperature (ΔT) of the p-n junction area of photovoltaic (PV) cells converting laser or solar radiation, relative to the ambient temperature, under the conditions of connection to an electric load. The basis of the procedure is the measurement of the open-circuit voltage (V_OC) during the initial time period after fast disconnection of the external resistive load. Simultaneous temperature control on an external heated part of a PV module gives the means for determining the value of V_OC at ambient temperature. Comparing it with the value measured after switching OFF the load makes the calculation of ΔT possible. Calibration data on the V_OC = f(T) dependences for single-junction AlGaAs/GaAs and triple-junction InGaP/GaAs/Ge PV cells are presented. The temperature dynamics in the PV cells has been determined under flash illumination and during fast commutation of the load. Temperature measurements were taken in two cases: converting continuous laser power with single-junction cells and converting solar power with triple-junction cells operating in concentrator modules.
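A minimal numerical sketch of the comparison step described above (all values are hypothetical placeholders; the actual calibration slopes are the V_OC = f(T) data reported in the paper): with a known dV_OC/dT, the overheating follows directly from the two open-circuit voltages.

```cpp
#include <cstdio>

int main() {
    // Hypothetical calibration and readings -- placeholders, not values from the paper.
    const double dVoc_dT     = -2.0e-3;  // V/K, slope of the Voc = f(T) calibration (negative)
    const double Voc_ambient = 1.020;    // V, Voc with the junction at ambient temperature
    const double Voc_hot     = 0.976;    // V, Voc measured right after switching OFF the load

    // Overheating of the junction relative to ambient.
    const double dT = (Voc_hot - Voc_ambient) / dVoc_dT;  // K
    std::printf("Estimated junction overheating dT = %.1f K\n", dT);
    return 0;
}
```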
Abstract:
This paper describes the use of the Business Process Execution Language for Web Services (BPEL4WS/BPEL) for managing scientific workflows. This work is the result of our attempt to adopt a Service Oriented Architecture in order to perform Web-service-based simulation of metal vapor lasers. Scientific workflows can be more demanding in their requirements than business processes. In the context of addressing these requirements, the features of the BPEL4WS specification, widely regarded as the de facto standard for orchestrating Web services in business workflows, are discussed. A typical use case, the calculation of the electric field potential and intensity distributions, is discussed as an example of building a BPEL process that performs a distributed simulation composed of loosely coupled services.
Abstract:
If a regenerative process is represented as semi-regenerative, we derive formulae enabling us to calculate basic characteristics associated with the first occurrence time, starting from the corresponding characteristics of the semi-regenerative process. Recursive equations, integral equations and Monte-Carlo algorithms are proposed for the practical solution of the problem.
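The abstract does not describe its Monte-Carlo algorithms, so the following is only a generic sketch of simulating a first occurrence time for a regenerative process: cycle lengths are drawn i.i.d., the target event happens within a cycle with some probability, and the mean first occurrence time is estimated by averaging. All distributions and parameters are invented for the illustration.

```cpp
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::exponential_distribution<double> cycle_length(1.0);  // hypothetical mean cycle length = 1
    std::bernoulli_distribution event_in_cycle(0.1);          // hypothetical P(event occurs in a cycle)
    std::uniform_real_distribution<double> position(0.0, 1.0);

    const int runs = 100000;
    double total = 0.0;
    for (int r = 0; r < runs; ++r) {
        double t = 0.0;
        while (true) {
            const double len = cycle_length(rng);
            if (event_in_cycle(rng)) {      // event occurs somewhere in this cycle
                t += position(rng) * len;   // assumed uniform position within the cycle
                break;
            }
            t += len;                       // no event: regenerate and continue
        }
        total += t;
    }
    std::printf("Estimated mean first occurrence time: %.3f\n", total / runs);
    return 0;
}
```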
Abstract:
The cell-cell bond between an immune cell and an antigen-presenting cell is a necessary event in the activation of the adaptive immune response. At the juncture between the cells, cell surface molecules on the opposing cells form non-covalent bonds, and a distinct patterning is observed that is termed the immunological synapse. An important binding molecule in the synapse is the T-cell receptor (TCR), which is responsible for antigen recognition through its binding with a major histocompatibility complex with bound peptide (pMHC). This bond leads to intracellular signalling events that culminate in the activation of the T-cell and ultimately lead to the expression of the immune effector function. The temporal analysis of the TCR bonds during the formation of the immunological synapse presents a problem to biologists, due to the spatio-temporal scales (nanometers and picoseconds) that compare with experimental uncertainty limits. In this study, a linear stochastic model, derived from a nonlinear model of the synapse, is used to analyse the temporal dynamics of the bond attachments for the TCR. Mathematical analysis and numerical methods are employed to analyse the qualitative dynamics of the nonequilibrium membrane dynamics, with the specific aim of calculating the average persistence time of the TCR:pMHC bond. A single-threshold method, previously used to successfully calculate the TCR:pMHC contact path sizes in the synapse, is applied to produce results for the average contact times of the TCR:pMHC bonds. This method is extended through the development of a two-threshold method, which produces results suggesting that the average persistence time of the TCR:pMHC bond is of the order of 2-4 seconds, values that agree with experimental evidence for TCR signalling. The study reveals two distinct scaling regimes in the time-persistence survival probability density profile of these bonds, one dominated by thermal fluctuations and the other associated with TCR signalling. Analysis of the thermal fluctuation regime reveals a minimal contribution to the average persistence time calculation, which has an important biological implication when comparing the probabilistic models to experimental evidence: in cases where only a few statistics can be gathered from experimental conditions, the results are unlikely to match the probabilistic predictions. The results also identify a rescaling relationship between the thermal noise and the bond length, suggesting that a recalibration of the experimental conditions to adhere to this scaling relationship will enable biologists to identify the start of the signalling regime for previously unobserved receptor:ligand bonds. Also, the regime associated with TCR signalling exhibits a universal decay rate for the persistence probability that is independent of the bond length.
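The thesis's exact two-threshold construction is not given in the abstract; one plausible hysteresis-style reading is that a bond is counted as formed when the local membrane separation drops below a lower threshold and as broken only when it rises above a higher one, the persistence time being the interval in between. A sketch under that assumption, with invented numbers:

```cpp
#include <cstdio>
#include <vector>

// Hysteresis (two-threshold) extraction of bond persistence times from a
// sampled membrane-separation time series. All parameters are illustrative.
std::vector<double> persistence_times(const std::vector<double>& separation, double dt,
                                      double attach_threshold, double detach_threshold) {
    std::vector<double> times;
    bool bound = false;
    double t_start = 0.0;
    for (std::size_t i = 0; i < separation.size(); ++i) {
        const double t = i * dt;
        if (!bound && separation[i] < attach_threshold) {        // bond forms
            bound = true;
            t_start = t;
        } else if (bound && separation[i] > detach_threshold) {  // bond breaks
            bound = false;
            times.push_back(t - t_start);
        }
    }
    return times;
}

int main() {
    const std::vector<double> separation = {20, 14, 12, 13, 18, 25, 11, 12, 30};  // nm, invented
    const auto times = persistence_times(separation, 0.5 /*s*/, 13.0 /*nm*/, 22.0 /*nm*/);
    for (double t : times) std::printf("bond persisted for %.1f s\n", t);
    return 0;
}
```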
Abstract:
We have obtained total and differential cross sections for the strangeness-changing charged-current weak reaction ν̄_L + p → Λ(Σ⁰) + L⁺ using standard dipole form factors, where L stands for an electron, muon or tau lepton, and L⁺ stands for a positron, anti-muon or anti-tau lepton. We calculated these reactions from near threshold (a few hundred MeV) to 8 GeV of incoming neutrino energy and obtained the contributions of the various form factors to the total and differential cross sections. We did this in support of possible experiments which might be carried out by the MINERνA collaboration at Fermilab. The calculation is phenomenologically based and makes use of SU(3) relations to obtain the standard vector current form factors and of data from Λ beta decay to obtain the axial current form factor. We also made estimates for the contributions of the pseudoscalar form factor and of the F_E and F_S form factors to the total and differential cross sections. We discuss our results and consider under what circumstances we might extract the various form factors. In particular, we wish to test the SU(3) assumptions made in determining all the form factors over a range of q² values. Recently, new form factors were obtained from recoil proton measurements in electron-proton electromagnetic scattering at Jefferson Lab. We therefore calculated the contributions of the individual form factors to the total and differential cross sections for this new set of form factors. We found that the differential and total cross sections for Λ production change only slightly between the two sets of form factors, but that the differential and total cross sections change substantially for Σ⁰ production. We discuss the possibility of distinguishing between the two cases for the experiments planned by the MINERνA Collaboration. We also undertook the calculation for the inverse reaction e⁻ + p → Λ + ν_e with a polarized outgoing Λ, which might be performed at Jefferson Lab, and provide additional analysis of the contributions of the individual form factors to the differential cross sections for this case.
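The "standard dipole form factors" mentioned refer to the usual parameterization in which each form factor falls off with the square of the four-momentum transfer; in its generic form (the specific dipole masses used are not given in the abstract):

```latex
F(Q^2) \;=\; \frac{F(0)}{\left(1 + Q^2/M^2\right)^{2}},
\qquad Q^2 \equiv -q^2 ,
```

with a separate dipole mass M for the vector and axial sectors.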
Abstract:
Mathematical skills that we acquire during formal education mostly entail exact numerical processing. Besides this specifically human faculty, an additional system exists to represent and manipulate quantities in an approximate manner. We share this innate approximate number system (ANS) with other nonhuman animals and are able to use it to process large numerosities long before we can master the formal algorithms taught in school. Dehaene's (1992) Triple Code Model (TCM) states that, also after the onset of formal education, approximate processing is carried out in this analogue magnitude code, no matter whether the original problem was presented nonsymbolically or symbolically. Despite the wide acceptance of the model, most research only uses nonsymbolic tasks to assess ANS acuity. Due to this silent assumption that genuine approximation can only be tested with nonsymbolic presentations, important implications in research domains of high practical relevance have up to now remained unclear, and existing potential is not fully exploited. For instance, it has been found that nonsymbolic approximation can predict math achievement one year later (Gilmore, McCarthy, & Spelke, 2010), that it is robust against the detrimental influence of learners' socioeconomic status (SES), and that it is suited to fostering performance in exact arithmetic in the short term (Hyde, Khanum, & Spelke, 2014). We provide evidence that symbolic approximation might be equally, and in some cases even better, suited to generating predictions and fostering more formal math skills independently of SES. In two longitudinal studies, we realized exact and approximate arithmetic tasks in both a nonsymbolic and a symbolic format. With first graders, we demonstrated that performance in symbolic approximation at the beginning of term was the only measure consistently not varying according to children's SES, and among the two approximate tasks it was the better predictor of math achievement at the end of first grade. In part, the strong connection seems to come about through mediation by ordinal skills. In two further experiments, we tested the suitability of both approximation formats to induce an arithmetic principle in elementary school children. We found that symbolic approximation was as effective as direct instruction in making children exploit the additive law of commutativity in a subsequent formal task. Nonsymbolic approximation, on the other hand, had no beneficial effect. The positive influence of the symbolic approximate induction was strongest in children just starting school and decreased with age; however, even third graders still profited from the induction. The results show that symbolic problems can also be processed as genuine approximation, but that beyond this they have their own specific value with regard to didactic-educational concerns. Our findings furthermore demonstrate that the two often confounded factors 'format' and 'demanded accuracy' cannot be disentangled easily in first graders' numerical understanding, but that children's SES also influences the existing interrelations between the different abilities tested here.