989 results for "Calculation-based"
Abstract:
Homogeneous secondary pyrolysis is a category of reactions that follows primary pyrolysis and is presumed important for fast pyrolysis. To couple the comprehensive chemistry with the fluid dynamics, a probability density function (PDF) approach is used, with a kinetic scheme comprising 134 species and 4169 reactions. With the aid of acceleration techniques, most importantly dimension reduction, chemistry agglomeration and in situ adaptive tabulation (ISAT), a solution was obtained within reasonable time. More work is required; however, a solution has been obtained for levoglucosan (C6H10O5) fed through the inlet with fluidizing gas at 500 °C. 88.6% of the levoglucosan remained undecomposed, and 19 decomposition product species were found above 0.01% by weight. The proposed homogeneous secondary pyrolysis scheme can thus be implemented in a CFD environment, and acceleration techniques can speed up the calculation for application in engineering settings.
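As a much-reduced illustration of how a decomposition step in such a kinetic scheme is evaluated (the actual model couples 134 species and 4169 reactions through a PDF transport solver), a single first-order Arrhenius reaction with hypothetical parameters can be sketched:

```python
import math

def fraction_remaining(A, Ea, T, t):
    """Fraction of reactant left after residence time t for a first-order
    decomposition with Arrhenius rate k = A * exp(-Ea / (R*T))."""
    R = 8.314  # gas constant, J/(mol*K)
    k = A * math.exp(-Ea / (R * T))
    return math.exp(-k * t)

# Hypothetical pre-exponential factor and activation energy at 500 degC
# (773.15 K); these are NOT parameters of the 134-species scheme.
left = fraction_remaining(A=1.0e9, Ea=2.0e5, T=773.15, t=1.0)
```

Raising the temperature increases the rate constant and lowers the remaining fraction, which is the basic sensitivity the full scheme resolves species by species.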
Abstract:
Motivation: In any macromolecular polyprotic system - for example protein, DNA or RNA - the isoelectric point, commonly referred to as the pI, can be defined as the point of singularity in a titration curve, corresponding to the solution pH at which the net overall surface charge - and thus the electrophoretic mobility - of the ampholyte sums to zero. Many modern analytical biochemistry and proteomics methods depend on the isoelectric point as a principal feature for protein and peptide characterization. Protein separation by isoelectric point is a critical part of 2-D gel electrophoresis, a key precursor of proteomics, where discrete spots can be digested in-gel and proteins subsequently identified by analytical mass spectrometry. Peptide fractionation by pI is also widely used in current proteomics sample preparation procedures prior to LC-MS/MS analysis. Accurate theoretical prediction of pI would therefore expedite such analyses. While pI calculation is widely used, it remains largely untested, motivating our efforts to benchmark pI prediction methods. Results: Using data from the database PIP-DB and one publicly available dataset as our reference gold standard, we have benchmarked pI calculation methods. We find that methods vary in their accuracy and are highly sensitive to the choice of basis set. The machine-learning algorithms, especially the SVM-based algorithm, showed superior performance when studying peptide mixtures. In general, learning-based pI prediction methods (such as Cofactor, SVM and Branca) require a large training dataset, and their resulting performance depends strongly on the quality of that data. In contrast to iterative methods, machine-learning algorithms have the advantage of being able to add new features to improve the accuracy of prediction.
Contact: yperez@ebi.ac.uk Availability and Implementation: The software and data are freely available at https://github.com/ypriverol/pIR. Supplementary information: Supplementary data are available at Bioinformatics online.
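A minimal iterative-style pI calculation, of the kind such tools benchmark, bisects the monotone charge-vs-pH curve; the pKa values below are textbook figures, not one of the calibrated basis sets evaluated in the paper:

```python
def net_charge(pH, seq):
    """Net charge of a peptide at a given pH using a textbook pKa set
    (one of many possible basis sets; the choice matters a lot)."""
    pka_pos = {'K': 10.5, 'R': 12.5, 'H': 6.0, 'nterm': 9.0}
    pka_neg = {'D': 3.9, 'E': 4.1, 'C': 8.3, 'Y': 10.1, 'cterm': 3.1}
    pos = sum(1.0 / (1.0 + 10 ** (pH - pka_pos[g]))
              for g in ['nterm'] + [a for a in seq if a in pka_pos])
    neg = sum(1.0 / (1.0 + 10 ** (pka_neg[g] - pH))
              for g in ['cterm'] + [a for a in seq if a in pka_neg])
    return pos - neg

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    """Bisection: charge is positive at low pH, negative at high pH."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if net_charge(mid, seq) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

An acidic peptide such as "DDDD" lands well below pH 7, a basic one such as "KKKK" well above, which is the separation principle exploited by isoelectric focusing.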
Abstract:
This paper presents protection of a PMSG-based wind power generation system. For large-scale systems, a voltage-source converter rectifier is included. Protection circuits for this topology are studied, with simulation results for permanent cable fault conditions. These electrical protection methods all work by dumping the redundant energy that results from a disrupted power-delivery path. Pitch control of large-scale wind turbines is considered for effectively reducing rotor-shaft overspeed. Detailed analysis and calculation of the damping power and resistances are presented. Simulation results, including fault overcurrent, DC-link overvoltage and wind-turbine overspeed, illustrate the system responses under different protection schemes and allow their applicability and effectiveness to be compared.
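A first-cut sizing of the dumping (chopper) resistance follows from requiring that it dissipate the full rated power at the DC-link overvoltage threshold; the voltage, power and margin figures below are hypothetical, not taken from the paper's system:

```python
def dump_resistor(v_dc, p_rated, margin=1.2):
    """Largest chopper resistance that still absorbs the rated power
    when the DC link reaches its overvoltage threshold:
    R <= V_max^2 / P, with V_max = margin * V_dc_nominal."""
    v_max = margin * v_dc
    return v_max ** 2 / p_rated

# Hypothetical 2 MW turbine with a 1.2 kV nominal DC link.
r = dump_resistor(v_dc=1200.0, p_rated=2.0e6)  # ohms
```

A lower rated power (or a higher voltage threshold) permits a larger resistance, since the resistor then needs to sink less current.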
Abstract:
The aim of this article is to draw attention to calculations of the environmental effects of agriculture and to the definition of marginal agricultural yield. When calculating the environmental impacts of agricultural activities, the real environmental load generated by agriculture is not properly revealed by ecological footprint indicators, as the type of farming (and thus the nature of the pollution it creates) is not incorporated in the calculation. Extensive farming uses relatively small amounts of labor and capital; it produces a lower yield per unit of land and thus requires more land than intensive farming to produce similar output, so it has a larger crop and grazing footprint. Intensive farms, by contrast, achieve higher yields by applying fertilizers, insecticides, herbicides, etc., and cultivation and harvesting are often mechanized. This study focuses on the differences in the environmental impacts of extensive and intensive farming practices through a statistical analysis of the factors determining agricultural yield. A marginal function is constructed for the relation between chemical fertilizer use and yield per unit of fertilizer input, and a proposal is presented for how the calculation of the yield factor could be improved. The yield factor used in the calculation of biocapacity is not the marginal yield for a given area but is calculated from actual yields, so biocapacity and the ecological footprint for cropland are equivalent: calculations of cropland biocapacity do not show the area needed for sustainable production, but rather the land area actually used for agricultural production. The authors therefore propose a modification of the yield factor and calculate the resulting biocapacity.
The results of statistical analyses reveal the need for a clarification of the methodology for calculating marginal yield, which could clearly contribute to assessing the real environmental impacts of agriculture.
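A marginal yield function of the kind described can be sketched by fitting a quadratic response curve to fertilizer-yield data and differentiating it; the data points below are hypothetical stand-ins for the study's statistics:

```python
import numpy as np

# Hypothetical yield response (t/ha) to fertilizer input (kg N/ha);
# NOT the study's data, just a diminishing-returns shape.
fert = np.array([0.0, 40.0, 80.0, 120.0, 160.0, 200.0])
crop = np.array([3.0, 4.2, 5.0, 5.5, 5.7, 5.6])

# Quadratic response curve y = a*x^2 + b*x + c fitted by least squares.
a, b, c = np.polyfit(fert, crop, 2)

def marginal_yield(x):
    """dY/dx: extra yield per extra kilogram of fertilizer."""
    return 2.0 * a * x + b
```

With a concave response (a < 0) the marginal yield falls as input rises, which is the distinction between marginal and actual yield that the proposed yield-factor correction rests on.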
Abstract:
The adoption of the new directive known as Solvency II creates a new situation for the capital requirement calculation of insurers in the European Union. By modelling the operation of insurance companies, the study analyses how certain characteristics of an insurer's portfolio affect the solvency capital requirement in a theoretical model in which the capital requirement values are calculated according to the Solvency II rules. The model takes an insurance risk module and a financial risk module into account, calculating the capital for the two risk types first separately and then jointly in a combined model (for comparison with the Solvency II results). The theoretical results show that the capital requirement values calculated in these two cases can differ, and the results also make it possible to study the factors behind these differences.
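The standard-formula aggregation underlying such a separate-then-combined comparison can be sketched as follows; the module capital amounts are illustrative, and the 0.25 entry is the kind of inter-module correlation the standard formula's top-level matrix uses:

```python
import math

def aggregate_scr(scr_modules, corr):
    """Solvency II standard-formula aggregation:
    SCR = sqrt(sum_ij corr[i][j] * SCR_i * SCR_j)."""
    n = len(scr_modules)
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += corr[i][j] * scr_modules[i] * scr_modules[j]
    return math.sqrt(total)

# Illustrative capitals for an insurance module and a financial module.
scr = aggregate_scr([100.0, 80.0], [[1.0, 0.25], [0.25, 1.0]])
```

The aggregated figure is below the plain sum (180 here), which is the diversification benefit; a joint model of both risks can yield yet another figure, which is exactly the discrepancy the study investigates.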
Abstract:
This thesis describes the development of an adaptive control algorithm for Computerized Numerical Control (CNC) machines implemented in a multi-axis motion control board based on the TMS320C31 DSP chip. The adaptive process involves two stages: Plant Modeling and Inverse Control Application. The first stage builds a non-recursive model of the CNC system (plant) using the Least-Mean-Square (LMS) algorithm. The second stage consists of the definition of a recursive structure (the controller) that implements an inverse model of the plant by using the coefficients of the model in an algorithm called Forward-Time Calculation (FTC). In this way, when the inverse controller is implemented in series with the plant, it will pre-compensate for the modification that the original plant introduces in the input signal. The performance of this solution was verified at three different levels: Software simulation, implementation in a set of isolated motor-encoder pairs and implementation in a real CNC machine. The use of the adaptive inverse controller effectively improved the step response of the system in all three levels. In the simulation, an ideal response was obtained. In the motor-encoder test, the rise time was reduced by as much as 80%, without overshoot, in some cases. Even with the larger mass of the actual CNC machine, decrease of the rise time and elimination of the overshoot were obtained in most cases. These results lead to the conclusion that the adaptive inverse controller is a viable approach to position control in CNC machinery.
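The first (plant modeling) stage can be sketched as plain LMS identification of an FIR plant; the three-tap plant below is hypothetical, and the Forward-Time Calculation controller-construction stage is not shown:

```python
import numpy as np

def lms_identify(x, d, n_taps, mu):
    """Build a non-recursive (FIR) model of a plant from its input x and
    output d using the LMS update w <- w + mu * e * x_window."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        window = x[k - n_taps + 1:k + 1][::-1]  # x[k], x[k-1], ...
        e = d[k] - w @ window                   # prediction error
        w += mu * e * window
    return w

rng = np.random.default_rng(0)
plant = np.array([0.5, 0.3, 0.1])         # hypothetical plant impulse response
x = rng.standard_normal(5000)             # excitation signal
d = np.convolve(x, plant)[:len(x)]        # measured plant output
w = lms_identify(x, d, n_taps=3, mu=0.01) # converges toward `plant`
```

Once the model coefficients are available, an inverse controller placed in series with the plant can pre-compensate its dynamics, as the thesis describes.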
Abstract:
This research focuses on developing optimal active suspension controllers for linear and non-linear half-car models. A detailed comparison between quarter-car and half-car active suspension approaches is provided for improving two important scenarios in vehicle dynamics, i.e. ride quality and road holding. Using a half-car vehicle model, heave and pitch motion are analyzed for those scenarios, with cargo mass as a variable. The governing equations of the system are analysed in a multi-energy-domain package, 20-Sim, and are presented in the bond-graph language to facilitate calculation of energy usage. The results show that, for both ride quality and road holding, the optimum set of gains is the one derived when the maximum allowable cargo mass is considered for the vehicle. The energy implications of substituting passive suspension units with active ones are studied by considering not only the energy used by the actuator, but also the reduction in energy lost through the passive damper. Energy analysis showed that less energy was dissipated in the shock absorbers when either quarter-car or half-car controllers were used instead of passive suspension, and that more energy could be saved by using half-car active controllers than quarter-car ones. The results also show that using active suspension units, whether quarter-car or half-car based, under those realistic limitations is energy-efficient and recommended.
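An optimal (LQR-style) gain set of the kind searched for above can be sketched on a single-mass suspension model by value-iterating the discrete Riccati equation; all masses, stiffnesses and weights below are hypothetical, and the paper's half-car bond-graph model is not reproduced:

```python
import numpy as np

# Single-mass suspension: states [deflection, velocity], input = actuator force.
m, k, c, dt = 300.0, 15000.0, 1000.0, 0.001
A = np.array([[1.0, dt], [-k / m * dt, 1.0 - c / m * dt]])  # Euler discretization
B = np.array([[0.0], [dt / m]])
Q = np.diag([1e4, 1.0])   # penalize deflection (ride-quality proxy)
R = np.array([[1e-4]])    # penalize actuator effort (energy proxy)

# Value iteration on the discrete algebraic Riccati equation.
P = Q.copy()
for _ in range(5000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # state-feedback gain
    P = Q + A.T @ P @ (A - B @ K)
```

Re-running the solve with a heavier mass m changes the gain K, which is the cargo-mass dependence of the optimum gains that the study reports.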
Abstract:
Bulk delta15N values in surface sediment samples off the southwestern coast of Africa were measured to investigate the biogeochemical processes occurring in the water column. Nitrate concentrations and the degree of utilization of the nitrate pool are the predominant controls on sedimentary delta15N in the Benguela Current region. Denitrification does not appear to have had an important effect on the delta15N signal of these sediments and, based on delta15N and delta13C, there is little terrestrial input.
Abstract:
Multiphase flows of the oil-water-gas type are very common in industrial activities such as chemical processing and petroleum extraction, and their measurement poses some difficulties. Precisely determining the volume fraction of each element that composes a multiphase flow is very important in chemical plants and the petroleum industry. This work presents a methodology able to determine volume fractions in annular and stratified multiphase flow systems using neutrons and artificial intelligence, based on the transmission/scattering of fast neutrons from a 241Am-Be source and point-flux measurements that are influenced by variations in the volume fractions. The geometries proposed for the mathematical model were used to build a data set in which the thickness of each material was varied to span the volume fractions of each phase, yielding 119 compositions that were simulated with MCNP-X, a computer code based on the Monte Carlo method that simulates radiation transport. An artificial neural network (ANN) was trained on the data obtained with MCNP-X and used to correlate the measurements with the respective real fractions. The ANN was able to correlate the simulated data with the volume fractions of the multiphase flows (oil-water-gas) in both the annular and the stratified flow patterns, with average relative errors (%) of: annular (air = 3.85, water = 4.31, oil = 1.08); stratified (air = 3.10, water = 2.01, oil = 1.45). The method demonstrated good efficiency in determining each material that composes the phases, demonstrating the feasibility of the technique.
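The transmission side of the measurement principle can be sketched with a narrow-beam attenuation relation; the macroscopic cross-sections and layer thicknesses below are hypothetical, not those of the MCNP-X model:

```python
import math

def transmitted_fraction(layers):
    """Narrow-beam transmission through stacked layers:
    I/I0 = exp(-sum(Sigma_i * x_i)), where Sigma is the macroscopic
    removal cross-section (1/cm) and x the layer thickness (cm)."""
    return math.exp(-sum(sigma * x for sigma, x in layers))

# Hypothetical fast-neutron cross-sections for oil, water and gas layers
# in a stratified pipe section: (Sigma, thickness) pairs.
stratified = [(0.09, 2.0), (0.10, 3.0), (0.001, 1.0)]
frac = transmitted_fraction(stratified)
```

Because the transmitted flux depends on the thickness of each phase along the beam, an ANN can invert such detector readings back to volume fractions, which is the role it plays in the methodology above.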
Abstract:
With the objective of improving reactor physics calculations for 2D and 3D nuclear reactors via the diffusion equation, an adaptive automatic finite element remeshing method, based on elementary area (2D) or volume (3D) constraints, has been developed. The adaptive remeshing technique, guided by an a posteriori error estimator, makes use of two external mesh generator programs: Triangle and TetGen. These free external finite element mesh generators, together with an adaptive remeshing technique based on current-field continuity, prove to be powerful tools for improving the calculation of the neutron flux distribution, and consequently the power solution of the reactor core, even though they have only a minor influence on the criticality coefficient in the calculated reactor core examples. Two numerical examples are presented: the 2D IAEA reactor core numerical benchmark and a 3D model of the Argonauta research reactor, built in Brazil.
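The refinement loop driven by an a posteriori error estimator can be sketched in one dimension; the midpoint-interpolation estimator and the flux-like profile below are simplified stand-ins for the Triangle/TetGen-based procedure:

```python
import numpy as np

def refine(nodes, f, tol):
    """One adaptive pass: split every interval whose midpoint
    interpolation error (a simple a posteriori estimate) exceeds tol."""
    new = [nodes[0]]
    for a, b in zip(nodes[:-1], nodes[1:]):
        mid = 0.5 * (a + b)
        err = abs(f(mid) - 0.5 * (f(a) + f(b)))  # linear-interp error at midpoint
        if err > tol:
            new.append(mid)                      # insert a node where error is large
        new.append(b)
    return np.array(new)

# Flux-like profile that varies fastest near the boundaries; refinement
# should cluster nodes where the curvature (and thus the error) is largest.
flux = lambda x: np.cos(np.pi * x / 2) ** 2
mesh = np.linspace(0.0, 1.0, 5)
for _ in range(3):
    mesh = refine(mesh, flux, tol=1e-3)
```

After a few passes the mesh is non-uniform, fine where the profile bends sharply and coarse elsewhere, which is the behavior the full 2D/3D remeshing method exploits to improve the flux solution at fixed cost.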
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This document describes steps to take in preventing type 2 diabetes. Included are a risk test, a prediabetes screening test, and a BMI calculation chart.
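The calculation behind a BMI chart is a one-line formula; the category cut-offs below are the standard CDC adult thresholds:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    """CDC adult categories as shown on typical BMI charts."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "healthy weight"
    if value < 30.0:
        return "overweight"
    return "obesity"
```

For example, 70 kg at 1.75 m gives a BMI of about 22.9, in the healthy-weight band.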
Abstract:
Among different classes of ionic liquids (ILs), those with cyano-based anions have been of special interest due to their low viscosity and enhanced solvation ability for a large variety of compounds. Experimental results from this work reveal that the solubility of glucose in some of these ionic liquids may be higher than in water – a well-known solvent with enhanced capacity to dissolve mono- and disaccharides. This raises questions on the ability of cyano groups to establish strong hydrogen bonds with carbohydrates and on the optimal number of cyano groups at the IL anion that maximizes the solubility of glucose. In addition to experimental solubility data, these questions are addressed in this study using a combination of density functional theory (DFT) and molecular dynamics (MD) simulations. Through the calculation of the number of hydrogen bonds, coordination numbers, energies of interaction and radial and spatial distribution functions, it was possible to explain the experimental results and to show that the ability to favorably interact with glucose is driven by the polarity of each IL anion, with the optimal anion being dicyanamide.
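The hydrogen-bond counting step in such an MD post-analysis can be sketched with a purely geometric distance criterion; the coordinates and the 3.5 Å cutoff below are illustrative (production analyses usually add an angle criterion as well):

```python
import numpy as np

def count_hbonds(donors, acceptors, cutoff=3.5):
    """Count donor-acceptor pairs within a distance cutoff (angstroms):
    the simplest geometric hydrogen-bond criterion for MD snapshots."""
    # Pairwise distance matrix via broadcasting: shape (n_donors, n_acceptors).
    d = np.linalg.norm(donors[:, None, :] - acceptors[None, :, :], axis=2)
    return int(np.sum(d < cutoff))

# Toy coordinates standing in for glucose -OH donor oxygens and
# cyano-anion nitrogen acceptors (angstroms).
donors = np.array([[0.0, 0.0, 0.0], [8.0, 0.0, 0.0]])
acceptors = np.array([[2.8, 0.0, 0.0], [20.0, 0.0, 0.0]])
n = count_hbonds(donors, acceptors)
```

Averaging such counts over trajectory frames, per anion type, is what allows hydrogen-bond numbers and coordination numbers to be compared across the cyano-based anions.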
Abstract:
The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm and the IPCA algorithm in compression, recognition, and detection; and to compare the performance of the digital model and the optical model in recognition and detection. The MATLAB software was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data so as to highlight similarities and differences. Identifying patterns in high-dimensional data (more than three dimensions) is difficult because graphical representation of the data is impossible, which makes PCA a powerful method for analyzing data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve PCA. The joint transform correlator (JTC) is an optical correlator used for synthesizing a frequency-plane filter for coherent optical systems. In general, the IPCA algorithm behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing speed with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection because it achieves the smallest error rate as well as remarkable speed.
On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation, and a reasonable speed. Finally, in detection and recognition, the performance of the digital model is better than the performance of the optical model.
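The PCA compression-and-reconstruction step compared throughout the thesis can be sketched in a few lines; the data matrix below is synthetic, and the information-theoretic IPCA refinement is not shown:

```python
import numpy as np

def pca_compress(X, n_components):
    """Project the rows of X onto the top principal components and
    reconstruct: the classic lossy PCA compression step."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigenvectors of the covariance matrix, largest eigenvalues first.
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return (Xc @ top) @ top.T + mean

# Synthetic correlated data standing in for flattened image patches.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 8))
err4 = np.linalg.norm(X - pca_compress(X, 4))
err7 = np.linalg.norm(X - pca_compress(X, 7))
```

Keeping more components always lowers the reconstruction error (here err7 < err4); the compression-vs-fidelity trade-off is exactly what the thesis measures when ranking PCA against IPCA.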