1000 results for imbibition methods
Abstract:
The cassava leaf, a waste product generated in the harvest of the roots, is characterized by high contents of protein, vitamins and minerals; however, its use is limited by its high fiber content and antinutritional substances, which can be removed by preparing protein concentrates. In this context, the objective of this study was to evaluate protein extraction processes, aiming at the use of cassava leaves (Manihot esculenta Crantz) as an alternative protein source. Four methods were tested: 1) Coagulation of Proteins by Lowering the Temperature, 2) Extraction by Isoelectric Precipitation, 3) Solubilization of Proteins and 4) Fermentation of Filtered Leaf Juice. To obtain the concentrates, the use of fresh or dried leaves and extraction in one or two steps were also evaluated. The solubilization of proteins (method 3) showed the highest extraction yield, but produced a concentrate of low quality. Fermentation of the juice (method 4) produced concentrates of higher quality at lower cost, and isoelectric precipitation (method 2) yielded concentrates in less time; both show good prospects for use. The use of two extraction steps was not advantageous to the process, and there was no difference between the use of fresh or dried leaves; fresh leaves are therefore presented as a good option given the simplicity of the method.
Abstract:
This study compares the accuracy of three image classification methods, two from remote sensing and one from geostatistics, applied to areas cultivated with citrus. The 5,296.52 ha study area is located in the municipality of Araraquara, in the central region of the state of São Paulo (SP), Brazil. The multispectral image from the CCD/CBERS-2B satellite was acquired in 2009 and processed with the Geographic Information System (GIS) SPRING. Three classification methods were used, one unsupervised (Cluster) and two supervised (Indicator Kriging/IK and Maximum Likelihood/Maxver), in addition to an on-screen classification taken as the field check. Reliability of the classifications was evaluated by the Kappa index. According to the Kappa index, the Indicator Kriging method achieved the highest degree of reliability for bands 2 and 4. Moreover, the Cluster method applied to band 2 (green) produced the best-quality classification among all the methods. Indicator Kriging was the classifier whose citrus total area was closest to the field check, underestimating it by 3.01%, whereas Maxver overestimated the total citrus area by 42.94%.
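The Kappa index used above to evaluate classification reliability can be computed directly from a confusion matrix; a minimal sketch (the matrix values below are illustrative, not the study's data):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: classified classes)."""
    n = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of samples on the diagonal
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    # Chance agreement from the row and column marginal totals
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / (n * n)
    return (observed - expected) / (1 - expected)

# Illustrative 2-class example: 80% observed agreement, 50% expected by chance
print(cohens_kappa([[45, 5], [15, 35]]))  # -> 0.6
```

Kappa discounts the agreement expected by chance, which is why it is preferred over raw accuracy when comparing classifiers as done in the study.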
Abstract:
One approach to verifying the adequacy of methods for estimating reference evapotranspiration (ET0) is comparison with the Penman-Monteith method, recommended by the Food and Agriculture Organization of the United Nations (FAO) as the standard method for estimating ET0. This study aimed to compare the Makkink (MK), Hargreaves (HG) and Solar Radiation (RS) methods for estimating ET0 with Penman-Monteith (PM). For this purpose, we used daily data of global solar radiation, air temperature, relative humidity and wind speed for the year 2010, obtained from the automatic meteorological station of the National Institute of Meteorology located on the campus of the Federal University of Uberlandia, MG, Brazil (latitude 18° 91' 66" S, longitude 48° 25' 05" W, altitude 869 m). Results for the period were analysed on a daily basis, using regression analysis with the linear model y = ax, where the dependent variable was the Penman-Monteith estimate and the independent variable the ET0 estimate from each evaluated method. A procedure was also applied to check the influence of the standard deviation of daily ET0 on the comparison of methods. The evaluation indicated that the Solar Radiation and Penman-Monteith methods cannot be compared, while the Hargreaves method provided the best fit for estimating ET0.
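The zero-intercept regression model y = ax used in the comparison has a closed-form least-squares solution, a = Σxᵢyᵢ / Σxᵢ²; a minimal sketch (the sample values are illustrative only, not the station data):

```python
def slope_through_origin(x, y):
    """Least-squares slope for the zero-intercept model y = a*x:
    a = sum(x_i * y_i) / sum(x_i^2)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Illustrative: if an evaluated method's ET0 (x) tracks Penman-Monteith (y)
# exactly doubled, the fitted slope is 2 (a slope near 1 indicates agreement)
print(slope_through_origin([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # -> 2.0
```

A fitted slope close to 1 would indicate that the evaluated method agrees with Penman-Monteith on average, which is how such regression comparisons are typically read.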
Abstract:
In this thesis, a classification problem in predicting the creditworthiness of a customer is tackled. This is done by proposing a reliable classification procedure on a given data set. The aim of this thesis is to design a model that gives the best classification accuracy to effectively predict bankruptcy. FRPCA techniques proposed by Yang and Wang have been preferred since they are tolerant to certain types of noise in the data. These include FRPCA1, FRPCA2 and FRPCA3, from which the best method is chosen. Two different approaches are used at the classification stage: a similarity classifier and an FKNN classifier. The algorithms are tested with the Australian credit card screening data set. The results obtained indicate a mean classification accuracy of 83.22% using FRPCA1 with the similarity classifier. The FKNN approach yields a mean classification accuracy of 85.93% when used with FRPCA2, making it the better method for suitable choices of the number of nearest neighbors and the fuzziness parameter. Details on the calibration of the fuzziness parameter and other parameters associated with the similarity classifier are discussed.
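As a rough illustration of the FKNN idea (not the thesis's implementation), a minimal fuzzy k-nearest-neighbour predictor in the style of Keller et al. weights each neighbour's class by inverse distance raised to 2/(m−1), where m is the fuzziness parameter mentioned above:

```python
import math

def fknn_predict(train_X, train_y, x, k=3, m=2.0):
    """Minimal fuzzy k-NN sketch: each of the k nearest neighbours
    contributes to its class with weight 1/d^(2/(m-1)); the returned
    memberships sum to 1 over the classes seen among the neighbours."""
    nearest = sorted(
        (math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y)
    )[:k]
    weights = {}
    for d, yi in nearest:
        w = 1.0 / (d ** (2.0 / (m - 1.0)) + 1e-12)  # epsilon avoids div by zero
        weights[yi] = weights.get(yi, 0.0) + w
    total = sum(weights.values())
    memberships = {c: w / total for c, w in weights.items()}
    return max(memberships, key=memberships.get), memberships

# Illustrative toy data: two clusters, query point near class 0
label, mem = fknn_predict([(0, 0), (0, 1), (5, 5), (5, 6)],
                          [0, 0, 1, 1], (0.1, 0.1), k=3)
print(label)  # -> 0
```

The fuzziness parameter m controls how sharply closer neighbours dominate: m near 1 approaches crisp nearest-neighbour voting, larger m flattens the memberships, which is why its calibration matters in the thesis.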
Abstract:
The focus of this thesis is to study both the technical and the economic possibilities of novel on-line condition monitoring techniques in underground low voltage distribution cable networks. The thesis consists of a literature study of fault progression mechanisms in modern low voltage cables, laboratory measurements to determine the basis and restrictions of novel on-line condition monitoring methods, and an economic evaluation based on fault statistics and information gathered from Finnish distribution system operators. This thesis is closely related to the master's thesis "Channel Estimation and On-line Diagnosis of LV Distribution Cabling", which focuses more on the actual condition monitoring methods and the signal theory behind them.
Abstract:
The aim of this master's thesis is to study how an Agile method (Scrum) and open source software are utilized to produce software for a flagship product in a complex production environment. The empirical case and the artefacts used are taken from the Nokia MeeGo N9 product program and from the related software program, called Harmattan. The single research case is analysed using a qualitative method. Grounded Theory principles are utilized, first, to identify all the related concepts in the artefacts. Second, these concepts are analysed and finally categorized into a core category and six supporting categories. The result is formulated as a set of software practices workable in circumstances where the accountable software development teams and the surrounding context accept an open source software nature as part of the business vision and the whole organization supports the Agile methods.
Abstract:
Singular Value Decomposition (SVD), Principal Component Analysis (PCA) and Multiple Linear Regression (MLR) are some of the mathematical preliminaries discussed prior to explaining the PLS and PCR models. Both PLS and PCR are applied to real spectral data, and their differences and similarities are discussed in this thesis. The challenge lies in establishing the optimum number of components to include in either model, but this is overcome by using the various diagnostic tools suggested in this thesis. Correspondence analysis (CA) and PLS were applied to ecological data. The idea of CA was to correlate the macrophyte species and lakes. The differences between the PLS model for ecological data and the PLS model for spectral data are noted and explained in this thesis.
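A minimal sketch of PCR as described, assuming NumPy is available: centre the data, extract the leading principal components via SVD, and regress the response on the component scores (the toy data below are illustrative, not the thesis's spectra):

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal Component Regression sketch: project centred X onto its
    leading principal components (rows of Vt from the SVD), run ordinary
    least squares on the scores, then map coefficients back."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                 # loadings, shape (p, k)
    T = Xc @ V                              # scores, shape (n, k)
    gamma = np.linalg.lstsq(T, y - y_mean, rcond=None)[0]
    beta = V @ gamma                        # back to original variables
    intercept = y_mean - x_mean @ beta
    return beta, intercept

# Illustrative data: y depends linearly on the first column only
X = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, 2.0], [4.0, 5.0]])
y = 2.0 * X[:, 0] + 1.0
beta, b0 = pcr_fit(X, y, n_components=2)   # with all components, PCR = OLS
```

Choosing n_components below the full rank discards low-variance directions, which is exactly where the diagnostic tools mentioned above come in: too few components underfit, too many reintroduce noise.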
Abstract:
Computational model-based simulation methods were developed for the modelling of bioaffinity assays. Bioaffinity-based methods are widely used to quantify a biological substance in biological research and development and in routine clinical in vitro diagnostics. Bioaffinity assays are based on the high affinity and structural specificity of the binding between biomolecules. The simulation methods developed are based on a mechanistic assay model, which relies on chemical reaction kinetics and describes the formation of the bound component as a function of time from the initial binding interaction. The simulation methods focused on studying the behaviour and reliability of bioaffinity assays and the possibilities that modelling of binding reaction kinetics provides, such as predicting assay results even before the binding reaction has reached equilibrium. A rapid quantitative result from a clinical bioaffinity assay sample can be very significant; for example, even the smallest elevation of a heart muscle marker reveals a cardiac injury. The simulation methods were used to identify critical error factors in rapid bioaffinity assays. A new kinetic calibration method was developed to calibrate a measurement system from kinetic measurement data utilizing only one standard concentration. A node-based method was developed to model multi-component binding reactions, which have been a challenge for traditional numerical methods. The node-based method was also used to model protein adsorption as an example of nonspecific binding of biomolecules. These methods have been compared with experimental data from practice and can be utilized in in vitro diagnostics, drug discovery and medical imaging.
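The mechanistic kinetics behind such assays reduce, for a 1:1 interaction A + B ⇌ AB, to the rate equation d[AB]/dt = k_on·[A][B] − k_off·[AB]; a minimal forward-Euler sketch (rate constants and concentrations are illustrative, in arbitrary units, not the thesis's assay parameters):

```python
def simulate_binding(A0, B0, k_on, k_off, dt=0.01, steps=10000):
    """Forward-Euler integration of the 1:1 binding reaction A + B <-> AB:
    d[AB]/dt = k_on*[A]*[B] - k_off*[AB], with mass balance
    [A] = A0 - [AB] and [B] = B0 - [AB]."""
    AB = 0.0
    trace = []
    for _ in range(steps):
        A, B = A0 - AB, B0 - AB
        AB += dt * (k_on * A * B - k_off * AB)
        trace.append(AB)
    return trace

# Illustrative run: equal initial concentrations, equal rate constants
trace = simulate_binding(A0=1.0, B0=1.0, k_on=1.0, k_off=1.0)
```

The early part of the trace is what makes pre-equilibrium prediction possible: the initial slope already encodes the rate constants, so the equilibrium value can be extrapolated before it is reached, which is the idea behind rapid kinetic readouts.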
Abstract:
Frequency converters are widely used in industry to enable better controllability and efficiency of variable speed AC motor drives. Despite these advantages, certain challenges concerning the inverter and motor interfacing have been present for decades. As insulated gate bipolar transistors entered the market, the inverter output voltage transition rate increased significantly compared with their predecessors. Inverters operate based on pulse width modulation of the output voltage, and the steep voltage edge fed by the inverter produces a motor terminal overvoltage. The overvoltage causes extra stress to the motor insulation, which may lead to a premature motor failure. The overvoltage is not generated by the inverter alone, but by the combined effect of the motor cable length and the impedance mismatch between the cable and the motor. Many solutions have been shown to limit the overvoltage, and the mainstream products focus on passive filters. This doctoral thesis studies an alternative methodology for motor overvoltage reduction. The focus is on minimization of the passive filter dimensions, physical and electrical, or better yet, on operation without any filter. This is achieved by additional inverter control and modulation. The studied methods are implemented on different inverter topologies, varying in nominal voltage and current. For two-level inverters, the studied method is termed active du/dt. It consists of a small output LC filter, which is controlled by an independent modulator. The overvoltage is limited by a reduced voltage transition rate. For multilevel inverters, an overvoltage mitigation method operating without a passive filter, called edge modulation, is implemented. The method uses the capability of the inverter to produce two switching operations in the same direction to cancel the oscillating voltages of opposite phases. For parallel inverters, two methods are studied.
They are both intended for two-level inverters, but the first uses individual motor cables from each inverter while the other topology applies output inductors. The overvoltage is reduced by interleaving the switching operations to produce a similar oscillation accumulation as with the edge modulation. The implementation of these methods is discussed in detail, and the necessary modifications to the control system of the inverter are presented. Each method is experimentally verified by operating industrial frequency converters with the modified control. All the methods are found feasible, and they provide sufficient overvoltage protection. The limitations and challenges brought about by the methods are discussed.
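The overvoltage mechanism described above follows transmission-line reflection at the motor terminals: the mismatch between the cable surge impedance and the (much higher) motor impedance sets the reflection coefficient, and the first reflected edge adds to the incident one. A hedged sketch of this standard relation (the impedance and DC-link values below are illustrative, not from the thesis):

```python
def reflection_coefficient(z_motor, z_cable):
    """Voltage reflection coefficient at the motor end of the cable:
    gamma = (Z_motor - Z_cable) / (Z_motor + Z_cable)."""
    return (z_motor - z_cable) / (z_motor + z_cable)

def terminal_peak_voltage(v_dc, z_motor, z_cable):
    """Worst-case first-reflection peak at the motor terminals:
    incident edge plus its reflection, V_peak = V_dc * (1 + gamma)."""
    return v_dc * (1 + reflection_coefficient(z_motor, z_cable))

# Illustrative values: high-impedance motor on a low-surge-impedance cable
gamma = reflection_coefficient(z_motor=2000.0, z_cable=100.0)   # ~0.905
peak = terminal_peak_voltage(v_dc=560.0, z_motor=2000.0, z_cable=100.0)
```

With gamma close to 1, the terminal voltage approaches twice the DC-link voltage, which is why slowing the edge (active du/dt) or cancelling the oscillation (edge modulation, interleaved switching) mitigates the stress.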
Abstract:
Knowledge of the behaviour of cellulose, hemicelluloses, and lignin during wood and pulp processing is essential for understanding and controlling the processes. Determination of the monosaccharide composition gives information about the structural polysaccharide composition of wood material and helps when determining the quality of fibrous products. In addition, monitoring of the acidic degradation products gives information about the extent of degradation of lignin and polysaccharides. This work describes two capillary electrophoretic methods developed for the analysis of monosaccharides and for the determination of aliphatic carboxylic acids in alkaline oxidation solutions of lignin and wood. Capillary electrophoresis (CE), in its many variants, is an alternative separation technique to chromatographic methods. In capillary zone electrophoresis (CZE) the fused silica capillary is filled with an electrolyte solution. An applied voltage generates a field across the capillary. The movement of the ions under the electric field is determined by the charge and hydrodynamic radius of the ions. Carbohydrates contain hydroxyl groups that are ionised only in strongly alkaline conditions. After ionisation, the structures are suitable for electrophoretic analysis and identification through either indirect UV detection or electrochemical detection. The current work presents a new capillary zone electrophoretic method, relying on in-capillary reaction and direct UV detection at a wavelength of 270 nm. The method has been used for the simultaneous separation of neutral carbohydrates, including mono- and disaccharides and sugar alcohols. The in-capillary reaction produces negatively charged and UV-absorbing compounds. The optimised method was applied to real samples. The methodology is fast, since no sample preparation other than dilution is required. A new method for aliphatic carboxylic acids in highly alkaline process liquids was also developed.
The goal was to develop a method for the simultaneous analysis of the dicarboxylic acids, hydroxy acids and volatile acids that are oxidation and degradation products of lignin and wood polysaccharides. The CZE method was applied to three process cases. First, the fate of lignin under alkaline oxidation conditions was monitored by determining the level of carboxylic acids from process solutions. In the second application, the degradation of spruce wood using alkaline and catalysed alkaline oxidation were compared by determining carboxylic acids from the process solutions. In addition, the effectiveness of membrane filtration and preparative liquid chromatography in the enrichment of hydroxy acids from black liquor was evaluated, by analysing the effluents with capillary electrophoresis.
Abstract:
Rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated in a single chip. An emerging challenge is the implementation of reliable and efficient interconnection between these cores as well as other components in the systems. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to, for instance, congestion and system control. Additionally, faults can cause problems in multiprocessor systems. These faults can be transient faults or permanent manufacturing faults, or they can appear due to aging. To solve the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes and it should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long. Therefore, the system should be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems.
The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
Abstract:
The purpose of this thesis is twofold. The first and major part is devoted to sensitivity analysis of various discrete optimization problems, while the second part addresses methods applied for calculating measures of solution stability and for solving multicriteria discrete optimization problems. Despite the numerous approaches to stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or of a set of optimal solutions) of the considered problems. Within the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. The quality of a problem solution can also be described in terms of robustness analysis. In this work the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players where the initial coefficients (costs) of the linear payoff functions are subject to perturbations. The investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for the calculation of the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable for calculating the stability radii of NP-hard problems. The last chapter of the thesis focuses on deriving innovative methods based on an interactive optimization approach for solving multicriteria combinatorial optimization problems.
The key idea of the proposed approach is to utilize a parameterized achievement scalarizing function for solution calculation and to direct the interactive procedure by changing the weighting coefficients of this function. In order to illustrate the introduced ideas, a decision making process is simulated for a three-objective median location problem. The concepts, models, and ideas collected and analyzed in this thesis create a good and relevant foundation for developing more complicated and integrated models of postoptimal analysis and for solving the most computationally challenging problems related to it.
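A parameterized achievement scalarizing function of the kind mentioned above can be sketched in Wierzbicki's augmented form, max_i w_i(f_i − z_i) plus a small augmentation term (the weights, reference point, and augmentation coefficient below are illustrative, not the thesis's settings):

```python
def achievement_scalarizing(f, z_ref, w, rho=1e-4):
    """Augmented achievement scalarizing function (minimisation assumed):
    max_i w_i*(f_i - z_i) + rho * sum_i w_i*(f_i - z_i),
    where z_ref is the decision maker's reference point and w the weights."""
    terms = [wi * (fi - zi) for fi, zi, wi in zip(f, z_ref, w)]
    return max(terms) + rho * sum(terms)

# Illustrative: objective vector (2, 3) against reference point (1, 1)
value = achievement_scalarizing([2.0, 3.0], [1.0, 1.0], [1.0, 1.0], rho=0.0)
print(value)  # -> 2.0
```

Changing the weights w steers which objective's deviation from the reference point dominates the max, which is exactly how the interactive procedure directs the search toward different Pareto-optimal solutions; the small rho term breaks ties among weakly optimal points.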
Abstract:
To obtain the desirable accuracy of a robot, there are two techniques available. The first option would be to make the robot match the nominal mathematical model. In other words, the manufacturing and assembling tolerances of every part would be extremely tight so that all of the various parameters would match the "design" or "nominal" values as closely as possible. This method can satisfy most of the accuracy requirements, but the cost increases dramatically as the accuracy requirement rises. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembling tolerances. By modifying the mathematical model in the controller, the actual errors of the robot can be compensated for. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator is used to provide high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, i.e. the differential evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model.
A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a SolidWorks environment to simulate the real experimental validations. Numerical simulations and SolidWorks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
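One of the identification methods named above, differential evolution, can be sketched in its basic DE/rand/1/bin form. This is a generic minimiser run on an illustrative cost function, not the thesis's calibration code; in calibration, the cost would be the pose-error residual of the error model:

```python
import random

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=200, seed=1):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    vectors added to a third, binomial crossover, greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            c_trial = cost(trial)
            if c_trial <= costs[i]:  # greedy replacement
                pop[i], costs[i] = trial, c_trial
    best = min(range(pop_size), key=lambda k: costs[k])
    return pop[best], costs[best]

# Illustrative cost: quadratic bowl with minimum at (1, 2)
best, best_cost = differential_evolution(
    lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2,
    bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

DE needs no gradients, which suits calibration cost functions built from forward-kinematics residuals that are awkward to differentiate; MCMC plays a complementary role by also characterising the uncertainty of the identified parameters.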
Abstract:
In many industrial applications, such as the printing and coatings industry, wetting of porous materials by liquids includes not only imbibition and permeation into the bulk but also surface spreading and evaporation. By understanding these phenomena, valuable information can be obtained for improved process control, runnability and printability, in which liquid penetration and subsequent drying play important quality and economic roles. Knowledge of the position of the wetting front and the distribution/degree of pore filling within the structure is crucial in describing the transport phenomena involved. Although exemplifying paper as a porous medium in this work, the generalisation to dynamic liquid transfer onto a surface, including permeation and imbibition into porous media, is of importance to many industrial and naturally occurring environmental processes. This thesis explains the phenomena in the field of heatset web offset printing but the content and the analyses are applicable in many other printing methods and also other technologies where water/moisture monitoring is crucial in order to have a stable process and achieve high quality end products. The use of near-infrared technology to study the water and moisture response of porous pigmented structures is presented. The use of sensitive surface chemical and structural analysis, as well as the internal structure investigation of a porous structure, to inspect liquid wetting and distribution, complements the information obtained by spectroscopic techniques. Strong emphasis has been put on the scale of measurement, to filter irrelevant information and to understand the relationship between interactions involved. The near-infrared spectroscopic technique, presented here, samples directly the changes in signal absorbance and its variation in the process at multiple locations in a print production line. 
The in-line non-contact measurements are facilitated by using several diffuse reflectance probes, giving the absolute water/moisture content from a defined position in the dynamic process in real time. The near-infrared measurement data illustrate the changes in moisture content as the paper passes through the printing nips and dryer, respectively, and the analysis of the mechanisms involved highlights the roles of the contacting surfaces and the relative liquid carrier properties of both non-image and printed image areas. The thesis includes laboratory studies on the wetting of porous media in the form of coated paper and compressed pigment tablets by mono-, dual-, and multi-component liquids, and paper water/moisture content analysis in both offline and online conditions, thus also enabling direct sampling of temporal water/moisture profiles from multiple locations. One main focus of this thesis was to establish a measurement system able to monitor rapid changes in the moisture content of paper. The study suggests that near-infrared diffuse reflectance spectroscopy can be used as a moisture-sensitive system to provide accurate online qualitative indicators, but also, when accurately calibrated, can provide quantification of water/moisture levels, their distribution and dynamic liquid transfer. Due to the high sensitivity, samples can be measured with excellent reproducibility and a good signal-to-noise ratio. Another focus of this thesis was the evolution of the moisture content, i.e. changes in moisture content referred to as (re)wetting, and the liquid distribution during printing of coated paper. The study confirmed different wetting phases, together with the factors affecting each phase, both for a single droplet and for a liquid film applied on a porous substrate. For a single droplet, initial capillary-driven imbibition is followed by equilibrium pore filling and liquid retreat by evaporation.
In the case of a liquid film applied on paper, the controlling factors defining the transport were concluded to be the applied liquid volume in relation to the surface roughness, capillarity and permeability of the coating, which determine the liquid uptake capacity. The printing trials confirmed moisture gradients in the printed sheet depending on process parameters such as speed, fountain solution dosage and drying conditions, as well as on the printed layout itself. Uneven moisture distribution in the printed sheet was identified as one of the sources of the waving appearance, and the magnitude of the waving was influenced by the drying conditions.
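The capillary-driven imbibition phase described above for a single droplet is classically modelled by the Lucas–Washburn equation, L(t) = sqrt(γ r cosθ t / (2η)); a minimal sketch with illustrative water-like parameters (the pore radius is an assumed example value, not from the thesis):

```python
import math

def washburn_depth(gamma, r, theta_deg, eta, t):
    """Lucas-Washburn imbibition depth into a cylindrical capillary:
    L(t) = sqrt(gamma * r * cos(theta) * t / (2 * eta)),
    with surface tension gamma [N/m], pore radius r [m],
    contact angle theta [deg], viscosity eta [Pa.s], time t [s]."""
    return math.sqrt(gamma * r * math.cos(math.radians(theta_deg)) * t / (2 * eta))

# Illustrative: water (gamma = 0.072 N/m, eta = 1e-3 Pa.s), fully wetting
# (theta = 0), assumed 1 micrometre pore, after 1 second
depth = washburn_depth(gamma=0.072, r=1e-6, theta_deg=0.0, eta=1e-3, t=1.0)
print(depth)  # -> 0.006 (metres, i.e. 6 mm)
```

The square-root time dependence is the signature of capillary-dominated uptake; deviations from it in the measured moisture profiles indicate that permeation, evaporation, or film-volume limits have taken over, matching the phase picture above.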
Abstract:
The ocelot (Leopardus pardalis) is included in the list of wild felid species protected by CITES and is part of conservation strategies that necessarily involve the use of assisted reproduction techniques, which require practical and minimally invasive techniques of high reproducibility that permit the study of the animals' reproductive physiology. The objective of this study was to compare and validate two commercial assays, the ImmuChem Double Antibody Corticosterone 125I RIA (ICN Biomedicals, Costa Mesa, CA, USA) and the Coat-a-Count Cortisol 125I RIA (DPC, Los Angeles, CA, USA), for the assessment of fecal glucocorticoid metabolites in ocelots submitted to an ACTH (adrenocorticotropic hormone) challenge. Fecal samples were collected from five ocelots kept at the Brazilian Center of Neotropical Felines, Associação Mata Ciliar, São Paulo, Brazil, and one of the animals was chosen as a negative control. The experiment was conducted over a period of 9 days. On day 0, a total dose of 100 IU ACTH was administered intramuscularly. Immediately after collection the samples were stored at 20C in labeled plastic bags. The hormone metabolites were subsequently extracted and assayed using the two commercial kits. A preliminary trial was performed with the DPC kit to determine the best extraction method for the hormone metabolites. Data were analyzed with the SAS program for Windows V8 and reported as means ± SEM. The Schwarzenberger extraction method was slightly better than the Wasser extraction method (103,334.56 ± 19,010.37 ng/g and 59,223.61 ± 12,725.36 ng/g of wet feces, respectively; P=0.0657). The ICN kit detected the increase in glucocorticoid metabolite concentrations in a more reliable manner. Metabolite concentrations (ng/g wet feces) on day 0 and day 1 were 66,956.28 ± 36,786.93 and 92,991.19 ± 28,555.63 for the DPC kit, and 205,483.32 ± 83,811.32 and 814,578.75 ± 292,150.47 for the ICN kit, respectively.
The limit of detection for the ICN kit was 7.7 ng/mL at 100% B/B0 (25 ng/mL at 88% B/B0), and for the DPC kit it was 0.2 µg/dL at 90.95% B/B0 (1 µg/dL at 81.27% B/B0). In conclusion, it was confirmed that the Schwarzenberger extraction method and the ICN kit are superior for extracting and measuring fecal glucocorticoid metabolites in ocelot fecal samples.
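Competitive RIA calibration curves such as the B/B0 responses quoted above are commonly fitted with a four-parameter logistic (4PL) model; a hedged sketch of that standard curve and its inversion (all parameter values are illustrative, not the kits' actual calibrations):

```python
def four_parameter_logistic(x, bottom, top, ec50, hill):
    """4PL dose-response curve used for immunoassay calibration:
    response falls from `top` toward `bottom` as concentration x rises,
    with midpoint ec50 and steepness `hill`."""
    return bottom + (top - bottom) / (1 + (x / ec50) ** hill)

def invert_4pl(y, bottom, top, ec50, hill):
    """Solve the 4PL for concentration given a measured response y
    (bottom < y < top required)."""
    return ec50 * ((top - bottom) / (y - bottom) - 1) ** (1.0 / hill)

# Illustrative calibration: B/B0 runs from 100% down to 0%, midpoint at 10
response = four_parameter_logistic(5.0, bottom=0.0, top=100.0, ec50=10.0, hill=1.5)
concentration = invert_4pl(response, 0.0, 100.0, 10.0, 1.5)  # recovers 5.0
```

In practice the four parameters are fitted from the kit's standard concentrations, and unknown samples are read off by inverting the curve as above; the detection limit corresponds to the smallest response distinguishable from the top (100% B/B0) plateau.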