967 results for Processing methods


Relevância: 60.00%

Publicador:

Resumo:

Fan systems are responsible for approximately 10% of the electricity consumption in the industrial and municipal sectors, and an energy-saving potential has been identified in these systems. To this end, variable speed drives (VSDs) are used to improve the efficiency of fan systems. Fan system operation is usually optimized based on measurements of the system, but meters that could serve this purpose are seldom readily installed. Sensorless methods are therefore needed for optimizing fan system operation. In this thesis, methods for estimating the fan operating point with a variable speed drive are studied and discussed. These methods allow energy-efficient control of the fan system without additional measurements. Their operation is validated by laboratory measurements and by data from an industrial fan system. In addition to energy consumption, condition monitoring of fan systems is a key issue, as fans are an integral part of various production processes. Fan system condition monitoring is usually carried out with vibration measurements, which again increase system complexity. However, variable speed drives are already used for pumping system condition monitoring. It would therefore add to the usability of a variable-speed-driven fan system if the variable speed drive could also serve as a condition monitoring device. In this thesis, sensorless detection methods for three lifetime-reducing phenomena are suggested: fan contamination build-up, incorrect rotational direction, and fan surge. The methods use the monitoring and control options of the variable speed drive together with simple signal processing methods, such as power spectral density estimates. The methods have been validated by laboratory measurements.
The key finding of this doctoral thesis is that a variable speed drive can be used on its own as a monitoring and control device for the fan system energy efficiency, and it can also be used in the detection of certain lifetime-reducing phenomena.
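The kind of signal processing step the abstract mentions, a power spectral density estimate of a quantity the drive already measures, can be sketched as follows. This is a minimal illustration, not the thesis's algorithm: the signal model, the 24 Hz disturbance frequency, and all numeric values are assumptions for demonstration.

```python
# Sketch: detecting a periodic disturbance in a drive's power signal via a
# power spectral density (PSD) estimate. All signal parameters are illustrative.
import numpy as np
from scipy.signal import welch

fs = 1000.0                         # sampling rate of the drive's power estimate [Hz]
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic "drive power" signal: constant load + a 24 Hz oscillation that
# could indicate e.g. surge-like pulsation, plus measurement noise.
power_signal = 5.0 + 0.3 * np.sin(2 * np.pi * 24 * t) + 0.05 * rng.standard_normal(t.size)

# Welch PSD of the mean-removed signal; the dominant bin flags the disturbance.
f, psd = welch(power_signal - power_signal.mean(), fs=fs, nperseg=2048)
peak_freq = f[np.argmax(psd)]
print(f"dominant component at {peak_freq:.1f} Hz")
```

A monitoring rule would then compare the power in such a band against a baseline recorded for the healthy fan.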

Relevância: 60.00%

Publicador:

Resumo:

Ion mobility spectrometry (IMS) is a straightforward, low-cost method for the fast and sensitive determination of organic and inorganic analytes. Originally this portable technique was applied to the determination of gas-phase compounds in security and military use. Nowadays, IMS receives increasing attention in environmental and biological analysis and in food quality determination. This thesis comprises a literature review of sample preparation and introduction methods for liquid matrices applicable to IMS, from its early development stages to date. Thermal desorption, solid-phase microextraction (SPME), and membrane extraction were examined in experimental investigations of hazardous aquatic pollutants and potential pollutants. The effect of different natural waters on extraction efficiency was also studied, and the IMS data processing methods utilised are discussed. Parameters such as extraction and desorption temperatures, extraction time, SPME fibre depth, SPME fibre type, and salt addition were examined for the studied sample preparation and introduction methods. The critical parameters observed were the extracting material and the temperature. The extraction methods proved time- and cost-effective because sampling could be performed in single-step procedures, and from different natural water matrices, within a few minutes. Based on these experimental and theoretical studies, the most suitable method to test in an automated monitoring system is membrane extraction. In the future, an IMS-based early warning system for monitoring water pollutants could help ensure a safe supply of drinking water. IMS can also be utilised for monitoring natural waters in cases of environmental leakage or chemical accidents. Combined with sophisticated sample introduction methods, IMS possesses the potential for both on-line and on-site identification of analytes in different water matrices.
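A routine IMS data-processing step of the kind the abstract alludes to is converting a measured drift time into a reduced mobility K0, normalised to standard temperature and pressure. The sketch below uses the textbook relation K = L²/(V·t_d); every numeric value is an assumption for illustration, not data from the thesis.

```python
# Standard IMS data reduction: drift time -> reduced mobility K0.
# Instrument dimensions and the drift time below are illustrative assumptions.

def reduced_mobility(drift_time_s, drift_length_cm, drift_voltage_v,
                     pressure_torr, temperature_k):
    """K0 in cm^2 V^-1 s^-1, normalised to 273.15 K and 760 Torr."""
    # Mobility from drift length L, drift voltage V, drift time t_d: K = L^2 / (V t_d)
    mobility = drift_length_cm ** 2 / (drift_voltage_v * drift_time_s)
    # Normalise to standard temperature and pressure.
    return mobility * (pressure_torr / 760.0) * (273.15 / temperature_k)

k0 = reduced_mobility(drift_time_s=14.2e-3, drift_length_cm=7.0,
                      drift_voltage_v=2400.0, pressure_torr=760.0,
                      temperature_k=298.15)
print(f"K0 = {k0:.2f} cm^2/(V s)")
```

Peak positions reported as K0 rather than raw drift times are comparable across instruments and ambient conditions, which is what makes library matching of analytes possible.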

Relevância: 60.00%

Publicador:

Resumo:

Ionic liquids (ILs) have recently been studied with accelerating interest as media for the deconstruction/fractionation, dissolution, or pretreatment of lignocellulosic biomass. ILs are usually applied in combination with heat. Regarding lignocellulosic recalcitrance toward fractionation, most of the studies concern IL utilization in the biomass fermentation process prior to the enzymatic hydrolysis step. It has been demonstrated that IL pretreatment gives more efficient hydrolysis of the biomass polysaccharides than enzymatic hydrolysis alone. Both cellulose (especially crystalline cellulose) and lignin are very resistant toward fractionation and even dissolution methods. As an example, softwood, hardwood, and grass-type plant species have different lignin structures, so that softwood lignin (in which guaiacyl lignin dominates) is the most difficult to solubilize or chemically disrupt. In addition to the known conventional biomass processing methods, several ILs have been found to efficiently dissolve cellulose and/or wood samples; different ILs are suitable for different purposes. An IL treatment of wood usually results in a non-fibrous pulp in which lignin is not efficiently separated, and wood components can be selectively precipitated, as cellulose is not soluble or degradable in ionic liquids under mild conditions. Nevertheless, new ILs capable of rather good fractionation performance have recently emerged. The capability of an IL to dissolve or deconstruct wood or cellulose depends on several factors (e.g., sample origin, biomass particle size, mechanical pretreatments such as pulverization, the initial biomass-to-IL ratio, the water content of the biomass, possible IL impurities, reaction conditions, and temperature). The aim of this study was to obtain (fermentable) saccharides and other valuable chemicals from wood by a combined heat and IL treatment.
Thermal treatment alone contributes to the degradation of polysaccharides (a temperature of 150 °C alone is reported to cause such degradation), so lower temperatures should be used if the research interest lies in the effectiveness of the IL. On the other hand, the efficiency of the IL treatment can also be enhanced by combining it with other treatment methods (e.g., microwave heating). Samples of spruce, pine, and birch sawdust were treated with either 1-ethyl-3-methylimidazolium chloride (Emim Cl) or 1-ethyl-3-methylimidazolium acetate (Emim Ac), or with deionized water for comparison, at various temperatures (with a focus on 80-120 °C). Samples were withdrawn at fixed time intervals (treatment times of main interest lay between 0 and 100 hours). Experiments were performed in duplicate. The selected mono- and disaccharides, as well as their known degradation products 5-hydroxymethylfurfural (5-HMF) and furfural, were analyzed with capillary electrophoresis (CE) and high-performance liquid chromatography (HPLC). Initially, GC and GC-MS were also utilized. Galactose, glucose, mannose, and xylose were the main monosaccharides present in the wood samples exposed to ILs at elevated temperatures; in addition, furfural and 5-HMF were detected, and the amounts of these two naturally increased with heating time and the IL:wood ratio.

Relevância: 60.00%

Publicador:

Resumo:

This study is a literature review of laser scribing in monolithically interconnected thin-film PV modules, focusing on the efficiency of modules based on the absorber materials CIGS, CdTe, and a-Si. In thin-film PV module manufacturing, scribing is used to interconnect individual cells monolithically via the P1, P2, and P3 scribes. Laser scribing has several advantages over mechanical scribing for this purpose. However, laser scribing of thin films can be a challenging process and may induce efficiency-reducing defects. Some of these defects can be avoided by improved process optimisation or alternative processing methods.

Relevância: 60.00%

Publicador:

Resumo:

Electrical machine drives are the largest consumers of electrical energy worldwide. The largest proportion of drives is found in industrial applications. There are, however, many other applications that are also based on electrical machines, because they have a relatively high efficiency and a low noise level, and do not produce local pollution. Electrical machines can be classified into several categories. One of the most commonly used types (especially in industry) is the induction motor, also known as the asynchronous machine. Induction motors have a mature production process and a robust rotor construction. However, in a world pursuing higher energy efficiency with reasonable investments, not every application benefits from this type of motor drive. The main drawback of induction motors is that they need slip-caused, and thus loss-generating, current in the rotor, as well as additional stator current for magnetic field production alongside the torque-producing current. This can reduce the drive efficiency, especially in low-speed, low-power applications. Often, when high torque density is required together with low losses, it is desirable to apply permanent magnet technology, because in this case there is no need to use current to produce the basic excitation of the machine. This promotes the effective use of copper in the stator, and furthermore, there is no rotor current in these machines. Moreover, if permanent magnets with a high remanent flux density are used, the air-gap flux density can be higher than in conventional induction motors. These advantages have raised the popularity of PMSMs in some challenging applications, such as hybrid electric vehicles (HEVs), wind turbines, and home appliances. Usually, a correctly designed PMSM has a higher efficiency and consequently lower losses than its induction machine counterparts.
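The slip-loss drawback described above can be illustrated with the textbook steady-state power balance of an induction motor: the rotor copper loss equals slip times air-gap power. The operating-point numbers below are assumptions chosen for illustration, not figures from the dissertation.

```python
# Textbook induction-motor power balance: rotor copper loss = slip * air-gap power,
# shaft (mechanical) power = (1 - slip) * air-gap power.
# The air-gap power and slip values are illustrative assumptions.

def rotor_copper_loss(airgap_power_w: float, slip: float) -> float:
    return slip * airgap_power_w

def shaft_power(airgap_power_w: float, slip: float) -> float:
    return (1.0 - slip) * airgap_power_w

p_airgap = 10_000.0   # air-gap power [W]
s = 0.04              # 4 % slip, typical for a small machine at rated load

p_loss = rotor_copper_loss(p_airgap, s)   # power dissipated as heat in the rotor
p_shaft = shaft_power(p_airgap, s)        # power delivered to the shaft
print(p_loss, p_shaft)
```

In a PMSM the rotor carries no slip current, so this loss term simply disappears, which is the efficiency argument the abstract makes for permanent magnet technology.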
Therefore, the use of these electrical machines reduces the energy consumption of the whole system to some extent, which provides good motivation to apply permanent magnet technology to electrical machines. However, the cost of the high-performance rare earth permanent magnets in these machines may not be affordable in many industrial applications, because tight competition between manufacturers dictates low-cost and highly robust solutions, where asynchronous machines seem more feasible at the moment. The two main electromagnetic components of an electrical machine are the stator and the rotor. In a conventional radial-flux PMSM, the stator contains the magnetic circuit lamination and the stator winding, and the rotor consists of rotor steel (laminated or solid) and permanent magnets. The lamination itself does not significantly influence the total cost of the machine, even though it can considerably increase the construction complexity, as it requires a special assembly arrangement. However, thin-metal-sheet processing methods are very effective and economically feasible. Therefore, the cost of the machine is mainly determined by the stator winding and the permanent magnets. The work in this doctoral dissertation comprises a description and analysis of two approaches to PMSM cost reduction: one on the rotor side and the other on the stator side. The first approach, on the rotor side, uses low-cost and abundant ferrite magnets together with a tooth-coil winding topology and an outer-rotor construction. The second approach, on the stator side, exploits a modular stator structure instead of a monolithic one. PMSMs with the proposed structures were thoroughly analysed with finite element method (FEM) based tools. It was found that by implementing the described principles, some favourable characteristics of the machine (mainly concerning the machine size) will inevitably be compromised.
However, the main target of the proposed approaches is not to compete with conventional rare earth PMSMs, but to reduce the price at which they can be implemented in industrial applications, keeping their dimensions at the same level or lower than those of a typical electrical machine used in the industry at the moment. The measurement results of the prototypes show that the main performance characteristics of these machines are at an acceptable level. It is shown that with certain specific actions it is possible to achieve a desirable efficiency level of the machine with the proposed cost reduction methods.

Relevância: 60.00%

Publicador:

Resumo:

Several physical and chemical methods are used for mineral beneficiation. The process comprises comminution of the ore, concentration, and finally dewatering of the concentrate slurry. Ore is concentrated by, among other methods, flotation, leaching, magnetic separation, and density-based separation methods. Water can be removed from the concentrate slurry by thickening and filtration. The environmental impact of a beneficiation process can be assessed by calculating the water footprint of the product, which states the amount of water consumed in its production. This literature study presents mineral processing methods as well as treatment methods for process wastewaters. Based on literature sources, the water footprint of the copper anode produced at the Pyhäsalmi mine was determined, and methods were proposed by which the raw-water consumption of the process could be reduced. The water footprint of a copper anode produced from Pyhäsalmi copper concentrate is 240 litres of H2O equivalent per tonne produced. The raw-water consumption of the Pyhäsalmi process can be reduced by increasing internal water recycling. Precipitation of calcium sulphate in pipes and pumps has emerged as a problem when water recycling is increased. Calcium sulphate can be separated from water by techniques based on membranes, ion exchange, and electrochemistry. In the alternative where the overflows from the thickening of the concentrate slurry and tailings from all three flotation circuits, together with the filtrate waters from filtration, are directed to a common water treatment, an estimated 65% of the total water demand can be covered. This would save 3.4 Mm³ of raw water per year and at the same time reduce the required size of the tailings ponds, which decreases environmental risks.
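The savings figures quoted above can be cross-checked with simple arithmetic, assuming the 65 % share refers to the plant's total water demand; the calculation below is an illustrative back-of-the-envelope sketch, not part of the original study.

```python
# Back-of-the-envelope check of the quoted figures (assumption: the 65 % share
# of recycled water applies to the plant's total water demand).
recycled_share = 0.65
raw_water_saved_m3 = 3.4e6          # 3.4 Mm^3 of raw water saved per year

implied_total_demand_m3 = raw_water_saved_m3 / recycled_share
print(f"implied total water demand ~ {implied_total_demand_m3 / 1e6:.1f} Mm^3/a")

# Water footprint of the copper anode: 240 L H2O-eq per tonne produced.
footprint_l_per_t = 240.0
tonnes = 1000.0
footprint_m3 = footprint_l_per_t * tonnes / 1000.0
print(f"{footprint_m3:.0f} m^3 H2O-eq per 1000 t of anode")
```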

Relevância: 60.00%

Publicador:

Resumo:

Dark matter has been a mystery in astrophysics for many years. Numerous observations indicate that up to 85% of the total gravitational mass of the universe is composed of this matter of unknown nature. One theory explaining this missing mass considers WIMPs (Weakly Interacting Massive Particles), stable, uncharged particles predicted by extensions of the Standard Model, as candidates. The PICASSO project (Projet d'Identification des CAndidats Supersymétriques à la matière Sombre) is an experiment that attempts to detect WIMPs directly. The project uses detectors containing superheated droplets of freon (C4F10). A collision between a WIMP and a fluorine nucleus creates a nuclear recoil, which in turn triggers a phase transition of the liquid droplet into a gaseous bubble. The sound of this phenomenon is picked up by piezoelectric sensors mounted on the detector walls. The WIMP, however, is not the only particle that can cause such a phase transition. Other surrounding particles can form bubbles, such as alpha particles or even gamma rays. The data acquisition system (DAQ) is also subject to electronic noise that can be recorded, and is sensitive to acoustic noise from outside the detector. Finally, fractures in the polymer that holds the droplets in place can also cause spontaneous phase transitions. The impact of all these different backgrounds must therefore be minimized. The purity of the materials used in detector fabrication thus becomes very important. Methods involving discrimination variables, developed with the aim of improving the WIMP-detection exclusion limits, are also employed.

Relevância: 60.00%

Publicador:

Resumo:

Sonar signal processing comprises a large number of signal processing algorithms implementing functions such as Target Detection, Localisation, Classification, Tracking, and Parameter Estimation. Current implementations of these functions rely on conventional techniques largely based on Fourier methods, which are primarily meant for stationary signals. Interestingly, the signals received by sonar sensors are often non-stationary, and hence processing methods capable of handling this non-stationarity will fare better than Fourier-transform-based methods. Time-frequency methods (TFMs) are among the best DSP tools for non-stationary signal processing, allowing signals to be analysed in the time and frequency domains simultaneously. However, apart from the STFT, TFMs have been largely limited to academic research because of the complexity of the algorithms and the limitations of computing power. With the availability of fast processors, many applications of TFMs have been reported in speech and image processing and in biomedical applications, but few in sonar processing. A structured effort to fill this lacuna by exploring the potential of TFMs in sonar applications is the net outcome of this thesis. To this end, four TFMs have been explored in detail, viz. the Wavelet Transform, the Fractional Fourier Transform, the Wigner-Ville Distribution, and the Ambiguity Function, and their potential in implementing five major sonar functions has been demonstrated with very promising results. What has been conclusively brought out in this thesis is that there is no "one best TFM" for all applications, but there is "one best TFM" for each application. Accordingly, the TFM has to be adapted and tailored in many ways in order to develop specific algorithms for each application.
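The core point, that a non-stationary signal needs a joint time-frequency view, can be shown with the simplest TFM the abstract names, the STFT. The sketch below is illustrative only (the chirp parameters are assumptions, not sonar data from the thesis): a single Fourier spectrum would smear the chirp's energy over a wide band, while the spectrogram resolves its rising instantaneous frequency.

```python
# Why time-frequency methods suit non-stationary signals: an STFT spectrogram
# resolves a linear chirp whose frequency sweep a single Fourier spectrum hides.
# Chirp parameters are illustrative assumptions.
import numpy as np
from scipy.signal import chirp, spectrogram

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
x = chirp(t, f0=100.0, t1=1.0, f1=1500.0, method="linear")  # 100 -> 1500 Hz sweep

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
# Track the dominant frequency in each time slice (the spectrogram "ridge").
ridge = f[np.argmax(Sxx, axis=0)]
print(f"start ~{ridge[0]:.0f} Hz, end ~{ridge[-1]:.0f} Hz")
```

The ridge rises roughly linearly with time, recovering the sweep that a time-averaged spectrum cannot show; the finer TFMs the thesis studies trade off resolution and cross-terms differently around this same idea.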

Relevância: 60.00%

Publicador:

Resumo:

The quality of minced fish, as mentioned earlier, depends largely on the type and quality of the raw material used, as well as on the processing methods employed. Moreover, fish mincing involves cutting up tissues, thereby greatly increasing the surface area and releasing enzymes and nutrients from the tissues. Due to these factors, fish mince is relatively more prone to chemical, autolytic, and microbial spoilage. Hence, study of minced fish with these factors in focus is very important. Equally important are the availability, price, and preference of the raw material vis-à-vis the end products and the storage period it passes through. In the present study, changes in the bacterial flora, both quantitative and qualitative, of the dressed fish, viz. Nemipterus japonicus, and of mince from the same fish during freezing and frozen storage have been investigated in detail. The effect of a preservative, viz. EDTA, on the bacteriological and shelf-life characteristics of the minced fish has also been investigated. Attempts have also been made to develop various products from mince and to study their storage life.

Relevância: 60.00%

Publicador:

Resumo:

Frozen storage characteristics and shelf life vary considerably among species as well as within a species (Powrie, 1973; Fennema, 1973). This can be attributed to the variation in the composition of fish among species. In certain species, like sardines and mackerel, wide seasonal variations in chemical composition occur within the species. These variations affect quality and shelf life. The nutritional level of the water, spawning, method of catching, struggling, etc. are found to have a profound influence on the condition of the fresh fish. Soon after death, deteriorative changes in fish start due to autolysis and bacterial growth. The rate of these changes depends mainly on temperature. Handling methods have a great influence on bacterial contamination. Thus the type of handling, temperature control, period of chill storage, processing methods, type of freezing, conditions of frozen storage, and period of storage affect the quality and shelf life of the fish. In the present study, extensive investigations were carried out on various factors affecting the quality of fish, as well as on their effect on the physical, chemical, and sensory qualities of fish during frozen storage and on the shelf life.

Relevância: 60.00%

Publicador:

Resumo:

Unprocessed seafood harbors high numbers of bacteria and hence is more prone to spoilage. In this circumstance, the use of spices in fish for the reduction of microorganisms can play an important role in seafood processing. Many essential oils from herbs and spices are used widely in the food, health, and personal care industries and are classified as GRAS (Generally Regarded As Safe) substances or are permitted food additives. A large number of these compounds have been the subject of extensive toxicological scrutiny. However, their principal function is to impart desirable flavours and aromas, not necessarily to act as antimicrobial agents. Given the high flavour and aroma impact of plant essential oils, the future of using these compounds as food preservatives lies in the careful selection and evaluation of their efficacy at low concentrations in combination with other chemical preservatives or preservation processes. For this reason, they are worthy of study, alone or in combination with processing methods, in order to establish whether they could extend the shelf life of foods. In this study, the effect of the spices clove, turmeric, cardamom, oregano, rosemary, and garlic in controlling spoilage and pathogenic bacteria is investigated. Their effect on biogenic amine formation in tuna, especially histamine, as a result of bacterial control is also studied in detail. The contribution of spice oleoresins to sensory and textural parameters is investigated using texture profile analysis and a sensory panel. Finally, the potential of spices in quality stabilization and in increasing the shelf life of tuna during frozen storage is analysed.

Relevância: 60.00%

Publicador:

Resumo:

This paper compares the most common digital signal processing methods for exon prediction in eukaryotes, and also proposes a technique for noise suppression in exon prediction. The specimen used here, which has relevance in medical research, has been taken from the public genomic database GenBank. Exon prediction has been carried out using the digital signal processing methods viz. the binary method, the EIIP (electron-ion interaction pseudopotential) method, and filter methods. Under the filter methods, two filter designs, and two approaches using these designs, have been tried. The discrete wavelet transform has been used for de-noising the exon plots. Results of exon prediction based on the methods mentioned above, which give values closest to those found in the NCBI database, are given here. The exon plot de-noised using the discrete wavelet transform is also given. The authors' alterations to the proven methods improve the performance of the exon prediction algorithms. It has also been shown that the discrete wavelet transform is an effective de-noising tool that can be used with exon prediction algorithms.
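The binary method mentioned above rests on the well-known period-3 property of protein-coding DNA: mapping the sequence to four binary indicator sequences and summing their DFT power at the period-3 bin yields a score that is elevated in exons. The sketch below uses toy sequences of my own, not the GenBank specimen the paper analyses.

```python
# Sketch of the binary (indicator-sequence) method for exon prediction:
# spectral power at period 3 is elevated in coding-like DNA.
# The toy sequences are illustrative, not the paper's GenBank specimen.
import numpy as np

def period3_power(seq: str) -> float:
    n = len(seq)
    k = n // 3                       # DFT bin corresponding to period 3
    total = 0.0
    for base in "ACGT":
        # Binary indicator sequence: 1 where this base occurs, else 0.
        indicator = np.array([1.0 if b == base else 0.0 for b in seq])
        spectrum = np.fft.fft(indicator)
        total += abs(spectrum[k]) ** 2
    return total

coding_like = "ATG" * 60             # strong period-3 structure, length 180
random_like = "ATGCCGTATAGC" * 15    # no period-3 bias, same length
print(period3_power(coding_like), period3_power(random_like))
```

In practice the score is computed in a sliding window along the genome, and the resulting exon plot is what the paper then de-noises with the discrete wavelet transform.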

Relevância: 60.00%

Publicador:

Resumo:

A fundamental principle in practical nonlinear data modelling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross validation is often used to estimate generalization errors by choosing amongst different network architectures (M. Stone, "Cross-validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 111-147, 1974). Based upon the minimization of LOO criteria, either the mean square of the LOO errors or the LOO misclassification rate, we present two backward elimination algorithms as model post-processing procedures for regression and classification problems, respectively. The proposed backward elimination procedures exploit an orthogonalization procedure to enforce orthogonality between the subspace spanned by the pruned model and the deleted regressor. It is then shown that the LOO criteria used in both algorithms can be calculated via analytic recursive formulae, as derived in this contribution, without actually splitting the estimation data set, thus reducing computational expense. Compared to most other model construction methods, the proposed algorithms are advantageous in several respects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic, without an additional stopping criterion; and (iii) model structure selection is based directly on model generalization performance. Illustrative examples on regression and classification demonstrate that the proposed algorithms are viable post-processing methods for pruning a model to gain extra sparsity and improved generalization.
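The key computational trick, obtaining LOO errors analytically instead of refitting n times, can be shown in its textbook special case for an ordinary linear-in-the-parameters least-squares model: e_loo,i = e_i / (1 - h_ii), where h_ii are the diagonal entries of the hat matrix. This is a sketch of the general idea, not the paper's specific recursions for the orthogonalized pruned model.

```python
# Analytic LOO residuals for linear least squares vs. brute-force refitting.
# e_loo_i = e_i / (1 - h_ii), with H = X (X^T X)^{-1} X^T the hat matrix.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.standard_normal(50)

# Analytic route: one fit, then a per-sample correction factor.
H = X @ np.linalg.solve(X.T @ X, X.T)
resid = y - H @ y
loo_analytic = resid / (1.0 - np.diag(H))

# Brute force: actually refit with each sample left out.
loo_brute = np.empty_like(y)
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    loo_brute[i] = y[i] - X[i] @ beta

print(np.max(np.abs(loo_analytic - loo_brute)))
```

Both routes agree to numerical precision, but the analytic one costs a single fit, which is exactly why the paper's recursive LOO formulae make backward elimination affordable.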

Relevância: 60.00%

Publicador:

Resumo:

The objective of a visual telepresence system is to provide the operator with a high-fidelity image from a remote stereo camera pair linked to a pan/tilt device, such that the operator may reorient the camera position by head movement. Systems such as these, which utilise virtual-reality-style helmet-mounted displays, have a number of limitations. The geometry of the camera positions and of the displays is generally fixed and is most suitable only for viewing elements of a scene at a particular distance. To address such limitations, a prototype system has been developed in which the geometry of the displays and cameras is dynamically controlled by the eye movement of the operator. This paper explores why it is necessary to actively adjust the display system as well as the cameras, and justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image processing methods. The electronic and mechanical design is described, including the optical arrangements and control algorithms. The performance and accuracy of the system are assessed with respect to eye movement.
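Why a fixed camera geometry suits only one viewing distance follows from simple stereo geometry: the symmetric vergence angle needed to fixate a point depends on its distance. The sketch below assumes a hypothetical camera baseline of 65 mm (roughly interocular); it is an illustration of the geometric argument, not a calculation from the paper.

```python
# Symmetric vergence angle of a stereo camera pair fixating a point at a given
# distance: theta = 2 * atan(baseline / (2 * distance)).
# The 65 mm baseline is a hypothetical, roughly interocular, assumption.
import math

def vergence_deg(baseline_m: float, distance_m: float) -> float:
    """Total convergence angle (degrees) for fixation at distance_m."""
    return math.degrees(2.0 * math.atan(baseline_m / (2.0 * distance_m)))

baseline = 0.065
for d in (0.5, 1.0, 5.0):
    print(f"{d} m -> {vergence_deg(baseline, d):.2f} deg")
```

Since the required angle falls rapidly with distance, any single fixed camera/display geometry is correct for only one depth, which motivates the eye-movement-driven dynamic adjustment the prototype implements.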

Relevância: 60.00%

Publicador:

Resumo:

Visual telepresence systems which utilise virtual-reality-style helmet-mounted displays have a number of limitations. The geometry of the camera positions and of the displays is fixed and is most suitable only for viewing elements of a scene at a particular distance. In such a system, the operator's ability to gaze around without head movement is severely limited: a trade-off must be made between poor viewing resolution and a narrow field of view. To address these limitations, a prototype system has been developed in which the geometry of the displays and cameras is dynamically controlled by the eye movement of the operator. This paper explores why it is necessary to actively adjust both the display system and the cameras, and justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image processing methods. The electronic and mechanical design is described, including the optical arrangements and control algorithms. The performance of the system is assessed against a fixed camera/display system, with operators assigned basic tasks involving depth and distance/size perception. The sensitivity to variations in the transient performance of the display and camera vergence is also assessed.