873 results for data acquisition
Abstract:
The rapid data acquisition, natural fluorescence rejection and experimental ease of ultra-fast Raman loss scattering (URLS) make it a unique and valuable molecular structure-determining technique. URLS is an analogue of stimulated Raman scattering (SRS) but far more sensitive. It involves the interaction of two laser sources, viz. a picosecond (ps) pulse and white light, with the sample, leading to the generation of a loss signal on the higher-energy (blue) side of the ps pulse wavelength, unlike the gain signal observed on the red side in SRS. These loss signals are at least 1.5 times more intense than the SRS signals. Moreover, since the experimental protocol requires detection on the higher-energy side, interference from fluorescence, which always appears on the red side, is eliminated by design. Unlike coherent anti-Stokes Raman scattering, URLS signals are not hampered by a non-resonant background under resonance conditions, and, being a self-phase-matched process, URLS is experimentally simpler.
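Since the argument turns on where the loss line falls relative to the pump, a short worked example may help: wavenumbers add directly, wavelengths do not. The pump wavelength and Raman shift below are illustrative values, not taken from the paper.

```python
# Where the URLS loss line falls: the loss signal sits on the higher-energy
# (blue) side of the picosecond pump, offset by the vibrational Raman shift.
# Wavenumbers (cm^-1) add directly; wavelengths do not.

def urls_loss_wavelength_nm(pump_nm: float, raman_shift_cm1: float) -> float:
    """Wavelength (nm) of the loss line for a given pump and Raman shift."""
    pump_cm1 = 1e7 / pump_nm                # pump wavenumber in cm^-1
    return 1e7 / (pump_cm1 + raman_shift_cm1)

# Illustrative values: an 800 nm ps pump and a 1000 cm^-1 mode put the loss
# line near 740.7 nm, well to the blue of any fluorescence.
print(round(urls_loss_wavelength_nm(800.0, 1000.0), 1))
```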
Abstract:
In 2004–2007, the Department of Forest Resource Management at the University of Helsinki carried out the SIMO project to develop a new-generation planning system for forest management. The project parties are the organisations that do most Finnish forest planning in state, industry and privately owned forests. The aim of this study was to identify the needs and requirements for the new forest planning system and to clarify how the parties see the targets and processes of today's forest planning. The representatives responsible for forest planning in each organisation were interviewed one by one. According to the study, the stand-based system for managing and treating forests will continue in the future. Owing to varied data acquisition methods with different accuracies and sources, and to the development of single-tree interpretation, more and more forest data is collected without field work. The benefits of using more specific forest data also call for information units smaller than the tree stand. In Finland, the traditional way to arrange the forest planning computation is divided into two elements. After the forest data has been updated to the present situation, the growth of every stand is simulated under different alternative treatment schedules. After simulation, optimisation selects one treatment schedule for every stand so that the management programme satisfies the owner's goals in the best possible way. This arrangement will be retained in the future system. The parties' requirements to add multi-criteria problem solving, group decision support methods, and heuristic and spatial optimisation to the system make the programming work more challenging. In general, the new system is expected to be adjustable and transparent; strict documentation and free source code help to bring these expectations into effect. Varying growth models and treatment schedules, with different source information, accuracy, methods and processing speeds, are expected to work easily in the system. Possibilities to calibrate models regionally and to set local parameters that change over time are also required. In the future, the forest planning system will be integrated into comprehensive data management systems together with geographic, economic and work supervision information. This requires a modular implementation of the system and a simple data transmission interface between modules and with other systems. No major differences in the parties' views of the system's requirements were noticed in this study; rather, the interviews completed the full picture from slightly different angles. Within the organisations, forest planning is considered quite inflexible, and it only draws the strategic lines. It does not yet have a role in operative activity, although the need for and benefits of team-level forest planning are acknowledged. The demands and opportunities of varied forest data, new planning goals and the development of information technology are recognised, and the party organisations want to keep pace with this development. One example is their engagement in the extensive SIMO project, which connects the whole field of forest planning in Finland.
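The two-step simulate-then-optimise arrangement described above can be sketched compactly. The stand data, treatment alternatives and utility weights below are hypothetical placeholders for the real growth simulators and optimisers; a per-stand greedy choice stands in for the full optimisation.

```python
# Sketch of the two-step planning computation: simulate alternative treatment
# schedules for each stand, then select one schedule per stand that best
# satisfies the owner's goals. Stand data, treatments and the utility weights
# are hypothetical; a per-stand greedy choice stands in for the optimiser.

from dataclasses import dataclass

@dataclass
class Schedule:
    name: str
    income: float       # simulated net income over the planning period
    end_volume: float   # simulated standing volume at the period's end

def simulate(stand_id: int) -> list[Schedule]:
    """Placeholder for growth simulation under alternative treatments."""
    return [
        Schedule("no treatment", 0.0, 250.0),
        Schedule("thinning", 1800.0, 180.0),
        Schedule("clearcut", 9500.0, 0.0),
    ]

def owner_utility(s: Schedule, w_income: float = 0.7) -> float:
    """Toy additive utility trading off income against retained volume."""
    return w_income * s.income + (1 - w_income) * 10 * s.end_volume

plan = {stand: max(simulate(stand), key=owner_utility) for stand in (1, 2, 3)}
for stand, schedule in plan.items():
    print(f"stand {stand}: {schedule.name}")
```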
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades, owing to their geomorphological importance as the reference surface for gravitation-driven material flow and to their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, the errors of the model accumulate in the analysis results. The investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine-toposcale DEMs, which are typically represented in a 5–50 m grid and used at application scales of 1:10 000–1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, performing analytical and simulation-based error propagation analysis, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine-toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, global characterisation of DEM error is a gross generalisation of reality, because the areas within which the assumption of stationarity is not violated are small. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error increases the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this opinion is now challenged, because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. A significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution to generate realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineation is presented.
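As a minimal sketch of what simulation-based error propagation with a process-convolution error model looks like in practice: spatially autocorrelated error surfaces are generated by smoothing white noise with a Gaussian kernel, added to the DEM, and a derivative (here slope) is recomputed per realisation. The DEM, error magnitude and correlation range below are synthetic assumptions, not the thesis's data.

```python
# Monte Carlo error propagation for a DEM derivative (here: slope).
# Spatially autocorrelated error realisations come from process convolution:
# white noise smoothed with a Gaussian kernel, rescaled to the target error
# standard deviation. DEM and error parameters are synthetic assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
dem = rng.normal(100.0, 5.0, size=(200, 200))      # synthetic DEM, 10 m grid
cell, sigma_z, corr, n_runs = 10.0, 1.0, 3.0, 100  # grid size, error std, kernel, runs

def slope_deg(z: np.ndarray) -> np.ndarray:
    """Slope in degrees from finite-difference surface gradients."""
    gy, gx = np.gradient(z, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

realisations = []
for _ in range(n_runs):
    noise = gaussian_filter(rng.standard_normal(dem.shape), corr)
    noise *= sigma_z / noise.std()                 # enforce the error std
    realisations.append(slope_deg(dem + noise))

slope_std = np.std(realisations, axis=0)           # propagated per-cell error
print(f"mean slope uncertainty: {slope_std.mean():.2f} degrees")
```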
Abstract:
This thesis evaluates the security of Supervisory Control and Data Acquisition (SCADA) systems, which are one of the key foundations of many critical infrastructures. Specifically, it examines one of the standardised SCADA protocols, the Distributed Network Protocol Version 3 (DNP3), which attempts to provide a security mechanism ensuring that messages transmitted between devices are adequately secured from rogue applications. To achieve this, the thesis applies formal methods from theoretical computer science to formally analyse the correctness of the protocol.
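To give a flavour of what such formal analysis involves, the toy model below exhaustively explores the state space of a made-up challenge-response session and asserts a safety property in every reachable state. It is purely illustrative and is not the thesis's actual DNP3 model or tool chain.

```python
# Toy flavour of the formal-analysis step: breadth-first exploration of the
# full state space of a made-up challenge-response session, asserting a
# safety property ("execute only when authenticated") in every reachable
# state. Illustrative only; not the thesis's DNP3 model or tool chain.

from collections import deque

# A state is (phase, authenticated). Transitions cover honest and rogue moves.
TRANSITIONS = {
    ("idle", False): {"critical_request": ("challenged", False)},
    ("challenged", False): {"valid_reply": ("idle", True),
                            "bad_reply": ("idle", False)},
    ("idle", True): {"execute": ("idle", False)},  # one execution, then re-auth
}

seen, queue = set(), deque([("idle", False)])
while queue:
    state = queue.popleft()
    if state in seen:
        continue
    seen.add(state)
    for event, nxt in TRANSITIONS.get(state, {}).items():
        # Safety property: 'execute' may only fire from an authenticated state.
        assert event != "execute" or state[1], f"unauthenticated execute in {state}"
        queue.append(nxt)

print(f"explored {len(seen)} states; the safety property held in all of them")
```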
Abstract:
Purpose: A computationally efficient algorithm (of linear iterative type) based on singular value decomposition (SVD) of the Jacobian has been developed for use in rapid dynamic near-infrared (NIR) diffuse optical tomography. Methods: Numerical and experimental studies have been conducted to demonstrate the computational efficacy of this SVD-based algorithm over conventional optical image reconstruction algorithms. Results: These studies indicate that the performance of linear iterative algorithms in terms of contrast recovery (quantitation of optical images) is better than that of nonlinear iterative (conventional) algorithms, provided the initial guess is close to the actual solution. The nonlinear algorithms can provide better-quality images than the linear iterative algorithms. Moreover, the analytical and numerical equivalence of the SVD-based algorithm to linear iterative algorithms was also established as part of this work. It is also demonstrated that SVD-based image reconstruction typically requires O(NN^2) operations per iteration, in contrast with linear and nonlinear iterative methods, which require O(NN^3) and O(NN^6) operations respectively, NN being the number of unknown parameters in the optical image reconstruction procedure. Conclusions: This SVD-based computationally efficient algorithm can make the integration of the image reconstruction procedure with data acquisition feasible, in turn making rapid dynamic NIR tomography viable in the clinic for continuously monitoring hemodynamic changes in tissue pathophysiology.
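The per-iteration saving can be sketched as follows: once the Jacobian's SVD has been computed, each dynamic frame reduces to matrix-vector products. The synthetic Jacobian, data and Tikhonov-style filtering of singular values below are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of an SVD-based linear reconstruction step for dynamic NIR diffuse
# optical tomography. The SVD of the Jacobian J is computed once; every
# subsequent frame then needs only matrix-vector products, which is where
# the per-iteration saving comes from. J, the data and the Tikhonov-style
# filtering of singular values are synthetic assumptions for illustration.

import numpy as np

rng = np.random.default_rng(1)
n_meas, n_params, lam = 120, 400, 1e-2            # lam: assumed regularisation
J = rng.normal(size=(n_meas, n_params))           # stand-in Jacobian
U, s, Vt = np.linalg.svd(J, full_matrices=False)  # one-time factorisation

def reconstruct(delta_y: np.ndarray) -> np.ndarray:
    """Filtered pseudo-inverse update: V diag(s/(s^2+lam)) U^T delta_y."""
    filt = s / (s**2 + lam)                       # damped inverse singular values
    return Vt.T @ (filt * (U.T @ delta_y))        # matrix-vector work per frame

delta_y = J @ rng.normal(size=n_params) * 0.01    # synthetic measurement change
print(reconstruct(delta_y).shape)                 # (400,) update per frame
```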
Abstract:
Dissolved Gas Analysis (DGA), a non-destructive test procedure, has long been in vogue for assessing the status of power and related transformers in service. DGA has been seen to reveal, to a reasonable degree of accuracy, early indications of likely internal faults in transformers. The data acquisition and subsequent analysis need an expert in the area concerned to assess the condition of the equipment accurately. Since the presence of an expert is not always guaranteed, it is incumbent on power utilities to commission a well-planned and reliable artificial expert system to replace, at least in part, the expert. This paper presents the application of an Ordered Ant Miner (OAM) classifier for the prediction of the fault involved. Secondly, the paper attempts to estimate the remaining life of the power transformer, as an extension of the elapsed-life estimation method suggested in the literature.
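The remaining-life side of the paper admits a simple arithmetic illustration. The sketch below uses the widely cited IEEE C57.91-style Arrhenius aging-acceleration factor and an assumed normal insulation life; both are stand-ins, since the abstract does not state the paper's actual life model.

```python
# Illustrative insulation-aging arithmetic in the style of the IEEE C57.91
# loading guide (assumed here; the paper's actual life model is not given in
# the abstract). The aging-acceleration factor is 1 at a 110 degC hot-spot
# and roughly doubles for every 6-7 degC above it.

import math

def aging_acceleration(hotspot_c: float) -> float:
    """Per-unit aging rate relative to the 110 degC reference hot-spot."""
    return math.exp(15000.0 / 383.0 - 15000.0 / (hotspot_c + 273.0))

NORMAL_LIFE_H = 180_000   # assumed normal insulation life, in hours

def remaining_life_h(consumed_h: float, hotspot_c: float) -> float:
    """Hours of life left if operation continues at the given hot-spot,
    given the equivalent aging hours already consumed (elapsed life)."""
    return (NORMAL_LIFE_H - consumed_h) / aging_acceleration(hotspot_c)

print(f"{remaining_life_h(60_000, 120.0):,.0f} h left at a 120 degC hot-spot")
```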
Abstract:
Polymeric outdoor insulators have been used increasingly for electrical power transmission and distribution in recent years. One of the current topics of interest for the power transmission community is the aging of such outdoor polymeric insulators. A few research groups are carrying out aging studies at room temperature with a wet period as an integral part of the multistress aging cycle, as specified by IEC standards. However, the aging effect of dry conditions alone, at elevated temperatures and electric stress in the presence of a radiation environment, has probably not been explored. It is interesting to study and understand insulator performance under dry conditions, where wet periods are rare or absent, and to estimate the extent of aging caused by multiple stresses. This paper deals with long-term accelerated multistress aging of full-scale 11 kV distribution-class composite silicone rubber insulators. In order to assess the long-term synergistic effect of electric stress, temperature and UV radiation on the insulators, they are subjected to accelerated aging in a specially designed multistress aging chamber for 3800 hours. All the stresses are applied at an accelerated level. Using a data acquisition system developed for the work, the leakage current has been monitored in a LabVIEW environment. Chemical changes due to degradation have been studied using energy-dispersive X-ray analysis, scanning electron microscopy and Fourier-transform infrared spectroscopy. Different parameters, such as low-molecular-weight (LMW) content, hydrophobicity, leakage current and surface morphology, were monitored periodically. The aging study is in progress, and only intermediate results are presented in this paper.
Abstract:
In this paper, we describe a system for the automatic recognition of isolated handwritten Devanagari characters obtained by linearizing consonant conjuncts. Owing to the large number of characters and the resulting demands on data acquisition, we use structural recognition techniques to reduce some characters to others. The residual characters are then classified using the subspace method. Finally, the results of structural recognition and feature-based matching are mapped to give the final output. The proposed system is evaluated in the writer-dependent scenario.
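The subspace classification step can be sketched in a few lines: each class is summarised by the leading singular directions of its training features (a CLAFIC-style construction), and a test vector is assigned to the class whose subspace captures the most projection energy. The features below are synthetic stand-ins for real character features.

```python
# CLAFIC-style subspace classification sketch: summarise each class by the
# leading right-singular vectors of its training feature matrix, then label
# a test vector by the class whose subspace captures the most projection
# energy. The 64-dimensional features are synthetic stand-ins for real
# Devanagari character features.

import numpy as np

rng = np.random.default_rng(2)

def class_subspace(X: np.ndarray, dim: int = 5) -> np.ndarray:
    """Top-dim right-singular vectors (rows) of the class training matrix."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:dim]

train = {c: rng.normal(loc=c, size=(50, 64)) for c in (0, 1)}  # two toy classes
bases = {c: class_subspace(X) for c, X in train.items()}

def classify(x: np.ndarray) -> int:
    """Assign x to the class with maximal squared projection onto its basis."""
    return max(bases, key=lambda c: float(np.sum((bases[c] @ x) ** 2)))

print(classify(rng.normal(loc=1, size=64)))   # expected output: 1
```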
Abstract:
When authors of scholarly articles decide where to submit their manuscripts for peer review and eventual publication, they often base their choice of journal on very incomplete information about how well the journals serve the authors' purposes of informing about their research and advancing their academic careers. The purpose of this study was to develop and test a new method for benchmarking scientific journals that provides more information to prospective authors. The method estimates a number of journal parameters, including readership, scientific prestige, time from submission to publication, acceptance rate, and the service provided by the journal during the review and publication process. The method uses data directly obtainable from the web, data that can be calculated from such data, data obtained from publishers and editors, and data obtained from author surveys; it has been tested on three sets of journals, each from a different discipline. We found a number of problems with the different data acquisition methods, which limit the extent to which the method can be used. Publishers and editors are reluctant to disclose important information they have at hand (e.g. journal circulation, web downloads, acceptance rate). The calculation of some important parameters (for instance, average time from submission to publication, or the regional spread of authorship) is possible but requires quite a lot of work. It can be difficult to get reasonable response rates to author surveys. All in all, we believe that the method we propose, taking a “service to authors” perspective as a basis for benchmarking scientific journals, is useful and can provide information valuable to prospective authors in selected scientific disciplines.
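Of the parameters listed, the time from submission to publication is the most mechanical to compute once per-article dates are in hand, as the sketch below shows; the date pairs are hypothetical.

```python
# Computing one benchmarking parameter, the average time from submission to
# publication, from per-article date pairs. The dates are hypothetical; in
# the study they would be gathered from the articles or the journal website.

from datetime import date
from statistics import mean

articles = [
    (date(2007, 1, 15), date(2007, 9, 2)),
    (date(2007, 3, 4), date(2008, 1, 20)),
    (date(2007, 6, 30), date(2008, 2, 11)),
]

delays_days = [(published - submitted).days for submitted, published in articles]
print(f"average submission-to-publication delay: {mean(delays_days):.0f} days")
```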
Abstract:
Control centers (CCs) play a very important role in power system operation. An overall view of the system, with information about all existing resources and needs, is implemented through SCADA (supervisory control and data acquisition) and an EMS (energy management system). As advanced technologies have made their way into the utility environment, operators are flooded with a huge amount of data. The last decade has seen extensive application of AI techniques, knowledge-based systems and artificial neural networks in this area. This paper focuses on the need to develop an intelligent decision support system to assist the operator in making proper decisions. The requirements for realising such a system are identified for the effective operation and energy management of the southern grid in India. The application of Petri nets leading to a decision support system is illustrated using a 24-bus system that is part of the southern grid.
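The Petri-net machinery underlying such a decision support layer is compact: a transition fires when all of its input places hold tokens, moving tokens to its output places. The alarm-handling net below is a made-up miniature for illustration, not the paper's 24-bus model.

```python
# Minimal Petri-net token game: a transition is enabled when every input
# place holds at least one token; firing moves tokens from inputs to outputs.
# The places and transitions form a made-up operator-support miniature, not
# the paper's 24-bus southern-grid model.

marking = {"overload_alarm": 1, "operator_free": 1, "action_logged": 0}

transitions = {
    "handle_overload": (["overload_alarm", "operator_free"],   # input places
                        ["action_logged", "operator_free"]),   # output places
}

def enabled(t: str) -> bool:
    return all(marking[p] >= 1 for p in transitions[t][0])

def fire(t: str) -> None:
    inputs, outputs = transitions[t]
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

if enabled("handle_overload"):
    fire("handle_overload")
print(marking)   # {'overload_alarm': 0, 'operator_free': 1, 'action_logged': 1}
```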
Abstract:
The phenomena of nonlinear I-V behavior and electrical switching find extensive applications in power control, information storage, oscillators, etc. The study of I-V characteristics and switching parameters is necessary for the proper application of switching materials and devices. In the present work, a simple low-cost electrical switching analyzer has been developed for measuring the electrical characteristics of switching materials and devices. The system consists of a microcontroller-based excitation source and a high-speed data acquisition system. The design details of the excitation source, its interface with the high-speed data acquisition system and a personal computer, and the application software developed for automated measurements are described. Typical I-V characteristics and switching curves obtained with the system are also presented to illustrate the capability of the instrument.
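The automated measurement loop of such an analyzer can be pictured as a voltage ramp with jump detection. Everything below, the device model, thresholds and detection rule, is a hypothetical stand-in for the real microcontroller and DAQ hardware.

```python
# Sketch of the automated measurement loop of an electrical switching
# analyzer: ramp the excitation, log the I-V pair at each step, and flag the
# switching event as an abrupt current jump. The device model is a toy
# stand-in for the real microcontroller excitation source and DAQ hardware.

class SimulatedDevice:
    """Toy threshold-switching sample: low conductance until v reaches v_th."""
    def __init__(self, v_th: float = 8.0) -> None:
        self.v_th, self.v = v_th, 0.0
    def set_voltage(self, volts: float) -> None:
        self.v = volts
    def read_current(self) -> float:
        g = 1e-6 if self.v < self.v_th else 1e-3   # OFF/ON conductance (S)
        return g * self.v

def iv_sweep(dev, v_max=20.0, step=0.1, jump=5.0):
    """Return the I-V curve and the detected switching voltage (or None)."""
    curve, v_switch, last_i, v = [], None, None, 0.0
    while v <= v_max:
        dev.set_voltage(v)
        i = dev.read_current()
        if last_i and v_switch is None and abs(i) > jump * abs(last_i):
            v_switch = v                    # abrupt jump marks the switch
        curve.append((v, i))
        last_i, v = i, v + step
    dev.set_voltage(0.0)                    # leave the sample unbiased
    return curve, v_switch

curve, v_switch = iv_sweep(SimulatedDevice())
print(f"switching detected at {v_switch:.1f} V over {len(curve)} points")
```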
Abstract:
A completely automated temperature-programmed reaction (TPR) system for carrying out gas-solid catalytic reactions under atmospheric flow conditions has been fabricated to study CO and hydrocarbon oxidation and NO reduction. The system consists of an all-stainless-steel UHV system, a quadrupole mass spectrometer SX200 (VG Scientific), a tubular furnace and micro-reactor, a temperature controller, a versatile gas handling system, and a data acquisition and analysis system. The performance of the system has been tested under standard experimental conditions for CO oxidation over well-characterized Ce1-x-y(La/Y)yO2-δ catalysts. Three-way catalysis, converting CO, NO and C2H2 to CO2, N2 and H2O, has been tested with this catalyst, which shows complete removal of the pollutants below 325 °C. Fixed oxide-ion defects in Pt-substituted Ce1-y(La/Y)yO2-y/2 show higher catalytic activity than Pt ion-substituted CeO2.
Abstract:
A pulsed-field-gradient spin-echo NMR spectrometer has been assembled by interfacing a programmable pulse generator and a data acquisition system, designed and fabricated in our laboratory, with other imported units. Calibration results for the magnetic field gradients are presented.
Abstract:
In past decades, agricultural work first became heavily mechanised, and automation has since followed. Nowadays, increasing machine size no longer raises productivity significantly; instead, work must be made more efficient by using existing resources more effectively. This thesis examines the self-propelled forage harvester chain in grass silage harvesting. The intensity of silage harvesting and the large number of machine units are a demanding combination from the work supervision point of view. The aim of the thesis was to determine the requirements for a data management system to be developed in support of agricultural contracting. A total of 12 contractors or cooperating farmers were interviewed for the study. Based on the study, contractors have a need for information systems; naturally, the scale and arrangements of the contracting affect the matter. The study identified the following key requirements for data management:
• data collection on the work performed that is as extensive, detailed and automatic as possible
• a map-based approach, with guidance of drivers to the work sites
• a customer register and electronic ordering of work
• quotation templates and price calculators
• reliability and persistence of the data
• applicability to many kinds of work
• compatibility with other systems
Based on the study, the system to be developed should therefore include the following components: an easy-to-use planning/customer-register tool; functions for machine monitoring, guidance and management; data collection during the work; and functions for processing the collected data. Not all users need all the functions, however, so the contractor must be able to select the parts they need and possibly add functions later. Contractors operating within tight financial and time constraints are demanding customers whose technology must be functional and reliable. On the other hand, even experienced operators make human errors, so a good information system makes the work easier and more efficient.