887 results for data acquisition system
Abstract:
Infrastructure spatial data, such as the orientation, location, boundaries, and areas of in-place structures, play an important role in many civil infrastructure development and rehabilitation applications, including defect detection, site planning, and on-site safety assistance. A number of modern optical-based spatial data acquisition techniques can be used to acquire these data. These techniques are based on stereo vision, optics, time of flight, etc., and have distinct characteristics, benefits, and limitations. The main purpose of this paper is to compare these optical-based spatial data acquisition techniques against civil infrastructure application requirements. To achieve this goal, the benefits and limitations of each technique were identified, and the techniques were then compared according to application requirements such as spatial accuracy, automation of acquisition, and portability of devices. This comparison reveals the unique characteristics of each technique, so that practitioners can select an appropriate technique for their own applications.
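The requirement-based comparison the abstract describes can be sketched as a weighted scoring exercise. The technique scores and requirement weights below are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch: ranking optical spatial-data acquisition techniques
# against civil-infrastructure application requirements. All scores (1-5)
# and requirement names are assumed for illustration only.
TECHNIQUES = {
    "stereo vision":  {"accuracy": 3, "automation": 4, "portability": 5},
    "laser scanning": {"accuracy": 5, "automation": 3, "portability": 2},
    "time of flight": {"accuracy": 4, "automation": 4, "portability": 3},
}

def rank_techniques(weights):
    """Return technique names sorted by weighted score, best first."""
    scored = {
        name: sum(weights[req] * score for req, score in scores.items())
        for name, scores in TECHNIQUES.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

# An application that prizes accuracy above portability:
best = rank_techniques({"accuracy": 0.6, "automation": 0.2, "portability": 0.2})[0]
```

With these assumed numbers, an accuracy-heavy weighting favours laser scanning, while a portability-heavy weighting favours stereo vision; the point is only that the selection depends on the application's requirement profile.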
Abstract:
A multi-channel gated integrator and a PXI-based data acquisition system have been developed for nuclear detector arrays with hundreds of detector units. The multi-channel gated integrator can be controlled by a programmable CI controller. The PXI-DAQ system consists of an NI PXI-1033 chassis with several PXI-DAQ cards. The system software has a user-friendly GUI, written in C using LabWindows/CVI under the Windows XP operating system. The PXI-DAQ system is very reliable and capable of handling event rates up to 40 kHz.
Abstract:
A dynamic measurement system was developed by the Institute of Modern Physics (IMP) for the dipole prototype of the Rapid Cycling Synchrotron (RCS) of the China Spallation Neutron Source (CSNS). The repetition frequency of the RCS is 25 Hz. The probe is a moving-arc search coil, and the data acquisition system is based on the dynamic analysis module from National Instruments. To obtain the error of the high-order harmonics of the field at the base frequency, the hardware integrator is replaced by a high-speed ADC with a software filter and integrator. A series of harmonic coefficients of the field is used to express the variation of the dynamic field in space and time simultaneously. The measurement system has been tested at the Institute of High Energy Physics (IHEP), and the properties of the RCS dipole prototype have been measured. Some measurement results and the repeatability of the system are illustrated in this paper.
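The two signal-processing steps the abstract names, replacing the hardware integrator with software integration of the sampled coil voltage and expressing the field as harmonic coefficients, can be sketched as follows. This is a generic illustration of those standard operations, not the IMP system's actual code; function names and the trapezoidal/DFT choices are assumptions:

```python
import math

def software_integrator(voltage_samples, dt):
    """Cumulative trapezoidal integration of the sampled coil voltage;
    the measured flux is this integral up to the coil's sign convention."""
    flux = [0.0]
    for v0, v1 in zip(voltage_samples, voltage_samples[1:]):
        flux.append(flux[-1] + 0.5 * (v0 + v1) * dt)
    return flux

def harmonic_coefficients(samples, n_harmonics):
    """Cosine/sine (a_k, b_k) coefficients of one period of a signal,
    computed as a plain DFT projection."""
    n = len(samples)
    coeffs = []
    for k in range(1, n_harmonics + 1):
        a = 2.0 / n * sum(s * math.cos(2 * math.pi * k * i / n)
                          for i, s in enumerate(samples))
        b = 2.0 / n * sum(s * math.sin(2 * math.pi * k * i / n)
                          for i, s in enumerate(samples))
        coeffs.append((a, b))
    return coeffs
```

A pure fundamental-frequency sine sampled over one period should return b1 ≈ 1 and a negligible second harmonic, which is the kind of check one would run before trusting the higher-order error terms.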
Abstract:
PURPOSE: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. METHODS: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time- and energy-averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time- and energy-averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image-domain filtration approach that the authors refer to as rank-sparse kernel regression, they transfer image structure from the well-sampled time- and energy-averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling.
The authors solved the 5D reconstruction problem using the split Bregman method and GPU-based implementations of backprojection, reprojection, and kernel regression. Using a preclinical mouse model, the authors apply the proposed algorithm to study myocardial injury following radiation treatment of breast cancer. RESULTS: Quantitative 5D simulations are performed using the MOBY mouse phantom. Twenty data sets (ten cardiac phases, two energies) are reconstructed with 88 μm, isotropic voxels from 450 total projections acquired over a single 360° rotation. In vivo 5D myocardial injury data sets acquired in two mice injected with gold and iodine nanoparticles are also reconstructed with 20 data sets per mouse using the same acquisition parameters (dose: ∼60 mGy). For both the simulations and the in vivo data, the reconstruction quality is sufficient to perform material decomposition into gold and iodine maps to localize the extent of myocardial injury (gold accumulation) and to measure cardiac functional metrics (vascular iodine). Their 5D CT imaging protocol represents a 95% reduction in radiation dose per cardiac phase and energy and a 40-fold decrease in projection sampling time relative to their standard imaging protocol. CONCLUSIONS: Their 5D CT data acquisition and reconstruction protocol efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time.
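The low-rank and sparse matrix decomposition at the core of the method rests on two proximal operators: singular-value thresholding for the low-rank term and elementwise soft thresholding for the sparse term. The toy alternating scheme below illustrates that idea only; it is not the paper's split Bregman solver, and all thresholds are assumed:

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise shrinkage: the proximal operator of the L1 norm,
    used for the sparse component."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def singular_value_threshold(m, tau):
    """Shrink singular values: the proximal operator of the nuclear
    norm, used for the low-rank component."""
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return u @ np.diag(soft_threshold(s, tau)) @ vt

def low_rank_plus_sparse(m, tau_l, tau_s, n_iter=50):
    """Toy alternating decomposition M ≈ L + S (a sketch of the
    rank-sparsity idea, not the paper's GPU split Bregman method)."""
    l = np.zeros_like(m)
    s = np.zeros_like(m)
    for _ in range(n_iter):
        l = singular_value_threshold(m - s, tau_l)
        s = soft_threshold(m - l, tau_s)
    return l, s
```

In the 5D setting, the low-rank term plays the role of the well-sampled time- and energy-averaged image, while the sparse term captures the spatially sparse spectral and temporal contrast.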
Abstract:
Master's dissertation, Informatics Engineering, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015
Abstract:
The large-scale penetration of intermittent resources, such as solar and wind generation, calls for storage systems to improve power system operation. Electric Vehicles (EVs) with vehicle-to-grid (V2G) capability can operate as a means of storing energy. This paper proposes an algorithm, to be included in a SCADA (Supervisory Control and Data Acquisition) system, that performs intelligent management of three types of consumers: domestic, commercial, and industrial, including the joint management of loads and the charging/discharging of EV batteries. The proposed methodology has been implemented in a SCADA system developed by the authors of this paper, the SCADA House Intelligent Management (SHIM). Any event in the system, such as a Demand Response (DR) event, triggers an optimization algorithm that performs the optimal scheduling of energy resources (including loads and EVs), taking into account the load priorities defined by the installation's users. A case study considering a specific consumer with several loads and EVs is presented in this paper.
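One simple way to realize priority-aware load management during a DR event is a greedy schedule that keeps the highest-priority loads within the power cap. This is a minimal sketch of that idea, not the SHIM algorithm; all load names, demands, and priorities are assumed:

```python
def schedule_loads(loads, power_cap):
    """loads: list of (name, demand_kw, priority), higher priority = more
    important. Keep loads in descending priority order while the running
    total stays within power_cap; everything else is shed or deferred
    (e.g. an EV charge, which under V2G could even discharge instead)."""
    kept, total = [], 0.0
    for name, demand, priority in sorted(loads, key=lambda l: -l[2]):
        if total + demand <= power_cap:
            kept.append(name)
            total += demand
    return kept, total

# Illustrative DR event capping the installation at 2.0 kW:
loads = [("fridge", 0.2, 5), ("heating", 1.5, 4),
         ("EV charge", 3.0, 2), ("dryer", 2.0, 1)]
kept, total = schedule_loads(loads, 2.0)
```

A real scheduler would solve this as an optimization over time slots (and would let EV batteries discharge to serve kept loads), but the greedy version shows how user-defined priorities drive which loads survive the event.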
Abstract:
The operation of power systems in a Smart Grid (SG) context brings new opportunities for consumers to act as active players and fully realize the advantages of the SG. In this context, concepts such as smart homes and smart buildings are promising approaches for optimizing consumption while reducing electricity costs. This paper proposes an intelligent methodology to support the consumption optimization of an industrial consumer that has a Combined Heat and Power (CHP) facility. A SCADA (Supervisory Control and Data Acquisition) system developed by the authors is used to support the implementation of the proposed methodology. An optimization algorithm implemented in the system determines the optimal consumption and CHP levels at each instant, according to the Demand Response (DR) opportunities. The paper includes a case study with several consumption and heat demand scenarios in the context of a DR event that specifies a maximum demand level for the consumer.
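Choosing a CHP level under a DR demand cap can be framed as a small constrained cost minimization. The sketch below enumerates discrete CHP setpoints; it is an illustrative formulation under assumed prices, a fixed heat-to-power ratio, and assumed demands, not the paper's algorithm:

```python
def best_chp_setpoint(elec_demand, heat_demand, grid_cap,
                      chp_levels, grid_price, chp_price, heat_ratio):
    """Pick the discrete CHP electric output (kW) minimizing total cost,
    subject to: grid import = elec_demand - chp_output must fit within
    the DR cap, and CHP heat (heat_ratio * output) must cover demand.
    Returns (best_output, cost) or None if no level is feasible."""
    best = None
    for e in chp_levels:
        grid = elec_demand - e
        if grid < 0 or grid > grid_cap:
            continue  # cap violated, or CHP exceeds on-site demand
        if heat_ratio * e < heat_demand:
            continue  # heat demand not met
        cost = grid * grid_price + e * chp_price
        if best is None or cost < best[1]:
            best = (e, cost)
    return best

# Illustrative instant: 100 kW demand, 60 kW-th heat, DR cap of 70 kW import.
result = best_chp_setpoint(100, 60, 70, [0, 20, 40, 60, 80, 100],
                           grid_price=0.2, chp_price=0.1, heat_ratio=1.5)
```

With CHP generation assumed cheaper than grid energy here, the optimum pushes CHP to full output; a DR event tightening `grid_cap` shrinks the feasible set rather than changing the cost logic.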
Abstract:
Adhesive bonding is nowadays a serious candidate to replace methods such as fastening or riveting because of its attractive mechanical properties. As a result, adhesives are being increasingly used in industries such as automotive, aerospace, and construction. It is therefore highly important to predict the strength of bonded joints, to assess the feasibility of joining during the fabrication of components (e.g. due to complex geometries) or for repair purposes. This work studies the tensile behaviour of adhesive joints between aluminium adherends for different values of adherend thickness (h), using the double-cantilever beam (DCB) test. The experimental work consists of determining the tensile fracture toughness (GIC) for the different joint configurations. A conventional fracture characterization method was used, together with a J-integral approach that takes into account the plasticity effects occurring in the adhesive layer. An optical measurement method is used to evaluate the crack tip opening and the adherend rotation at the crack tip during the test, supported by a Matlab® subroutine for the automated extraction of these quantities. As output of this work, a comparative evaluation of bonded systems with different adherend thicknesses is carried out, and complete tensile fracture data are provided for the subsequent strength prediction of joints under identical conditions.
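For context on what the DCB test measures, the textbook simple-beam-theory estimate of the mode-I strain energy release rate is G_I = 12 P² a² / (E b² h³), where P is the applied load, a the crack length, b the specimen width, h the adherend thickness, and E the adherend modulus. The sketch below implements that baseline formula; it is not the J-integral method the study uses (which additionally accounts for adhesive plasticity), and the example numbers are assumed:

```python
def g_ic_simple_beam(load_n, crack_len_m, width_m, thickness_m, modulus_pa):
    """Mode-I strain energy release rate (J/m^2) for a DCB specimen from
    simple beam theory: G_I = 12 P^2 a^2 / (E b^2 h^3). Note the strong
    h^-3 dependence, which is why adherend thickness matters in this study."""
    return (12.0 * load_n**2 * crack_len_m**2) / (
        modulus_pa * width_m**2 * thickness_m**3)

# Assumed aluminium DCB: P = 100 N, a = 50 mm, b = 25 mm, h = 3 mm, E = 70 GPa.
g = g_ic_simple_beam(100.0, 0.05, 0.025, 0.003, 70e9)
```

The h³ in the denominator makes the measured toughness sensitive to adherend thickness at fixed load, which motivates the paper's comparison across thickness values.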
Abstract:
The purpose of the work was to realize a high-speed digital data transfer system for the RPC muon chambers of the CMS experiment at CERN's new LHC accelerator. This large-scale system took many years and many stages of prototyping to develop, and required the participation of tens of people. The system interfaces to the Front-end Boards (FEBs) at the 200,000-channel detector and to the trigger and readout electronics in the control room of the experiment. The distance between the two is about 80 metres, and the speed required for the optical links was pushing the limits of available technology when the project started. Here, as in many other aspects of the design, it was assumed that the features of readily available commercial components would develop in the course of the design work, as indeed they did. By choosing a high speed, it was possible to multiplex the data from some of the chambers into the same fibres, reducing the number of links needed. Further reduction was achieved by employing zero suppression and data compression, so that a total of only 660 optical links were needed. Another requirement, which conflicted somewhat with choosing the components as late as possible, was that the design needed to be radiation tolerant to an ionizing dose of 100 Gy and to have moderate tolerance to Single Event Effects (SEEs). This required some radiation test campaigns, and eventually led to ASICs being chosen for some of the critical parts. The system was made as reconfigurable as possible. Reconfiguration must be done from a distance, as the electronics is not accessible except during short and rare service breaks once the accelerator starts running. Therefore reconfigurable logic is used extensively, and the firmware development for the FPGAs constituted a sizable part of the work. Some special techniques were needed there too, to achieve the required radiation tolerance.
The system has been demonstrated to work in several laboratory and beam tests, and now we are waiting to see it in action when the LHC starts running in the autumn of 2008.
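The zero suppression mentioned above, transmitting only the channels that fired rather than every channel in a frame, is what lets many mostly-empty chambers share a fibre. A minimal sketch of the encode/decode pair (a generic illustration, not the CMS link format):

```python
def zero_suppress(frame):
    """Encode a detector frame (list of per-channel values) as sparse
    (channel_index, value) pairs, dropping the zeros. For sparse
    occupancy this is far smaller than the raw frame."""
    return [(i, v) for i, v in enumerate(frame) if v != 0]

def expand(pairs, length):
    """Inverse of zero_suppress: rebuild the full frame at the receiver."""
    frame = [0] * length
    for i, v in pairs:
        frame[i] = v
    return frame
```

In a real link the pairs would be packed into fixed-width words and further compressed, but the round trip below captures the bandwidth argument: the payload scales with the number of hits, not with the channel count.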