944 results for "Data processing methods"
Abstract:
This database contains time-referenced sea ice draft values from upward-looking sonar (ULS) measurements in the Weddell Sea, Antarctica. The sea ice draft data can be used to infer the thickness of the ice. The data were collected between 1990 and 2008. In total, the database includes measurements from 13 locations in the Weddell Sea and was generated from more than 3.7 million measurements of sea ice draft. The files contain uncorrected raw drafts, corrected drafts, and the basic parameters measured by the ULS. The measurement principle, the data processing procedure, and the quality control are described in detail. To account for the unknown speed of sound in the water column above the ULS, two correction methods were applied to the draft data. The first defines a reference level from the identification of open-water leads. The second uses a model of sound speed in the oceanic mixed layer and is applied to ice draft in austral winter. Both methods are discussed and their accuracy is estimated. Finally, selected results of the processing are presented.
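The open-water reference-level idea can be sketched in a few lines. This is a hypothetical illustration, not the authors' processing code: samples identified as open-water leads should read zero draft, so their mean apparent draft estimates the sound-speed offset, which is then subtracted from all raw drafts. The function name and interface are assumptions.

```python
# Hypothetical sketch of a reference-level draft correction:
# open-water samples should have zero draft, so their mean raw
# draft estimates the offset caused by the unknown sound speed.
def correct_drafts(raw_drafts, is_open_water):
    """raw_drafts: uncorrected drafts (m);
    is_open_water: parallel booleans from lead detection."""
    leads = [d for d, ow in zip(raw_drafts, is_open_water) if ow]
    if not leads:
        raise ValueError("no open-water reference samples found")
    offset = sum(leads) / len(leads)  # apparent draft of open water
    return [d - offset for d in raw_drafts]
```

In practice the reference level drifts with time, so a real pipeline would estimate the offset in a moving window rather than globally, but the principle is the same.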
Abstract:
The Håkon Mosby Mud Volcano is a natural laboratory for studying geological, geochemical, and ecological processes related to deep-water mud volcanism. High-resolution bathymetry of the Håkon Mosby Mud Volcano was recorded during RV Polarstern expedition ARK-XIX/3 using the multibeam system Hydrosweep DS-2. Dense spacing of the survey lines and a slow ship speed (5 knots) provided the point density necessary to generate a regular 10 m grid. Generalization was applied to preserve and represent morphological structures appropriately. Contour lines were derived, showing detailed topography at the centre of the Håkon Mosby Mud Volcano and generalized contours in its vicinity. We provide a brief introduction to the Håkon Mosby Mud Volcano area and describe in detail the data recording and processing methods, as well as the morphology of the area. An accuracy assessment was made to evaluate the reliability of the 10 m resolution terrain model. Multibeam sidescan data were recorded along with the depth measurements and show reflectivity variations from light grey values at the centre of the Håkon Mosby Mud Volcano to dark grey (less reflective) values at the surrounding moat.
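The core of building a regular grid from dense multibeam soundings can be sketched as simple cell-averaging. This is a minimal illustration under assumed names and a made-up coordinate frame, not the expedition's actual gridding software (which would also involve filtering and interpolation of empty cells):

```python
from collections import defaultdict

# Hypothetical sketch: bin (x, y, depth) soundings into 10 m cells
# and average the depths in each cell to form a regular grid node.
def grid_soundings(points, cell=10.0):
    """points: iterable of (x, y, depth) in metres.
    Returns {(ix, iy): mean depth} on a regular `cell`-metre grid."""
    sums = defaultdict(lambda: [0.0, 0])
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        sums[key][0] += z
        sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}
```

With dense survey lines, most 10 m cells receive several soundings, which is what makes the cell mean a stable depth estimate.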
Abstract:
The evolution of approaches and methods for reconstructing paleoenvironmental conditions from microfossils contained in bottom sediments is assessed. The authors developed a new actualistic basis for such reconstructions, consisting of a database on the contents of tests of planktonic foraminifers in the surface layer of Atlantic sediments and a package of mathematical tools for computer data processing. The structure of the database is described. It contains data on test contents for 29 species and varieties of planktonic foraminifers in 381 samples. A mathematical model designed for the reconstructions is based on factor analysis and multidimensional spline interpolation. The model allows one to deduce Quaternary hydrological parameters (paleotemperature, paleosalinity) for standard hydrological levels down to a depth of 250 m for the four seasons of the year. The reconstructions are illustrated by an example of a sedimentary core from the North Atlantic representing a period of 300 ky. During the next-to-last and the last maxima of continental glaciation (oxygen isotope stages 8, 6, 4, and 2), the subarctic water mass spread here. Winter and summer surface water temperatures were 1-5°C and 5-7°C, respectively. During interglacials and in the Holocene the conditions were close to the present ones: winter and summer surface water temperatures were 10-12°C and 15-17°C, respectively. Vertical paleohydrological profiles compiled for the peaks of climatostratigraphic intervals suggest that water stratification was stronger during cold intervals than during warm ones. At a depth of 50 m, seasonal salinity oscillations did not exceed 0.4 per mil; salinity was commonly at its minimum in winter and at its maximum in summer.
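The general shape of such a transfer function can be sketched as follows. This is an illustrative analogue only, not the authors' model: it substitutes an SVD-based factor decomposition and a linear fit for their factor analysis and multidimensional spline interpolation, and all names are assumptions.

```python
import numpy as np

# Illustrative sketch (not the authors' exact method): derive factor
# scores from a species-abundance matrix by SVD, then fit a linear
# transfer function from factor scores to modern surface temperatures.
def fit_transfer_function(abundances, temperatures, n_factors=2):
    """abundances: (samples x species) relative abundances;
    temperatures: observed modern values at the same sites."""
    A = abundances - abundances.mean(axis=0)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    scores = U[:, :n_factors] * s[:n_factors]        # factor scores
    X = np.column_stack([np.ones(len(scores)), scores])
    coef, *_ = np.linalg.lstsq(X, temperatures, rcond=None)

    def predict(new_abundances):
        # project new samples onto the same factors, then apply the fit
        B = new_abundances - abundances.mean(axis=0)
        sc = B @ Vt[:n_factors].T
        return np.column_stack([np.ones(len(sc)), sc]) @ coef

    return predict
```

The actualistic logic is the same in either formulation: calibrate on modern (core-top) samples with known hydrology, then apply the calibrated function to downcore fossil assemblages.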
Abstract:
This thesis addresses the design and implementation of signal processing applications on reconfigurable FPGA devices. This platform offers high logic capacity and incorporates elements oriented to signal processing which, together with its relatively low cost, make it ideal for developing signal processing applications that require intensive processing and high performance. However, the development cost on these platforms is high. While the growth in the logic capacity of FPGA devices allows complete systems to be developed, high-performance requirements often force operators to be optimized at a very low level. Besides the timing constraints such applications impose, they also carry area constraints tied to the device, which makes it necessary to evaluate and verify different implementation alternatives. The design and implementation cycle for these applications can stretch so long that new FPGA models, with greater capacity and higher speed, commonly appear before the system is complete, rendering the constraints that guided its design useless. Different methods exist to improve productivity in developing these applications and thereby shorten their design cycle. This thesis focuses on the reuse of previously designed and verified hardware components. Although conventional HDLs allow already-defined components to be reused, the specification can be improved to simplify the process of incorporating components into new designs. The first part of the thesis therefore addresses design specification based on predefined components.
This specification seeks not only to improve and simplify the process of adding components to a description, but also to improve the quality of the specified design, offering greater configurability and even the possibility of reporting characteristics of the description itself. Reusing an already-described component depends largely on the information provided for its integration into a system. Conventional HDLs supply, along with the component description, only the input/output interface and a set of configuration parameters; the rest of the required information normally accompanies the component as external documentation. The second part of the thesis proposes a set of wrappers whose purpose is to bundle, together with the component description itself, information useful for its integration into other designs: implementation details, aids for configuring the component, and even information on how to configure and connect the component to perform a given function. Finally, a classic signal processing application, the fast Fourier transform (FFT), is chosen as a working example of both the proposed specification facilities and the described wrappers. The resulting design not only illustrates the proposed specification but also aims for an implementation of quality comparable to results in the literature. To that end, the design targets FPGA implementation, exploiting both general-purpose logic elements and the low-level specific elements available in these devices. The resulting FFT specification is then used to show how to embed in its interface information that assists its selection and configuration from the early phases of the design cycle.
Abstract: This PhD thesis addresses the design and implementation of signal processing applications on reconfigurable FPGA platforms. This kind of platform offers high logic capacity, incorporates dedicated signal processing elements, and provides a low-cost solution, which makes it ideal for signal processing applications where intensive data processing is required to achieve high performance. However, the cost associated with hardware development on these platforms is high. While the increase in the logic capacity of FPGA devices allows complete systems to be developed, high-performance constraints require operators to be optimized at a very low level. In addition to the timing constraints imposed by these applications, area constraints tied to the particular device also apply, forcing a design to be evaluated and verified against different implementation alternatives. The design and implementation cycle for these applications can be so long that new FPGA models with greater capacity and higher speed commonly appear before the system implementation is complete, rendering the original constraints that guided the design useless. Different methods can be used to improve productivity when developing these applications and consequently shorten their design cycle. This thesis focuses on the reuse of hardware components previously designed and verified. Although conventional HDLs allow the reuse of already-defined components, their specification can be improved to simplify the process of incorporating components into new designs. The first part of the thesis therefore focuses on the specification of designs based on predefined components.
This specification not only improves and simplifies the process of adding components to a description; it also seeks to improve the quality of the specified design through better configuration options and even the ability to report on features of the description itself. Reusing a hardware component within a system depends largely on the information it offers for integration. Conventional HDLs provide, together with the component description, only the input/output interface and a set of configuration parameters, while other required information is usually supplied as external documentation. In the second part of the thesis we propose a formal encapsulation that bundles with the component description information useful for its integration into other designs. This information includes features of the implementation itself, support for component configuration, and even guidance on how to configure and connect the component to carry out a given function. Finally, the fast Fourier transform (FFT) is chosen as a well-known signal processing application and used as a case study to illustrate the proposed specification and encapsulation formalisms. The objective of the FFT design is not only to show practical examples of the proposed specification, but also to obtain an implementation of quality comparable to results in the scientific literature. The design targets FPGA platforms, using general logic elements as the base of the implementation while also taking advantage of the low-level specific elements available on these devices. Last, the specification of the resulting FFT is used to show how to embed in its interface information that assists selection and configuration early in the design cycle.
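For readers unfamiliar with the case-study algorithm, the radix-2 Cooley-Tukey decomposition underlying most FPGA FFT cores can be stated in a few lines of software. This is a minimal reference sketch of the standard algorithm, not the thesis's hardware design:

```python
import cmath

# Minimal recursive radix-2 Cooley-Tukey FFT; len(x) must be a
# power of two. A software reference only, not the FPGA design above.
def fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])          # DFT of even-indexed samples
    odd = fft(x[1::2])           # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # twiddle factor combines the two half-size DFTs
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

A hardware implementation unrolls this recursion into log2(n) butterfly stages, which is precisely the kind of regular, parameterizable structure that benefits from the component-reuse specification proposed in the thesis.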
Abstract:
This paper reports on an innovative approach that aims to reduce information management costs in data-intensive and cognitively complex biomedical environments. Recognizing the importance of prominent high-performance computing paradigms and large-scale data processing technologies, as well as collaboration support systems, to remedy data-intensive issues, it adopts a hybrid approach that builds on the synergy of these technologies. The proposed approach provides innovative Web-based workbenches that integrate and orchestrate a set of interoperable services, reducing the data-intensiveness and complexity overload at critical decision points to a manageable level and thus permitting stakeholders to be more productive and to concentrate on creative activities.
Abstract:
A basic requirement of the data acquisition systems used in long-pulse fusion experiments is real-time detection of physical events in signals. Developing such applications is usually a complex task, so a set of hardware and software tools is needed to simplify their implementation. In ITER, this type of application can be implemented using fast controllers. ITER is standardizing the architectures to be used for fast controller implementation; the standards chosen so far are PXIe architectures (based on PCIe) for the hardware and EPICS middleware for the software. This work presents a methodology for implementing data acquisition and pre-processing using FPGA-based DAQ cards, and shows how to integrate these in fast controllers using EPICS.
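The kind of real-time event detection such fast controllers perform can be illustrated with a simple threshold detector with hysteresis, the sort of logic typically pushed into the FPGA pre-processing stage. This is a generic sketch under assumed names, not code from the work described:

```python
# Hypothetical sketch of streaming event detection with hysteresis:
# an event starts when the signal rises above on_level and ends when
# it falls below off_level, so noise near one threshold is not
# double-counted. An FPGA pipeline would apply the same state machine
# per sample.
def detect_events(samples, on_level, off_level):
    """Return the start indices of detected events
    (requires on_level > off_level for hysteresis)."""
    events, active = [], False
    for i, v in enumerate(samples):
        if not active and v >= on_level:
            events.append(i)
            active = True
        elif active and v < off_level:
            active = False
    return events
```

In a long-pulse experiment the value of doing this on the DAQ card is that only event timestamps and windows, not the full raw stream, need to cross into the EPICS layer.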
Abstract:
Most present digital image processing methods are concerned with the objective characterization of external properties such as shape, form, or colour. This information concerns objective characteristics of different bodies and is used to extract details for performing several different tasks. On some occasions, however, another type of information is needed. This is the case when the image processing system is to be applied to an operation involving living bodies; then another type of object information may be useful, as it can give additional knowledge about the object's subjective properties. Some of these properties are object symmetry, parallelism between lines, and the feeling of size. Such properties relate more to the internal sensations of living beings interacting with their environment than to the objective information obtained by artificial systems. This paper presents an elementary system able to detect some of the above-mentioned parameters. A first mathematical model to analyze these situations is reported; this theoretical model makes it possible to implement a simple working system. The basis of the system is the use of optical logic cells, previously employed in optical computing.
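One of the "subjective" properties named above, mirror symmetry, is easy to state computationally. The following is a minimal digital sketch of the idea (comparing an image with its left-right reflection), offered only to make the property concrete; the paper's actual system uses optical logic cells, not software:

```python
# Minimal sketch of mirror-symmetry detection: compare each row of a
# binary image with its reversal and score the fraction of matching
# pixels. 1.0 means perfect symmetry about the vertical axis.
def mirror_symmetry(image):
    """image: list of rows of 0/1 pixels. Returns a score in [0, 1]."""
    total = match = 0
    for row in image:
        for a, b in zip(row, reversed(row)):
            total += 1
            match += (a == b)
    return match / total if total else 1.0
```

A parallel optical implementation evaluates all such pixel comparisons simultaneously, which is the motivation for using optical logic cells in the reported system.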