969 results for Optical data processing
Abstract:
Cloud computing offers massive scalability and elasticity required by many scientific and commercial applications. Combining the computational and data handling capabilities of clouds with parallel processing also has the potential to tackle Big Data problems efficiently. Science gateway frameworks and workflow systems enable application developers to implement complex applications and make these available for end-users via simple graphical user interfaces. The integration of such frameworks with Big Data processing tools on the cloud opens new opportunities for application developers. This paper investigates how workflow systems and science gateways can be extended with Big Data processing capabilities. A generic approach based on infrastructure aware workflows is suggested and a proof of concept is implemented based on the WS-PGRADE/gUSE science gateway framework and its integration with the Hadoop parallel data processing solution based on the MapReduce paradigm in the cloud. The provided analysis demonstrates that the methods described to integrate Big Data processing with workflows and science gateways work well in different cloud infrastructures and application scenarios, and can be used to create massively parallel applications for scientific analysis of Big Data.
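The MapReduce paradigm referenced above can be illustrated with a minimal, self-contained sketch; the word-count task, function names and local driver are illustrative assumptions and are not part of the WS-PGRADE/gUSE integration itself (on Hadoop the two phases would run distributed over input splits rather than in-process).

```python
# Minimal local sketch of the MapReduce pattern that Hadoop parallelises.
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    """Emit (key, 1) pairs; on Hadoop this runs in parallel on input splits."""
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Aggregate values per key; Hadoop shuffles and sorts pairs between the phases."""
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield key, sum(count for _, count in group)

if __name__ == "__main__":
    lines = ["big data on the cloud", "big data workflows"]
    print(dict(reduce_phase(map_phase(lines))))
    # {'big': 2, 'cloud': 1, 'data': 2, 'on': 1, 'the': 1, 'workflows': 1}
```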
Abstract:
An array of Bio-Argo floats equipped with radiometric sensors has recently been deployed in various open ocean areas representative of the diversity of trophic and bio-optical conditions prevailing in the so-called Case 1 waters. Around solar noon and almost every day, each float acquires 0-250 m vertical profiles of Photosynthetically Available Radiation and downward irradiance at three wavelengths (380, 412 and 490 nm). To date, more than 6500 profiles for each radiometric channel have been acquired. As these radiometric data are collected outside the operator's control and regardless of meteorological conditions, specific and automatic data processing protocols have to be developed. Here, we present a data quality-control procedure aimed at verifying profile shapes and providing near real-time data distribution. This procedure is specifically developed to: 1) identify the main measurement issues (i.e. dark signal, atmospheric clouds, spikes and wave-focusing occurrences); 2) validate the final data with a hierarchy of tests to ensure their scientific utilization. The procedure, adapted to each of the four radiometric channels, is designed to flag each profile in a way compliant with the data management procedure used by the Argo program. The main perturbations in the light field are identified by the new protocols with good performance over the whole dataset, highlighting their potential applicability at the global scale. Finally, the comparison with modeled surface irradiances allows assessing the accuracy of quality-controlled measured irradiance values and identifying any possible evolution over the float lifetime due to biofouling and instrumental drift.
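As a rough illustration of one kind of test such a quality-control chain might contain, the sketch below flags spike-like deviations from a running median of log-irradiance. The window, threshold and synthetic profile are assumptions for demonstration; the operational Bio-Argo tests and Argo flag values are those defined by the paper and the Argo data-management documentation, not reproduced here.

```python
# Crude spike / wave-focusing test on a downward-irradiance profile (illustrative only).
import numpy as np

def flag_spikes(ed, window=5, max_dev=0.5):
    """Boolean mask of samples whose log-irradiance departs from a running
    median by more than max_dev log units."""
    log_ed = np.log(np.clip(ed, 1e-6, None))      # irradiance decays roughly exponentially
    half = window // 2
    padded = np.pad(log_ed, half, mode="edge")
    med = np.array([np.median(padded[i:i + window]) for i in range(len(log_ed))])
    return np.abs(log_ed - med) > max_dev

if __name__ == "__main__":
    z = np.arange(0, 250, 5.0)                    # depth (m)
    ed = 1.5 * np.exp(-0.04 * z)                  # smooth synthetic irradiance profile
    ed[10] *= 5.0                                 # inject one spike
    print(np.where(flag_spikes(ed))[0])           # expected: [10]
```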
Abstract:
This paper is part of a special issue of Applied Geochemistry focusing on reliable applications of compositional multivariate statistical methods. This study outlines the application of compositional data analysis (CoDa) to calibration of geochemical data and multivariate statistical modelling of geochemistry and grain-size data from a set of Holocene sedimentary cores from the Ganges-Brahmaputra (G-B) delta. Over the last two decades, understanding near-continuous records of sedimentary sequences has required the use of core-scanning X-ray fluorescence (XRF) spectrometry, for both terrestrial and marine sedimentary sequences. Initial XRF data are generally unusable in 'raw' format, requiring data processing in order to remove instrument bias, as well as informed sequence interpretation. The applicability of conventional calibration equations to core-scanning XRF data is further limited by the constraints posed by unknown measurement geometry and specimen homogeneity, as well as matrix effects. Log-ratio based calibration schemes have been developed and applied to clastic sedimentary sequences, focusing mainly on energy dispersive XRF (ED-XRF) core-scanning. This study applied high-resolution core-scanning XRF to Holocene sedimentary sequences from the tide-dominated Indian Sundarbans (Ganges-Brahmaputra delta plain). The Log-Ratio Calibration Equation (LRCE) was applied to a sub-set of core-scan and conventional ED-XRF data to quantify elemental composition, providing a robust calibration scheme using reduced major axis regression of log-ratio transformed geochemical data. Through partial least squares (PLS) modelling of geochemical and grain-size data, it is possible to derive robust proxy information for the Sundarbans depositional environment. The application of these techniques to Holocene sedimentary data offers an improved methodological framework for unravelling Holocene sedimentation patterns.
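A hedged sketch of the log-ratio calibration idea follows, in the spirit of the LRCE: scanner count log-ratios are regressed against conventionally measured concentration log-ratios using reduced major axis regression. The element pair, the Ca denominator and the numbers are invented for illustration and are not the paper's dataset.

```python
# Log-ratio calibration sketch: RMA regression of concentration log-ratios on count log-ratios.
import numpy as np

def rma_fit(x, y):
    """Reduced major axis regression: slope = sign(r) * sd(y) / sd(x)."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    intercept = np.mean(y) - slope * np.mean(x)
    return slope, intercept

# Synthetic scanner counts and quantitative ED-XRF concentrations for Fe and Ca.
scan_fe, scan_ca = np.array([1200, 950, 1430, 800.0]), np.array([5200, 6100, 4300, 6900.0])
edx_fe, edx_ca = np.array([3.1, 2.4, 3.9, 2.0]), np.array([12.5, 14.2, 10.8, 15.9])

x = np.log(scan_fe / scan_ca)                 # scanner log-ratio (Fe/Ca)
y = np.log(edx_fe / edx_ca)                   # measured concentration log-ratio (Fe/Ca)
slope, intercept = rma_fit(x, y)
predicted = np.exp(intercept + slope * x)     # calibrated Fe/Ca ratio down-core
print(slope, intercept, predicted)
```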
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Recent advances in the massively parallel computational abilities of graphical processing units (GPUs) have increased their use for general purpose computation, as companies look to take advantage of big data processing techniques. This has given rise to the potential for malicious software targeting GPUs, which is of interest to forensic investigators examining the operation of software. The ability to carry out reverse-engineering of software is of great importance within the security and forensics fields, particularly when investigating malicious software or carrying out forensic analysis following a successful security breach. Due to the complexity of the Nvidia CUDA (Compute Unified Device Architecture) framework, it is not clear how best to approach the reverse engineering of a piece of CUDA software. We carry out a review of the different binary output formats which may be encountered from the CUDA compiler, and their implications on reverse engineering. We then demonstrate the process of carrying out disassembly of an example CUDA application, to establish the various techniques available to forensic investigators carrying out black-box disassembly and reverse engineering of CUDA binaries. We show that the Nvidia compiler, using default settings, leaks useful information. Finally, we demonstrate techniques to better protect intellectual property in CUDA algorithm implementations from reverse engineering.
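For readers wanting a starting point for the black-box inspection step, a minimal sketch using NVIDIA's cuobjdump utility (shipped with the CUDA toolkit) is given below. The binary name is a placeholder, and whether embedded PTX is present depends on how the binary was compiled, which relates to the abstract's observation about default compiler settings.

```python
# Extract embedded PTX and SASS disassembly from a CUDA binary via cuobjdump.
import subprocess

def dump_cuda_binary(path):
    """Return the PTX and SASS text produced by cuobjdump for the given binary."""
    ptx = subprocess.run(["cuobjdump", "--dump-ptx", path],
                         capture_output=True, text=True).stdout
    sass = subprocess.run(["cuobjdump", "--dump-sass", path],
                          capture_output=True, text=True).stdout
    return ptx, sass

if __name__ == "__main__":
    ptx, sass = dump_cuda_binary("example_kernel")   # hypothetical compiled CUDA program
    print(ptx[:500])
    print(sass[:500])
```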
Abstract:
Increasing global urbanisation leads to rising levels of atmospheric pollutants and a corresponding deterioration of air quality. Controlling atmospheric pollution and monitoring air quality are fundamental steps towards implementing reduction strategies and stimulating the environmental awareness of citizens. To this end, several techniques and technologies can be used to monitor air quality. The use of microsensors has emerged as an innovative tool for air quality monitoring. Although microsensor performance enables a new monitoring strategy, delivering fast responses, low operating costs and high efficiencies that cannot be achieved with conventional approaches alone, further knowledge is still needed to integrate these new technologies, particularly regarding the verification of sensor performance against reference methods in field campaigns. This dissertation, carried out as an internship at the Instituto do Ambiente e Desenvolvimento, aimed to evaluate the performance of low-cost sensors against reference methods, based on an air quality monitoring campaign conducted in the centre of Aveiro over two weeks in October 2014. More specifically, the aim was to understand to what extent low-cost sensors can be used while meeting the requirements specified in the legislation and the particulars of the standards, thereby establishing a microsensor evaluation protocol. The work also included characterising air quality in the centre of Aveiro for the period of the monitoring campaign. Deploying electrochemical, MOS and OPC microsensors in parallel with reference equipment in this field study made it possible to assess the reliability and uncertainty of these new monitoring technologies. This work showed that electrochemical microsensors are more accurate than metal-oxide-based microsensors, exhibiting strong correlations with the reference methods for several pollutants. The results obtained by the optical particle counters were satisfactory, although they could be improved both by the sampling mode and by the data processing method applied. Ideally, microsensors should show strong correlations with the reference method and high data collection efficiency. However, some problems with the sensors' data collection efficiency were identified, possibly related to relative humidity and high temperatures during the campaign, intermittent communication failures, and the instability and reactivity caused by interfering gases. Once the limitations of sensor technologies are overcome and suitable quality assurance and quality control procedures can be followed, low-cost sensors have great potential to enable air quality monitoring with high spatial coverage, which is especially beneficial in urban areas.
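A minimal sketch of the kind of sensor-versus-reference comparison described above follows: Pearson correlation over co-valid samples plus a data-capture ratio for one pollutant. The hourly values are synthetic placeholders, not campaign data from Aveiro.

```python
# Compare a low-cost sensor series against a reference analyser series.
import numpy as np

def compare_to_reference(sensor, reference):
    """Correlation over co-valid samples plus the fraction of valid sensor readings."""
    sensor, reference = np.asarray(sensor, float), np.asarray(reference, float)
    valid = ~np.isnan(sensor) & ~np.isnan(reference)
    r = np.corrcoef(sensor[valid], reference[valid])[0, 1]
    capture = np.count_nonzero(~np.isnan(sensor)) / sensor.size
    return r, capture

if __name__ == "__main__":
    ref = [21.0, 25.0, 31.0, 40.0, 38.0, 29.0]          # reference analyser (ug/m3)
    sensed = [19.5, 26.0, np.nan, 42.5, 36.0, 27.5]     # low-cost sensor with one gap
    r, capture = compare_to_reference(sensed, ref)
    print(f"r = {r:.2f}, data capture = {capture:.0%}")
```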
Abstract:
Optical mapping of voltage signals has revolutionised the field and study of cardiac electrophysiology by providing the means to visualise changes in electrical activity at a high temporal and spatial resolution from the cellular to the whole heart level under both normal and disease conditions. The aim of this thesis was to develop a novel method of panoramic optical mapping using a single camera and to study myocardial electrophysiology in isolated Langendorff-perfused rabbit hearts. First, proper procedures for selection, filtering and analysis of the optical data recorded from the panoramic optical mapping system were established. This work was followed by extensive characterisation of the electrical activity across the epicardial surface of the preparation, investigating time and heart dependent effects. In an initial study, features of epicardial electrophysiology were examined as the temperature of the heart was reduced below physiological values. This manoeuvre was chosen to mimic the temperatures experienced during various levels of hypothermia in vivo, a condition known to promote arrhythmias. The facility for panoramic optical mapping allowed the extent of changes in conduction timing and pattern of ventricular activation and repolarisation to be assessed. In the main experimental section, changes in epicardial electrical activity were assessed under various pacing conditions in both normal hearts and in a rabbit model of chronic myocardial infarction. In these experiments, there were significant changes in the pattern of electrical activation corresponding with the changes in pacing regime. These experiments demonstrated a negative correlation between activation time and APD, which was not maintained during ventricular pacing. This suggests that activation pattern is not the sole determinant of action potential duration in intact hearts. Lastly, a realistic 3D computational model of the rabbit left ventricle was developed to simulate the passive and active mechanical properties of the heart. The aim of this model was to infer further information from the experimental optical mapping studies. In future, it would be feasible to gain insight into the electrical and mechanical performance of the heart by simulating experimental pacing conditions in the model.
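As a rough illustration of two quantities discussed above, the sketch below computes activation time (time of steepest upstroke) and APD80 from a single synthetic optical action potential. The trace, the 80% repolarisation level and the normalisation are assumptions for demonstration and do not reproduce the thesis's processing pipeline.

```python
# Activation time and APD80 from one optically recorded action potential (illustrative).
import numpy as np

def activation_and_apd(t, f, repol_fraction=0.8):
    """Activation = time of max dF/dt; APD = time to recover repol_fraction of the upstroke."""
    f = (f - f.min()) / (f.max() - f.min())        # normalise fluorescence to 0..1
    act_idx = np.argmax(np.gradient(f, t))         # steepest upstroke
    level = 1.0 - repol_fraction                   # e.g. APD80 -> 20% of peak remaining
    peak_idx = np.argmax(f)
    repol_idx = peak_idx + np.argmax(f[peak_idx:] <= level)
    return t[act_idx], t[repol_idx] - t[act_idx]

if __name__ == "__main__":
    t = np.linspace(0, 400, 4000)                  # time (ms)
    f = 1.0 / (1.0 + np.exp(-(t - 20) / 2)) * np.exp(-np.maximum(t - 20, 0) / 80)
    t_act, apd80 = activation_and_apd(t, f)
    print(f"activation at {t_act:.1f} ms, APD80 = {apd80:.1f} ms")
```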
Abstract:
The CATARINA Leg1 cruise was carried out from June 22 to July 24 2012 on board the B/O Sarmiento de Gamboa, under the scientific supervision of Aida Rios (CSIC-IIM). It included a repeat of the OVIDE hydrological section, previously occupied in June 2002, 2004, 2006, 2008 and 2010 as part of the CLIVAR program (section name A25), under the supervision of Herlé Mercier (CNRS-LPO). This section begins near Lisbon (Portugal), runs through the West European Basin and the Iceland Basin, crosses the Reykjanes Ridge 300 miles north of the Charlie-Gibbs Fracture Zone, and ends at Cape Hoppe (southeast tip of Greenland). The objective of this repeated hydrological section is to monitor the variability of water mass properties and main current transports in the basin, complementing the international observation array relevant for climate studies. In addition, the Labrador Sea was partly sampled (stations 101-108) between Greenland and Newfoundland, but heavy weather prevented completion of the section south of 53°40’N. The quality of the CTD data is essential to reach the first objective of the CATARINA project, i.e. to quantify the Meridional Overturning Circulation and water mass ventilation changes and their effect on changes in the ocean's anthropogenic carbon uptake and storage capacity. The CATARINA project was mainly funded by the Spanish Ministry of Science and Innovation and co-funded by the Fondo Europeo de Desarrollo Regional. The hydrological OVIDE section includes 95 surface-to-bottom stations from coast to coast, collecting profiles of temperature, salinity, oxygen and currents, spaced by 2 to 25 nautical miles depending on the steepness of the topography. The position of the stations closely follows that of OVIDE 2002. In addition, 8 stations were carried out in the Labrador Sea. From the 24 bottles closed at various depths at each station, seawater samples were used for salinity and oxygen calibration and for measurements of biogeochemical components that are not reported here. The data were acquired with a Sea-Bird CTD (SBE911+) and an SBE43 dissolved oxygen sensor belonging to the Spanish UTM group. The SBE Data Processing software was used after decoding and cleaning the raw data. Then the LPO Matlab toolbox was used to calibrate and bin the data as for the previous OVIDE cruises, using pre- and post-cruise calibration results for the pressure and temperature sensors (performed at Ifremer) on the one hand, and the water samples from the 24 rosette bottles at each station for the salinity and dissolved oxygen data on the other. A final accuracy of 0.002°C, 0.002 psu and 0.04 ml/l (2.3 µmol/kg) was obtained on the final profiles of temperature, salinity and dissolved oxygen, compatible with the international requirements issued from the WOCE program.
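A minimal sketch of the bottle-calibration step follows, assuming a simple linear fit of CTD sensor salinity to bottle salinities and a residual check against the quoted 0.002 psu accuracy. The values are synthetic placeholders rather than OVIDE data, and the operational calibration (LPO Matlab toolbox) is more elaborate.

```python
# Fit CTD salinity to bottle salinities and report the residual spread.
import numpy as np

def calibrate_to_bottles(ctd_values, bottle_values):
    """Fit bottle = a * ctd + b; return the correction coefficients and residual std."""
    a, b = np.polyfit(ctd_values, bottle_values, 1)
    residuals = bottle_values - (a * ctd_values + b)
    return a, b, residuals.std(ddof=1)

if __name__ == "__main__":
    ctd = np.array([35.012, 34.958, 34.871, 35.103, 34.920])      # sensor salinity (psu)
    bottle = np.array([35.010, 34.955, 34.869, 35.100, 34.918])   # lab salinometer (psu)
    a, b, sigma = calibrate_to_bottles(ctd, bottle)
    print(f"slope={a:.4f}, offset={b:.4f}, residual std={sigma:.4f} psu")
```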
Abstract:
A purpose of this research study was to demonstrate the practical linguistic study and evaluation of dissertations by using two examples of the latest technology, the microcomputer and optical scanner. That involved developing efficient methods for data entry plus creating computer algorithms appropriate for personal, linguistic studies. The goal was to develop a prototype investigation which demonstrated practical solutions for maximizing the linguistic potential of the dissertation data base. The mode of text entry was from a Dest PC Scan 1000 Optical Scanner. The function of the optical scanner was to copy the complete stack of educational dissertations from the Florida Atlantic University Library into an I.B.M. XT microcomputer. The optical scanner demonstrated its practical value by copying 15,900 pages of dissertation text directly into the microcomputer. A total of 199 dissertations or 72% of the entire stack of education dissertations (277) were successfully copied into the microcomputer's word processor where each dissertation was analyzed for a variety of syntax frequencies. The results of the study demonstrated the practical use of the optical scanner for data entry, the microcomputer for data and statistical analysis, and the availability of the college library as a natural setting for text studies. A supplemental benefit was the establishment of a computerized dissertation corpus which could be used for future research and study. The final step was to build a linguistic model of the differences in dissertation writing styles by creating 7 factors from 55 dependent variables through principal components factor analysis. The 7 factors (textual components) were then named and described on a hypothetical construct defined as a continuum from a conversational, interactional style to a formal, academic writing style. The 7 factors were then grouped through discriminant analysis to create discriminant functions for each of the 7 independent variables. The results indicated that a conversational, interactional writing style was associated with more recent dissertations (1972-1987), an increase in author's age, females, and the department of Curriculum and Instruction. A formal, academic writing style was associated with older dissertations (1972-1987), younger authors, males, and the department of Administration and Supervision. It was concluded that there were no significant differences in writing style due to subject matter (community college studies) compared to other subject matter. It was also concluded that there were no significant differences in writing style due to the location of dissertation origin (Florida Atlantic University, University of Central Florida, Florida International University).
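The two-stage statistical design described above, factor reduction of many syntax-frequency variables followed by discriminant analysis on the resulting factors, can be sketched as follows. scikit-learn and random data stand in for the 55 dependent variables and group labels, so this mirrors the design rather than the original computation.

```python
# Reduce 55 variables to 7 components, then discriminate groups on the component scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(199, 55))                  # 199 dissertations x 55 syntax variables (placeholder)
groups = rng.integers(0, 2, size=199)           # e.g. department or sex, placeholder labels

factors = PCA(n_components=7).fit_transform(X)  # 7 textual components from 55 variables
lda = LinearDiscriminantAnalysis().fit(factors, groups)
print(lda.score(factors, groups))               # in-sample classification accuracy
```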