969 results for Optical data processing


Relevância:

90.00%

Publicador:

Resumo:

Nowadays, existing 3D scanning cameras and microscopes on the market use digital or discrete sensors, such as CCDs or CMOS imagers, for object detection applications. However, these combined systems are not fast enough for some application scenarios, since they require large data processing resources and can be cumbersome. There is therefore a clear interest in exploring the possibilities and performance of analogue sensors, such as arrays of position sensitive detectors (PSDs), with the final goal of integrating them into 3D scanning cameras or microscopes for object detection purposes. The work performed in this thesis deals with the implementation of prototype systems to explore object detection using amorphous silicon position sensors of 32 and 128 lines, which were produced in the clean room at CENIMAT-CEMOP. In the first phase of this work, the fabrication and study of the static and dynamic specifications of the sensors, as well as their conditioning in relation to the existing scientific and technological knowledge, served as the starting point. Subsequently, the relevant data acquisition and signal processing electronics were assembled. Various prototypes were developed for the 32- and 128-line PSD sensors, and appropriate optical solutions were integrated with them, allowing the required experiments to be carried out and the results presented in this thesis to be achieved. All control, data acquisition and 3D rendering software was implemented for these systems, and all these components were combined into several integrated systems for the 32- and 128-line PSD 3D sensors. The performance of the 32-line PSD array sensor and system was evaluated for machine vision applications, such as 3D object rendering, and for microscopy applications, such as micro-object movement detection. Trials were also performed with the 128-line PSD sensor systems.
Sensor channel non-linearities of approximately 4 to 7% were obtained. Overall, the results show that a linear array of 32/128 1D line sensors based on amorphous silicon technology can be used to render 3D profiles of objects. The system and setup presented allow 3D rendering at high speeds and high frame rates. The minimum detail or gap that the sensor system can detect with the current setup is approximately 350 μm. It is also possible to render an object in 3D within a scanning angle range of 15° to 85° and to identify its real height as a function of the scanning angle and the image displacement distance on the sensor. Both simple and more complex objects, such as an eraser or a plastic fork, can be rendered in 3D accurately and at high resolution using this sensor and system platform. The nip-structure sensor system can detect primary and even derived colours of objects through proper adjustment of the system's integration time and by combining white, red, green and blue (RGB) light sources; a mean colorimetric error of 25.7 was obtained. The movement of micrometre-scale objects can also be detected with the 32-line PSD sensor system. This kind of setup makes it possible to determine whether a micro-object is moving, what its dimensions are and what its position is in two dimensions, even at high speeds. Results show a non-linearity of about 3% and a spatial resolution of < 2 µm.
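The 1D PSD lines at the heart of these systems report a light spot's position through the ratio of the photocurrents collected at the two ends of each line, and the scanning geometry then relates the measured image displacement to object height. A minimal sketch of both relations follows; the sensor length, current values and simple triangulation geometry are illustrative assumptions, not the thesis's actual calibration:

```python
import math

def psd_position(i1, i2, length_mm=10.0):
    """Spot position on a 1D lateral-effect PSD from its two electrode
    photocurrents: x = (L/2) * (i2 - i1) / (i1 + i2)."""
    total = i1 + i2
    if total == 0:
        raise ValueError("no photocurrent detected")
    return 0.5 * length_mm * (i2 - i1) / total

def object_height(displacement_mm, scan_angle_deg):
    """Toy triangulation relation between image displacement on the sensor
    and object height for a given scanning angle (real setups also involve
    the optics' magnification and baseline)."""
    return displacement_mm / math.tan(math.radians(scan_angle_deg))
```

A spot centred on the line yields equal currents and position zero; a spot at one end yields the full half-length.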

Relevância:

90.00%

Publicador:

Resumo:

The development of organic materials displaying high two-photon absorption (TPA) has attracted much attention in recent years due to a variety of potential applications in photonics and optoelectronics, such as three-dimensional optical data storage, fluorescence imaging, two-photon microscopy, optical limiting, microfabrication, photodynamic therapy, upconverted lasing, etc. The most frequently employed structural motifs for TPA materials are donor–pi bridge–acceptor (D–pi–A) dipoles, donor–pi bridge–donor (D–pi–D) and acceptor–pi bridge–acceptor (A–pi–A) quadrupoles, octupoles, etc. In this work we present the synthesis and photophysical characterization of quadrupolar heterocyclic systems with potential applications in materials and biological sciences as TPA chromophores. Indole is a versatile building block for the synthesis of heterocyclic systems for several optoelectronic applications (chemosensors, nonlinear optics, OLEDs) due to its photophysical properties and electron-donating ability, and the 4H-pyran-4-ylidene fragment is frequently used for the synthesis of red light-emitting materials. On the other hand, 2-(2,6-dimethyl-4H-pyran-4-ylidene)malononitrile (1) and 1,3-diethyl-dihydro-5-(2,6-dimethyl-4H-pyran-4-ylidene)-2-thiobarbituric (2) units are usually used as strong acceptor moieties for the preparation of π-conjugated systems of the push-pull type. These building blocks were prepared by Knoevenagel condensation of the corresponding ketone precursor with malononitrile or 1,3-diethyl-dihydro-2-thiobarbituric acid. The new quadrupolar 4H-pyran-4-ylidene fluorophores (3) derived from indole were prepared through condensation of 5-methyl-1H-indole-3-carbaldehyde with the acceptor precursors 1 and 2, in the presence of a catalytic amount of piperidine. The new compounds were characterized by the usual spectroscopic techniques (UV-vis, FT-IR and multinuclear 1H and 13C NMR).

Relevância:

90.00%

Publicador:

Resumo:

Real-time data acquisition is fundamental to providing appropriate services and supporting health professionals' decision-making. In this paper a pervasive, adaptive data acquisition architecture for medical devices (e.g. vital-signs monitors, ventilators and sensors) is presented. The architecture was deployed in a real context, an Intensive Care Unit, where it provides clinical data in real time to the INTCare system. The gateway is composed of several agents able to collect a set of patient variables (vital signs, ventilation) across the network. The paper shows the ventilation acquisition process as an example. The clients are installed on a machine near the patient's bed and connected to the ventilators; the monitored data are sent to a multithreading server, which records them in the database using Health Level Seven (HL7) protocols. The agents associated with the gateway are able to collect, analyse, interpret and store the data in the repository. The gateway includes a fault-tolerant mechanism that ensures the data are stored in the database even if the agents are disconnected. The gateway is pervasive, universal and interoperable, and is able to adapt to any service using streaming data.
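As a sketch of the acquisition path described above, the snippet below parses a pipe-delimited HL7 v2 message of the kind a ventilator gateway might forward and extracts its observation (OBX) segments. The message content and segment layout are illustrative, not the actual INTCare message format:

```python
def parse_hl7(message):
    """Split a pipe-delimited HL7 v2 message into (segment_id, fields) pairs.
    HL7 v2 separates segments with carriage returns and fields with '|'."""
    return [(line.split("|")[0], line.split("|")[1:])
            for line in message.strip().split("\r") if line]

def observations(segments):
    """Pull (identifier, value, units) triples from OBX segments
    (fields OBX-3, OBX-5 and OBX-6 in HL7 v2 numbering)."""
    return [(f[2], f[4], f[5]) for sid, f in segments if sid == "OBX"]
```

In the real system such records would then be written to the database by the multithreading server rather than returned to the caller.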

Relevância:

90.00%

Publicador:

Resumo:

There are far-reaching conceptual similarities between bi-static surface georadar and post-stack, "zero-offset" seismic reflection data, which are reflected in largely identical processing flows. One important difference is, however, that standard deconvolution algorithms routinely used to enhance the vertical resolution of seismic data are notoriously problematic or even detrimental to the overall signal quality when applied to surface georadar data. We have explored various options for alleviating this problem and have tested them on a geologically well-constrained surface georadar dataset. Standard stochastic and direct deterministic deconvolution approaches proved to be largely unsatisfactory. While least-squares-type deterministic deconvolution showed some promise, the inherent uncertainties involved in estimating the source wavelet introduced some artificial "ringiness". In contrast, we found spectral balancing approaches to be an effective, practical and robust means of enhancing the vertical resolution of surface georadar data, particularly, but not exclusively, in the uppermost part of the georadar section, which is notoriously plagued by the interference of the direct air- and groundwaves. For the data considered in this study, it can be argued that band-limited spectral blueing may provide somewhat better results than standard band-limited spectral whitening, particularly in the uppermost part of the section affected by the interference of the air- and groundwaves. Interestingly, this finding is consistent with the fact that the amplitude spectrum resulting from least-squares-type deterministic deconvolution is characterized by a systematic enhancement of higher frequencies at the expense of lower frequencies and hence is blue rather than white. It is also consistent with increasing evidence that spectral "blueness" is a seemingly universal, albeit enigmatic, property of the distribution of reflection coefficients in the Earth.
Our results therefore indicate that spectral balancing techniques in general and spectral blueing in particular represent simple, yet effective means of enhancing the vertical resolution of surface georadar data and, in many cases, could turn out to be a preferable alternative to standard deconvolution approaches.
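The spectral balancing idea can be sketched in a few lines of NumPy: keep the phase of each trace, replace the in-band amplitude spectrum with a flat (whitening) or high-frequency-tilted (blueing) target, and transform back. This is a generic phase-preserving sketch, not the authors' exact processing; the band limits, target slope and stabilization constant are illustrative parameters:

```python
import numpy as np

def spectral_balance(trace, dt, f_lo, f_hi, slope=0.0, eps=1e-6):
    """Band-limited spectral balancing of one trace: flatten the amplitude
    spectrum inside [f_lo, f_hi] while preserving phase. slope = 0 gives
    whitening; slope > 0 tilts the target toward high frequencies (a
    simple 'blueing')."""
    n = len(trace)
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(n, dt)
    amp = np.abs(spec)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    target = np.zeros_like(amp)
    target[band] = 1.0 + slope * (freqs[band] - f_lo) / (f_hi - f_lo)
    # unit-amplitude phasors * target shape, rescaled to the mean amplitude
    balanced = spec / (amp + eps) * target * amp.mean()
    return np.fft.irfft(balanced, n)
```

Applying this trace by trace to a georadar section flattens (or tilts) the spectrum within the usable bandwidth without touching arrival times.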

Relevância:

90.00%

Publicador:

Resumo:

Configuration of a development environment in the Eclipse IDE. Introduction to GIS: uses, applications and examples. Getting to know the gvSIG tool. Getting to know the most widespread Open Geospatial Consortium (OGC) standards, in particular the Web Processing Service (WPS). Analysis, design and development of a client capable of consuming WPS services.

Relevância:

90.00%

Publicador:

Resumo:

A study of the standards defined by the Open Geospatial Consortium, and more specifically of the Web Processing Service (WPS) standard. The project also had a practical component, consisting of the design and development of a client capable of consuming Web services created according to WPS, integrated into the gvSIG platform.

Relevância:

90.00%

Publicador:

Resumo:

The automatic interpretation of conventional traffic signs is very complex and time consuming. This paper concerns an automatic warning system for driving assistance. It does not interpret the standard traffic signs on the roadside; instead, the proposal is to add to the existing signs another type of traffic sign whose information can be interpreted more easily by a processor. The information to be added is profuse, and therefore the most important objective is the robustness of the system. The basic idea of this new philosophy is that the co-pilot system for automatic warning and driving assistance interprets the information contained in the new sign with greater ease, whilst the human driver only has to interpret the "classic" sign. One coding that has been tested with good results, and which seems easy to implement, has a rectangular shape with 4 vertical bars of different colours. The size of these signs is equivalent to that of conventional signs (approximately 0.4 m2). The colour information on the sign can easily be read by the proposed processor, and its interpretation is much easier and quicker than that of the pictographs on classic signs.
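To illustrate why such a sign is trivial for a processor, the toy decoder below maps 4 vertical bars to a numeric code word. The 4-colour palette and base-4 encoding are hypothetical assumptions for illustration; the paper does not specify the actual colour code:

```python
# Hypothetical 4-colour alphabet; with 4 bars it yields 4**4 = 256 codes.
PALETTE = ["red", "green", "blue", "yellow"]

def decode_sign(bars):
    """Interpret 4 vertical colour bars as the digits of a base-4 code word,
    leftmost bar being the most significant digit."""
    if len(bars) != 4:
        raise ValueError("expected exactly 4 bars")
    code = 0
    for colour in bars:
        code = code * len(PALETTE) + PALETTE.index(colour)
    return code
```

Unlike pictograph recognition, this reduces sign interpretation to four colour classifications and an integer lookup.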

Relevância:

90.00%

Publicador:

Resumo:

BACKGROUND: Solexa/Illumina short-read ultra-high throughput DNA sequencing technology produces millions of short tags (up to 36 bases) by parallel sequencing-by-synthesis of DNA colonies. The processing and statistical analysis of such high-throughput data poses new challenges; currently a fair proportion of the tags are routinely discarded due to an inability to match them to a reference sequence, thereby reducing the effective throughput of the technology. RESULTS: We propose a novel base calling algorithm using model-based clustering and probability theory to identify ambiguous bases and code them with IUPAC symbols. We also select optimal sub-tags using a score based on information content to remove uncertain bases towards the ends of the reads. CONCLUSION: We show that the method improves genome coverage and number of usable tags as compared with Solexa's data processing pipeline by an average of 15%. An R package is provided which allows fast and accurate base calling of Solexa's fluorescence intensity files and the production of informative diagnostic plots.
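The core idea above, replacing an uncertain call with the IUPAC symbol that covers every plausible base, can be sketched as follows. The simple probability-gap rule stands in for the paper's model-based clustering and is an assumption of this sketch:

```python
# IUPAC nucleotide ambiguity codes keyed by the set of plausible bases
IUPAC = {
    frozenset("A"): "A", frozenset("C"): "C",
    frozenset("G"): "G", frozenset("T"): "T",
    frozenset("AG"): "R", frozenset("CT"): "Y",
    frozenset("CG"): "S", frozenset("AT"): "W",
    frozenset("GT"): "K", frozenset("AC"): "M",
    frozenset("ACG"): "V", frozenset("ACT"): "H",
    frozenset("AGT"): "D", frozenset("CGT"): "B",
    frozenset("ACGT"): "N",
}

def call_base(probs, threshold=0.25):
    """Return the IUPAC symbol covering every base whose probability is
    within `threshold` of the best call (illustrative ambiguity rule)."""
    best = max(probs.values())
    plausible = frozenset(b for b, p in probs.items() if best - p <= threshold)
    return IUPAC[plausible]
```

A confident position collapses to a single base, while a near-tie between A and G is preserved as "R" instead of being discarded.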

Relevância:

90.00%

Publicador:

Resumo:

Many regions of the world, including inland lakes, present suboptimal conditions for the remotely sensed retrieval of optical signals, thus challenging the limits of available satellite data-processing tools, such as atmospheric correction models (ACM) and water constituent-retrieval (WCR) algorithms. Working in such regions, however, can improve our understanding of remote-sensing tools and their applicability in new contexts, in addition to potentially offering useful information about aquatic ecology. Here, we assess and compare 32 combinations of two ACMs, two WCRs, and three binary categories of data quality standards to optimize a remotely sensed proxy of plankton biomass in Lake Kivu. Each parameter set is compared against the available ground-truth match-ups using Spearman's right-tailed ρ. Focusing on the best sets from each ACM-WCR combination, their performances are discussed with regard to data distribution, sample size, spatial completeness, and seasonality. The results of this study may be of interest both for ecological studies on Lake Kivu and for epidemiological studies of disease, such as cholera, the dynamics of which have been associated with plankton biomass in other regions of the world.
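Ranking candidate parameter sets by Spearman's ρ against ground-truth match-ups can be sketched with plain NumPy as below; a full right-tailed test would additionally attach a one-sided p-value, which this sketch omits:

```python
import numpy as np

def rankdata(x):
    """Ranks 1..n, with tied values sharing their average rank."""
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):          # average the ranks of tied values
        mask = x == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx = rankdata(np.asarray(x, float))
    ry = rankdata(np.asarray(y, float))
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

Each ACM-WCR parameter set would be scored by ρ between its retrieved proxy values and the in-situ match-ups, and the highest-ρ set retained.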

Relevância:

90.00%

Publicador:

Resumo:

The relief of the seafloor is an important source of data for many scientists. In this paper we present an optical system for underwater 3D reconstruction. The system is formed by three cameras that capture images synchronously at a constant frame rate. We use the images taken by these cameras to compute dense 3D reconstructions, and Bundle Adjustment to estimate the motion of the trinocular rig. Given the path followed by the system, we obtain a dense map of the observed scene by registering the different dense local reconstructions into a single, larger one.
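Once Bundle Adjustment yields a pose (R, t) per frame, registering the local dense reconstructions into one global cloud amounts to transforming each local point set into the world frame and stacking. A minimal sketch, where the clouds and poses are placeholder data rather than the paper's actual pipeline:

```python
import numpy as np

def register(local_clouds, poses):
    """Map each local 3D reconstruction into the global frame using the
    rig pose (R, t) estimated for that frame, then stack into one cloud.
    Points are rows, so world = p @ R.T + t."""
    chunks = []
    for pts, (R, t) in zip(local_clouds, poses):
        chunks.append(pts @ R.T + t)
    return np.vstack(chunks)
```

A real implementation would follow this with overlap-aware fusion (e.g. voxel merging) rather than a plain concatenation.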

Relevância:

90.00%

Publicador:

Resumo:

Gaia is the most ambitious space astrometry mission currently envisaged and is a technological challenge in all its aspects. We describe a proposal for the payload data handling system of Gaia, as an example of a high-performance, real-time, concurrent, and pipelined data system. This proposal includes the front-end systems for the instrumentation, the data acquisition and management modules, the star data processing modules, and the payload data handling unit. We also review other payload and service module elements and we illustrate a data flux proposal.
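The pipelined, concurrent character of such a payload data handling system can be illustrated with a toy thread pipeline, where queues stand in for the real inter-module interfaces and the stage functions are placeholders for acquisition and star data processing:

```python
import queue
import threading

def stage(fn, inbox, outbox):
    """One pipeline stage: apply fn to each item until the sentinel arrives."""
    while True:
        item = inbox.get()
        if item is None:        # sentinel: propagate downstream and stop
            outbox.put(None)
            return
        outbox.put(fn(item))

# three queues linking acquisition -> processing -> the data handling unit
q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2), daemon=True).start()
threading.Thread(target=stage, args=(lambda x: x + 1, q2, q3), daemon=True).start()

for sample in [1, 2, 3]:        # feed raw samples into the front end
    q1.put(sample)
q1.put(None)

results = []
while (item := q3.get()) is not None:
    results.append(item)
```

Because every stage runs in its own thread, a new sample can enter the front end while earlier ones are still being processed downstream, which is the essence of a pipelined real-time data system.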

Relevância:

90.00%

Publicador:

Resumo:

Background Nowadays, combining the different sources of information to improve the biological knowledge available is a challenge in bioinformatics. One of the most powerful approaches for integrating heterogeneous data types is the use of kernel-based methods. Kernel-based data integration consists of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge.
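The two steps described above, a Gram matrix per source and then a combined kernel fed to kernel PCA, can be sketched as below. The plain sum of kernels is the simplest combination rule and is an assumption of this sketch, not necessarily the weighting used in the paper:

```python
import numpy as np

def center_gram(K):
    """Double-centre a Gram matrix, the standard kernel-PCA centring step."""
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    return K - one @ K - K @ one + one @ K @ one

def kernel_pca(kernels, n_components=2):
    """Integrate several sources by summing their Gram matrices, then
    project the samples onto the leading kernel principal components."""
    K = center_gram(sum(kernels))
    vals, vecs = np.linalg.eigh(K)
    idx = np.argsort(vals)[::-1][:n_components]
    # scale eigenvectors by sqrt(eigenvalue) to get sample projections
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))
```

With linear kernels K = X @ X.T per view, this reduces all sources to one low-dimensional sample plot on which input variables can then be overlaid.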

Relevância:

90.00%

Publicador:

Resumo:

DnaSP is a software package for a comprehensive analysis of DNA polymorphism data. Version 5 implements a number of new features and analytical methods allowing extensive DNA polymorphism analyses on large datasets. Among other features, the newly implemented methods allow for: (i) analyses on multiple data files; (ii) haplotype phasing; (iii) analyses on insertion/deletion polymorphism data; (iv) visualizing sliding window results integrated with available genome annotations in the UCSC browser.
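A core statistic in the kind of DNA polymorphism analysis DnaSP performs is nucleotide diversity (π), the mean pairwise proportion of differing sites. A minimal sketch for pre-aligned sequences (this is the textbook definition, not DnaSP's own implementation):

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    """Nucleotide diversity (pi): average proportion of sites that differ
    between two sequences, over all pairs in the sample."""
    if len({len(s) for s in seqs}) != 1:
        raise ValueError("sequences must be aligned to equal length")
    length = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)
```

DnaSP computes this (and many related statistics) per window when producing the sliding-window results mentioned above.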
