840 results for Signal-to-noise Ratio


Relevance: 100.00%

Abstract:

In this paper, the authors provide a methodology for designing nonparametric permutation tests and, in particular, nonparametric rank tests for detection applications. In the first part of the paper, the authors develop the optimization theory of both permutation and rank tests in the Neyman–Pearson sense; in the second part, they carry out a comparative performance analysis of the permutation and rank tests (detectors) against the parametric ones in radar applications. First, a brief review of contributions on nonparametric tests is given. Then, the optimum permutation and rank tests are derived. Finally, the corresponding detectors are evaluated by Monte Carlo simulation, and the results are presented as curves of detection probability versus signal-to-noise ratio.
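The Monte Carlo evaluation described above can be sketched for a generic rank-sum detector. This is an illustrative toy, not the paper's detectors: the sample sizes, false-alarm rate and constant-signal model below are assumptions chosen only to show how detection probability is estimated empirically at a given signal-to-noise ratio.

```python
import numpy as np

def rank_sum_statistic(test, reference):
    """Wilcoxon-style rank-sum: sum of the ranks of the test samples
    within the pooled (test + reference) sample."""
    pooled = np.concatenate([test, reference])
    ranks = pooled.argsort().argsort() + 1  # ranks 1..N+M (no ties for continuous noise)
    return ranks[:len(test)].sum()

def detection_probability(snr_db, n_trials=2000, m=16, n=64, pfa=1e-2, rng=None):
    """Monte Carlo estimate of Pd for a rank-sum detector of a constant
    signal in Gaussian noise; the threshold is set empirically from
    noise-only trials to meet the requested false-alarm rate."""
    rng = np.random.default_rng(rng)
    # Calibrate the threshold under H0 (noise only).
    h0 = np.array([rank_sum_statistic(rng.standard_normal(m),
                                      rng.standard_normal(n))
                   for _ in range(n_trials)])
    threshold = np.quantile(h0, 1.0 - pfa)
    # Estimate Pd under H1 (signal + noise).
    amp = 10 ** (snr_db / 20.0)
    h1 = np.array([rank_sum_statistic(amp + rng.standard_normal(m),
                                      rng.standard_normal(n))
                   for _ in range(n_trials)])
    return float((h1 > threshold).mean())

pd_low = detection_probability(-10, rng=0)
pd_high = detection_probability(5, rng=0)
print(pd_low, pd_high)  # Pd grows with SNR
```

Sweeping `snr_db` and plotting the returned probabilities reproduces the kind of Pd-versus-SNR curve the abstract refers to.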

Relevance: 100.00%

Abstract:

A new method of light modulation is reported. This method is based on the electro-optical properties of nematic materials and on the use of a new wedge structure. The advantages of this structure are the possibility of modulating nonpolarized light and the improved signal-to-noise ratio. The highest modulating frequency obtained is 25 kHz.

Relevance: 100.00%

Abstract:

Purpose: A fully three-dimensional (3D), massively parallelizable list-mode ordered-subsets expectation-maximization (LM-OSEM) reconstruction algorithm has been developed for high-resolution PET cameras. System response probabilities are calculated online from a set of parameters derived from Monte Carlo simulations. The shape of the system response for a given line of response (LOR) has been shown to be asymmetrical around the LOR. This work focuses on the development of efficient region-search techniques to sample the system response probabilities, suitable for asymmetric kernel models, including elliptical Gaussian models, that allow for high accuracy and high parallelization efficiency. The novel region-search scheme using variable kernel models is applied in the proposed PET reconstruction algorithm. Methods: A novel region-search technique is used to sample the probability density function over a small dynamic subset of the field of view that constitutes the region of response (ROR). The ROR is identified around the LOR by searching for any voxel within a dynamically calculated contour. The contour condition is currently defined as a fixed threshold over the posterior probability, and arbitrary kernel models can be applied using a numerical approach. The processing of the LORs is distributed in batches among the available computing devices; individual LORs are then processed within different processing units. In this way, both multicore and multiple many-core processing units can be efficiently exploited. Tests have been conducted with probability models that take into account the noncollinearity, positron range, and crystal penetration effects, which produce tubes of response with varying elliptical sections whose axes are a function of the crystal's thickness and the angle of incidence of the given LOR.
The algorithm treats the probability model as a 3D scalar field defined within a reference system aligned with the ideal LOR. Results: The new technique provides superior image quality, in terms of signal-to-noise ratio, compared with the histogram-mode method based on precomputed system matrices available for a commercial small-animal scanner. Reconstruction times can be kept low with the use of multicore and many-core architectures, including multiple graphics processing units. Conclusions: A highly parallelizable LM reconstruction method has been proposed, based on Monte Carlo simulations and new parallelization techniques, aimed at improving the reconstruction speed and the image signal-to-noise ratio of a given OSEM algorithm. The method has been validated using simulated and real phantoms. A special advantage of the new method is the possibility of dynamically defining the cut-off threshold over the calculated probabilities, allowing direct control over the trade-off between speed and quality during the reconstruction.
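The thresholded region-of-response idea can be illustrated in a deliberately simplified 2D toy. The paper's method is 3D, uses asymmetric elliptical kernels with Monte-Carlo-derived parameters and runs on many-core hardware; the isotropic Gaussian tube, grid size, sigma and cutoff below are illustrative assumptions only.

```python
import numpy as np

def lor_kernel(grid_xy, p0, p1, sigma, cutoff):
    """Gaussian tube-of-response kernel around the line p0-p1, truncated
    at `cutoff` to form a sparse region of response (ROR).
    Returns (voxel indices, probabilities)."""
    d = (p1 - p0) / np.linalg.norm(p1 - p0)
    rel = grid_xy - p0
    dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])  # perpendicular distance to the LOR
    prob = np.exp(-0.5 * (dist / sigma) ** 2)
    idx = np.nonzero(prob > cutoff)[0]
    return idx, prob[idx]

def lm_em_update(image, lors, grid_xy, sigma, cutoff):
    """One list-mode MLEM update in which each event touches only its ROR."""
    ratio = np.zeros_like(image)
    sens = np.zeros_like(image)
    for p0, p1 in lors:
        idx, a = lor_kernel(grid_xy, p0, p1, sigma, cutoff)
        fwd = a @ image[idx]            # forward projection over the ROR only
        if fwd > 0:
            ratio[idx] += a / fwd       # back-project the event ratio
        sens[idx] += a
    out = image.copy()
    ok = sens > 0
    out[ok] *= ratio[ok] / sens[ok]
    return out

n = 33
xs = np.linspace(-1, 1, n)
grid = np.array([(x, y) for y in xs for x in xs])
image = np.ones(n * n)
angles = np.linspace(0, np.pi, 8, endpoint=False)
lors = [(np.array([np.cos(t), np.sin(t)]), np.array([-np.cos(t), -np.sin(t)]))
        for t in angles]                # events: LORs through the origin
idx0, a0 = lor_kernel(grid, lors[0][0], lors[0][1], 0.05, 1e-3)
updated = lm_em_update(image, lors, grid, sigma=0.05, cutoff=1e-3)
centre = (n * n) // 2                   # voxel at (0, 0)
print(len(idx0), updated[centre])
```

Raising the cutoff shrinks each ROR, which is exactly the speed-versus-quality trade-off the conclusions mention.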

Relevance: 100.00%

Abstract:

An optical communications receiver using wavelet signal processing is proposed in this paper for dense wavelength-division multiplexed (DWDM) systems and modal-division multiplexed (MDM) transmissions. The optical signal-to-noise ratio (OSNR) required to demodulate the polarization-division multiplexed quadrature phase-shift keying (PDM-QPSK) modulation format is relaxed by the wavelet denoising process. This procedure improves the bit error rate (BER) performance and increases the transmission distance in DWDM systems. Additionally, the wavelet-based design relies on signal decomposition using time-limited basis functions, which reduces the computational cost of the digital signal processing (DSP) module. Turning to MDM systems, a new scheme for encoding data bits based on wavelets is presented to minimize mode coupling in few-mode fibers (FMF) and multimode fibers (MMF). The Shifted Prolate Wave Spheroidal (SPWS) functions are proposed to reduce the modal interference.
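Wavelet denoising of the kind described above can be sketched with the classic recipe: decompose, soft-threshold the detail coefficients, reconstruct. This is a minimal sketch using a Haar wavelet and a slowly varying test signal, not the paper's receiver design; the number of levels and the threshold rule are common defaults, assumed here for illustration.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return s, d

def haar_idwt(s, d):
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

def denoise(x, levels=3, k=3.0):
    """Multi-level Haar decomposition with soft-thresholding of the
    detail coefficients (the standard wavelet denoising recipe)."""
    approx, details = x.astype(float), []
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745  # robust noise estimate
    thr = k * sigma
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0) for d in details]
    for d in reversed(details):
        approx = haar_idwt(approx, d)
    return approx

def snr_db(ref, x):
    return 10 * np.log10(np.sum(ref**2) / np.sum((ref - x)**2))

rng = np.random.default_rng(1)
t = np.arange(1024)
clean = np.cos(2 * np.pi * t / 128)          # slowly varying test waveform
noisy = clean + 0.5 * rng.standard_normal(1024)
den = denoise(noisy)
print(snr_db(clean, noisy), snr_db(clean, den))
```

Zeroing most detail coefficients removes the bulk of the wideband noise while barely distorting the smooth signal, which is the mechanism behind the OSNR relaxation the abstract claims.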

Relevance: 100.00%

Abstract:

Electric probes are widely employed for plasma diagnostics. This dissertation concerns the operation of collecting and emissive Langmuir probes in low-density cold plasmas.
The study is focused on the determination of the plasma potential, Vsp, by means of the floating potential of emissive probes. This technique consists of measuring the probe potential corresponding to zero net probe current, the so-called floating potential, VF. This potential displaces towards the plasma potential as the thermionic electron emission increases, until it saturates near Vsp. Experiments carried out in the plasma plume of an ion thruster and in a glow discharge plasma show that the thermionic electron current of the emissive Langmuir probe is higher than the collected electron current for a probe biased below Vsp, which is inconsistent with the traditionally accepted theory. To investigate these results, a parameter R is introduced as the ratio between the emitted and the collected electron currents. This parameter, which is related to the difference VF - Vsp, is also useful for describing the operation modes of the emissive Langmuir probe (weak, strong and beyond strong). The experimental results give the inconsistency R > 1, which is resolved by a modification of the theory of emissive probes, with the introduction of an effective electron population. With this new electron group, the new model for the total probe current agrees with the experimental data. The origin of this electron group remains an open question, but it might arise from a new potential structure near the emissive probe when it operates in the strong emission regime. A simple one-dimensional model, composed of a potential minimum near the probe surface, is discussed for strongly emitting probes. The results indicate that this complex potential structure appears at very high probe temperatures, and that the potential well might reduce the population of emitted electrons reaching the plasma bulk.
The experimental issues involved in the floating potential method are also studied, such as the different techniques for obtaining VF, the signal-to-noise ratio, the signal coupling of the I-V curve measurement system, and the experimental evidence of the probe operation modes. These empirical proofs concern all aspects of probe operation: the electron collection, the floating potential, the I-V curve accuracy, and the electron emission. This last issue is also investigated in this dissertation, because a super-emission phenomenon takes place in the strong emission regime. In this operation mode, the experimental results indicate that the thermionic electron currents might be higher than those predicted by the classical Richardson-Dushman equation. Finally, plasma diagnosis using electric probes in the presence of dust grains (dusty plasmas) in low-density cold plasmas is also addressed. The application of the floating potential technique of the emissive probe in a non-conventional complex plasma is numerically investigated; the results point out that the floating potential of the emissive probe might be shifted for high dust densities or large dust particles.
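The ratio R discussed above can be made concrete with a back-of-envelope classical estimate: emitted current density from the Richardson-Dushman equation against the Boltzmann-retarded Maxwellian flux collected by a probe biased below the plasma potential. The probe temperature, work function, plasma density and electron temperature below are illustrative assumptions, not values from the thesis; the point is only that this classical estimate stays at or below order unity, which is why the measured R > 1 motivates an extra electron population.

```python
import numpy as np

e, me, kB = 1.602e-19, 9.109e-31, 1.381e-23   # SI constants
A_RD = 1.20e6                                  # Richardson constant, A m^-2 K^-2

def richardson_dushman(T_probe, phi_eV):
    """Thermionic emission current density, J = A T^2 exp(-e*phi / (kB*T))."""
    return A_RD * T_probe**2 * np.exp(-phi_eV * e / (kB * T_probe))

def collected_current_density(n_e, Te_eV, dV):
    """Maxwellian electron flux to a probe biased dV volts below the plasma
    potential (Boltzmann-retarded thermal flux, planar approximation)."""
    v_mean = np.sqrt(8.0 * Te_eV * e / (np.pi * me))
    return 0.25 * n_e * e * v_mean * np.exp(-dV / Te_eV)

# illustrative numbers (hypothetical, not from the thesis):
# hot tungsten wire in a low-density cold plasma
J_em = richardson_dushman(T_probe=1900.0, phi_eV=4.5)
J_col = collected_current_density(n_e=1e15, Te_eV=2.0, dV=1.0)
R = J_em / J_col
print(R)   # the classical estimate stays below unity for these numbers
```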

Relevance: 100.00%

Abstract:

This thesis discusses correction methods that compensate for variations in lighting conditions in colour image and video applications. These variations often make Computer Vision algorithms that use colour features to describe objects fail. Three research questions are formulated that define the framework of the thesis. The first question addresses the similarities in photometric behaviour between images of adjacent surfaces. Based on the analysis of the image formation model in dynamic situations, this thesis proposes a model that predicts the colour variations of a region of an image from the variations of the surrounding regions. The proposed model is called the Quotient Relational Model of Regions. This model is valid when the light sources illuminate all of the surfaces included in the model; these surfaces are placed close to each other and have similar orientations; and they are primarily Lambertian. Under certain circumstances, a linear combination is established between the photometric responses of the regions. No previous work proposing such a relational model was found in the scientific literature. The second question examines whether those similarities can be used to correct unknown photometric variations in an unknown region from the known adjacent regions. A method is proposed, called Linear Correction Mapping, which is capable of providing an affirmative answer under the circumstances previously characterised.
A training stage is required to determine the parameters of the model. The method for single-camera scenarios is extended to cover non-overlapping multi-camera architectures. To this end, only several image samples of the same object acquired by all of the cameras are required. Furthermore, both the light variations and the changes in the camera exposure settings are covered by the correction mapping. Every image correction method is unsuccessful when the image of the object to be corrected is overexposed or its signal-to-noise ratio is very low. Thus, the third question refers to the control of the acquisition process to obtain an optimal exposure under uncontrolled light conditions. A Camera Exposure Control method is proposed that is capable of holding a suitable exposure provided that the light variations can be accommodated within the dynamic range of the camera. Each of the proposed methods was evaluated individually. The methodology of the experiments consisted of first selecting some scenarios that cover the representative situations for which the methods are theoretically valid. Linear Correction Mapping was validated using three object re-identification applications (vehicles, faces and persons) based on the object colour distributions. Camera Exposure Control was tested in an outdoor parking scenario. In addition, several performance indicators were defined to objectively compare the results with other relevant state-of-the-art correction and auto-exposure methods. The results of the evaluation demonstrated that the proposed methods outperform the compared ones in most situations. Based on the obtained results, the answers to the above research questions are affirmative in limited circumstances; that is, the hypotheses concerning the prediction, the correction based on it, and the auto-exposure are feasible in the situations identified in the thesis, although they cannot be guaranteed in general.
Furthermore, the presented work raises new questions and scientific challenges, which are highlighted as future research work.
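The core of a linear correction mapping can be sketched as a least-squares fit of a 3x3 matrix that maps observed RGB values back to their reference appearance. This is a minimal sketch of the general idea only: the thesis method additionally exploits neighbouring regions and camera exposure parameters, and the training data, noise level and matrix below are invented for illustration.

```python
import numpy as np

def fit_linear_correction(src_colours, dst_colours):
    """Least-squares 3x3 matrix M such that src @ M approximates dst.
    Rows are RGB samples of the same surfaces under two conditions."""
    M, *_ = np.linalg.lstsq(src_colours, dst_colours, rcond=None)
    return M

def apply_correction(M, colours):
    return colours @ M

rng = np.random.default_rng(0)
reference = rng.uniform(0, 1, (200, 3))            # colours under reference light
true_M = np.array([[0.80, 0.05, 0.00],             # hypothetical illumination change
                   [0.00, 0.70, 0.05],
                   [0.05, 0.00, 0.90]])
observed = reference @ true_M + 0.01 * rng.standard_normal((200, 3))
M = fit_linear_correction(observed, reference)     # learn observed -> reference
corrected = apply_correction(M, observed)
err = np.abs(corrected - reference).mean()
print(err)   # mean absolute error after correction, near the noise floor
```

The training stage mentioned in the abstract corresponds to the `fit_linear_correction` step; at run time only the cheap matrix product is applied.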

Relevance: 100.00%

Abstract:

This thesis is focused on the study and development of electronic warfare (EW) and radar algorithms for real-time implementation. The arrival of radar, radio and navigation systems in the military sphere led to the development of technologies to fight them. The objective of EW systems is therefore the control of the electromagnetic spectrum. Signals intelligence (SIGINT) is one of the EW functions, whose mission is to detect, collect, analyze, classify and locate all kinds of electromagnetic emissions. Electronic intelligence (ELINT) is the SIGINT subsystem devoted to radar signals.
A real-time system is one whose correctness depends not only on the result provided but also on the time in which that result is obtained. Radar and EW systems must provide information as fast as possible on a continuous basis, so they can be defined as real-time systems. The introduction of real-time constraints implies a feedback process between the design of the algorithms and their hardware implementation. A real-time constraint consists of two parameters: the latency and the area of the implementation. All the algorithms in this thesis have been implemented on field programmable gate array (FPGA) platforms, which present a good trade-off among performance, cost, power consumption and reconfigurability. The first part of the thesis is devoted to the study of different key subsystems of an ELINT equipment: signal detection with channelized receivers, pulse parameter extraction, modulation classification for radar signals, and passive location algorithms. The discrete Fourier transform (DFT) is a nearly optimal detector and frequency estimator for narrow-band signals buried in white noise. The introduction of fast algorithms to calculate the DFT, known as the FFT, reduces the complexity and the processing time of the DFT computation. These properties have made the FFT one of the most conventional methods for narrow-band signal detection in real-time applications. An algorithm for real-time spectral analysis with user-defined bandwidth, instantaneous dynamic range and resolution is presented. The most characteristic parameters of a pulsed signal are its time of arrival (TOA) and pulse width (PW). The estimation of these basic parameters is a fundamental task in an ELINT equipment. A basic pulse parameter extractor (PPE) able to estimate all these parameters is designed and implemented.
The PPE may be used to perform a generic radar recognition process or an emitter location technique, and it can serve as the preprocessing part of an automatic modulation classifier (AMC). Modulation classification is a difficult task in a non-cooperative environment. An AMC consists of two parts: signal preprocessing and the classification algorithm itself. Feature-based algorithms obtain different characteristics, or features, of the input signals; once these features are extracted, the classification is carried out by processing them. A feature-based AMC for pulsed radar signals with real-time requirements is studied, designed and implemented. Emitter passive location techniques can be divided into two classes: triangulation systems, in which the emitter location is estimated from the intersection of the lines of bearing created from the estimated directions of arrival, and quadratic position-fixing systems, in which the position is estimated through the intersection of iso-time-difference-of-arrival (TDOA) or iso-frequency-difference-of-arrival (FDOA) quadratic surfaces. Although TDOA and FDOA estimation is only implemented through time-of-arrival and frequency differences, different algorithms for TDOA, FDOA and position estimation are studied and analyzed. The second part is dedicated to FIR filter design and implementation for two different radar applications: wideband phased arrays with true-time-delay (TTD) filters, and the range improvement of an operative radar with no hardware changes, to minimize costs. Wideband operation of phased arrays with phase shifters is unfeasible because time delays cannot be approximated by phase shifts. The presented solution is based on the substitution of the phase shifters by FIR discrete delay filters. The maximum range of a radar depends on the averaged signal-to-noise ratio (SNR) at the receiver. Among other factors, the SNR depends on the transmitted signal energy, that is, power times pulse width.
Any possible hardware change implies high costs. The proposed solution lies in the use of a signal processing technique known as pulse compression, which consists of introducing an internal modulation within the pulse width, decoupling range and resolution.
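The pulse-compression idea closing the abstract can be sketched with a linear-FM chirp and a matched filter: a long pulse carries high energy, while correlation against the known chirp compresses it to a mainlobe of width roughly 1/B, decoupling range (energy) from resolution (bandwidth). The pulse width, bandwidth, delay and noise level below are illustrative assumptions.

```python
import numpy as np

def lfm_pulse(pw, bandwidth, fs):
    """Linear-FM (chirp) pulse: duration pw, swept bandwidth, sample rate fs."""
    t = np.arange(int(pw * fs)) / fs
    k = bandwidth / pw                      # chirp rate
    return np.exp(1j * np.pi * k * t**2)

fs = 10e6
pw, B = 20e-6, 2e6                          # 20 us pulse, 2 MHz sweep
tx = lfm_pulse(pw, B, fs)                   # 200 samples

# received echo: the same pulse delayed, buried in complex noise
rng = np.random.default_rng(2)
rx = np.zeros(1024, complex)
delay = 300
rx[delay:delay + len(tx)] += tx
rx += 0.1 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))

# matched filter = correlation with the transmitted pulse
# (np.correlate conjugates its second argument for complex inputs)
mf = np.abs(np.correlate(rx, tx, mode='valid'))
peak = int(np.argmax(mf))
print(peak)   # recovers the delay; mainlobe width ~ fs/B = 5 samples, not 200
```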

Relevance: 100.00%

Abstract:

The ability to accurately observe the Earth's carbon cycles from space gives scientists an important tool to analyze climate change. Current space-borne Integrated-Path Differential Absorption (IPDA) lidar concepts have the potential to meet this need. They are mainly based on the pulsed time-of-flight principle, in which two high-energy pulses of different wavelengths interrogate the atmosphere for its transmission properties and are backscattered by the ground. In this paper, feasibility study results of a Pseudo-Random Single Photon Counting (PRSPC) IPDA lidar are reported. The proposed approach replaces the high-energy pulsed source (e.g. a solid-state laser) with a semiconductor laser in CW operation with a similar average power of a few watts, benefiting from better efficiency and reliability. The autocorrelation property of a Pseudo-Random Binary Sequence (PRBS) and temporal shifting of the codes can be exploited to transmit both wavelengths simultaneously, avoiding the beam misalignment problem experienced by pulsed techniques. The envelope signal-to-noise ratio has been analyzed, and various system parameters have been selected. By restricting the telescope's field of view, the dominant noise source of ambient light can be suppressed, and together with a low-noise single photon counting detector, a retrieval precision of 1.5 ppm over 50 km of along-track averaging could be attained. We also describe preliminary experimental results involving a negative-feedback Indium Gallium Arsenide (InGaAs) single photon avalanche photodiode and a low-power Distributed Feedback laser diode modulated with a PRBS-driven acousto-optic modulator. The results demonstrate that higher detector saturation count rates will be needed for future space-borne missions, but the measurement linearity and precision should meet the stringent requirements set out by future Earth-observing missions.
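The PRBS autocorrelation property the abstract relies on is easy to demonstrate: a maximal-length sequence mapped to ±1 has a circular autocorrelation of N at zero lag and exactly -1 everywhere else, so temporally shifted copies of the same code remain separable after correlation. The 7-stage generator below is a generic PRBS7-style sketch, not the paper's actual code length or modulator settings.

```python
import numpy as np

def prbs7(nbits, seed=0b1111111):
    """m-sequence of period 127 from a 7-stage Fibonacci LFSR
    (a standard PRBS7-style recurrence: new bit = oldest XOR second-oldest)."""
    reg = [(seed >> i) & 1 for i in range(7)]   # reg[0] oldest ... reg[6] newest
    out = []
    for _ in range(nbits):
        out.append(reg[0])
        reg = reg[1:] + [reg[0] ^ reg[1]]
    return np.array(out)

code = 2.0 * prbs7(127) - 1.0                   # map {0,1} -> {-1,+1}
# circular autocorrelation: peak of 127 at zero lag, exactly -1 at all other
# lags, which is what lets two shifted copies of the code share the channel
acf = np.array([np.dot(code, np.roll(code, k)) for k in range(127)])
print(acf[0], acf[1:].max())
```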

Relevance: 100.00%

Abstract:

LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual values of those components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and the errors between the predictions and the actual values are then logarithmically quantised. The main advantage of LHE is that, although it is capable of low-bit-rate encoding with high-quality results in terms of peak signal-to-noise ratio (PSNR) and both full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator) image quality metrics, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, in which the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG 2000, while being more computationally efficient.
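The predict-then-logarithmically-quantise step can be illustrated with a toy: predict each pixel from a neighbour and snap the prediction error to a few geometrically spaced "hop" magnitudes, so small errors get fine steps and large errors get coarse ones. The left-neighbour predictor, the hop values and the open-loop decoding below are simplifying assumptions for illustration, not LHE's actual predictor or hop tables (a real codec predicts from decoded pixels).

```python
import numpy as np

def predict_left(row):
    """Toy predictor: each pixel predicted by its left neighbour."""
    pred = np.empty_like(row)
    pred[0] = 128
    pred[1:] = row[:-1]
    return pred

def log_quantise(err, hops=(0, 4, 9, 20, 45)):
    """Map each prediction error to the nearest of a few geometrically
    spaced hop magnitudes, keeping the sign: a toy version of a
    logarithmic (Weber-Fechner) quantiser."""
    hops = np.asarray(hops, float)
    idx = np.abs(np.abs(err)[:, None] - hops).argmin(axis=1)
    return np.sign(err) * hops[idx]

row = np.array([100, 103, 101, 140, 141, 90, 92, 92], float)
pred = predict_left(row)
err = row - pred
decoded = pred + log_quantise(err)
print(np.abs(decoded - row).max())   # reconstruction error bounded by the hop spacing
```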

Relevância:

100.00%

Publicador:

Resumo:

We present a multichannel tomographic technique to detect fluorescent objects embedded in thick (6.4 cm) tissue-like turbid media using early-arriving photons. The experiments use picosecond laser pulses and a streak camera with single photon counting capability to provide short time resolution and high signal-to-noise ratio. The tomographic algorithm is based on the Laplace transform of an analytical diffusion approximation of the photon migration process and provides excellent agreement between the actual positions of the fluorescent objects and the experimental estimates. Submillimeter localization accuracy and 4- to 5-mm resolution are demonstrated. Moreover, objects can be accurately localized when fluorescence background is present. The results show the feasibility of using early-arriving photons to image fluorescent objects embedded in a turbid medium and its potential in clinical applications such as breast tumor detection.
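The role of the Laplace transform in the reconstruction above can be illustrated with a toy computation (the temporal point-spread function shape and the value of s below are invented for the sketch, not the authors' diffusion model): for s > 0 the transform weights early-arriving photons exponentially more than diffuse late light, which is why it pairs naturally with early-photon detection.

```python
import numpy as np

def laplace_transform(t: np.ndarray, signal: np.ndarray, s: float) -> float:
    """Numerical Laplace transform L(s) = integral of signal(t) * exp(-s*t) dt,
    for a uniformly sampled signal."""
    dt = t[1] - t[0]
    return float(np.sum(signal * np.exp(-s * t)) * dt)

# Toy diffusion-like temporal point-spread function (arbitrary units, t in ns).
t = np.linspace(1e-3, 5.0, 2000)
tpsf = t ** -1.5 * np.exp(-1.0 / t - t / 2.0)

total = laplace_transform(t, tpsf, s=0.0)   # all detected photons (s = 0)
early = laplace_transform(t, tpsf, s=5.0)   # early-photon-weighted datum (s > 0)
```

Increasing s trades signal level for sensitivity to the least-scattered photons, which carry the sharpest spatial information about embedded objects.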

Relevância:

100.00%

Publicador:

Resumo:

Magnetic resonance microscopy (MRM) theoretically provides the spatial resolution and signal-to-noise ratio needed to resolve neuritic plaques, the neuropathological hallmark of Alzheimer's disease (AD). Two previously unexplored MR contrast parameters, T2* and diffusion, are tested for plaque-specific contrast-to-noise. Autopsy specimens from nondemented controls (n = 3) and patients with AD (n = 5) were used. Three-dimensional T2* and diffusion MR images with voxel sizes ranging from 3 × 10⁻³ mm³ to 5.9 × 10⁻⁵ mm³ were acquired. After imaging, specimens were cut and stained with a microwave king silver stain to demonstrate neuritic plaques. In controls, the alveus, fimbria, pyramidal cell layer, hippocampal sulcus, and granule cell layer were detected by either T2* or diffusion contrast. These structures were used as landmarks when correlating MRMs with histological sections. At a voxel resolution of 5.9 × 10⁻⁵ mm³, neuritic plaques could be detected by T2*. The neuritic plaques emerged as black, spherical elements on T2* MRMs and could be distinguished from vessels, seen only in cross-section, when presented in three dimensions. Here we provide MR images of neuritic plaques in vitro. The MRM results reported provide a new direction for applying this technology in vivo. Clearly, the ability to detect and follow the early progression of amyloid-positive brain lesions will greatly aid and simplify the many possible pharmacological interventions in AD.

Relevância:

100.00%

Publicador:

Resumo:

A theory is provided for the detection efficiency of diffuse light whose frequency is modulated by an acoustical wave. We derive expressions for the speckle pattern of the modulated light, as well as an expression for the signal-to-noise ratio for the detector. The aim is to develop a new imaging technology for detection of tumors in humans. The acoustic wave is focused into a small geometrical volume, which provides the spatial resolution for the imaging. The wavelength of the light wave can be selected to provide information regarding the kind of tumor.

Relevância:

100.00%

Publicador:

Resumo:

In an attempt to improve behavioral memory, we devised a strategy to amplify the signal-to-noise ratio of the cAMP pathway, which plays a central role in hippocampal synaptic plasticity and behavioral memory. Multiple high-frequency trains of electrical stimulation induce long-lasting long-term potentiation, a form of synaptic strengthening in hippocampus that is greater in both magnitude and persistence than the short-lasting long-term potentiation generated by a single tetanic train. Studies using pharmacological inhibitors and genetic manipulations have shown that this difference in response depends on the activity of cAMP-dependent protein kinase A. Genetic studies have also indicated that protein kinase A and one of its target transcription factors, cAMP response element binding protein, are important in memory in vivo. These findings suggested that amplification of signals through the cAMP pathway might lower the threshold for generating long-lasting long-term potentiation and increase behavioral memory. We therefore examined the biochemical, physiological, and behavioral effects in mice of partial inhibition of a hippocampal cAMP phosphodiesterase. Concentrations of a type IV-specific phosphodiesterase inhibitor, rolipram, which had no significant effect on basal cAMP concentration, increased the cAMP response of hippocampal slices to stimulation with forskolin and induced persistent long-term potentiation in CA1 after a single tetanic train. In both young and aged mice, rolipram treatment before training increased long- but not short-term retention in freezing to context, a hippocampus-dependent memory task.

Relevância:

100.00%

Publicador:

Resumo:

We studied the global and local ℳ–Z relation based on the first data available from the CALIFA survey (150 galaxies). This survey provides integral field spectroscopy of the complete optical extent of each galaxy (up to 2–3 effective radii), with a resolution high enough to separate individual H II regions and/or aggregations. About 3000 individual H II regions have been detected. The spectra cover the wavelength range between [O II] λ3727 and [S II] λ6731, with a sufficient signal-to-noise ratio to derive the oxygen abundance and star-formation rate associated with each region. In addition, we computed the integrated and spatially resolved stellar masses (and surface densities) based on SDSS photometric data. We explore the relations between the stellar mass, oxygen abundance, and star-formation rate using this dataset. We derive a tight relation between the integrated stellar mass and the gas-phase abundance, with a dispersion lower than the one already reported in the literature (σ_Δlog(O/H) = 0.07 dex). Indeed, this dispersion is only slightly higher than the typical error derived for our oxygen abundances. However, we found no secondary relation with the star-formation rate other than the one induced by the primary relation of this quantity with the stellar mass. The analysis for our sample of ~3000 individual H II regions confirms (i) a local mass-metallicity relation and (ii) the lack of a secondary relation with the star-formation rate. The same analysis was performed, with similar results, for the specific star-formation rate. Our results agree with the scenario in which gas recycling in galaxies, both locally and globally, is much faster than other typical timescales, such as that of gas accretion by inflow and/or metal loss due to outflows. In essence, late-type/disk-dominated galaxies seem to be in a quasi-steady state, with a behavior similar to the one expected from an instantaneous recycling/closed-box model.

Relevância:

100.00%

Publicador:

Resumo:

The practical application of optical antennas in detection devices strongly depends on their ability to produce an acceptable signal-to-noise ratio for the given task. It is known that, due to the intrinsic problems arising from their sub-wavelength dimensions, optical antennas produce very small signals. The quality of these signals depends on the transduction mechanism involved. The contribution of the different types of noise should be adapted to the transducer and to the signal-extraction regime. Once the noise is evaluated and measured, the specific detectivity, D*, becomes the parameter of interest when comparing the performance of antenna-coupled devices with other detectors. However, this parameter involves some magnitudes that can be defined in several ways for optical antennas. In this contribution we are interested in the evaluation and comparison of D* values for several bolometric optical antennas working in the infrared and involving two materials. At the same time, some material and geometrical parameters involved in the definition of noise and detectivity will be discussed to analyze the suitability of D* to properly account for the performance of optical antennas.
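For reference, the standard radiometric definition behind the comparison is the following; the abstract's point is that the magnitudes entering it (notably the collection area) can be defined in several ways for a sub-wavelength antenna:

```latex
D^{*} = \frac{\sqrt{A_{d}\,\Delta f}}{\mathrm{NEP}},
\qquad
\mathrm{NEP} = \frac{i_{n}}{\Re},
```

where A_d is the detector (or antenna collection) area, Δf the noise bandwidth, NEP the noise-equivalent power, i_n the rms noise current, and ℜ the responsivity; D* is commonly quoted in cm·Hz^{1/2}/W (Jones).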