955 results for signal-to-noise-ratio (SNR)
Abstract:
Because the metro network market is very cost sensitive, directly modulated schemes are attractive. In this paper, a CWDM (Coarse Wavelength Division Multiplexing) system is studied in detail by means of an optical communication system design software tool; a detailed study of the modulation current shape (exponential, sine and Gaussian) for 2.5 Gb/s CWDM Metropolitan Area Networks is performed to evaluate its tolerance to linear impairments such as signal-to-noise-ratio degradation and dispersion. Point-to-point links are investigated and optimum design parameters are obtained. Extensive sets of simulation results show that some of these pulse shapes are more tolerant to dispersion than conventional Gaussian pulse shapes. In order to achieve a low Bit Error Rate (BER), different types of optical transmitters are considered, including strongly adiabatic and transient-chirp-dominated Directly Modulated Lasers (DMLs). We have used fibers with different dispersion characteristics, showing that the system performance depends strongly on the chosen DML-fiber combination.
Abstract:
This work presents results for the three-dimensional displacement field at Tenerife Island calculated from campaign GPS and ascending and descending ENVISAT DInSAR interferograms. The goal of this work is to provide an example of the flexibility of the technique by fusing new varieties of geodetic data, and to observe surface deformations and study precursors of potential activity in volcanic regions. Interferometric processing of the ENVISAT data was performed with the GAMMA software. All possible combinations were used to create interferograms, and stacking was then used to increase the signal-to-noise ratio. Decorrelated areas were widely observed, particularly for interferograms with large perpendicular baselines and long time spans. A tropospheric signal was also observed, which significantly complicated the interpretation. A subsidence signal was observed in the NW part of the island and around Mount Teide, and agreed in some regions with the campaign GPS data. It is expected that the technique will provide better results when more high-quality DInSAR and GPS data are available.
Abstract:
Since the beginning of digital video compression, both the uncompressed video fed to the encoder and the uncompressed decoded output stream have used 8 bits to represent each sample, regardless of resolution, chroma subsampling scheme, etc. Likewise, video coding standards require encoders to work internally with 8 bits of precision when operating on samples that have not yet been transformed to the frequency domain. However, the H.264 standard allows video to be coded with more than 8 bits per sample in some of its professionally oriented profiles. When these profiles are used, all operations on samples still in the spatial domain are performed with the same precision as the input video. This increase in internal precision has the potential to allow more precise predictions, reducing the residual to be encoded and thus increasing coding efficiency for a given bitrate. The goal of this Project is to study, using the PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity) objective video quality metrics, the effects on coding efficiency and performance of an H.264 10-bit coding/decoding chain compared with a traditional 8-bit chain.
To achieve this goal, the open-source x264 encoder is used, which can encode video with 8 and 10 bits per sample using the H.264 High, High 10, High 4:2:2 and High 4:4:4 Predictive profiles. Given that no proper tools exist for computing PSNR and SSIM values of video with more than 8 bits per sample and chroma subsampling schemes other than 4:2:0, an analysis application written in the C programming language is developed as part of this Project. This application computes both metrics from two uncompressed video files in YUV or Y4M format.
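As a rough illustration of what such an analysis tool computes, the sketch below shows PSNR and a simplified single-window SSIM for one plane of a high-bit-depth frame. It is written in Python with NumPy rather than C, the plane is assumed to be already loaded as an array, and the standard SSIM would additionally average over local (e.g. Gaussian-weighted) windows.

```python
import numpy as np

def psnr(ref, dist, bit_depth=10):
    """Peak signal-to-noise ratio in dB; the peak value depends on the bit depth."""
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    mse = np.mean((ref - dist) ** 2)
    if mse == 0.0:
        return float("inf")
    peak = (1 << bit_depth) - 1
    return 10.0 * np.log10(peak * peak / mse)

def ssim_global(ref, dist, bit_depth=10):
    """Simplified SSIM computed over the whole plane as a single window."""
    L = (1 << bit_depth) - 1
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    x, y = ref.astype(np.float64), dist.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx * mx + my * my + c1) * (vx + vy + c2))
```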
Abstract:
In this paper, the authors provide a methodology to design nonparametric permutation tests and, in particular, nonparametric rank tests for applications in detection. In the first part of the paper, the authors develop the optimization theory of both permutation and rank tests in the Neyman-Pearson sense; in the second part, they carry out a comparative performance analysis of the permutation and rank tests (detectors) against parametric ones in radar applications. First, a brief review of some contributions on nonparametric tests is given. Then, the optimum permutation and rank tests are derived. Finally, a performance analysis is carried out by Monte Carlo simulations for the corresponding detectors, and the results are shown as curves of detection probability versus signal-to-noise ratio.
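To make that evaluation procedure concrete, the following sketch estimates the detection probability of a simple rank-sum (Mann-Whitney-type) detector by Monte Carlo simulation. It is an illustrative stand-in for the detectors in the paper: the Gaussian noise model, square-law detection, sample sizes and false-alarm rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_sum_statistic(test, reference):
    """Count how many reference (noise-only) samples each test sample exceeds."""
    return np.sum(test[:, None] > reference[None, :])

def detection_probability(snr_db, m=8, n=32, trials=5000, pfa=1e-2):
    # Calibrate the threshold on noise-only data; being rank-based, the threshold
    # does not depend on the (unknown) noise power.
    null_stats = np.array([
        rank_sum_statistic(rng.normal(size=m) ** 2, rng.normal(size=n) ** 2)
        for _ in range(trials)
    ])
    threshold = np.quantile(null_stats, 1.0 - pfa)
    # Estimate Pd with a signal of the requested SNR added to the test cells.
    amp = np.sqrt(10.0 ** (snr_db / 10.0))
    hits = 0
    for _ in range(trials):
        test = (amp + rng.normal(size=m)) ** 2   # signal-plus-noise cells (square-law detected)
        ref = rng.normal(size=n) ** 2            # noise-only reference cells
        hits += rank_sum_statistic(test, ref) > threshold
    return hits / trials

for snr in (0, 3, 6, 9):
    print(snr, detection_probability(snr))
```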
Abstract:
A new method of light modulation is reported. This method is based on the electro-optical properties of nematic materials and on the use of a new wedge structure. The advantages of this structure are the possibility of modulating nonpolarized light and the improved signal-to-noise ratio. The highest modulating frequency obtained is 25 kHz.
Abstract:
Purpose: A fully three-dimensional (3D), massively parallelizable list-mode ordered-subsets expectation-maximization (LM-OSEM) reconstruction algorithm has been developed for high-resolution PET cameras. System response probabilities are calculated online from a set of parameters derived from Monte Carlo simulations. The shape of the system response for a given line of response (LOR) has been shown to be asymmetrical around the LOR. This work has focused on the development of efficient region-search techniques to sample the system response probabilities, suitable for asymmetric kernel models, including elliptical Gaussian models that allow for high accuracy and high parallelization efficiency. The novel region-search scheme using variable kernel models is applied in the proposed PET reconstruction algorithm. Methods: A novel region-search technique has been used to sample the probability density function over a small dynamic subset of the field of view that constitutes the region of response (ROR). The ROR is identified around the LOR by searching for any voxel within a dynamically calculated contour. The contour condition is currently defined as a fixed threshold over the posterior probability, and arbitrary kernel models can be applied using a numerical approach. The processing of the LORs is distributed in batches among the available computing devices; individual LORs are then processed in different processing units. In this way, both multicore and many-core processing units can be exploited efficiently. Tests have been conducted with probability models that take into account noncollinearity, positron range, and crystal penetration effects, which produce tubes of response with varying elliptical sections whose axes are a function of the crystal thickness and the angle of incidence of the given LOR. The algorithm treats the probability model as a 3D scalar field defined within a reference system aligned with the ideal LOR. Results: The new technique provides superior image quality, in terms of signal-to-noise ratio, compared with the histogram-mode method based on precomputed system matrices available for a commercial small-animal scanner. Reconstruction times can be kept low with the use of multicore and many-core architectures, including multiple graphics processing units. Conclusions: A highly parallelizable LM reconstruction method has been proposed, based on Monte Carlo simulations and new parallelization techniques, aimed at improving the reconstruction speed and the image signal-to-noise ratio of a given OSEM algorithm. The method has been validated using simulated and real phantoms. A special advantage of the new method is the possibility of dynamically defining the cut-off threshold over the calculated probabilities, allowing direct control of the trade-off between speed and quality during reconstruction.
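For orientation, the core list-mode OSEM update that such a reconstruction is built around can be sketched as follows. This is a deliberately minimal, geometry-agnostic version: the per-event system-matrix rows would come from the kernel/ROR sampling described above, and the sensitivity handling, subset assignment and GPU batching are simplified assumptions.

```python
import numpy as np

def lm_osem(event_rows, sensitivity, n_voxels, n_subsets=4, n_iters=5):
    """Toy list-mode OSEM.

    event_rows: one (voxel_indices, weights) pair per detected coincidence,
                i.e. the nonzero system-matrix entries a_i for that event.
    sensitivity: per-voxel sum of system-matrix elements over all possible LORs
                 (here simply divided by the number of subsets).
    """
    x = np.ones(n_voxels)
    for _ in range(n_iters):
        for s in range(n_subsets):
            back = np.zeros(n_voxels)
            for idx, w in event_rows[s::n_subsets]:   # events assigned to this subset
                fwd = float(np.dot(w, x[idx]))        # forward projection a_i . x
                if fwd > 0.0:
                    back[idx] += w / fwd              # backproject the data/model ratio
            x = x * back / np.maximum(sensitivity / n_subsets, 1e-12)
    return x
```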
Abstract:
Optical communications receivers using wavelet signal processing are proposed in this paper for dense wavelength-division multiplexed (DWDM) systems and mode-division multiplexed (MDM) transmissions. The optical signal-to-noise ratio (OSNR) required to demodulate the polarization-division multiplexed quadrature phase-shift keying (PDM-QPSK) modulation format is relaxed by the wavelet denoising process. This procedure improves the bit error rate (BER) performance and increases the transmission distance in DWDM systems. Additionally, the wavelet-based design relies on signal decomposition using time-limited basis functions, which reduces the computational cost of the digital signal processing (DSP) module. For MDM systems, a new scheme for encoding data bits based on wavelets is presented to minimize mode coupling in few-mode fibers (FMF) and multimode fibers (MMF). Shifted Prolate Wave Spheroidal (SPWS) functions are proposed to reduce the modal interference.
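A minimal sketch of the wavelet-denoising idea is shown below, using PyWavelets and a standard soft threshold; the actual receiver DSP (PDM-QPSK demodulation, equalization, choice of basis) is beyond this illustration, and the wavelet, decomposition level and threshold rule are assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(samples, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising of a 1-D sampled signal."""
    coeffs = pywt.wavedec(samples, wavelet, level=level)
    # Robust noise estimate from the finest detail band, then the universal threshold.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(samples)))
    shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[: len(samples)]
```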
Abstract:
Electric probes are widely employed for plasma diagnostics. This dissertation concerns the operation of collecting and emissive Langmuir probes in low-density cold plasmas. The study focuses on the determination of the plasma potential, Vsp, by means of the floating potential of emissive probes. This technique consists of measuring the probe potential corresponding to zero net probe current, the so-called floating potential, VF. This potential moves towards the plasma potential as the thermionic electron emission increases, until it saturates near Vsp. Experiments carried out in the plasma plume of an ion thruster and in a glow-discharge plasma show that the thermionic electron current of the emissive Langmuir probe is higher than the collected electron current for a probe biased below Vsp, which is inconsistent with the traditionally accepted theory. To investigate these results, a parameter R is introduced as the ratio between the emitted and the collected electron currents. This parameter, which is related to the difference VF - Vsp, is also useful for describing the operation modes of the emissive Langmuir probe (weak, strong and beyond strong). The experimental results give R > 1, in contradiction with the theory; this is resolved by modifying the theory for emissive probes through the introduction of an effective electron population. With this new electron group, the new model for the total probe current agrees with the experimental data. The origin of this electron group remains an open question, but it might arise from a new potential structure near the emissive probe when it operates in the strong emission regime. A simple one-dimensional model with a potential minimum near the probe surface is discussed for strongly emitting probes. The results indicate that this potential structure appears for very high probe temperatures and that the potential well may reduce the population of emitted electrons reaching the plasma bulk, thereby avoiding any possible perturbation of the plasma.
The experimental issues involved in the floating-potential method are also studied, such as the different techniques for obtaining VF, the signal-to-noise ratio, the signal coupling of the I-V curve measurement system, and the experimental evidence of the probe operation modes. These empirical observations concern all aspects of probe operation: electron collection, the floating potential, the accuracy of the I-V curves, and electron emission. The last issue is also investigated in this dissertation, because super emission takes place in the strong emission regime: in this operation mode, the experimental results indicate that the thermionic electron currents can be higher than those predicted by the classical Richardson-Dushman equation. Finally, plasma diagnosis using electric probes in the presence of dust grains (dusty plasmas) in low-density cold plasmas is also addressed. The application of the floating-potential technique of the emissive probe in a non-conventional complex plasma is investigated numerically; the results indicate that the floating potential of the emissive probe may be shifted for high dust densities or large dust particles.
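For reference, the classical Richardson-Dushman law mentioned above can be written, in one common form (a standard textbook expression, not quoted from the dissertation), as

```latex
J_{\mathrm{th}} = A_{\mathrm{R}}\,T^{2}\exp\!\left(-\frac{W}{k_{\mathrm{B}}T}\right),
\qquad
A_{\mathrm{R}} = \frac{4\pi m_{e}\,e\,k_{\mathrm{B}}^{2}}{h^{3}} \approx 1.2\times10^{6}\ \mathrm{A\,m^{-2}\,K^{-2}},
```

where T is the probe temperature and W the work function of the probe material; the super emission observed in the strong emission regime corresponds to measured thermionic currents above this prediction.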
Abstract:
This thesis discusses correction methods that compensate for variations in lighting conditions in colour image and video applications. These variations often cause Computer Vision algorithms that use colour features to describe objects to fail. Three research questions are formulated that define the framework of the thesis. The first question addresses the similarities in photometric behaviour between images of adjacent surfaces. Based on an analysis of the image formation model in dynamic situations, this thesis proposes a model that predicts the colour variations of a region of an image from the variations of the surrounding regions. The proposed model is called the Quotient Relational Model of Regions. This model is valid when the light sources illuminate all of the surfaces included in the model; these surfaces are close to each other, have similar orientations, and are primarily Lambertian. Under these circumstances, the photometric responses of the regions are related by a linear combination. No previous work proposing such a relational model was found in the scientific literature. The second question examines whether those similarities can be used to correct unknown photometric variations in an unknown region from known adjacent regions. A method called Linear Correction Mapping is proposed, which provides an affirmative answer under the circumstances characterised above. A training stage is required to determine the parameters of the model. The single-camera method is extended to non-overlapping multi-camera architectures; to this end, only a few image samples of the same object acquired by all of the cameras are required. Furthermore, both illumination variations and changes in the camera exposure settings are covered by the correction mapping. Every image correction method fails when the image of the object to be corrected is overexposed or its signal-to-noise ratio is very low. The third question therefore concerns controlling the acquisition process to obtain an optimal exposure under uncontrolled lighting conditions. A Camera Exposure Control method is proposed that maintains a suitable exposure provided that the illumination variations fall within the dynamic range of the camera. Each of the proposed methods was evaluated individually. The experimental methodology consisted of first selecting scenarios covering representative situations for which the methods are theoretically valid. Linear Correction Mapping was validated in three object re-identification applications (vehicles, faces and persons) based on object colour distributions.
Camera Exposure Control was tested in an outdoor parking scenario. In addition, several performance indicators were defined to compare the results objectively with other relevant state-of-the-art correction and auto-exposure methods. The evaluation showed that the proposed methods outperform the compared ones in most situations. Based on these results, the answers to the research questions are affirmative in limited circumstances; that is, the hypotheses concerning prediction, correction based on it, and auto-exposure are feasible in the situations identified in the thesis, but cannot be guaranteed in general. Furthermore, the work raises new questions and scientific challenges, which are highlighted as future research work.
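As a toy illustration of the kind of linear relation exploited by Linear Correction Mapping, the sketch below fits, by least squares, a linear combination that predicts the mean colour response of a target region from the responses of adjacent reference regions over training frames. The per-region mean features, the added offset term and all names are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def fit_linear_mapping(reference_train, target_train):
    """reference_train: (n_frames, n_regions) mean responses of the known regions.
    target_train: (n_frames,) mean response of the region to be corrected."""
    A = np.column_stack([reference_train, np.ones(len(target_train))])
    coeffs, *_ = np.linalg.lstsq(A, target_train, rcond=None)
    return coeffs

def predict_target(reference_now, coeffs):
    """Predicted response of the target region under the current illumination."""
    return float(np.dot(np.append(reference_now, 1.0), coeffs))
```

The predicted value could then be used to normalise the observed target region towards a reference illumination before extracting colour features for re-identification.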
Abstract:
The ability to accurately observe the Earth's carbon cycles from space gives scientists an important tool to analyze climate change. Current space-borne Integrated-Path Differential Absorption (IPDA) lidar concepts have the potential to meet this need. They are mainly based on the pulsed time-of-flight principle, in which two high-energy pulses of different wavelengths interrogate the atmosphere for its transmission properties and are backscattered by the ground. In this paper, feasibility study results for a Pseudo-Random Single Photon Counting (PRSPC) IPDA lidar are reported. The proposed approach replaces the high-energy pulsed source (e.g. a solid-state laser) with a semiconductor laser in CW operation with a similar average power of a few watts, benefiting from better efficiency and reliability. The autocorrelation property of the Pseudo-Random Binary Sequence (PRBS) and temporal shifting of the codes can be used to transmit both wavelengths simultaneously, avoiding the beam misalignment problem experienced by pulsed techniques. The envelope signal-to-noise ratio has been analyzed, and various system parameters have been selected. By restricting the telescope's field of view, the dominant noise source of ambient light can be suppressed, and together with a low-noise single-photon-counting detector, a retrieval precision of 1.5 ppm over 50 km of along-track averaging could be attained. We also describe preliminary experimental results involving a negative-feedback Indium Gallium Arsenide (InGaAs) single-photon avalanche photodiode and a low-power Distributed Feedback laser diode modulated with a PRBS-driven acousto-optic modulator. The results demonstrate that higher detector saturation count rates will be needed for future space-borne missions, but the measurement linearity and precision should meet the stringent requirements set out by future Earth-observing missions.
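The following sketch illustrates the correlation principle behind such a PRSPC approach: a maximal-length PRBS modulates the transmitted intensity, the detector records Poisson-distributed photon counts, and circular cross-correlation with a bipolar replica of the code recovers the delay of the ground return. The LFSR taps, code length, count rates and single-return model are illustrative assumptions, not the reported system parameters.

```python
import numpy as np

def mls(nbits=10, taps=(10, 7)):
    """Maximal-length {0,1} sequence of length 2**nbits - 1 from a Fibonacci LFSR."""
    state = [1] * nbits
    out = []
    for _ in range(2 ** nbits - 1):
        out.append(state[-1])
        feedback = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [feedback] + state[:-1]
    return np.array(out)

rng = np.random.default_rng(1)
code = mls()                                   # transmitted on/off chip pattern, N = 1023
N = len(code)
echo = np.zeros(N)
echo[137] = 5.0                                # assumed single ground return: 5 counts/chip
# Received mean count rate: constant background plus the delayed, scaled code (circular convolution).
rate = 0.2 + np.real(np.fft.ifft(np.fft.fft(code) * np.fft.fft(echo)))
counts = rng.poisson(rate)                     # single-photon-counting statistics
ref = 2.0 * code - 1.0                         # bipolar replica suppresses the constant background
corr = np.real(np.fft.ifft(np.fft.fft(counts) * np.conj(np.fft.fft(ref))))
print(int(np.argmax(corr)))                    # ~137: recovered chip delay of the return
```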
Abstract:
LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual values of those components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and the error between the predictions and the actual values is then logarithmically quantised. The main advantage of LHE is that, although it is capable of low bit-rate encoding with high-quality results in terms of peak signal-to-noise ratio (PSNR) and both full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator) image quality metrics, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, where the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG 2000, while being more computationally efficient.
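The core prediction-plus-logarithmic-quantisation step can be sketched as follows; the hop set, the causal predictor and the 8-bit range are illustrative assumptions rather than the published LHE tables.

```python
import numpy as np

HOPS = np.array([-64, -16, -4, 0, 4, 16, 64])   # assumed logarithmically spaced hop values

def encode_plane(plane):
    """Encode one 8-bit colour plane into per-pixel hop indices (toy LHE-style loop)."""
    plane = plane.astype(np.int32)
    h, w = plane.shape
    recon = np.zeros_like(plane)                 # decoder-side reconstruction used for prediction
    codes = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            left = recon[y, x - 1] if x > 0 else 128
            top = recon[y - 1, x] if y > 0 else 128
            pred = (left + top) // 2             # causal predictor from surrounding pixels
            err = plane[y, x] - pred
            k = int(np.argmin(np.abs(HOPS - err)))   # nearest hop on the logarithmic scale
            codes[y, x] = k
            recon[y, x] = np.clip(pred + HOPS[k], 0, 255)
    return codes, recon
```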
Fluorescence tomographic imaging in turbid media using early-arriving photons and Laplace transforms
Abstract:
We present a multichannel tomographic technique to detect fluorescent objects embedded in thick (6.4 cm) tissue-like turbid media using early-arriving photons. The experiments use picosecond laser pulses and a streak camera with single photon counting capability to provide short time resolution and high signal-to-noise ratio. The tomographic algorithm is based on the Laplace transform of an analytical diffusion approximation of the photon migration process and provides excellent agreement between the actual positions of the fluorescent objects and the experimental estimates. Submillimeter localization accuracy and 4- to 5-mm resolution are demonstrated. Moreover, objects can be accurately localized when fluorescence background is present. The results show the feasibility of using early-arriving photons to image fluorescent objects embedded in a turbid medium and its potential in clinical applications such as breast tumor detection.
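For context, a Laplace-transform approach of this kind typically starts from the infinite-medium diffusion Green's function for photon migration; in one standard textbook form (not quoted from the paper), with D the diffusion coefficient, \mu_a the absorption coefficient and c the speed of light in the medium,

```latex
\phi(r,t) = \frac{1}{(4\pi D c t)^{3/2}}
            \exp\!\left(-\frac{r^{2}}{4 D c t} - \mu_a c t\right),
\qquad
\tilde{\phi}(r,s) = \int_{0}^{\infty}\!\phi(r,t)\,e^{-st}\,dt
                  = \frac{1}{4\pi D c\, r}\,
                    \exp\!\left(-r\sqrt{\frac{s+\mu_a c}{D c}}\right),
```

so that large values of the Laplace variable s weight the early arrival times, i.e. the least-scattered photon paths that the tomographic algorithm exploits.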
Abstract:
Magnetic resonance microscopy (MRM) theoretically provides the spatial resolution and signal-to-noise ratio needed to resolve neuritic plaques, the neuropathological hallmark of Alzheimer’s disease (AD). Two previously unexplored MR contrast parameters, T2* and diffusion, are tested for plaque-specific contrast to noise. Autopsy specimens from nondemented controls (n = 3) and patients with AD (n = 5) were used. Three-dimensional T2* and diffusion MR images with voxel sizes ranging from 3 × 10⁻³ mm³ to 5.9 × 10⁻⁵ mm³ were acquired. After imaging, specimens were cut and stained with a microwave king silver stain to demonstrate neuritic plaques. From controls, the alveus, fimbria, pyramidal cell layer, hippocampal sulcus, and granule cell layer were detected by either T2* or diffusion contrast. These structures were used as landmarks when correlating MRMs with histological sections. At a voxel resolution of 5.9 × 10⁻⁵ mm³, neuritic plaques could be detected by T2*. The neuritic plaques emerged as black, spherical elements on T2* MRMs and could be distinguished from vessels only in cross-section when presented in three dimensions. Here we provide MR images of neuritic plaques in vitro. The MRM results reported provide a new direction for applying this technology in vivo. Clearly, the ability to detect and follow the early progression of amyloid-positive brain lesions will greatly aid and simplify the many possibilities to intervene pharmacologically in AD.
Abstract:
A theory is provided for the detection efficiency of diffuse light whose frequency is modulated by an acoustical wave. We derive expressions for the speckle pattern of the modulated light, as well as an expression for the signal-to-noise ratio for the detector. The aim is to develop a new imaging technology for detection of tumors in humans. The acoustic wave is focused into a small geometrical volume, which provides the spatial resolution for the imaging. The wavelength of the light wave can be selected to provide information regarding the kind of tumor.
Abstract:
In an attempt to improve behavioral memory, we devised a strategy to amplify the signal-to-noise ratio of the cAMP pathway, which plays a central role in hippocampal synaptic plasticity and behavioral memory. Multiple high-frequency trains of electrical stimulation induce long-lasting long-term potentiation, a form of synaptic strengthening in hippocampus that is greater in both magnitude and persistence than the short-lasting long-term potentiation generated by a single tetanic train. Studies using pharmacological inhibitors and genetic manipulations have shown that this difference in response depends on the activity of cAMP-dependent protein kinase A. Genetic studies have also indicated that protein kinase A and one of its target transcription factors, cAMP response element binding protein, are important in memory in vivo. These findings suggested that amplification of signals through the cAMP pathway might lower the threshold for generating long-lasting long-term potentiation and increase behavioral memory. We therefore examined the biochemical, physiological, and behavioral effects in mice of partial inhibition of a hippocampal cAMP phosphodiesterase. Concentrations of a type IV-specific phosphodiesterase inhibitor, rolipram, which had no significant effect on basal cAMP concentration, increased the cAMP response of hippocampal slices to stimulation with forskolin and induced persistent long-term potentiation in CA1 after a single tetanic train. In both young and aged mice, rolipram treatment before training increased long- but not short-term retention in freezing to context, a hippocampus-dependent memory task.