901 results for frequency-domain spectroscopy, photon migration, absorption, reduced scattering, Intralipid, temperature measurement
Abstract:
A gain-of-function R620W polymorphism in the PTPN22 gene, encoding the lymphoid tyrosine phosphatase LYP, has recently emerged as an important risk factor for human autoimmunity. Here we report that another missense substitution (R263Q) within the catalytic domain of LYP leads to reduced phosphatase activity. High-resolution structural analysis revealed the molecular basis for this loss of function. Furthermore, the Q263 variant conferred protection against human systemic lupus erythematosus, reinforcing the proposal that inhibition of LYP activity could be beneficial in human autoimmunity.
Abstract:
The aim of the present study was to investigate the effects of different speech tasks (recitation of prose (PR), alliteration (AR) and hexameter (HR) verses) and a control task (mental arithmetic (MA) with voicing of the result) on end-tidal CO2 (ET-CO2) and cerebral hemodynamics, i.e. total hemoglobin (tHb) and tissue oxygen saturation (StO2). tHb and StO2 were measured with a frequency-domain near-infrared spectrophotometer (ISS Inc., USA) and ET-CO2 with a gas analyzer (Nellcor N1000). Measurements were performed in 24 adult volunteers (11 female, 13 male; age range 22 to 64 years) during task performance in randomized order on 4 different days to avoid potential carry-over effects. Statistical analysis was applied to test differences between the baseline, 2 recitation and 5 recovery periods; the two brain hemispheres and the 4 tasks were tested separately. During the recitation tasks, StO2 decreased significantly (p < 0.05) during PR and AR in the right prefrontal cortex (PFC) and during AR and HR in the left PFC. tHb showed a significant decrease during HR in the right PFC and during PR, AR and HR in the left PFC. During the MA task, StO2 increased significantly. A significant decrease in ET-CO2 was found during all 4 tasks, with the smallest decrease during the MA task. In conclusion, we hypothesize that the observed changes in tHb and StO2 are mainly caused by altered breathing during the tasks: the resulting lowering of the blood CO2 content provoked a cerebral CO2 reaction, i.e. vasoconstriction of blood vessels due to decreased CO2 pressure and thereby a decrease in cerebral blood volume. Therefore, breathing should be monitored during brain studies involving speech when using functional near-infrared spectroscopy (fNIRS) to ensure correct interpretation of changes in hemodynamics and oxygenation.
Abstract:
BACKGROUND Sodium channel NaV1.5 underlies cardiac excitability and conduction. The last 3 residues of NaV1.5 (Ser-Ile-Val) constitute a PDZ domain-binding motif that interacts with PDZ proteins such as syntrophins and SAP97 at different locations within the cardiomyocyte, thus defining distinct pools of NaV1.5 multiprotein complexes. Here, we explored the in vivo and clinical impact of this motif through characterization of mutant mice and genetic screening of patients. METHODS AND RESULTS To investigate in vivo the regulatory role of this motif, we generated knock-in mice lacking the SIV domain (ΔSIV). ΔSIV mice displayed reduced NaV1.5 expression and sodium current (INa) specifically at the lateral myocyte membrane, whereas NaV1.5 expression and INa at the intercalated disks were unaffected. Optical mapping of ΔSIV hearts revealed that ventricular conduction velocity was preferentially decreased in the direction transverse to myocardial fiber orientation, leading to increased anisotropy of ventricular conduction. Internalization of wild-type and ΔSIV channels was unchanged in HEK293 cells. However, the proteasome inhibitor MG132 rescued ΔSIV INa, suggesting that the SIV motif is important for regulation of NaV1.5 degradation. A missense mutation within the SIV motif (p.V2016M) was identified in a patient with Brugada syndrome. The mutation decreased NaV1.5 cell surface expression and INa when expressed in HEK293 cells. CONCLUSIONS Our results demonstrate the in vivo significance of the PDZ domain-binding motif for the correct expression of NaV1.5 at the lateral cardiomyocyte membrane and underline the functional role of lateral NaV1.5 in ventricular conduction. Furthermore, we reveal the clinical relevance of the SIV motif in cardiac disease.
Abstract:
In situ and simultaneous measurement of the three most abundant isotopologues of methane using mid-infrared laser absorption spectroscopy is demonstrated. A field-deployable, autonomous platform is realized by coupling a compact quantum cascade laser absorption spectrometer (QCLAS) to a preconcentration unit, called trace gas extractor (TREX). This unit enhances CH4 mole fractions by a factor of up to 500 above ambient levels and quantitatively separates interfering trace gases such as N2O and CO2. The analytical precision of the QCLAS isotope measurement on the preconcentrated (750 ppm, parts-per-million, µmol mol−1) methane is 0.1 and 0.5 ‰ for δ13C- and δD-CH4 at 10 min averaging time. Based on repeated measurements of compressed air during a 2-week intercomparison campaign, the repeatability of the TREX–QCLAS was determined to be 0.19 and 1.9 ‰ for δ13C- and δD-CH4, respectively. In this intercomparison campaign the new in situ technique was compared to isotope-ratio mass spectrometry (IRMS) based on glass flask and bag sampling, and to real-time CH4 isotope analysis by two commercially available laser spectrometers. Both laser-based analyzers were limited to methane mole fraction and δ13C-CH4 analysis, and only one of them, a cavity ring-down spectrometer, was capable of delivering meaningful data on the isotopic composition. After correcting for scale offsets, the average differences between TREX–QCLAS data and bag/flask sampling–IRMS values are within the extended WMO compatibility goals of 0.2 and 5 ‰ for δ13C- and δD-CH4, respectively. This also demonstrates the potential to improve interlaboratory compatibility based on the analysis of a reference air sample with accurately determined isotopic composition.
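For readers unfamiliar with the δ notation used above, the conversion from raw isotope ratios is a one-line formula; a minimal sketch in Python, assuming VPDB and VSMOW reference ratios (the abstract does not state which scales were used):

```python
# Minimal sketch: delta values (per mil) from raw isotope ratios.
# The reference ratios below are assumed (VPDB for 13C/12C, VSMOW for D/H);
# the scales actually used in the study may differ.
R_VPDB_13C = 0.011180    # assumed 13C/12C reference ratio (VPDB)
R_VSMOW_D  = 0.00015576  # assumed D/H reference ratio (VSMOW)

def delta_permil(r_sample: float, r_reference: float) -> float:
    """delta = (R_sample / R_reference - 1) * 1000, in per mil."""
    return (r_sample / r_reference - 1.0) * 1000.0

# Example: a 13C/12C ratio typical of atmospheric methane.
print(delta_permil(0.010625, R_VPDB_13C))  # ~ -49.6 per mil
```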
Abstract:
We propose dual-domain filtering, an image-processing paradigm that couples spatial-domain with frequency-domain filtering. Our dual-domain filter removes artifacts such as the residual noise of other image-denoising methods and compression artifacts. Moreover, iterating the filter achieves state-of-the-art image-denoising results, but with a much simpler algorithm than competing approaches. The simplicity and versatility of the dual-domain filter make it an attractive tool for image processing.
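As a rough illustration of what coupling the two domains can look like (a simplified sketch, not the authors' published algorithm), one iteration may pair a spatial-domain estimate with frequency-domain shrinkage of the residual:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dual_domain_step(noisy, guide, sigma_s=2.0, thresh=0.1):
    """One illustrative dual-domain iteration: a spatial-domain low-pass
    estimate, then frequency-domain hard shrinkage of the residual."""
    base = gaussian_filter(guide, sigma_s)       # spatial-domain estimate
    residual = noisy - base                      # detail plus noise
    spec = np.fft.fft2(residual)
    spec[np.abs(spec) < thresh * np.abs(spec).max()] = 0.0  # shrink weak bins
    return base + np.real(np.fft.ifft2(spec))

# Iterating the filter, feeding each result back as the guide image:
img = np.random.rand(64, 64)
guide = img.copy()
for _ in range(3):
    guide = dual_domain_step(img, guide)
```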
Abstract:
A particle accelerator is any device that, using electromagnetic fields, imparts energy to charged particles (typically electrons or ionized atoms), accelerating and/or energizing them up to the level required for its purpose. The applications of particle accelerators are countless, ranging from the common TV CRT, through medical X-ray devices, to the large ion colliders used to probe the finest details of matter. Other engineering applications include ion-implantation devices for producing better semiconductors and materials with remarkable properties. The development of materials that can withstand irradiation in future nuclear fusion plants also benefits from particle accelerators. Many devices in a particle accelerator are required for its correct operation. The most important are the particle sources; the guiding, focusing and correcting magnets; the radiofrequency accelerating cavities; the fast deflection devices; the beam diagnostic mechanisms; and the particle detectors. Historically, most fast particle-deflection devices were built using copper coils and ferrite cores, which achieve a relatively fast magnetic deflection but need large voltages and currents to counteract the high coil inductance, limiting the response to the microsecond range. Beam stability considerations and the new range of energies and sizes of present-day accelerators and their rings require new devices featuring improved wakefield behaviour and faster response (in the nanosecond range). This can only be achieved by an electromagnetic deflection device based on a transmission line. The electromagnetic deflection device (strip-line kicker) produces a transverse displacement of a particle beam travelling close to the speed of light, in order to extract the particles to another experiment or inject them into a different accelerator. The deflection is carried out by means of two short pulses of opposite phase; the particles are diverted by the integrated Lorentz force of the electromagnetic field travelling along the kicker. This thesis presents a detailed calculation, manufacturing and test methodology for strip-line kicker devices. The methodology is then applied to two real cases, which are fully designed, built, tested and finally installed in the CTF3 accelerator facility at CERN (Geneva). Analytical and numerical calculations, both in 2D and 3D, are detailed, starting from the basic specifications, in order to obtain a conceptual design. Time-domain and frequency-domain calculations are developed in the process using different FDM and FEM codes. Among other concepts, the following are analyzed: scattering parameters, resonating higher-order modes and wakefields. Several contributions are presented in the calculation process dealing specifically with strip-line kicker devices fed by electromagnetic pulses. Materials and components typically used for the fabrication of these devices are analyzed in the manufacturing section. Mechanical supports and connections of electrodes are also detailed, with some interesting contributions on these concepts. The electromagnetic and vacuum tests, required to ensure that the manufactured devices fulfil the specifications, are then analyzed. Finally, and from the analytical point of view only, the strip-line kickers are studied together with a pulsed power supply based on solid-state power switches (MOSFETs). Solid-state technology applied to pulsed power supplies is introduced, and several circuit topologies are modelled and simulated to obtain fast pulses with a good flat top.
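To make the integrated Lorentz-force deflection concrete, a hedged textbook estimate (standard strip-line kicker result, with notation assumed here rather than taken from the thesis): for an ultra-relativistic beam and a counter-propagating TEM pulse, the electric and magnetic forces add, so

\[
\Delta p_\perp \;=\; q \int \big( \vec{E} + \vec{v}\times\vec{B} \big)_\perp \, dt
\;\approx\; \frac{2\,q\,V\,L}{\beta c\, d},
\qquad
\theta \;\approx\; \frac{\Delta p_\perp}{p},
\]

where \(V\) is the pulse voltage on each electrode (the two electrodes being pulsed with opposite polarity), \(d\) the electrode gap, \(L\) the kicker length, \(p\) the beam momentum and \(\beta c\) the beam velocity; geometric coverage factors of order one are omitted.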
Abstract:
This paper presents a theoretical analysis of the possible impact of jitter on the application of a numeric criterion for fast measurement of frequency by the coincidence principle. The primary goal is the generation of a signal containing a known amount of each jitter component; this signal is used for testing the method on regular pulse trains. Initially, jitter components are analyzed and modeled individually. Next, sequences combining different kinds of jitter are modeled, simulated and evaluated. Simulation of the jitter model in Matlab shows that the frequency-measurement results are independent of the total jitter present in the reference and desired pulse trains. Good agreement between the previously introduced theory of fast frequency measurement and the simulations in the presence of jitter is verified; these results allow engineers to use the numeric criterion for fast frequency measurement despite interactions among jitter components in various frequency-domain sensor applications.
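A minimal sketch of the kind of simulation described (illustrative Python rather than the paper's Matlab models; the jitter magnitudes and the averaging-based estimator below are assumptions, not the paper's numeric criterion):

```python
import numpy as np

rng = np.random.default_rng(0)

f_true = 1.0e6                     # assumed pulse-train frequency, Hz
n = 10_000
t_nominal = np.arange(n) / f_true

# Jitter components modelled individually, then combined:
random_jitter   = rng.normal(0.0, 2e-9, n)                      # 2 ns rms Gaussian
periodic_jitter = 1e-9 * np.sin(2 * np.pi * 50e3 * t_nominal)   # 50 kHz sinusoid
timestamps = t_nominal + random_jitter + periodic_jitter

# Estimate frequency from the whole record: jitter only perturbs the two
# end points, so its effect on the estimate shrinks with measurement time.
f_est = (n - 1) / (timestamps[-1] - timestamps[0])
print(f"relative error: {abs(f_est - f_true) / f_true:.2e}")
```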
Abstract:
The evolution of water content in a sandy soil during the sprinkler irrigation campaign, in the summer of 2010, of a sugar beet field located at Valladolid (Spain) is assessed with a capacitive FDR (Frequency Domain Reflectometry) EnviroScan probe. This field is one of the experimental sites of the Spanish research center for sugar beet development (AIMCRA). The work focuses on monitoring the soil water content evolution over consecutive irrigations during the second half of July (from the 12th to the 28th). These measurements are used to simulate water movement by means of Hydrus-2D. The probe logged water content readings (m3/m3) at 10, 20, 40 and 60 cm depth every 30 minutes and was placed between two rows within the typical 12 x 15 m sprinkler irrigation framework. Furthermore, a texture analysis of the soil profile was also conducted. The irrigation frequency on this farm was set by the farmer's own criteria: aiming to minimize electricity pumping costs, he used to irrigate at night and during the weekend, i.e. at longer irrigation intervals than expected. However, the high evapotranspiration rates and the weekly sugar beet water consumption (up to 50 mm/week) clearly determined the need to shorten this interval. Moreover, the farmer used to irrigate for five or six hours, whilst results from the EnviroScan probe showed the soil profile reaching saturation after the first three hours. It must be noted that AIMCRA provides its members with an SMS service on the weekly sugar beet water requirement; from different meteorological stations and evapotranspiration pans, farmers get an idea of the weekly irrigation needs. Nevertheless, it is the farmer's decision how to irrigate. Thus, in order to minimize water stress and pumping costs, a suitable irrigation time and irrigation frequency were modeled with Hydrus-2D. Results for the period mentioned above showed water content values ranging from 35 and 30 (m3/m3) at the first 10 and 20 cm of profile depth (two hours after irrigation) down to minima of 14 and 13 (m3/m3) (two hours before irrigation). At 40 and 60 cm depth, water content varies steadily across the dates: the greater the root activity, the greater the water content variation. According to the results from the EnviroScan probe and the Hydrus-2D modeling, shorter irrigation intervals and irrigation times are suggested.
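As a sketch of how such 30-minute probe logs can feed an irrigation-time decision (a hypothetical helper, not AIMCRA's service or the study's software; the saturation level is an assumed parameter):

```python
import numpy as np

def hours_to_saturation(times_h, theta, theta_sat=0.35, tol=0.01):
    """First time (hours from irrigation start) at which the water-content
    reading comes within `tol` of the assumed saturation value `theta_sat`."""
    hit = np.nonzero(np.asarray(theta) >= theta_sat - tol)[0]
    return times_h[hit[0]] if hit.size else None

# Synthetic 30-minute readings at 10 cm depth rising toward saturation:
times = np.arange(0, 6.0, 0.5)                    # hours
theta = 0.14 + 0.21 * (1 - np.exp(-times / 0.9))  # m3/m3
print(hours_to_saturation(times, theta))          # ~3 h: no point irrigating 6 h
```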
Abstract:
Axisymmetric shells are analyzed by means of one-dimensional continuum elements, using the analogy between the bending of shells and the bending of beams on an elastic foundation. The mathematical model is formulated in the frequency domain. Because the solutions of the governing equations for beam vibration are exact, the spatial discretization depends only on geometrical or material considerations. In some situations, for example under high-frequency excitation, this approach may be more convenient than conventional ones such as the finite element method.
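For orientation, the frequency-domain governing equation behind the shell-beam analogy has the classic beam-on-elastic-foundation form (a standard result, stated with assumed notation):

\[
EI \,\frac{d^4 \hat{w}(x)}{dx^4} + \big( k_f - \rho A\,\omega^2 \big)\, \hat{w}(x) \;=\; \hat{q}(x,\omega),
\]

where \(EI\) is the bending stiffness, \(k_f\) the foundation modulus (supplied by the hoop stiffness of the shell), \(\rho A\) the mass per unit length, and \(\hat{w}\), \(\hat{q}\) the transformed deflection and load. Since the homogeneous solutions \(e^{\lambda x}\) with \(\lambda^4 = (\rho A\,\omega^2 - k_f)/EI\) are exact at every frequency, the element size is not dictated by the wavelength.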
Abstract:
Time-resolved reflectance is proposed and effectively used for the nondestructive measurement of the optical properties of apples. The technique is based on the detection of the temporal dispersion of a short laser pulse injected into the probed medium. The time distribution of re-emitted photons, interpreted with a solution of the diffusion equation, yields the mean values of the absorption and reduced scattering coefficients of the medium. The technique proved valuable for the measurement of the absorption and scattering spectra of different varieties of apples. No major variations were observed in the experimental data when the fruit was peeled, proving that the measured optical properties refer to the pulp. The depth of the probed volume was determined to be about 2 cm. Finally, the technique proved capable of following the change in chlorophyll absorption during storage.
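For context, the diffusion-equation solution typically fitted to such measurements is the time-resolved reflectance of a semi-infinite homogeneous medium (the classic Patterson-Chance-Wilson expression; notation assumed here):

\[
R(\rho, t) \;=\; (4\pi D c)^{-3/2}\, z_0 \, t^{-5/2}\, \exp(-\mu_a c t)\, \exp\!\left( -\frac{\rho^2 + z_0^2}{4 D c t} \right),
\]

with diffusion coefficient \(D = 1/(3\mu_s')\), source depth \(z_0 = 1/\mu_s'\), source-detector distance \(\rho\) and speed of light in the medium \(c\); fitting the measured photon time-of-flight curve with this expression yields the absorption coefficient \(\mu_a\) and the reduced scattering coefficient \(\mu_s'\).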
Abstract:
This Master's Thesis analyzes the vibratory behaviour of membrane resonators, consisting of a thin, lightweight panel mounted at some distance from a rigid, heavy building element. Such resonant systems are commonly used as mid-low frequency absorbers in room acoustics applications. The analysis places special emphasis on the influence of the mechanical-acoustic coupling between the vibrating plate (structure) and the air cushion (fluid) enclosed between it and the rigid wall. First, an experimental modal analysis of the resonator under test is performed from measurements of its vibratory response, in order to characterize its behaviour in terms of its first coupled bending modes. The frequency-domain analysis of the vibration signals for identifying these modes is carried out in the MATLAB programming environment, using an in-house tool that implements the calculation methods and algorithms required for this purpose. The behaviour of the resonator is also simulated with the finite element method (FEM), using the ANSYS and SYSNOISE applications and considering different boundary conditions in the generated model. The simulation results complement those obtained experimentally when drawing practical conclusions from the analysis.
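For reference, the resonance around which such a panel-air-cushion system is designed is usually estimated with the standard membrane-absorber formula (a textbook single-degree-of-freedom result, not a value from the thesis):

\[
f_0 \;=\; \frac{1}{2\pi} \sqrt{\frac{\rho_0 c^2}{m\, d}} \;\approx\; \frac{60}{\sqrt{m\, d}} \ \text{Hz},
\]

where \(m\) is the surface mass density of the panel in kg/m², \(d\) the air-gap depth in m and \(\rho_0 c^2\) the adiabatic bulk modulus of air; the mechanical-acoustic coupling studied here shifts the measured coupled modes away from this simple estimate.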
Abstract:
To perceive a coherent environment, incomplete or overlapping visual forms must be integrated into meaningful coherent percepts, a process referred to as 'Gestalt' formation or perceptual completion. Increasing evidence suggests that this process engages oscillatory neuronal activity in a distributed neuronal assembly. A separate line of evidence suggests that Gestalt formation requires top-down feedback from higher order brain regions to early visual cortex. Here we combine magnetoencephalography (MEG) and effective connectivity analysis in the frequency domain to specifically address the effective coupling between sources of oscillatory brain activity during Gestalt formation. We demonstrate that perceptual completion of two-tone 'Mooney' faces induces increased gamma frequency band power (55–71 Hz) in human early visual, fusiform and parietal cortices. Within this distributed neuronal assembly fusiform and parietal gamma oscillators are coupled by forward and backward connectivity during Mooney face perception, indicating reciprocal influences of gamma activity between these higher order visual brain regions. Critically, gamma band oscillations in early visual cortex are modulated by top-down feedback connectivity from both fusiform and parietal cortices. Thus, we provide a mechanistic account of Gestalt perception in which gamma oscillations in feature sensitive and spatial attention-relevant brain regions reciprocally drive one another and convey global stimulus aspects to local processing units at low levels of the sensory hierarchy by top-down feedback. Our data therefore support the notion of inverse hierarchical processing within the visual system underlying awareness of coherent percepts.
Abstract:
The Morse code, invented in 1838 for use in telegraphy, is one of the first examples of the practical use of data compression [1]: the most common letters of the alphabet are encoded with shorter codes than the rest. From 1940 onwards, following the development of information theory and the creation of the first computers, data compression has been a constant and fundamental challenge for researchers of all kinds. The greater our understanding of the meaning of information, the greater our success at compressing it. In the case of multimedia information, its nature allows lossy compression, reaching compression rates impossible for lossless algorithms. These "recent" lossy algorithms have mostly been based on transforming the information to the frequency domain and discarding part of the information in that domain. Transforming to the frequency domain has advantages but also involves unavoidable computational costs. This thesis introduces a new multimedia compression algorithm called "LHE" (Logarithmical Hopping Encoding) that does not require transformation to the frequency domain, but works in the spatial domain. This makes LHE a linear algorithm of reduced computational complexity. The results of the algorithm are promising, outperforming the JPEG standard in quality and speed. The algorithm builds on the physiological response of the human eye to the light stimulus: the eye, like the other senses, responds to the logarithm of the signal, in accordance with Weber's law. The algorithm consists of several stages. One of them is the measurement of "perceptual relevance", a new metric that allows us to measure the relevance of information in the subject's mind and, based on it, to degrade the content to a greater or lesser extent through what I have called "elastic downsampling". The elastic downsampling stage is an unprecedented technique in digital image processing: it takes more or fewer samples in different areas of an image depending on their perceptual relevance. This thesis takes the first steps towards what may become a new standard multimedia compression format (image, video and audio), free of patents and with high performance in both speed and quality.
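As a loose illustration of the "logarithmic hopping" idea (a sketch under assumed parameters, not the actual LHE codec from the thesis): predict each pixel from its neighbour and quantize the prediction error to a small set of logarithmically spaced hops, mirroring the Weber-law response described above.

```python
import numpy as np

# Hypothetical hop ladder: first hop H1, each further hop a factor R larger,
# with both signs plus a null hop. All values are illustrative only.
H1, R, N_HOPS = 4.0, 2.0, 4
HOPS = np.array([s * H1 * R**k for s in (-1, 1) for k in range(N_HOPS)] + [0.0])

def encode_row(row):
    """Quantize each pixel's prediction error to the nearest hop and return
    the reconstruction the decoder would see (hops only, no raw values)."""
    pred, out = float(row[0]), [float(row[0])]
    for px in row[1:]:
        hop = HOPS[np.argmin(np.abs(HOPS - (px - pred)))]  # nearest hop
        pred = float(np.clip(pred + hop, 0, 255))
        out.append(pred)
    return np.array(out)

row = np.array([100, 104, 110, 140, 141, 90], dtype=float)
print(encode_row(row))  # smooth runs need small hops; edges use large ones
```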
Abstract:
In this work the thermal analysis of a small satellite orbiting the Earth is approached by direct integration of the heat balance equations of a two-node reduced model, which yields a linearized second-order ODE problem similar in form to the classical forced vibration of a damped system. As the thermal loads (solar radiation, albedo, etc.) are harmonic, the problem is solved by means of Fourier analysis methods; research in that field can be directly applied to the analysis of thermal problems, and the results obtained are satisfactory. Working in the frequency domain streamlines the analysis, simplifies the study and facilitates experimental testing. The transfer functions are obtained for the two-node case, but the study can be extended to an n-node model.
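A hedged sketch of the reduction described (notation assumed; the actual coefficients depend on the node capacitances and the linearized conductive-radiative couplings): eliminating one node from the two-node balance produces the damped-oscillator analogue, so each harmonic of the load maps through a transfer function.

\[
C_1 \dot{T}_1 = -k\,(T_1 - T_2) - k_r\, T_1 + Q_1(t), \qquad
C_2 \dot{T}_2 = -k\,(T_2 - T_1) + Q_2(t),
\]

where eliminating \(T_2\) gives \(a\,\ddot{T}_1 + b\,\dot{T}_1 + c\,T_1 = f(t)\); for a harmonic load component \(f(t) = \hat{f}\, e^{i\omega t}\) the steady-state response is \(\hat{T}_1 = H(i\omega)\, \hat{f}\) with \(H(i\omega) = \big(-a\,\omega^2 + i\,b\,\omega + c\big)^{-1}\).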