935 results for spectral spaces in MV-algebra


Relevance: 100.00%

Abstract:

The goldfish, in contrast to humans, possesses a tetrachromatic colour vision system that has been studied exceptionally well. Colours of equal brightness can be represented here in a three-dimensional tetrahedron. The aim of the present work was to find out how well goldfish can discriminate colours that appear unsaturated to humans and lie in the interior of the colour tetrahedron. A further question was whether both "white" (without UV) and xenon white (with UV) are perceived by the fish as achromatic or "neutral". To investigate all this, a complex experimental setup had to be developed with which the fish could be shown monochromatic lights, lights mixed with white at equal brightness, and xenon white. Through operant conditioning, the fish learned to discriminate a training stimulus (monochromatic light of wavelength 660 nm, 599 nm, 540 nm, 498 nm or 450 nm) from a comparison stimulus (projector white). Subsequently, increasing amounts of the respective training wavelength were admixed to the comparison stimulus in steps of 10, until the goldfish could no longer make a reliable choice for the training stimulus. The discrimination performance of the goldfish decreased with increasing admixture of the training wavelength to the projector white, and a region emerged in the base of the tetrahedron in which the goldfish could no longer discriminate. To characterise this region further, the goldfish were shown, in transfer tests, mixed lights that were just barely indistinguishable from projector white. Since the goldfish could not discriminate these mixed lights from one another, it can be concluded that there is a larger region which, like white (without UV), appears "neutral" to the goldfish. If white (without UV) appears neutral to the goldfish, it should be similar to xenon white. The experiments showed, however, that the goldfish perceive white (without UV) and xenon white as different. Considering the saturation of the spectral colours, the spectral colour 540 nm appeared most saturated to the goldfish, and the spectral colour 660 nm least saturated.
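To make the tetrahedral representation concrete, here is a minimal Python sketch of how four cone excitations can be mapped to a point in such a colour tetrahedron, with equal excitation of all cones landing at the neutral centre. The vertex layout and the normalisation are illustrative assumptions, not the calibration used in the thesis.

```python
import numpy as np

# Vertices of a regular tetrahedron, one per cone type (layout assumed)
VERTICES = np.array([
    [1, 1, 1],    # UV
    [1, -1, -1],  # S
    [-1, 1, -1],  # M
    [-1, -1, 1],  # L
], dtype=float)

def tetrahedral_locus(cone_excitations):
    """Map relative cone excitations to a point in the colour tetrahedron."""
    q = np.asarray(cone_excitations, dtype=float)
    w = q / q.sum()          # barycentric weights (equal-brightness normalisation)
    return w @ VERTICES      # convex combination of the vertices

# An achromatic ("neutral") stimulus excites all cones equally and maps
# to the centre of the tetrahedron:
print(tetrahedral_locus([1, 1, 1, 1]))        # -> [0. 0. 0.]
# A light mainly driving the L cone lies near the L vertex:
print(tetrahedral_locus([0.05, 0.1, 0.2, 0.65]))
```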

Relevance: 100.00%

Abstract:

The aim of this thesis is the characterisation of an optical sensor for haematocrit readings and the development of the device's calibration algorithm. In other words, using data obtained from an appropriately planned calibration session, the developed algorithm returns the data-interpolation curve that characterises the transducer. The main steps of the thesis work are summarised in the following points: 1) Planning of the calibration session required for data collection and subsequent construction of a black-box model. Output: reading from the optical sensor (expressed in mV). Input: haematocrit value expressed in percentage points (this quantity represents the true blood-volume value and was obtained with a blood centrifugation device). 2) Development of the algorithm. The algorithm, developed and run offline, returns the regression curve of the data. At a high level, the code can be divided into two main parts: 1. acquisition of the data coming from the sensor and of the operating state of the biphasic pump; 2. normalisation of the acquired data with respect to the sensor's reference value and implementation of the regression algorithm. The data normalisation step is a fundamental statistical tool for comparing quantities that are not uniform with one another. Existing studies also demonstrate a morphological change of the red blood cell in response to mechanical stress. A further aspect addressed in this work concerns the blood-flow velocity set by the pump and how this quantity can influence the haematocrit reading.
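As a rough illustration of the calibration step described above, the following Python sketch normalises the sensor readings against a reference value and fits a regression curve; the data points, reference value and polynomial degree are invented for the example and do not come from the thesis.

```python
import numpy as np

hct = np.array([20, 25, 30, 35, 40, 45])               # input: true haematocrit (%)
reading_mv = np.array([812, 776, 742, 705, 671, 640])  # output: sensor (mV), made-up data

V_REF = 1000.0                 # sensor reference value (assumed)
norm = reading_mv / V_REF      # normalisation step

coeffs = np.polyfit(hct, norm, deg=2)  # regression / interpolation curve
calibration = np.poly1d(coeffs)

# Invert the curve numerically to read haematocrit from a new measurement:
new_norm = 0.70
candidates = (calibration - new_norm).roots
hct_estimate = [r.real for r in candidates if abs(r.imag) < 1e-9 and 0 < r.real < 100]
print(hct_estimate)
```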

Relevance: 100.00%

Abstract:

Elliptical galaxies are among the most characteristic objects we can find in the sky. In order to unveil their properties, such as their structure or chemical composition, one must study their spectral emission. In fact, they seem to behave rather differently when observed with different eyes. This is because their light is mainly carried by two different components: optical radiation arises from their stars, while the X-ray emission is primarily due to a halo of extremely hot gas in which ellipticals seem to be embedded. After a brief classification, the two main processes linked to these phenomena will be described, together with the information we can collect thanks to them. Finally, we will take a quick look at the other regions of the electromagnetic spectrum.

Relevance: 100.00%

Abstract:

Spectroscopy of the 1S-2S transition of antihydrogen confined in a neutral-atom trap, and comparison with the equivalent spectral line in hydrogen, will provide an accurate test of CPT symmetry and the first one in a mixed baryon-lepton system. Also, with neutral antihydrogen atoms, the gravitational interaction between matter and antimatter can be tested unperturbed by the much stronger Coulomb forces.

Antihydrogen is regularly produced at CERN's Antiproton Decelerator by three-body recombination (TBR) of one antiproton and two positrons. The method requires injecting antiprotons into a cloud of positrons, which raises the average temperature of the antihydrogen atoms produced far above the typical 0.5 K trap depths of neutral-atom traps. Therefore only very few antihydrogen atoms can be confined at a time. Precision measurements, like laser spectroscopy, will greatly benefit from larger numbers of simultaneously trapped antihydrogen atoms.

Therefore, the ATRAP collaboration developed a different production method that has the potential to create much larger numbers of cold, trappable antihydrogen atoms. Positrons and antiprotons are stored and cooled in a Penning trap in close proximity. Laser-excited cesium atoms collide with the positrons, forming Rydberg positronium, a bound state of an electron and a positron. The positronium atoms are no longer confined by the electric potentials of the Penning trap, and some drift into the neighboring cloud of antiprotons where, in a second charge-exchange collision, they form antihydrogen. The antiprotons remain at rest during the entire process, so much larger numbers of trappable antihydrogen atoms can be produced. Laser excitation is necessary to increase the efficiency of the process, since the cross sections for charge-exchange collisions scale with the fourth power of the principal quantum number n.

This method, named double charge-exchange, was demonstrated by ATRAP in 2004. Since then, ATRAP has constructed a new combined Penning-Ioffe trap and a new laser system. The goal of this thesis was to implement the double charge-exchange method in this new apparatus and increase the number of antihydrogen atoms produced.

Compared to our previous experiment, we could raise the numbers of positronium and antihydrogen atoms produced by two orders of magnitude. Most of this gain is due to the larger positron and antiproton plasmas available by now, but we also achieved significant improvements in the efficiencies of the individual steps. We therefore showed that double charge-exchange can produce numbers of antihydrogen comparable to the TBR method, while the fraction of cold, trappable atoms is expected to be much higher. This work is therefore an important step towards precision measurements with trapped antihydrogen atoms.
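To put the quoted fourth-power scaling in perspective (a back-of-the-envelope illustration, not a number from the thesis):

$$ \frac{\sigma(n)}{\sigma(1)} = n^4, \qquad \text{so e.g. } \sigma(n{=}30) = 30^4\,\sigma(1) \approx 8.1\times10^{5}\,\sigma(1), $$

which is why laser excitation to high Rydberg states makes the two charge-exchange steps efficient.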

Relevance: 100.00%

Abstract:

After briefly discussing, in chapter one, the natural homogeneous Lie group structure induced by Kolmogorov equations, we define in chapter two an intrinsic version of Taylor polynomials and Hölder spaces. We also compare our definition with others already known in the literature. In chapter three we prove an analogue of the Taylor formula, that is, an estimate of the remainder in terms of the homogeneous metric.
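Schematically, the kind of remainder estimate meant here reads as follows (the notation is assumed for illustration, not quoted from the thesis): for u in the intrinsic Hölder space C^{n,α} and T_n u(x_0, ·) the intrinsic Taylor polynomial of degree n at x_0,

$$ |u(x) - T_n u(x_0, x)| \le C\,\|u\|_{C^{n,\alpha}}\; d(x_0, x)^{\,n+\alpha}, $$

where d denotes the homogeneous quasi-metric induced by the group dilations.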

Relevance: 100.00%

Abstract:

Plasmons in metal nanoparticles respond to changes in their local environment with a spectral shift of their resonance. Here, the potential of plasmonic metal nanoparticles for label-free detection and observation of biological systems is presented. Comparing silver and gold with respect to plasmonic sensitivity, silver nanoparticles exhibit a higher sensitivity, but their chemical instability under light exposure limits general usage. A new approach combining results from optical dark-field microscopy and transmission electron microscopy allows localization and quantification of gold nanoparticles internalized into living cells. Nanorods coated with a negatively charged biocompatible polymer appear to be promising candidates for sensing membrane fluctuations of adherent cells. Many small nanoparticles, each acting as a specific sensing element, can be combined into a sensor for parallel analyte detection without the need for labeling; such a sensor is easy to fabricate, re-usable, and sensitive down to nanomolar concentrations. Besides analyte detection, the binding kinetics of various partner proteins interacting with one protein of interest are accessible in parallel. Gold nanoparticles are able to sense local oscillations in the surface density of proteins on a lipid bilayer, which could not be resolved before. Studies on the fluorescently labeled and the unlabeled system identify an influence of the label on the kinetics.
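The spectral shift underlying this kind of sensing is commonly described by the standard phenomenological relation (a textbook formula, not one derived in this work):

$$ \Delta\lambda_{\max} \approx m\,\Delta n\,\bigl(1 - e^{-2d/l_d}\bigr), $$

where m is the refractive-index sensitivity of the particle (nm per refractive-index unit), Δn the change of the local refractive index, d the thickness of the adsorbate layer, and l_d the decay length of the plasmonic near field.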

Relevance: 100.00%

Abstract:

The first chapter of this work aims to provide a brief overview of the history of our Universe, in the context of string theory and considering inflation as its possible application to cosmological problems. We then discuss type IIB string compactifications, introducing the study of the inflaton, a scalar field that is a candidate for driving inflation. The Large Volume Scenario (LVS) is studied in the second chapter, paying particular attention to the stabilisation of the Kähler moduli, which are four-dimensional gravitationally coupled scalar fields that parameterise the size of the extra dimensions. Moduli stabilisation is the process through which these particles acquire a mass and can become promising inflaton candidates. The third chapter is devoted to the study of Fibre Inflation, an interesting inflationary model derived within the context of LVS compactifications. The fourth chapter tries to extend the slow-roll region of the scalar potential by taking larger values of the field φ, with the purpose of studying in detail deviations of the cosmological observables, which can better reproduce current experimental data. Finally, we present a slight modification of Fibre Inflation based on a different compactification manifold. This new model produces larger tensor modes, with a spectral index in good agreement with the data released in February 2015 by the Planck satellite.
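For reference, the cosmological observables mentioned here derive from the standard slow-roll parameters of the scalar potential V(φ) (standard definitions, not specific to this thesis):

$$ \epsilon = \frac{M_p^2}{2}\left(\frac{V'}{V}\right)^{\!2}, \qquad \eta = M_p^2\,\frac{V''}{V}, \qquad n_s \simeq 1 - 6\epsilon + 2\eta, \qquad r \simeq 16\epsilon, $$

so flattening the potential at larger φ directly shifts the spectral index n_s and the tensor-to-scalar ratio r.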

Relevance: 100.00%

Abstract:

Magnetic resonance imaging of inhaled fluorinated inert gases (¹⁹F-MRI) such as sulfur hexafluoride (SF₆) allows for analysis of ventilated air spaces. In this study, the possibility of using this technique to image lung function was assessed. For this, ¹⁹F-MRI of inhaled SF₆ was compared with respiratory gas analysis, which is a global but reliable measure of alveolar gas fraction. Five anesthetized pigs underwent multiple-breath wash-in procedures with a gas mixture of 70% SF₆ and 30% oxygen. Two-dimensional ¹⁹F-MRI and end-expiratory gas fraction analysis were performed after 4 to 24 inhaled breaths. Signal intensity of ¹⁹F-MRI and end-expiratory SF₆ fraction were evaluated with respect to linear correlation and reproducibility. Time constants were estimated from both MRI and respiratory gas analysis data and compared for agreement. A good linear correlation between signal intensity and end-expiratory gas fraction was found (correlation coefficient 0.99±0.01). The data were reproducible (standard error of signal intensity 8% vs. that of gas fraction 5%) and the comparison of time constants yielded sufficient agreement. Given the good linear correlation and the acceptable reproducibility, we suggest ¹⁹F-MRI to be a valuable tool for quantification of intrapulmonary SF₆ and hence lung function.
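A minimal sketch of how such time constants can be estimated from either data set, assuming a mono-exponential wash-in model; the breath numbers and signal values below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def washin(n, amplitude, tau):
    """Mono-exponential multiple-breath wash-in; n = breath number."""
    return amplitude * (1.0 - np.exp(-n / tau))

breaths = np.array([4, 8, 12, 16, 20, 24], dtype=float)
signal = np.array([0.42, 0.66, 0.80, 0.88, 0.93, 0.95])  # normalised, illustrative

(amplitude, tau), _ = curve_fit(washin, breaths, signal, p0=(1.0, 6.0))
print(f"time constant: {tau:.1f} breaths")
```

Fitting the same model to the MRI signal intensities and to the end-expiratory gas fractions yields the two sets of time constants whose agreement is compared in the study.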

Relevance: 100.00%

Abstract:

Frequency-transformed EEG resting data have been widely used to describe normal and abnormal brain functional states as a function of the spectral power in different frequency bands. This has yielded a series of clinically relevant findings. However, by transforming the EEG into the frequency domain, the initially excellent time resolution of time-domain EEG is lost. The topographic time-frequency decomposition is a novel computerized EEG analysis method that combines previously available techniques from time-domain spatial EEG analysis and time-frequency decomposition of single-channel time series. It yields a new, physiologically and statistically plausible topographic time-frequency representation of human multichannel EEG. The original EEG is accounted for by the coefficients of a large set of user-defined EEG-like time series, which are optimized for maximal spatial smoothness and minimal norm. These coefficients are then reduced to a small number of model scalp field configurations, which vary in intensity as a function of time and frequency. The result is thus a small number of EEG field configurations, each with a corresponding time-frequency (Wigner) plot. The method has several advantages: it does not assume that the data are composed of orthogonal elements, it does not assume stationarity, it produces topographic maps, and it allows the inclusion of user-defined, specific EEG elements such as spike-and-wave patterns. After a formal introduction of the method, several examples are given, which include artificial data and multichannel EEG recorded during different physiological and pathological conditions.
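As a greatly simplified illustration of the general idea (per-channel time-frequency decomposition followed by reduction to a few spatial field configurations), here is a Python sketch using a Morlet wavelet transform and an SVD across channels. This is not the published algorithm, which optimises for spatial smoothness and minimal norm rather than orthogonality; the data here are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_ch, n_t = 250, 19, 1000           # sampling rate, channels, samples
eeg = rng.standard_normal((n_ch, n_t))  # stand-in for a real recording

def morlet(f, fs, width=7.0):
    """Complex Morlet wavelet at frequency f."""
    sigma_t = width / (2 * np.pi * f)
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    return np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))

freqs = np.arange(2, 30, 2)
# time-frequency coefficients: channels x freqs x time
tf = np.stack([
    np.array([np.convolve(ch, morlet(f, fs), mode="same") for f in freqs])
    for ch in eeg
])

# SVD across channels -> a few dominant scalp field configurations,
# each paired with a time-frequency coefficient map
flat = np.abs(tf).reshape(n_ch, -1)
maps, weights, tf_components = np.linalg.svd(flat, full_matrices=False)
print(maps[:, :3].shape)   # first three model scalp maps (n_ch x 3)
```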

Relevance: 100.00%

Abstract:

We re-analyze the signal of non-planetary energetic neutral atoms (ENAs) in the 0.4-5.0 keV range measured with the Neutral Particle Detector (NPD) of the ASPERA-3 and ASPERA-4 experiments on board the Mars Express and Venus Express satellites. Due to improved knowledge of sensor characteristics and the exclusion of data sets affected by instrumental effects, the typical intensity of the ENA signal obtained by ASPERA-3 is an order of magnitude lower than in earlier reports. The ENA intensities measured with ASPERA-3 and ASPERA-4 now agree with each other. In the present analysis, we also correct the ENA signal for the Compton-Getting effect and for ionization loss processes under the assumption of a heliospheric origin. We find spectral shapes and intensities consistent with those measured by the Interstellar Boundary Explorer (IBEX). The principal advantage of ASPERA over the IBEX sensors is a spectral resolution twice as good. In this study, we discuss the physical significance of the spectral shapes and their potential variation across the sky. At present, these observations are the only independent test of the heliospheric ENA signal measured with IBEX in this energy range. The ASPERA measurements also allow us to check for a temporal variation of the heliospheric signal, as they were obtained between 2003 and 2007, whereas IBEX has been operational since the end of 2008.
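For orientation, the first-order Compton-Getting correction for a power-law spectrum j ∝ E^{−γ} has the standard textbook form (not necessarily the paper's exact implementation):

$$ \frac{\Delta j}{j} \simeq (\gamma + 2)\,\frac{u}{v}\cos\theta, $$

where u is the observer's speed relative to the ENA source frame, v the ENA speed, and θ the angle between the viewing direction and the velocity vector.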

Relevance: 100.00%

Abstract:

We analyze a series of targeted CRISM and HiRISE observations of seven regions of interest at high latitudes in the northern polar regions of Mars. These data allow us to investigate the temporal evolution of the composition of the seasonal ice cap during spring, with special emphasis on peculiar phenomena occurring in the dune fields and in the vicinity of the scarps of the North Polar Layered Deposits (NPLDs). The strength of the spectral signature of CO₂ ice continuously decreases during spring, whereas that of H₂O ice first shows a strong increase until Ls = 50°. This evolution is consistent with a scenario previously established from analysis of OMEGA data, in which a thin layer of pure H₂O ice progressively develops at the surface of the volatile layer. During early spring (Ls < 10°), widespread jet activity is observed by HiRISE while strong spectral signatures of CO₂ ice are detected by CRISM. Later, around Ls = 20-40°, activity concentrates at the dune fields, where CRISM also detects a spectral enrichment in CO₂ ice, consistent with "Kieffer's model" (Kieffer, H.H. [2007]. J. Geophys. Res. 112, E08005. doi:10.1029/2006JE002816) for jet activity. Effects of wind are prominent across the dune fields and seem to strongly influence the sublimation of the volatile layer. Strong winds blowing down the scarps could also be responsible for the significant spatial and temporal variability of the surface ice composition observed close to the NPLDs.

Relevance: 100.00%

Abstract:

We consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors. We pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise and/or gross errors. By self-expressive we mean a dictionary whose atoms can be expressed as linear combinations of themselves with low-rank coefficients. In the case of noisy data, our key contribution is to show that this non-convex matrix decomposition problem can be solved in closed form from the SVD of the noisy data matrix. The solution involves a novel polynomial thresholding operator on the singular values of the data matrix, which requires minimal shrinkage. For one subspace, a particular case of our framework leads to classical PCA, which requires no shrinkage. For multiple subspaces, the low-rank coefficients obtained by our framework can be used to construct a data affinity matrix from which the clustering of the data according to the subspaces can be obtained by spectral clustering. In the case of data corrupted by gross errors, we solve the problem using an alternating minimization approach, which combines our polynomial thresholding operator with the more traditional shrinkage-thresholding operator. Experiments on motion segmentation and face clustering show that our framework performs on par with state-of-the-art techniques at a reduced computational cost.
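A compact sketch of the closed-form step for the noisy case, using plain rank truncation in place of the paper's polynomial thresholding operator (so a simplification of the described framework, not the actual method):

```python
import numpy as np

def self_expressive_coefficients(X, rank):
    """X: d x n data matrix; returns n x n low-rank coefficient matrix C with X ~ X C."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    V_r = Vt[:rank].T          # leading right singular vectors
    return V_r @ V_r.T         # C = V_r V_r^T

rng = np.random.default_rng(1)
# two independent 2-D subspaces in R^10, 30 points each, plus small noise
X = np.hstack([
    rng.standard_normal((10, 2)) @ rng.standard_normal((2, 30)),
    rng.standard_normal((10, 2)) @ rng.standard_normal((2, 30)),
]) + 0.01 * rng.standard_normal((10, 60))

C = self_expressive_coefficients(X, rank=4)
affinity = np.abs(C) + np.abs(C).T   # symmetric affinity for spectral clustering
print(affinity.shape)                # (60, 60)
```

The resulting affinity matrix can then be fed to any off-the-shelf spectral clustering routine to segment the data by subspace.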

Relevance: 100.00%

Abstract:

Medical instrumentation used in diagnosis and treatment relies on the accurate detection and processing of various physiological events and signals. While signal detection technology has improved greatly in recent years, there remain inherent delays in signal detection/processing. These delays may have significant negative clinical consequences during various pathophysiological events. Reducing or eliminating such delays would increase the ability to provide successful early intervention in certain disorders, thereby increasing the efficacy of treatment. In recent years, a physical phenomenon referred to as Negative Group Delay (NGD), demonstrated in simple electronic circuits, has been shown to temporally advance the detection of analog waveforms. Specifically, the output is temporally advanced relative to the input, as the time delay through the circuit is negative. The circuit output precedes the complete detection of the input signal. This process is referred to as signal advance (SA) detection. An SA circuit model incorporating NGD was designed, developed and tested. It imparts a constant temporal signal advance over a pre-specified spectral range in which the output is almost identical to the input signal (i.e., it has minimal distortion). Certain human patho-electrophysiological events are good candidates for the application of temporally advanced waveform detection. SA technology has potential in early arrhythmia and epileptic seizure detection and intervention. Demonstrating reliable and consistent temporally advanced detection of electrophysiological waveforms may enable intervention in a pathological event (much) earlier than previously possible. SA detection could also be used to improve the performance of neural-computer interfaces, neurotherapy applications, radiation therapy and imaging. In this study, the performance of a single-stage SA circuit model on a variety of constructed input signals and human ECGs is investigated. The data obtained are used to quantify and characterize the temporal advances and circuit gain, as well as distortions in the output waveforms relative to their inputs. This project combines elements of physics, engineering, signal processing, statistics and electrophysiology. Its success has important consequences for the development of novel interventional methodologies in cardiology and neurophysiology, as well as significant potential in a broader range of both biomedical and non-biomedical areas of application.
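A small simulation illustrating negative group delay in the simplest analog setting, assuming a first-order transfer function H(s) = (1 + as)/(1 + bs) with a > b; the component values are illustrative and not taken from the thesis' circuit:

```python
import numpy as np
from scipy.signal import freqs

a, b = 2e-3, 0.5e-3                       # time constants (s), a > b
w = np.logspace(1, 5, 500)                # angular frequencies (rad/s)
_, h = freqs([a, 1.0], [b, 1.0], worN=w)  # H(jw) for H(s) = (1 + a s)/(1 + b s)

phase = np.unwrap(np.angle(h))
group_delay = -np.gradient(phase, w)      # tau_g = -d(phase)/d(omega)

# At low frequencies tau_g ~ b - a < 0: the output leads the input.
print(f"low-frequency group delay: {group_delay[0]*1e3:.2f} ms")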

Relevance: 100.00%

Abstract:

Proton radiation therapy is gaining popularity because of the unique characteristics of its dose distribution, e.g., the high dose-gradient at the distal end of the percentage-depth-dose curve (known as the Bragg peak). The high dose-gradient offers the possibility of delivering high dose to the target while still sparing critical organs distal to it. However, the high dose-gradient is a double-edged sword: a small shift of the highly conformal high-dose area can cause the target to be substantially under-dosed or the critical organs to be substantially over-dosed. Because of this, large margins are required in treatment planning to ensure adequate dose coverage of the target, which prevents us from realizing the full potential of proton beams. It is therefore critical to reduce uncertainties in proton radiation therapy. One major uncertainty in a proton treatment is the range uncertainty related to the estimation of the proton stopping power ratio (SPR) distribution inside a patient. The SPR distribution is required to account for tissue heterogeneities when calculating the dose distribution inside the patient. In current clinical practice, the SPR distribution is estimated from the patient's treatment-planning computed tomography (CT) images based on a CT number-to-SPR calibration curve. The SPR derived from a single CT number carries large uncertainties in the presence of human tissue composition variations, which is the major drawback of the current SPR estimation method. We propose to solve this problem by using dual energy CT (DECT) and hypothesize that the range uncertainty can be reduced by a factor of two from the currently used value of 3.5%. A MATLAB program was developed to calculate the electron density ratio (EDR) and effective atomic number (EAN) from two CT measurements of the same object. An empirical relationship was discovered between the mean excitation energies and EANs of human body tissues. With the MATLAB program and this empirical relationship, a DECT-based method was successfully developed to derive SPRs for human body tissues (the DECT method). The DECT method is more robust against uncertainties in human tissue compositions than the current single-CT-based method, because it incorporates both density and elemental composition information in the SPR estimation. Furthermore, we studied practical limitations of the DECT method. We found that the accuracy of the DECT method using a conventional kV-kV x-ray pair is susceptible to CT number variations, which compromises its theoretical advantage. Our solution to this problem is to use a different x-ray pair for the DECT. The accuracy of the DECT method using different combinations of x-ray energies, i.e., the kV-kV, kV-MV and MV-MV pairs, was compared using the measured imaging uncertainties for each case. The kV-MV DECT was found to be the most robust against CT number variations. In addition, we studied how uncertainties propagate through the DECT calculation, and found general principles for selecting x-ray pairs that minimize the method's sensitivity to CT number variations. The uncertainties in SPRs estimated using the kV-MV DECT were analyzed further and compared to those of the stoichiometric method.
The uncertainties in SPR estimation can be divided into five categories according to their origins: the inherent uncertainty, the DECT modeling uncertainty, the CT imaging uncertainty, the uncertainty in the mean excitation energy, and the SPR variation with proton energy. Additionally, human body tissues were divided into three groups – low-density (lung) tissues, soft tissues and bone tissues – and the uncertainties were estimated separately for each group because they differ from group to group. An estimate of the composite range uncertainty (2σ) was determined for three tumor sites – prostate, lung, and head-and-neck – by combining the uncertainty estimates of all three tissue groups, weighted by their proportions along a typical beam path for each treatment site. In conclusion, the DECT method holds theoretical advantages over the current single-CT-based method in estimating SPRs for human tissues. Using existing imaging techniques, the kV-MV DECT approach was capable of reducing the range uncertainty from the currently used value of 3.5% to 1.9%-2.3%, although this falls short of our original goal of reducing the range uncertainty by a factor of two. The dominant source of uncertainty in the kV-MV DECT was CT imaging, especially MV CT imaging. Further reduction of beam-hardening effects, the impact of scatter, out-of-field objects, etc. would reduce the Hounsfield Unit variations in CT imaging. The kV-MV DECT therefore still has the potential to reduce the range uncertainty further.
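For context, a minimal sketch of the SPR computation that the DECT-derived quantities feed into, using the truncated Bethe formula relative to water; the numerical inputs below are illustrative, not values from the study:

```python
import numpy as np

MEC2 = 0.511e6      # electron rest energy (eV)
I_WATER = 75.0      # mean excitation energy of water (eV), a commonly used value

def spr(edr, i_medium_ev, kinetic_energy_mev=175.0):
    """SPR = electron density ratio x ratio of Bethe stopping numbers."""
    # relativistic beta^2 for a proton of the given kinetic energy
    mp_c2 = 938.272e6                      # proton rest energy (eV)
    T = kinetic_energy_mev * 1e6
    gamma = 1.0 + T / mp_c2
    beta2 = 1.0 - 1.0 / gamma**2

    def stopping_number(i_ev):
        # truncated Bethe formula, no shell or density corrections
        return np.log(2 * MEC2 * beta2 / (i_ev * (1 - beta2))) - beta2

    return edr * stopping_number(i_medium_ev) / stopping_number(I_WATER)

# e.g. a soft-tissue-like medium: EDR ~ 1.04, I ~ 72 eV (illustrative numbers)
print(f"SPR ~ {spr(1.04, 72.0):.3f}")
```

In the DECT method, the EDR comes directly from the two CT measurements and the mean excitation energy I is obtained from the EAN via the empirical relationship mentioned above.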

Relevance: 100.00%

Abstract:

A dedicated mission to investigate exoplanetary atmospheres represents a major milestone in our quest to understand our place in the universe by placing our Solar System in context and by addressing the suitability of planets for the presence of life. EChO—the Exoplanet Characterisation Observatory—is a mission concept specifically geared for this purpose. EChO will provide simultaneous, multi-wavelength spectroscopic observations on a stable platform that will allow very long exposures. The use of passive cooling, few moving parts and well-established technology gives a low-risk and potentially long-lived mission. EChO will build on observations by Hubble, Spitzer and ground-based telescopes, which discovered the first molecules and atoms in exoplanetary atmospheres. However, EChO's configuration and specifications are designed to study a number of systems in a consistent manner that will eliminate the ambiguities affecting prior observations. EChO will simultaneously observe a broad enough spectral region—from the visible to the mid-infrared—to constrain from one single spectrum the temperature structure of the atmosphere, the abundances of the major carbon- and oxygen-bearing species, the expected photochemically produced species and magnetospheric signatures. The spectral range and resolution are tailored to separate bands belonging to up to 30 molecules and to retrieve the composition and temperature structure of planetary atmospheres. The target list for EChO includes planets ranging from Jupiter-sized with equilibrium temperatures T_eq up to 2,000 K, to those of a few Earth masses with T_eq ∼ 300 K. The list will include planets with no Solar System analog, such as the recently discovered planet GJ1214b, whose density lies between that of terrestrial and gaseous planets, or the rocky-iron planet 55 Cnc e, with a day-side temperature close to 3,000 K. As the number of detected exoplanets grows rapidly each year, and the mass and radius of those detected steadily decrease, the target list will be constantly adjusted to include the most interesting systems. We have baselined a dispersive spectrograph design covering continuously the 0.4–16 μm spectral range in 6 channels (1 in the visible, 5 in the infrared), which allows the spectral resolution to be adapted from several tens to several hundreds, depending on the target brightness. The instrument will be mounted behind a 1.5 m class telescope, passively cooled to 50 K, with the instrument structure and optics passively cooled to ∼45 K. EChO will be placed in a grand halo orbit around L2. This orbit, in combination with an optimised thermal shield design, provides a highly stable thermal environment and a high degree of visibility of the sky, allowing several tens of targets to be observed repeatedly over the year. Both the baseline and alternative designs have been evaluated and no critical items with Technology Readiness Level (TRL) less than 4-5 have been identified. We have also undertaken a first-order cost and development plan analysis and find that EChO is easily compatible with the ESA M-class mission framework.