981 results for ‘Keep-out’ signal



Abstract:

The first part of this thesis combines Bolocam observations of the thermal Sunyaev-Zel’dovich (SZ) effect at 140 GHz with X-ray observations from Chandra, strong lensing data from the Hubble Space Telescope (HST), and weak lensing data from HST and Subaru to constrain parametric models for the distribution of dark and baryonic matter in a sample of six massive, dynamically relaxed galaxy clusters. For five of the six clusters, the full multiwavelength dataset is well described by a relatively simple model that assumes spherical symmetry, hydrostatic equilibrium, and entirely thermal pressure support. The multiwavelength analysis yields considerably better constraints on the total mass and concentration than analysis of any one dataset individually. The subsample of five galaxy clusters is used to place an upper limit on the fraction of pressure support in the intracluster medium (ICM) due to nonthermal processes, such as turbulent and bulk flows of the gas. We constrain the nonthermal pressure fraction at r500c to be less than 0.11 at 95% confidence, where r500c is the radius at which the average enclosed density is 500 times the critical density of the Universe. This is in tension with state-of-the-art hydrodynamical simulations, which predict a nonthermal pressure fraction of approximately 0.25 at r500c for the clusters in this sample.
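
For reference, the definition of r500c used above and the quoted constraint can be written explicitly (this simply restates the abstract in symbols):

\[
  \frac{M(<r_{500c})}{\tfrac{4}{3}\pi r_{500c}^{3}} \;=\; 500\,\rho_{c}(z),
  \qquad
  \frac{P_{\mathrm{nonthermal}}}{P_{\mathrm{total}}}\bigg|_{r_{500c}} \;<\; 0.11
  \quad (95\%\ \text{confidence}),
\]

where \(\rho_{c}(z)\) is the critical density of the Universe at the cluster redshift.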

The second part of this thesis focuses on the characterization of the Multiwavelength Sub/millimeter Inductance Camera (MUSIC), a photometric imaging camera that was commissioned at the Caltech Submillimeter Observatory (CSO) in 2012. MUSIC is designed to have a 14 arcminute, diffraction-limited field of view populated with 576 spatial pixels that are simultaneously sensitive to four bands at 150, 220, 290, and 350 GHz. It is well suited for studies of dusty star-forming galaxies, galaxy clusters via the SZ effect, and galactic star formation.

MUSIC employs a number of novel detector technologies: broadband phased arrays of slot dipole antennas for beam formation, on-chip lumped-element filters for band definition, and Microwave Kinetic Inductance Detectors (MKIDs) for transduction of incoming light into an electrical signal. MKIDs are superconducting micro-resonators coupled to a feedline. Incoming light breaks apart Cooper pairs in the superconductor, causing a change in the quality factor and resonant frequency of the resonator. This is read out as amplitude and phase modulation of a microwave probe signal centered on the resonant frequency. By tuning each resonator to a slightly different frequency and sending out a superposition of probe signals, hundreds of detectors can be read out on a single feedline. This natural capability for large-scale, frequency-domain multiplexing, combined with relatively simple fabrication, makes MKIDs a promising low-temperature detector for future kilopixel sub/millimeter instruments. There is also considerable interest in using MKIDs for optical through near-infrared spectrophotometry due to their fast (microsecond) response time and modest energy resolution. In order to optimize the MKID design to obtain suitable performance for any particular application, it is critical to have a well-understood physical model for the detectors and the sources of noise to which they are susceptible.

MUSIC has collected many hours of on-sky data with over 1000 MKIDs. This work studies the performance of the detectors in the context of one such physical model. Chapter 2 describes the theoretical model for the responsivity and noise of MKIDs. Chapter 3 outlines the set of measurements used to calibrate this model for the MUSIC detectors. Chapter 4 presents the resulting estimates of the spectral response, optical efficiency, and on-sky loading. The measured detector response to Uranus is compared to the calibrated model prediction in order to determine how well the model describes the propagation of signal through the full instrument. Chapter 5 examines the noise present in the detector timestreams during recent science observations. Noise due to fluctuations in atmospheric emission dominates at long timescales (frequencies below 0.5 Hz). Fluctuations in the amplitude and phase of the microwave probe signal due to the readout electronics contribute significant 1/f and drift-type noise at shorter timescales. The atmospheric noise is removed by creating a template for the fluctuations in atmospheric emission from weighted averages of the detector timestreams. The electronics noise is removed by using probe signals centered off resonance to construct templates for the amplitude and phase fluctuations. The algorithms that perform the atmospheric and electronic noise removal are described. After removal, we find good agreement between the observed residual noise and our expectation for intrinsic detector noise over a significant fraction of the signal bandwidth.
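
As an illustration of the atmospheric removal step described above, the sketch below (hypothetical array shapes and function names; not MUSIC's actual pipeline code) builds a common-mode template from a weighted average of the detector timestreams and fits it out of each detector.

```python
import numpy as np

def remove_atmosphere(timestreams, weights=None):
    """Subtract a common-mode (atmospheric) template from detector timestreams.

    timestreams : (n_detectors, n_samples) array of calibrated detector data.
    weights     : per-detector weights (e.g. inverse noise variance); uniform if None.

    Returns the residual timestreams and the fitted template. This is a minimal
    sketch of the weighted-average template method, not the MUSIC analysis code.
    """
    d = np.asarray(timestreams, dtype=float)
    if weights is None:
        weights = np.ones(d.shape[0])
    w = weights / weights.sum()

    # Template: weighted average over detectors at each time sample.
    template = np.einsum("i,ij->j", w, d)

    # Fit the template amplitude to each detector (least squares) and subtract.
    coeffs = d @ template / (template @ template)
    residual = d - np.outer(coeffs, template)
    return residual, template
```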


Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system. (1) Obtaining an OCT image is not easy: it either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects to study. (2) Interpreting an OCT image is also hard. This challenge is more profound than it appears. For instance, it takes a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, and even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate, on a common desktop computer, as many OCT images of objects with arbitrary structure and shape as we want, but also provides the underlying ground truth of the simulated images, because we specify that ground truth at the start of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, which are explained in detail later in the thesis.
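
To illustrate one of the ingredients mentioned above, the following sketch parallelizes the independent A-scans that make up a B-scan image; `simulate_ascan` is a placeholder for the Monte Carlo photon-transport kernel, not the simulator developed in the thesis.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def simulate_ascan(column, tissue, seed=0):
    """Placeholder for the Monte Carlo kernel: returns one depth profile.

    In the real simulator each photon would be launched, traced through the
    voxel mesh with importance sampling / photon splitting, and its
    back-scattered contribution accumulated into this A-scan.
    """
    rng = np.random.default_rng(seed + column)
    return rng.random(tissue["n_depth_bins"])  # dummy signal for the sketch

def simulate_oct_image(tissue, n_columns=256, workers=8):
    """Run the independent A-scans in parallel and assemble the B-scan."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        ascans = list(pool.map(simulate_ascan, range(n_columns),
                               [tissue] * n_columns))
    return np.stack(ascans, axis=1)  # shape: (depth, n_columns)

if __name__ == "__main__":
    image = simulate_oct_image({"n_depth_bins": 512})
    print(image.shape)
```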

Next we address the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image with more than 98% accuracy, and for more complicated structures (e.g., a multi-layered brain structure) we reach about 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output, multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. At prediction time, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure to predict the length of the different layers and thereby reconstruct the ground truth of the image, as sketched below. We also demonstrate that ideas from Deep Learning can further improve the performance.
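
A minimal sketch of the two-stage "committee of experts" idea described above, using generic scikit-learn models and hypothetical data shapes (this is not the thesis code): a classifier first predicts the structure label of an OCT image, and a regressor trained for that structure then predicts the per-layer dimensions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

class HierarchicalOCTModel:
    """Stage 1: classify the structure. Stage 2: structure-specific regression."""

    def __init__(self):
        self.classifier = RandomForestClassifier(n_estimators=200)
        self.regressors = {}  # structure label -> fitted regressor

    def fit(self, images, structure_labels, layer_targets):
        # images: (n_samples, H, W); structure_labels: (n_samples,)
        # layer_targets: dict mapping each label to a (n_label_samples, n_layers)
        # array, row-aligned with the images of that structure.
        X = images.reshape(len(images), -1)
        self.classifier.fit(X, structure_labels)
        for label in np.unique(structure_labels):
            mask = structure_labels == label
            reg = RandomForestRegressor(n_estimators=200)
            reg.fit(X[mask], layer_targets[label])
            self.regressors[label] = reg
        return self

    def predict(self, image):
        x = image.reshape(1, -1)
        label = self.classifier.predict(x)[0]           # which structure?
        layers = self.regressors[label].predict(x)[0]   # per-layer sizes
        return label, layers
```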

It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., the greater depths), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals that make up the lower half of the image are weak, messy, and uninterpretable to the human eye, they still carry enough information for a well-trained machine learning model to recover precisely the true structure of the object being imaged. This is another case in which Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting this kind of task would require fully annotated OCT images, and a lot of them (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.


Abstract:

Optical frequency-domain phase conjugation (FDPC) is based on phase conjugation of the spectrum of an input signal, which is equivalent to phase conjugation combined with time reversal of the signal's temporal envelope. The use of FDPC to control polarization-signal distortion in birefringent optical fiber systems is proposed. The evolution of polarization signals in a system using midway FDPC is analyzed theoretically and simulated numerically. It is shown that the distortion of polarization signals can be controlled effectively by FDPC, and that the impairments due to dispersion and nonlinear effects can be suppressed simultaneously.
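
The equivalence stated above is the standard Fourier-transform identity (written here for a generic input envelope, not the paper's specific system model):

\[
  E_{\mathrm{out}}(t) \;=\; \int \tilde{E}_{\mathrm{in}}^{*}(\omega)\, e^{-i\omega t}\, d\omega \;=\; E_{\mathrm{in}}^{*}(-t),
\]

where \(\tilde{E}_{\mathrm{in}}(\omega)\) is the spectrum of the input envelope \(E_{\mathrm{in}}(t)\); conjugating the spectrum therefore conjugates and time-reverses the envelope itself.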


Abstract:

In the first section of this thesis, two-dimensional properties of the human eye movement control system were studied. The vertical-horizontal interaction was investigated by using a two-dimensional target motion consisting of a sinusoid in one direction (vertical or horizontal) and low-pass filtered Gaussian random motion of variable bandwidth (and hence information content) in the orthogonal direction. It was found that the random motion reduced the efficiency of the sinusoidal tracking. However, the sinusoidal tracking was only slightly dependent on the bandwidth of the random motion. Thus the system should be thought of as consisting of two independent channels with a small amount of mutual cross-talk.

These target motions were then rotated to discover whether or not the system is capable of recognizing the two-component nature of the target motion. That is, the sinusoid was presented along an oblique line (neither vertical nor horizontal) with the random motion orthogonal to it. The system did not simply track the vertical and horizontal components of motion, but rotated its frame of reference so that its two tracking channels coincided with the directions of the two target motion components. This recognition occurred even when the two orthogonal motions were both random, but with different bandwidths.
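
As an illustration of the kind of stimulus described above (hypothetical parameter values; not the original apparatus code), the sketch below generates a sinusoid along one axis, band-limited Gaussian random motion along the orthogonal axis, and optionally rotates the pair into an oblique frame.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def make_target_motion(duration=20.0, fs=200.0, sine_freq=0.5,
                       noise_bandwidth=1.0, angle_deg=0.0, seed=0):
    """Two-component target motion: sinusoid + low-pass Gaussian random motion.

    angle_deg rotates the component axes away from vertical/horizontal,
    as in the oblique-tracking experiments. All values are illustrative.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, 1.0 / fs)

    sinusoid = np.sin(2.0 * np.pi * sine_freq * t)

    # The bandwidth of the low-pass filtered noise sets the information
    # content of the orthogonal component.
    b, a = butter(4, noise_bandwidth / (fs / 2.0))
    noise = filtfilt(b, a, rng.standard_normal(t.size))
    noise /= np.std(noise)

    # Rotate the (sinusoid, noise) components into the screen frame.
    theta = np.deg2rad(angle_deg)
    x = np.cos(theta) * sinusoid - np.sin(theta) * noise
    y = np.sin(theta) * sinusoid + np.cos(theta) * noise
    return t, x, y
```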

In the second section, time delays, prediction, and power spectra were examined. Time delays were calculated in response to various periodic signals, to narrow-band Gaussian random motions of various bandwidths, and to sinusoids. It was demonstrated that prediction occurred only when the target motion was periodic, and only if the harmonic content was such that the signal was sufficiently narrow-band. It appears as if general periodic motions are split into predictive and non-predictive components.

For unpredictable motions, the relationship between the time delay and the average speed of the retinal image was linear. Based on this, I proposed a model explaining the time delays for both random and periodic motions. My experiments did not prove that the system is a sampled-data system, or that it is continuous. However, the model can be interpreted as representative of a sampled-data system whose sampling interval is a function of the target motion.
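
One way to write the linear relationship reported above (the symbols are introduced here only for illustration) is

\[
  \tau \;=\; \tau_{0} + k\,\bar{v}_{\mathrm{ret}},
\]

where \(\tau\) is the tracking time delay, \(\bar{v}_{\mathrm{ret}}\) is the average speed of the retinal image, and \(\tau_{0}\), \(k\) are constants fitted to the unpredictable-motion data.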

It was shown that increasing the bandwidth of the low-pass filtered Gaussian random motion resulted in an increase of the eye movement bandwidth. Some properties of the eyeball-muscle dynamics and the extraocular muscle "active state tension" were derived.


Abstract:

We present what we believe is a novel technique based on the moiré effect for fully diagnosing the beam quality of an x-ray laser. Using Fresnel diffraction theory, we investigated the intensity profile of the moiré pattern when a general paraxial beam illuminates a pair of Ronchi gratings in the quasi-far field. Two formulas were derived to determine the beam quality factor M^2 and the effective radius of curvature R_e from the moiré pattern. On the basis of these results, the far-field divergence, the waist location, and the waist radius can be calculated in turn. Finally, we verified the approach by numerical simulation. (C) 1999 Optical Society of America [S0740-3232(99)01502-1].
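
For context, the standard relation connecting the quantities mentioned above (the textbook definition, not the moiré-specific formulas derived in the paper) is

\[
  M^{2} \;=\; \frac{\pi\, w_{0}\, \theta}{\lambda},
\]

where \(w_{0}\) is the beam waist radius, \(\theta\) the far-field half-angle divergence, and \(\lambda\) the wavelength; knowledge of \(M^{2}\) and the effective radius of curvature therefore fixes the divergence, waist size, and waist location.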


Abstract:

We present a simple and practical method for single-ended distributed fiber temperature measurement using microwave (11 GHz) coherent detection and the instantaneous frequency measurement (IFM) technique to detect the spontaneous Brillouin backscattered signal, in which a specially designed RF bandpass filter at 11 GHz is used as a frequency discriminator to transform frequency shift into intensity fluctuation. A Brillouin temperature signal can be obtained at 11 GHz over a sensing length of 10 km. The sensitivity of the detected power to temperature, induced by the frequency shift, is measured to be 2.66%/K. (c) 2007 Society of Photo-Optical Instrumentation Engineers.
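
Reading the quoted sensitivity at face value, a temperature change maps to a relative change in detected power as (illustrative arithmetic only)

\[
  \frac{\Delta P}{P} \;\approx\; 0.0266\,\mathrm{K^{-1}} \times \Delta T,
\]

so, for example, a measured 5.3% change in detected power would correspond to a temperature change of roughly 2 K.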


Abstract:

Analytic propagation expressions for pulsed Gaussian beams are derived using the complex-amplitude-envelope representation and the complex-analytic-signal representation. Numerical calculations are given to illustrate the differences between them. The results show that the major difference is that a singularity appears in the beam obtained with the complex-amplitude-envelope representation. It is also found that the singularity lies near the propagation axis in the broadband case and far from the propagation axis in the narrowband case. The critical condition determining which representation should be adopted in studying pulsed Gaussian beams is also given. (C) 2004 Elsevier B.V. All rights reserved.
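
For clarity, the two representations being compared can be written in their standard forms (generic definitions, not the paper's specific propagation expressions):

\[
  E(t) \;=\; A(t)\,e^{-i\omega_{0} t}
  \qquad \text{(complex amplitude envelope, carrier frequency } \omega_{0}\text{)},
\]
\[
  E^{(+)}(t) \;=\; \int_{0}^{\infty} \tilde{E}(\omega)\,e^{-i\omega t}\,d\omega
  \qquad \text{(complex analytic signal, positive frequencies only)}.
\]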



Abstract:

Recently, the use of composite steel-concrete structures has been gaining ground in construction, mainly because of the time saved during execution. In addition, this solution combines steel and concrete so as to exploit the best structural characteristics of each material: the tensile strength of steel and the compressive strength of concrete. The transfer of forces between the two materials strongly influences the performance of a composite structure, and shear connectors are commonly used at the interface between them. Eurocode 4 defines an experimental test, known as the push-out test, to determine the strength and ductility of shear connectors. Connector performance is influenced by the compressive strength of the concrete, the dimensions and reinforcement ratio of the concrete slab, the dimensions of the steel profile, the arrangement and geometry of the connectors, and the properties of the steels used in the connector, the profile, and the reinforcing bars. A large number of variables therefore influence the test. This work presents the development of a finite element model, based on the ANSYS program, for the simulation of push-out tests. The numerical results presented here were calibrated against experimental push-out results available in the literature for headed stud connectors and perfobond connectors. The latter exhibit high strength and are influenced by numerous factors, such as the number and diameter of the holes in the connector and whether extra reinforcing bars are placed through these holes.


Abstract:

The aim of this study was to perform a three-dimensional evaluation of the surface roughness of three types of fiber posts (DT Light Post, FRC Postec Plus, and Transluma Post) subjected to different surface treatments, and to evaluate the effects of these pretreatments on the bond strength to a dual-cure composite, Biscore. The surface treatments were: immersion in hydrofluoric acid, sandblasting with 50 μm aluminum oxide, immersion in hydrogen peroxide, sandblasting with 50 μm aluminum oxide followed by immersion in hydrofluoric acid, and sandblasting with 50 μm aluminum oxide followed by immersion in hydrogen peroxide. In experiment 1, 75 posts were divided into 3 groups (n = 25) according to manufacturer and subdivided into five subgroups. Surface roughness was measured using a three-dimensional profilometer and analyzed with 3D analysis software. Roughness values were obtained before and after the different surface treatments over the same area of each specimen. For experiment 2, the same specimens, groups, and subgroups of experiment 1 were used, with the addition of a control subgroup (n = 90), and the bond strength to the dual-cure composite Biscore was measured with a push-out test. Bond strength was measured in a universal testing machine with an SLBL-5kN load cell at a crosshead speed of 0.5 mm/min. The results of experiment 1 were analyzed with Student's t-test. Sandblasting, and sandblasting followed by immersion in hydrofluoric acid, produced a statistically significant increase in roughness, although only sandblasting alone produced a significant increase in roughness values. The results of experiment 2 were analyzed with a one-sided t-test with unknown variance. It was concluded that sandblasting with 50 μm aluminum oxide at a distance of 30 mm and 2.5 bar pressure for 5 seconds was sufficient to modify the topography of the glass and quartz fiber posts, and that this was the only surface treatment that increased the bond strength of the Biscore composite to the DT Light Post and Transluma Post posts. The FRC Postec Plus posts did not show a statistically significant increase in bond strength in any group.


Abstract:

One of the most controversial questions in academic writing is whether it is acceptable to use first-person pronouns in a scientific paper. Many professors discourage their students from using them, favoring a more passive tone, and thus causing novices to avoid inserting themselves into their texts in an expert-like manner. Abundant recent research, however, has shown that negotiation of identity is possible in academic prose, and there is no need for a paper to be void of an authorial identity. Because in the course of the English Studies Degree we have received opposing advice on the use of I, the aim of this dissertation is to throw some light on this vexed issue. To this end, I compiled a corpus of 16 Research Articles (RAs) comprising two sub-corpora, one featuring Linguistics RAs and the other Literature RAs, each in turn consisting of articles written by American and British authors. I then searched for occurrences of I, me, my, mine, we, us, our, and ours, and studied their frequency, rhetorical functions, and distribution across each paper. The results show that academic writing is no longer the faceless prose it used to be, for I is highly used in both disciplines and both varieties of English. Concerning functions, the most typical roles were the use of I to take credit for the writer's research process, along with those involving plural forms. With respect to spatial disposition, all sections welcomed first-person pronouns, but the Method and Results/Discussion sections seem to stimulate their appearance. On the basis of these findings, I suggest that an L2 writing pedagogy mindful not only of language proficiency but also of the students' own identity may have a beneficial effect on the composition of their texts.
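
A minimal sketch of the counting step behind this kind of corpus study (hypothetical file layout and normalization; not the dissertation's actual procedure):

```python
import re
from collections import Counter
from pathlib import Path

# First-person forms searched for in the corpus.
PRONOUNS = ["i", "me", "my", "mine", "we", "us", "our", "ours"]
PATTERN = re.compile(r"\b(" + "|".join(PRONOUNS) + r")\b", re.IGNORECASE)

def pronoun_frequencies(corpus_dir):
    """Count first-person pronouns per article, normalized per 1,000 words."""
    results = {}
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8")
        n_words = len(text.split())
        counts = Counter(m.group(1).lower() for m in PATTERN.finditer(text))
        results[path.name] = {p: 1000 * counts[p] / n_words for p in PRONOUNS}
    return results

if __name__ == "__main__":
    # Hypothetical directory holding one plain-text file per research article.
    for article, freqs in pronoun_frequencies("linguistics_RAs").items():
        print(article, freqs)
```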