926 results for Sub-Pixel
Abstract:
A key to the larvae of the genera of the sub-family Orthocladiinae, from Larvae and Pupae of Midges of the Sub-family Orthocladiinae. Parts of the key refer to the rest of the publication, which is not included in this partial translation.
Abstract:
Based on the two-center band-transport model, two-center nonvolatile holographic recording in (Ce,Cu):LiNbO3 crystals is studied theoretically and optimized. The microscopic parameters of the (Ce,Cu):LiNbO3 crystal are derived, and the holographic recording process is simulated numerically by rigorously solving the two-center band-transport equations. The influence of the recording and sensitizing light intensities, the Ce and Cu doping concentrations, and the crystal's microscopic parameters on two-center holographic recording in (Ce,Cu):LiNbO3 is analyzed. The dominant factor in achieving both high diffraction efficiency and high fixing efficiency in nonvolatile holographic recording in (Ce,Cu):LiNbO3 is found to be the deep Cu center, which builds up a strong space-charge field during recording. The numerical simulation results are verified experimentally.
Abstract:
Near-infrared nonvolatile holographic recording is achieved in doubly doped LiNbO3:Fe:Rh crystals using a two-center recording scheme, and the holographic recording performance of LiNbO3:Fe:Rh at wavelengths of 633 nm, 752 nm, and 799 nm is studied. The results show that, with near-infrared recording light, the dependence of the recording sensitivity on the sensitizing light intensity differs from that of two-center recording at short wavelengths. Comparison with the near-infrared recording performance of LiNbO3:Fe:Mn and other conventional doubly doped lithium niobate crystals shows that co-doping with Fe and Rh enhances the crystal's absorption of near-infrared light and yields a higher photovoltaic coefficient for the shallow Fe center, which makes photorefractive holographic recording in the near-infrared band possible in LiNbO3:Fe:Rh crystals.
Abstract:
The first part of this thesis combines Bolocam observations of the thermal Sunyaev-Zel'dovich (SZ) effect at 140 GHz with X-ray observations from Chandra, strong lensing data from the Hubble Space Telescope (HST), and weak lensing data from HST and Subaru to constrain parametric models for the distribution of dark and baryonic matter in a sample of six massive, dynamically relaxed galaxy clusters. For five of the six clusters, the full multiwavelength dataset is well described by a relatively simple model that assumes spherical symmetry, hydrostatic equilibrium, and entirely thermal pressure support. The multiwavelength analysis yields considerably better constraints on the total mass and concentration than analysis of any one dataset individually. The subsample of five galaxy clusters is used to place an upper limit on the fraction of pressure support in the intracluster medium (ICM) due to nonthermal processes, such as turbulent and bulk flows of the gas. We constrain the nonthermal pressure fraction at r500c to be less than 0.11 at 95% confidence, where r500c is the radius at which the average enclosed density is 500 times the critical density of the Universe. This is in tension with state-of-the-art hydrodynamical simulations, which predict a nonthermal pressure fraction of approximately 0.25 at r500c for the clusters in this sample.
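To make the role of the nonthermal pressure fraction concrete, here is a minimal sketch of the standard hydrostatic relation that such analyses rest on. The notation f_nt for the nonthermal fraction is an assumption of this sketch, not notation quoted from the thesis:

```latex
% Hydrostatic equilibrium balances the total pressure gradient
% against gravity; writing P_th = (1 - f_nt) P_tot shows that the
% mass inferred from thermal pressure alone is biased low if f_nt > 0.
\begin{align}
  \frac{dP_{\mathrm{tot}}}{dr} &= -\frac{G\,M(<r)\,\rho_{\mathrm{gas}}(r)}{r^{2}},\\
  M(<r) &= -\frac{r^{2}}{G\,\rho_{\mathrm{gas}}(r)}\,
            \frac{d}{dr}\!\left[\frac{P_{\mathrm{th}}(r)}{1 - f_{\mathrm{nt}}(r)}\right].
\end{align}
```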
The second part of this thesis focuses on the characterization of the Multiwavelength Sub/millimeter Inductance Camera (MUSIC), a photometric imaging camera that was commissioned at the Caltech Submillimeter Observatory (CSO) in 2012. MUSIC is designed to have a 14-arcminute, diffraction-limited field of view populated with 576 spatial pixels that are simultaneously sensitive to four bands at 150, 220, 290, and 350 GHz. It is well-suited for studies of dusty star-forming galaxies, galaxy clusters via the SZ effect, and galactic star formation. MUSIC employs a number of novel detector technologies: broadband phased arrays of slot dipole antennas for beam formation, on-chip lumped-element filters for band definition, and Microwave Kinetic Inductance Detectors (MKIDs) for transduction of incoming light into an electrical signal. MKIDs are superconducting micro-resonators coupled to a feedline. Incoming light breaks apart Cooper pairs in the superconductor, causing a change in the quality factor and resonant frequency of the resonator. This is read out as amplitude and phase modulation of a microwave probe signal centered on the resonant frequency. By tuning each resonator to a slightly different frequency and sending out a superposition of probe signals, hundreds of detectors can be read out on a single feedline. This natural capability for large-scale, frequency-domain multiplexing, combined with relatively simple fabrication, makes MKIDs a promising low-temperature detector for future kilopixel sub/millimeter instruments. There is also considerable interest in using MKIDs for optical through near-infrared spectrophotometry due to their fast microsecond response time and modest energy resolution. In order to optimize the MKID design to obtain suitable performance for any particular application, it is critical to have a well-understood physical model for the detectors and the sources of noise to which they are susceptible. MUSIC has collected many hours of on-sky data with over 1000 MKIDs. This work studies the performance of the detectors in the context of one such physical model. Chapter 2 describes the theoretical model for the responsivity and noise of MKIDs. Chapter 3 outlines the set of measurements used to calibrate this model for the MUSIC detectors. Chapter 4 presents the resulting estimates of the spectral response, optical efficiency, and on-sky loading. The measured detector response to Uranus is compared to the calibrated model prediction in order to determine how well the model describes the propagation of signal through the full instrument. Chapter 5 examines the noise present in the detector timestreams during recent science observations. Noise due to fluctuations in atmospheric emission dominates at long timescales (frequencies below 0.5 Hz). Fluctuations in the amplitude and phase of the microwave probe signal due to the readout electronics contribute significant 1/f and drift-type noise at shorter timescales. The atmospheric noise is removed by creating a template for the fluctuations in atmospheric emission from weighted averages of the detector timestreams. The electronics noise is removed by using probe signals centered off-resonance to construct templates for the amplitude and phase fluctuations. The algorithms that perform the atmospheric and electronic noise removal are described. After removal, we find good agreement between the observed residual noise and our expectation for intrinsic detector noise over a significant fraction of the signal bandwidth.
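As an illustration of the template-based atmospheric noise removal described above, the following is a minimal sketch in Python/NumPy. The function name, the weighted-average template, and the per-detector least-squares coupling fit are assumptions of this sketch, not the actual MUSIC pipeline:

```python
import numpy as np

def remove_common_mode(timestreams, weights=None):
    """Subtract a common-mode template from detector timestreams.

    timestreams : (n_detectors, n_samples) array of detector data.
    weights     : per-detector weights for building the template
                  (e.g., inverse noise variance); uniform if None.
    """
    d = np.asarray(timestreams, dtype=float)
    if weights is None:
        weights = np.ones(d.shape[0])
    w = weights / weights.sum()
    # Template: weighted average over detectors at each time sample.
    template = w @ d
    # Fit a per-detector coupling coefficient to the template,
    # then subtract the best-fit scaled template from each detector.
    coeff = d @ template / (template @ template)
    return d - np.outer(coeff, template)

# Toy example: 100 detectors seeing a shared slow "atmospheric" drift
# with per-detector coupling, plus independent white noise.
rng = np.random.default_rng(0)
atm = np.cumsum(rng.normal(size=10_000))
data = np.outer(rng.uniform(0.8, 1.2, 100), atm)
data += rng.normal(size=(100, 10_000))
cleaned = remove_common_mode(data)
```

After subtraction, the residual timestreams are dominated by the independent per-detector noise, which is the behavior the thesis reports for the real pipeline over much of the signal bandwidth.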
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects to study. (2) Interpreting an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only lets us efficiently generate as many OCT images of objects of arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify the structures at the start of each simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, which are explained in detail later in the thesis.
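To illustrate why importance sampling is so effective for weak, deep-tissue signals, here is a minimal toy sketch in Python: a single free-path draw in one dimension, not the thesis's actual OCT simulator, and all coefficient values are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_t = 10.0   # attenuation coefficient (1/mm), assumed value
z0 = 2.0      # target depth (mm); true P(reach z0) = exp(-mu_t*z0) ~ 2e-9
n = 100_000

# Naive Monte Carlo: sample free paths from Exp(mu_t). Essentially no
# sample ever reaches z0, so the estimate is ~0 with huge variance.
s = rng.exponential(1.0 / mu_t, n)
naive = np.mean(s > z0)

# Importance sampling: draw from a longer-tailed Exp(mu_b) so that deep
# paths are common, then weight each sample by the likelihood ratio
# p_true(s) / p_biased(s) to keep the estimator unbiased.
mu_b = 1.0 / z0
s_b = rng.exponential(1.0 / mu_b, n)
w = (mu_t * np.exp(-mu_t * s_b)) / (mu_b * np.exp(-mu_b * s_b))
biased = np.mean(w * (s_b > z0))

print(naive, biased, np.exp(-mu_t * z0))
```

The naive estimate is almost always exactly zero, while the weighted estimate recovers the true value of about 2e-9 to within a few percent from the same number of samples; the same principle, applied to scattering directions rather than path lengths, is what makes deep OCT signals tractable in simulation.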
Next we turn to the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. Solving this problem would let us interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better than that. For simple structures we are able to reconstruct the ground truth of an OCT image with more than 98% accuracy, and for more complicated structures (e.g., a multi-layered brain structure) with about 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing effectively unlimited data in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure, which predicts the thickness of the different layers and thereby reconstructs the ground truth of the image, as sketched below. We also demonstrate that ideas from Deep Learning can further improve the performance.
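A minimal sketch of the classify-then-regress hierarchy on synthetic stand-in data. The scikit-learn random forests are an assumption of this sketch, chosen only for illustration; the feature vectors, class labels, and layer thicknesses are all made up:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(2)

# Synthetic stand-in data: each row is a feature vector for one image,
# with a structure class and per-layer thicknesses (all hypothetical).
n, n_features, n_classes, n_layers = 2000, 64, 3, 4
X = rng.normal(size=(n, n_features))
y_class = rng.integers(0, n_classes, size=n)
y_thick = rng.uniform(0.1, 1.0, size=(n, n_layers)) + y_class[:, None]

# Stage 1: one classifier decides which structure an image contains.
clf = RandomForestClassifier(n_estimators=50).fit(X, y_class)

# Stage 2: one regressor per structure class (committee of experts),
# each trained only on images of its own structure.
experts = {
    c: RandomForestRegressor(n_estimators=50).fit(X[y_class == c],
                                                  y_thick[y_class == c])
    for c in range(n_classes)
}

def predict(x):
    """Route one image through the hierarchy: class first, then layers."""
    c = clf.predict(x.reshape(1, -1))[0]
    return c, experts[c].predict(x.reshape(1, -1))[0]

structure, thicknesses = predict(X[0])
```

Training each expert only on its own structure is the design choice the thesis describes: the regression task becomes much easier once the layer count and types are fixed by the classifier.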
It is worth pointing out that solving the inverse problem automatically improves the imaging depth: previously the lower half of an OCT image (i.e., greater depth) could hardly be seen, but it now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task would require a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible to obtain without a powerful simulation tool like the one developed in this thesis.
Abstract:
Sub-lethal toxicity tests, such as the scope-for-growth test, reveal simple relationships between measures of contaminant concentration and effects on respiratory and feeding physiology. Simple models are presented to investigate the potential impact of different mechanisms of chronic sub-lethal toxicity on these physiological processes. Since environmental quality is variable, even in unimpacted environments, toxicants may have disproportionately greater impacts in poor-quality than in higher-quality environments. The models illustrate the implications of different degrees and mechanisms of toxicity in response to variability in the quality of the feeding environment and in standard metabolic rate. The models suggest that the relationship between the measured degree of toxic stress and the maintenance ration required to maintain zero scope for growth may be highly nonlinear, as illustrated below. In addition, it may be possible to define critical levels of sub-lethal toxic effect above which no environment is of sufficient quality to permit prolonged survival.
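A minimal numerical sketch of how such nonlinearity can arise. The linear scope-for-growth model below and its parameter values are illustrative assumptions, not the paper's models:

```python
def maintenance_ration(t_f, t_m, a=0.6, m=1.0):
    """Ration giving zero scope for growth under toxic stress.

    Assumed toy model: SFG(R) = a * (1 - t_f) * R - m * (1 + t_m)
      a   : assimilation efficiency (assumed value)
      m   : standard metabolic rate (assumed units)
      t_f : fractional inhibition of feeding/absorption by the toxicant
      t_m : fractional elevation of maintenance metabolism
    Setting SFG = 0 gives R* = m * (1 + t_m) / (a * (1 - t_f)).
    """
    return m * (1.0 + t_m) / (a * (1.0 - t_f))

# Even with linear toxic effects on feeding and metabolism, the required
# ration rises hyperbolically with feeding inhibition: past some stress
# level no realistic feeding environment can sustain zero growth.
for t_f in (0.0, 0.25, 0.5, 0.75, 0.9):
    print(t_f, maintenance_ration(t_f, t_m=0.2))
```

The divergence of R* as t_f approaches 1 is one simple mechanism for both the nonlinearity and the critical stress level described above.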
Abstract:
Sepetiba Bay, located between Guanabara Bay and Ilha Grande Bay in the State of Rio de Janeiro, lies in a strategically important setting for the economic development of the state. This is due to growing population concentration, which is directly related to tourism and to the presence of ports and industrial areas. It is therefore necessary to study the bay's geological structure and sedimentary dynamics, both to understand its evolution over time and to allow more rational use of the area. Using high-resolution shallow seismics and side-scan sonar, together with earlier surface sediment sampling data, the main objective of this work is to analyze the bay's Holocene geology. The subsurface investigation of the structural and sedimentary geology of the bay, through the interpretation of 9 seismic profiles based on the identification of different echo-texture types, revealed different sedimentary packages deposited during the Holocene. In all, 15 echo-texture types were found, making up 14 sedimentary layers, which were assigned to 4 groups according to their distribution. The surface investigation, using the sonographic records and based on the different degrees of acoustic backscattering, calibrated against the earlier direct-sampling data, identified 6 distinct sonographic patterns. From these results a new textural distribution map of the surface sediments of Sepetiba Bay was produced. Finally, by correlating the shallow seismic data with the sonographic data, it was also possible to suggest the probable existence of neotectonism in the study area.
Abstract:
Because of the asymmetry of the a-cut Nd:YVO4 crystal, a laser-diode (LD) end-pumped Nd:YVO4 solid-state laser differs from an Nd:YAG laser, and its output beam is often asymmetric. The thermal effects in the crystal of an LD end-pumped a-cut Nd:YVO4 solid-state laser, including the temperature distribution, internal stress, and resulting deformation, are analyzed with the finite element method. The analysis shows that end pumping an a-cut Nd:YVO4 crystal produces an ellipsoidal thermal-lens effect. Methods for balancing the thermal-lens asymmetry are proposed from both the structural and the pumping points of view, and experiments verify the feasibility of these methods.
Abstract:
The Maruim Member of the Riachuelo Formation (late Albian), in the onshore part of the Sergipe Sub-basin, contains shallow-water facies composed mainly of oncolitic-oolitic rudstone/grainstone with low bioclast content and variety. Correlation of the outcrops and detailed petrographic analysis, involving cathodoluminescence, scanning electron microscopy (SEM), isotopic studies, and elemental chemical analysis, allowed the reconstruction of the diagenetic history of the studied interval. The carbonate rocks of the Maruim Member are pervasively affected by diagenetic processes associated with the eogenetic, mesogenetic, and telogenetic stages. Dolomitization was one of the main diagenetic products observed in the eogenetic stage and totally or partially replaces the limestones of the Maruim Member. Dolomitization is concentrated at the top of the depositional cycles described in the study area and decreases gradually toward their base. The relationships between porosity and dolomitization were studied by comparing the dolomite crystal fabrics preserved in the studied outcrops. The isotopic results of the dolomites indicate that dolomitization occurred by brine reflux in a slightly hypersaline (penesaline) environment. The areas closest to the contact with the brine, the source of the dolomitizing fluids, show less porosity development, since overdolomitization processes would have occurred in those regions (Carapeba quarry). In these areas the carbon and oxygen isotopic signature is strongly positive (δ13C ranges from 2.37 to 4.83 and δ18O from 0.61 to 3.92), indicating that late diagenetic processes did not significantly alter the original isotopic signature. The dolomites formed in the areas away from the brine (Massapé, Inorcal I, Inorcal II, Inhumas, and Santo Antônio quarries) show greater porosity development and have more negative carbon and oxygen isotopic compositions (δ13C ranges from -5.66 to 2.61 and δ18O from -4.25 to 0.38). The isotopic signature of the dolomites described in these quarries has also been altered by dedolomitization processes. The diagenetic cements precipitated during the mesogenetic stage were the main cause of the obliteration of the primary and secondary porosity of the Maruim Member limestones. In addition, these late diagenetic cements calcitized the dolomites, partially closing their secondary porosity. The porosity of the carbonate rocks is also strongly reduced by mechanical and chemical compaction. Dissolution was the only process that generated secondary porosity in the telogenetic stage, though in very low percentages. The dolomitic facies show the greatest development of secondary porosity, as a consequence of dissolution processes in the telogenetic environment. Dissolution is among the last diagenetic events identified in the studied interval.