990 results for Sub-sampling
Abstract:
This research program consisted of three major component areas: (I) development of experimental design, (II) calibration of the trawl design, and (III) development of the foundation for stock assessment analysis. The products which have resulted from the program are indicated below. I. EXPERIMENTAL DESIGN: The study was successful in identifying spatial and temporal distribution characteristics of the several key species, and the relationships between catches of given species and the environmental and physical factors thought to influence species abundance by area within the mainstem of the Chesapeake Bay and its tributaries.
Abstract:
The first part of this thesis combines Bolocam observations of the thermal Sunyaev-Zel’dovich (SZ) effect at 140 GHz with X-ray observations from Chandra, strong lensing data from the Hubble Space Telescope (HST), and weak lensing data from HST and Subaru to constrain parametric models for the distribution of dark and baryonic matter in a sample of six massive, dynamically relaxed galaxy clusters. For five of the six clusters, the full multiwavelength dataset is well described by a relatively simple model that assumes spherical symmetry, hydrostatic equilibrium, and entirely thermal pressure support. The multiwavelength analysis yields considerably better constraints on the total mass and concentration than analysis of any one dataset individually. The subsample of five galaxy clusters is used to place an upper limit on the fraction of pressure support in the intracluster medium (ICM) due to nonthermal processes, such as turbulent and bulk flows of the gas. We constrain the nonthermal pressure fraction at r500c to be less than 0.11 at 95% confidence, where r500c refers to the radius at which the average enclosed density is 500 times the critical density of the Universe. This is in tension with state-of-the-art hydrodynamical simulations, which predict a nonthermal pressure fraction of approximately 0.25 at r500c for the clusters in this sample.
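As a hedged sketch of the standard relations underlying this kind of analysis (the notation here is illustrative, not taken from the thesis): hydrostatic equilibrium balances the total pressure gradient against gravity, and the nonthermal fraction is defined relative to the total pressure,

    \frac{dP_{\rm tot}}{dr} = -\rho_{\rm gas}(r)\,\frac{G\,M(<r)}{r^{2}},
    \qquad P_{\rm tot} = P_{\rm th} + P_{\rm nt},
    \qquad f_{\rm nt}(r) \equiv \frac{P_{\rm nt}(r)}{P_{\rm tot}(r)}.

Roughly speaking, a hydrostatic mass estimate that uses only the thermal pressure is biased low by a factor of about 1 - f_nt, so comparing hydrostatic masses from the X-ray/SZ data against lensing masses (which trace the total mass) is what allows f_nt(r500c) to be bounded.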
The second part of this thesis focuses on the characterization of the Multiwavelength Sub/millimeter Inductance Camera (MUSIC), a photometric imaging camera that was commissioned at the Caltech Submillimeter Observatory (CSO) in 2012. MUSIC is designed to have a 14 arcminute, diffraction-limited field of view populated with 576 spatial pixels that are simultaneously sensitive to four bands at 150, 220, 290, and 350 GHz. It is well suited for studies of dusty star-forming galaxies, galaxy clusters via the SZ effect, and galactic star formation. MUSIC employs a number of novel detector technologies: broadband phased arrays of slot dipole antennas for beam formation, on-chip lumped-element filters for band definition, and Microwave Kinetic Inductance Detectors (MKIDs) for transduction of incoming light into an electrical signal. MKIDs are superconducting micro-resonators coupled to a feedline. Incoming light breaks apart Cooper pairs in the superconductor, causing a change in the quality factor and resonant frequency of the resonator. This is read out as amplitude and phase modulation of a microwave probe signal centered on the resonant frequency. By tuning each resonator to a slightly different frequency and sending out a superposition of probe signals, hundreds of detectors can be read out on a single feedline. This natural capability for large-scale, frequency-domain multiplexing, combined with relatively simple fabrication, makes MKIDs a promising low-temperature detector for future kilopixel sub/millimeter instruments. There is also considerable interest in using MKIDs for optical through near-infrared spectrophotometry due to their fast microsecond response time and modest energy resolution. In order to optimize the MKID design to obtain suitable performance for any particular application, it is critical to have a well-understood physical model for the detectors and the sources of noise to which they are susceptible. MUSIC has collected many hours of on-sky data with over 1000 MKIDs. This work studies the performance of the detectors in the context of one such physical model. Chapter 2 describes the theoretical model for the responsivity and noise of MKIDs. Chapter 3 outlines the set of measurements used to calibrate this model for the MUSIC detectors. Chapter 4 presents the resulting estimates of the spectral response, optical efficiency, and on-sky loading. The measured detector response to Uranus is compared to the calibrated model prediction in order to determine how well the model describes the propagation of signal through the full instrument. Chapter 5 examines the noise present in the detector timestreams during recent science observations. Noise due to fluctuations in atmospheric emission dominates at long timescales (frequencies below 0.5 Hz). Fluctuations in the amplitude and phase of the microwave probe signal due to the readout electronics contribute significant 1/f and drift-type noise at shorter timescales. The atmospheric noise is removed by creating a template for the fluctuations in atmospheric emission from weighted averages of the detector timestreams. The electronics noise is removed by using probe signals centered off-resonance to construct templates for the amplitude and phase fluctuations. The algorithms that perform the atmospheric and electronic noise removal are described. After removal, we find good agreement between the observed residual noise and our expectation for intrinsic detector noise over a significant fraction of the signal bandwidth.
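As a rough illustration of the template-based removal described above (the function name, inverse-variance weighting, and single-template simplification are assumptions of this sketch, not MUSIC's actual pipeline), one can build a common-mode template from a weighted average of the detector timestreams, fit its amplitude per detector, and subtract it:

    import numpy as np

    def remove_common_mode(timestreams):
        """Subtract a common-mode (e.g., atmospheric) template from each
        detector timestream; timestreams has shape (n_detectors, n_samples)."""
        d = np.asarray(timestreams, dtype=float)
        # Weight detectors by inverse variance so noisy ones count less
        w = 1.0 / np.var(d, axis=1)
        w /= w.sum()
        # Template: weighted average over detectors at each time sample
        template = w @ d
        # Least-squares amplitude of the template in each detector:
        # a_i = <d_i, t> / <t, t>
        amps = d @ template / (template @ template)
        return d - np.outer(amps, template)

    # Synthetic check: 100 detectors sharing one slow atmospheric drift
    rng = np.random.default_rng(0)
    drift = 0.05 * np.cumsum(rng.normal(size=10_000))
    data = rng.normal(size=(100, 10_000)) + drift
    cleaned = remove_common_mode(data)

The off-resonance probe tones described in the abstract would play an analogous role for the electronics noise, supplying templates for the amplitude and phase fluctuations of the readout itself.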
Abstract:
The first bilateral study of methods of biological sampling and biological methods of water quality assessment took place during June 1977 at selected sampling sites in the catchment of the River Trent (UK). The study was arranged in accordance with the protocol established by the joint working group responsible for the Anglo-Soviet Environmental Agreement. The main purpose of the bilateral study in Nottingham was for some of the methods of sampling and biological assessment used by UK biologists to be demonstrated to their Soviet counterparts, and for the Soviet biologists to have the opportunity to test these methods at first hand in order to judge their potential for use within the Soviet Union. This paper is concerned with the nine river stations in the Trent catchment.
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects to study. (2) Interpreting an OCT image is also hard. This challenge is more profound than it appears. For instance, it takes a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing for a learning algorithm to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to on the order of a minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects of arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify that structure at the start of each simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool is a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, all of which are explained in detail later in the thesis.
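As a hedged sketch of the importance sampling/photon splitting idea mentioned above (a generic variance-reduction scheme; the thesis's actual implementation and parameters will differ), scattering angles are drawn from a biased phase function that favors detector-bound directions, and each photon's statistical weight is corrected by the ratio of the true to the biased probability density:

    import numpy as np

    rng = np.random.default_rng(1)

    def hg_pdf(ct, g):
        # Henyey-Greenstein phase function in cos(theta)
        return 0.5 * (1 - g**2) / (1 + g**2 - 2*g*ct)**1.5

    def hg_sample(g):
        # Standard inverse-CDF sampling of the HG distribution
        u = rng.random()
        if abs(g) < 1e-6:
            return 2*u - 1
        s = (1 - g**2) / (1 - g + 2*g*u)
        return (1 + g**2 - s**2) / (2*g)

    def biased_scatter(g_true, g_bias):
        """Sample cos(theta) from a biased HG distribution and return the
        importance weight p_true/p_bias that keeps the estimator unbiased."""
        ct = hg_sample(g_bias)
        return ct, hg_pdf(ct, g_true) / hg_pdf(ct, g_bias)

    def split_photon(weight, n=10):
        # Photon splitting near the detector: n copies, each carrying 1/n
        # of the weight, to enrich the rare detected paths
        return [weight / n] * n

Biasing toward detector-bound directions lets the rare paths that actually contribute to the OCT signal be sampled often, while the weights preserve the correct signal statistics.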
Next we address the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we can interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do this very well: for simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure, which predicts the widths of the different layers and thereby reconstructs the ground truth of the image, as sketched below. We also demonstrate that ideas from deep learning can further improve performance.
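A minimal sketch of this classify-then-regress pipeline, with generic scikit-learn models standing in for the thesis's committee of experts (the model choices and features here are placeholders, and the layer-width vector is assumed padded to a fixed length across structure types):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    class CommitteeOfExperts:
        """Stage 1 classifies each image's structure type; stage 2 routes
        the image to a regressor trained only on images of that type."""

        def fit(self, X, structure, widths):
            self.n_outputs_ = widths.shape[1]
            self.clf = RandomForestClassifier(n_estimators=200).fit(X, structure)
            self.experts = {
                s: RandomForestRegressor(n_estimators=200).fit(
                    X[structure == s], widths[structure == s])
                for s in np.unique(structure)
            }
            return self

        def predict(self, X):
            kinds = self.clf.predict(X)
            out = np.zeros((len(X), self.n_outputs_))
            for s, expert in self.experts.items():
                mask = kinds == s
                if mask.any():
                    out[mask] = expert.predict(X[mask])
            return kinds, out

Training each expert only on images of its own structure type is what lets the regressors specialize, at the cost of needing enough simulated (image, truth) pairs per type; the fast simulator makes that affordable.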
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depths), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, noisy, and uninterpretable to the human eye, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is another case in which Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even trying a task of this kind requires a large number of fully annotated OCT images (hundreds or even thousands), which is clearly infeasible without a powerful simulation tool like the one developed in this thesis.
Abstract:
Sub-lethal toxicity tests, such as the scope-for-growth test, reveal simple relationships between measures of contaminant concentration and effects on respiratory and feeding physiology. Simple models are presented to investigate the potential impact of different mechanisms of chronic sub-lethal toxicity on these physiological processes. Since environmental quality is variable even in unimpacted environments, toxicants may have disproportionately greater impacts in poor-quality than in higher-quality environments. The models illustrate the implications of different degrees and mechanisms of toxicity in response to variability in the quality of the feeding environment and variability in standard metabolic rate. The models suggest that the relationships between measured degrees of toxic stress and the maintenance ration required to maintain zero scope for growth may be highly nonlinear. In addition, it may be possible to define critical levels of sub-lethal toxic effect above which no environment is of sufficient quality to permit prolonged survival.
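A schematic version of the kind of relationship such models describe (the symbols are illustrative, not the paper's notation): if scope for growth is absorbed ration minus metabolic cost,

    \mathrm{SFG}(I, c) = a(c)\,I - R(c),

then the maintenance ration at which SFG = 0 is I_maint(c) = R(c)/a(c). Even if a toxicant at concentration c acts linearly on each process, say a(c) = a_0(1 - \alpha c) and R(c) = R_0(1 + \beta c), the maintenance ration

    I_{\rm maint}(c) = \frac{R_0\,(1 + \beta c)}{a_0\,(1 - \alpha c)}

is strongly nonlinear in c and diverges as c \to 1/\alpha, i.e., there is a critical level of sub-lethal toxic effect above which no feeding environment is rich enough to sustain zero scope for growth.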
Abstract:
The Antarctic polar ice sheet retains paleoclimatological information within its layers of snow and ice. Antarctic ice has yielded the highest-resolution paleoclimate records for the last 800 thousand years. Atmospheric transport patterns are reflected in the composition and source of the particulates found in the snow and ice of the Antarctic continent. Because this transport is tied to climatic processes, its characteristics alter, in quantity and quality, the chemical species deposited on the ice sheet. Thus, studying particulate deposits along the snow/ice layers in Antarctica can reveal changes in atmospheric transport patterns. The scientific community currently debates the differences in climatic patterns between East and West Antarctica: while the western sector generally shows instability, the climate of East Antarctica shows relative stability. In this study we analyzed two recent ice cores from two regions of the Antarctic continent with different climatic characteristics. On the Detroit Plateau, in the Antarctic Peninsula (64°10′S, 60°00′W), we analyzed the variability of black carbon (BC) along 20 meters of snow. The BC found in the Antarctic Peninsula showed low concentrations, comparable to those found in pre-industrial Arctic ice. Our results suggest that its variability corresponds to the seasonality of the biomass-burning periods on the Southern Hemisphere continents. In the interior of the Antarctic continent, we analyzed total particulates by microanalysis along a 40-meter core extracted at Mont Johns (79°55′S, 94°23′W). We found a negative trend in mineral dust (Al-Si) deposition between 1967 and 2007. Our results suggest that this trend results from a growing atmospheric isolation of the central region of the Antarctic continent due to the increasing intensity of the winds around Antarctica; this intensification in turn reflects the cooling of the upper atmosphere over central Antarctica caused by ozone depletion in the region. Additionally, samples from different microenvironments at Patriot Hills (80°18′S, 81°21′W) were collected aseptically for microbiological analysis. The samples were cultured in R2 medium and, in parallel, the extracted total DNA was sequenced by pyrosequencing. Preliminary results of this analysis show a great richness of species from highly varied groups. The results of this work characterize three different parameters related to atmospheric deposition in two little-explored areas of great scientific interest on the Antarctic continent.
Abstract:
This paper presents a method to generate new melodies by conserving the semiotic structure of a template piece. A pattern discovery algorithm is applied to the template piece to extract significant segments: those that are repeated and those that are transposed within the piece. Two strategies are combined to describe the semiotic coherence structure of the template piece: inter-segment coherence and intra-segment coherence. Once the structure is described, it is used as a template for new musical content that is generated using a statistical model created from a corpus of bertso melodies and iteratively improved using a stochastic optimization method, as sketched below. Results show that the method effectively describes the coherence structure of a piece by discovering repetition and transposition relations between segments, and by representing the relations among notes within the segments. For bertso generation the method correctly conserves all intra- and inter-segment coherence of the template, and the optimization method produces coherent generated melodies.
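A minimal sketch of such a generate-and-improve loop (a bigram note model with hill climbing; the paper's statistical model, coherence constraints, and optimizer are richer than this). The free_positions argument stands in for the constraint that only positions the semiotic template leaves unconstrained may be mutated; enforcing inter-segment coherence would additionally require propagating a mutation to repeated or transposed segments.

    import math
    import random

    def train_bigram(corpus):
        # corpus: list of melodies, each a list of hashable note symbols
        counts = {}
        for melody in corpus:
            for a, b in zip(melody, melody[1:]):
                counts.setdefault(a, {})
                counts[a][b] = counts[a].get(b, 0) + 1
        return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
                for a, nxt in counts.items()}

    def log_likelihood(melody, model, floor=1e-6):
        return sum(math.log(model.get(a, {}).get(b, floor))
                   for a, b in zip(melody, melody[1:]))

    def improve(melody, model, pitches, free_positions, n_iter=5000, seed=0):
        """Hill climbing: mutate one unconstrained note; keep the change
        if it raises the melody's likelihood under the statistical model."""
        rng = random.Random(seed)
        best, best_ll = list(melody), log_likelihood(melody, model)
        for _ in range(n_iter):
            cand = list(best)
            cand[rng.choice(free_positions)] = rng.choice(pitches)
            ll = log_likelihood(cand, model)
            if ll > best_ll:
                best, best_ll = cand, ll
        return best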
Abstract:
Because of the asymmetry of a-cut Nd:YVO4 crystals, laser-diode (LD) end-pumped Nd:YVO4 solid-state lasers differ from Nd:YAG lasers and often produce an asymmetric output beam. The thermal effects in the crystal of an LD end-pumped, a-cut Nd:YVO4 solid-state laser are analyzed with the finite element method, including the temperature distribution, the internal stress, and the resulting deformation. The analysis shows that end pumping an a-cut Nd:YVO4 crystal produces an ellipsoidal thermal lens effect. Methods to balance the thermal lens asymmetry are proposed from both the structural side and the pumping side, and experiments verified the feasibility of these methods.
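A hedged illustration of why the lens is asymmetric (this uses a commonly quoted first-order estimate for end-pumped thermal lensing, not the paper's finite-element model): the thermal focal length along each transverse axis scales with the thermal conductivity in that direction,

    f_{x,y} \;\approx\; \frac{\pi\, K_{x,y}\, \omega_p^{2}}{P_{\rm heat}\,(dn/dT)},

where \omega_p is the pump spot radius and P_heat the deposited heat power. Because the thermal conductivity of a-cut Nd:YVO4 differs along the a and c axes, f_x \neq f_y, giving the elliptical (astigmatic) thermal lens that the finite element analysis resolves in detail.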
Abstract:
Mangroves are coastal marine ecosystems that occur in the tropical and subtropical regions of the globe. The association of these environments with reef formations is restricted, particularly in Brazil, where the island of Tinharé, on the southern coast of the state of Bahia, stands out not only for the occurrence of this mangrove-reef system but also for the structural development of the forest and for the shellfish-gathering activity practiced by the population of the village of Garapuá. Despite the proximity of Morro de São Paulo, an international tourist attraction, this village experienced a degree of socioeconomic isolation until the arrival of the oil industry which, through its potential benefits and risks, put the life of the local community under strain. This study aims to analyze the socio-environmental vulnerability of the mangroves adjacent to Garapuá, Cairu, Bahia, in the face of the oil industry's entry into the region, based on the structural characterization of the mangrove forests and the social characterization of the village of Garapuá, particularly of the shellfish gatherers who use this ecosystem. The methodological approaches can be classified as quantitative, employed in the phytosociological survey, and qualitative, drawing on field observations and interviews, in addition to bibliographic surveys, for the social analyses. The results indicate mangrove forests of variable stature in a good state of conservation, with the mean height of the ten tallest trees ranging between 2.4±0.2 meters (station 7) and 22.7±1.1 meters (station 29), generally dominated by Rhizophora mangle (38 of the 52 sampling stations). From the structural characterization, a statistical cluster analysis was performed which, together with aspects of tree architecture, allowed the forests to be classified into 12 structural types. The environmental vulnerability analyses were based on sensitivity aspects and on the physiographic position occupied by each forest type, and identified distinct levels of vulnerability to oil spills. Regarding the social aspects, information on the socioeconomic and cultural systems related to health, education, productive practices and income generation, transportation, religion, and social organization as a whole revealed vulnerabilities in the face of the oil industry's arrival, pointing to the shellfish gatherers as the segment most likely to experience the risks and impacts of this enterprise locally. The entry of the oil industry into this socio-environmental context represents an increase in risks and, consequently, in socio-environmental vulnerability, insofar as the dialogue established between the developer and the population is asymmetric, hindering the participation of the local population, above all of the most excluded who, in this case, are represented by the users of the mangroves.