995 results for astronomy popularization
Abstract:
An emerging issue in the field of astronomy is the integration, management and utilization of databases from around the world to facilitate scientific discovery. In this paper, we investigate the application of the machine learning techniques of support vector machines and neural networks to the problem of amalgamating catalogues of galaxies, as objects, from two disparate data sources: radio and optical. Formulating this as a classification problem presents several challenges, including dealing with a highly unbalanced data set. Unlike the conventional approach to the problem (which is based on a likelihood ratio), machine learning does not require density estimation and is shown here to provide a significant improvement in performance. We also report some experiments that explore the importance of the radio and optical data features for the matching problem.
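The class imbalance mentioned above (true radio-optical matches are rare among candidate pairs) is commonly handled by re-weighting the minority class. Below is a minimal sketch of that idea using a class-weighted support vector machine; the synthetic features (angular separation, flux ratio) and all numbers are illustrative stand-ins, not the paper's actual data or method.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic candidate pairs: ~2% are true matches, mimicking a highly
    # unbalanced cross-matching data set.
    is_match = rng.random(n) < 0.02
    sep = np.where(is_match, rng.normal(0.5, 0.2, n), rng.normal(3.0, 1.0, n))
    flux_ratio = np.where(is_match, rng.normal(1.0, 0.3, n), rng.normal(0.0, 1.0, n))
    X = np.column_stack([sep, flux_ratio])
    y = is_match.astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # class_weight="balanced" up-weights the rare "match" class so the
    # decision boundary is not dominated by the abundant non-matches.
    clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))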
Abstract:
Heading into the 2020s, physics and astronomy are undergoing experimental revolutions that will reshape our picture of the fabric of the Universe. The Large Hadron Collider (LHC), the largest particle physics project in the world, produces 30 petabytes of data annually that need to be sifted through, analysed, and modelled. In astrophysics, the Large Synoptic Survey Telescope (LSST) will take a high-resolution image of the full sky every 3 days, leading to data rates of 30 terabytes per night over ten years. These experiments endeavour to answer the question of why 96% of the content of the universe currently eludes our physical understanding. The LHC and LSST share the 5-dimensional nature of their data, with position, energy and time being the fundamental axes. This talk will present an overview of the experiments and the data they gather, and outline the challenges in extracting information. The strategies employed are very similar to those of industrial data science problems (e.g., data filtering, machine learning, statistical interpretation) and provide a seed for the exchange of knowledge between academia and industry. Speaker Biography: Professor Mark Sullivan. Mark Sullivan is a Professor of Astrophysics in the Department of Physics and Astronomy. Mark completed his PhD at Cambridge and, following postdoctoral study in Durham, Toronto and Oxford, now leads a research group at Southampton studying dark energy using exploding stars called "type Ia supernovae". Mark has many years' experience of research that involves repeatedly imaging the night sky to track the arrival of transient objects, involving significant challenges in data handling, processing, classification and analysis.
Abstract:
γ-ray bursts (GRBs) are the Universe's most luminous transient events. Since the discovery of GRBs was announced in 1973, efforts have been ongoing to obtain data over a broader range of the electromagnetic spectrum at the earliest possible times following the initial detection. The discovery of the theorized "afterglow" emission in radio through X-ray bands in the late 1990s confirmed the cosmological nature of these events. At present, GRB afterglows are among the best probes of the early Universe (z ≳ 9). In addition to informing theories about GRBs themselves, observations of afterglows probe the circum-burst medium (CBM), the properties of the host galaxies, and the progress of cosmic reionization. To explore the early-time variability of afterglows, I have developed a generalized analysis framework which fits near-infrared (NIR), optical, ultraviolet (UV) and X-ray light curves without assuming an underlying physical model. These fits are then used to construct the spectral energy distribution (SED) of afterglows at arbitrary times within the observed window. Physical models are then used to explore the evolution of the SED parameter space with time. I demonstrate that this framework produces evidence of the photodestruction of dust in the CBM of GRB 120119A, similar to the findings of a previous study of this afterglow. The framework is additionally applied to the afterglows of GRB 140419A and GRB 080607; in these cases the evolution of the SEDs appears consistent with the standard fireball model. Having introduced the scientific motivations for early-time observations, I introduce the Rapid Infrared Imager-Spectrometer (RIMAS). Once commissioned on the 4.3 meter Discovery Channel Telescope (DCT), RIMAS will be used to study the afterglows of GRBs through photometric and spectroscopic observations beginning within minutes of the initial burst. The instrument will operate in the NIR, from 0.97 μm to 2.37 μm, permitting the detection of very high redshift (z ≳ 7) afterglows, which are attenuated at shorter wavelengths by Lyman-α absorption in the intergalactic medium (IGM). A majority of my graduate work has been spent designing and aligning RIMAS's cryogenic (~80 K) optical systems. Design efforts have included an original camera used to image the field surrounding the spectroscopic slits, tolerancing and optimizing all of the instrument's optics, thermal modeling of optomechanical systems, and modeling the diffraction efficiencies of some of the dispersive elements. To align the cryogenic optics, I developed a procedure that was successfully used for a majority of the instrument's sub-assemblies. My work on this cryogenic instrument has necessitated experimental and computational projects to design and validate several subsystems. Two of these projects involved simple and effective measurements of optomechanical components in vacuum and at cryogenic temperatures using an 8-bit CCD camera. Models of heat transfer via the electrical harnesses that provide current to motors located within the cryostat are also presented.
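The key operation in such a framework is evaluating each band's fitted light curve at a common epoch and assembling the fluxes into an SED. A minimal sketch of that step follows; the band names, epochs, and log-log interpolation are illustrative assumptions, not the thesis's actual fitting code.

    import numpy as np

    # Illustrative multi-band photometry: times (s since trigger), fluxes (mJy).
    bands = {
        "NIR":     (np.array([80.0, 200.0, 600.0, 2000.0]),
                    np.array([2.0, 1.1, 0.45, 0.12])),
        "optical": (np.array([60.0, 150.0, 500.0, 1800.0]),
                    np.array([1.2, 0.7, 0.30, 0.08])),
        "X-ray":   (np.array([70.0, 300.0, 900.0, 2500.0]),
                    np.array([0.3, 0.1, 0.04, 0.01])),
    }

    def flux_at(t, times, fluxes):
        # Interpolate log-flux versus log-time, so no physical light-curve
        # model is assumed; afterglows decay roughly as power laws.
        return 10 ** np.interp(np.log10(t), np.log10(times), np.log10(fluxes))

    # SED at an arbitrary epoch inside the observed window, ready to be
    # compared against physical models (e.g., dust extinction laws).
    t_sed = 400.0
    sed = {band: flux_at(t_sed, t, f) for band, (t, f) in bands.items()}
    print(sed)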
Abstract:
In this thesis, we will explore approaches to faculty instructional change in astronomy and physics. We primarily focus on professional development (PD) workshops, which are a central mechanism used within our community to help faculty improve their teaching. Although workshops serve a critical role for promoting more equitable instruction, we rarely assess them through careful consideration of how they engage faculty. To encourage a shift towards more reflective, research-informed PD, we developed the Real-Time Professional Development Observation Tool (R-PDOT), to document the form and focus of faculty's engagement during workshops. We then analyze video-recordings of faculty's interactions during the Physics and Astronomy New Faculty Workshop, focusing on instances where faculty might engage in pedagogical sense-making. Finally, we consider insights gained from our own local, team-based effort to improve a course sequence for astronomy majors. We conclude with recommendations for PD leaders and researchers.
Abstract:
This thesis is focused on improving the calibration accuracy of sub-millimeter astronomical observations. The wavelength range covered by observational radio astronomy has been extended to the sub-millimeter and far infrared with the advancement of receiver technology in recent years. Sub-millimeter observations carried out with airborne and ground-based telescopes typically suffer from 10% to 90% attenuation of the astronomical source signals by the terrestrial atmosphere. The amount of attenuation can be derived from the measured brightness of the atmospheric emission. Doing so requires knowledge of the atmospheric temperature and chemical composition, as well as the frequency-dependent optical depth, at each point along the line of sight. The altitude-dependent air temperature and composition are estimated using a parametrized static atmospheric model, described in Chapter 2, because direct measurements are technically and financially infeasible. The frequency-dependent optical depth of the atmosphere is computed with a radiative transfer model based on quantum mechanics and, in addition, some empirical formulae. The choice, application, and improvement of third-party radiative transfer models are discussed in Chapter 3. The application of the calibration procedure, described in Chapter 4, to astronomical data observed with the SubMillimeter Array Receiver for Two Frequencies (SMART) and the German REceiver for Astronomy at Terahertz Frequencies (GREAT) is presented in Chapters 5 and 6. The brightnesses of atmospheric emission were fitted consistently to the simultaneous multi-band observation data from GREAT at 1.2–1.4 and 1.8–1.9 THz with a single set of parameters of the static atmospheric model. On the other hand, the cause of the inconsistency between the model parameters fitted from the 490 and 810 GHz data of SMART was found to be the lack of calibration of the effective cold load temperature. Besides the correctness of the atmospheric modeling, the stability of the receiver is also important for achieving optimal calibration accuracy. The stabilities of SMART and GREAT are analyzed with a special calibration procedure, namely the "load calibration". The effects of the drift and fluctuation of the receiver gain and noise temperature on calibration accuracy are discussed in Chapters 5 and 6, and alternative observing strategies are proposed to combat receiver instability. The methods and conclusions presented in this thesis are applicable to the atmospheric calibration of sub-millimeter astronomical observations up to at least 4.7 THz (the H channel frequency of GREAT) for observations carried out from ~4 to 14 km altitude. The procedures for receiver gain calibration and stability testing are applicable to other instruments using the same calibration approach as SMART and GREAT. The structure of the high-performance, modular, and extensible calibration program used and further developed for this thesis work is presented in Appendix C.
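In the single-slab approximation, the relation underlying this calibration is compact: the atmosphere emits T_sky ≈ T_atm(1 − e^−τ), so the measured sky brightness yields the opacity τ, and the source signal is corrected by the transmission e^−τ. The sketch below illustrates that inversion; the numbers are invented, and a real pipeline (as in this thesis) uses a full radiative transfer model rather than a single slab.

    import numpy as np

    T_atm = 250.0   # effective atmospheric temperature (K), assumed known
    T_sky = 120.0   # measured sky brightness temperature (K), illustrative

    # Single-slab emission model: T_sky = T_atm * (1 - exp(-tau)).
    tau = -np.log(1.0 - T_sky / T_atm)
    transmission = np.exp(-tau)

    # The astronomical signal is attenuated by exp(-tau); correct it upward.
    T_source_measured = 1.5                       # K, illustrative
    T_source_corrected = T_source_measured / transmission

    print(f"tau = {tau:.3f}, transmission = {transmission:.1%}")
    print(f"corrected source temperature = {T_source_corrected:.2f} K")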
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time-delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because the same noises occur multiple times in the different raw data streams. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusting to incorporate the antenna's motions. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises; it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that an eigendecomposition of this matrix produces two distinct sets of eigenvalues, distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors likewise produces data free from the laser frequency noises. This result led to the idea that the principal components may actually be time-delay interferometry observables, since they produce the same outcome: data that are free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10×10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, so analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are phase-locking, arm lengths, and noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which appear in the covariance matrix; from our toy model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation methods that take advantage of this structure. Separating the two sets of data for the analysis was not necessary, because the laser frequency noises are very large compared with the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain, the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
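The eigenvalue separation described above can be illustrated with a toy covariance: when one loud common noise enters several channels, its power concentrates in a single large eigenvalue, and the remaining eigenvectors span combinations free of it. The sketch below uses a made-up three-channel model with stationary noises; it illustrates the principle only and is not the LISA covariance used in the thesis.

    import numpy as np

    rng = np.random.default_rng(1)
    n_samples = 100_000

    # Toy model: one loud "laser" noise common to three channels, plus a
    # small independent "photodetector" noise in each channel.
    laser = 100.0 * rng.standard_normal(n_samples)
    data = np.stack([laser + rng.standard_normal(n_samples) for _ in range(3)])

    eigvals, eigvecs = np.linalg.eigh(np.cov(data))  # ascending order
    print(eigvals)  # two eigenvalues ~O(1), one ~O(3e4): the laser mode

    # Projecting onto the small-eigenvalue eigenvectors cancels the common
    # noise, analogous to TDI combinations cancelling laser frequency noise.
    clean = eigvecs[:, :2].T @ data
    print(clean.std(axis=1))  # ~O(1): laser noise removed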
Abstract:
One of the greatest challenges facing universities, especially public ones, is carrying the scientific knowledge produced within their walls to the population at large. Non-formal education is an important tool, still little used by researchers and faculty, for bringing scientific knowledge closer to everyday life. Oral cancer affects more than 11,000 Brazilians per year. Despite this high incidence, the disease is still little known to the general population and to part of the medical and dental professions. Based on epidemiological data, research, and scientific articles, oral cancer was chosen as the theme of the education and communication effort of the first national non-governmental oral cancer prevention campaign, an excellent example of how this can be done. This paper describes the communication methodology used and the results obtained in this experience.
Abstract:
This article analyzes the results of a mini-course on the Sun and its dynamics held at the Astronomical Observatory of the Centro de Divulgação Científica e Cultural (CDCC) of the Universidade de São Paulo (USP), in the city of São Carlos, for elementary school students. The activities took place in the recently inaugurated Sala Solar, a room dedicated to the study of the Sun that emphasizes observation of sunspots and of the solar spectrum. The methodology adopted in the mini-course consisted of small experiments, observations, and expository dialogues. This encouraged the students to make decisions, ask questions, and reflect, generating more critical thinking and producing a greater number of connections between the concrete and the abstract, which contributed to the higher levels of conceptual complexity verified in semi-structured interviews and in the responses to the final questionnaire.
Abstract:
Despite the difficulties of addressing the nature of science in the classroom, there is a general understanding of the need to incorporate into curricula notions of how scientific knowledge is constructed. Knowing the history of the development and acceptance of scientific theories can help teachers include discussions of the nature of science in science teaching. This paper presents an analysis of the acceptance and propagation of Newton's theories of light and colors over the course of the eighteenth century. We point out some aspects of the nature of science that can be highlighted by the study of this historical episode.
Abstract:
The use of popular science texts in formal education has been discussed by researchers in science education. These discussions suggest that such texts can serve as a motivational tool in the classroom, organizing explanations and stimulating debate. From this perspective, a teaching proposal was implemented based on chapters of the book Tio Tungstênio: Memórias de uma Infância Química (Uncle Tungsten: Memories of a Chemical Boyhood), by Oliver Sacks. The proposal, which involved the students producing texts about the book's contents, was carried out in an undergraduate chemistry course. The texts were analyzed through French-school Discourse Analysis, specifically with respect to the notion of authorship.
Abstract:
In this work we have derived a class of geometries which describe black holes and wormholes in Randall-Sundrum-type brane models, focusing mainly on asymptotically anti-de Sitter backgrounds. We show that by continuously deforming the usual four-dimensional vacuum background, a specific family of solutions is obtained. Maximal extensions of the solutions are presented, and their causal structures are discussed.