972 results for Hindu astronomy.
Abstract:
Neuroaesthetics is the study of the brain’s response to artistic stimuli. The neuroscientist V.S. Ramachandran contends that art is primarily “caricature” or “exaggeration.” Exaggerated forms hyperactivate neurons in viewers’ brains, which in turn produce specific, “universal” responses. Ramachandran identifies a precursor for his theory in the concept of rasa (literally “juice”) from classical Hindu aesthetics, which he associates with “exaggeration.” The canonical Sanskrit texts of Bharata Muni’s Natya Shastra and Abhinavagupta’s Abhinavabharati, however, do not support Ramachandran’s conclusions. They present audiences as dynamic co-creators, not passive recipients. I believe we could more accurately model the neurology of Hindu aesthetic experiences if we took indigenous rasa theory more seriously as qualitative data that could inform future research.
Abstract:
This thesis focuses on improving the calibration accuracy of sub-millimeter astronomical observations. With the advancement of receiver technology in recent years, the wavelength range covered by observational radio astronomy has been extended into the sub-millimeter and far infrared. Sub-millimeter observations carried out with airborne and ground-based telescopes typically suffer from 10% to 90% attenuation of the astronomical source signals by the terrestrial atmosphere. The amount of attenuation can be derived from the measured brightness of the atmospheric emission. Doing so requires knowledge of the atmospheric temperature and chemical composition, as well as the frequency-dependent optical depth, at each point along the line of sight. Because direct measurements are technically and financially infeasible, the altitude-dependent air temperature and composition are estimated using a parametrized static atmospheric model, which is described in Chapter 2. The frequency-dependent optical depth of the atmosphere is computed with a radiative transfer model based on quantum mechanics and, in addition, some empirical formulae. The choice, application, and improvement of third-party radiative transfer models are discussed in Chapter 3. The application of the calibration procedure, described in Chapter 4, to astronomical data observed with the SubMillimeter Array Receiver for Two Frequencies (SMART) and the German REceiver for Astronomy at Terahertz Frequencies (GREAT) is presented in Chapters 5 and 6. The brightnesses of atmospheric emission were fitted consistently to the simultaneous multi-band observation data from GREAT at 1.2–1.4 and 1.8–1.9 THz with a single set of parameters of the static atmospheric model. In contrast, the inconsistency between the model parameters fitted from the 490 and 810 GHz data of SMART is traced to the lack of calibration of the effective cold-load temperature. Besides the correctness of the atmospheric modeling, the stability of the receiver is also important for achieving optimal calibration accuracy. The stabilities of SMART and GREAT are analyzed with a special calibration procedure, namely the "load calibration". The effects of drift and fluctuation of the receiver gain and noise temperature on calibration accuracy are discussed in Chapters 5 and 6. Alternative observing strategies are proposed to combat receiver instability. The methods and conclusions presented in this thesis are applicable to the atmospheric calibration of sub-millimeter astronomical observations up to at least 4.7 THz (the H-channel frequency of GREAT) for observations carried out from about 4 to 14 km altitude. The procedures for receiver gain calibration and stability tests are applicable to other instruments using the same calibration approach as SMART and GREAT. The structure of the high-performance, modular, and extensible calibration program used and further developed for this thesis is presented in Appendix C.
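To make the calibration idea above concrete (deriving the attenuation from the measured brightness of atmospheric emission), here is a minimal sketch using the common single-layer, isothermal approximation; the effective atmospheric temperature, the measured values and the function names are illustrative assumptions, not numbers or code from the thesis.

```python
import numpy as np

def zenith_opacity(t_sky, t_atm):
    """Estimate optical depth tau from the measured sky brightness temperature,
    assuming a single isothermal atmospheric layer: T_sky = T_atm * (1 - exp(-tau))."""
    return -np.log(1.0 - t_sky / t_atm)

def correct_source(t_measured, tau):
    """Correct an attenuated source signal for the atmospheric transmission exp(-tau)."""
    return t_measured / np.exp(-tau)

# Illustrative numbers only (not from the thesis):
t_atm = 250.0   # assumed effective atmospheric temperature [K]
t_sky = 150.0   # measured brightness of the blank-sky emission [K]
tau = zenith_opacity(t_sky, t_atm)

print(f"optical depth  : {tau:.2f}")
print(f"transmission   : {np.exp(-tau):.2f}")
print(f"corrected T_src: {correct_source(2.0, tau):.2f} K")  # 2 K measured source signal
```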
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time-delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because the same noises occur multiple times in the different raw data streams. Originally, these observables were derived manually, starting with LISA as a simple stationary array and then adjusting for the antenna's motions. However, none of the observables survived the flexing of the arms, in that they no longer led to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises; it simplifies the data analysis by removing the need to construct the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it exploits the correlations they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that an eigendecomposition of this matrix produces two distinct sets of eigenvalues, distinguished by the absence of laser frequency noise from one set. Transforming the raw data with the corresponding eigenvectors also produces data that are free from the laser frequency noises. This result led to the idea that the principal components may actually be time-delay interferometry observables, since they produce the same outcome, that is, data free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10 × 10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, so analysis using principal components should give the same results as analysis using the traditional observables. This was confirmed by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, the arm lengths and the noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which will appear in the covariance matrix; from our toy-model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation method that takes advantage of this structure. Separating the two sets of data for the analysis was not necessary, because the laser frequency noises are very large compared to the photodetector noises, so the data containing them are strongly suppressed after the matrix inversion. In the frequency domain the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
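As a much-simplified illustration of the eigendecomposition argument (no time delays, no flexing, a single laser noise common to all channels, far cruder than the models discussed above), the sketch below shows how the sample covariance of such data splits into one laser-noise-dominated eigenvalue and a laser-noise-free subspace; all variances, sizes and names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Toy model: each of three channels sees the same (huge) laser noise
# plus its own small photodetector noise.
sigma_laser, sigma_det = 1e3, 1.0
laser = sigma_laser * rng.standard_normal(n_samples)
data = np.stack([laser + sigma_det * rng.standard_normal(n_samples)
                 for _ in range(3)])          # shape (3, n_samples)

cov = np.cov(data)                            # 3x3 sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order

print("eigenvalues:", eigvals)                # two small values, one huge value

# The small eigenvalues belong to combinations in which the common laser
# noise cancels; projecting the data onto those eigenvectors removes it.
clean = eigvecs[:, :2].T @ data
print("std of laser-noise-free combinations:", clean.std(axis=1))
```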
Abstract:
The unusual behaviour of fine lunar regolith, such as its stickiness and low heat conductivity, is dominated by the structural arrangement of its finest fraction in the outermost topsoil layer. Here we show a previously unknown phenomenon: the building of a globular 3-D superstructure within the dust fraction of the regolith. A new technology, Transmission X-ray Microscopy (TXM) with tomographic reconstruction, reveals a highly porous network of cellular voids in the aggregates of the finest lunar dust fraction. These porous, chained aggregates are composed of sub-micron particles that build cellular void networks, with voids a few micrometres in diameter. The discovery of such a superstructure within the finest fraction of the lunar topsoil allows a model of heat transfer to be built, which is discussed.
Abstract:
In this article some basic laboratory bench experiments are described that are useful for teaching high school students some of the basic principles of stellar astrophysics. For example, in one experiment, students slam a plastic water-filled bottle down onto a bench, ejecting water towards the ceiling and illustrating the physics associated with a type II supernova explosion. In another experiment, students roll marbles up and down a double ramp in an attempt to get a marble to enter a tube halfway up the slope, which illustrates quantum tunnelling in stellar cores. The experiments are reasonably low in cost to either purchase or manufacture.
Abstract:
In the early part of 2008, a major political upset was pulled off in the Southeast Asian nation of Malaysia when the ruling coalition, Barisan Nasional (National Front), lost its long-held parliamentary majority after the general elections. Given the astonishingly high profile of political bloggers and the relatively well established alternative online news sites within the nation, it was not surprising that many new media proponents saw the result as a major triumph of the medium. Through a brief account of the Hindraf (Hindu Rights Action Force) saga and the socio-political dissent nursed, in part, through new media in contemporary Malaysia, this paper seeks to lend context to the events that preceded and surrounded the election, as an example of the relationship between media and citizenship in praxis. In so doing it argues that the political turnaround, if indeed it proves to be one, cannot be considered the consequence of new media alone; rather, to comprehensively assess the implications of new media for citizenship is to take into account the specific histories, conditions and actions (or lack thereof) of the various social actors involved.
Abstract:
The Space Day has been running at QUT for about a decade. It started out as a single lecture on the stars delivered to a group of high school students from Brisbane State High School (BSHS), just across the river from QUT and therefore convenient for the school to visit. I was then contacted by Victor James of St. Laurence's College (SLC), Brisbane, asking if he could bring a group of boys to QUT for a lecture similar to that delivered to BSHS. For SLC, however, a hands-on laboratory session was added to the lecture, and thus the Space Day was born. For the Space Day we have concentrated on Year 7–10 students. Subsequently, many other schools from Brisbane and further afield in Queensland have attended a Space Day.
Abstract:
Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, and these signals are referred to as analog signals. Prior to the onset of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering which occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only impacted on traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the lecture notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT & RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, Discrete-time Fourier, and Discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. Design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II (which is suitable for an advanced signal processing course) considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
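As a small, hedged illustration of one capability mentioned above, the spectral analysis of a multicomponent signal, the sketch below resolves two closely spaced tones buried in noise with an FFT; the sample rate, tone frequencies and noise level are invented for the example and are not taken from the book.

```python
import numpy as np

fs = 1000.0                      # sample rate [Hz]
t = np.arange(0, 2.0, 1.0 / fs)  # 2 s of data -> 0.5 Hz frequency resolution

# Multicomponent signal: two tones 5 Hz apart, buried in white noise.
rng = np.random.default_rng(1)
x = (np.sin(2 * np.pi * 100.0 * t)
     + 0.5 * np.sin(2 * np.pi * 105.0 * t)
     + 0.3 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

# The two strongest bins recover the two tone frequencies (100 and 105 Hz).
strongest = np.sort(freqs[np.argsort(spectrum)[-2:]])
print("dominant frequencies [Hz]:", strongest)
```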
Abstract:
The underlying objective of this study was to develop a novel approach to evaluate the potential for commercialisation of a new technology. More specifically, this study examined the 'ex-ante' evaluation of the technology transfer process. For this purpose, a technology originating from the high technology sector was used. The technology relates to the application of software for the detection of weak signals from space, which is an established method of signal processing in the field of radio astronomy. This technology has the potential to be used in commercial and industrial areas other than astronomy, such as detecting water leakages in pipes. Its applicability to detecting water leakage was chosen owing to several problems with detection in the industry as well as the impact it can have on saving water in the environment. This study, therefore, will demonstrate the importance of interdisciplinary technology transfer. The study employed both technical and business evaluation methods, including laboratory experiments and the Delphi technique, to address the research questions. There are several findings from this study. Firstly, scientific experiments were conducted and these resulted in a proof-of-concept stage of the chosen technology. Secondly, validation as well as refinement of criteria from the literature that can be used for 'ex-ante' evaluation of technology transfer has been undertaken. Additionally, after testing the chosen technology's overall transfer potential using the modified set of criteria, it was found that the technology is still in its early stages and will require further development for it to be commercialised. Furthermore, a final evaluation framework was developed encompassing all the criteria found to be important. This framework can help in assessing the overall readiness of the technology for transfer as well as in recommending a viable mechanism for commercialisation. On the whole, the commercial potential of the chosen technology was tested through expert opinion, thereby focusing on the impact of a new technology and the feasibility of alternate applications and potential future applications.
Abstract:
This paper describes a simple activity for plotting and characterising the light curve from an exoplanet transit event by way of differential photometry analysis. Using free digital imaging software, participants analyse a series of telescope images with the goal of calculating various exoplanet parameters, including its size, orbital radius and habitability. The activity has been designed for a high-school or undergraduate university level and introduces fundamental concepts in astrophysics and an understanding of the basis for exoplanetary science, the transit method and digital photometry.
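A minimal sketch of the differential-photometry and parameter-estimation steps the activity describes, assuming the target and comparison-star fluxes have already been measured from the telescope images; the flux values, the assumed Sun-like stellar radius and the variable names are illustrative, not data from the paper.

```python
import numpy as np

# Hypothetical aperture-photometry results for eight images (arbitrary counts).
target_flux     = np.array([10500, 10480, 10360, 10240, 10230, 10350, 10490, 10510])
comparison_flux = np.array([20990, 20950, 20980, 21010, 20970, 20940, 21000, 20980])

# Differential photometry: the ratio removes shared atmospheric/instrumental variations.
rel_flux = target_flux / comparison_flux
rel_flux /= np.median(rel_flux[[0, 1, 6, 7]])   # normalise to the out-of-transit level

depth = 1.0 - rel_flux.min()                    # fractional transit depth
r_star = 6.96e8                                 # assumed Sun-like stellar radius [m]
r_planet = r_star * np.sqrt(depth)              # since (Rp/Rs)^2 = depth

print(f"transit depth : {depth:.3%}")
print(f"planet radius : {r_planet / 7.15e7:.2f} Jupiter radii")
```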
Abstract:
A recent theoretical investigation by Terzieva & Herbst of linear carbon chains Cn (n ≥ 6) in the interstellar medium has shown that these species can undergo efficient radiative association to form the corresponding anions. An experimental study by Barckholtz, Snow & Bierbaum of these anions has demonstrated that they do not react efficiently with molecular hydrogen, leading to the possibility of detectable abundances of cumulene-type anions in dense interstellar and circumstellar environments. Here we present a series of electronic structure calculations which examine possible anionic candidates for detection in these media, namely the anion analogues of the previously identified interstellar cumulenes CnH and Cn-1CH2 and heterocumulenes CnO (where n = 2-10). The extraordinary electron affinities calculated for these molecules suggest that efficient radiative electron attachment could occur, and the large dipole moments of these simple (generally) linear molecules point to the possibility of detection by radio astronomy.
Abstract:
This paper presents Australian results from the Interests and Recruitment in Science (IRIS) study with respect to the influence of STEM-related mass media, including science fiction, on students’ decisions to enrol in university STEM courses. The study found that across the full cohort (N=2999), students tended to attribute far greater influence to science-related documentaries/channels such as Life on Earth and the Discovery Channel, etc. than to science-fiction movies or STEM-related TV dramas. Males were more inclined than females to consider science fiction/fantasy books and films and popular science books/magazines as having been important in their decisions. Students taking physics/astronomy tended to rate the importance of science fiction/fantasy books and films higher than students in other courses. The implications of these results for our understanding of influences on STEM enrolments are discussed.
Abstract:
This article describes a parallax experiment performed by undergraduate physics students at Queensland University of Technology. The experiment is analogous to the parallax method used in astronomy to measure distances to the local stars. The result of one of these experiments is presented in this paper. A target was photographed using a digital camera at five distances between 3 and 8 metres, from two vantage points spaced 0.6 m apart. The parallax distances were compared with the actual distances measured using a tape measure, and the average error was 0.5 ± 0.9%.
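A hedged sketch of the distance calculation behind the experiment, assuming the apparent angular shift of the target between the two vantage points has already been measured from the photographs; only the 0.6 m baseline comes from the abstract, the angle and function name are hypothetical.

```python
import math

def parallax_distance(baseline_m, parallax_angle_rad):
    """Distance to the target, given the baseline between the two vantage
    points and the total apparent angular shift of the target against the
    background (the same small-angle geometry as stellar parallax)."""
    return baseline_m / (2.0 * math.tan(parallax_angle_rad / 2.0))

baseline = 0.6             # vantage-point separation [m], from the abstract
shift = math.radians(6.9)  # hypothetical measured angular shift of the target

print(f"distance ≈ {parallax_distance(baseline, shift):.2f} m")  # ≈ 5 m
```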
Abstract:
The medieval icons of southern India are among the most acclaimed Indian artistic innovations, especially those of the Chola Tamil kingdom (9th–10th centuries), which is best known for the Hindu iconography of the Dance of Siva that captured the imagination of the master sculptor Rodin.1 Apart from these prolific images, however, not much was known about southern Indian copper-based metallurgy. Hence, these often spectacular castings have been regarded as a sudden efflorescence, almost without precedent, of skilled metallurgy, as contrasted with tin-rich China or southeast Asia, for instance, where a developed copper-bronze tradition has been better appreciated.