982 results for Ground-based observations
Abstract:
The remarkable astrometric capabilities of the Chandra Observatory offer the possibility to measure proper motions of X-ray sources with an unprecedented accuracy in this wavelength range. We recently completed a proper motion survey of three of the seven thermally emitting radio-quiet isolated neutron stars (INSs) discovered in the ROSAT all-sky survey. These INSs (RX J0420.0-5022, RX J0806.4-4123 and RX J1308.6+2127) either lack an optical counterpart or have one so faint that ground-based or space-borne optical observations push the current possibilities of the instrumentation to the limit. Pairs of ACIS observations were acquired 3 to 5 years apart to measure the displacement of the sources on the X-ray sky, using as a reference the background of extragalactic or remote Galactic X-ray sources. We derive 2σ upper limits of 123 mas yr⁻¹ and 86 mas yr⁻¹ on the proper motions of RX J0420.0-5022 and RX J0806.4-4123, respectively. RX J1308.6+2127 exhibits a highly significant displacement (~9σ), yielding μ = 220 ± 25 mas yr⁻¹, the second fastest measured among all ROSAT-discovered INSs. The source is probably moving away rapidly from the Galactic plane at a speed which precludes any significant accretion of matter from the interstellar medium. Its transverse velocity of ~740 (d/700 pc) km s⁻¹ might be the largest of all ROSAT INSs, and its corresponding spatial velocity lies among the fastest recorded for neutron stars. RX J1308.6+2127 is thus a middle-aged (age ~1 Myr) high-velocity cooling neutron star. We investigate its possible origin in nearby OB associations or from a field OB star. In most cases, the flight time from the birth place appears significantly shorter than the characteristic age derived from the spin-down rate. Overall, the distribution in transverse velocity of the ROSAT INSs is not statistically different from that of normal radio pulsars.
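For readers checking the quoted figures, the transverse velocity follows from the standard conversion between proper motion and distance (a textbook relation, not a result specific to this survey); with the measured μ and the assumed distance of 700 pc:

```latex
v_t \simeq 4.74 \left(\frac{\mu}{\mathrm{arcsec\,yr^{-1}}}\right)\left(\frac{d}{\mathrm{pc}}\right)\ \mathrm{km\,s^{-1}}
    = 4.74 \times 0.220 \times 700 \approx 7.3\times10^{2}\ \mathrm{km\,s^{-1}},
```

consistent with the quoted ~740 (d/700 pc) km s⁻¹.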
Abstract:
The Amazon Basin provides an excellent environment for studying the sources, transformations, and properties of natural aerosol particles and the resulting links between biological processes and climate. With this framework in mind, the Amazonian Aerosol Characterization Experiment (AMAZE-08), carried out from 7 February to 14 March 2008 during the wet season in the central Amazon Basin, sought to understand the formation, transformations, and cloud-forming properties of fine- and coarse-mode biogenic aerosol particles, especially as related to their effects on cloud activation and regional climate. Special foci included (1) the production mechanisms of secondary organic components at a pristine continental site, including the factors regulating their temporal variability, and (2) predicting and understanding the cloud-forming properties of biogenic particles at such a site. In this overview paper, the field site and the instrumentation employed during the campaign are introduced. Observations and findings are reported, including the large-scale context for the campaign, especially as provided by satellite observations. New findings presented include: (i) a particle number-diameter distribution from 10 nm to 10 μm that is representative of the pristine tropical rain forest and recommended for model use; (ii) the absence of substantial quantities of primary biological particles in the submicron mode as evidenced by mass spectral characterization; (iii) the large-scale production of secondary organic material; (iv) insights into the chemical and physical properties of the particles as revealed by thermodenuder-induced changes in the particle number-diameter distributions and mass spectra; and (v) comparisons of ground-based predictions and satellite-based observations of hydrometeor phase in clouds. A main finding of AMAZE-08 is the dominance of secondary organic material as particle components. The results presented here provide mechanistic insight and quantitative parameters that can serve to increase the accuracy of models of the formation, transformations, and cloud-forming properties of biogenic natural aerosol particles, especially as related to their effects on cloud activation and regional climate.
Abstract:
By utilizing the large multiplexing advantage of the Two-degree Field spectrograph on the Anglo-Australian Telescope, we have been able to obtain a complete spectroscopic sample of all objects in a predefined magnitude range, 16.5 < b_J < 19.7, regardless of morphology, in an area toward the center of the Fornax Cluster of galaxies. Among the unresolved or marginally resolved targets, we have found five objects that are actually at the redshift of the Fornax Cluster; i.e., they are extremely compact dwarf galaxies or extremely large star clusters. All five have absorption-line spectra. With intrinsic sizes of less than 1.1 arcsec HWHM (corresponding to approximately 100 pc at the distance of the cluster), they are more compact and significantly less luminous than other known compact dwarf galaxies, yet much brighter than any globular cluster. In this paper we present new ground-based optical observations of these enigmatic objects. In addition to having extremely high central surface brightnesses, these objects show no evidence of any surrounding low surface brightness envelopes down to much fainter limits than is the case for, e.g., nucleated dwarf elliptical galaxies. Thus, if they are not merely the stripped remains of some other type of galaxy, then they appear to have properties unlike any previously known type of stellar system.
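As a quick consistency check on the quoted sizes, assuming a Fornax Cluster distance of roughly 19 Mpc (an assumption on my part; the abstract does not state the adopted distance):

```latex
r \approx \theta\,d \approx 1.1'' \times 4.85\times10^{-6}\ \mathrm{rad}/'' \times 19\ \mathrm{Mpc} \approx 1.0\times10^{2}\ \mathrm{pc}.
```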
Abstract:
Absolute positioning – the real-time satellite-based positioning technique that relies solely on global navigation satellite systems – lacks accuracy for several real-time application domains. To provide increased positioning quality, ground- or satellite-based augmentation systems can be devised, depending on the extent of the area to cover. The underlying technique – multiple reference station differential positioning – can, in the case of ground systems, be further enhanced through the implementation of the virtual reference station concept. Our approach is a ground-based system made of a small network of three stations in which the virtual reference station concept was implemented. The stations provide code pseudorange corrections, which are combined in the measurement domain with weights inversely proportional to the distance from the source station to the rover. All data links are established through the Internet.
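A minimal sketch of the measurement-domain combination described above: per-satellite code pseudorange corrections from the reference stations are merged with weights inversely proportional to the station-to-rover distance. The station distances and correction values below are illustrative placeholders, not data from the actual three-station network.

```python
import numpy as np

def combine_corrections(corrections, distances):
    """Inverse-distance weighted combination of per-station corrections.

    corrections: array of shape (n_stations,), code pseudorange corrections
                 for one satellite, in metres.
    distances:   array of shape (n_stations,), station-to-rover distances
                 (any consistent unit).
    """
    corrections = np.asarray(corrections, dtype=float)
    weights = 1.0 / np.asarray(distances, dtype=float)
    weights /= weights.sum()                 # normalise so the weights sum to 1
    return float(np.dot(weights, corrections))

# Hypothetical example: three stations 12, 25 and 40 km from the rover,
# each providing a correction (in metres) for the same satellite.
prc = [2.31, 2.48, 2.05]
d_km = [12.0, 25.0, 40.0]
print(combine_corrections(prc, d_km))        # correction applied at the rover
```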
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
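To make the linear mixing model and the constrained least-squares route concrete, the sketch below builds a pixel as a nonnegative, sum-to-one combination of known endmember signatures and recovers the abundances with a nonnegative least-squares solve on a sum-to-one-augmented system (a common fully-constrained approximation, not the specific estimators cited above). The signatures and abundances are random placeholders.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
L, p = 50, 3                      # number of spectral bands, number of endmembers
M = rng.uniform(0.1, 1.0, (L, p)) # endmember signature matrix (placeholder spectra)

a_true = np.array([0.6, 0.3, 0.1])               # abundances: nonnegative, sum to one
x = M @ a_true + 0.001 * rng.standard_normal(L)  # observed pixel under the linear model

# Fully constrained least squares (approximate): append a heavily weighted
# row enforcing sum(a) = 1, then solve the nonnegativity-constrained problem.
delta = 1e3
A_aug = np.vstack([M, delta * np.ones((1, p))])
b_aug = np.concatenate([x, [delta]])
a_hat, _ = nnls(A_aug, b_aug)

print("true:", a_true, "estimated:", np.round(a_hat, 3))
```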
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The recently introduced method of reference 49 exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
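Since the unmixing pipelines above are usually preceded by a projection onto a low-dimensional signal subspace, here is a small sketch of that preprocessing step using an SVD of the mean-removed data matrix; it is a generic illustration on simulated pixels, not the specific MNF or band-selection procedures cited. The last line also shows the sum-to-one dependence between abundance "sources" that troubles ICA and IFA on such data.

```python
import numpy as np

rng = np.random.default_rng(1)
L, p, N = 100, 3, 2000                       # bands, endmembers, pixels
M = rng.uniform(0.0, 1.0, (L, p))            # placeholder endmember signatures

# Abundances on the simplex (nonnegative, summing to one) via a Dirichlet draw.
A = rng.dirichlet(alpha=np.ones(p), size=N).T          # shape (p, N)
Y = M @ A + 0.01 * rng.standard_normal((L, N))         # noisy linear mixtures

# SVD-based reduction: keep a p-dimensional subspace carrying the signal.
Y_mean = Y.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(Y - Y_mean, full_matrices=False)
E = U[:, :p]                                  # basis of the retained subspace
Z = E.T @ (Y - Y_mean)                        # reduced-dimension representation

# The constant-sum constraint shows up as correlation between abundances,
# i.e. the sources are not mutually independent.
print("retained energy fraction:", (s[:p]**2).sum() / (s**2).sum())
print("corr(a1, a2):", np.corrcoef(A[0], A[1])[0, 1])
```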
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief review of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
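The sketch below illustrates the kind of generative model the proposed methodology assumes: abundances drawn from a mixture of Dirichlet densities (positive, summing to one, and not required to contain pure pixels), mixed linearly and observed in noise. The Dirichlet parameters, dimensions, and noise level are arbitrary placeholders, not the chapter's actual settings, and no EM inference is shown here.

```python
import numpy as np

rng = np.random.default_rng(2)
L, p, N = 60, 3, 1000                      # bands, endmembers, pixels
M = rng.uniform(0.0, 1.0, (L, p))          # placeholder endmember signatures

# Abundances from a two-component mixture of Dirichlet densities: each
# component favours a different blend of materials, and neither places
# pixels at the simplex vertices (so no pure pixels are required).
alphas = np.array([[8.0, 2.0, 2.0],
                   [2.0, 2.0, 8.0]])
comp = rng.integers(0, 2, size=N)
A = np.vstack([rng.dirichlet(alphas[c]) for c in comp]).T     # (p, N)

Y = M @ A + 0.005 * rng.standard_normal((L, N))               # observed pixels

# Sanity checks on the constraints the model enforces.
print("min abundance:", A.min())                               # nonnegativity
print("max |sum - 1|:", np.abs(A.sum(axis=0) - 1).max())       # full additivity
print("fraction of near-pure pixels:", (A.max(axis=0) > 0.99).mean())
```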
Abstract:
Dissertation submitted to obtain the degree of Doctor in Environmental Engineering
Abstract:
Ground-based measurements of atmospheric parameters in Tbilisi during the same period, provided by the Mikheil Nodia Institute of Geophysics, were used as calibration data. Monthly averaging, preprocessing, analysis and visualization of the satellite data were performed using the Giovanni web-based application. Maps of the trends and periodic components of the atmospheric aerosol optical thickness and ozone concentration over the study area were calculated.
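As an illustration of how a trend and a periodic (annual) component can be separated in a monthly satellite record, the sketch below fits a linear trend plus one annual harmonic by ordinary least squares to a synthetic series; it is a generic decomposition under assumed values, not the specific Giovanni processing used in the study.

```python
import numpy as np

# Synthetic monthly series (e.g., aerosol optical thickness): trend + annual cycle + noise.
rng = np.random.default_rng(3)
n_months = 120
t = np.arange(n_months) / 12.0                       # time in years
y = 0.30 + 0.005 * t + 0.05 * np.sin(2 * np.pi * t) + 0.01 * rng.standard_normal(n_months)

# Design matrix: intercept, linear trend, annual sine and cosine terms.
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, trend_per_year, a_sin, a_cos = coef

print("trend per year:", trend_per_year)
print("annual amplitude:", np.hypot(a_sin, a_cos))
```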
Abstract:
The present study is an analysis of IR sources in the Alpha Persei open cluster region from the IRAS Point Source Catalog and from ground-based photometric observations. Cross-identification between stars in the region and the IRAS Point Source Catalog was performed, and nine new associations were found. BVRI Johnson photometry for 24 of the matched objects has been carried out. The physical identity of the visual and IRAS sources and their relationship to the Alpha Persei open cluster are discussed.
Abstract:
SEPServer is a three-year collaborative project funded by the Seventh Framework Programme (FP7-SPACE) of the European Union. The objective of the project is to provide access to state-of-the-art observations and analysis tools for the scientific community on solar energetic particle (SEP) events and related electromagnetic (EM) emissions. The project will eventually lead to a better understanding of the particle acceleration and transport processes at the Sun and in the inner heliosphere. These processes lead to SEP events that form one of the key elements of space weather. In this paper we present the first results from the systematic analysis work performed on the following datasets: SOHO/ERNE, SOHO/EPHIN, ACE/EPAM, Wind/WAVES and GOES X-rays. A catalogue of SEP events at 1 AU, with complete coverage over solar cycle 23, based on high-energy (~68 MeV) protons from SOHO/ERNE and electron recordings of the events by SOHO/EPHIN and ACE/EPAM, is presented. A total of 115 energetic particle events have been identified and analysed using velocity dispersion analysis (VDA) for protons and time-shifting analysis (TSA) for electrons and protons in order to infer the SEP release times at the Sun. EM observations during the times of the SEP event onsets have been gathered and compared to the release time estimates of the particles. Data from those events that occurred during European day-time, i.e., those that also have observations from ground-based observatories included in SEPServer, are listed and a preliminary analysis of their associations is presented. We find that VDA results for protons can be a useful tool for the analysis of proton release times, but if the derived proton path length is outside the range 1 AU < s ≤ 3 AU, the result of the analysis may be compromised, as indicated by the anti-correlation of the derived path length and the release time delay from the associated X-ray flare. The average path length derived from VDA is about 1.9 times the nominal length of the spiral magnetic field line. This implies that the path length of first-arriving MeV to deka-MeV protons is affected by interplanetary scattering. TSA of near-relativistic electrons results in a release time that shows significant scatter with respect to the EM emissions, but with a trend of being delayed more with increasing distance between the flare and the nominal footpoint of the Earth-connected field line.
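The velocity dispersion analysis mentioned above amounts to a straight-line fit of observed onset times against inverse particle speed, t_onset(E) = t_release + s / v(E): the slope gives the path length s and the intercept the solar release time. The sketch below shows that fit for hypothetical proton onset times; the energies and times are invented for illustration and are not SEPServer data.

```python
import numpy as np

M_P = 938.272  # proton rest-mass energy, MeV

def beta(E_MeV):
    """Relativistic speed (fraction of c) of a proton with kinetic energy E_MeV."""
    gamma = 1.0 + E_MeV / M_P
    return np.sqrt(1.0 - 1.0 / gamma**2)

C_AU_PER_MIN = 299792.458 * 60.0 / 1.495978707e8   # speed of light in AU per minute

# Hypothetical onset times (minutes after an arbitrary reference) at several energies.
E = np.array([10.0, 20.0, 40.0, 60.0])              # MeV
t_onset = np.array([85.0, 63.0, 48.0, 42.0])         # minutes

# Linear model: t_onset = t_release + s * [1 / (c * beta)]
x = 1.0 / (C_AU_PER_MIN * beta(E))                   # minutes of travel per AU of path
slope, intercept = np.polyfit(x, t_onset, 1)

print("path length s  [AU]:", slope)                 # ~1.3 AU for these made-up numbers
print("release time  [min]:", intercept)
```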
Abstract:
Characterizing the geological features and structures in three dimensions over inaccessible rock cliffs is needed to assess natural hazards such as rockfalls and rockslides, and also to perform investigations aimed at mapping geological contacts and building stratigraphic and fold models. Indeed, detailed 3D data, such as LiDAR point clouds, allow the hazard processes and the structure of geological features to be studied accurately, in particular on vertical and overhanging rock slopes. Thus, 3D geological models have great potential to be applied to a wide range of geological investigations, both in research and in applied geology projects such as mines, tunnels and reservoirs. Recent developments in ground-based remote sensing techniques (LiDAR, photogrammetry and multispectral / hyperspectral imaging) are revolutionizing the acquisition of morphological and geological information. As a consequence, there is great potential for improving the modeling of geological bodies as well as of failure mechanisms and stability conditions by integrating detailed remotely sensed data. During the past ten years several large rockfall events occurred along important transportation corridors where millions of people travel every year (Switzerland: Gotthard motorway and railway; Canada: Sea to Sky highway between Vancouver and Whistler). These events show that there is still a lack of knowledge concerning the detection of potential rockfalls, leaving mountain residential settlements and roads exposed to high risk. It is necessary to understand the main factors that destabilize rocky outcrops even when inventories are lacking and no clear morphological evidence of rockfall activity is observed. In order to improve the possibilities of forecasting potential future landslides, it is crucial to understand the evolution of rock slope stability. Defining the areas theoretically most prone to rockfalls can be particularly useful to simulate trajectory profiles and to generate hazard maps, which are the basis for land use planning in mountainous regions. The most important questions to address in order to assess rockfall hazard are: Where are the most probable sources of future rockfalls located? What are the frequencies of occurrence of these rockfalls? I characterized the fracturing patterns in the field and with LiDAR point clouds. Afterwards, I developed a model to compute the failure mechanisms on terrestrial point clouds in order to assess the susceptibility to rockfalls at the cliff scale. Similar procedures were already available to evaluate the susceptibility to rockfalls based on aerial digital elevation models. This new model makes it possible to detect the most susceptible rockfall sources with unprecedented detail in vertical and overhanging areas. The computed most probable rockfall source areas in the granitic cliffs of Yosemite Valley and the Mont-Blanc massif were then compared to inventoried rockfall events to validate the calculation methods. Yosemite Valley was chosen as a test area because it has particularly strong rockfall activity (about one rockfall every week), which leads to a high rockfall hazard. The west face of the Dru was also chosen for its significant rockfall activity and especially because it was affected by some of the largest rockfalls that occurred in the Alps during the last 10 years. Moreover, both areas were suitable because of their huge vertical and overhanging cliffs that are difficult to study with classical methods.
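A highly simplified sketch of the kind of per-point computation involved: estimate a local plane orientation around each LiDAR point from the smallest-eigenvalue eigenvector of the neighbourhood covariance, then flag points whose plane dips more steeply than an assumed friction angle, as a crude stand-in for a kinematic planar-sliding test. This is not the thesis's actual susceptibility model; the neighbourhood size, friction angle and synthetic points are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_dips(points, k=12):
    """Dip angle (degrees from horizontal) of the best-fit plane around
    each point, estimated from its k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    dips = np.empty(len(points))
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        # Plane normal = eigenvector of the covariance with the smallest eigenvalue.
        _, v = np.linalg.eigh(nbrs.T @ nbrs)
        normal = v[:, 0]
        dips[i] = np.degrees(np.arccos(abs(normal[2])))   # 0 deg = horizontal plane
    return dips

# Synthetic "cliff" patch: a plane dipping about 60 degrees, with some roughness.
rng = np.random.default_rng(4)
xy = rng.uniform(0, 10, (500, 2))
z = xy[:, 0] * np.tan(np.radians(60)) + 0.05 * rng.standard_normal(500)
pts = np.column_stack([xy, z])

dip = local_dips(pts)
friction_angle = 35.0                       # assumed value, for illustration only
print("fraction of points steeper than the friction angle:",
      (dip > friction_angle).mean())
```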
Limit equilibrium models have been applied to several case studies to evaluate the effects of different parameters on the stability of rock slope areas. The impact of the degradation of rock bridges on the stability of large compartments in the west face of the Dru was assessed using finite element modeling. In particular, I conducted a back-analysis of the large rockfall event of 2005 (265,000 m³) by integrating field observations of joint conditions, the characteristics of the fracturing pattern and the results of geomechanical tests on the intact rock. These analyses improved our understanding of the factors that influence the stability of rock compartments and were used to define the most probable future rockfall volumes at the Dru. Terrestrial laser scanning point clouds were also successfully employed to perform geological mapping in 3D, using the intensity of the backscattered signal. Another technique to obtain vertical geological maps is to combine a triangulated TLS mesh with 2D geological maps. At El Capitan (Yosemite Valley) we built a georeferenced vertical map of the main plutonic rocks that was used to investigate the reasons for the preferential rockwall retreat rate. Additional efforts to characterize the erosion rate were made at Monte Generoso (Ticino, southern Switzerland), where I attempted to improve the estimation of long-term erosion by also taking into account the volumes of the unstable rock compartments. Eventually, the following points summarize the main outputs of my research: the new model to compute the failure mechanisms and the rockfall susceptibility with 3D point clouds makes it possible to accurately define the most probable rockfall source areas at the cliff scale; the analysis of the rock bridges at the Dru shows the potential of integrating detailed measurements of the fractures in geomechanical models of rock mass stability; the correction of the LiDAR intensity signal makes it possible to classify a point cloud according to rock type and then use this information to model complex geological structures. The integration of these results, on rock mass fracturing and composition, with existing methods can improve rockfall hazard assessments and enhance the interpretation of the evolution of steep rock slopes.
Abstract:
Landslide processes can have direct and indirect consequences affecting human lives and activities. In order to improve landslide risk management procedures, this PhD thesis aims to investigate the capabilities of active LiDAR and RaDAR sensors for landslide detection and characterization at regional scales, spatial risk assessment over large areas, and slope instability monitoring and modelling at site-specific scales. At regional scales, we first demonstrated the capabilities of recent boat-based mobile LiDAR to model the topography of the Normandy coastal cliffs. By comparing annual acquisitions, we also validated our approach to detect surface changes and thus map rock collapses, landslides and toe erosion affecting the shoreline at the county scale. Then, we applied a spaceborne InSAR approach to detect large slope instabilities in Argentina. Based on both the phase and amplitude of the RaDAR signals, we extracted decisive information to detect, characterize and monitor two previously unknown, extremely slow landslides, and to quantify water-level variations of a nearby dam reservoir. Finally, advanced investigations on fragmental rockfall risk assessment were conducted along roads of the Val de Bagnes, by improving the Slope Angle Distribution approach and the FlowR software. Both rock-mass-failure susceptibilities and relative frequencies of block propagation were thereby assessed, and rockfall hazard and risk maps could be established at the valley scale. At slope-specific scales, in the Swiss Alps, we first integrated ground-based InSAR and terrestrial LiDAR acquisitions to map, monitor and model the Perraire rock slope deformation. By interpreting the two methods both individually and in an integrated way, we delimited the rockslide borders, computed volumes and highlighted non-uniform translational displacements along a wedge failure surface. Finally, we studied the specific requirements and practical issues encountered with early warning systems on some of the most studied landslides worldwide. As a result, we highlighted key recommendations for designing new reliable systems; in addition, we also underlined conceptual issues that must be solved to improve current procedures. To sum up, the diversity of situations investigated provided extensive experience that revealed the potential and limitations of both methods and also highlighted the necessity of their complementary and integrated use.
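A minimal illustration of the surface-change detection idea used at the regional scale: grid two point-cloud epochs onto the same raster, difference the elevations, and keep only cells whose change exceeds a detection threshold. The grid size, threshold and synthetic clouds are placeholders, not the boat-based LiDAR data themselves.

```python
import numpy as np
from scipy.stats import binned_statistic_2d

def to_dem(points, x_edges, y_edges):
    """Grid a point cloud (N x 3) into a mean-elevation raster."""
    dem, _, _, _ = binned_statistic_2d(points[:, 0], points[:, 1], points[:, 2],
                                       statistic="mean", bins=[x_edges, y_edges])
    return dem

rng = np.random.default_rng(5)
xy = rng.uniform(0, 100, (20000, 2))
z1 = 0.2 * xy[:, 0] + 0.02 * rng.standard_normal(20000)       # epoch-1 surface
z2 = z1.copy()
collapse = (xy[:, 0] > 60) & (xy[:, 0] < 70) & (xy[:, 1] < 20)
z2[collapse] -= 1.5                                            # simulated rockfall scar
cloud1 = np.column_stack([xy, z1])
cloud2 = np.column_stack([xy, z2])

edges = np.arange(0, 101, 2.0)                                 # 2 m cells
diff = to_dem(cloud2, edges, edges) - to_dem(cloud1, edges, edges)
threshold = 0.2                                                # metres, assumed detection limit
print("cells with significant elevation loss:", np.sum(diff < -threshold))
```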
Abstract:
The DEBRIS survey is carried out with the Herschel Space Telescope. It samples debris disks around stars in the solar neighbourhood. In the first part of this thesis, a polarimetric survey of 108 stars from the DEBRIS candidate list is presented. Using the polarimeter of the Observatoire du Mont-Mégantic, observations were carried out in order to detect the polarization due to the presence of debris disks. Because of the low detection rate of polarized stars, a statistical analysis was performed to compare the polarization of stars showing an infrared excess with that of stars showing none. Using Mie scattering theory, a model was built to predict the polarization due to a debris disk. The model results are consistent with the observations. The second part of this thesis presents optical tests of the POL-2 polarimeter, built at the Université de Montréal. The imager of the James Clerk Maxwell Telescope is moving from the SCUBA instrument to SCUBA-2, which will be at least one hundred times faster than its predecessor. The polarimeter follows this upgrade, and a new polarimeter, POL-2, was installed on SCUBA-2 in July 2010. To verify the optical performance of POL-2, tests were carried out in the submillimetre laboratories of the University of Western Ontario in June 2009 and of the University of Lethbridge in September 2009. These tests and their implications for future observations are discussed.
Abstract:
The Global Positioning System (GPS), with its high integrity, continuous availability and reliability, revolutionized navigation based on radio ranging. With four or more GPS satellites in view, a GPS receiver can find its location anywhere over the globe with an accuracy of a few meters. Higher accuracy, within centimeters or even millimeters, is achievable by correcting the GPS signal with an external augmentation system. The use of satellites for critical applications like navigation has become a reality through the development of these augmentation systems (such as WAAS, SDCM and EGNOS), whose primary objective is to provide the integrity information essential for navigation service in their respective regions. Apart from these, many countries have initiated the development of space-based regional augmentation systems, such as GAGAN and IRNSS of India, MSAS and QZSS of Japan, and COMPASS of China. In future, these regional systems will operate simultaneously and emerge as a Global Navigation Satellite System (GNSS) supporting a broad range of activities in the global navigation sector. Among the different error sources in GPS precise positioning, the propagation delay due to atmospheric refraction is a limiting factor on the achievable accuracy. Although WADGPS, aimed at accurate positioning over a large area, broadcasts the different errors involved in GPS ranging, including ionospheric and tropospheric errors, the large temporal and spatial variations of atmospheric parameters, especially in the lower atmosphere (troposphere), mean that the broadcast tropospheric corrections are not sufficiently accurate. This necessitates the estimation of the tropospheric error from realistic values of tropospheric refractivity. Presently available methodologies for estimating the tropospheric delay are mostly based on atmospheric data and GPS measurements from mid-latitude regions, where atmospheric conditions differ significantly from those over the tropics, and no such attempts had been made over the tropics. In a practical approach, when measured atmospheric parameters are not available, analytical models developed using mid-latitude data are the only option. The major drawback of these existing models is that they neglect the seasonal variation of atmospheric parameters at stations near the equator, and in the tropics they underestimate the delay on quite a few occasions. In this context, the present study is a first and major step towards the development of tropospheric delay models for the Indian region, which is a prime requisite for future space-based navigation programmes (GAGAN and IRNSS). Apart from models based on measured surface parameters, a region-specific model that does not require any measured atmospheric parameter as input, but depends only on latitude and day of the year, was developed for the tropical region with emphasis on the Indian sector. The large variability of atmospheric water vapor content on short spatial and/or temporal scales makes its measurement rather involved and expensive. A local network of GPS receivers is an effective tool for water vapor remote sensing over land, and this recently developed technique proves to be effective for measuring precipitable water (PW). The potential of using GPS to estimate atmospheric water vapor in all-weather conditions and with high temporal resolution is explored; this will be useful for retrieving columnar water vapor from ground-based GPS data.
A good GPS network could be a major source of water vapor information for Numerical Weather Prediction models and could act as a surrogate for the data gap in microwave remote sensing of water vapor over land.
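For concreteness, the sketch below evaluates a widely used Saastamoinen-type zenith hydrostatic delay from surface pressure and converts a zenith wet delay into precipitable water via the standard dimensionless factor. The constants, the weighted-mean-temperature approximation, and the example inputs are textbook values chosen for illustration, not the region-specific model developed in the thesis.

```python
import numpy as np

def zenith_hydrostatic_delay(p_hpa, lat_deg, h_km):
    """Saastamoinen-type zenith hydrostatic delay in metres,
    from surface pressure [hPa], latitude [deg] and station height [km]."""
    return 0.0022768 * p_hpa / (1.0 - 0.00266 * np.cos(2.0 * np.radians(lat_deg))
                                - 0.00028 * h_km)

def pw_from_zwd(zwd_m, t_surface_k):
    """Convert zenith wet delay [m] to precipitable water [m] using the
    dimensionless factor Pi and a commonly used mean-temperature approximation."""
    tm = 70.2 + 0.72 * t_surface_k          # weighted mean temperature [K] (approximation)
    rv = 461.5                               # specific gas constant of water vapour [J/(kg K)]
    k2p, k3 = 22.1, 3.739e5                  # refractivity constants [K/hPa], [K^2/hPa]
    pi_factor = 1.0e5 / (rv * (k2p + k3 / tm))   # ~0.15-0.17
    return pi_factor * zwd_m

# Hypothetical tropical station: 1008 hPa, 28 C, sea level, 10 deg N.
zhd = zenith_hydrostatic_delay(1008.0, 10.0, 0.0)
ztd = 2.55                                   # assumed total zenith delay from GPS [m]
zwd = ztd - zhd                              # wet part of the delay
print("ZHD [m]:", round(zhd, 3), " ZWD [m]:", round(zwd, 3),
      " PW [mm]:", round(1000 * pw_from_zwd(zwd, 28.0 + 273.15), 1))
```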
Abstract:
Comets have been spectacular objects in the night sky since the dawn of mankind. Because of their giant apparitions and enigmatic behavior, followed by coincidental calamities, they were deemed notorious and called 'bad omens'. Through the systematic study of these objects, the modern scientific community understood that they are part of our solar system. Comets are believed to be remnant bodies left over at the end of the evolution of the solar system and to preserve the material of the solar nebula. Hence, they are considered the most pristine objects that can provide information about the conditions in the solar nebula. They are small bodies of our solar system, with typical sizes of about a kilometer to a few tens of kilometers, orbiting the Sun in highly elliptical orbits. The solid body of a comet is the nucleus, a conglomerate mixture of water ice, dust and other gases. When the cometary nucleus advances towards the Sun in its orbit, the ices sublimate and produce the gaseous envelope around the nucleus called the coma. The gravity of a cometary nucleus is very small and hence cannot influence the motion of gases in the cometary coma. Though cometary nuclei are only a few kilometers in size, they can produce a transient, extensive, and expanding atmosphere several orders of magnitude larger in extent. By ejecting gas and dust into space, comets are the most active members of the solar system. The solar radiation and the solar wind influence the motion of dust and ions and produce the dust and ion tails, respectively. Comets have been observed in different spectral regions with rocket-, ground- and space-borne optical instruments. The observed emission intensities are used to quantify the chemical abundances of different species in comets. The study of the various physical and chemical processes that govern these emissions is essential before estimating chemical abundances in the coma. The Cameron band emission of the CO molecule has been used to derive the CO2 abundance in comets, based on the assumption that photodissociation of CO2 mainly produces these emissions. Similarly, the atomic oxygen visible emissions have been used to probe H2O in the cometary coma. The observed green ([OI] 5577 Å) to red-doublet ([OI] 6300 and 6364 Å) emission ratio has been used to confirm H2O as the parent species of these emissions. In this thesis a model is developed to understand the photochemistry of these emissions and is applied to several comets. The model-calculated emission intensities are compared with observations made by space-borne instruments, such as the International Ultraviolet Explorer (IUE) and the Hubble Space Telescope (HST), and by various ground-based telescopes.
Abstract:
The impact of selected observing systems on the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-yr reanalysis (ERA40) is explored by mimicking observational networks of the past. This is accomplished by systematically removing observations from the present observational data base used by ERA40. The observing systems considered are a surface-based system typical of the period prior to 1945/50, obtained by only retaining the surface observations, a terrestrial-based system typical of the period 1950-1979, obtained by removing all space-based observations, and finally a space-based system, obtained by removing all terrestrial observations except those for surface pressure. Experiments using these different observing systems have been limited to seasonal periods selected from the last 10 yr of ERA40. The results show that the surface-based system has severe limitations in reconstructing the atmospheric state of the upper troposphere and stratosphere. The terrestrial system has major limitations in generating the circulation of the Southern Hemisphere with considerable errors in the position and intensity of individual weather systems. The space-based system is able to analyse the larger-scale aspects of the global atmosphere almost as well as the present observing system but performs less well in analysing the smaller-scale aspects as represented by the vorticity field. Here, terrestrial data such as radiosondes and aircraft observations are of paramount importance. The terrestrial system in the form of a limited number of radiosondes in the tropics is also required to analyse the quasi-biennial oscillation phenomenon in a proper way. The results also show the dominance of the satellite observing system in the Southern Hemisphere. These results all indicate that care is required in using current reanalyses in climate studies due to the large inhomogeneity of the available observations, in particular in time.