15 results for tropospheric
at Queensland University of Technology - ePrints Archive
Abstract:
This paper presents preliminary results in establishing a strategy for predicting Zenith Tropospheric Delay (ZTD) and relative ZTD (rZTD) between Continuously Operating Reference Stations (CORS) in near real-time. It is anticipated that the predicted ZTD or rZTD can improve network-based Real-Time Kinematic (RTK) performance over long inter-station distances, ultimately enabling a cost-effective method of delivering precise positioning services to sparsely populated regional areas, such as Queensland. This research first investigates two ZTD solutions: 1) the post-processed IGS ZTD solution and 2) the near real-time (NRT) ZTD solution. The NRT solution is obtained through the GNSS processing software package (Bernese) deployed for this project. The predictability of the NRT Bernese solution is analyzed and compared to the post-processed IGS solution, which acts as the benchmark. The predictability analyses were conducted with prediction intervals of 15, 30, 45, and 60 minutes to determine the error with respect to timeliness. The predictability of ZTD and relative ZTD is characterized by using the previously estimated ZTD as the predicted ZTD of the current epoch. This research has shown that both the ZTD and rZTD prediction errors are random in nature; the standard deviation (STD) grows from a few millimeters to the sub-centimeter level as the prediction interval increases from 15 to 60 minutes. Additionally, the rZTD predictability shows very little dependency on the length of the tested baselines of up to 1000 kilometers. Finally, the comparison of the NRT Bernese solution with the IGS solution shows a slight degradation in prediction accuracy: the less accurate NRT solution has an STD error of 1 cm within a prediction interval of 50 minutes, although some larger errors of up to 10 cm are observed.
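As a rough illustration of the persistence-style prediction used above to characterize ZTD predictability, the sketch below carries the ZTD estimated at an earlier epoch forward as the prediction for the current epoch and reports the standard deviation of the resulting errors for several prediction intervals. It is a minimal sketch: the function name, the 15-minute sampling interval, and the synthetic ZTD series are illustrative assumptions, not data or code from the study.

```python
import numpy as np

def persistence_prediction_errors(ztd, epoch_minutes, lead_minutes):
    """Predict ZTD by carrying the value estimated lead_minutes earlier
    forward to the current epoch, and return the prediction errors."""
    shift = int(lead_minutes / epoch_minutes)   # number of epochs to look back
    predicted = ztd[:-shift]                    # earlier estimates used as predictions
    observed = ztd[shift:]                      # ZTD actually estimated later
    return observed - predicted

# Hypothetical 15-minute ZTD series in metres (placeholder random walk, not real data).
ztd_series = 2.40 + np.cumsum(0.001 * np.random.randn(400))

for lead in (15, 30, 45, 60):                   # prediction intervals in minutes
    errors = persistence_prediction_errors(ztd_series, epoch_minutes=15, lead_minutes=lead)
    print(f"{lead:2d} min lead: STD = {1000 * errors.std():.1f} mm")
```

With a random-walk-like series, the error STD grows with the prediction interval, mirroring the qualitative behaviour reported in the abstract.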
Abstract:
Exposure to ultraviolet radiation (UV) results in both damaging and beneficial health outcomes. Excessive UV exposure has been linked to many skin and eye problems, but moderate exposure induces vitamin D production. It has been reported that humans receive 90-95% of their vitamin D from production that begins after UV exposure. Although it is possible to acquire vitamin D through dietary supplementation, the average person receives very little in this manner. Therefore, since most people acquire their vitamin D from synthesis following exposure to UV from sunlight, it is very important to understand the different environments in which people encounter UV. This project measured UV radiation and in-vitro vitamin D production in the urban canyon and at a nearby suburban location. The urban canyon is an environment of tall buildings and tropospheric air pollution, both of which attenuate UV. Typically, UV measurements are collected in areas outside the urban canyon, meaning that studies and public recommendations do not always accurately represent the amount of UV reaching street level in highly urbanized areas. Understanding UV exposure in urban canyons becomes increasingly important as the number of people working and living in large cities steadily increases worldwide. This study was conducted in the central business district (CBD) of Brisbane, Australia, which models the urban canyons of large cities around the world: it has a great number of tall buildings, including many skyscrapers, so most areas receive only a small amount of direct sunlight each day. During the winter of 2007, measurements of UV radiation and in-vitro vitamin D production were collected in the CBD and at a suburban site approximately 2.5 km outside the CBD. Air pollution data were obtained from a central CBD measurement site. Data analysis showed that urban canyon measurements of both UV radiation and in-vitro vitamin D production were significantly lower than those collected at the suburban site. These results will aid both future researchers and policy makers in better understanding human UV exposure in Brisbane's CBD and other urban canyons around the world.
Abstract:
Much landscape architectural form seems like hackneyed modernism, whether it be orthogonal or biomorphic, 'formal or informal', and does not seem to get to grips with the truly complex nature of the landscape, making any project seem potentially simplistic. This is largely because it has inherited languages from architecture that are based around objects, and that therefore can act to make designs self-referential rather than edgy instances in a dialogue much larger than the site itself, connected to systems that are unavoidable even if one chooses to ignore them. These systems constitute a formal language, even if landscape architecture looks to things like GIS to engage with them. Tropospheric Temperament was an Advanced Computing subject for second-year landscape architecture students at UWA, taught by Julian Raxworthy and Rene Van Meeuwen, which ran in Semester 1, 2004. For this subject, the question was: how can we learn to wield such systems in design terms, even if they are developed through un-self-conscious natural and vernacular forces?
Abstract:
Real-time kinematic (RTK) GPS techniques have been extensively developed for applications including surveying, structural monitoring, and machine automation. Limitations of the existing RTK techniques that hinder their application for geodynamics purposes are twofold: (1) the achievable RTK accuracy is at the level of a few centimeters, and the uncertainty of the vertical component is 1.5-2 times worse than that of the horizontal components; and (2) the RTK position uncertainty grows in proportion to the base-to-rover distance. The key limiting factor behind these problems is the significant effect of residual tropospheric errors on the positioning solutions, especially on the highly correlated height component. This paper develops a geometry-specified troposphere decorrelation strategy to achieve subcentimeter kinematic positioning accuracy in all three components. The key is to set up a relative zenith tropospheric delay (RZTD) parameter to absorb the residual tropospheric effects and to solve the established model as an ill-posed problem using the regularization method. In order to compute a reasonable regularization parameter and obtain an optimal regularized solution, the covariance matrix of the positional parameters estimated without the RZTD parameter, which is characterized by the observation geometry, is used to replace the quadratic matrix of their "true" values. As a result, the regularization parameter is adaptively computed as the observation geometry varies. The experimental results show that the new method can efficiently alleviate the model's ill condition and stabilize the solution from a single data epoch. Compared to the results from the conventional least squares method, the new method can improve the long-range RTK solution precision from several centimeters to the subcentimeter level in all components; more significantly, the precision of the height component is even higher. Several geoscience applications that require subcentimeter real-time solutions can largely benefit from the proposed approach, such as real-time monitoring of earthquakes and large dams, high-precision GPS leveling, and refinement of the vertical datum. In addition, the high-resolution RZTD solutions can contribute to effective recovery of tropospheric slant path delays in order to establish a 4-D troposphere tomography.
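The regularized estimation described above can be sketched as a Tikhonov-type solution of the ill-posed single-epoch model. In the sketch below the design matrix, weights, and the rule for deriving the regularization parameter from the geometry-only covariance are illustrative placeholders only; the paper's exact adaptive rule is not reproduced, and all names and data are hypothetical.

```python
import numpy as np

def regularized_solution(A, P, l, R, alpha):
    """Tikhonov-type regularized least squares: x = (A^T P A + alpha R)^-1 A^T P l."""
    N = A.T @ P @ A
    return np.linalg.solve(N + alpha * R, A.T @ P @ l)

def geometry_based_alpha(A, P, sigma0_sq=1.0):
    """Illustrative placeholder: derive a scalar regularization parameter from the
    covariance of the parameters estimated without the RZTD term, so that alpha
    adapts to the observation geometry (not the paper's exact formula)."""
    N_geom = A[:, :-1].T @ P @ A[:, :-1]        # normal matrix without the RZTD column
    Q_geom = sigma0_sq * np.linalg.inv(N_geom)  # geometry-driven covariance
    return sigma0_sq / np.trace(Q_geom)

# Hypothetical single-epoch system: 8 satellites, parameters [dx, dy, dz, RZTD].
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4))
P = np.eye(8)
x_true = np.array([0.01, -0.02, 0.03, 0.005])
l = A @ x_true + 0.003 * rng.normal(size=8)

R = np.eye(4)                                   # simple regularization matrix
alpha = geometry_based_alpha(A, P)
print(regularized_solution(A, P, l, R, alpha))
```

The stabilizing term alpha * R keeps the augmented normal matrix well conditioned even when adding the RZTD parameter makes the unregularized normal matrix nearly singular.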
Abstract:
In spite of significant research into the development of efficient algorithms for three-carrier ambiguity resolution, the full performance potential of the additional frequency signals cannot be demonstrated effectively without actual triple-frequency data. In addition, all the proposed algorithms have difficulty reliably resolving the medium-lane and narrow-lane ambiguities in various long-range scenarios. In this contribution, we investigate the effects of various distance-dependent biases, identifying the tropospheric delay as the key limitation for long-range three-carrier ambiguity resolution. In order to achieve reliable ambiguity resolution in regional networks with inter-station distances of hundreds of kilometers, a new geometry-free and ionosphere-free model is proposed to fix the integer ambiguities of the medium-lane or narrow-lane observables over just several minutes, without distance constraints. Finally, a semi-simulation method is introduced to generate the third-frequency signals from dual-frequency GPS data and experimentally demonstrate the research findings of this paper.
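To make the geometry-free, ionosphere-free idea concrete, the sketch below computes one textbook set of combination coefficients for triple-frequency carrier phases expressed in metres, such that the geometric range cancels (coefficients sum to zero) and the first-order ionospheric delay cancels. This is a generic construction under standard GPS L1/L2/L5 frequencies, not necessarily the specific model proposed in the paper.

```python
import numpy as np

# GPS carrier frequencies in MHz (L1, L2, L5).
F = np.array([1575.42, 1227.60, 1176.45])

def gfif_coefficients(freqs):
    """Solve for coefficients a = (a1, a2, a3), with a1 fixed to 1, such that the
    combined phase (in metres) is geometry-free (sum a_i = 0) and first-order
    ionosphere-free (sum a_i * (f1/f_i)^2 = 0), leaving ambiguities and noise."""
    iono = (freqs[0] / freqs) ** 2
    M = np.array([[1.0,     1.0],
                  [iono[1], iono[2]]])
    rhs = np.array([-1.0, -iono[0]])
    a2, a3 = np.linalg.solve(M, rhs)
    return np.array([1.0, a2, a3])

a = gfif_coefficients(F)
print("coefficients:", a)
print("geometry term:", a.sum())                      # ~0 by construction
print("iono term:", (a * (F[0] / F) ** 2).sum())      # ~0 by construction
```

Because both the geometry and the ionosphere cancel, the combined observable depends essentially on the ambiguities, which is what allows fixing them without a distance constraint.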
Abstract:
Sampling of the El Chichón stratospheric cloud in early May and in late July 1982 showed that a significant proportion of the cloud consisted of solid particles between 2 μm and 40 μm in size. In addition, many particles may have been part of larger aggregates or clusters that ranged in size from < 10 μm to > 50 μm. The majority of individual grains were angular aluminosilicate glass shards with various amounts of smaller, adhering particles. Surface features on individual grains include sulfuric acid droplets and larger (0.5 μm to 1 μm) sulfate gel droplets with various amounts of Na, Mg, Ca and Fe. The sulfate gels probably formed by the interaction of sulfur-rich gases and solid particles within the cloud soon after eruption. Ca-sulfate laths may have formed by condensation within the plume during eruption or, alternatively, at a later stage by the reaction of sulfuric acid aerosols with ash fragments within the stratospheric cloud. A Wilson-Huang formulation for the settling rate of individual particles qualitatively agrees with the observed particle-size distribution for a period of at least four months after injection of material into the stratosphere. This result emphasizes the importance of particle shape in controlling the settling rate of volcanic ash from the stratosphere.
Abstract:
Particle collections from the stratosphere via either the JSC Curatorial Program or the U2 Program (NASA Ames) occur between 16 km and 19 km altitude and are usually part of ongoing experiments to measure parameters related to the aerosol layer. Fine-grained aerosols (<0.1 µm) occur in the stratosphere up to 35 km altitude and are concentrated between 15 km and 25 km altitude [1]. All interplanetary dust particles (IDP's) from these stratospheric collections must pass through this aerosol layer before reaching the collection altitude. The major compounds in this aerosol layer are sulfur-rich particulates (<0.1 µm) and gases, and include H2SO4, OCS, SO2 and CS2 [2]. In order to assess possible surface reactions of IDP's with ambient aerosols in the stratosphere, we have initiated a Surface Auger Microprobe (SAM) and electron microscope study of selected particles from the JSC Cosmic Dust Collection.
Abstract:
Over the past two decades, flat-plate particle collections have revealed the presence of a remarkable variety of both terrestrial and extraterrestrial material in the stratosphere [1-6]. The ratio of terrestrial to extraterrestrial material and the nature of the material collected may vary over observable time scales. Variations in particle number density can be important since the Earth's atmospheric radiation balance, and therefore the Earth's climate, can be influenced by particulate absorption and scattering of radiation from the Sun and Earth [7-9]. In order to assess the number density of solid particles in the stratosphere, we have examined a representative fraction of the solid particles from two flat-plate collection surfaces whose collection dates are separated in time by 5 years.
Abstract:
Chondritic porous aggregates (CPA's) belong to an important subset of small particles (usually between 5 and 50 micrometers) collected from the stratosphere by high-flying aircraft. These aggregates are approximately chondritic in elemental abundance and are composed of many thousands of smaller, submicrometer particles. CPA particles have been the subject of intensive study during the past few years [1-3], and there is strong evidence that they are a new class of extraterrestrial material not represented in the meteorite collection [3,4]. However, CPA's may be related to carbonaceous chondrites and, in fact, both may be part of a continuum of primitive extraterrestrial materials [5]. The importance of CPA's stems from suggestions that they are very primitive solar system material, possibly derived from early-formed protoplanets, chondritic parent bodies, or comets [3,6]. To better understand the origin and evolution of these particles, we have attempted to summarize all of the mineralogical data on identified CPA's published since about 1976.
Abstract:
Collections of solid particles from the Earth's stratosphere by high-flying aircraft have been reported since 1965, with the initial primary objective of understanding the nature of the aerosol layer that occurs in the lower stratosphere. With the advent of efficient collection procedures and sophisticated electron- and ion-beam techniques, the primary aim of current stratospheric collections has been to study specific particle types that are extraterrestrial in origin and have survived atmospheric entry processes. The collection program provided by NASA at the Johnson Space Center (JSC) has conducted many flights over the past 4 years and retrieved a total of 99 collection surfaces (flags) suitable for detailed study. Most of these collections are part of dedicated flights and have occurred during volcanically quiescent periods, although solid particles from the El Chichón eruptions have also been collected. Over 800 individual particles (or representative samples from larger aggregates) have been picked from these flags, examined in a preliminary fashion by SEM and EDS, and cataloged in a manner suitable for selection and study by the wider scientific community.
Abstract:
Currently, GNSS computing modes fall into two classes: network-based data processing and user receiver-based processing. A GNSS reference receiver station essentially contributes raw measurement data, either in the RINEX file format or as real-time data streams in the RTCM format, and very little computation is carried out by the reference station itself. The existing network-based processing modes, regardless of whether they are executed in real-time or post-processed modes, are centralised or sequential. This paper describes a distributed GNSS computing framework that incorporates three GNSS modes: reference station-based, user receiver-based and network-based data processing. Raw data streams from each GNSS reference receiver station are processed in a distributed manner, i.e., either at the station itself or at a hosting data server/processor, to generate station-based solutions, or reference receiver-specific parameters. These may include the precise receiver clock, zenith tropospheric delay, differential code biases, ambiguity parameters and ionospheric delays, as well as line-of-sight information such as azimuth and elevation angles. Covariance information for the estimated parameters may also optionally be provided. In such a mode, nearby precise point positioning (PPP) or real-time kinematic (RTK) users can directly use the corrections from all or some of the stations for real-time precise positioning via a data server. At the user receiver, PPP and RTK techniques are unified under the same observation models, and the distinction lies in how the user receiver software deals with corrections from the reference station solutions and the ambiguity estimation in the observation equations. Numerical tests demonstrate good convergence behaviour for differential code bias and ambiguity estimates derived individually with single reference stations. With station-based solutions from three reference stations within distances of 22–103 km, the user receiver positioning results, under various schemes, show an accuracy improvement of the proposed station-augmented PPP and ambiguity-fixed PPP solutions with respect to the standard float PPP solutions without station augmentation and ambiguity resolution. Overall, the proposed reference station-based GNSS computing mode can support PPP and RTK positioning services as a simpler alternative to the existing network-based RTK or regionally augmented PPP systems.
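The kind of station-based solution exchanged in such a distributed framework can be pictured as a simple per-epoch message. The sketch below is a hypothetical container whose field names follow the parameter list in the abstract (receiver clock, zenith tropospheric delay, code biases, ambiguities, ionospheric delays, azimuth/elevation, optional covariance); it is not the paper's actual message format.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class StationSolution:
    """Hypothetical container for one reference-station-based solution epoch;
    field names and units are illustrative, not the paper's specification."""
    station_id: str
    epoch: float                                    # e.g. GPS seconds of week
    receiver_clock: float                           # metres
    zenith_trop_delay: float                        # metres
    code_biases: Dict[str, float]                   # differential code biases per signal
    ambiguities: Dict[str, float]                   # per-satellite ambiguity parameters
    iono_delays: Dict[str, float]                   # per-satellite slant ionospheric delays
    azimuth: Dict[str, float]                       # per-satellite azimuth, degrees
    elevation: Dict[str, float]                     # per-satellite elevation, degrees
    covariance: Optional[List[List[float]]] = None  # optional covariance of the estimates
```

A nearby PPP or RTK user would subscribe to such messages from one or more stations via a data server and apply them as corrections in its own observation equations.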
Abstract:
Hailstones in wet growth are commonly found in thunderclouds. While the ice-ice relative growth rate mechanism is generally accepted as the most likely cause of thunderstorm electrification, it is uncertain whether this mechanism operates under wet growth conditions, because ice crystals are more likely to stick to the wet surface of a hailstone than to bounce off it. Laboratory experiments were carried out to investigate whether any charge was separated when vapor-grown ice crystals bounced off a wet hailstone. A cloud of supercooled droplets, with and without ice crystals, was drawn past a simulated hailstone. In the dry growth regime, the hailstone charged strongly positive when droplets and crystals co-existed in the cloud; with only droplets in the cloud, there was no charging in the dry growth regime. However, as the hailstone attained wet growth, positive charging currents of about 0.5 and 3.5 pA were observed at 12 and 20 m s⁻¹, respectively. We hypothesize that this observed charging was due to the evaporation of melt water. This so-called Dinger-Gunn effect is due to the ejection of negatively charged minute droplets produced by air bubbles bursting at the surface of the melt water. However, the charge separated in wet growth was an order of magnitude smaller than that in dry growth, and we therefore conclude that it is unlikely to play an important role in the electrification of thunderstorms.
Abstract:
Land-use regression (LUR) is a technique that can improve the accuracy of air pollution exposure assessment in epidemiological studies. Most LUR models are developed for single cities, which places limitations on their applicability to other locations. We sought to develop a model to predict nitrogen dioxide (NO2) concentrations with national coverage of Australia by using satellite observations of tropospheric NO2 columns combined with other predictor variables. We used a generalised estimating equation (GEE) model to predict annual and monthly average ambient NO2 concentrations measured by a national monitoring network from 2006 through 2011. The best annual model explained 81% of spatial variation in NO2 (absolute RMS error = 1.4 ppb), while the best monthly model explained 76% (absolute RMS error = 1.9 ppb). We applied our models to predict NO2 concentrations at the ~350,000 census mesh blocks across the country (a mesh block is the smallest spatial unit in the Australian census). National population-weighted average concentrations ranged from 7.3 ppb (2006) to 6.3 ppb (2011). We found that a simple approach using tropospheric NO2 column data yielded models with slightly better predictive ability than those produced using a more involved approach that required simulation of surface-to-column ratios. The models were capable of capturing within-urban variability in NO2, and offer the ability to estimate ambient NO2 concentrations at monthly and annual time scales across Australia from 2006 to 2011. We are making our model predictions freely available for research.
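A GEE model of the general kind described above can be fitted with standard tools. The sketch below is a minimal illustration: the input file, column names, predictor set and correlation structure are assumptions for the sake of the example, not the study's actual data or specification.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical input: one row per monitoring site and month, with the measured
# NO2 concentration, the satellite tropospheric NO2 column, and land-use predictors.
# File and column names are placeholders only.
df = pd.read_csv("no2_monitoring_with_predictors.csv")

# GEE with monitoring sites as the clustering unit, accounting for repeated
# measurements at the same monitor over time.
model = smf.gee(
    "no2_ppb ~ sat_no2_column + major_road_length + population_density",
    groups="site_id",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.summary())
```

Once fitted, such a model can be applied to predictor values computed for each census mesh block to produce the national concentration surfaces described in the abstract.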