969 results for Long baseline


Relevance: 100.00%

Abstract:

This paper presents preliminary results in establishing a strategy for predicting Zenith Tropospheric Delay (ZTD) and relative ZTD (rZTD) between Continuously Operating Reference Stations (CORS) in near real-time. It is anticipated that the predicted ZTD or rZTD can assist network-based Real-Time Kinematic (RTK) performance over long inter-station distances, ultimately enabling a cost-effective method of delivering precise positioning services to sparsely populated regional areas, such as Queensland. This research first investigates two ZTD solutions: 1) the post-processed IGS ZTD solution, and 2) the near real-time ZTD solution. The near real-time solution is obtained through the GNSS processing software package (Bernese) deployed for this project. The predictability of the near real-time Bernese solution is analyzed and compared to the post-processed IGS solution, which acts as the benchmark. The predictability analyses were conducted with prediction intervals of 15, 30, 45, and 60 minutes to determine the error with respect to timeliness. The predictability of ZTD and relative ZTD is characterized by using the previously estimated ZTD as the predicted ZTD of the current epoch. This research has shown that both the ZTD and relative ZTD prediction errors are random in nature; the standard deviation (STD) grows from a few millimeters to sub-centimeter level as the prediction interval ranges from 15 to 60 minutes. Additionally, the rZTD predictability shows very little dependency on the length of the tested baselines of up to 1000 kilometers. Finally, the comparison of the near real-time Bernese solution with the IGS solution has shown a slight degradation in prediction accuracy: the less accurate NRT solution has an STD error of 1 cm within a prediction delay of 50 minutes, although some larger errors of up to 10 cm are observed.
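The persistence-style prediction described above (using the previously estimated ZTD as the forecast for the current epoch) can be sketched as follows; the ZTD series, its 5-minute sampling, and the error magnitudes are synthetic illustrations, not the study's data:

```python
import numpy as np

def persistence_prediction_std(ztd, sample_interval_min, delays_min):
    """For each prediction delay, use the ZTD estimated `delay` minutes earlier
    as the prediction for the current epoch, and return the standard deviation
    of the resulting prediction errors."""
    stds = {}
    for delay in delays_min:
        lag = delay // sample_interval_min
        errors = ztd[lag:] - ztd[:-lag]  # predicted-minus-actual differences
        stds[delay] = float(errors.std())
    return stds

# Synthetic one-day ZTD series in metres, sampled every 5 minutes:
# a slow diurnal variation plus millimetre-level estimation noise.
rng = np.random.default_rng(0)
t = np.arange(0, 24 * 60, 5)
ztd = 2.4 + 0.01 * np.sin(2 * np.pi * t / 1440) + rng.normal(0, 0.002, t.size)

print(persistence_prediction_std(ztd, 5, [15, 30, 45, 60]))
```

As in the paper, the error statistics grow with the prediction interval because the atmosphere drifts away from the last estimate.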

Relevance: 100.00%

Abstract:

Many large-scale GNSS CORS networks have been deployed around the world to support various commercial and scientific applications. To use these networks for real-time kinematic positioning services, one of the major challenges is ambiguity resolution (AR) over long inter-station baselines in the presence of considerable atmospheric biases. Usually, the widelane ambiguities are fixed first, followed by determination of the narrowlane ambiguity integers based on the ionosphere-free model, in which the widelane integers are introduced as known quantities. This paper seeks to improve AR performance over long baselines through efficient procedures for improved float solutions and ambiguity fixing. The contribution is threefold: (1) instead of using the ionosphere-free measurements, absolute and/or relative ionospheric constraints are introduced in an ionosphere-constrained model to enhance the model strength, resulting in better float solutions; (2) realistic widelane ambiguity precision is estimated by capturing the multipath effects due to the observation complexity, improving the reliability of widelane AR; (3) for the narrowlane AR, partial AR is applied to a subset of ambiguities selected in order of increasing elevation. For fixing a scalar ambiguity, a rounding method with a controllable error probability is proposed. The established ionosphere-constrained model can be solved efficiently with a sequential Kalman filter; it can either be reduced to special cases simply by adjusting the variances of the ionospheric constraints, or extended with more parameters and constraints. The presented methodology is tested over seven baselines of around 100 km from the USA CORS network. The results show that the new widelane AR scheme achieves a 99.4% success rate with a 0.6% failure rate, while the new rounding method for narrowlane AR achieves an 89% fixing rate with a 0.8% failure rate. In summary, AR reliability can be efficiently improved with a rigorously controlled probability of incorrectly fixed ambiguities.
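The scalar rounding test with a controllable error probability can be illustrated with a simplified sketch based on the standard rounding success-rate bound for a zero-mean Gaussian ambiguity error; the threshold value and the exact criterion used in the paper may differ:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def try_fix(float_amb, sigma, max_fail_prob=0.001):
    """Round a scalar float ambiguity to the nearest integer only if the
    rounding failure probability, under a zero-mean Gaussian error with
    standard deviation `sigma` cycles, stays below `max_fail_prob`.
    Returns (value, fixed_flag)."""
    success = 2.0 * phi(0.5 / sigma) - 1.0  # P(|error| < 0.5 cycle)
    if 1.0 - success <= max_fail_prob:
        return round(float_amb), True
    return float_amb, False

print(try_fix(5.12, 0.10))  # precise enough -> fixed to the integer 5
print(try_fix(5.12, 0.30))  # too noisy -> kept as a float ambiguity
```

This captures the key idea: the decision to fix is driven by a bound on the probability of an incorrect integer, not by the residual alone.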

Relevance: 100.00%

Abstract:

Breen, A.; Fallows, R. A.; Thomasson, P.; Bisi, M. M., 'Extremely long baseline interplanetary scintillation measurements of solar wind velocity', Journal of Geophysical Research (2006) 111(A8), A08104.

Relevance: 100.00%

Abstract:

We present a new technique for obtaining model fits to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources, each characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required when analyzing images constructed from a poorly sampled (u, v) plane.
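A minimal sketch of the cross-entropy model-fitting idea, here fitting a single circular Gaussian source (position and amplitude only, with a fixed width) rather than the paper's six-parameter elliptical components; the grid size, sample counts, and starting distribution are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_image(params, grid):
    """Circular Gaussian source on a pixel grid: params = (x0, y0, amplitude),
    with a fixed width of 2 pixels (a simplification of the elliptical case)."""
    x0, y0, amp = params
    xx, yy = grid
    return amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * 2.0 ** 2))

def cross_entropy_fit(observed, grid, n_samples=300, n_elite=30, n_iter=50):
    """Cross-entropy optimisation: sample candidate parameters, score each by
    the summed squared difference to the observed image (the performance
    function), and refit the sampling distribution to the elite samples."""
    mean = np.array([16.0, 16.0, 1.0])   # start from the image centre
    std = np.array([8.0, 8.0, 2.0])      # broad initial search
    for _ in range(n_iter):
        samples = rng.normal(mean, std, size=(n_samples, 3))
        costs = np.array([np.sum((gaussian_image(p, grid) - observed) ** 2)
                          for p in samples])
        elite = samples[np.argsort(costs)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

xx, yy = np.meshgrid(np.arange(32.0), np.arange(32.0))
observed = gaussian_image((12.0, 20.0, 3.0), (xx, yy))  # noise-free benchmark
fit = cross_entropy_fit(observed, (xx, yy))
print(fit)  # should approach the true parameters (12, 20, 3)
```

The per-iteration cost evaluation over hundreds of samples is also why the method is orders of magnitude slower than gradient-style tasks such as IMFIT.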

Relevance: 100.00%

Abstract:

We present a general formalism for extracting information on the fundamental parameters associated with neutrino masses and mixings from two or more long baseline neutrino oscillation experiments. This formalism is then applied to the current most likely experiments using neutrino beams from the Japan Hadron Facility (JHF) and Fermilab's NuMI beamline. Different combinations of muon neutrino and muon anti-neutrino running are considered. The type of neutrino mass hierarchy is extracted using the effects of matter on neutrino propagation. Contrary to naive expectation, we find that running both beams with neutrinos is more suitable for determining the hierarchy, provided that the neutrino energy divided by baseline (E/L) for NuMI is smaller than or equal to that of JHF, whereas to determine the small mixing angle θ13 and the CP- or T-violating phase δ, running one beam with neutrinos and the other with anti-neutrinos is most suitable. We make extensive use of bi-probability diagrams for both understanding and extracting the physics involved in such comparisons.
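The role of E/L can be illustrated with the two-flavour vacuum oscillation probability; this omits the matter and CP-phase effects that the formalism actually exploits, and the baselines and energies below are only representative values, not the experiments' exact parameters:

```python
from math import radians, sin

def vacuum_prob(theta_deg, dm2_eV2, L_km, E_GeV):
    """Two-flavour vacuum oscillation probability:
    P = sin^2(2*theta) * sin^2(1.267 * dm2[eV^2] * L[km] / E[GeV])."""
    return sin(2.0 * radians(theta_deg)) ** 2 * \
        sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# Two setups with (nearly) the same L/E give (nearly) the same vacuum
# probability; matter effects, which grow with baseline and break this
# degeneracy, are what allow the hierarchy to be extracted.
p_short = vacuum_prob(45.0, 2.5e-3, 295.0, 0.6)   # JHF-like L and E
p_long = vacuum_prob(45.0, 2.5e-3, 735.0, 1.5)    # NuMI-like L and E
print(p_short, p_long)
```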

Relevance: 100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Abstract:

In this thesis the use of wide-field imaging techniques and VLBI observations with a limited number of antennas is explored. I present techniques to efficiently and accurately image extremely large UV datasets. Very large VLBI datasets must be reduced into multiple, smaller datasets if today's imaging algorithms are to be used on them. I present a procedure for accurately shifting the phase centre of a visibility dataset. This procedure has been thoroughly tested and found to be almost two orders of magnitude more accurate than existing techniques: errors have been found at the level of one part in 1.1 million, which is unlikely to be measurable except in the very largest UV datasets. Results of a four-station VLBI observation of a field containing multiple sources are presented. A 13-gigapixel image was constructed to search for sources across the entire primary beam of the array by generating over 700 smaller UV datasets. The source 1320+299A was detected and its astrometric position with respect to the calibrator J1329+3154 is presented. Various techniques for phase calibration and imaging across this field are explored, including using the detected source as an in-beam calibrator and peeling distant confusing sources from VLBI visibility datasets. A range of issues pertaining to wide-field VLBI has been explored, including: parameterising the wide-field performance of VLBI arrays; estimating the sensitivity across the primary beam for both homogeneous and heterogeneous arrays; applying techniques such as mosaicing and primary beam correction to VLBI observations; quantifying the effects of time-average and bandwidth smearing; and calibration and imaging of wide-field VLBI datasets. The performance of a computer cluster at the Istituto di Radioastronomia in Bologna has been characterised with regard to its ability to correlate using the DiFX software correlator. Using existing software it was possible to characterise the network speed, particularly for MPI applications. The capabilities of the DiFX software correlator, running on this cluster, were measured for a range of observation parameters and shown to be commensurate with the generic performance parameters measured. The feasibility of an Italian VLBI array has been explored, with discussion of the infrastructure required, the performance of such an array, possible collaborations, and the science which could be achieved. Results from a 22 GHz calibrator survey are also presented: 21 out of 33 sources were detected on a single baseline between two Italian antennas (Medicina to Noto). The results and discussions presented in this thesis suggest that wide-field VLBI is a technique whose time has finally come. Prospects for exciting new science are discussed in the final chapter.
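At its core, shifting the phase centre of a visibility dataset amounts to a per-visibility phase rotation; this sketch assumes a simple 2-D (u, v) geometry and one sign convention, whereas the accuracy figures quoted above come from handling the full problem (including w-terms):

```python
import numpy as np

def shift_phase_centre(vis, u, v, dl, dm):
    """Rotate visibilities so the phase centre moves by direction cosines
    (dl, dm); u and v are baseline coordinates in wavelengths."""
    return vis * np.exp(-2j * np.pi * (u * dl + v * dm))

# A unit point source offset from the phase centre shows fringes across
# baselines; after shifting the phase centre onto it, every visibility
# becomes 1 (the sign convention here is illustrative).
u = np.array([100.0, 250.0, -400.0])
v = np.array([50.0, -120.0, 300.0])
dl, dm = 1e-3, -5e-4
vis = np.exp(2j * np.pi * (u * dl + v * dm))
shifted = shift_phase_centre(vis, u, v, dl, dm)
print(np.allclose(shifted, 1.0))  # True
```

Generating many smaller datasets across the primary beam, as in the 13-gigapixel search, repeats this rotation once per target phase centre.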

Relevance: 100.00%

Abstract:

The next-generation neutrino observatory proposed by the LBNO collaboration will address fundamental questions in particle and astroparticle physics. The experiment consists of a far detector, in its first stage a 20 kt double-phase liquid argon (LAr) TPC and a magnetised iron calorimeter, situated 2300 km from CERN, and a near detector based on a high-pressure argon gas TPC. The long baseline provides a unique opportunity to study neutrino flavour oscillations over their first and second oscillation maxima, exploring the L/E behaviour and distinguishing effects arising from δCP and matter. In this paper we have reevaluated the physics potential of this setup for determining the mass hierarchy (MH) and discovering CP violation (CPV), using a conventional neutrino beam from the CERN SPS with a power of 750 kW. We use conservative assumptions on the knowledge of oscillation parameter priors and systematic uncertainties, and show the impact of each systematic error and of the precision of each oscillation prior. We demonstrate that the first stage of LBNO can determine the MH unambiguously to > 5σ C.L. over the whole phase space. We show that the statistical treatment of the experiment is very important, leading to the conclusion that LBNO has ~100% probability to determine the MH in at most 4-5 years of running. Since knowledge of the MH is indispensable for extracting δCP from the data, the first LBNO phase can convincingly give evidence for CPV at the 3σ C.L. using today's knowledge of oscillation parameters and realistic assumptions on the systematic uncertainties.

Relevance: 100.00%

Abstract:

Hyper-Kamiokande will be a next-generation underground water Cherenkov detector with a total (fiducial) mass of 0.99 (0.56) million metric tons, approximately 20 (25) times larger than that of Super-Kamiokande. One of the main goals of Hyper-Kamiokande is the study of CP asymmetry in the lepton sector using accelerator neutrino and anti-neutrino beams. In this paper, the physics potential of a long baseline neutrino experiment using the Hyper-Kamiokande detector and a neutrino beam from the J-PARC proton synchrotron is presented. The analysis uses the framework and systematic uncertainties derived from the ongoing T2K experiment. With a total exposure of 7.5 MW × 10⁷ s integrated proton beam power (corresponding to 1.56 × 10²² protons on target with a 30 GeV proton beam) to a 2.5-degree off-axis neutrino beam, it is expected that the leptonic CP phase δCP can be determined to better than 19 degrees for all possible values of δCP, and CP violation can be established with a statistical significance of more than 3σ (5σ) for 76% (58%) of the δCP parameter space. Using both νe appearance and νµ disappearance data, the expected 1σ uncertainty of sin²θ₂₃ is 0.015 (0.006) for sin²θ₂₃ = 0.5 (0.45).
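The quoted protons-on-target figure follows directly from the integrated beam power and the proton energy; a quick consistency check:

```python
E_JOULE_PER_EV = 1.602176634e-19  # elementary charge in joules per eV (exact SI value)

beam_power_W = 7.5e6        # 7.5 MW, quoted as an integrated MW x 10^7 s exposure
exposure_s = 1.0e7
proton_energy_eV = 30e9     # 30 GeV protons

energy_J = beam_power_W * exposure_s
pot = energy_J / (proton_energy_eV * E_JOULE_PER_EV)
print(f"{pot:.2e} protons on target")  # ~1.56e+22, matching the quoted figure
```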

Relevance: 100.00%

Abstract:

Very-long-baseline radio interferometry (VLBI) imaging surveys have been undertaken since the late 1970s. The sample sizes were initially limited to a few tens of objects, but the snapshot technique has now allowed samples containing almost 200 sources to be studied. The overwhelming majority of powerful compact sources are asymmetric core-jets of one form or another, most of which exhibit apparent superluminal motion. However, 5-10% of powerful flat-spectrum sources are 100-parsec (pc)-scale compact symmetric objects; these appear to form a continuum with the 1-kpc-scale double-lobed compact steep-spectrum sources, which make up 15-20% of lower-frequency samples. It is likely that these sub-galactic-size symmetric sources are the precursors of the large-scale classical double sources. There is a surprising peak around 90 degrees in the histogram of misalignments between the dominant source axes on parsec and kiloparsec scales; this seems to be associated with sources exhibiting a high degree of relativistic beaming. VLBI snapshot surveys have great cosmological potential via measurements of both proper motion and angular size vs. redshift, as well as searches for gravitational "millilensing."

Relevance: 100.00%

Abstract:

The parsec-scale properties of low-power radio galaxies are reviewed here, using the available data on 12 Fanaroff-Riley type I galaxies. The most frequent radio structure is an asymmetric parsec-scale morphology, i.e., a core and a one-sided jet, shared by 9 (possibly 10) of the 12 mapped radio galaxies. One (possibly two) of the other galaxies shows two-sided jet emission. Two sources are known from published data to show proper motion; we present here evidence for proper motion in two more galaxies. Therefore, in the present sample we have four radio galaxies with a measured proper motion. One of these has a very symmetric structure and therefore should lie in the plane of the sky. The results discussed here are in agreement with the predictions of unified-scheme models. Moreover, the present data indicate that the parsec-scale structure in low- and high-power radio galaxies is essentially the same.

Relevance: 100.00%

Abstract:

VLBI observations of the extremely gamma-bright blazar PKS 0528+134 at 8, 22, 43, and 86 GHz reveal a strongly bent, one-sided core-jet structure with at least three moving and two apparently stationary jet components. At the highest observing frequencies the brightest and most compact jet component (the VLBI core) is unresolved, with an upper limit to its size of approximately 50 microarcsec, corresponding to approximately 0.2 parsec (H₀ = 100 km s⁻¹ Mpc⁻¹, q₀ = 0.5, where H₀ is the Hubble constant and q₀ the deceleration parameter). Two 86-GHz VLBI observations performed in 1993.3 and 1994.0 reveal a new jet component emerging with superluminal speed from the core. Linear back-extrapolation of its motion yields strong evidence that the ejection of this component is related to an outburst in the millimeter regime and a preceding intense flare of the gamma-flux density observed in early 1993. This, together with the radio/optical light curves and VLBI data for two other sources (S5 0836+710 and 3C 454.3), suggests that the observed gamma-radiation might be Doppler-boosted and is perhaps closely related to the physical processes acting near the "base" of the highly relativistic jets observed in quasars.

Relevance: 100.00%

Abstract:

Systematic differences in the very long baseline interferometry (VLBI) radio polarization structure and average VLBI component speeds of BL Lacertae objects and quasars support the view that the observational distinction between these classes, based in large part on the strength of their optical line emission, is meaningful; in other words, this distinction reflects significant differences in the physical conditions in these sources. Possible models providing a link between the optical and VLBI properties of BL Lacertae objects and quasars are discussed. Most VLBI polarization observations to date have been global observations made at 6 cm; recent results suggest that the VLBI polarization structure of some sources may change dramatically on scales smaller than those probed by these 6-cm observations.

Relevance: 70.00%

Abstract:

The BeiDou system is the first global navigation satellite system in which all satellites transmit triple-frequency signals, providing positioning, navigation, and timing independently. A benefit of triple-frequency signals is that more useful combinations can be formed, including some extra-wide-lane (EWL) combinations whose ambiguities can generally be fixed instantaneously without distance restriction, although narrow-lane ambiguity resolution (NL AR) still depends on the inter-receiver distance or requires a long time to achieve. In this paper, we synthetically study decimeter and centimeter kinematic positioning using BeiDou triple-frequency signals. It starts with AR of two extra-wide-lane signals based on the ionosphere-free or ionosphere-reduced geometry-free model. For decimeter positioning, one can immediately use the two ambiguity-fixed extra-wide-lane observations without pursuing NL AR. To achieve higher accuracy, NL AR is the necessary next step. Although long-baseline NL AR remains challenging, some NL ambiguities can indeed be fixed with high reliability. Partial AR for NL signals is acceptable, because as long as some NL ambiguities are fixed, positioning accuracy will certainly improve. With the accumulation of observations, more and more NL ambiguities are fixed and the positioning accuracy continues to improve. An efficient Kalman-filtering system is established to implement the whole process. The formulated system is flexible, since additional constraints can easily be applied to strengthen the model. Numerical results from a set of real triple-frequency BeiDou data on a 50 km baseline show that decimeter positioning is achievable instantaneously. With only five data epochs, 84% of NL ambiguities can be fixed, so that the real-time kinematic accuracies are 4.5, 2.5, and 16 cm for the north, east, and height components, respectively, while with 10 data epochs more than 90% of NL ambiguities are fixed and the real-time kinematic solutions improve to centimeter level for all three coordinate components.
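The reason extra-wide-lane ambiguities can be fixed instantaneously is their long effective wavelength; a quick calculation from the publicly documented BeiDou-2 carrier frequencies (the specific combinations shown are common examples, not necessarily the exact pair used in the paper):

```python
C = 299_792_458.0  # speed of light, m/s

# BeiDou-2 carrier frequencies (Hz): B1, B2, B3
F = {"B1": 1561.098e6, "B2": 1207.140e6, "B3": 1268.520e6}

def combo_wavelength(i, j, k):
    """Wavelength of the carrier combination i*B1 + j*B2 + k*B3
    (integer coefficients)."""
    return C / (i * F["B1"] + j * F["B2"] + k * F["B3"])

print(f"EWL (0,-1,1): {combo_wavelength(0, -1, 1):.3f} m")  # ~4.88 m
print(f"WL  (1,-1,0): {combo_wavelength(1, -1, 0):.3f} m")  # ~0.85 m
print(f"WL  (1,0,-1): {combo_wavelength(1, 0, -1):.3f} m")  # ~1.02 m
```

With a wavelength of several metres, the extra-wide-lane ambiguity can be rounded reliably even in the presence of decimetre-level residual biases, which is what makes distance-independent instantaneous fixing possible.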