3 results for Rapid through transportation

in DRUM (Digital Repository at the University of Maryland)


Relevance:

30.00%

Publisher:

Abstract:

Travel demand models are important tools used in the analysis of transportation plans, projects, and policies. The modeling results are useful for transportation planners making transportation decisions and for policy makers developing transportation policies. Defining the level of detail (i.e., the number of roads) of the transport network consistently with the travel demand model’s zone system is crucial to the accuracy of modeling results. However, travel demand modelers have lacked tools to determine how much detail a transport network needs for a given travel demand model. This dissertation seeks to fill this knowledge gap by (1) providing a methodology to define an appropriate level of detail for the transport network in a given travel demand model; (2) implementing this methodology in a travel demand model for the Baltimore area; and (3) identifying how this methodology improves modeling accuracy. All analyses show that the spatial resolution of the transport network has a large impact on the modeling results. For example, when compared to observed traffic data, a very detailed network underestimates traffic congestion in the Baltimore area, while a network developed in this dissertation reproduces the traffic conditions more accurately. An evaluation of the impacts a new transportation project has on both networks shows that the differences in the analysis results underline the importance of an appropriate level of network detail for making improved planning decisions. The results corroborate a suggested guideline for developing a transport network consistent with the travel demand model’s zone system. To conclude the dissertation, limitations in the data sources and methodology are identified, and a plan of future studies is laid out to address them.
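
A minimal sketch of the kind of accuracy comparison the abstract describes, assuming a percent-RMSE measure between modeled and observed link volumes; the link names and volumes below are purely illustrative, not data from the dissertation:

    # Hypothetical sketch: comparing modeled link volumes against observed counts
    # for two network resolutions, using percent RMSE as the accuracy measure.
    # Link IDs and volumes are illustrative placeholders.
    import math

    observed = {"I-95_N": 5200, "I-695_W": 4100, "US-40_E": 1800}          # counts (veh/h)
    modeled_detailed = {"I-95_N": 4300, "I-695_W": 3500, "US-40_E": 1750}  # very detailed network
    modeled_adjusted = {"I-95_N": 5050, "I-695_W": 4000, "US-40_E": 1900}  # network matched to the zone system

    def percent_rmse(modeled, observed):
        """Root-mean-square error of modeled link volumes, as a share of the mean observed volume."""
        errors = [(modeled[k] - observed[k]) ** 2 for k in observed]
        rmse = math.sqrt(sum(errors) / len(errors))
        return 100.0 * rmse / (sum(observed.values()) / len(observed))

    for name, modeled in [("very detailed", modeled_detailed), ("zone-consistent", modeled_adjusted)]:
        print(f"{name} network: %RMSE = {percent_rmse(modeled, observed):.1f}")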

Relevance:

30.00%

Publisher:

Abstract:

Successful implementation of fault-tolerant quantum computation on a system of qubits places severe demands on the hardware used to control the many-qubit state. It is known that an accuracy threshold Pa exists for any quantum gate that is to be used if such a computation is to continue for an unlimited number of steps. Specifically, the error probability Pe for such a gate must fall below the accuracy threshold: Pe < Pa. Estimates of Pa vary widely, though Pa ∼ 10⁻⁴ has emerged as a challenging target for hardware designers. I present a theoretical framework based on neighboring optimal control that takes as input a good quantum gate and returns a new gate with better performance. I illustrate this approach by applying it to a universal set of quantum gates produced using non-adiabatic rapid passage. Performance improvements are substantial compared to the original (unimproved) gates, for both ideal and non-ideal controls. Under suitable conditions (detailed in the dissertation), all gate error probabilities fall 1 to 4 orders of magnitude below the target threshold of 10⁻⁴. After applying neighboring optimal control theory to improve the performance of the quantum gates in a universal set, I further apply the general control theory in a two-step procedure for fault-tolerant logical state preparation, and I illustrate this procedure by preparing a logical Bell state fault-tolerantly. The two-step preparation procedure is as follows. Step 1 provides a one-shot procedure using neighboring optimal control theory to prepare a physical two-qubit state that is a high-fidelity approximation to the Bell state |β01⟩ = 1/√2(|01⟩ + |10⟩). I show that for ideal (non-ideal) control, an approximate |β01⟩ state can be prepared with error probability ϵ ∼ 10⁻⁶ (10⁻⁵) using one-shot local operations. Step 2 then takes a block of p pairs of physical qubits, each pair prepared in the |β01⟩ state using Step 1, and fault-tolerantly prepares the logical Bell state for the C4 quantum error detection code.
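
A hedged sketch of how an error probability like ϵ ∼ 10⁻⁶ can be read off for an approximately prepared Bell state, taking ϵ = 1 − F with F the fidelity against the ideal |β01⟩; the small rotation used to perturb the state is illustrative only, not the dissertation's control model:

    # Sketch: error probability of an approximate Bell state |beta01>,
    # defined as 1 - fidelity with the ideal (|01> + |10>)/sqrt(2).
    import numpy as np

    ket01 = np.kron([1, 0], [0, 1]).astype(complex)
    ket10 = np.kron([0, 1], [1, 0]).astype(complex)
    beta01 = (ket01 + ket10) / np.sqrt(2)            # ideal |beta01>

    theta = 1e-3                                     # assumed small preparation error angle
    prepared = np.cos(theta) * beta01 + np.sin(theta) * np.kron([1, 0], [1, 0]).astype(complex)

    fidelity = abs(np.vdot(beta01, prepared)) ** 2   # overlap with the ideal state
    error_prob = 1.0 - fidelity
    print(f"error probability ~ {error_prob:.1e}")   # ~1e-6 for theta = 1e-3

The same 1 − F figure of merit would then be compared against a threshold such as 10⁻⁴ to judge whether the preparation is good enough for fault-tolerant use.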

Relevance:

30.00%

Publisher:

Abstract:

γ-ray bursts (GRBs) are the Universe's most luminous transient events. Since the discovery of GRBs was announced in 1973, efforts have been ongoing to obtain data over a broader range of the electromagnetic spectrum at the earliest possible times following the initial detection. The discovery of the theorized "afterglow" emission in radio through X-ray bands in the late 1990s confirmed the cosmological nature of these events. At present, GRB afterglows are among the best probes of the early Universe (z ≳ 9). In addition to informing theories about GRBs themselves, observations of afterglows probe the circum-burst medium (CBM), the properties of the host galaxies, and the progress of cosmic reionization. To explore the early-time variability of afterglows, I have developed a generalized analysis framework which fits near-infrared (NIR), optical, ultraviolet (UV), and X-ray light curves without assuming an underlying physical model. These fits are then used to construct the spectral energy distribution (SED) of the afterglow at arbitrary times within the observed window. Physical models are then used to explore the evolution of the SED parameters with time. I demonstrate that this framework produces evidence of the photodestruction of dust in the CBM of GRB 120119A, similar to the findings of a previous study of this afterglow. The framework is also applied to the afterglows of GRB 140419A and GRB 080607; in these cases the evolution of the SEDs appears consistent with the standard fireball model.

Having introduced the scientific motivations for early-time observations, I introduce the Rapid Infrared Imager-Spectrometer (RIMAS). Once commissioned on the 4.3 meter Discovery Channel Telescope (DCT), RIMAS will be used to study the afterglows of GRBs through photometric and spectroscopic observations beginning within minutes of the initial burst. The instrument will operate in the NIR, from 0.97 μm to 2.37 μm, permitting the detection of very high redshift (z ≳ 7) afterglows, which are attenuated at shorter wavelengths by Lyman-α absorption in the intergalactic medium (IGM). A majority of my graduate work has been spent designing and aligning RIMAS's cryogenic (~80 K) optical systems. Design efforts have included an original camera used to image the field surrounding the spectroscopic slits, tolerancing and optimizing all of the instrument's optics, thermal modeling of optomechanical systems, and modeling the diffraction efficiencies of some of the dispersive elements. To align the cryogenic optics, I developed a procedure that was successfully used for a majority of the instrument's sub-assemblies. My work on this cryogenic instrument has required experimental and computational projects to design and validate several subsystems. Two of these projects describe simple and effective measurements of optomechanical components in vacuum and at cryogenic temperatures using an 8-bit CCD camera. Models of heat transfer through the electrical harnesses used to supply current to motors located within the cryostat are also presented.
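
A minimal sketch of the SED-construction step the abstract describes: per-band light-curve fits are evaluated at one common post-burst epoch to assemble a spectral energy distribution. The power-law decays, band frequencies, and normalizations below are placeholders, not fits to any real afterglow:

    # Illustrative sketch: build an afterglow SED at a chosen epoch from fitted light curves.
    # Each band's fit is represented here as a simple power-law decay F(t) = F0 * (t / t0)^(-alpha).
    bands = {
        "K (NIR)":     {"nu_hz": 1.4e14, "F0": 2.00, "alpha": 1.1},
        "r (optical)": {"nu_hz": 4.8e14, "F0": 1.20, "alpha": 1.1},
        "UVW1 (UV)":   {"nu_hz": 1.2e15, "F0": 0.50, "alpha": 1.2},
        "X-ray":       {"nu_hz": 2.4e17, "F0": 0.01, "alpha": 1.3},
    }

    def flux_at(band, t_s, t0_s=100.0):
        """Evaluate one band's fitted light curve at time t (seconds after the burst), in mJy."""
        return band["F0"] * (t_s / t0_s) ** (-band["alpha"])

    t_sed = 600.0  # chosen epoch (s) within the observed window
    sed = {name: (b["nu_hz"], flux_at(b, t_sed)) for name, b in bands.items()}
    for name, (nu, flux) in sed.items():
        print(f"{name:12s} nu = {nu:.2e} Hz, F_nu = {flux:.3e} mJy")

In the framework described above, the per-band fits would come from model-independent light-curve fitting rather than fixed power laws, and the resulting (nu, F_nu) points would then be fit with physical SED models as a function of epoch.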