10 results for cooperative level crossings

in CaltechTHESIS


Relevance:

30.00%

Abstract:

This work concerns itself with the possibility of solutions, both cooperative and market-based, to pollution abatement problems. In particular, we are interested in pollutant emissions in Southern California and possible solutions to the abatement problems enumerated in the 1990 Clean Air Act. A tradable pollution permit program has been implemented to reduce emissions, creating property rights associated with various pollutants.

Before we discuss the performance of market-based solutions to LA's pollution woes, we consider the existence of cooperative solutions. In Chapter 2, we examine pollutant emissions as a transboundary public bad. We show that for a class of environments in which pollution moves in a bi-directional, acyclic manner, there exists a sustainable coalition structure and associated levels of emissions. We do so via a new core concept, one more appropriate to modeling cooperative emissions agreements (and potential defection from them) than the standard definitions.
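
For reference, the standard notion that this new concept departs from can be stated in one line (a textbook definition, not the thesis's construction): an allocation x lies in the core of a game with player set N and characteristic function v when

\[
\sum_{i \in N} x_i = v(N)
\quad\text{and}\quad
\sum_{i \in S} x_i \ge v(S) \;\;\text{for all coalitions } S \subseteq N .
\]

With transboundary pollution, the value v(S) a coalition can guarantee depends on the emissions of outsiders, which is what motivates a core concept tailored to defection from emissions agreements.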

However, this leaves the question of implementing pollution abatement programs unanswered. While the existence of a cost-effective permit market equilibrium has long been understood, the implementation of such programs has been difficult. The design of Los Angeles' REgional CLean Air Incentives Market (RECLAIM) alleviated some of the implementation problems and in part exacerbated others. For example, it created two overlapping cycles of permits and two zones of permits for different geographic regions. While these design features create a market that allows some measure of regulatory control, they establish a very difficult trading environment, with the potential for inefficiency arising from transaction costs and from the illiquidity induced by the myriad assets and relatively few participants in this market.

It was with these concerns in mind that the ACE market (Automated Credit Exchange) was designed. The ACE market utilizes an iterated combined-value call market (CV Market). Before discussing the performance of the RECLAIM program in general and the ACE mechanism in particular, we test experimentally whether a portfolio trading mechanism can overcome market illiquidity. Chapter 3 experimentally demonstrates the ability of a portfolio trading mechanism to overcome portfolio rebalancing problems, thereby inducing sufficient liquidity for markets to fully equilibrate.
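
To make the mechanism concrete, below is a minimal sketch of the winner-determination step at the heart of a combined-value call market: all-or-nothing package bids over several permit types are accepted or rejected so as to maximize reported surplus subject to market clearing. This illustrates the mechanism class, not the actual ACE clearing algorithm; every bid value and quantity here is invented for the example.

```python
# Winner determination for a combined-value call market (sketch).
# Each package bid is accepted or rejected as a whole, so portfolios
# trade without leg risk -- the portfolio-rebalancing property tested
# experimentally in Chapter 3.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Net quantity of each permit type demanded by each package bid
# (columns: cycle-1 permits, cycle-2 permits; negative = offered for sale).
quantities = np.array([
    [ 10,  5],    # bid 0: buy a portfolio spanning both cycles
    [  4, -4],    # bid 1: swap cycle-2 permits for cycle-1 permits
    [-10,  0],    # bid 2: sell 10 cycle-1 permits
    [ -5, -3],    # bid 3: sell a mixed portfolio
])
values = np.array([120.0, 15.0, -60.0, -30.0])  # limit value of each package
                                                # (negative = seller's minimum)

# Maximize accepted surplus; milp minimizes, so negate the objective.
# Clearing constraint: accepted purchases of every permit type must be
# covered by accepted sales (net demand <= 0 per type).
res = milp(
    c=-values,
    constraints=LinearConstraint(quantities.T, -np.inf, 0.0),
    integrality=np.ones(len(values)),   # accept/reject is all-or-nothing
    bounds=Bounds(0, 1),
)
accepted = np.round(res.x).astype(bool)
print("accepted bids:", np.flatnonzero(accepted))
print("reported surplus:", values[accepted].sum())
```

In the iterated version mentioned above, one would rerun this clearing step across rounds, giving bidders feedback between rounds; the precise feedback rule is a design choice we leave unspecified here.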

With experimental evidence in hand, we consider the CV Market's performance in the real world. We find that as the allocation of permits declines toward the level of historical emissions, prices are increasing. As of April of this year, prices are roughly equal to the cost of the Best Available Control Technology (BACT). This took longer than expected, due both to tendencies to misreport emissions under the old regime and to abatement technology advances encouraged by the program. We also find that the ACE market provides liquidity where needed to encourage long-term planning on the part of polluting facilities.

Relevance:

20.00%

Abstract:

This thesis explores the problem of mobile robot navigation in dense human crowds. We begin by considering a fundamental impediment to classical motion planning algorithms called the freezing robot problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing predictive uncertainty by employing higher-fidelity individual dynamics models or heuristically limiting the individual predictive covariance to prevent overcautious navigation. We demonstrate that both the individual prediction and the individual predictive uncertainty have little to do with this undesirable navigation behavior. Additionally, we provide evidence that dynamic agents are able to navigate in dense crowds by engaging in joint collision avoidance, cooperatively making room to create feasible trajectories. We accordingly develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a "multiple goal" extension that models the goal-driven nature of human decision making. Navigation naturally emerges as a statistic of this distribution.
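
In schematic form (our notation, following the description above), the interacting Gaussian process posterior couples otherwise independent GP trajectory posteriors for the robot f^R and the n pedestrians f^1, ..., f^n through an interaction potential, and navigation is read off as a statistic of the joint density:

\[
p\bigl(f^{R}, f^{1:n} \mid z_{1:t}\bigr)
= \frac{1}{Z}\,\psi\bigl(f^{R}, f^{1:n}\bigr)
\prod_{i \in \{R,1,\dots,n\}} p_{\mathrm{GP}}\bigl(f^{i} \mid z^{i}_{1:t}\bigr),
\qquad
f^{*} = \arg\max_{f}\; p\bigl(f \mid z_{1:t}\bigr),
\]

where z_{1:t} are the observed tracks and the potential ψ down-weights joint trajectories that pass close to one another, which is exactly the cooperative collision avoidance described above. The particular functional form of ψ (e.g., a product of pairwise repulsion terms with a scale parameter) is a modeling choice we leave unspecified here.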

Most importantly, we empirically validate our models in the Chandler dining hall at Caltech during peak hours and, in the process, carry out the first extensive quantitative study of robot navigation in dense human crowds (collecting data on 488 runs). The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 1 person/m², while a state-of-the-art noncooperative planner exhibits unsafe behavior more than 3 times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m². For inclusive validation purposes, we also show that either our noncooperative planner or our reactive planner captures the salient characteristics of nearly any existing dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.

Finally, we produce a large database of ground-truth pedestrian crowd data. We make this database publicly available for further scientific study of crowd prediction models, learning-from-demonstration algorithms, and human-robot interaction models in general.

Relevance:

20.00%

Abstract:

Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.

This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.
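
As an illustration (toy requirements of our own, not the thesis's actual specification), typical electric power system safety and liveness rules translate naturally into LTL:

\[
\square\,\neg\,(c_1 \wedge c_2),
\qquad
\square\,\bigl(\neg h_1 \rightarrow \lozenge\, c_2\bigr),
\qquad
\square\,\lozenge\, b ,
\]

where the hypothetical propositions read: contactors c_1 and c_2 are never closed simultaneously (so two AC generators are never paralleled onto one bus); if generator 1 becomes unhealthy (¬h_1), the tie contactor c_2 eventually closes; and the essential bus b is powered infinitely often. A reactive synthesis tool then produces a controller guaranteed to satisfy such formulas against all admissible environment behaviors.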

The final section focuses on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a fixed placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with control logic to infer the state of the system.

Relevance:

20.00%

Abstract:

Cooperative director fluctuations in lipid bilayers have been postulated for many years. ^2H-NMR T_1^(-1), T_(1ρ)^(-1), and T_2^(-1) measurements have been used to identify these motions and to determine the origin of increased slow bilayer motion upon addition of unlike lipids or proteins to a pure lipid bilayer.

The contribution of cooperative director fluctuations to NMR relaxation in lipid bilayers has been expressed mathematically using the approach of Doane et al.^1 and Pace and Chan.^2 The T_2^(-1)'s of pure dimyristoyllecithin (DML) bilayers deuterated at the 2, 9 and 10, and all positions on both lipid hydrocarbon chains have been measured. Several characteristics of these measurements indicate the presence of cooperative director fluctuations. First of all, T_2^(-1) exhibits a linear dependence on S_(CD)^2. Secondly, T_2^(-1) varies across the ^2H-NMR powder pattern as sin^2(2β), where β is the angle between the average bilayer director and the external magnetic field. Furthermore, these fluctuations are restricted near the lecithin head group, suggesting that the head group does not participate in these motions but, rather, anchors the hydrocarbon chains in the bilayer.
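
Collecting the two dependencies just described into a single schematic expression (our shorthand; the full treatment is in the cited approach of Doane et al.^1 and Pace and Chan^2):

\[
T_2^{-1}(\beta) \;\propto\; S_{CD}^{2}\,\sin^{2}(2\beta),
\]

with a proportionality factor that absorbs the cooperative properties of the bilayer, i.e., the director correlation length and elastic constants discussed below.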

T_2^(-1) has been measured for selectively deuterated liquid crystalline DML bilayers to which a host of other lipids and proteins have been added. The T_2^(-1) of the DML bilayer is found to increase drastically when chlorophyll a (chl a) and Gramicidin A' (GA') are added to the bilayer. Both these molecules interfere with the lecithin head group spacing in the bilayer. Molecules such as myristic acid, distearoyllecithin (DSL), phytol, and cholesterol, whose hydrocarbon regions are quite different from DML but which have small, neutral polar head groups, leave cooperative fluctuations in the DML bilayer unchanged.

The effect of chl a on cooperative fluctuations in the DML bilayer has been examined in detail using ^2H-NMR T_1^(-1), T_(1ρ)^(-1), and T_2^(-1) measurements. Cooperative fluctuations have been modelled using the continuum theory of the nematic state of liquid crystals. Chl a is found to decrease both the correlation length and the elastic constants in the DML bilayer.

A mismatch between the hydrophobic length of a lipid bilayer and that of an added protein has also been found to change the cooperative properties of the lecithin bilayer. Hydrophobic mismatch has been studied in a series of GA'/lecithin bilayers. The dependence of ^2H-NMR order parameters and relaxation rates on GA' concentration has been measured in selectively deuterated DML, dipalmitoyllecithin (DPL), and DSL systems. Order parameters, cooperative lengths, and elastic constants of the DML bilayer are most disrupted by GA', while the DSL bilayer is the least perturbed by GA'. Thus, it is concluded that the hydrophobic length of GA' best matches that of the DSL bilayer. Preliminary Raman spectroscopy and Differential Scanning Calorimetry experiments on GA'/lecithin systems support this conclusion. Accommodation of hydrophobic mismatch is used to rationalize the absence of H_(II) phase formation in GA'/DML systems and the observation of H_(II) phase in GA'/DPL and GA'/DSL systems.

1. J. W. Doane and D. L. Johnson, Chem. Phys. Lett., 6, 291-295 (1970).
2. R. J. Pace and S. I. Chan, J. Chem. Phys., 76, 4217-4227 (1982).

Relevance:

20.00%

Abstract:

Motivated by needs in molecular diagnostics and advances in microfabrication, researchers have turned to microfluidic technology, as it provides approaches to achieve high throughput, high sensitivity, and high resolution. One strategy applied in microfluidics to fulfill such requirements is to convert a continuous analog signal into a digital one. The most commonly used example of this conversion is digital PCR, where, by counting the number of reacted compartments (those triggered by the presence of the target entity) out of the total number of compartments, one can use Poisson statistics to calculate the amount of input target.
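
The Poisson step is compact enough to state outright. Below is a minimal sketch, assuming equal-volume compartments and random partitioning of molecules; the run numbers are invented for the example.

```python
# Digital PCR quantification: infer the input copy number from the
# fraction of positive (reacted) compartments via Poisson statistics.
import math

def digital_estimate(positive: int, total: int) -> float:
    """Estimated number of target molecules loaded into the device.

    Under random partitioning, P(compartment stays negative) = exp(-lam),
    so lam = -ln(1 - positive/total) is the mean copies per compartment,
    and lam * total is the input copy number.
    """
    lam = -math.log(1.0 - positive / total)
    return lam * total

# Example: 600 of 20,000 compartments react. Naive counting would report
# 600 molecules; the Poisson correction (~609) accounts for compartments
# that received more than one molecule.
print(digital_estimate(600, 20_000))
```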

However, there are still problems to be solved and assumptions to be validated before the technology is widely employed. In this dissertation, the digital quantification strategy has been examined from two angles: efficiency and robustness. The former is a critical factor for ensuring the accuracy of absolute quantification methods, and the latter is a prerequisite for such technology to be practically implemented in diagnosis beyond the laboratory. The two angles are further framed into a "fate" and "rate" determination scheme, where the influence of each parameter is attributed to either the fate-determination step or the rate-determination step. In this discussion, microfluidic platforms have been used to understand reaction mechanisms at the single-molecule level. Although the discussion raises more challenges for digital assay development, it brings the problem to the attention of the scientific community for the first time.

This dissertation also contributes towards developing point-of-care (POC) tests for limited-resource settings. On one hand, it adds ease of access to the tests by incorporating mass-producible, low-cost plastic materials and by integrating new features that allow instant result acquisition and result feedback. On the other hand, it explores new isothermal chemistry and new strategies to address important global health concerns such as cystatin C quantification, HIV/HCV detection and treatment monitoring, as well as HCV genotyping.

Relevance:

20.00%

Abstract:

This thesis addresses a series of topics related to the question of how people find foreground objects in complex scenes. With both computer vision modeling and psychophysical analyses, we explore the computational principles of low- and mid-level vision.

We first explore computational methods of generating saliency maps from images and image sequences. We propose an extremely fast algorithm called the Image Signature that detects the locations in an image that attract human gaze. With a series of experimental validations based on human behavioral data collected from various psychophysical experiments, we conclude that the Image Signature and its spatio-temporal extension, the Phase Discrepancy, are among the most accurate algorithms for saliency detection under various conditions.
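
For the still-image case, the Image Signature descriptor is simple enough to sketch in a few lines; the sketch below assumes a grayscale input and treats the smoothing width as a free parameter.

```python
# Image Signature saliency (sketch): keep only the signs of the image's
# DCT coefficients, invert the transform, square, and smooth. The sign
# pattern concentrates energy at sparse (salient) spatial locations.
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def image_signature_saliency(img: np.ndarray, sigma: float = 4.0) -> np.ndarray:
    """img: 2-D float array (grayscale frame). Returns a map scaled to [0, 1]."""
    signature = np.sign(dctn(img, norm="ortho"))      # the image signature
    recon = idctn(signature, norm="ortho")            # reconstruction
    saliency = gaussian_filter(recon * recon, sigma)  # square, then blur
    return saliency / saliency.max()
```

As we understand the published formulation, color images are handled per channel, with the squared reconstructions combined before smoothing; the sketch extends to that case straightforwardly.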

In the second part, we bridge the gap between fixation prediction and salient object segmentation with two contributions. First, we propose a new dataset that contains both fixation and object segmentation information. By presenting the two types of human data in the same dataset, we are able to analyze their intrinsic connection, as well as to understand the drawbacks of today's "standard" but inappropriately labeled salient object segmentation datasets. Second, we propose an algorithm for salient object segmentation. Based on our discoveries about the connections between fixation data and salient object segmentation data, our model outperforms all existing models on all three datasets by large margins.

In the third part of the thesis, we discuss topics around the human factors of boundary analysis. Closely related to salient object segmentation, boundary analysis focuses on delimiting the local contours of an object. We identify potential pitfalls in algorithm evaluation for the problem of boundary detection. Our analysis indicates that today's popular boundary detection datasets contain a significant level of noise, which may severely influence benchmarking results. To give further insight into the labeling process, we propose a model that characterizes the human factors at work during labeling.

The analyses reported in this thesis offer new perspectives on a series of interrelated issues in low- and mid-level vision. They raise warning signs about some of today's "standard" procedures, while proposing new directions to encourage future research.

Relevance:

20.00%

Abstract:

Non-classical properties and quantum interference (QI) in two-photon excitation of a three-level atom (|1〉, |2〉, |3〉) in a ladder configuration, illuminated by multiple fields in non-classical (squeezed) and/or classical (coherent) states, are studied. Fundamentally new effects associated with quantum correlations in the squeezed fields and QI due to multiple excitation pathways have been observed. Theoretical studies and extrapolations of these findings have revealed possible applications that are far beyond any current capabilities, including ultrafast nonlinear mixing, ultrafast homodyne detection, and frequency metrology. The atom used throughout the experiments was cesium, which was magneto-optically trapped in a vapor cell to produce a Doppler-free sample.

For the first part of the work, the |1〉 → |2〉 → |3〉 transition (corresponding to the 6S_(1/2), F = 4 → 6P_(3/2), F' = 5 → 6D_(5/2), F'' = 6 transition) was excited using the quantum-correlated signal (Ɛs) and idler (Ɛi) output fields of a subthreshold non-degenerate optical parametric oscillator, which was tuned so that the signal and idler fields were resonant with the |1〉 → |2〉 and |2〉 → |3〉 transitions, respectively. In contrast to excitation with classical fields, for which the excitation rate as a function of intensity always has an exponent greater than or equal to two, excitation with squeezed fields has been theoretically predicted to have an exponent that approaches unity for small enough intensities. This was verified experimentally by probing the exponent down to a slope of 1.3, demonstrating for the first time a purely non-classical effect associated with the interaction of squeezed fields and atoms.

In the second part, excitation of the two-photon transition by three phase-coherent fields Ɛ1, Ɛ2, and Ɛ0, resonant with the dipole |1〉 → |2〉 and |2〉 → |3〉 and quadrupole |1〉 → |3〉 transitions, respectively, is studied. QI in the excited state population is observed due to the two alternative excitation pathways. This is equivalent to nonlinear mixing of the three excitation fields by the atom. Realizing that in the experiment the three fields are spaced in frequency over a range of 25 THz, and extending this scheme to other energy triplets and atoms, leads to the discovery that ranges up to hundreds of THz can be bridged in a single mixing step. Motivated by these results, a master equation model has been developed for the system and its properties have been extensively studied.
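
The classical-versus-squeezed contrast can be summarized by the low-intensity scaling of the two-photon rate (a schematic restatement of the result above, not the thesis's full expressions):

\[
R \;\propto\; I^{\,n},
\qquad
n \ge 2 \;\text{(classical fields)},
\qquad
n \to 1 \;\text{as } I \to 0 \;\text{(squeezed fields)},
\]

the intuition being that the signal and idler photons are created in correlated pairs, so at low flux a single pair can drive both steps of the transition and the rate becomes linear in the pair flux. The experiment above probed this exponent down to n ≈ 1.3.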

Relevance:

20.00%

Abstract:

The problem of the Atchison, Topeka, and Santa Fe railroad in Pasadena is a very dynamic one, as is readily recognized by engineers, city officials, and laymen. The route of the railroad was first laid out in the eighties, and because of certain liberal concessions granted by the City of Pasadena, the right-of-way was located through Pasadena, despite the fact that the grade coming into the city, whether from Los Angeles or San Bernardino, was enormous. Some years later, other transcontinental routes of the Santa Fe out of Los Angeles were sought, and a right-of-way was obtained by way of Fullerton and Riverside to San Bernardino, where this route joins the one from Los Angeles through Pasadena. This route, however, is ten miles longer than the one through Pasadena, which means a considerable loss of time on a diversion only approximately sixty miles in length.

Relevance:

20.00%

Abstract:

Part I

The latent heat of vaporization of n-decane is measured calorimetrically at temperatures between 160° and 340°F. The internal energy change upon vaporization and the specific volume of the vapor at its dew point are calculated from these data and are included in this work. The measurements are in excellent agreement with available data at 77°F and also at 345°F, and are presented in graphical and tabular form.
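
One standard route from the calorimetric data to the two derived quantities (our reconstruction, assuming vapor-pressure data P(T) for n-decane are also available) is the Clapeyron relation:

\[
L \;=\; T\,(v_g - v_l)\,\frac{dP}{dT}
\quad\Longrightarrow\quad
v_g \;=\; v_l + \frac{L}{T\,(dP/dT)},
\qquad
\Delta u \;=\; L - P\,(v_g - v_l),
\]

where L is the measured latent heat, v_g and v_l are the specific volumes of the saturated vapor and liquid, and Δu is the internal energy change upon vaporization.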

Part II

Simultaneous material and energy transport from a one-inch adiabatic porous cylinder is studied as a function of free stream Reynolds Number and turbulence level. Experimental data are presented for Reynolds Numbers between 1,600 and 15,000 based on the cylinder diameter, and for apparent turbulence levels between 1.3 and 25.0 per cent. n-Heptane and n-octane are the evaporating fluids used in this investigation.

Gross Sherwood Numbers are calculated from the data and are in substantial agreement with existing correlations of the results of other workers. The Sherwood Numbers, characterizing mass transfer rates, increase approximately as the 0.55 power of the Reynolds Number. At a free stream Reynolds Number of 3700 the Sherwood Number showed a 40% increase as the apparent turbulence level of the free stream was raised from 1.3 to 25 per cent.

Within the uncertainties involved in the diffusion coefficients used for n-heptane and n-octane, the Sherwood Numbers are comparable for both materials. A dimensionless Frössling Number is computed which characterizes either heat or mass transfer rates for cylinders on a comparable basis. The calculated Frössling Numbers based on mass transfer measurements are in substantial agreement with Frössling Numbers calculated from the data of other workers in heat transfer.
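
As a small numeric illustration of the two reported relationships: the power law below uses the exponent and Reynolds range from the text, while the Frössling-style normalization follows the conventional form for combined heat/mass transfer correlations, which may differ in detail from the thesis's definition; the constants c and sh0 are hypothetical placeholders, not values from this work.

```python
# Sherwood-number scaling and a Froessling-style normalization (sketch).

def sherwood(re: float, c: float = 1.0, exponent: float = 0.55) -> float:
    """Gross Sherwood number under the reported power law Sh ~ Re^0.55."""
    return c * re**exponent

# Ratio of Sh across the studied Reynolds range, 1,600 to 15,000:
print(sherwood(15_000) / sherwood(1_600))   # ~3.4x increase

def froessling(sh: float, re: float, sc: float, sh0: float = 0.0) -> float:
    """Froessling-style number: (Sh - Sh0) / (Re^(1/2) * Sc^(1/3)).

    With the Prandtl number in place of the Schmidt number (and Nusselt in
    place of Sherwood), the same form characterizes heat transfer, which is
    what puts the two transport rates on a comparable basis for cylinders.
    """
    return (sh - sh0) / (re**0.5 * sc ** (1.0 / 3.0))
```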

Relevance:

20.00%

Abstract:

An air-filled ionization chamber has been constructed with a volume of 552 liters and a wall consisting of 12.7 mg/cm² of plastic wrapped over a rigid, lightweight aluminum frame. A calibration in absolute units, independent of previous Caltech ion chamber calibrations, was applied to a sealed Neher electrometer for use in this chamber. The new chamber was flown along with an older, argon-filled, balloon-type chamber in a C-135 aircraft from 1,000 to 40,000 feet altitude, and other measurements of sea level cosmic ray ionization were made, resulting in a value of 2.60 ± 0.03 ion pairs/(cm³ sec atm) at sea level. The calibrations of the two instruments were found to agree within 1 percent, and the airplane data were consistent with previous balloon measurements in the upper atmosphere. Ionization due to radon gas in the atmosphere was investigated. Absolute ionization data in the lower atmosphere have been compared with results of other observers, and discrepancies have been discussed.

Data from a polar orbiting ion chamber on the OGO-II and OGO-IV spacecraft have been analyzed. The problem of radioactivity produced on the spacecraft during passes through high fluxes of trapped protons has been investigated, and some corrections determined. Quiet-time ionization averages over the polar regions have been plotted as a function of altitude, and an analytical fit to the data gives a value of 10.4 ± 2.3 percent for the fractional part of the ionization at the top of the atmosphere due to splash albedo particles, although this result is shown to depend on an assumed angular distribution for the albedo particles. Comparisons with other albedo measurements are made. The data are shown to be consistent with balloon and interplanetary ionization measurements. The position of the cosmic ray knee is found to exhibit an altitude dependence, a North-South effect, and a small local time variation.