12 results for Design methods

in CaltechTHESIS


Relevance: 70.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioral economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.

We first look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioral theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determines the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees relative to the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which yields orders-of-magnitude speedups over other methods.
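
To make the procedure concrete, the sketch below shows one greedy step of such an adaptive design loop: a simplified, noiseless EC2-style score over a fixed hypothesis set, followed by a Bayes update that allows for occasional subject errors. The function names, the deterministic prediction table, and the symmetric error rate are illustrative assumptions, not the thesis implementation.

    import numpy as np

    def ec2_score(post, classes, pred, test):
        # Expected weight of "equivalence class" edges cut by running `test`.
        # post[h]       : current probability of hypothesis h
        # classes[h]    : which theory (equivalence class) h belongs to
        # pred[h][test] : the (noiseless) outcome hypothesis h predicts for `test`
        H = len(post)
        def edge_mass(hs):
            return sum(post[h] * post[g]
                       for i, h in enumerate(hs) for g in hs[i + 1:]
                       if classes[h] != classes[g])
        total = edge_mass(list(range(H)))
        expected_remaining = 0.0
        for y in {pred[h][test] for h in range(H)}:
            consistent = [h for h in range(H) if pred[h][test] == y]
            p_y = sum(post[h] for h in consistent)
            expected_remaining += p_y * edge_mass(consistent)
        return total - expected_remaining        # larger = more informative test

    def broad_step(post, classes, pred, tests):
        # Greedy choice of the next test (adaptive submodularity justifies greediness).
        return max(tests, key=lambda t: ec2_score(post, classes, pred, t))

    def bayes_update(post, pred, test, observed, err=0.05):
        # Posterior update allowing a small probability of a subject response error.
        like = np.array([1 - err if pred[h][test] == observed else err
                         for h in range(len(post))])
        post = np.asarray(post) * like
        return post / post.sum()

In a full run, bayes_update feeds the new posterior back into broad_step, and the loop repeats until one theory dominates the posterior.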

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favor of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility of strategic manipulation: subjects could mask their true preferences and choose differently in order to obtain more favorable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we find no signatures of it in our data.
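
As a concrete illustration of how two of these theories can disagree on the same gamble, the sketch below values a simple mixed lottery under expected value and under prospect theory, using the standard Tversky-Kahneman functional forms and their published parameter estimates; the forms, parameters, and reference point are illustrative and not taken from the thesis.

    def expected_value(outcomes, probs):
        return sum(p * x for x, p in zip(outcomes, probs))

    def prospect_value(outcomes, probs, alpha=0.88, lam=2.25, gamma=0.61):
        # v(x): concave for gains, steeper (loss-averse) for losses.
        v = lambda x: x ** alpha if x >= 0 else -lam * (-x) ** alpha
        # w(p): inverse-S probability weighting (overweights small probabilities).
        w = lambda p: p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
        return sum(w(p) * v(x) for x, p in zip(outcomes, probs))

    lottery = ([10.0, -10.0], [0.5, 0.5])   # a 50/50 gamble: win $10 or lose $10
    print(expected_value(*lottery))         #  0.0 -> a risk-neutral subject is indifferent
    print(prospect_value(*lottery))         # < 0  -> a loss-averse subject rejects it

A test like this one separates expected value from prospect theory; BROAD's job is to pick, at each stage, the lottery pair whose answer is expected to separate the surviving theories fastest.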

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
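
For reference, the candidate discount functions can be written in their common parameterizations (these standard forms are given for orientation; the thesis' exact parameterization of the present-bias models may differ):

    exponential:             D(t) = \delta^{t}
    hyperbolic:              D(t) = 1 / (1 + k t)
    quasi-hyperbolic:        D(0) = 1,   D(t) = \beta \delta^{t}   for t > 0
    generalized hyperbolic:  D(t) = (1 + \alpha t)^{-\beta / \alpha}

A choice between a smaller-sooner payoff x at time t and a larger-later payoff x' at time t' is predicted by comparing D(t) x with D(t') x', so each test distinguishes theories whose discount curves rank the two options differently.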

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioral theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone explains, and, more importantly, that when the item is no longer discounted, demand for its close substitute will increase disproportionately. We test this prediction by fitting a discrete choice model with a loss-averse utility function to data from a large eCommerce retailer. Not only do we identify loss aversion, but we also find that the effect decreases with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
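
The sketch below shows the shape of such a model: a multinomial-logit demand system whose utilities include a reference-dependent price term that weighs losses (prices above a reference price) more heavily than gains. The functional form, variable names, and parameter values are hypothetical stand-ins for the estimated model.

    import numpy as np

    def choice_probabilities(price, ref_price, quality, b=1.0, eta=0.5, lam=2.0):
        # Loss-averse utility: gains (price below reference) enter with weight eta,
        # losses (price above reference) with weight eta * lam, where lam > 1.
        gain_loss = np.where(price <= ref_price,
                             eta * (ref_price - price),
                             eta * lam * (ref_price - price))
        u = quality - b * price + gain_loss
        expu = np.exp(u - u.max())          # softmax over the items in the category
        return expu / expu.sum()

    # Two substitute items; item 0 has just returned from a discount, so its
    # reference price is still low and buying it at full price feels like a loss.
    print(choice_probabilities(price=np.array([5.0, 5.0]),
                               ref_price=np.array([4.0, 5.0]),
                               quality=np.array([5.0, 5.0])))

In this toy example the substitute (item 1) captures most of the demand even though both items have the same price and quality, which is the qualitative signature of reference dependence described above.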

In future work, BROAD can be widely applied to testing different behavioral models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioral models with field data and encourage combined lab-field experiments.

Relevance: 40.00%

Abstract:

This thesis presents a novel class of algorithms for the solution of scattering and eigenvalue problems on general two-dimensional domains under a variety of boundary conditions, including non-smooth domains and certain "Zaremba" boundary conditions, for which Dirichlet and Neumann conditions are specified on different portions of the domain boundary. The theoretical basis of the methods for Zaremba problems on smooth domains is detailed information, put forth for the first time in this thesis, about the singularity structure of solutions of the Laplace operator under boundary conditions of Zaremba type. The new methods, which are based on use of Green functions and integral equations, incorporate a number of algorithmic innovations, including a fast and robust eigenvalue-search algorithm, use of the Fourier Continuation method for regularization of all smooth-domain Zaremba singularities, and newly derived quadrature rules that yield high-order convergence even around singular points of the Zaremba problem. The resulting algorithms enjoy high-order convergence and can tackle a variety of elliptic problems under general boundary conditions, including, for example, eigenvalue problems, scattering problems, and, in particular, eigenfunction expansion for time-domain problems in non-separable physical domains with mixed boundary conditions.

Relevance: 30.00%

Abstract:

This thesis describes research pursued in two areas, both involving the design and synthesis of sequence-specific DNA-cleaving proteins. The first involves the use of sequence-specific DNA-cleaving metalloproteins to probe the structure of a protein-DNA complex, and the second seeks to develop cleaving moieties capable of DNA cleavage through the generation of a non-diffusible oxidant under physiological conditions.

Chapter One provides a brief review of the literature concerning sequence-specific DNA-binding proteins. Chapter Two summarizes the results of affinity cleaving experiments using leucine zipper-basic region (bZip) DNA-binding proteins. Specifically, the NH_2-terminal locations of a dimer containing the DNA binding domain of the yeast transcriptional activator GCN4 were mapped on the binding sites 5'-CTGACTAAT-3' and 5'-ATGACTCTT-3' using affinity cleaving. Analysis of the DNA cleavage patterns from Fe•EDTA-GCN4(222-281) and (226-281) dimers reveals that the NH_2-termini are in the major groove nine to ten base pairs apart and symmetrically displaced four to five base pairs from the central C of the recognition site. These data are consistent with structural models put forward for this class of DNA binding proteins. The results of these experiments are evaluated in light of the recently published crystal structure for the GCN4-DNA complex. Preliminary investigations of affinity cleaving proteins based on the DNA-binding domains of the bZip proteins Jun and Fos are also described.

Chapter Three describes experiments demonstrating the simultaneous binding of GCN4(226-281) and 1-Methylimidazole-2-carboxamide-netropsin (2-ImN), a designed synthetic peptide which binds in the minor groove of DNA at 5'-TGACT-3' sites as an antiparallel, side-by-side dimer. Through the use of Fe•EDTA-GCN4(226-281) as a sequence-specific footprinting agent, it is shown that the dimeric protein GCN4(226-281) and the dimeric peptide 2-ImN can simultaneously occupy their common binding site in the major and minor grooves of DNA, respectively. The association constants for 2-ImN in the presence and in the absence of Fe•EDTA-GCN4(226-281) are found to be similar, suggesting that the binding of the two dimers is not cooperative.

Chapter Four describes the synthesis and characterization of PBA-β-OH-His-Hin(139-190), a hybrid protein containing the DNA-binding domain of Hin recombinase and the putative iron-binding and oxygen-activating domain of the antitumor antibiotic bleomycin. This 54-residue protein, comprising residues 139-190 of Hin recombinase with the dipeptide pyrimidoblamic acid-β-hydroxy-L-histidine (PBA-β-OH-His) at the NH_2-terminus, was synthesized by solid phase methods. PBA-β-OH-His-Hin(139-190) binds specifically to DNA at four distinct Hin binding sites with affinities comparable to those of the unmodified Hin(139-190). In the presence of dithiothreitol (DTT), Fe•PBA-β-OH-His-Hin(139-190) cleaves DNA with specificity remarkably similar to that of Fe•EDTA-Hin(139-190), although with lower efficiency. Analysis of the cleavage pattern suggests that DNA cleavage is mediated through a diffusible species, in contrast with cleavage by bleomycin, which occurs through a non-diffusible oxidant.

Relevance: 30.00%

Abstract:

Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.

This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.
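
To give a flavor of the specifications involved, two illustrative requirements of the kind such a tool would translate are shown below, written with the standard LTL operators G (always) and X (next); the signal names and the requirements themselves are invented for illustration and are not taken from the thesis.

    G !(contactor_L && contactor_R)
        (the two AC generator contactors are never closed at the same time)
    G (fault_gen_L -> X contactor_backup_closed)
        (a left-generator fault is followed, at the next step, by closing the backup contactor)

A reactive synthesis tool then searches for a finite-state controller whose every behavior, under any admissible environment behavior, satisfies the conjunction of such formulas.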

The final section focuses on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored: given a fixed placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with the control logic to infer the state of the system.

Relevance: 30.00%

Abstract:

The two most important digital-system design goals today are to reduce power consumption and to increase reliability. Reductions in power consumption improve battery life in the mobile space, and reductions in energy lower operating costs in the datacenter. Increased robustness and reliability shorten downtime, improve yield, and are invaluable in the context of safety-critical systems. While optimizing toward these two goals is important at all design levels, optimizations at the circuit level have the furthest-reaching effects; they apply to all digital systems. This dissertation presents a study of robust minimum-energy digital circuit design and analysis. It introduces new device models, metrics, and methods of calculation, all necessary first steps toward building better systems, and demonstrates how to apply these techniques. It analyzes a fabricated chip (a full-custom QDI microcontroller designed at Caltech and taped out in 40-nm silicon) by calculating the minimum energy operating point and quantifying the chip's robustness in the face of both timing and functional failures.
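
As an illustration of what "minimum energy operating point" means, the toy model below balances dynamic energy, which falls quadratically with supply voltage, against leakage energy, which grows once the cycle time stretches exponentially in the subthreshold regime. The constants and functional form are generic textbook approximations, not the models developed in the dissertation.

    import numpy as np

    alpha_C = 1.0            # normalized switched-capacitance (dynamic) coefficient
    k_leak  = 800.0          # normalized leakage coefficient (leakage current x logic depth)
    n_vT    = 1.5 * 0.026    # subthreshold slope factor times thermal voltage (V)

    def energy_per_op(V):
        dynamic = alpha_C * V**2                       # E_dyn ~ C * V^2
        leakage = k_leak * V**2 * np.exp(-V / n_vT)    # leakage power x exponentially longer cycle
        return dynamic + leakage

    V = np.linspace(0.15, 1.1, 2000)
    E = energy_per_op(V)
    print("minimum-energy operating point ~ %.2f V" % V[np.argmin(E)])

Raising the supply above this point wastes dynamic energy; lowering it below this point makes each operation so slow that leakage dominates, which is why the minimum-energy point is a natural target for robust low-power operation.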

Relevance: 30.00%

Abstract:

A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.

The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain a design that has the highest overall evaluation measure - an optimization problem.
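
A minimal sketch of these two ingredients is given below: a "soft" preference function that maps a performance parameter onto a degree of satisfaction in [0, 1], and a weighted geometric mean as one possible combination rule. The linear ramp, the geometric-mean rule, and the example numbers are illustrative choices, not the specific functions used in the thesis.

    import numpy as np

    def preference(x, x_unacceptable, x_fully_satisfied):
        # Linear ramp between "unacceptable" (0) and "fully satisfied" (1);
        # works in either direction because the bounds may be given in any order.
        return float(np.clip((x - x_unacceptable) /
                             (x_fully_satisfied - x_unacceptable), 0.0, 1.0))

    def overall_evaluation(prefs, weights):
        # Weighted geometric mean: rewards balanced designs and drives the overall
        # measure toward zero if any single criterion is badly violated.
        prefs, weights = np.asarray(prefs), np.asarray(weights)
        return float(np.prod(prefs ** (weights / weights.sum())))

    # Example: cost, peak drift ratio, and construction time for one candidate design.
    p = [preference(350_000, 500_000, 250_000),   # cost ($): lower is better
         preference(0.012, 0.020, 0.005),         # drift ratio: lower is better
         preference(9, 14, 6)]                    # construction time (months)
    print(overall_evaluation(p, weights=[0.5, 0.3, 0.2]))

The optimizer described next simply searches the design space for the parameter vector whose candidate design maximizes this overall evaluation measure.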

Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the global search capability needed to explore high-dimensional design spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.

The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.

Relevance: 30.00%

Abstract:

A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, many of them are feasible in one network. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The various approaches developed so far, such as outer products, learning algorithms, or energy functions, suffer from the following deficiencies: long training/specification times; no guarantee of working on all inputs; and a requirement of full connectivity.

Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look at three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
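
A toy version of the Winner-Take-All behavior these circuits generalize is sketched below, using simple Hopfield-style continuous dynamics with mutual inhibition; the gain, time step, and inhibition strength are illustrative, and the real circuits add the problem-specific feedback constraints described above.

    import numpy as np

    def winner_take_all(inputs, steps=400, dt=0.05, inhibition=1.2, gain=8.0):
        # Each unit integrates its input minus inhibition from every *other* unit;
        # at convergence only the unit with the largest input remains active.
        x = np.zeros(len(inputs))                     # internal states
        for _ in range(steps):
            y = 1.0 / (1.0 + np.exp(-gain * x))       # sigmoid outputs
            x += dt * (-x + inputs - inhibition * (y.sum() - y))
        return y

    print(winner_take_all(np.array([0.3, 0.9, 0.5])).round(2))   # -> roughly [0, 1, 0]

Selecting one active unit out of many competing candidates is exactly the primitive needed for route selection and contention arbitration in the switching applications that follow.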

Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now viable algorithms lead us to new perspectives on switch design.

Relevance: 30.00%

Abstract:

Melting temperature calculations have important applications in the theoretical study of phase diagrams and in computational materials screening. In this thesis, we present two new methods, an improved Widom particle-insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly.

We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals that provide the chemical potential of a physical system. This idea enables us to calculate the chemical potentials of liquids directly from first principles, without the help of any reference system, which the commonly used thermodynamic integration method requires. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature, and the results agree closely with experiment.
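
In its standard form, the Widom insertion estimate of the excess chemical potential is

    \mu_{ex} = -k_B T \, \ln \left\langle \exp\!\left(-\Delta U / k_B T\right) \right\rangle_N ,

where \Delta U is the potential-energy change of inserting a test particle at a random position in an N-particle configuration. In a dense liquid, almost all random insertions overlap with existing atoms and contribute negligibly to the average, which is why biasing the insertions toward cavities, as done here, improves the efficiency of the estimate so dramatically.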

We propose the small-cell coexistence method, based on statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid that arises in the fast-heating method, while also significantly reducing the computational cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of large system size, and an accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl demonstrate the accuracy and flexibility of the method in practical applications. The method serves as a promising approach for large-scale automated materials screening in which the melting temperature is a design criterion.

We present in detail two examples of refractory materials. First, we demonstrate how key material properties that provide guidance in the design of refractory materials can be accurately determined via ab initio thermodynamic calculations in conjunction with experimental techniques based on synchrotron X-ray diffraction and thermal analysis under laser-heated aerodynamic levitation. The properties considered include melting point, heat of fusion, heat capacity, thermal expansion coefficients, thermal stability, and sublattice disordering, as illustrated in a motivating example of lanthanum zirconate (La2Zr2O7). The close agreement with experiment in the known but structurally complex compound La2Zr2O7 provides good indication that the computational methods described can be used within a computational screening framework to identify novel refractory materials. Second, we report an extensive investigation into the melting temperatures of the Hf-C and Hf-Ta-C systems using ab initio calculations. With melting points above 4000 K, hafnium carbide (HfC) and tantalum carbide (TaC) are among the most refractory binary compounds known to date. Their mixture, with the general formula Ta_xHf_{1-x}C_y, is known to have a melting point of 4215 K at the composition Ta_4HfC_5, which has long been considered the highest melting temperature of any solid. Very few measurements of melting point in tantalum and hafnium carbides have been documented, because of the obvious experimental difficulties at extreme temperatures. The investigation allows us to identify three major chemical factors that contribute to the high melting temperatures. Based on these three factors, we propose and explore a new class of materials which, according to our ab initio calculations, may possess even higher melting temperatures than Ta-Hf-C. This example also demonstrates the feasibility of materials screening and discovery via ab initio calculations for the optimization of "higher-level" properties whose determination requires extensive sampling of atomic configuration space.

Relevance: 30.00%

Abstract:

We investigated four unique methods for achieving scalable, deterministic integration of quantum emitters into ultra-high Q/V photonic crystal cavities, including selective area heteroepitaxy, engineered photoemission from silicon nanostructures, wafer bonding and dimensional reduction of III-V quantum wells, and cavity-enhanced optical trapping. In these areas, we were able to demonstrate site-selective heteroepitaxy, size-tunable photoluminescence from silicon nanostructures, Purcell modification of QW emission spectra, and limits of cavity-enhanced optical trapping designs which exceed any reports in the literature and suggest the feasibility of capturing and detecting nanostructures with dimensions below 10 nm. In addition to process scalability and the requirement for achieving accurate spectral and spatial overlap between the emitter and cavity, these techniques paid specific attention to the ability to separate the cavity and emitter material systems in order to allow optimal selection of these independently, and eventually enable monolithic integration with other photonic and electronic circuitry.
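
For orientation, the Q/V figure of merit enters through the standard Purcell factor for an emitter on resonance with a cavity mode,

    F_P = \frac{3}{4\pi^2} \left(\frac{\lambda}{n}\right)^{3} \frac{Q}{V} ,

so maximizing the quality factor Q while minimizing the mode volume V directly increases the achievable enhancement of spontaneous emission from an embedded emitter.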

We also developed an analytic photonic crystal design process yielding optimized cavity tapers with minimal computational effort, and reported on a general cavity modification which exhibits improved fabrication tolerance by relying exclusively on positional- rather than dimensional tapering. We compared several experimental coupling techniques for device characterization. Significant efforts were devoted to optimizing cavity fabrication, including the use of atomic layer deposition to improve surface quality, exploration into factors affecting the design fracturing, and automated analysis of SEM images. Using optimized fabrication procedures, we experimentally demonstrated 1D photonic crystal nanobeam cavities exhibiting the highest Q/V reported on substrate. Finally, we analyzed the bistable behavior of the devices to quantify the nonlinear optical response of our cavities.

Relevance: 30.00%

Abstract:

A general description of the need for hospital flow meters is given along with an analysis of some common flow measurement methods.

The design criteria, establishment of the basic configuration of the instrument, and the evolution of the final design are presented in detail. The ability of the magnetic crossover mechanism to extract the square root of an input is explained, and design curves are presented. The action of the flow totalizer is described in relation to the rest of the instrument. A complete set of manufacturing drawings for the instrument and its tooling is included in the thesis.
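
If the primary sensing element is a differential-pressure device (an assumption for illustration; the abstract does not specify the element), the volumetric flow rate follows the standard relation

    Q = K \sqrt{\Delta P} ,

which is the usual reason an indicating or totalizing flow instrument needs a square-root-extracting mechanism: the measured quantity is the pressure drop \Delta P, while the scale and totalizer must read linearly in Q.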

In conclusion, an evaluation of the completed instrument is made, and improvements and modifications are indicated. Mention is made of the adaptability of the magnetic crossover mechanism to other instrumentation.

Relevance: 30.00%

Abstract:

The sun has the potential to power the Earth's total energy needs, but electricity from solar power still constitutes an extremely small fraction of our power generation because of its high cost relative to traditional energy sources. The cost of solar must therefore be reduced to realize a more sustainable future, which can be achieved by significantly increasing the efficiency of the modules that convert solar radiation to electricity. In this thesis, we consider several strategies to improve the device and photonic design of solar modules to achieve record, ultrahigh (> 50%) module efficiencies. First, we investigate the potential of a new passivation treatment, trioctylphosphine sulfide (TOP:S), to increase the performance of small GaAs solar cells for cheaper and more durable modules. Small cells (mm²) currently suffer a significant efficiency decrease (~5%) compared to larger cells (cm²) because a higher fraction of their surface is recombination-active sidewall; we show that, with effective passivation of the sidewalls, small cells can achieve significantly higher efficiencies. We experimentally validate the passivation qualities of the TOP:S treatment through four independent studies and show that this facile treatment can enable efficient small devices. Then, we discuss our efforts toward the design and prototyping of a spectrum-splitting module that employs optical elements to divide the incident spectrum into different color bands, which allows for higher efficiencies than traditional methods. We present a design, the polyhedral specular reflector, that has the potential for > 50% module efficiencies even with realistic losses from combined optics, cell, and electrical models. Prototyping one of these designs using glass concentrators yields an optical module whose combined spectrum splitting and concentration should correspond to a record module efficiency of 42%. Finally, we consider how the manipulation of radiatively emitted photons from subcells in multijunction architectures can be used to achieve even higher efficiencies than previously thought, motivating the joint optimization of incident and radiatively emitted photons in future high-efficiency designs. The novel device and photonic designs explored in this thesis represent a significant departure from current solar cell manufacturing techniques and ultimately show the potential for much higher solar cell efficiencies.

Relevance: 30.00%

Abstract:

Structural design is a decision-making process in which a wide spectrum of requirements, expectations, and concerns needs to be properly addressed. Engineering design criteria are considered together with societal and client preferences, and most of these design objectives are affected by the uncertainties surrounding a design. Therefore, realistic design frameworks must be able to handle multiple performance objectives and incorporate uncertainties from numerous sources into the process.

In this study, a multi-criteria based design framework for structural design under seismic risk is explored. The emphasis is on reliability-based performance objectives and their interaction with economic objectives. The framework has analysis, evaluation, and revision stages. In the probabilistic response analysis, seismic loading uncertainties as well as modeling uncertainties are incorporated. For evaluation, two approaches are suggested: one based on preference aggregation and the other based on socio-economics. Both implementations of the general framework are illustrated with simple but informative design examples to explore the basic features of the framework.

The first approach uses concepts similar to those found in multi-criteria decision theory, and directly combines reliability-based objectives with others. This approach is implemented in a single-stage design procedure. In the socio-economics based approach, a two-stage design procedure is recommended in which societal preferences are treated through reliability-based engineering performance measures, but emphasis is also given to economic objectives because these are especially important to the structural designer's client. A rational net asset value formulation including losses from uncertain future earthquakes is used to assess the economic performance of a design. A recently developed assembly-based vulnerability analysis is incorporated into the loss estimation.
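
One common form such a net asset value calculation can take (shown here as a generic illustration rather than the thesis' exact formulation) models earthquake occurrences as a Poisson process with rate \nu, mean loss per event E[L], continuous discount rate \lambda, and planning horizon t, giving a present value of expected earthquake losses of

    PV_{loss} = \nu \, E[L] \, \frac{1 - e^{-\lambda t}}{\lambda} ,

which is subtracted, together with the initial construction cost, from the discounted expected revenues to obtain the design's net asset value. Stronger designs raise the construction cost but lower E[L], and the framework searches for the balance preferred by the client.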

The presented performance-based design framework allows investigation of various design issues and their impact on a structural design. It is flexible and readily allows incorporation of new methods and concepts in seismic hazard specification, structural analysis, and loss estimation.