888 results for Two-stage stochastic model
Abstract:
Single-molecule manipulation experiments on molecular motors provide essential information about the rates and conformational changes of the reaction steps located along the manipulation coordinate. This information is not always sufficient to define a particular kinetic cycle. Recent single-molecule experiments with optical tweezers showed that the DNA unwinding activity of a Phi29 DNA polymerase mutant presents complex pause behavior, which includes short and long pauses. Here we show that different kinetic models, considering different connections between the active and the pause states, can explain the experimental pause behavior. Both the two independent pause model and the two connected pause model are able to describe the pause behavior of a mutated Phi29 DNA polymerase observed in an optical tweezers single-molecule experiment. For the two independent pause model all parameters are fixed by the observed data, while for the more general two connected pause model there is a range of parameter values compatible with the observed data (which can be expressed in terms of two of the rates and their force dependencies). This general model includes models with indirect entry into and exit from the long-pause state, as well as models with cycling in both directions. Additionally, assuming that detailed balance holds, which forbids cycling, further narrows the ranges of the parameter values (which can then be expressed in terms of one rate and its force dependency). The resulting model interpolates between the independent pause model and the model with indirect entry into and exit from the long-pause state.
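As an illustration of the kind of kinetic scheme discussed (a toy two-pause scheme with placeholder rate constants, not the fitted Phi29 model or its force dependencies), the sketch below simulates an active state connected independently to a short-pause and a long-pause state using the Gillespie algorithm.

```python
import numpy as np

# Toy kinetic scheme (NOT the fitted Phi29 model): an active unwinding state A
# connected independently to a short-pause state P1 and a long-pause state P2.
# Rates (s^-1) are arbitrary placeholders for illustration only.
rates = {
    ("A", "P1"): 0.5,   # entry into short pause
    ("P1", "A"): 2.0,   # exit from short pause
    ("A", "P2"): 0.05,  # entry into long pause
    ("P2", "A"): 0.2,   # exit from long pause
}

def gillespie(t_end, state="A", rng=np.random.default_rng(0)):
    """Simulate a state trajectory with the Gillespie (stochastic simulation) algorithm."""
    t, traj = 0.0, [(0.0, state)]
    while t < t_end:
        channels = [(dst, k) for (src, dst), k in rates.items() if src == state]
        total = sum(k for _, k in channels)
        t += rng.exponential(1.0 / total)              # waiting time to next transition
        probs = [k / total for _, k in channels]
        state = channels[rng.choice(len(channels), p=probs)][0]
        traj.append((t, state))
    return traj

traj = gillespie(200.0)
pauses = [s for _, s in traj if s != "A"]
print(f"{len(pauses)} pause events, {sum(s == 'P2' for s in pauses)} long pauses")
```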
Abstract:
Prior research has established that the idiosyncratic volatility of securities prices exhibits a positive trend. This trend and other factors have made the merits of investment diversification and portfolio construction more compelling. A new optimization technique, a greedy algorithm, is proposed to optimize the weights of assets in a portfolio. The main benefits of using this algorithm are to: a) increase the efficiency of the portfolio optimization process, b) implement large-scale optimizations, and c) improve the resulting optimal weights. In addition, the technique utilizes a novel approach in the construction of a time-varying covariance matrix. This involves the application of a modified integrated dynamic conditional correlation GARCH (IDCC-GARCH) model to account for the dynamics of the conditional covariance matrices that are employed. The stochastic aspects of the expected returns of the securities are integrated into the technique through Monte Carlo simulations. Instead of being represented as deterministic values, the expected returns are assigned simulated values based on their historical measures. The time series of the securities are fitted to probability distributions that match their characteristics using the Anderson-Darling goodness-of-fit criterion. Simulated and actual data sets are used to further generalize the results. Employing the S&P500 securities as the base, 2000 simulated data sets are created using Monte Carlo simulation. In addition, the Russell 1000 securities are used to generate 50 sample data sets. The results indicate an increase in risk-return performance. With Value-at-Risk (VaR) as the criterion and the Crystal Ball portfolio optimizer, a commercial product currently available on the market, as the benchmark, the new greedy technique clearly outperforms the alternatives on samples of the S&P500 and the Russell 1000 securities. The resulting improvements in performance are consistent among five securities selection methods (maximum, minimum, random, absolute minimum, and absolute maximum) and three covariance structures (unconditional, orthogonal GARCH, and integrated dynamic conditional GARCH).
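As a rough illustration of the greedy allocation idea (not the thesis's IDCC-GARCH procedure or the Crystal Ball benchmark), the sketch below adds portfolio weight in small increments to whichever asset most improves a Monte Carlo risk-return score that penalizes Value-at-Risk; the return history, risk-aversion weight, and step size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: daily return history for 5 assets (rows = days).
hist = rng.normal(0.0005, 0.01, size=(500, 5))
mu, cov = hist.mean(axis=0), np.cov(hist, rowvar=False)

def score(w, n_sims=2000, alpha=0.05):
    """Mean simulated portfolio return penalized by Value-at-Risk (5% quantile loss)."""
    sims = rng.multivariate_normal(mu, cov, size=n_sims) @ w
    var = -np.quantile(sims, alpha)          # VaR expressed as a positive loss
    return sims.mean() - 0.1 * var           # arbitrary risk-aversion weight

def greedy_weights(n_assets, step=0.02):
    """Greedily add `step` of weight to whichever asset improves the score most."""
    w = np.zeros(n_assets)
    while w.sum() < 1.0 - 1e-9:
        trials = []
        for i in range(n_assets):
            cand = w.copy()
            cand[i] += step
            trials.append((score(cand / cand.sum()), i))
        w[max(trials)[1]] += step            # pick the best-scoring increment
    return w

print(np.round(greedy_weights(5), 2))
```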
Abstract:
This thesis investigates the design of optimal tax systems in dynamic environments. The first essay characterizes the optimal tax system where wages depend on stochastic shocks and work experience. In addition to redistributive and efficiency motives, the taxation of inexperienced workers depends on a second-best requirement that encourages work experience, a social insurance motive, and incentive effects. Calibrations using U.S. data yield higher expected optimal marginal income tax rates for experienced workers than for most inexperienced workers. They confirm that the average marginal income tax rate increases (decreases) with age when shocks and work experience are substitutes (complements). Finally, more variability in experienced workers' earnings prospects leads to increasing tax rates, since income taxation acts as a social insurance mechanism. In the second essay, the properties of an optimal tax system are investigated in a dynamic private information economy where labor market frictions create unemployment that destroys workers' human capital. A two-skill-type model is considered where wages and employment are endogenous. I find that the optimal tax system distorts the first-period wages of all workers below their efficient levels, which leads to more employment. The standard no-distortion-at-the-top result no longer holds due to the combination of private information and the destruction of human capital. I show this result analytically under the Maximin social welfare function and confirm it numerically for a general social welfare function. I also investigate the use of a training program and job creation subsidies. The final essay analyzes the optimal linear tax system when there is a population of individuals whose perceptions of savings are linked to their disposable income and their family background through family cultural transmission. Aside from the standard equity/efficiency trade-off, taxes account for the endogeneity of perceptions through two channels. First, taxing labor decreases income, which decreases the perception of savings over time. Second, taxing savings corrects for workers' misperceptions and thus their savings and labor decisions. Numerical simulations confirm that behavioral issues push labor income taxes upward to finance saving subsidies. Government transfers to individuals are also decreased to finance those same subsidies.
Abstract:
Strategic supply chain optimization (SCO) problems are often modelled as two-stage optimization problems, in which the first-stage variables represent decisions on the development of the supply chain and the second-stage variables represent decisions on the operations of the supply chain. When uncertainty is explicitly considered, the problem becomes an intractable infinite-dimensional optimization problem, which is usually solved approximately via a scenario or a robust approach. This paper proposes a novel synergy of the scenario and robust approaches for strategic SCO under uncertainty. Two formulations are developed, namely, the naïve robust scenario formulation and the affinely adjustable robust scenario formulation. It is shown that both formulations can be reformulated into tractable deterministic optimization problems if the uncertainty is bounded by the infinity-norm, and that the uncertain equality constraints can be reformulated into deterministic constraints without any assumption on the uncertainty region. Case studies of a classical farm planning problem and an energy and bioproduct SCO problem demonstrate the advantages of the proposed formulations over the classical scenario formulation. The proposed formulations not only generate solutions with guaranteed feasibility or indicate the infeasibility of a problem, but also achieve optimal expected economic performance with smaller numbers of scenarios.
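For concreteness, the sketch below solves the deterministic equivalent of a toy two-stage problem under the classical scenario approach that the paper's robust-scenario formulations extend; the newsvendor-style data are invented for illustration and are unrelated to the paper's farm planning or bioproduct case studies.

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-stage problem (not the paper's case studies): choose production x now
# (first stage), then sell y_s in each demand scenario s (second-stage recourse).
cost, price = 3.0, 5.0
demand = np.array([80.0, 100.0, 140.0])          # demand scenarios
prob = np.array([0.3, 0.5, 0.2])                 # scenario probabilities
S = len(demand)

# Variables: [x, y_1, ..., y_S]; minimize cost*x - E[price*y_s].
c = np.concatenate(([cost], -price * prob))

# Recourse constraints y_s <= x  ->  y_s - x <= 0.
A_ub = np.zeros((S, 1 + S))
A_ub[:, 0] = -1.0
A_ub[np.arange(S), 1 + np.arange(S)] = 1.0
b_ub = np.zeros(S)

bounds = [(0, None)] + [(0, d) for d in demand]  # cannot sell more than demand
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(f"first-stage production: {res.x[0]:.1f}, expected profit: {-res.fun:.1f}")
```

With these numbers the optimal first-stage decision is 100 units: producing beyond the middle scenario's demand is only worthwhile while the probability of selling an extra unit exceeds cost/price.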
Abstract:
Because of their high efficacy, long lifespan, and environmentally friendly operation, LED lighting devices are becoming increasingly popular in every part of our lives, such as ornamental/interior lighting, outdoor lighting, and floodlighting. The LED driver is the most critical part of the LED lighting fixture: it heavily affects the purchase cost and operating cost as well as the light quality. Designing a high-efficiency, low-component-cost, flicker-free LED driver is the goal of this work. The conventional single-stage LED driver can achieve low cost and high efficiency. However, it inevitably produces significant twice-line-frequency lighting flicker, which adversely affects our health. The conventional two-stage LED driver can achieve flicker-free LED driving, but at the expense of significantly higher component cost, greater design complexity, and lower efficiency. The basic ripple cancellation LED driving method is proposed in chapter three. It achieves high efficiency and low component cost comparable to the single-stage LED driver while also delivering flicker-free LED driving. The basic ripple cancellation LED driver is the foundation of the entire thesis. As the research evolved, two further ripple cancellation LED drivers were developed to improve different aspects of the basic design. The primary-side-controlled ripple cancellation LED driver is proposed in chapter four to further reduce the cost of the control circuit. It eliminates the secondary-side compensation circuit and the opto-coupler while maintaining flicker-free LED driving. A potential integrated primary-side controller can be designed based on the proposed LED driving method. The energy channeling ripple cancellation LED driver is proposed in chapter five to further reduce the cost of the power stage circuit. In the previous two ripple cancellation LED drivers, an additional DC-DC converter is needed to achieve ripple cancellation; in the energy channeling design, a power transistor replaces the separate DC-DC converter and therefore achieves a lower cost. Detailed analysis supports the theory of the proposed ripple cancellation LED drivers, and simulation and experimental results are included to verify them.
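To make the twice-line-frequency flicker issue concrete, the back-of-envelope sketch below estimates the low-frequency ripple on a single-stage driver's output capacitor from the standard energy-balance argument; the power, voltage, and capacitance values are assumed examples, not figures from the thesis.

```python
import math

# Back-of-envelope estimate (example values, not from the thesis): a single-stage
# PFC driver delivers constant output power while the input power pulsates at
# twice the line frequency, so the output capacitor buffers p_C(t) = -P_o*cos(2wt).
# Energy balance gives a peak-to-peak ripple of  dV_pp ~= P_o / (2*pi*f_line*C*V_o).
P_o = 30.0        # output power (W)
V_o = 50.0        # LED string voltage (V)
f_line = 60.0     # line frequency (Hz)
C = 1000e-6       # output capacitance (F)

dv_pp = P_o / (2 * math.pi * f_line * C * V_o)
print(f"~{dv_pp:.2f} Vpp ripple -> ~{100 * dv_pp / V_o:.1f}% of the LED voltage")
```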
Abstract:
Global niobium production is presently dominated by three operations: Araxá and Catalão (Brazil) and Niobec (Canada). Although Brazil accounts for over 90% of the world’s niobium production, a number of high-grade niobium deposits exist worldwide. The advancement of these deposits depends largely on the development of operable beneficiation flowsheets. Pyrochlore, as the primary niobium mineral, is typically upgraded by flotation with amine collectors at acidic pH following a complicated flowsheet with significant losses of niobium. This research compares the typical two-stage flotation flowsheet to a direct flotation process (i.e., elimination of gangue pre-flotation) with the objective of circuit simplification. In addition, the use of a chelating reagent (benzohydroxamic acid, BHA) was studied as an alternative collector for fine-grained, highly disseminated pyrochlore. For the amine-based reagent system, results showed that, while comparable at the laboratory scale, when scaled up to the pilot level the direct flotation process suffered from circuit instability because of high quantities of dissolved calcium in the process water, caused by stream recirculation and fine calcite dissolution, which ultimately depressed pyrochlore. This scale-up issue was not observed in pilot plant operation of the two-stage flotation process, as a portion of the highly reactive carbonate minerals was removed prior to acid addition. A statistical model was developed for batch flotation using BHA on a carbonatite ore (0.25% Nb2O5) that could not be effectively upgraded using the conventional amine reagent scheme. Results showed that it was possible to produce a concentrate containing 1.54% Nb2O5 with 93% Nb recovery in ~15% of the original mass. Fundamental studies undertaken included FT-IR and XPS, which showed the adsorption of both the protonated amine and the neutral amine onto the surface of the pyrochlore (possibly at niobium sites, as indicated by detected shifts in the Nb 3d binding energy). The results suggest that the preferential flotation of pyrochlore over quartz with amines at low pH can be attributed to a difference in critical hemimicelle concentration (CHC) values for the two minerals. BHA was found to adsorb onto pyrochlore surfaces by a mechanism similar to that of alkyl hydroxamic acid. It is hoped that this work will assist in improving the operability of existing pyrochlore flotation circuits and help promote the development of niobium deposits globally. Future studies should focus on investigating specific gangue mineral depressants and the inadvertent activation phenomena related to BHA flotation of gangue minerals.
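The quoted BHA result can be checked with a simple two-product mass balance; the sketch below approximately reproduces the reported ~93% Nb recovery from the stated feed grade, concentrate grade, and ~15% mass pull.

```python
# Two-product mass balance check of the quoted BHA flotation result:
# feed grade 0.25% Nb2O5, concentrate grade 1.54% Nb2O5, ~15% mass pull.
feed_grade = 0.25      # % Nb2O5 in feed
conc_grade = 1.54      # % Nb2O5 in concentrate
mass_pull = 0.15       # fraction of feed mass reporting to concentrate

recovery = mass_pull * conc_grade / feed_grade
upgrade_ratio = conc_grade / feed_grade
print(f"Nb recovery ~ {100 * recovery:.0f}% (reported: 93%)")
print(f"upgrade ratio ~ {upgrade_ratio:.1f}x")
```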
Abstract:
Self-assembly of nanoparticles is a promising route to form complex, nanostructured materials with functional properties. Nanoparticle assemblies characterized by a crystallographic alignment of the nanoparticles on the atomic scale, i.e. mesocrystals, are commonly found in nature with outstanding functional and mechanical properties. This thesis aims to investigate and understand the formation mechanisms of mesocrystals formed by self-assembling iron oxide nanocubes. We have used the thermal decomposition method to synthesize monodisperse, oleate-capped iron oxide nanocubes with average edge lengths between 7 nm and 12 nm and studied their evaporation-induced self-assembly in dilute toluene-based dispersions. The influence of packing constraints on the alignment of the nanocubes in nanofluidic containers has been investigated with small- and wide-angle X-ray scattering (SAXS and WAXS, respectively). We found that the nanocubes preferentially orient one of their {100} faces parallel to the confining channel wall and display mesocrystalline alignment irrespective of the channel width. We manipulated the solvent evaporation rate of drop-cast dispersions on fluorosilane-functionalized silica substrates in a custom-designed cell. The growth stages of the assembly process were investigated using light microscopy and quartz crystal microbalance with dissipation monitoring (QCM-D). We found that particle transport phenomena, e.g. the coffee-ring effect and Marangoni flow, result in complex-shaped arrays near the three-phase contact line of a drying colloidal drop when the nitrogen flow rate is high. Diffusion-driven nanoparticle assembly into large mesocrystals with a well-defined morphology dominates at much lower nitrogen flow rates. Analysis of the time-resolved video microscopy data was used to quantify the mesocrystal growth and establish a particle-diffusion-based, three-dimensional growth model. The dissipation obtained from the QCM-D signal reached its maximum value when the microscopy-observed lateral growth of the mesocrystals ceased, which we attribute to the fluid-like behavior of the mesocrystals and their weak binding to the substrate. Analysis of electron microscopy images and diffraction patterns showed that the formed arrays display significant nanoparticle ordering, regardless of the distinctive formation process. We followed the two-stage formation mechanism of mesocrystals in levitating colloidal drops with real-time SAXS. Modelling of the SAXS data with a square-well potential, together with calculations of the van der Waals interactions, suggests that the nanocubes initially form disordered clusters, which quickly transform into an ordered phase.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The FIREDASS (FIRE Detection And Suppression Simulation) project is concerned with the development of fine water mist systems as a possible replacement for the halon fire suppression system currently used in aircraft cargo holds. The project is funded by the European Commission, under the BRITE EURAM programme. The FIREDASS consortium is made up of a combination of Industrial, Academic, Research and Regulatory partners. As part of this programme of work, a computational model has been developed to help engineers optimise the design of the water mist suppression system. This computational model is based on Computational Fluid Dynamics (CFD) and is composed of the following components: fire model; mist model; two-phase radiation model; suppression model and detector/activation model. The fire model - developed by the University of Greenwich - uses prescribed release rates for heat and gaseous combustion products to represent the fire load. Typical release rates have been determined through experimentation conducted by SINTEF. The mist model - developed by the University of Greenwich - is a Lagrangian particle tracking procedure that is fully coupled to both the gas phase and the radiation field. The radiation model - developed by the National Technical University of Athens - uses a six-flux radiation formulation. The suppression model - developed by SINTEF and the University of Greenwich - is based on an extinguishment criterion that relies on oxygen concentration and temperature. The detector/activation model - developed by Cerberus - allows many different detector and mist configurations to be tested within the computational model. These sub-models have been integrated by the University of Greenwich into the FIREDASS software package. The model has been validated using data from the SINTEF/GEC test campaigns and it has been found that the computational model gives good agreement with these experimental results. The best agreement is obtained at the ceiling, which is where the detectors and misting nozzles would be located in a real system. In this paper the model is briefly described and some results from the validation of the fire and mist models are presented.
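As a much simplified illustration of Lagrangian particle tracking of mist droplets (a single droplet under Stokes drag and gravity in quiescent gas, with assumed droplet size and release height, rather than the fully coupled FIREDASS mist model), consider the sketch below.

```python
# Simplified Lagrangian tracking of a single fine water-mist droplet
# (illustration only, not the coupled FIREDASS mist model): Stokes drag and
# gravity in quiescent gas, integrated with explicit Euler steps.
rho_w = 1000.0                       # water density (kg/m^3)
mu_g = 1.8e-5                        # air dynamic viscosity (Pa*s)
d = 50e-6                            # droplet diameter (m), typical fine mist
tau = rho_w * d**2 / (18.0 * mu_g)   # Stokes relaxation time (s)
g = 9.81                             # gravitational acceleration (m/s^2)

z, v = 2.5, 0.0                      # assumed release height (m) and initial velocity (m/s)
dt, t = 1e-3, 0.0
while z > 0.0:                       # integrate until the droplet reaches the floor
    a = -v / tau - g                 # drag toward zero gas velocity, plus gravity
    v += a * dt
    z += v * dt
    t += dt

print(f"relaxation time {tau * 1e3:.1f} ms, settling time to the floor {t:.1f} s")
```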
Abstract:
Human standing posture is inherently unstable. The postural control system (PCS), which maintains standing posture, is composed of the sensory, musculoskeletal, and central nervous systems. Together these systems integrate sensory afferents and generate appropriate motor efferents to adjust posture. The PCS maintains the body center of mass (COM) with respect to the base of support while constantly resisting destabilizing forces from internal and external perturbations. To assess the human PCS, postural sway during quiet standing or in response to external perturbation has frequently been examined descriptively. Minimal work has been done to understand and quantify the robustness of the PCS to perturbations. Further, there have been some previous attempts to assess the dynamical systems aspects of the PCS, or the time-evolutionary properties of postural sway. However, those techniques can only provide summary information about the PCS characteristics; they cannot provide specific information about or recreate the actual sway behavior. This dissertation consists of two parts: part I, the development of two novel methods to assess the human PCS, and part II, the application of these methods. In study 1, a systematic method for analyzing the human PCS during perturbed stance was developed. A mild impulsive perturbation that subjects can easily experience in their daily lives was used. A measure of the robustness of the PCS, 1/MaxSens, based on the inverse of the sensitivity of the system, was introduced. 1/MaxSens successfully quantified the reduced robustness to external perturbations due to age-related degradation of the PCS. In study 2, a stochastic model was used to better understand the human PCS from a dynamical systems perspective. This methodology also has the advantage over previous methods that the sway behavior is captured in a model that can be used to recreate the random oscillatory properties of the PCS. The invariant density, which describes the long-term stationary behavior of the center of pressure (COP), was computed from a Markov chain model applied to postural sway data during quiet stance. In order to validate the Invariant Density Analysis (IDA), we applied the technique to COP data from different age groups. We found that older adults swayed farther from the centroid and in a more stochastic and random manner than young adults. In part II, the tools developed in part I were applied to both occupational and clinical situations. In study 3, 1/MaxSens and IDA were applied to a population of firefighters to investigate the effects of air bottle configuration (weight and size) and vision on the postural stability of firefighters. We found that both air bottle weight and loss of vision, but not air bottle size, significantly decreased balance performance and increased fall risk. In study 4, IDA was applied to data collected on 444 community-dwelling elderly adults from the MOBILIZE Boston Study. Four out of five IDA parameters were able to successfully differentiate recurrent fallers from non-fallers, while only five out of 30 more common descriptive and stochastic COP measures could distinguish the two groups. Fall history and the IDA parameter of entropy were found to be significant risk factors for falls. This research proposed a new measure of PCS robustness (1/MaxSens) and a new technique for quantifying the dynamical systems aspects of the PCS (IDA).
These new PCS analysis techniques provide easy and effective ways to assess the PCS in occupational and clinical environments.
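A minimal sketch of the invariant density idea behind IDA, under stated assumptions (a synthetic one-dimensional COP signal and simple uniform binning rather than the study's measured sway data or its exact state definition): discretize the sway signal into states, estimate a Markov transition matrix, and take its stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 1-D center-of-pressure (COP) signal standing in for real sway data:
# a simple mean-reverting random process (NOT the study's measured COP data).
n = 20000
cop = np.zeros(n)
for t in range(1, n):
    cop[t] = 0.98 * cop[t - 1] + rng.normal(0.0, 0.5)

# Discretize the COP range into bins that serve as Markov chain states.
n_states = 20
edges = np.linspace(cop.min(), cop.max(), n_states + 1)
states = np.clip(np.digitize(cop, edges) - 1, 0, n_states - 1)

# Estimate the transition matrix from state-to-state counts
# (a tiny pseudo-count keeps rows with no observed exits well defined).
P = np.full((n_states, n_states), 1e-9)
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1.0
P /= P.sum(axis=1, keepdims=True)

# The invariant density is the left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi = np.abs(pi) / np.abs(pi).sum()
print("invariant density over COP states:", np.round(pi, 3))
```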
Abstract:
The text of this thesis provides a historical introduction to the two studies, "Theoretical Model of Superconductivity and the Martensitic Transformation in A15 Compounds" and "A Comparison of Kadanoff-Migdal Renormalization with New Monte Carlo Results for the XY Model", contained herein as appendices.
Abstract:
The size of online image datasets is constantly increasing. For an image dataset with millions of images, image retrieval becomes a seemingly intractable problem for exhaustive similarity search algorithms. Hashing methods, which encode high-dimensional descriptors into compact binary strings, have become very popular because of their high efficiency in both search and storage. In the first part, we propose a multimodal retrieval method based on latent feature models. The procedure consists of a nonparametric Bayesian framework for learning underlying semantically meaningful abstract features in a multimodal dataset, a probabilistic retrieval model that allows cross-modal queries, and an extension model for relevance feedback. In the second part, we focus on supervised hashing with kernels. We describe a flexible hashing procedure that treats binary codes and pairwise semantic similarity as latent and observed variables, respectively, in a probabilistic model based on Gaussian processes for binary classification. We present a scalable inference algorithm with the sparse pseudo-input Gaussian process (SPGP) model and distributed computing. In the last part, we define an incremental hashing strategy for dynamic databases where new images are added to the database frequently. The method is based on a two-stage classification framework using binary and multi-class SVMs. The proposed method also enforces balance in the binary codes via an imbalance penalty to obtain higher-quality codes. We learn hash functions with an efficient algorithm in which the NP-hard problem of finding optimal binary codes is solved via cyclic coordinate descent and the SVMs are trained in a parallelized, incremental manner. For modifications such as adding images from an unseen class, we propose an incremental procedure for effective and efficient updates to the previous hash functions. Experiments on three large-scale image datasets demonstrate that the incremental strategy is capable of efficiently updating hash functions to the same retrieval performance as hashing from scratch.
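As a minimal sketch of the general hashing idea (encoding high-dimensional descriptors into compact binary codes and searching by Hamming distance), the example below uses unsupervised random hyperplanes on synthetic descriptors; it is not the thesis's supervised Gaussian-process or SVM-based hashing.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy image descriptors: 10,000 items with 128-dimensional features.
n_items, dim, n_bits = 10_000, 128, 64
X = rng.normal(size=(n_items, dim))

# Random-hyperplane hashing (LSH): one random hyperplane per bit.
H = rng.normal(size=(dim, n_bits))
codes = (X @ H > 0)                      # compact binary code per item

def hamming_search(query, k=5):
    """Return indices of the k database items closest in Hamming distance."""
    q_code = (query @ H > 0)
    dists = np.count_nonzero(codes != q_code, axis=1)
    return np.argsort(dists)[:k]

query = X[42] + rng.normal(scale=0.1, size=dim)   # a noisy copy of item 42
print("nearest items by Hamming distance:", hamming_search(query))
```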
Abstract:
Nonpoint source (NPS) pollution from agriculture is the leading source of water quality impairment in U.S. rivers and streams, and a major contributor to the impairment of lakes, wetlands, estuaries and coastal waters (U.S. EPA 2016). Using data from a survey of farmers in Maryland, this dissertation examines the effects of a cost-sharing policy designed to encourage the adoption of conservation practices that reduce NPS pollution in the Chesapeake Bay watershed. This watershed is the site of the largest Total Maximum Daily Load (TMDL) implemented to date, making it an important setting in the U.S. for water quality policy. I study two main questions related to the reduction of NPS pollution from agriculture. First, I examine the issue of additionality of cost-sharing payments by estimating the direct effect of cover crop cost sharing on the acres of cover crops, and the indirect effect of cover crop cost sharing on the acres of two other practices: conservation tillage and contour/strip cropping. A two-stage simultaneous equation approach is used to correct for voluntary self-selection into cost-sharing programs and to account for substitution effects among conservation practices. Quasi-random Halton sequences are employed to solve the system of equations for conservation practice acreage and to minimize the computational burden involved. By considering patterns of agronomic complementarity or substitution among conservation practices (Blum et al., 1997; USDA SARE, 2012), this analysis estimates the water quality impacts of the crowding-in or crowding-out of private investment in conservation due to public incentive payments. Second, I connect the econometric behavioral results with model parameters from the EPA’s Chesapeake Bay Program to conduct a policy simulation of water quality effects. I expand the econometric model to also consider the potential loss of vegetative cover due to cropland incentive payments, or slippage (Lichtenberg and Smith-Ramirez, 2011). Econometric results are linked with the Chesapeake Bay Program watershed model to estimate the change in abatement levels and costs for nitrogen, phosphorus and sediment under various behavioral scenarios. Finally, I use inverse sampling weights to derive statewide abatement quantities and costs for each of these pollutants, comparing these with TMDL targets for agriculture in Maryland.
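A minimal sketch of the quasi-random Halton draws mentioned above (the bases, dimension, and mapping to normal errors are illustrative assumptions, not the dissertation's estimation code):

```python
import numpy as np
from scipy.stats import norm

def halton(n, base):
    """First n points of the one-dimensional Halton sequence in the given base."""
    seq = np.zeros(n)
    for i in range(1, n + 1):
        f, x, k = 1.0, 0.0, i
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i - 1] = x
    return seq

# Two-dimensional quasi-random draws (bases 2 and 3), mapped to standard normal
# errors with the inverse CDF -- the usual way Halton draws enter simulation-based
# estimation of selection-corrected systems.
u = np.column_stack([halton(500, 2), halton(500, 3)])
eps = norm.ppf(u)          # halton() never returns 0 or 1, so ppf stays finite
print(eps[:3])
```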
Abstract:
Matching theory and matching markets are a core component of modern economic theory and market design. This dissertation presents three original contributions to this area. The first essay constructs a matching mechanism for an incomplete-information matching market in which the positive assortative match is the unique efficient and unique stable match. The mechanism asks each agent in the matching market to reveal her privately known type. Through its novel payment rule, truthful revelation forms an ex post Nash equilibrium in this setting. The mechanism works in one-, two- and many-sided matching markets, thus offering the first mechanism to unify these matching markets under a single mechanism design framework. The second essay confronts a matching problem in an environment in which no efficient and incentive-compatible matching mechanism exists due to matching externalities. I develop a two-stage matching game in which a contracting stage facilitates a subsequent, conditionally efficient and incentive-compatible Vickrey auction stage. Infinite repetition of this two-stage matching game enforces the contract in every period. This mechanism produces inequitably distributed social improvement: parties to the contract receive all of the gains and then some. The final essay demonstrates the existence of prices that stably and efficiently partition a single set of agents into firms and workers, and match those two sets to each other. This pricing system extends Kelso and Crawford's general equilibrium results in a labor market matching model and links one- and two-sided matching markets as well.
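A minimal sketch of the positive assortative match referenced in the first essay, for a hypothetical two-sided market with scalar types: sort each side by type and pair agents rank by rank.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical privately known scalar types for the two sides of a matching market.
workers = rng.uniform(size=8)
firms = rng.uniform(size=8)

# Positive assortative matching: the highest-type worker is paired with the
# highest-type firm, the second-highest with the second-highest, and so on.
match = list(zip(np.argsort(-workers), np.argsort(-firms)))

for w, f in match:
    print(f"worker {w} (type {workers[w]:.2f}) <-> firm {f} (type {firms[f]:.2f})")
```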
Abstract:
Incidents and rolling stock breakdowns are commonplace in rapid transit rail systems and may disrupt system performance, imposing deviations from planned operations. A network design model is proposed that reduces the effects of disruptions by making them less likely to occur. Failure probabilities are considered to be functions of the number of services and the rolling stock's routing on the designed network, so they cannot be calculated a priori but instead result from the design process itself. A two-recourse stochastic programming model is formulated in which the failure probabilities are an implicit function of the number of services and the routing of the transit lines.