8 results for continuous-time asymptotics
at Duke University
Abstract:
This paper analyzes a class of common-component allocation rules, termed no-holdback (NHB) rules, in continuous-review assemble-to-order (ATO) systems with positive lead times. The inventory of each component is replenished following an independent base-stock policy. In contrast to the usually assumed first-come-first-served (FCFS) component allocation rule in the literature, an NHB rule allocates a component to a product demand only if it will yield immediate fulfillment of that demand. We identify metrics as well as cost and product structures under which NHB rules outperform all other component allocation rules. For systems with certain product structures, we obtain key performance expressions and compare them to those under FCFS. For general product structures, we present performance bounds and approximations. Finally, we discuss the applicability of these results to more general ATO systems. © 2010 INFORMS.
Abstract:
We develop general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit recent nonparametric asymptotic distributional results, are both easy-to-implement and highly accurate in empirically realistic situations. We also illustrate that properly accounting for the measurement errors in the volatility forecast evaluations reported in the existing literature can result in markedly higher estimates for the true degree of return volatility predictability.
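As an illustration of why such adjustments matter, the following is a minimal simulation (not the paper's adjustment procedure; all parameters are hypothetical) showing how measurement error in a feasible realized-volatility proxy deflates a standard Mincer-Zarnowitz R² relative to evaluation against the true, latent variance.

```python
# Minimal sketch: measurement error in a realized-variance proxy attenuates
# apparent volatility predictability. Hypothetical parameters throughout.
import numpy as np

rng = np.random.default_rng(0)
T, M = 5000, 26             # days; intraday returns per day (hypothetical)
phi, sigma_u = 0.97, 0.25   # AR(1) persistence / shock std of log integrated variance

# Latent daily log integrated variance follows an AR(1).
log_iv = np.zeros(T)
for t in range(1, T):
    log_iv[t] = phi * log_iv[t - 1] + sigma_u * rng.standard_normal()
iv = np.exp(log_iv)

# One-day-ahead forecast: conditional mean of IV_t given IV_{t-1}.
forecast = np.full(T, np.nan)
forecast[1:] = np.exp(phi * log_iv[:-1] + 0.5 * sigma_u**2)

# Feasible realized-variance proxy: sum of M squared intraday returns.
intraday = rng.standard_normal((T, M)) * np.sqrt(iv[:, None] / M)
rv = (intraday**2).sum(axis=1)          # unbiased for iv, but noisy

def mz_r2(target, fcst):
    """R^2 of a Mincer-Zarnowitz regression target_t = a + b * fcst_t + e_t."""
    mask = ~np.isnan(fcst)
    return np.corrcoef(target[mask], fcst[mask])[0, 1] ** 2

print("R^2 against noisy RV proxy :", round(mz_r2(rv, forecast), 3))
print("R^2 against true variance  :", round(mz_r2(iv, forecast), 3))
# The first R^2 understates true predictability because RV measures IV with error.
```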
Abstract:
BACKGROUND: Singapore's population, like those of many other countries, is aging; this is likely to lead to an increase in eye diseases and the demand for eye care. Since ophthalmologist training is long and expensive, early planning is essential. This paper forecasts workforce and training requirements for Singapore up to the year 2040 under several plausible future scenarios. METHODS: The Singapore Eye Care Workforce Model was created as a continuous-time compartment model with explicit workforce stocks using system dynamics. The model has three modules: prevalence of eye disease, demand, and workforce requirements. The model is used to simulate the prevalence of eye diseases, patient visits, and workforce requirements for the public sector under different scenarios in order to determine training requirements. RESULTS: Four scenarios were constructed. Under the baseline business-as-usual scenario, the required number of ophthalmologists is projected to increase by 117% from 2015 to 2040. Under the current policy scenario (assuming an increase in service uptake due to increased awareness, availability, and accessibility of eye care services), the increase will be 175%, while under the new model of care scenario (considering the additional effect of providing some services by non-ophthalmologists) the increase will only be 150%. The moderated workload scenario (assuming in addition a reduction of the clinical workload) projects an increase in the required number of ophthalmologists of 192% by 2040. Considering the uncertainties in the projected demand for eye care services, a residency intake of 8-22 residents per year is required under the business-as-usual scenario, 17-21 under the current policy scenario, 14-18 under the new model of care scenario, and 18-23 under the moderated workload scenario. CONCLUSIONS: The results show that under all scenarios considered, Singapore's aging and growing population will result in an almost doubling of the number of Singaporeans with eye conditions, a significant increase in public sector eye care demand and, consequently, a greater requirement for ophthalmologists.
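The workforce module of such a system-dynamics model can be illustrated with a minimal stock-and-flow sketch: residents flow into a resident stock, graduate into a specialist stock, and exit through attrition. All rates and initial values below are hypothetical, not the calibration of the Singapore Eye Care Workforce Model.

```python
# Minimal stock-and-flow sketch of a workforce compartment model (hypothetical numbers).
dt, years = 0.25, 25                   # time step (years) and horizon (e.g. 2015-2040)
steps = int(years / dt)

residents, specialists = 40.0, 250.0   # initial stocks
intake_per_year = 15.0                 # policy lever: residency intake
training_years = 5.0                   # average residency duration (first-order delay)
attrition_rate = 0.03                  # fraction of specialists leaving per year

for _ in range(steps):
    graduations = residents / training_years
    residents += dt * (intake_per_year - graduations)
    specialists += dt * (graduations - attrition_rate * specialists)

print(f"Specialists after {years} years: {specialists:.0f}")
# Demand-side modules (disease prevalence, visit rates) would set a required
# headcount; intake_per_year is then tuned so the specialist stock tracks it.
```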
Abstract:
Urban problems have several features that make them inherently dynamic. Large transaction costs all but guarantee that homeowners will do their best to consider how a neighborhood might change before buying a house. Similarly, stores face large sunk costs when opening, and want to be sure that their investment will pay off in the long run. In line with those concerns, different areas of Economics have made recent advances in modeling those questions within a dynamic framework. This dissertation contributes to those efforts.
Chapter 2 discusses how to model an agent’s location decision when the agent must learn about an exogenous amenity that may be changing over time. The model is applied to estimating the marginal willingness to pay to avoid crime, in which agents are learning about the crime rate in a neighborhood, and the crime rate can change in predictable (Markovian) ways.
Chapters 3 and 4 concentrate on location decision problems when there are externalities between decision makers. Chapter 3 focuses on the decision of business owners to open a store, when its demand is a function of other nearby stores, either through competition or through spillovers on foot traffic. It uses a dynamic model in continuous time to capture agents’ decisions. A particular challenge is isolating the contribution of spillovers from the contribution of other unobserved neighborhood attributes that could also lead to agglomeration. A key contribution of this chapter is showing how information on storefront ownership can be used to help separately identify spillovers.
Finally, Chapter 4 focuses on a class of models in which families prefer to live close to similar neighbors. This chapter provides the first simulation of such a model in which agents are forward looking, and shows that this leads to more segregation than would be observed with myopic agents, which are the standard in this literature. The chapter also discusses several extensions of the model that can be used to investigate relevant questions such as the arrival of a large contingent of high-skilled tech workers in San Francisco, the immigration of Hispanic families to several southern American cities, large changes in local amenities, such as the construction of magnet schools or metro stations, and the flight of wealthy residents from cities in the Rust Belt, such as Detroit.
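For context, the myopic benchmark in this literature is Schelling-style sorting, which the following minimal sketch illustrates; it is not the forward-looking model the chapter develops, and the grid size, tolerance, and vacancy rate are hypothetical.

```python
# Minimal myopic (Schelling-style) segregation sketch; hypothetical parameters.
import numpy as np

rng = np.random.default_rng(1)
n, tolerance, vacancy = 40, 0.4, 0.1
grid = rng.choice([0, 1, 2], size=(n, n),
                  p=[vacancy, (1 - vacancy) / 2, (1 - vacancy) / 2])
# 0 = vacant cell, 1 / 2 = the two groups

def similar_share(g, i, j):
    """Share of occupied Moore neighbors of (i, j) in the same group as (i, j)."""
    block = g[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    neighbors = block[block > 0]
    same = np.count_nonzero(neighbors == g[i, j]) - 1   # exclude the agent itself
    occupied = neighbors.size - 1
    return 1.0 if occupied <= 0 else same / occupied

for _ in range(30):                                     # relocation rounds
    movers = [(i, j) for i in range(n) for j in range(n)
              if grid[i, j] > 0 and similar_share(grid, i, j) < tolerance]
    vacancies = list(zip(*np.nonzero(grid == 0)))
    rng.shuffle(movers)
    for (i, j) in movers:
        if not vacancies:
            break
        vi, vj = vacancies.pop(rng.integers(len(vacancies)))
        grid[vi, vj], grid[i, j] = grid[i, j], 0        # move to a random vacancy
        vacancies.append((i, j))

occupied = np.nonzero(grid > 0)
avg_similarity = np.mean([similar_share(grid, i, j) for i, j in zip(*occupied)])
print(f"Average share of similar neighbors after sorting: {avg_similarity:.2f}")
```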
Abstract:
This dissertation consists of three separate essays on job search and labor market dynamics. In the first essay, “The Impact of Labor Market Conditions on Job Creation: Evidence from Firm Level Data”, I study how much changes in labor market conditions reduce employment fluctuations over the business cycle. Changes in labor market conditions make hiring more expensive during expansions and cheaper during recessions, creating counter-cyclical incentives for job creation. I estimate firm-level elasticities of labor demand with respect to changes in labor market conditions, considering two margins: changes in labor market tightness and changes in wages. Using employer-employee matched data from Brazil, I find that all firms are more sensitive to changes in wages than to changes in labor market tightness, and that there is substantial heterogeneity in labor demand elasticity across regions. Based on these results, I demonstrate that changes in labor market conditions reduce the variance of employment growth over the business cycle by 20% in the median region, and that this effect is driven equally by changes along each margin. Moreover, I show that the magnitude of the effect of labor market conditions on employment growth can be significantly affected by economic policy. In particular, I document that the rapid growth of the national minimum wage in Brazil in 1997-2010 amplified the impact of the change in labor market conditions during local expansions and diminished this impact during local recessions.
In the second essay, “A Framework for Estimating Persistence of Local Labor Demand Shocks”, I propose a decomposition that allows me to study the persistence of local labor demand shocks. The persistence of labor demand shocks varies across industries, and the incidence of shocks in a region depends on the regional industrial composition. As a result, less diverse regions are more likely to experience deeper shocks, but not necessarily longer-lasting ones. Building on this idea, I decompose local labor demand shocks into idiosyncratic location shocks and nationwide industry shocks and estimate the variance and the persistence of these shocks using the Quarterly Census of Employment and Wages (QCEW) in 1990-2013.
In the third essay, “Conditional Choice Probability Estimation of Continuous-Time Job Search Models”, co-authored with Peter Arcidiacono and Arnaud Maurel, we propose a novel, computationally feasible method of estimating non-stationary job search models. Non-stationary job search models arise in many applications in which a policy change can be anticipated by workers; the most prominent example of such a policy is the expiration of unemployment benefits. However, estimating these models poses a considerable computational challenge because of the need to solve a differential equation numerically at each step of the optimization routine. We overcome this challenge by adapting conditional choice probability methods, widely used in the dynamic discrete choice literature, to job search models, and we show how the hazard rate out of unemployment and the distribution of accepted wages, both of which can be estimated in many datasets, can be used to infer the value of unemployment. We demonstrate how to apply our method by analyzing the effect of unemployment benefit expiration on the duration of unemployment, using data from the Survey of Income and Program Participation (SIPP) in 1996-2007.
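The empirical inputs the essay relies on, the hazard rate out of unemployment and the accepted-wage distribution, can be sketched as follows. This is not the paper's CCP estimator; the example data, column names, and binning are hypothetical.

```python
# Minimal sketch: empirical hazard out of unemployment and accepted wages
# from completed and right-censored spells (hypothetical data).
import numpy as np
import pandas as pd

spells = pd.DataFrame({
    "duration_weeks": [4, 9, 14, 14, 20, 26, 26, 31, 40, 52],
    "reemployed":     [1, 1, 1, 0, 1, 1, 1, 0, 1, 0],   # 0 = right-censored
    "accepted_wage":  [14.0, 12.5, 16.0, np.nan, 11.0,
                       13.5, 15.0, np.nan, 12.0, np.nan],
})

# Discrete-time hazard in 13-week bins: exits / spells still at risk at the bin start.
bins = np.arange(0, 65, 13)
at_risk = np.array([(spells.duration_weeks >= lo).sum() for lo in bins[:-1]])
exits = np.array([((spells.duration_weeks >= lo) & (spells.duration_weeks < hi)
                   & (spells.reemployed == 1)).sum()
                  for lo, hi in zip(bins[:-1], bins[1:])])
hazard = exits / at_risk
print("Hazard by quarter of the spell:", np.round(hazard, 2))

# Accepted wages among completed spells (used to back out the value of unemployment).
print("Mean accepted wage:",
      spells.loc[spells.reemployed == 1, "accepted_wage"].mean())
```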
Abstract:
Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis uses blood oxygenation level dependent (BOLD) signals to investigate the spatial and temporal functional network information contained in resting-state data, examining the feasibility of extracting functional connectivity networks with different methods as well as the dynamic variability within some of those methods. Furthermore, this work looks into producing valid networks using a sparsely sampled subset of the original data.
In this work we use four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method comes with unique assumptions, as well as strengths and limitations, for exploring how the resting-state components interact in space and time.
Correlation is perhaps the simplest technique. With this approach, resting-state patterns are identified based on how similar each voxel's time profile is to a seed region's time profile. However, the method requires a seed region and can identify only one resting-state network at a time. This simple correlation technique is able to reproduce the resting-state network using data from a single subject's scan session as well as from 16 subjects.
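The seed-based correlation step can be illustrated with a minimal numpy sketch on synthetic data; this is not the thesis pipeline, and the array shapes, seed definition, and threshold are hypothetical.

```python
# Minimal sketch: correlate every voxel's time course with a seed region's mean
# time course to produce a connectivity map (synthetic data, hypothetical shapes).
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 240, 5000
bold = rng.standard_normal((n_timepoints, n_voxels))   # stand-in for preprocessed BOLD
seed_voxels = np.arange(50)                            # indices defining the seed region
# Inject a shared signal so part of the "brain" correlates with the seed.
shared = rng.standard_normal(n_timepoints)
bold[:, :500] += 0.8 * shared[:, None]

seed_ts = bold[:, seed_voxels].mean(axis=1)

def seed_correlation_map(data, seed):
    """Pearson correlation of each column of `data` with the seed time course."""
    d = data - data.mean(axis=0)
    s = seed - seed.mean()
    return (d * s[:, None]).sum(axis=0) / (
        np.sqrt((d**2).sum(axis=0)) * np.sqrt((s**2).sum()))

corr_map = seed_correlation_map(bold, seed_ts)
print("Voxels with r > 0.3:", int((corr_map > 0.3).sum()))
```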
Independent component analysis, the second technique, has established software implementations. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting-state networks it produces are all independent of each other, under the assumption that the spatial pattern of functional connectivity is the same across all time points. ICA successfully reproduces resting-state connectivity patterns for both a single subject and a 16-subject concatenated data set.
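As a minimal stand-in for those established packages, spatial ICA on a time-by-voxel BOLD matrix can be sketched with scikit-learn's FastICA; the matrix shape and number of components below are hypothetical.

```python
# Minimal sketch of spatial ICA on a time x voxel matrix (hypothetical shapes).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
bold = rng.standard_normal((240, 5000))          # time points x voxels

ica = FastICA(n_components=20, random_state=0, max_iter=500)
# Fitting on the transposed matrix yields spatially independent component maps.
spatial_maps = ica.fit_transform(bold.T).T       # components x voxels
time_courses = ica.mixing_                       # time points x components

print(spatial_maps.shape, time_courses.shape)    # (20, 5000) (240, 20)
```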
Using principal component analysis, the dimensionality of the data is reduced to find the directions in which the variance of the data is largest. This method relies on the same basic matrix math as ICA, with a few important differences that are outlined later in this text. With this method, different functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.
To begin to investigate the dynamics of functional connectivity, the correlation technique is used to compare the first and second halves of a scan session. Minor differences are discernible between the correlation results of the two halves. Further, a sliding-window technique is implemented to track the correlation coefficients over time for different window sizes. This technique shows that the correlation level with the seed region is not static over the length of the scan.
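The sliding-window idea itself is simple to sketch: recompute the seed correlation inside a moving window and watch the coupling drift over the scan. The window length and the synthetic signals below are hypothetical.

```python
# Minimal sliding-window correlation sketch (synthetic signals, hypothetical window).
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, window = 240, 40
t = np.arange(n_timepoints)
seed = rng.standard_normal(n_timepoints)
# A target region whose coupling with the seed waxes and wanes over the scan.
coupling = 0.5 * (1 + np.sin(2 * np.pi * t / 120))
target = coupling * seed + 0.5 * rng.standard_normal(n_timepoints)

windowed_r = np.array([
    np.corrcoef(seed[s:s + window], target[s:s + window])[0, 1]
    for s in range(n_timepoints - window + 1)])

print("min / max windowed correlation:",
      round(windowed_r.min(), 2), "/", round(windowed_r.max(), 2))
```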
The last method introduced, a point-process method, is one of the more novel techniques because it does not require analysis of the full, continuous time course. Here, network information is extracted based on brief occurrences of high- or low-amplitude signals within a seed region. Because point processing uses fewer time points from the data, the statistical power of the results is lower, and there are larger variations in default mode network (DMN) patterns between subjects. In addition to improved computational efficiency, a benefit of the point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
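The core of the point-process idea can be sketched as follows: keep only the frames at which the seed signal crosses a threshold, then average the whole-brain pattern over those events. The threshold and array shapes are hypothetical, not the thesis settings.

```python
# Minimal point-process sketch: average maps at high-amplitude seed events only.
import numpy as np

rng = np.random.default_rng(0)
bold = rng.standard_normal((240, 5000))          # time points x voxels
seed_ts = bold[:, :50].mean(axis=1)

z = (seed_ts - seed_ts.mean()) / seed_ts.std()
events = np.flatnonzero(z > 1.0)                 # frames with high seed amplitude

event_map = bold[events].mean(axis=0)            # average pattern at event frames
print(f"{events.size} of {bold.shape[0]} frames retained;",
      "map shape:", event_map.shape)
```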
This work compares four unique methods of identifying functional connectivity patterns. ICA is a technique currently used by many scientists studying functional connectivity patterns. The PCA technique is not optimal for the level of noise and the distribution of these data sets. The correlation technique is simple and obtains good results; however, a seed region is needed and the method assumes that the DMN regions are correlated throughout the entire scan. Looking at the more dynamic aspects of correlation, changing patterns of correlation were evident. The last, point-process method produces promising results, identifying functional connectivity networks using only low- and high-amplitude BOLD signals.
Abstract:
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks, and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents, and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications remains unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
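The sampling principle described above can be illustrated by simulating a CTMC to absorption: the chain hops between transient states at exponential rates until it reaches an absorbing (emission) state, and the absorption time is a phase-type sample. The rate matrix below is a hypothetical three-state example, not a fitted RET network.

```python
# Minimal sketch: sample a phase-type distribution as the absorption time of a CTMC
# (hypothetical rates; transient states 0-2, last column = absorbing emission state).
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([
    [0.0, 2.0, 0.5, 1.0],   # rates out of state 0
    [1.5, 0.0, 1.0, 0.5],   # rates out of state 1
    [0.2, 0.8, 0.0, 3.0],   # rates out of state 2
])

def sample_absorption_time(start=0):
    state, t = start, 0.0
    while True:
        out = rates[state]
        total = out.sum()
        t += rng.exponential(1.0 / total)              # exponential holding time
        nxt = rng.choice(len(out), p=out / total)      # embedded jump chain
        if nxt == rates.shape[1] - 1:                  # reached the absorbing state
            return t
        state = nxt

samples = np.array([sample_absorption_time() for _ in range(10000)])
print("mean / std of simulated emission times:",
      round(samples.mean(), 3), "/", round(samples.std(), 3))
```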
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.
Meanwhile, RET-based sampling units (RSU) can be constructed to accelerate probabilistic algorithms for wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor / GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.
Abstract:
© 2015, Institute of Mathematical Statistics. All rights reserved. In order to use persistence diagrams as a true statistical tool, it would be very useful to have a good notion of mean and variance for a set of diagrams. In [23], Mileyko and his collaborators made the first study of the properties of the Fréchet mean in (D