6 results for sequential exploitation
at Duke University
Abstract:
This paper describes a methodology for detecting anomalies from sequentially observed and potentially noisy data. The proposed approach consists of two main elements: 1) filtering, or assigning a belief or likelihood to each successive measurement based upon our ability to predict it from previous noisy observations and 2) hedging, or flagging potential anomalies by comparing the current belief against a time-varying and data-adaptive threshold. The threshold is adjusted based on the available feedback from an end user. Our algorithms, which combine universal prediction with recent work on online convex programming, do not require computing posterior distributions given all current observations and involve simple primal-dual parameter updates. At the heart of the proposed approach lie exponential-family models which can be used in a wide variety of contexts and applications, and which yield methods that achieve sublinear per-round regret against both static and slowly varying product distributions with marginals drawn from the same exponential family. Moreover, the regret against static distributions coincides with the minimax value of the corresponding online strongly convex game. We also prove bounds on the number of mistakes made during the hedging step relative to the best offline choice of the threshold with access to all estimated beliefs and feedback signals. We validate the theory on synthetic data drawn from a time-varying distribution over binary vectors of high dimensionality, as well as on the Enron email dataset. © 1963-2012 IEEE.
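The filtering-plus-hedging loop described in this abstract can be illustrated with a minimal sketch. The code below assumes a product-Bernoulli model over binary vectors (one member of the exponential family mentioned above), a simple smoothed running-mean predictor for the filtering step, and a crude feedback-driven threshold update for the hedging step. The function name and the specific update rule are illustrative assumptions, not the paper's actual algorithm or its primal-dual updates.

```python
import numpy as np

def anomaly_stream(observations, feedback, eta=0.1):
    """Hypothetical sketch: filter beliefs under a product-Bernoulli model,
    then hedge by flagging observations whose log-likelihood falls below an
    adaptively updated threshold. Illustrative only."""
    d = observations.shape[1]
    counts = np.ones(d)          # smoothed per-coordinate success counts
    total = 2.0                  # Laplace-style pseudo-counts
    tau = -d * np.log(2.0)       # initial log-likelihood threshold (uniform model)
    flags = []
    for t, x in enumerate(observations):
        p = counts / total                                  # predictive marginals
        loglik = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
        flag = loglik < tau                                 # hedging: belief vs. threshold
        flags.append(flag)
        # adjust the threshold only when end-user feedback disagrees with the flag
        if feedback[t] is not None:
            if feedback[t] == 1 and not flag:               # missed anomaly: raise threshold
                tau += eta
            elif feedback[t] == 0 and flag:                 # false alarm: lower threshold
                tau -= eta
        # filtering update of the product-Bernoulli model
        counts += x
        total += 1.0
    return np.array(flags)
```

In this sketch the threshold moves only when feedback contradicts the current flag, a simplified stand-in for the data-adaptive, feedback-driven threshold with the regret and mistake bounds described in the abstract.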
Abstract:
A popular way to account for unobserved heterogeneity is to assume that the data are drawn from a finite mixture distribution. A barrier to using finite mixture models is that parameters that could previously be estimated in stages must now be estimated jointly: using mixture distributions destroys any additive separability of the log-likelihood function. We show, however, that an extension of the EM algorithm reintroduces additive separability, thus allowing one to estimate parameters sequentially during each maximization step. In establishing this result, we develop a broad class of estimators for mixture models. Returning to the likelihood problem, we show that, relative to full information maximum likelihood, our sequential estimator can generate large computational savings with little loss of efficiency.
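To make the additive-separability point concrete, the sketch below runs EM for a two-component Gaussian mixture in which the M-step is carried out in stages, updating one parameter block at a time rather than all parameters jointly. This is an illustrative stand-in under assumed model and naming choices, not the authors' sequential estimator.

```python
import numpy as np

def sequential_em(x, n_iter=50):
    """Hypothetical sketch of EM for a two-component Gaussian mixture where the
    M-step is performed sequentially, block by block, exploiting the additive
    separability of the expected complete-data log-likelihood."""
    mu = np.array([x.min(), x.max()], dtype=float)      # initial component means
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities (normalizing constant cancels across components)
        dens = np.stack([
            pi[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2) / sigma[k]
            for k in range(2)
        ])
        resp = dens / dens.sum(axis=0)
        # M-step in stages: each component's parameters are maximized on their own,
        # because the expected log-likelihood separates additively across blocks.
        for k in range(2):
            w = resp[k]
            mu[k] = np.sum(w * x) / np.sum(w)
            sigma[k] = np.sqrt(np.sum(w * (x - mu[k]) ** 2) / np.sum(w)) + 1e-6
        pi = resp.sum(axis=1) / len(x)                   # final stage: mixing weights
    return mu, sigma, pi
```

Each pass through the M-step touches one block at a time, which is the computational benefit the abstract attributes to the sequential estimator relative to full joint maximization.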
Abstract:
We conduct the first empirical investigation of common-pool resource users' dynamic and strategic behavior at the micro level using real-world data. Fishermen's strategies in a fully dynamic game account for latent resource dynamics and other players' actions, revealing the profit structure of the fishery. We compare the fishermen's actual and socially optimal exploitation paths under a time-specific vessel allocation policy and find a sizable dynamic externality. Individual fishermen respond to other users by exerting effort above the optimal level early in the season. Congestion is costly instantaneously but is beneficial in the long run because it partially offsets dynamic inefficiencies.
Abstract:
BACKGROUND: Some of the 600,000 patients with solid organ allotransplants need reconstruction with a composite tissue allotransplant, such as the hand, abdominal wall, or face. The aim of this study was to develop a rat model for assessing the effects of a secondary composite tissue allotransplant on a primary heart allotransplant. METHODS: Hearts of Wistar Kyoto rats were harvested and transplanted heterotopically to the neck of recipient Fisher 344 rats. The anastomoses were performed between the donor brachiocephalic artery and the recipient left common carotid artery, and between the donor pulmonary artery and the recipient external jugular vein. Recipients received cyclosporine A for 10 days only. Heart rate was assessed noninvasively. The sequential composite tissue allotransplant consisted of a 3 x 3-cm abdominal musculocutaneous flap harvested from Lewis rats and transplanted to the abdomen of the heart allotransplant recipients. The abdominal flap vessels were connected to the femoral vessels. No further immunosuppression was administered following the composite tissue allotransplant. Ten days after composite tissue allotransplantation, rejection of the heart and abdominal flap was assessed histologically. RESULTS: The rat survival rate of the two-stage transplant surgery was 80 percent. The transplanted heart rate decreased from 150 +/- 22 beats per minute immediately after transplant to 83 +/- 12 beats per minute on day 20 (10 days after stopping immunosuppression). CONCLUSIONS: This sequential allotransplant model is technically demanding. It will facilitate investigation of the effects of a secondary composite tissue allotransplant following primary solid organ transplantation and could be useful in developing future immunotherapeutic strategies.
Abstract:
This work addresses advances in three related areas: state-space modeling, sequential Bayesian learning, and decision analysis, together with the statistical challenges of scalability and associated dynamic sparsity. The key theme that ties the three areas together is Bayesian model emulation: solving challenging analytical and computational problems using creative model emulators. This idea drives theoretical and applied advances in non-linear, non-Gaussian state-space modeling, dynamic sparsity, decision analysis, and statistical computation, across linked contexts of multivariate time series and dynamic network studies. Examples and applications in financial time series and portfolio analysis, macroeconomics, and internet studies from computational advertising demonstrate the utility of the core methodological innovations.
Chapter 1 summarizes the three areas and the key idea of emulation in each of them. Chapter 2 discusses the sequential analysis of latent threshold models using emulating models that allow for analytical filtering, enhancing the efficiency of posterior sampling. Chapter 3 examines the emulator model in decision analysis, a synthetic model equivalent to the loss function in the original minimization problem, and demonstrates its performance in the context of sequential portfolio optimization. Chapter 4 describes a method for modeling streaming count data observed on a large network that relies on emulating the whole, dependent network model by independent, conjugate sub-models customized to each flow. Chapter 5 reviews these advances and offers concluding remarks.
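As one concrete reading of the decoupling idea summarized for Chapter 4, the sketch below treats each network flow independently with a conjugate Gamma-Poisson filter and a discount factor to allow slow drift in the rate. The model choice, discount mechanism, and class name are assumptions made for illustration, not the dissertation's actual emulator.

```python
class FlowEmulator:
    """Hypothetical conjugate Gamma-Poisson filter for a single network flow.

    The dependent whole-network model is emulated by running one such
    independent sub-model per flow; a discount factor inflates the prior
    uncertainty each step so the flow rate can drift slowly over time."""

    def __init__(self, a0=1.0, b0=1.0, discount=0.95):
        self.a, self.b = a0, b0          # Gamma(a, b) prior on the Poisson rate
        self.discount = discount

    def update(self, count):
        # discount step: forget a little of the past before observing new data
        self.a *= self.discount
        self.b *= self.discount
        # conjugate update with the newly observed count
        self.a += count
        self.b += 1.0
        return self.a / self.b           # posterior mean rate for this flow

# one independent, conjugate sub-model per flow in the network (hypothetical keys)
flows = {("nodeA", "nodeB"): FlowEmulator(), ("nodeB", "nodeC"): FlowEmulator()}
stream = [(("nodeA", "nodeB"), 3), (("nodeB", "nodeC"), 0), (("nodeA", "nodeB"), 5)]
for key, count in stream:
    rate = flows[key].update(count)
```

Because each sub-model is conjugate, the per-flow update is closed form and constant time, which is what makes this style of emulation attractive for streaming counts on large networks.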