16 results for Uniformly Convex
at Duke University
Abstract:
This paper describes a methodology for detecting anomalies from sequentially observed and potentially noisy data. The proposed approach consists of two main elements: 1) filtering, or assigning a belief or likelihood to each successive measurement based upon our ability to predict it from previous noisy observations and 2) hedging, or flagging potential anomalies by comparing the current belief against a time-varying and data-adaptive threshold. The threshold is adjusted based on the available feedback from an end user. Our algorithms, which combine universal prediction with recent work on online convex programming, do not require computing posterior distributions given all current observations and involve simple primal-dual parameter updates. At the heart of the proposed approach lie exponential-family models which can be used in a wide variety of contexts and applications, and which yield methods that achieve sublinear per-round regret against both static and slowly varying product distributions with marginals drawn from the same exponential family. Moreover, the regret against static distributions coincides with the minimax value of the corresponding online strongly convex game. We also prove bounds on the number of mistakes made during the hedging step relative to the best offline choice of the threshold with access to all estimated beliefs and feedback signals. We validate the theory on synthetic data drawn from a time-varying distribution over binary vectors of high dimensionality, as well as on the Enron email dataset. © 1963-2012 IEEE.
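A minimal sketch of the filtering-and-hedging loop described above, assuming a Bernoulli product (exponential-family) model over binary vectors, a belief defined as the geometric mean of per-coordinate likelihoods, and a simple additive threshold update driven by end-user feedback; the function detect_anomalies and its parameters are illustrative placeholders, not the paper's algorithm.

```python
import numpy as np

def detect_anomalies(X, feedback, eta=0.1, tau0=0.4):
    """Filter-and-hedge anomaly detection on binary vectors (illustrative sketch).

    X        : (T, d) array of binary observations
    feedback : dict mapping round t -> +1 (missed anomaly) / -1 (false alarm)
    eta      : learning rate for the model and threshold updates
    tau0     : initial threshold on the per-round belief
    """
    T, d = X.shape
    p = np.full(d, 0.5)           # running Bernoulli means (the "filtering" state)
    tau = tau0                    # data-adaptive threshold (the "hedging" state)
    flags = []
    for t in range(T):
        x = X[t]
        # Belief: geometric mean of per-coordinate likelihoods of x under the current model.
        loglik = x * np.log(p + 1e-12) + (1 - x) * np.log(1 - p + 1e-12)
        belief = np.exp(loglik.mean())
        flags.append(belief < tau)
        # Hedging: move the threshold using any feedback the end user provides.
        if t in feedback:
            # +1 (missed anomaly) raises the threshold; -1 (false alarm) lowers it.
            tau = float(np.clip(tau + eta * feedback[t], 1e-3, 1.0))
        # Filtering: slowly track the (possibly time-varying) Bernoulli means.
        p = (1 - eta) * p + eta * x
    return np.array(flags)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.binomial(1, 0.2, size=(200, 50))
    X[100] = rng.binomial(1, 0.9, size=50)     # planted anomaly
    print(detect_anomalies(X, feedback={}).nonzero()[0])
```

On this toy stream, only the planted round should be flagged once the running means have adapted to the nominal rate.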
A mathematical theory of stochastic microlensing. II. Random images, shear, and the Kac-Rice formula
Abstract:
Continuing our development of a mathematical theory of stochastic microlensing, we study the random shear and expected number of random lensed images of different types. In particular, we characterize the first three leading terms in the asymptotic expression of the joint probability density function (pdf) of the random shear tensor due to point masses in the limit of an infinite number of stars. Up to this order, the pdf depends on the magnitude of the shear tensor, the optical depth, and the mean number of stars through a combination of radial position and the star's mass. As a consequence, the pdf's of the shear components are seen to converge, in the limit of an infinite number of stars, to shifted Cauchy distributions, which shows that the shear components have heavy tails in that limit. The asymptotic pdf of the shear magnitude in the limit of an infinite number of stars is also presented. All the results on the random microlensing shear are given for a general point in the lens plane. Extending to the general random distributions (not necessarily uniform) of the lenses, we employ the Kac-Rice formula and Morse theory to deduce general formulas for the expected total number of images and the expected number of saddle images. We further generalize these results by considering random sources defined on a countable compact covering of the light source plane. This is done to introduce the notion of global expected number of positive parity images due to a general lensing map. Applying the result to microlensing, we calculate the asymptotic global expected number of minimum images in the limit of an infinite number of stars, where the stars are uniformly distributed. This global expectation is bounded, while the global expected number of images and the global expected number of saddle images diverge as the order of the number of stars. © 2009 American Institute of Physics.
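For reference, a shifted (location-scale) Cauchy density has the standard form below; the specific location and scale obtained in the paper depend on the optical depth and the stellar mass distribution, so the symbols here are generic placeholders.

```latex
f(x) \;=\; \frac{1}{\pi}\,\frac{\gamma}{(x - x_0)^2 + \gamma^2},
\qquad x \in \mathbb{R},\; \gamma > 0,
```

with location $x_0$ and scale $\gamma$; the mean and all higher moments fail to exist, which is the precise sense in which the limiting shear components are heavy-tailed.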
Abstract:
Here we show that the configuration of a slender enclosure can be optimized such that the radiation heating of a stream of solid is performed with minimal fuel consumption at the global level. The solid moves longitudinally at constant rate through the enclosure. The enclosure is heated by gas burners distributed arbitrarily, in a manner that is to be determined. The total contact area for heat transfer between the hot enclosure and the cold solid is fixed. We find that minimal global fuel consumption is achieved when the longitudinal distribution of heaters is nonuniform, with more heaters near the exit than near the entrance. The reduction in fuel consumption relative to when the heaters are distributed uniformly is of order 10%. Tapering the plan view (the floor) of the heating area yields an additional reduction in overall fuel consumption. The best shape is when the floor area is a slender triangle on which the cold solid enters by crossing the base. These architectural features support the proposal to organize the flow of the solid as a dendritic design, which enters as several branches and exits as a single hot stream of prescribed temperature. The thermodynamics of heating (exergy destruction, entropy generation) is presented in modern terms. The contribution is to show that optimizing "thermodynamically" is the same as reducing the consumption of fuel. © 2010 American Institute of Physics.
Abstract:
We apply a coded aperture snapshot spectral imager (CASSI) to fluorescence microscopy. CASSI records a two-dimensional (2D) spectrally filtered projection of a three-dimensional (3D) spectral data cube. We minimize a convex quadratic function with total variation (TV) constraints for data cube estimation from the 2D snapshot. We adapt the TV minimization algorithm for direct fluorescent bead identification from CASSI measurements by combining a priori knowledge of the spectra associated with each bead type. Our proposed method creates a 2D bead identity image. Simulated fluorescence CASSI measurements are used to evaluate the behavior of the algorithm. We also record real CASSI measurements of a ten-bead-type fluorescence scene and create a 2D bead identity map. A baseline image from a filtered-array imaging system verifies CASSI's 2D bead identity map.
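A minimal sketch of a TV-regularized quadratic reconstruction of the kind described above, assuming a generic random linear measurement operator A, a smoothed anisotropic TV penalty, and plain gradient descent; the paper's constrained formulation, system matrix, and solver differ, and all names and parameters here are illustrative.

```python
import numpy as np

def tv_grad(x, eps=1e-2):
    """Gradient of a smoothed anisotropic total-variation penalty."""
    dxh = np.diff(x, axis=1)                      # horizontal differences
    dxv = np.diff(x, axis=0)                      # vertical differences
    wh = dxh / np.sqrt(dxh**2 + eps)
    wv = dxv / np.sqrt(dxv**2 + eps)
    g = np.zeros_like(x)
    g[:, :-1] -= wh; g[:, 1:] += wh               # adjoint of horizontal difference
    g[:-1, :] -= wv; g[1:, :] += wv               # adjoint of vertical difference
    return g

def tv_reconstruct(A, y, shape, lam=0.05, step=0.05, iters=1000):
    """Minimize 0.5*||A x - y||^2 + lam*TV(x) by gradient descent (sketch)."""
    x = np.zeros(shape)
    for _ in range(iters):
        resid = A @ x.ravel() - y
        grad = (A.T @ resid).reshape(shape) + lam * tv_grad(x)
        x -= step * grad
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = np.zeros((32, 32)); truth[8:24, 8:24] = 1.0     # piecewise-constant scene
    A = rng.standard_normal((400, 32 * 32)) / np.sqrt(400)  # compressive measurements
    y = A @ truth.ravel() + 0.01 * rng.standard_normal(400)
    rec = tv_reconstruct(A, y, truth.shape)
    print("relative error:", np.linalg.norm(rec - truth) / np.linalg.norm(truth))
```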
Abstract:
We describe an active millimeter-wave holographic imaging system that uses compressive measurements for three-dimensional (3D) tomographic object estimation. Our system records a two-dimensional (2D) digitized Gabor hologram by translating a single-pixel incoherent receiver. Two approaches for compressive measurement are undertaken: nonlinear inversion of a 2D Gabor hologram for 3D object estimation, and nonlinear inversion of a randomly subsampled Gabor hologram for 3D object estimation. The estimation algorithm minimizes a convex quadratic objective with total variation (TV) regularization. We compare object reconstructions using linear backpropagation and TV minimization, and we present simulated and experimental reconstructions from both compressive measurement strategies. In contrast with backpropagation, which estimates the 3D electromagnetic field, TV minimization estimates the 3D object that produces the field. Despite undersampling, range resolution is consistent with the extent of the 3D object band volume.
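A minimal sketch of the linear backpropagation baseline mentioned above, using the standard angular-spectrum (free-space transfer function) method to propagate a recorded complex field back to a chosen range plane; the wavelength, pixel pitch, and distance are arbitrary placeholders and this is not the authors' reconstruction code.

```python
import numpy as np

def backpropagate(hologram, wavelength, pixel_pitch, z):
    """Propagate a complex 2D field back by distance z via the angular-spectrum method."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    # Keep propagating components only; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = k * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(-1j * kz * z) * (arg > 0)          # backpropagation transfer function
    return np.fft.ifft2(np.fft.fft2(hologram) * H)

if __name__ == "__main__":
    # Toy example: ~3 mm wavelength (~100 GHz), 1 mm pixels, 64x64 aperture, 10 cm range.
    rng = np.random.default_rng(0)
    field = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
    obj_plane = backpropagate(field, wavelength=3e-3, pixel_pitch=1e-3, z=0.10)
    print(obj_plane.shape, np.abs(obj_plane).max())
```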
Abstract:
We present a mathematical analysis of the asymptotic preserving scheme proposed in [M. Lemou and L. Mieussens, SIAM J. Sci. Comput., 31 (2008), pp. 334-368] for linear transport equations in kinetic and diffusive regimes. We prove that the scheme is uniformly stable and accurate with respect to the mean free path of the particles. This property is satisfied under an explicitly given CFL condition. This condition tends to a parabolic CFL condition for small mean free paths and is close to a convection CFL condition for large mean free paths. Our analysis is based on very simple energy estimates. © 2010 Society for Industrial and Applied Mathematics.
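Schematically, the two limits correspond to the familiar parabolic and convection-type CFL restrictions; the scheme-specific constants and the exact uniform condition are derived in the paper, so the expressions below only indicate the generic scalings (with an effective diffusivity $\kappa$ and a characteristic transport speed $v$ as placeholder symbols).

```latex
\Delta t \;\lesssim\; C_1\,\frac{\Delta x^{2}}{\kappa}
\quad \text{(parabolic limit, small mean free path)},
\qquad
\Delta t \;\lesssim\; C_2\,\frac{\Delta x}{v}
\quad \text{(convection limit, large mean free path)}.
```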
Abstract:
In this paper, we propose a framework for robust optimization that relaxes the standard notion of robustness by allowing the decision maker to vary the protection level in a smooth way across the uncertainty set. We apply our approach to the problem of maximizing the expected value of a payoff function when the underlying distribution is ambiguous and therefore robustness is relevant. Our primary objective is to develop this framework and relate it to the standard notion of robustness, which deals with only a single guarantee across one uncertainty set. First, we show that our approach connects closely to the theory of convex risk measures. We show that the complexity of this approach is equivalent to that of solving a small number of standard robust problems. We then investigate the conservatism benefits and downside probability guarantees implied by this approach and compare them to the standard robust approach. Finally, we illustrate the methodology on an asset allocation example consisting of historical market data over a 25-year investment horizon and find, in every case we explore, that relaxing standard robustness with soft robustness yields a seemingly favorable risk-return trade-off: each case results in a higher out-of-sample expected return for a relatively minor degradation of out-of-sample downside performance. © 2010 INFORMS.
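As a concrete instance of the convex risk measures this framework connects to, the sketch below evaluates the entropic risk of an empirical payoff sample; by convex duality it equals a worst-case expected loss over alternative distributions penalized by their relative entropy to the nominal one. This is a standard textbook example, not the paper's specific soft-robust construction, and the parameter names are illustrative.

```python
import numpy as np

def entropic_risk(payoffs, theta):
    """Entropic risk rho(X) = (1/theta) * log E[exp(-theta * X)] for empirical payoffs."""
    # Subtract the max for numerical stability (log-sum-exp trick).
    z = -theta * payoffs
    zmax = z.max()
    return (zmax + np.log(np.mean(np.exp(z - zmax)))) / theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    payoffs = rng.normal(loc=0.05, scale=0.2, size=100_000)   # simulated portfolio returns
    for theta in (0.5, 2.0, 10.0):
        # Larger theta -> more ambiguity aversion -> a more conservative risk value.
        print(theta, round(entropic_risk(payoffs, theta), 4))
```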
Abstract:
The spatial variability of aerosol number and mass along roads was determined in different regions (urban, rural, and coastal-marine) of the Netherlands. A condensation particle counter (CPC) and an optical aerosol spectrometer (LAS-X) were installed in a van along with a global positioning system (GPS). Concentrations were measured at high time resolution while driving, allowing investigations not possible with stationary equipment. In particular, this approach proves useful for identifying locations where number and mass concentrations attain high levels ('hot spots'). In general, concentrations of particulate number and mass increase with the degree of urbanisation, with number concentration being the more sensitive indicator. The lowest particle numbers and PM1 concentrations are encountered in coastal and rural areas: <5000 cm⁻³ and 6 μg m⁻³, respectively. The presence of sea-salt material along the North Sea coast enhances PM>1 concentrations compared to inland levels. High particle numbers are encountered on motorways, correlating with traffic intensity; the largest average number concentration is measured on the ring motorway around Amsterdam: about 160,000 cm⁻³ (traffic intensity 100,000 veh day⁻¹). Peak values occur in tunnels, where numbers exceed 10⁶ cm⁻³. Enhanced PM1 levels (i.e. larger than 9 μg m⁻³) exist on motorways, major traffic roads, and in tunnels. The concentrations of PM>1 appear rather uniformly distributed (below 6 μg m⁻³ for most observations). On the urban scale, (large) spatial variations in concentration can be explained by varying traffic intensities and driving patterns. The highest particle numbers are measured while in traffic congestion or behind a heavy diesel-driven vehicle (up to 600×10³ cm⁻³). Relatively high numbers are observed during the passage of crossings and, at a decreasing rate, on main roads with much traffic, quiet streets, and residential areas with limited traffic. The number concentration exhibits a larger variability than mass: the mass concentration on city roads with much traffic is 12% higher than in a residential area at the edge of the same city, while the number of particles changes by a factor of two (due to the presence of ultrafine particles, aerodynamic diameter <100 nm). It is further indicated that people residing some 100 m downwind of a major traffic source are exposed to (still) 40% more particles than those living in urban background areas. © 2004 Elsevier Ltd. All rights reserved.
Abstract:
The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic spectrum, cone beam breast imaging system under scatter-corrected and non-scatter-corrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8 mm) based on their polar position. The breast phantom was filled with 3 different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0 g/cc); acrylic yarn was sometimes included to simulate the connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, followed by a human observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter-corrected versus non-scatter-corrected images for all densities. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.
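A minimal sketch of the scatter subtraction and the SNR/contrast figures of merit referred to above, assuming the measured 2D scatter estimate is simply subtracted from each projection before reconstruction; the actual acquisition and the iterative ordered subsets convex reconstruction are more involved, and all array shapes and values here are placeholders.

```python
import numpy as np

def scatter_correct(projections, scatter_estimates):
    """Subtract a measured 2D scatter estimate from each projection, clipping at zero."""
    return np.clip(projections - scatter_estimates, 0.0, None)

def snr_and_contrast(image, roi_mask, background_mask):
    """Simple figures of merit on a reconstructed slice."""
    roi, bg = image[roi_mask], image[background_mask]
    snr = roi.mean() / bg.std()
    contrast = (roi.mean() - bg.mean()) / bg.mean()
    return snr, contrast

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    proj = rng.poisson(200.0, size=(360, 128, 128)).astype(float)   # toy projections
    scatter = np.full((360, 128, 128), 40.0)                        # toy scatter estimate
    corrected = scatter_correct(proj, scatter)
    slice_ = corrected.mean(axis=0)                                 # stand-in for a reconstructed slice
    roi = np.zeros_like(slice_, dtype=bool); roi[60:68, 60:68] = True
    bg = np.zeros_like(slice_, dtype=bool); bg[5:20, 5:20] = True
    print(snr_and_contrast(slice_, roi, bg))
```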
Abstract:
OBJECTIVE: Bacterial colonization of the fetal membranes and its role in the pathogenesis of membrane rupture is poorly understood. Prior retrospective work revealed chorion layer thinning in preterm premature rupture of membranes (PPROM) subjects. Our objective was to prospectively examine fetal membrane chorion thinning and to correlate it with bacterial presence in PPROM, preterm, and term subjects. STUDY DESIGN: Paired membrane samples (membrane rupture site and membrane distant site) were prospectively collected from PPROM (n = 14), preterm labor (PTL, n = 8), preterm no labor (PTNL, n = 8), term labor (TL, n = 10), and term no labor (TNL, n = 8) subjects. Sections were probed with cytokeratin to identify the fetal trophoblast layer of the chorion using immunohistochemistry. Fluorescence in situ hybridization was performed using a broad-range 16S ribosomal RNA probe. Images were evaluated, chorion and choriodecidua were measured, and bacterial fluorescence was scored. Chorion thinning and bacterial presence were compared among and between groups using Student's t-test, a linear mixed effect model, and a Poisson regression model (SAS, Cary, NC). RESULTS: In all groups, the fetal chorion cellular layer was thinner at the rupture site compared to the distant site (147.2 vs. 253.7 µm, p<0.0001). Further, chorion thinning was greatest among PPROM subjects compared to all other groups combined, regardless of site sampled [PPROM (114.9) vs. PTL (246.0) vs. PTNL (200.8) vs. TL (217.9) vs. TNL (246.5)]. Bacterial counts were highest among PPROM subjects compared to all other groups regardless of site sampled or histologic infection [PPROM (31) vs. PTL (9) vs. PTNL (7) vs. TL (7) vs. TNL (6)]. Among all subjects at both sites, bacterial counts were inversely correlated with chorion thinning, even excluding histologic chorioamnionitis (p<0.0001 and p = 0.05). CONCLUSIONS: Fetal chorion was uniformly thinner at the rupture site compared to distant sites. In PPROM fetal chorion, we demonstrated pronounced global thinning. Although cause or consequence is uncertain, bacterial presence is greatest and inversely correlated with chorion thinning among PPROM subjects.
Abstract:
Human embryonic stem cell-derived cardiomyocytes (hESC-CMs) provide a promising source for cell therapy and drug screening. Several high-yield protocols exist for hESC-CM production; however, methods to significantly advance hESC-CM maturation are still lacking. Building on our previous experience with mouse ESC-CMs, we investigated the effects of a 3-dimensional (3D) tissue-engineered culture environment and cardiomyocyte purity on structural and functional maturation of hESC-CMs. 2D monolayer and 3D fibrin-based cardiac patch cultures were generated using dissociated cells from differentiated Hes2 embryoid bodies containing a varying percentage (48-90%) of CD172a (SIRPA)-positive cardiomyocytes. hESC-CMs within the patch were aligned uniformly by locally controlling the direction of passive tension. Compared to hESC-CMs in age- (2 weeks) and purity- (48-65%) matched 2D monolayers, hESC-CMs in 3D patches exhibited significantly higher conduction velocities (CVs), longer sarcomeres (2.09 ± 0.02 vs. 1.77 ± 0.01 μm), and enhanced expression of genes involved in cardiac contractile function, including cTnT, αMHC, CASQ2 and SERCA2. The CVs in cardiac patches increased with cardiomyocyte purity, reaching 25.1 cm/s in patches constructed with 90% hESC-CMs. Maximum contractile force amplitudes and active stresses of cardiac patches averaged 3.0 ± 1.1 mN and 11.8 ± 4.5 mN/mm(2), respectively. Moreover, contractile force per input cardiomyocyte averaged 5.7 ± 1.1 nN/cell and showed a negative correlation with hESC-CM purity. Finally, patches exhibited significant positive inotropy with isoproterenol administration (1.7 ± 0.3-fold force increase, EC50 = 95.1 nM). These results demonstrate highly advanced levels of hESC-CM maturation after 2 weeks of 3D cardiac patch culture and carry important implications for future drug development and cell therapy studies.
Abstract:
PURPOSE: A projection onto convex sets reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE) is developed to reduce motion-related artifacts, including respiration artifacts in abdominal imaging and aliasing artifacts in interleaved diffusion-weighted imaging. THEORY: Images with reduced artifacts are reconstructed with an iterative projection onto convex sets (POCS) procedure that uses the coil sensitivity profile as a constraint. This method can be applied to data obtained with different pulse sequences and k-space trajectories. In addition, various constraints can be incorporated to stabilize the reconstruction of ill-conditioned matrices. METHODS: The POCSMUSE technique was applied to abdominal fast spin-echo imaging data, and its effectiveness in respiratory-triggered scans was evaluated. The POCSMUSE method was also applied to reduce aliasing artifacts due to shot-to-shot phase variations in interleaved diffusion-weighted imaging data corresponding to different k-space trajectories and matrix condition numbers. RESULTS: Experimental results show that the POCSMUSE technique can effectively reduce motion-related artifacts in data obtained with different pulse sequences, k-space trajectories and contrasts. CONCLUSION: POCSMUSE is a general post-processing algorithm for reduction of motion-related artifacts. It is compatible with different pulse sequences, and can also be used to further reduce residual artifacts in data produced by existing motion artifact reduction methods.
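A minimal illustration of the projection-onto-convex-sets (POCS) iteration structure, alternating projections onto two simple convex sets (an affine data-consistency set and the nonnegative orthant); the constraints actually used by POCSMUSE (coil sensitivity profiles and shot-to-shot phase consistency) are different, and this sketch only shows how the iterative projections compose.

```python
import numpy as np

def pocs(A, b, iters=200):
    """Alternate projections onto {x : A x = b} and {x : x >= 0}."""
    # Pseudoinverse used for the affine (data-consistency) projection.
    A_pinv = np.linalg.pinv(A)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + A_pinv @ (b - A @ x)   # project onto the affine set
        x = np.maximum(x, 0.0)         # project onto the nonnegative orthant
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_true = np.abs(rng.standard_normal(50))
    A = rng.standard_normal((30, 50))
    b = A @ x_true
    x_hat = pocs(A, b)
    print("constraint residual:", np.linalg.norm(A @ x_hat - b))
    print("min entry (should be >= 0):", x_hat.min())
```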
Abstract:
BACKGROUND: Automated reporting of estimated glomerular filtration rate (eGFR) is a recent advance in laboratory information technology (IT) that generates a measure of kidney function with chemistry laboratory results to aid early detection of chronic kidney disease (CKD). Because accurate diagnosis of CKD is critical to optimal medical decision-making, several clinical practice guidelines have recommended the use of automated eGFR reporting. Since its introduction, automated eGFR reporting has not been uniformly implemented by U. S. laboratories despite the growing prevalence of CKD. CKD is highly prevalent within the Veterans Health Administration (VHA), and implementation of automated eGFR reporting within this integrated healthcare system has the potential to improve care. In July 2004, the VHA adopted automated eGFR reporting through a system-wide mandate for software implementation by individual VHA laboratories. This study examines the timing of software implementation by individual VHA laboratories and factors associated with implementation. METHODS: We performed a retrospective observational study of laboratories in VHA facilities from July 2004 to September 2009. Using laboratory data, we identified the status of implementation of automated eGFR reporting for each facility and the time to actual implementation from the date the VHA adopted its policy for automated eGFR reporting. Using survey and administrative data, we assessed facility organizational characteristics associated with implementation of automated eGFR reporting via bivariate analyses. RESULTS: Of 104 VHA laboratories, 88% implemented automated eGFR reporting in existing laboratory IT systems by the end of the study period. Time to initial implementation ranged from 0.2 to 4.0 years with a median of 1.8 years. All VHA facilities with on-site dialysis units implemented the eGFR software (52%, p<0.001). Other organizational characteristics were not statistically significant. CONCLUSIONS: The VHA did not have uniform implementation of automated eGFR reporting across its facilities. Facility-level organizational characteristics were not associated with implementation, and this suggests that decisions for implementation of this software are not related to facility-level quality improvement measures. Additional studies on implementation of laboratory IT, such as automated eGFR reporting, could identify factors that are related to more timely implementation and lead to better healthcare delivery.
Abstract:
Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges in scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.
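For context, the flow time of a job is its completion time minus its release time. The toy sketch below computes total flow time on a single machine under the classical shortest-remaining-processing-time (SRPT) rule; it is only meant to make the objective concrete and is not one of the algorithms developed in the thesis.

```python
import heapq

def total_flow_time_srpt(jobs):
    """Total flow time on one machine under SRPT.

    jobs: list of (release_time, processing_time) tuples.
    """
    jobs = sorted(jobs)                       # by release time
    t, i, total = 0.0, 0, 0.0
    pending = []                              # heap of (remaining_time, release_time)
    while i < len(jobs) or pending:
        if not pending and t < jobs[i][0]:
            t = jobs[i][0]                    # idle until the next release
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(pending, (jobs[i][1], jobs[i][0]))
            i += 1
        rem, rel = heapq.heappop(pending)
        # Run the shortest pending job until it finishes or the next job arrives.
        next_release = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, next_release - t)
        t += run
        if run < rem:
            heapq.heappush(pending, (rem - run, rel))
        else:
            total += t - rel                  # completion time minus release time
    return total

if __name__ == "__main__":
    print(total_flow_time_srpt([(0, 3), (1, 1), (2, 4)]))   # prints 11.0
```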
The technically interesting aspect of our work lies in the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.
The main contributions of the thesis can be placed in one of the following categories.
1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting. In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces an iterated rounding technique for offline flow-time optimization and gives the first framework for analyzing non-clairvoyant algorithms for unrelated machines.
2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP and its variants for the objectives of minimizing the flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and the queuing-theoretic notion of stability and resource augmentation analysis.
3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.
4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first linear/convex programming duality-based framework to bound the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.
Abstract:
OBJECTIVES: Two factors have been considered important contributors to tooth wear: dietary abrasives in plant foods themselves and mineral particles adhering to ingested food. Each factor limits the functional life of teeth. Cross-population studies of wear rates in a single species living in different habitats may point to the relative contributions of each factor. MATERIALS AND METHODS: We examine macroscopic dental wear in populations of Alouatta palliata (Gray, 1849) from Costa Rica (115 specimens), Panama (19), and Nicaragua (56). The sites differ in mean annual precipitation, with the Panamanian sites receiving more than twice the precipitation of those in Costa Rica or Nicaragua (∼3,500 mm vs. ∼1,500 mm). Additionally, many of the Nicaraguan specimens were collected downwind of active Plinian volcanoes. Molar wear is expressed as the ratio of exposed dentin area to tooth area; premolar wear was scored using a ranking system. RESULTS: Despite substantial variation in environmental variables and the added presence of ash in some environments, molar wear rates do not differ significantly among the populations. Premolar wear, however, is greater in individuals collected downwind from active volcanoes compared with those living in environments that did not experience ash-fall. DISCUSSION: Volcanic ash seems to be an important contributor to anterior tooth wear but less so to molar wear. That wear is not found uniformly across the tooth row may be related to malformation in the premolars due to fluorosis. A surge of fluoride accompanying the volcanic ash may differentially affect the premolars as the molars fully mineralize early in the life of Alouatta.