881 results for partial-state estimation


Relevance: 30.00%

Abstract:

We present a novel approach to the reconstruction of depth from light field data. Our method uses dictionary representations and group sparsity constraints to derive a convex formulation. Although our solution results in an increase in problem dimensionality, we keep numerical complexity at bay by restricting the space of solutions and by exploiting an efficient primal-dual formulation. Comparisons with state-of-the-art techniques, on both synthetic and real data, show promising performance.
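
As a rough illustration of the group-sparsity ingredient mentioned above (not the authors' actual formulation), the sketch below implements the proximal operator of a group-lasso penalty, the building block typically applied inside a primal-dual solver such as Chambolle-Pock; the grouping of dictionary coefficients is a hypothetical example.

```python
import numpy as np

def prox_group_l2(x, groups, tau):
    """Proximal operator of tau * sum_g ||x_g||_2 (group soft-thresholding).

    x      : 1-D coefficient vector
    groups : list of index arrays, one per group
    tau    : threshold (step size times regularization weight)
    """
    out = x.copy()
    for g in groups:
        norm = np.linalg.norm(x[g])
        # shrink the whole group toward zero; zero it out if its norm <= tau
        scale = max(0.0, 1.0 - tau / norm) if norm > 0 else 0.0
        out[g] = scale * x[g]
    return out

# toy usage: 12 dictionary coefficients split into 3 hypothetical groups
coeffs = np.random.randn(12)
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
shrunk = prox_group_l2(coeffs, groups, tau=0.5)
```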

Relevance: 30.00%

Abstract:

In this paper, we propose a new method for fully automatic landmark detection and shape segmentation in X-ray images. To detect landmarks, we estimate the displacements from randomly sampled image patches to the (unknown) landmark positions and then integrate these predictions via a voting scheme. Our key contribution is a new algorithm for estimating these displacements. Unlike other methods, in which each image patch independently predicts its displacement, we jointly estimate the displacements from all patches in a data-driven way, considering not only the training data but also geometric constraints on the test image. The displacement estimation is formulated as a convex optimization problem that can be solved efficiently. Finally, we use the sparse shape composition model as a priori information to regularize the landmark positions and thus generate the segmented shape contour. We validate our method on X-ray image datasets of three different anatomical structures: complete femur, proximal femur, and pelvis. Experiments show that our method is accurate and robust in landmark detection and, combined with the shape model, gives better or comparable performance in shape segmentation compared to state-of-the-art methods. Finally, a preliminary study using CT data shows the extensibility of our method to 3D data.
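
A minimal sketch of the patch-displacement voting idea described above (not the paper's joint convex estimation): each sampled patch adds its predicted displacement to its own position, and the votes are aggregated into a landmark estimate. All numbers are hypothetical.

```python
import numpy as np

def vote_landmark(patch_centers, predicted_displacements):
    """Aggregate per-patch votes for a single landmark position.

    patch_centers           : (N, 2) array of patch center coordinates
    predicted_displacements : (N, 2) array of displacements patch -> landmark
    Returns the median of the votes, which is robust to outlier patches.
    """
    votes = patch_centers + predicted_displacements  # each patch's guess
    return np.median(votes, axis=0)

# toy usage with hypothetical values
centers = np.array([[10.0, 20.0], [40.0, 15.0], [25.0, 60.0]])
disps   = np.array([[ 5.0,  4.0], [-25.0, 9.0], [-10.0, -36.0]])
landmark = vote_landmark(centers, disps)   # approx. [15, 24]
```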

Relevance: 30.00%

Abstract:

This paper addresses the problem of fully automatic localization and segmentation of 3D intervertebral discs (IVDs) from MR images. Our method has two steps: we first localize the center of each IVD and then segment the IVDs by classifying image pixels around each disc center as foreground (disc) or background. The disc localization is done by estimating the image displacements from a set of randomly sampled 3D image patches to the disc center. The image displacements are estimated by jointly optimizing the training and test displacement values in a data-driven way, taking into consideration both the training data and the geometric constraint on the test image. After the disc centers are localized, the segmentation classifies image pixels around the disc centers with a data-driven approach similar to the one used for localization, but here we estimate the foreground/background probability of each pixel instead of the image displacements. In addition, an extra neighborhood smoothness constraint is introduced to enforce the local smoothness of the label field. Our method is validated on 3D T2-weighted turbo spin echo MR images of 35 patients from two different studies. Experiments show that, compared to the state of the art, our method achieves better or comparable results. Specifically, we achieve for localization a mean error of 1.6-2.0 mm, and for segmentation a mean Dice metric of 85%-88% and a mean surface distance of 1.3-1.4 mm.
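
The segmentation figures quoted above are Dice overlaps and mean surface distances. As a small reference (the standard definition, not code from the paper), a Dice coefficient between two binary masks can be computed as follows:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap between two binary masks (1 = foreground)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# toy usage: two overlapping 2-D masks
pred = np.zeros((5, 5), dtype=int); pred[1:4, 1:4] = 1
gt   = np.zeros((5, 5), dtype=int); gt[2:5, 2:5] = 1
print(dice(pred, gt))  # 4 overlapping pixels out of 9 + 9 -> ~0.44
```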

Relevance: 30.00%

Abstract:

We present new algorithms for M-estimators of multivariate scatter and location and for symmetrized M-estimators of multivariate scatter. The new algorithms are considerably faster than currently used fixed-point and related algorithms. The main idea is to utilize a second-order Taylor expansion of the target functional and to devise a partial Newton-Raphson procedure. In connection with symmetrized M-estimators, we work with incomplete U-statistics to accelerate the initial stages of our procedures.
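
For context, the fixed-point baseline that such algorithms are typically compared against looks like the sketch below, shown here for Tyler's M-estimator of scatter (a standard special case, chosen as an assumption); the paper's partial Newton-Raphson procedure is not reproduced.

```python
import numpy as np

def tyler_scatter(X, n_iter=100, tol=1e-8):
    """Fixed-point iteration for Tyler's M-estimator of scatter.

    X : (n, p) data matrix, assumed centered.
    Returns a (p, p) scatter matrix normalized to trace p.
    """
    n, p = X.shape
    S = np.eye(p)
    for _ in range(n_iter):
        Sinv = np.linalg.inv(S)
        # Mahalanobis-type weights 1 / (x_i' S^{-1} x_i)
        w = 1.0 / np.einsum('ij,jk,ik->i', X, Sinv, X)
        S_new = (p / n) * (X * w[:, None]).T @ X
        S_new *= p / np.trace(S_new)          # fix the scale
        if np.linalg.norm(S_new - S) < tol:
            return S_new
        S = S_new
    return S

# toy usage
rng = np.random.default_rng(0)
S_hat = tyler_scatter(rng.standard_normal((200, 3)))
```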

Relevance: 30.00%

Abstract:

This paper considers the aggregate performance of the banking industry, applying a modified and extended dynamic decomposition of bank return on equity. The aggregate performance of any industry depends on the underlying microeconomic dynamics within that industry: adjustments within banks, reallocations between banks, entry of new banks, and exit of existing banks. Bailey, Hulten, and Campbell (1992) and Haltiwanger (1997) develop dynamic decompositions of industry performance. We extend those analyses to derive an ideal decomposition that includes their decomposition as one component. We also extend the decomposition to consider geography, implementing it on a state-by-state basis and linking that geographic decomposition back to the national level. We then consider how deregulation of geographic restrictions on bank activity affects the components of the state-level dynamic decomposition, controlling for competition and the state of the economy within each state and employing fixed- and random-effects estimation for a panel database across the fifty states and the District of Columbia from 1976 to 2000.

Relevance: 30.00%

Abstract:

This paper considers the aggregate performance of the banking industry, applying a modified and extended dynamic decomposition of bank return on equity. The aggregate performance of any industry depends on the underlying microeconomic dynamics within that industry: adjustments within banks, reallocations between banks, entry of new banks, and exit of existing banks. Bailey, Hulten, and Campbell (1992) and Haltiwanger (1997) develop dynamic decompositions of industry performance. We extend those analyses to derive an ideal dynamic decomposition that includes their dynamic decomposition as one component. We also extend the decomposition to consider geography, implementing it on a state-by-state basis and linking that geographic decomposition back to the national level. We then consider how deregulation of geographic restrictions on bank activity affects the components of the state-level dynamic decomposition, controlling for competition and the state of the economy within each state and employing fixed- and random-effects estimation for a panel database across the fifty states and the District of Columbia from 1976 to 2000.
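
As a rough illustration of what a dynamic decomposition of an industry aggregate looks like, the sketch below computes a generic within/between/cross/entry/exit split of an equity-share-weighted aggregate ROE change, in the spirit of the decompositions cited above; it is not the authors' ideal decomposition, and the column names are hypothetical.

```python
import pandas as pd

def dynamic_decomposition(df):
    """Within/between/cross/entry/exit decomposition of the change in
    share-weighted aggregate ROE between two periods (generic FHK-style split).

    df : DataFrame with columns ['bank', 'period', 'share', 'roe'];
         'share' is each bank's share of aggregate equity, summing to 1 per period.
    """
    t0, t1 = sorted(df['period'].unique())
    a = df[df.period == t0].set_index('bank')
    b = df[df.period == t1].set_index('bank')
    agg0 = (a.share * a.roe).sum()            # aggregate ROE in the base period

    cont  = a.index.intersection(b.index)     # continuing banks
    entr  = b.index.difference(a.index)       # entrants
    exitb = a.index.difference(b.index)       # exiters

    d_roe = b.loc[cont, 'roe'] - a.loc[cont, 'roe']
    d_sh  = b.loc[cont, 'share'] - a.loc[cont, 'share']
    within  = (a.loc[cont, 'share'] * d_roe).sum()
    between = (d_sh * (a.loc[cont, 'roe'] - agg0)).sum()
    cross   = (d_sh * d_roe).sum()
    entry   = (b.loc[entr, 'share'] * (b.loc[entr, 'roe'] - agg0)).sum()
    exit_c  = -(a.loc[exitb, 'share'] * (a.loc[exitb, 'roe'] - agg0)).sum()
    return {'within': within, 'between': between, 'cross': cross,
            'entry': entry, 'exit': exit_c}
```

By construction, the five components sum to the change in the share-weighted aggregate ROE between the two periods.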

Relevance: 30.00%

Abstract:

The three articles that comprise this dissertation describe how small area estimation and geographic information systems (GIS) technologies can be integrated to provide useful information about the number of uninsured and where they are located. Comprehensive data about the numbers and characteristics of the uninsured are typically only available from surveys. Utilization and administrative data are poor proxies from which to develop this information. Those who cannot access services are unlikely to be fully captured, either by health care provider utilization data or by state and local administrative data. In the absence of direct measures, a well-developed estimation of the local uninsured count or rate can prove valuable when assessing the unmet health service needs of this population. However, the fact that these are "estimates" increases the chances that results will be rejected or, at best, treated with suspicion. The visual impact and spatial analysis capabilities afforded by GIS technology can strengthen the likelihood of acceptance of area estimates by those most likely to benefit from the information, including health planners and policy makers.

The first article describes how uninsured estimates are currently being performed in the Houston metropolitan region. It details the synthetic model used to calculate numbers and percentages of uninsured, and how the resulting estimates are integrated into a GIS. The second article compares the estimation method of the first article with one currently used by the Texas State Data Center to estimate numbers of uninsured for all Texas counties. Estimates are developed for census tracts in Harris County, using both models with the same data sets. The results are statistically compared. The third article describes a new, revised synthetic method that is being tested to provide uninsured estimates at sub-county levels for eight counties in the Houston metropolitan area. It is being designed to replicate the same categorical results provided by a current U.S. Census Bureau estimation method. The estimates calculated by this revised model are compared to the most recent U.S. Census Bureau estimates, using the same areas and population categories.
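
A bare-bones example of the kind of synthetic (indirect) estimate discussed above: apply group-specific uninsured rates estimated from a larger-area survey to local population counts by group, and sum. All group labels and numbers below are hypothetical, not the dissertation's model.

```python
# Hypothetical group-specific uninsured rates from a state-level survey
state_rates = {'18-34': 0.28, '35-54': 0.17, '55-64': 0.11}

# Hypothetical census-tract population counts for the same groups
tract_population = {'18-34': 1200, '35-54': 950, '55-64': 430}

# Synthetic estimate: each group's rate applied to the local count, then summed
uninsured_count = sum(state_rates[g] * n for g, n in tract_population.items())
uninsured_rate = uninsured_count / sum(tract_population.values())

print(round(uninsured_count), round(uninsured_rate, 3))  # about 545 people, ~21%
```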

Relevance: 30.00%

Abstract:

Health departments, research institutions, policy-makers, and healthcare providers are often interested in knowing the health status of their clients/constituents. Without the resources, financially or administratively, to go out into the community and conduct health assessments directly, these entities frequently rely on data from population-based surveys to supply the information they need. Unfortunately, these surveys are ill-equipped for the job due to sample size and privacy concerns. Small area estimation (SAE) techniques have excellent potential in such circumstances, but have been underutilized in public health due to lack of awareness and confidence in applying its methods. The goal of this research is to make model-based SAE accessible to a broad readership using clear, example-based learning. Specifically, we applied the principles of multilevel, unit-level SAE to describe the geographic distribution of HPV vaccine coverage among females aged 11-26 in Texas.

Multilevel (3-level: individual, county, public health region) random-intercept logit models of HPV vaccination (receipt of ≥ 1 dose of Gardasil®) were fit to data from the 2008 Behavioral Risk Factor Surveillance System (outcome and level 1 covariates) and a number of secondary sources (group-level covariates). Sampling weights were scaled (level 1) or constructed (levels 2 & 3) and incorporated at every level. Using the regression coefficients (and standard errors) from the final models, I simulated 10,000 datasets for each regression coefficient from the normal distribution and applied them to the logit model to estimate HPV vaccine coverage in each county and respective demographic subgroup. For simplicity, I only provide coverage estimates (and 95% confidence intervals) for counties.

County-level coverage among females aged 11-17 varied from 6.8% to 29.0%. For females aged 18-26, coverage varied from 1.9% to 23.8%. Aggregated to the state level, these values translate to indirect state estimates of 15.5% and 11.4%, respectively, both of which fall within the confidence intervals for the direct estimates of HPV vaccine coverage in Texas (females 11-17: 17.7%, 95% CI: 13.6, 21.9; females 18-26: 12.0%, 95% CI: 6.2, 17.7).

Small area estimation has great potential for informing policy, program development and evaluation, and the provision of health services. Harnessing the flexibility of multilevel, unit-level SAE to estimate HPV vaccine coverage among females aged 11-26 in Texas counties, I have provided (1) practical guidance on how to conceptualize and conduct model-based SAE, (2) a robust framework that can be applied to other health outcomes or geographic levels of aggregation, and (3) HPV vaccine coverage data that may inform the development of health education programs, the provision of health services, the planning of additional research studies, and the creation of local health policies.
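
A stripped-down sketch of the simulation step described above: draw regression coefficients from normal distributions centered on the estimates, push each draw through the logit link, and summarize the resulting coverage probabilities with a point estimate and interval. The coefficients, covariates, and county are hypothetical, not the fitted Texas model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fitted fixed effects (intercept, age-group indicator, % poverty) and SEs
beta_hat = np.array([-1.9, 0.6, -0.8])
se_hat   = np.array([ 0.3, 0.1,  0.2])

# Hypothetical covariate vector for one county/subgroup (1, age indicator, poverty)
x = np.array([1.0, 1.0, 0.25])

# 10,000 draws of the coefficient vector, as in the simulation described above
draws = rng.normal(beta_hat, se_hat, size=(10_000, len(beta_hat)))
p = 1.0 / (1.0 + np.exp(-(draws @ x)))        # inverse-logit gives coverage probs

estimate = np.median(p)
lo, hi = np.percentile(p, [2.5, 97.5])        # 95% interval
print(f"coverage {estimate:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```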

Relevance: 30.00%

Abstract:

The need for timely population data for health planning and indicators of need has increased the demand for population estimates. The data required to produce estimates are difficult to obtain and the process is time consuming. Estimation methods that require less effort and fewer data are needed. The structure preserving estimator (SPREE) is a promising technique not previously used to estimate county population characteristics. This study first uses traditional regression estimation techniques to produce estimates of county population totals. Then the structure preserving estimator, using the results produced in the first phase as constraints, is evaluated.

Regression methods are among the most frequently used demographic methods for estimating populations. These methods use symptomatic indicators to predict population change. This research evaluates three regression methods to determine which will produce the best estimates based on the 1970 to 1980 indicators of population change. Strategies for stratifying data to improve the ability of the methods to predict change were tested. Difference-correlation using PMSA strata produced the equation that fit the data best. Regression diagnostics were used to evaluate the residuals.

The second phase of this study evaluates use of the structure preserving estimator in making estimates of population characteristics. The SPREE estimation approach uses existing data (the association structure) to establish the relationship between the variable of interest and the associated variable(s) at the county level. Marginals at the state level (the allocation structure) supply the current relationship between the variables. The full allocation structure model uses current estimates of county population totals to limit the magnitude of county estimates. The limited full allocation structure model has no constraints on county size. The 1970 county census age-gender population provides the association structure; the allocation structure is the 1980 state age-gender distribution.

The full allocation model produces good estimates of the 1980 county age-gender populations. An unanticipated finding of this research is that the limited full allocation model produces estimates of county population totals that are superior to those produced by the regression methods. The full allocation model is used to produce estimates of 1986 county population characteristics.
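
In its simplest form, a structure preserving estimate adjusts a known cross-classified table (the association structure) so that its margins match newer totals (the allocation structure), which amounts to iterative proportional fitting. The sketch below, with a hypothetical 3-county by 2-sex table, illustrates that basic idea rather than the dissertation's full and limited allocation models.

```python
import numpy as np

def spree_ipf(association, row_margin, col_margin, n_iter=100):
    """Basic SPREE-style adjustment: rescale the association structure to
    match new margins via iterative proportional fitting (raking)."""
    est = association.astype(float).copy()
    for _ in range(n_iter):
        est *= (row_margin / est.sum(axis=1))[:, None]    # match county totals
        est *= (col_margin / est.sum(axis=0))[None, :]     # match state sex totals
    return est

# Hypothetical base-year county-by-sex counts (association structure)
assoc = np.array([[500., 520.],
                  [300., 290.],
                  [800., 830.]])
# Hypothetical current county totals and state-level sex totals (allocation structure)
county_totals = np.array([1150., 640., 1700.])
sex_totals    = np.array([1710., 1780.])
print(spree_ipf(assoc, county_totals, sex_totals).round(1))
```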

Relevance: 30.00%

Abstract:

A discussion of nonlinear dynamics, demonstrated by the familiar automobile, is followed by the development of a systematic method of analysis of a possibly nonlinear time series using difference equations in the general state-space format. This format allows recursive state-dependent parameter estimation after each observation, thereby revealing the dynamics inherent in the system in combination with random external perturbations.

The one-step-ahead prediction errors at each time period, transformed to have constant variance, and the estimated parametric sequences provide the information to (1) formally test whether time series observations y_t are some linear function of random errors ε_s, for some t and s, or whether the series would more appropriately be described by a nonlinear model such as bilinear, exponential, threshold, etc.; (2) formally test whether a statistically significant change has occurred in structure/level, either historically or as it occurs; (3) forecast a nonlinear system with a new and innovative (but very old numerical) technique utilizing rational functions to extrapolate individual parameters as smooth functions of time, which are then combined to obtain the forecast of y; and (4) suggest a measure of resilience, i.e., how much perturbation a structure/level can tolerate, whether internal or external to the system, and remain statistically unchanged. Although similar to one-step control, this provides a less rigid way to think about changes affecting social systems.

Applications consisting of the analysis of some familiar and some simulated series demonstrate the procedure. Empirical results suggest that this state-space or modified augmented Kalman filter may provide interesting ways to identify particular kinds of nonlinearities as they occur in structural change via the state trajectory.

A computational flow-chart detailing computations and software input and output is provided in the body of the text. IBM Advanced BASIC program listings to accomplish most of the analysis are provided in the appendix.
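
A small sketch of the recursive, state-dependent parameter estimation idea: a random-walk-coefficient regression updated by a Kalman filter, which yields the filtered parameter path and the standardized one-step-ahead prediction errors at each observation. The state dimension, noise variances, and regressor construction here are illustrative assumptions, not the dissertation's modified augmented filter.

```python
import numpy as np

def tvp_kalman(y, H, q=0.01, r=1.0):
    """Kalman filter for y_t = H_t' theta_t + e_t with theta_t a random walk.

    y : (T,) observations;  H : (T, k) regressors (e.g., constant, lagged y).
    Returns filtered parameter paths and standardized one-step prediction errors.
    """
    T, k = H.shape
    theta = np.zeros(k)
    P = np.eye(k) * 10.0                     # diffuse-ish initial uncertainty
    thetas, errors = np.zeros((T, k)), np.zeros(T)
    for t in range(T):
        P = P + q * np.eye(k)                # random-walk drift of the parameters
        h = H[t]
        f = h @ P @ h + r                    # prediction-error variance
        e = y[t] - h @ theta                 # one-step-ahead prediction error
        K = P @ h / f                        # Kalman gain
        theta = theta + K * e
        P = P - np.outer(K, h @ P)
        thetas[t], errors[t] = theta, e / np.sqrt(f)   # constant-variance errors
    return thetas, errors

# toy usage: regression on an intercept and the lagged series
rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(200)) * 0.1 + 5
H = np.column_stack([np.ones(199), y[:-1]])
paths, std_errors = tvp_kalman(y[1:], H)
```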

Relevance: 30.00%

Abstract:

The Two State model describes how drugs activate receptors by inducing or supporting a conformational change in the receptor from "off" to "on". The beta 2 adrenergic receptor system is the model system that was used to formalize the concept of two states, and the mechanism of hormone agonist stimulation of this receptor is similar to ligand activation of other seven-transmembrane receptors. Hormone binding to beta 2 adrenergic receptors stimulates the intracellular production of cyclic adenosine monophosphate (cAMP), which is mediated through the stimulatory guanyl nucleotide binding protein (Gs) interacting with the membrane-bound enzyme adenylylcyclase (AC).

The effects of cAMP include protein phosphorylation, metabolic regulation and transcriptional regulation. The beta 2 adrenergic receptor system is the most well known of its family of G protein coupled receptors. Ligands have been scrutinized extensively in search of more effective therapeutic agents at this receptor as well as for insight into the biochemical mechanism of receptor activation. Hormone binding to the receptor is thought to induce a conformational change in the receptor that increases its affinity for inactive Gs and catalyzes the release of GDP, the subsequent binding of GTP, and the activation of Gs.

However, some beta 2 ligands are more efficient at this transformation than others, and the underlying mechanism for this drug specificity is not fully understood. The central problem in pharmacology is the characterization of drugs in their effect on physiological systems, and consequently the search for a rational scale of drug effectiveness has been the effort of many investigators, which continues to the present time as models are proposed, tested and modified.

The major results of this thesis show that for many beta 2 adrenergic ligands the Two State model is quite adequate to explain their activity, but dobutamine ((±)-3,4-dihydroxy-N-[3-(4-hydroxyphenyl)-1-methylpropyl]-β-phenethylamine) fails to conform to the predictions of the Two State model. It is a weak partial agonist, but it forms a large amount of high affinity complexes, and these complexes are formed at low concentrations much better than at higher concentrations. Finally, dobutamine causes the beta 2 adrenergic receptor to form high affinity complexes at a much faster rate than can be accounted for by its low efficiency in activating AC. Because the Two State model fails to predict the activity of dobutamine in three different ways, it has been disproven in its strictest form.
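
For reference, the textbook two-state formulation has a closed form for the fraction of receptors in the active state. The sketch below uses that standard expression with hypothetical constants; it is not the thesis's fitted beta 2 adrenergic parameters.

```python
import numpy as np

def fraction_active(A, K_A, L, alpha):
    """Fraction of receptors in the active state under the two-state model.

    A     : agonist concentration
    K_A   : dissociation constant for binding to the inactive state R
    L     : [R*]/[R] equilibrium in the absence of ligand
    alpha : ratio of affinities for R* versus R (alpha > 1 favors activation)
    """
    occ_inactive = A / K_A
    occ_active = alpha * A / K_A
    return L * (1 + occ_active) / (1 + occ_inactive + L * (1 + occ_active))

# Hypothetical full agonist (large alpha) versus weak partial agonist (small alpha)
conc = np.logspace(-9, -4, 6)                      # molar concentrations
full    = fraction_active(conc, K_A=1e-6, L=1e-3, alpha=1000.0)
partial = fraction_active(conc, K_A=1e-6, L=1e-3, alpha=20.0)
```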

Relevance: 30.00%

Abstract:

Millennial-scale records of planktonic foraminiferal Mg/Ca, bulk sediment UK37', and planktonic foraminiferal d18O are presented across the last two deglaciations in sediment core NIOP929 from the Arabian Sea. Mg/Ca-derived temperature variability during the penultimate and last deglacial periods falls within the range of modern-day Arabian Sea temperatures, which are influenced by monsoon-driven upwelling. The UK37'-derived temperatures in MIS 5e are similar to modern intermonsoon values and are on average 3.5°C higher than the Mg/Ca temperatures in the same period. MIS 5e UK37' and Mg/Ca temperatures are 1.5°C warmer than during the Holocene, while the UK37'-Mg/Ca temperature difference was about twice as large during MIS 5e. This is surprising, as both proxy carriers nowadays have a very similar seasonal and depth distribution. Partial explanations for the MIS 5e UK37'-Mg/Ca temperature offset include carbonate dissolution, the change in dominant alkenone-producing species, and possibly lateral advection of alkenone-bearing material and a change in seasonal or depth distribution of the proxy carriers. Our findings suggest that (1) Mg/Ca of G. ruber documents seawater temperature in the same way during both studied deglaciations as in the present, with respect to, e.g., season and depth, and (2) UK37'-based temperatures from MIS 5 (or older) represent neither upwelling SST nor annual average SST (as they do in the present and the Holocene) but a higher temperature, despite alkenone production mainly occurring in the upwelling season. Further, we report that at the onset of the deglacial warming the Mg/Ca record leads the UK37' record by 4 ka, of which a maximum of 2 ka may be explained by postdepositional processes. Deglacial warming in both temperature records leads the deglacial decrease in the d18O profile, and Mg/Ca-based temperature returns to lower values before d18O has reached minimum interglacial values. This indicates a substantial lead of Arabian Sea warming relative to global ice melting.
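
For orientation, the conversions from the two proxies to temperature commonly take the forms sketched below. The calibration constants are widely used published values assumed for illustration and are not necessarily the calibrations applied in this study.

```python
import numpy as np

def temp_from_mgca(mg_ca, a=0.09, b=0.38):
    """Exponential Mg/Ca paleothermometer, Mg/Ca = b * exp(a * T),
    with constants of the kind reported by Anand et al. (2003)."""
    return np.log(mg_ca / b) / a

def temp_from_uk37(uk37, slope=0.033, intercept=0.044):
    """Linear alkenone calibration UK'37 = slope * T + intercept,
    with constants of the kind reported by Muller et al. (1998)."""
    return (uk37 - intercept) / slope

print(temp_from_mgca(4.2))    # roughly 26-27 deg C
print(temp_from_uk37(0.95))   # roughly 27-28 deg C
```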

Relevance: 30.00%

Abstract:

Carbon isotopically based estimates of CO2 levels have been generated from a record of the photosynthetic fractionation of 13C (εp) in a central equatorial Pacific sediment core that spans the last ~255 ka. Contents of 13C in phytoplanktonic biomass were determined by analysis of C37 alkadienones. These compounds are exclusive products of Prymnesiophyte algae, which at present grow most abundantly at depths of 70-90 m in the central equatorial Pacific. A record of the isotopic composition of dissolved CO2 was constructed from isotopic analyses of the planktonic foraminifera Neogloboquadrina dutertrei, which calcifies at 70-90 m in the same region. Values of εp, derived by comparison of the organic and inorganic δ values, were transformed to yield concentrations of dissolved CO2 (ce) based on a new, site-specific calibration of the relationship between εp and ce. The calibration was based on reassessment of existing εp versus ce data, which support a physiologically based model in which εp is inversely related to ce. Values of pCO2, the partial pressure of CO2 that would be in equilibrium with the estimated concentrations of dissolved CO2, were calculated using Henry's law and the temperature determined from the alkenone-unsaturation index UK37'. Uncertainties in these values arise mainly from uncertainties about the appropriateness (particularly over time) of the site-specific relationship between εp and 1/ce. These are discussed in detail, and it is concluded that the observed record of εp most probably reflects significant variations in ΔpCO2, the ocean-atmosphere disequilibrium, which appears to have ranged from ~110 µatm during glacial intervals (ocean > atmosphere) to ~60 µatm during interglacials. Fluxes of CO2 to the atmosphere would thus have been significantly larger during glacial intervals. If this were characteristic of large areas of the equatorial Pacific, then greater glacial sinks for the equatorially evaded CO2 must have existed elsewhere. Statistical analysis of air-sea pCO2 differences and other parameters revealed significant (p < 0.01) inverse correlations of ΔpCO2 with sea surface temperature and with the mass accumulation rate of opal. The former suggests a response to the strength of upwelling; the latter may indicate either drawdown of CO2 by siliceous phytoplankton or variation of [CO2]/[Si(OH)4] ratios in upwelling waters.
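
A schematic of the quantitative chain described above, using the standard functional forms from this literature with illustrative constants rather than the paper's site-specific calibration: εp is treated as inversely related to dissolved CO2, inverted to recover ce, and ce is converted to pCO2 with Henry's law. The values of ep_f, b, and k0 below are assumptions for illustration only.

```python
def ce_from_ep(ep, ep_f=25.0, b=120.0):
    """Invert ep = ep_f - b / ce for the dissolved CO2 concentration ce (umol/kg).
    ep_f and b are illustrative constants, not the paper's calibration."""
    return b / (ep_f - ep)

def pco2_from_ce(ce, k0=0.030):
    """Henry's law: pCO2 (uatm) = ce / K0, where K0 is the CO2 solubility
    (mol kg-1 atm-1), which in practice depends on temperature and salinity."""
    return ce / k0

ce = ce_from_ep(ep=13.0)          # ~10 umol/kg for these illustrative constants
pco2 = pco2_from_ce(ce)           # ~330 uatm
```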