15 results for Horne, Lena, 1917-2010, American
at Duke University
Abstract:
We propose and experimentally validate a first-principles based model for the nonlinear piezoelectric response of an electroelastic energy harvester. The analysis herein highlights the importance of modeling inherent piezoelectric nonlinearities that are not limited to higher order elastic effects but also include nonlinear coupling to a power harvesting circuit. Furthermore, a nonlinear damping mechanism is shown to accurately restrict the amplitude and bandwidth of the frequency response. The linear piezoelectric modeling framework widely accepted for theoretical investigations is demonstrated to be a weak presumption for near-resonant excitation amplitudes as low as 0.5 g in a prefabricated bimorph whose oscillation amplitudes remain geometrically linear for the full range of experimental tests performed (never exceeding 0.25% of the cantilever overhang length). Nonlinear coefficients are identified via a nonlinear least-squares optimization algorithm that utilizes an approximate analytic solution obtained by the method of harmonic balance. For lead zirconate titanate (PZT-5H), we obtained a fourth order elastic tensor component of c^p_1111 = -3.6673 × 10^17 N/m^2 and a fourth order electroelastic tensor value of e_3111 = 1.7212 × 10^8 m/V. © 2010 American Institute of Physics.
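As a rough illustration of the parameter-identification step mentioned above (not the authors' electroelastic model), the sketch below fits a generic Duffing-type harmonic-balance amplitude relation to measured frequency-response data with a nonlinear least-squares solver. The natural frequency `w0`, forcing level `F`, and the placeholder arrays `w_meas`/`a_meas` are hypothetical and would be replaced by values from swept-sine tests.

```python
# Minimal sketch (generic Duffing-type model, not the paper's): identify cubic
# stiffness and cubic damping coefficients from measured frequency-response
# amplitudes using the harmonic-balance amplitude equation.
import numpy as np
from scipy.optimize import least_squares

w0, F = 2 * np.pi * 50.0, 1.0          # assumed linear natural frequency [rad/s] and forcing level

def hb_residual(params, w, a):
    """Implicit harmonic-balance amplitude relation; ~0 on the measured response curve."""
    zeta, alpha, mu = params            # linear damping ratio, cubic stiffness, cubic damping
    elastic = (w0**2 - w**2 + 0.75 * alpha * a**2) * a
    dissipative = (2.0 * zeta * w0 * w + 0.75 * mu * w**3 * a**2) * a
    return elastic**2 + dissipative**2 - F**2

# w_meas, a_meas would come from experiments; synthetic placeholders here.
w_meas = 2 * np.pi * np.linspace(45.0, 55.0, 40)
a_meas = np.full_like(w_meas, 1e-3)

fit = least_squares(hb_residual, x0=[0.01, 0.0, 0.0], args=(w_meas, a_meas))
print(fit.x)                            # identified [zeta, alpha, mu]
```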
Abstract:
We demonstrate a scalable approach to addressing multiple atomic qubits for use in quantum information processing. Individually trapped ⁸⁷Rb atoms in a linear array are selectively manipulated with a single laser guided by a microelectromechanical beam steering system. Single qubit oscillations are shown on multiple sites at frequencies of ≃3.5 MHz with negligible crosstalk to neighboring sites. Switching times between the central atom and its closest neighbor were measured to be 6-7 μs while moving between the central atom and an atom two trap sites away took 10-14 μs. © 2010 American Institute of Physics.
Abstract:
When solid material is removed in order to create flow channels in a load carrying structure, the strength of the structure decreases. On the other hand, a structure with channels is lighter and easier to transport as part of a vehicle. Here, we show that this trade-off can be used to advantage, to design a vascular mechanical structure. When the total amount of solid is fixed and the sizes, shapes, and positions of the channels can vary, it is possible to morph the flow architecture such that it endows the mechanical structure with maximum strength. The result is a multifunctional structure that offers not only mechanical strength but also new capabilities necessary for volumetric functionalities such as self-healing and self-cooling. We illustrate the generation of such designs for strength and fluid flow for several classes of vasculatures: parallel channels, trees with one, two, and three bifurcation levels. The flow regime in every channel is laminar and fully developed. In each case, we found that it is possible to select not only the channel dimensions but also their positions such that the entire structure offers more strength and less flow resistance when the total volume (or weight) and the total channel volume are fixed. We show that the minimized peak stress is smaller when the channel volume (φ) is smaller and the vasculature is more complex, i.e., with more levels of bifurcation. Diminishing returns are reached in both directions, decreasing φ and increasing complexity. For example, when φ = 0.02 the minimized peak stress of a design with one bifurcation level is only 0.2% greater than the peak stress in the optimized vascular design with two levels of bifurcation. © 2010 American Institute of Physics.
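As a side illustration of the laminar, fully developed flow assumption (a generic Hagen-Poiseuille calculation, not the paper's coupled strength/flow optimization), the sketch below evaluates the hydraulic resistance of N parallel round channels when the total channel volume is held fixed; the viscosity, length, volume, and porosity values are placeholders.

```python
# Minimal sketch: Hagen-Poiseuille resistance of N parallel round channels of
# length L when the total channel volume phi*V is fixed. Generic illustration
# of the fixed-channel-volume constraint, not the paper's vascular designs.
import numpy as np

mu, L, V, phi = 1.0e-3, 0.1, 1.0e-4, 0.02        # viscosity [Pa s], length [m], body volume [m^3], channel volume fraction

for N in (1, 2, 4, 8):
    D = np.sqrt(4.0 * phi * V / (N * np.pi * L))  # diameter that uses exactly the allotted channel volume
    R_single = 128.0 * mu * L / (np.pi * D**4)    # Poiseuille resistance of one channel [Pa s / m^3]
    R_total = R_single / N                        # N channels in parallel
    print(N, D, R_total)
```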
Abstract:
Here we show that the configuration of a slender enclosure can be optimized such that the radiation heating of a stream of solid is performed with minimal fuel consumption at the global level. The solid moves longitudinally at constant rate through the enclosure. The enclosure is heated by gas burners distributed arbitrarily, in a manner that is to be determined. The total contact area for heat transfer between the hot enclosure and the cold solid is fixed. We find that minimal global fuel consumption is achieved when the longitudinal distribution of heaters is nonuniform, with more heaters near the exit than the entrance. The reduction in fuel consumption relative to when the heaters are distributed uniformly is of order 10%. Tapering the plan view (the floor) of the heating area yields an additional reduction in overall fuel consumption. The best shape is when the floor area is a slender triangle on which the cold solid enters by crossing the base. These architectural features recommend the proposal to organize the flow of the solid as a dendritic design, which enters as several branches, and exits as a single hot stream of prescribed temperature. The thermodynamics of heating is presented in modern terms in a dedicated section (exergy destruction, entropy generation). The contribution is to show that optimizing "thermodynamically" is the same as reducing fuel consumption. © 2010 American Institute of Physics.
Abstract:
Protocorporatist West European countries in which economic interests were collectively organized adopted proportional representation (PR) in the first quarter of the twentieth century, whereas liberal countries in which economic interests were not collectively organized did not. Political parties, as Marcus Kreuzer points out, choose electoral systems. So how do economic interests translate into party political incentives to adopt electoral reform? We argue that parties in protocorporatist countries were representative of and closely linked to economic interests. As electoral competition in single member districts increased sharply up to World War I, great difficulties resulted for the representative parties whose leaders were seen as interest-committed. They could not credibly compete for votes outside their interest without leadership changes or reductions in interest influence. PR offered an obvious solution, allowing parties to target their own voters and their organized interest to continue effective influence in the legislature. In each respect, the opposite was true of liberal countries. Data on party preferences strongly confirm this model. (Kreuzer's historical criticisms are largely incorrect, as shown in detail in the online supplementary Appendix.) © 2010 American Political Science Association.
Abstract:
The Million Mom March (favoring gun control) and Code Pink: Women for Peace (focusing on foreign policy, especially the war in Iraq) are organizations that have mobilized women as women in an era when other women's groups struggled to maintain critical mass and turned away from non-gender-specific public issues. This article addresses how these organizations fostered collective consciousness among women, a large and diverse group, while confronting the echoes of backlash against previous mobilization efforts by women. We argue that the March and Code Pink achieved mobilization success by creating hybrid organizations that blended elements of three major collective action frames: maternalism, egalitarianism, and feminine expression. These innovative organizations invented hybrid forms that cut across movements, constituencies, and political institutions. Using surveys, interviews, and content analysis of organizational documents, this article explains how the March and Code Pink met the contemporary challenges facing women's collective action in similar yet distinct ways. It highlights the role of feminine expression and concerns about the intersectional marginalization of women in resolving the historic tensions between maternalism and egalitarianism. It demonstrates hybridity as a useful analytical lens to understand gendered organizing and other forms of grassroots collective action. © 2010 American Political Science Association.
Abstract:
Successfully predicting the frequency dispersion of electronic hyperpolarizabilities is an unresolved challenge in materials science and electronic structure theory. We show that the generalized Thomas-Kuhn sum rules, combined with linear absorption data and measured hyperpolarizability at one or two frequencies, may be used to predict the entire frequency-dependent electronic hyperpolarizability spectrum. This treatment includes two- and three-level contributions that arise from the lowest two or three excited electronic state manifolds, enabling us to describe the unusual observed frequency dispersion of the dynamic hyperpolarizability in high oscillator strength M-PZn chromophores, where (porphinato)zinc(II) (PZn) and metal(II)polypyridyl (M) units are connected via an ethyne unit that aligns the high oscillator strength transition dipoles of these components in a head-to-tail arrangement. We show that some of these structures can possess very similar linear absorption spectra yet manifest dramatically different frequency-dependent hyperpolarizabilities, because of three-level contributions that result from excited-state-to-excited-state transition dipoles among charge polarized states. Importantly, this approach provides a quantitative scheme to use linear optical absorption spectra and very limited individual hyperpolarizability measurements to predict the entire frequency-dependent nonlinear optical response. Copyright © 2010 American Chemical Society.
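For orientation only (an illustrative simplification, not the generalized Thomas-Kuhn treatment used in the paper), the familiar two-level model expresses the dispersion of the second-harmonic hyperpolarizability in terms of a single dominant transition frequency ω0 and the static value β0:

\[
\beta(-2\omega;\omega,\omega) \;=\; \beta_{0}\,\frac{\omega_{0}^{4}}{\left(\omega_{0}^{2}-\omega^{2}\right)\left(\omega_{0}^{2}-4\omega^{2}\right)}.
\]

The three-level contributions emphasized in the abstract add terms that depend on excited-state-to-excited-state transition dipoles and therefore cannot be captured by this single-resonance form.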
Abstract:
The objective of spatial downscaling strategies is to increase the information content of coarse datasets at smaller scales. In the case of quantitative precipitation estimation (QPE) for hydrological applications, the goal is to close the scale gap between the spatial resolution of coarse datasets (e.g., gridded satellite precipitation products at resolution L × L) and the high resolution (l × l; L ≫ l) necessary to capture the spatial features that determine spatial variability of water flows and water stores in the landscape. In essence, the downscaling process consists of weaving subgrid-scale heterogeneity over a desired range of wavelengths in the original field. The defining question is, which properties, statistical and otherwise, of the target field (the known observable at the desired spatial resolution) should be matched, with the caveat that downscaling methods be as general as possible and therefore ideally without case-specific constraints and/or calibration requirements? Here, the attention is focused on two simple fractal downscaling methods using iterated functions systems (IFS) and fractal Brownian surfaces (FBS) that meet this requirement. The two methods were applied to disaggregate spatially 27 summertime convective storms in the central United States during 2007 at three consecutive times (1800, 2100, and 0000 UTC, thus 81 fields overall) from the Tropical Rainfall Measuring Mission (TRMM) version 6 (V6) 3B42 precipitation product (~25-km grid spacing) to the same resolution as the NCEP stage IV products (~4-km grid spacing). Results from bilinear interpolation are used as the control. A fundamental distinction between IFS and FBS is that the latter implies a distribution of downscaled fields and thus an ensemble solution, whereas the former provides a single solution. The downscaling effectiveness is assessed using fractal measures (the spectral exponent β, fractal dimension D, Hurst coefficient H, and roughness amplitude R) and traditional operational skill scores [false alarm rate (FR), probability of detection (PD), threat score (TS), and Heidke skill score (HSS)], as well as bias and the root-mean-square error (RMSE). The results show that both IFS and FBS fractal interpolation perform well with regard to operational skill scores, and they meet the additional requirement of generating structurally consistent fields. Furthermore, confidence intervals can be directly generated from the FBS ensemble. The results were used to diagnose errors relevant for hydrometeorological applications, in particular a spatial displacement with characteristic length of at least 50 km (2500 km2) in the location of peak rainfall intensities for the cases studied. © 2010 American Meteorological Society.
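One common way to realize a fractal-Brownian-surface ensemble member is spectral synthesis; the sketch below (a generic approximation, not the authors' implementation, with placeholder grid size and spectral exponent) generates such a surface and hints at how it could perturb an interpolated coarse field.

```python
# Minimal sketch (not the authors' code): one fractional-Brownian-surface
# ensemble member by Fourier (spectral) synthesis, used to weave subgrid-scale
# variability into a coarse precipitation field interpolated to a fine grid.
import numpy as np

def fbm_surface(n, beta, rng):
    """n x n Gaussian field with an approximately power-law spectrum ~ k^(-beta)."""
    kx = np.fft.fftfreq(n)
    ky = np.fft.fftfreq(n)
    k = np.sqrt(kx[:, None]**2 + ky[None, :]**2)
    k[0, 0] = 1.0                                   # avoid division by zero at the mean
    amplitude = k ** (-beta / 2.0)
    phase = np.exp(2j * np.pi * rng.random((n, n)))  # random phases
    field = np.fft.ifft2(amplitude * phase).real
    return (field - field.mean()) / field.std()      # standardized perturbation field

rng = np.random.default_rng(0)
noise = fbm_surface(n=64, beta=3.0, rng=rng)         # beta would be set from the coarse field's spectrum
# A coarse field interpolated to the fine grid could then be perturbed, e.g.:
# fine = np.clip(coarse_interp * (1.0 + 0.3 * noise), 0.0, None)
```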
Abstract:
The variability of summer precipitation in the southeastern United States is examined in this study using 60-yr (1948-2007) rainfall data. The Southeast summer rainfalls exhibited higher interannual variability with more intense summer droughts and anomalous wetness in the recent 30 years (1978-2007) than in the prior 30 years (1948-77). Such intensification of summer rainfall variability was consistent with a decrease of light (0.1-1 mm day⁻¹) and medium (1-10 mm day⁻¹) rainfall events during extremely dry summers and an increase of heavy (>10 mm day⁻¹) rainfall events in extremely wet summers. Changes in rainfall variability were also accompanied by a southward shift of the region of maximum zonal wind variability at the jet stream level in the latter period. The covariability between the Southeast summer precipitation and sea surface temperatures (SSTs) is also analyzed using the singular value decomposition (SVD) method. It is shown that the increase of Southeast summer precipitation variability is primarily associated with a higher SST variability across the equatorial Atlantic and also SST warming in the Atlantic. © 2010 American Meteorological Society.
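The SVD-based covariability analysis mentioned above is often called maximum covariance analysis; the sketch below (generic, with random placeholder anomaly matrices rather than the study's gridded data) shows the basic computation of the leading coupled mode.

```python
# Minimal sketch (generic maximum-covariance / SVD analysis, not the authors'
# exact processing): leading coupled mode between precipitation and SST anomalies.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_precip, n_sst = 60, 200, 500              # placeholder dimensions
P = rng.standard_normal((n_years, n_precip))          # precipitation anomalies (year x grid point)
S = rng.standard_normal((n_years, n_sst))             # SST anomalies (year x grid point)

P -= P.mean(axis=0)                                   # remove the time mean
S -= S.mean(axis=0)
C = P.T @ S / (n_years - 1)                           # cross-covariance matrix
U, sigma, Vt = np.linalg.svd(C, full_matrices=False)

precip_pattern, sst_pattern = U[:, 0], Vt[0, :]       # leading coupled spatial patterns
squared_cov_frac = sigma[0]**2 / np.sum(sigma**2)     # fraction of squared covariance explained
print(squared_cov_frac)
```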
Abstract:
We consider the problem of variable selection in regression modeling in high-dimensional spaces where there is known structure among the covariates. This is an unconventional variable selection problem for two reasons: (1) the dimension of the covariate space is comparable to, and often much larger than, the number of subjects in the study, and (2) the covariate space is highly structured, and in some cases it is desirable to incorporate this structural information into the model-building process. We approach this problem through the Bayesian variable selection framework, where we assume that the covariates lie on an undirected graph and formulate an Ising prior on the model space for incorporating structural information. Certain computational and statistical problems arise that are unique to such high-dimensional, structured settings, the most interesting being the phenomenon of phase transitions. We propose theoretical and computational schemes to mitigate these problems. We illustrate our methods on two different graph structures: the linear chain and the regular graph of degree k. Finally, we use our methods to study a specific application in genomics: the modeling of transcription factor binding sites in DNA sequences. © 2010 American Statistical Association.
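For concreteness, the sketch below evaluates an unnormalized Ising prior over inclusion indicators on an undirected graph; the specific parameterization (sparsity term a, smoothness term b) is one common choice and may differ from the paper's.

```python
# Minimal sketch (one common parameterization, not necessarily the paper's):
# unnormalized log Ising prior over inclusion indicators gamma on an undirected
# graph, rewarding the joint selection of connected covariates.
import numpy as np

def log_ising_prior(gamma, edges, a, b):
    """gamma: 0/1 inclusion vector; edges: list of (i, j) pairs; a, b: sparsity and smoothness."""
    gamma = np.asarray(gamma)
    singleton = a * gamma.sum()
    interaction = b * sum(gamma[i] * gamma[j] for i, j in edges)
    return singleton + interaction                 # log p(gamma) up to the normalizing constant

# Example: linear-chain graph on 5 covariates
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(log_ising_prior([1, 1, 0, 0, 1], edges, a=-2.0, b=1.0))
```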
Abstract:
This article describes advances in statistical computation for large-scale data analysis in structured Bayesian mixture models via graphics processing unit (GPU) programming. The developments are partly motivated by computational challenges arising in fitting models of increasing heterogeneity to increasingly large datasets. An example context concerns common biological studies using high-throughput technologies generating many, very large datasets and requiring increasingly high-dimensional mixture models with large numbers of mixture components. We outline important strategies and processes for GPU computation in Bayesian simulation and optimization approaches, give examples of the benefits of GPU implementations in terms of processing speed and scale-up in ability to analyze large datasets, and provide a detailed, tutorial-style exposition that will benefit readers interested in developing GPU-based approaches in other statistical models. Novel, GPU-oriented approaches to modifying existing algorithms and software design can lead to vast speed-up and, critically, enable statistical analyses that otherwise would not be performed due to compute time limitations in traditional computational environments. Supplemental materials are provided with all source code, example data, and details that will enable readers to implement and explore the GPU approach in this mixture modeling context. © 2010 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
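The appeal of GPUs here is that the per-observation, per-component density evaluations in a mixture model are independent; the generic sketch below (not the paper's software, with placeholder sizes and isotropic unit-variance components for brevity) shows that data-parallel structure in vectorized form, which maps directly onto GPU threads (e.g., by swapping numpy for cupy).

```python
# Minimal sketch of the data-parallel structure GPUs exploit in mixture models:
# every (observation, component) Gaussian log-density is independent.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100_000, 5, 8
x = rng.standard_normal((n, d))                        # observations
means = rng.standard_normal((k, d))                    # component means
log_weights = np.log(np.full(k, 1.0 / k))              # equal mixture weights

# Isotropic, unit-variance components: log N(x | mu, I)
sq_dist = ((x[:, None, :] - means[None, :, :])**2).sum(axis=2)   # n x k squared distances
log_dens = -0.5 * (sq_dist + d * np.log(2 * np.pi))
log_resp = log_weights + log_dens                                 # unnormalized log responsibilities
log_lik = np.logaddexp.reduce(log_resp, axis=1).sum()             # mixture log-likelihood
print(log_lik)
```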
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend lots of resources getting high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
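The data-augmentation idea itself is simple to sketch; in the snippet below (hypothetical variable names and margins, not the thesis's ACS application), synthetic records whose observed margin matches a prior belief are appended with the remaining variable left missing, and the number of augmented records sets the prior's weight.

```python
# Minimal sketch: append synthetic records matching a prior marginal belief,
# with the other variables left missing; more records imply a stronger prior.
import numpy as np
import pandas as pd

data = pd.DataFrame({"educ": np.random.default_rng(0).integers(1, 4, size=500),
                     "emp": np.random.default_rng(1).integers(0, 2, size=500)})

prior_margin = {1: 0.2, 2: 0.5, 3: 0.3}    # prior belief about P(educ = level)
n_aug = 200                                 # number of augmented records controls prior weight

aug = pd.DataFrame({
    "educ": np.repeat(list(prior_margin), [int(round(p * n_aug)) for p in prior_margin.values()]),
    "emp": np.nan,                          # remaining variable left missing
})
augmented_data = pd.concat([data, aug], ignore_index=True)
print(augmented_data["educ"].tail(n_aug).value_counts(normalize=True))   # matches the prior margin
```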
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
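As a rough illustration of the gold-standard idea (hypothetical variables and error rates, not the NSCG/ACS application or the thesis's full model), the sketch below estimates a reporting-error matrix from a sample where both the true and reported values are observed and uses it to draw plausible corrected categories for error-prone reports.

```python
# Minimal sketch: estimate P(true | reported) from a gold-standard sample and
# draw plausible "true" categories for reports in an error-prone survey.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
gold = pd.DataFrame({"true_educ": rng.integers(0, 3, 5_000)})
misreport = rng.random(len(gold)) < 0.1                          # 10% overstate by one level
gold["reported_educ"] = np.minimum(gold["true_educ"] + misreport, 2)

# Rows: reported category; entries: estimated P(true | reported)
error_model = pd.crosstab(gold["reported_educ"], gold["true_educ"], normalize="index")

reports = rng.integers(0, 3, 10)                                 # error-prone survey reports
imputed = [rng.choice(error_model.columns.to_numpy(), p=error_model.loc[r].to_numpy())
           for r in reports]
print(list(zip(reports, imputed)))
```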
Abstract:
The third wave of the National Congregations Study (NCS-III) was conducted in 2012. The 2012 General Social Survey asked respondents who attend religious services to name their religious congregation, producing a nationally representative cross-section of congregations from across the religious spectrum. Data about these congregations was collected via a 50-minute interview with one key informant from 1,331 congregations. Information was gathered about multiple aspects of congregations’ social composition, structure, activities, and programming. Approximately two-thirds of the NCS-III questionnaire replicates items from 1998 or 2006-07 NCS waves. Each congregation was geocoded, and selected data from the 2010 United States census or American Community Survey have been appended. We describe NCS-III methodology and use the cumulative NCS dataset (containing 4,071 cases) to describe five trends: more ethnic diversity, greater acceptance of gays and lesbians, increasingly informal worship styles, declining size (but not from the perspective of the average attendee), and declining denominational affiliation.
Abstract:
In the United States, poverty has been historically higher and disproportionately concentrated in the American South. Despite this fact, much of the conventional poverty literature in the United States has focused on urban poverty, particularly in cities of the Northeast and Midwest. Relatively less American poverty research has focused on the enduring economic distress in the South, which Wimberley (2008:899) calls “a neglected regional crisis of historic and contemporary urgency.” Accordingly, this dissertation contributes to the inequality literature by focusing much needed attention on poverty in the South.
Each empirical chapter focuses on a different aspect of poverty in the South. Chapter 2 examines why poverty is higher in the South relative to the Non-South. Chapter 3 focuses on poverty predictors within the South and whether there are differences in the sub-regions of the Deep South and Peripheral South. These two chapters compare the roles of family demography, economic structure, racial/ethnic composition and heterogeneity, and power resources in shaping poverty. Chapter 4 examines whether poverty in the South has been shaped by historical racial regimes.
The Luxembourg Income Study (LIS) United States datasets (2000, 2004, 2007, 2010, and 2013) (derived from the U.S. Census Current Population Survey (CPS) Annual Social and Economic Supplement) provide all the individual-level data for this study. The LIS sample of 745,135 individuals is nested in rich economic, political, and racial state-level data compiled from multiple sources (e.g. U.S. Census Bureau, U.S. Department of Agriculture, University of Kentucky Center for Poverty Research, etc.). Analyses involve a combination of techniques including linear probability regression models to predict poverty and binary decomposition of poverty differences.
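To make the analytic strategy concrete (with hypothetical variables and simulated data, not the LIS sample or the dissertation's full specification), the sketch below fits an OLS linear probability model of poverty in each region and applies a simple Oaxaca-style binary decomposition of the South/non-South gap into composition and coefficient components.

```python
# Minimal sketch: linear probability model plus a two-group decomposition of a
# poverty gap into "characteristics" and "coefficients" components.
import numpy as np

rng = np.random.default_rng(0)

def lpm_fit(X, y):
    """OLS coefficients for a linear probability model (intercept included)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

# Placeholder data: one covariate (e.g., a low-education indicator) per region
X_s, X_n = rng.random((4_000, 1)), rng.random((6_000, 1))
y_s = (rng.random(4_000) < 0.18).astype(float)        # poverty indicator, South
y_n = (rng.random(6_000) < 0.12).astype(float)        # poverty indicator, non-South

b_s, b_n = lpm_fit(X_s, y_s), lpm_fit(X_n, y_n)
xbar_s = np.concatenate([[1.0], X_s.mean(axis=0)])
xbar_n = np.concatenate([[1.0], X_n.mean(axis=0)])

gap = y_s.mean() - y_n.mean()
explained = (xbar_s - xbar_n) @ b_n                   # differences in characteristics
unexplained = xbar_s @ (b_s - b_n)                    # differences in coefficients
print(gap, explained, unexplained, explained + unexplained)
```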
Chapter 2 results suggest that power resources, followed by economic structure, are most important in explaining the higher poverty in the South. This underscores the salience of political and economic contexts in shaping poverty across place. Chapter 3 results indicate that individual-level economic factors are the largest predictors of poverty within the South, and even more so in the Deep South. Moreover, divergent results between the South, Deep South, and Peripheral South illustrate how the impact of poverty predictors can vary in different contexts. Chapter 4 results show significant bivariate associations between historical race regimes and poverty among Southern states, although regression models fail to yield significant effects. Conversely, historical race regimes do have a small, but significant effect in explaining the Black-White poverty gap. Results also suggest that employment and education are key to understanding poverty among Blacks and the Black-White poverty gap. Collectively, these chapters underscore why place is so important for understanding poverty and inequality. They also illustrate the salience of micro and macro characteristics of place for helping create, maintain, and reproduce systems of inequality across place.
Abstract:
What role do state party organizations play in twenty-first century American politics? What is the nature of the relationship between the state and national party organizations in contemporary elections? These questions frame the three studies presented in this dissertation. More specifically, I examine the organizational development of the state party organizations and the strategic interactions and connections between the state and national party organizations in contemporary elections.
In the first empirical chapter, I argue that the Internet Age represents a significant transitional period for state party organizations. Using data collected from surveys of state party leaders, this chapter reevaluates and updates existing theories of party organizational strength and demonstrates the importance of new indicators of party technological capacity to our understanding of party organizational development in the early twenty-first century. In the second chapter, I ask whether the national parties utilize different strategies in deciding how to allocate resources to state parties through fund transfers and through the 50-state-strategy party-building programs that both the Democratic and Republican National Committees advertised during the 2010 elections. Analyzing data collected from my 2011 state party survey and party-fund-transfer data collected from the Federal Election Commission, I find that the national parties considered a combination of state and national electoral concerns in directing assistance to the state parties through their 50-state strategies, as opposed to the strict battleground-state strategy that explains party fund transfers. In my last chapter, I examine the relationships between platforms issued by Democratic and Republican state and national parties and the strategic considerations that explain why state platforms vary in their degree of similarity to the national platform. I analyze an extensive platform dataset, using cluster analysis and document similarity measures to compare platform content across the 1952 to 2014 period. The analysis shows that, as a group, Democratic and Republican state platforms exhibit greater intra-party homogeneity and inter-party heterogeneity starting in the early 1990s, and state-national platform similarity is higher in states that are key players in presidential elections, among other factors. Together, these three studies demonstrate the significance of the state party organizations and the state-national party partnership in contemporary politics.
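Document-similarity comparisons of the kind described in the last chapter are often built on vector-space representations; the sketch below (toy documents, scikit-learn's TF-IDF and cosine similarity rather than the dissertation's specific measures) illustrates scoring how closely each state platform tracks a national platform.

```python
# Minimal sketch: TF-IDF vectors and cosine similarity as one way to measure
# state-national platform similarity. Toy documents, not the actual corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

platforms = {
    "national": "economy jobs health care education energy security",
    "state_a": "jobs economy education roads agriculture",
    "state_b": "gun rights property taxes water agriculture",
}

tfidf = TfidfVectorizer().fit_transform(platforms.values())
sims = cosine_similarity(tfidf[0], tfidf[1:])            # each state vs. the national platform
for name, s in zip(list(platforms)[1:], sims[0]):
    print(name, round(float(s), 3))
```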