Molecular vibration spectroscopy study of irradiation effect in C-60 films induced by low energy ion
Abstract:
Irradiation effects induced in C-60 films by 170 keV B ions were investigated by means of Fourier transform infrared (FTIR) and Raman spectroscopy. The damage cross section sigma and the effective damage radius R are deduced from the experimental data for all four IR-active modes and the four evident Raman-active modes of the C-60 molecule. The differences in irradiation sensitivity and structural stability among the different active modes of the C-60 molecule are compared. The results indicate that the T1u(4) infrared-active mode and the Ag(1) Raman-active mode are the most sensitive to B ion irradiation. On the other hand, the T1u(2) infrared-active mode and the Hg(3) Raman-active mode are comparatively stable under B ion irradiation. (C) 2010 American Institute of Physics. [doi:10.1063/1.3512968]
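The damage cross section is typically extracted from the decay of a band's intensity with ion fluence. Below is a minimal sketch of that step, assuming the usual single-hit exponential model I(F) = I0 exp(-sigma F) and R = sqrt(sigma/pi); the fluence and intensity values are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch: extracting a damage cross section from IR/Raman band decay,
# assuming the single-hit model I(F) = I0 * exp(-sigma * F), with F the ion fluence.
# Fluences and intensities below are illustrative placeholders, not data from the paper.
import numpy as np

fluence = np.array([0.0, 0.5, 1.0, 2.0, 4.0]) * 1e14   # ions/cm^2 (hypothetical)
intensity = np.array([1.00, 0.78, 0.61, 0.37, 0.14])   # normalized band intensity (hypothetical)

# Linear fit of ln(I/I0) versus fluence gives slope -sigma.
slope, _ = np.polyfit(fluence, np.log(intensity / intensity[0]), 1)
sigma = -slope                       # damage cross section, cm^2
radius = np.sqrt(sigma / np.pi)      # effective damage radius, cm

print(f"sigma = {sigma:.2e} cm^2, R = {radius:.2e} cm")
```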
Abstract:
We propose and experimentally validate a first-principles based model for the nonlinear piezoelectric response of an electroelastic energy harvester. The analysis herein highlights the importance of modeling inherent piezoelectric nonlinearities that are not limited to higher order elastic effects but also include nonlinear coupling to a power harvesting circuit. Furthermore, a nonlinear damping mechanism is shown to accurately restrict the amplitude and bandwidth of the frequency response. The linear piezoelectric modeling framework widely accepted for theoretical investigations is demonstrated to be a weak presumption for near-resonant excitation amplitudes as low as 0.5 g in a prefabricated bimorph whose oscillation amplitudes remain geometrically linear for the full range of experimental tests performed (never exceeding 0.25% of the cantilever overhang length). Nonlinear coefficients are identified via a nonlinear least-squares optimization algorithm that utilizes an approximate analytic solution obtained by the method of harmonic balance. For lead zirconate titanate (PZT-5H), we obtained a fourth order elastic tensor component of c1111^p = -3.6673 × 10^17 N/m^2 and a fourth order electroelastic tensor value of e3111 = 1.7212 × 10^8 m/V. © 2010 American Institute of Physics.
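The identification strategy pairs a harmonic-balance amplitude equation with nonlinear least squares. The sketch below illustrates that workflow on a single-mode Duffing oscillator with linear damping standing in for the paper's electroelastic model; the natural frequency, forcing level, and "measured" amplitudes are synthetic assumptions, so the recovered numbers are illustrative only.

```python
# Sketch of nonlinear least-squares identification against a harmonic-balance
# amplitude equation, using a Duffing oscillator as a stand-in model.
import numpy as np
from scipy.optimize import least_squares, brentq

w0 = 2 * np.pi * 10.0          # linear natural frequency (rad/s), assumed
F = 5.0                        # forcing amplitude, assumed

def hb_residual(params, Omega, a):
    """Harmonic-balance residual for x'' + 2*zeta*w0*x' + w0^2*x + alpha*x^3 = F*cos(Omega*t)."""
    zeta, alpha = params
    restoring = (w0**2 - Omega**2) * a + 0.75 * alpha * a**3
    damping = 2.0 * zeta * w0 * Omega * a
    return restoring**2 + damping**2 - F**2

# Synthetic "measured" steady-state amplitudes generated from known parameters.
true_params = (0.01, -5.0e4)   # damping ratio, cubic stiffness coefficient
Omega = 2 * np.pi * np.array([9.6, 9.8, 10.0, 10.2, 10.4])
a_meas = np.array([brentq(lambda a: hb_residual(true_params, W, a), 1e-6, 1.0) for W in Omega])

# Identify (zeta, alpha) by nonlinear least squares on the harmonic-balance equation.
fit = least_squares(hb_residual, x0=(0.05, -1.0e4), args=(Omega, a_meas), x_scale=[0.01, 1.0e4])
print("identified (zeta, alpha):", fit.x)
```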
Abstract:
We demonstrate a scalable approach to addressing multiple atomic qubits for use in quantum information processing. Individually trapped 87Rb atoms in a linear array are selectively manipulated with a single laser guided by a microelectromechanical beam steering system. Single qubit oscillations are shown on multiple sites at frequencies of ≃3.5 MHz with negligible crosstalk to neighboring sites. Switching times between the central atom and its closest neighbor were measured to be 6-7 μs, while moving between the central atom and an atom two trap sites away took 10-14 μs. © 2010 American Institute of Physics.
Abstract:
When solid material is removed in order to create flow channels in a load carrying structure, the strength of the structure decreases. On the other hand, a structure with channels is lighter and easier to transport as part of a vehicle. Here, we show that this trade-off can be used to advantage in designing a vascular mechanical structure. When the total amount of solid is fixed and the sizes, shapes, and positions of the channels can vary, it is possible to morph the flow architecture such that it endows the mechanical structure with maximum strength. The result is a multifunctional structure that offers not only mechanical strength but also the new capabilities necessary for volumetric functionalities such as self-healing and self-cooling. We illustrate the generation of such designs for strength and fluid flow for several classes of vasculatures: parallel channels and trees with one, two, and three bifurcation levels. The flow regime in every channel is laminar and fully developed. In each case, we found that it is possible to select not only the channel dimensions but also their positions such that the entire structure offers more strength and less flow resistance when the total volume (or weight) and the total channel volume are fixed. We show that the minimized peak stress is smaller when the channel volume fraction (φ) is smaller and the vasculature is more complex, i.e., with more levels of bifurcation. Diminishing returns are reached in both directions, decreasing φ and increasing complexity. For example, when φ=0.02 the minimized peak stress of a design with one bifurcation level is only 0.2% greater than the peak stress in the optimized vascular design with two levels of bifurcation. © 2010 American Institute of Physics.
Abstract:
Here we show that the configuration of a slender enclosure can be optimized such that the radiation heating of a stream of solid is performed with minimal fuel consumption at the global level. The solid moves longitudinally at a constant rate through the enclosure. The enclosure is heated by gas burners distributed arbitrarily, in a manner that is to be determined. The total contact area for heat transfer between the hot enclosure and the cold solid is fixed. We find that minimal global fuel consumption is achieved when the longitudinal distribution of heaters is nonuniform, with more heaters near the exit than near the entrance. The reduction in fuel consumption relative to when the heaters are distributed uniformly is of order 10%. Tapering the plan view (the floor) of the heating area yields an additional reduction in overall fuel consumption. The best shape is obtained when the floor area is a slender triangle onto which the cold solid enters by crossing the base. These architectural features support the proposal to organize the flow of the solid as a dendritic design, which enters as several branches and exits as a single hot stream of prescribed temperature. The thermodynamics of heating (exergy destruction, entropy generation) is presented in modern terms in a dedicated section. The contribution is to show that optimizing "thermodynamically" is the same as reducing the consumption of fuel. © 2010 American Institute of Physics.
Abstract:
Protocorporatist West European countries in which economic interests were collectively organized adopted PR in the first quarter of the twentieth century, whereas liberal countries in which economic interests were not collectively organized did not. Political parties, as Marcus Kreuzer points out, choose electoral systems. So how do economic interests translate into party political incentives to adopt electoral reform? We argue that parties in protocorporatist countries were representative of and closely linked to economic interests. As electoral competition in single-member districts increased sharply up to World War I, great difficulties resulted for the representative parties, whose leaders were seen as interest committed. They could not credibly compete for votes outside their interest without leadership changes or reductions in interest influence. Proportional representation offered an obvious solution, allowing parties to target their own voters and their organized interest to continue effective influence in the legislature. In each respect, the opposite was true of liberal countries. Data on party preferences strongly confirm this model. (Kreuzer's historical criticisms are largely incorrect, as shown in detail in the online supplementary Appendix.) © 2010 American Political Science Association.
Abstract:
The Million Mom March (favoring gun control) and Code Pink: Women for Peace (focusing on foreign policy, especially the war in Iraq) are organizations that have mobilized women as women in an era when other women's groups struggled to maintain critical mass and turned away from non-gender-specific public issues. This article addresses how these organizations fostered collective consciousness among women, a large and diverse group, while confronting the echoes of backlash against previous mobilization efforts by women. We argue that the March and Code Pink achieved mobilization success by creating hybrid organizations that blended elements of three major collective action frames: maternalism, egalitarianism, and feminine expression. These innovative organizations invented hybrid forms that cut across movements, constituencies, and political institutions. Using surveys, interviews, and content analysis of organizational documents, this article explains how the March and Code Pink met the contemporary challenges facing women's collective action in similar yet distinct ways. It highlights the role of feminine expression and concerns about the intersectional marginalization of women in resolving the historic tensions between maternalism and egalitarianism. It demonstrates hybridity as a useful analytical lens to understand gendered organizing and other forms of grassroots collective action. © 2010 American Political Science Association.
Abstract:
Successfully predicting the frequency dispersion of electronic hyperpolarizabilities is an unresolved challenge in materials science and electronic structure theory. We show that the generalized Thomas-Kuhn sum rules, combined with linear absorption data and the hyperpolarizability measured at one or two frequencies, may be used to predict the entire frequency-dependent electronic hyperpolarizability spectrum. This treatment includes two- and three-level contributions that arise from the lowest two or three excited electronic state manifolds, enabling us to describe the unusual observed frequency dispersion of the dynamic hyperpolarizability in high oscillator strength M-PZn chromophores, where (porphinato)zinc(II) (PZn) and metal(II)polypyridyl (M) units are connected via an ethyne unit that aligns the high oscillator strength transition dipoles of these components in a head-to-tail arrangement. We show that some of these structures can possess very similar linear absorption spectra yet manifest dramatically different frequency-dependent hyperpolarizabilities because of three-level contributions that result from excited-state-to-excited-state transition dipoles among charge-polarized states. Importantly, this approach provides a quantitative scheme for using linear optical absorption spectra and very limited individual hyperpolarizability measurements to predict the entire frequency-dependent nonlinear optical response. Copyright © 2010 American Chemical Society.
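For orientation, the simplest member of this family of dispersion relations is the two-level model, in which one measured hyperpolarizability and the linear absorption maximum fix the whole curve. The sketch below implements only that limiting case; the paper's approach adds three-level (excited-state-to-excited-state) terms that are omitted here, and the absorption maximum and anchor measurement used are hypothetical.

```python
# Minimal sketch of the two-level dispersion model for the first hyperpolarizability,
# beta(-2w; w, w) = beta0 * E0^4 / ((E0^2 - E^2) * (E0^2 - 4*E^2)),
# scaled off a single hypothetical measurement. Three-level contributions are omitted.
import numpy as np

h_c = 1239.84                   # eV*nm, for converting wavelength to photon energy
E0 = h_c / 450.0                # lowest absorption maximum (assumed at 450 nm)

def two_level_dispersion(E_photon, E0):
    """Dimensionless two-level enhancement factor versus fundamental photon energy."""
    return E0**4 / ((E0**2 - E_photon**2) * (E0**2 - 4 * E_photon**2))

# Anchor the curve to a single hypothetical measurement at 1300 nm.
E_meas = h_c / 1300.0
beta_meas = 500e-30             # esu, placeholder value
beta0 = beta_meas / two_level_dispersion(E_meas, E0)

for lam in (1500, 1300, 1100, 1000):        # fundamental wavelengths in nm
    E = h_c / lam
    print(lam, "nm ->", beta0 * two_level_dispersion(E, E0), "esu")
```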
Abstract:
The objective of spatial downscaling strategies is to increase the information content of coarse datasets at smaller scales. In the case of quantitative precipitation estimation (QPE) for hydrological applications, the goal is to close the scale gap between the spatial resolution of coarse datasets (e.g., gridded satellite precipitation products at resolution L × L) and the high resolution (l × l; L»l) necessary to capture the spatial features that determine spatial variability of water flows and water stores in the landscape. In essence, the downscaling process consists of weaving subgrid-scale heterogeneity over a desired range of wavelengths in the original field. The defining question is, which properties, statistical and otherwise, of the target field (the known observable at the desired spatial resolution) should be matched, with the caveat that downscaling methods be as general as possible and therefore ideally free of case-specific constraints and/or calibration requirements? Here, the attention is focused on two simple fractal downscaling methods, using iterated function systems (IFS) and fractal Brownian surfaces (FBS), that meet this requirement. The two methods were applied to disaggregate spatially 27 summertime convective storms in the central United States during 2007 at three consecutive times (1800, 2100, and 0000 UTC, thus 81 fields overall) from the Tropical Rainfall Measuring Mission (TRMM) version 6 (V6) 3B42 precipitation product (~25-km grid spacing) to the same resolution as the NCEP stage IV products (~4-km grid spacing). Results from bilinear interpolation are used as the control. A fundamental distinction between IFS and FBS is that the latter implies a distribution of downscaled fields and thus an ensemble solution, whereas the former provides a single solution. The downscaling effectiveness is assessed using fractal measures (the spectral exponent β, fractal dimension D, Hurst coefficient H, and roughness amplitude R) and traditional operational skill scores [false alarm rate (FR), probability of detection (PD), threat score (TS), and Heidke skill score (HSS)], as well as bias and the root-mean-square error (RMSE). The results show that both IFS and FBS fractal interpolation perform well with regard to operational skill scores, and they meet the additional requirement of generating structurally consistent fields. Furthermore, confidence intervals can be directly generated from the FBS ensemble. The results were used to diagnose errors relevant for hydrometeorological applications, in particular a spatial displacement with a characteristic length of at least 50 km (2500 km2) in the location of peak rainfall intensities for the cases studied. © 2010 American Meteorological Society.
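To make the FBS idea concrete, the sketch below generates one ensemble member by spectral synthesis of a random surface with an approximately power-law spectrum and rescales it block by block so that the coarse-cell means are preserved. The grid sizes, spectral slope, and coarse field are illustrative assumptions, not the TRMM or stage IV data used in the study.

```python
# Minimal sketch of one ensemble member of fractal downscaling with a fractional
# Brownian surface: synthesize a field with power spectrum ~ k^(-beta), then
# rescale each coarse cell so block averages reproduce the coarse precipitation.
import numpy as np

rng = np.random.default_rng(0)
n_coarse, ratio = 8, 6                 # 8x8 coarse cells, ~6x refinement (25 km -> ~4 km)
n_fine = n_coarse * ratio
beta = 3.0                             # assumed spectral slope of the target field

# Spectral synthesis of a random surface with an approximate k^(-beta) power spectrum.
kx = np.fft.fftfreq(n_fine)[:, None]
ky = np.fft.fftfreq(n_fine)[None, :]
k = np.sqrt(kx**2 + ky**2)
k[0, 0] = 1.0                          # avoid division by zero at the mean
amplitude = k ** (-beta / 2.0)
phase = np.exp(2j * np.pi * rng.random((n_fine, n_fine)))
field = np.real(np.fft.ifft2(amplitude * phase))
field -= field.min()                   # keep the multiplicative weights non-negative

coarse = rng.gamma(shape=2.0, scale=5.0, size=(n_coarse, n_coarse))   # mm/h, placeholder

# Rescale block by block so each coarse-cell mean is preserved exactly.
fine = np.zeros((n_fine, n_fine))
for i in range(n_coarse):
    for j in range(n_coarse):
        block = field[i*ratio:(i+1)*ratio, j*ratio:(j+1)*ratio]
        weights = block / block.mean() if block.mean() > 0 else np.ones_like(block)
        fine[i*ratio:(i+1)*ratio, j*ratio:(j+1)*ratio] = coarse[i, j] * weights

print("coarse mean:", coarse.mean(), "fine mean:", fine.mean())
```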
Abstract:
The variability of summer precipitation in the southeastern United States is examined in this study using 60-yr (1948-2007) rainfall data. Southeast summer rainfall exhibited higher interannual variability, with more intense summer droughts and anomalous wetness, in the recent 30 years (1978-2007) than in the prior 30 years (1948-77). Such intensification of summer rainfall variability was consistent with a decrease of light (0.1-1 mm day-1) and medium (1-10 mm day-1) rainfall events during extremely dry summers and an increase of heavy (>10 mm day-1) rainfall events in extremely wet summers. Changes in rainfall variability were also accompanied by a southward shift of the region of maximum zonal wind variability at the jet stream level in the latter period. The covariability between Southeast summer precipitation and sea surface temperatures (SSTs) is also analyzed using the singular value decomposition (SVD) method. It is shown that the increase of Southeast summer precipitation variability is primarily associated with higher SST variability across the equatorial Atlantic and also with SST warming in the Atlantic. © 2010 American Meteorological Society.
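The SVD (maximum covariance) step can be summarized in a few lines: decompose the cross-covariance matrix between the two anomaly fields and inspect the leading coupled mode. The sketch below uses random placeholder arrays in place of the precipitation and SST datasets, so only the mechanics, not the results, carry over.

```python
# Minimal sketch of SVD (maximum covariance) analysis relating two anomaly fields:
# the leading singular vectors of the precip-SST cross-covariance matrix give the
# most strongly covarying spatial patterns. Data are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_precip_pts, n_sst_pts = 60, 100, 200
precip = rng.standard_normal((n_years, n_precip_pts))   # placeholder anomalies (years x grid points)
sst = rng.standard_normal((n_years, n_sst_pts))         # placeholder anomalies (years x grid points)

# Remove the time mean, form the cross-covariance matrix, and decompose it.
P = precip - precip.mean(axis=0)
S = sst - sst.mean(axis=0)
C = P.T @ S / (n_years - 1)                  # (n_precip_pts x n_sst_pts)
U, sigma, Vt = np.linalg.svd(C, full_matrices=False)

# Squared covariance fraction explained by each coupled mode.
scf = sigma**2 / np.sum(sigma**2)
print("leading mode explains", round(100 * scf[0], 1), "% of squared covariance")

# Expansion coefficients (time series) of the leading coupled mode.
pc_precip = P @ U[:, 0]
pc_sst = S @ Vt[0, :]
print("correlation of expansion coefficients:", np.corrcoef(pc_precip, pc_sst)[0, 1])
```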
Abstract:
We consider the problem of variable selection in regression modeling in high-dimensional spaces where there is known structure among the covariates. This is an unconventional variable selection problem for two reasons: (1) the dimension of the covariate space is comparable to, and often much larger than, the number of subjects in the study, and (2) the covariate space is highly structured, and in some cases it is desirable to incorporate this structural information into the model-building process. We approach this problem through the Bayesian variable selection framework, where we assume that the covariates lie on an undirected graph and formulate an Ising prior on the model space for incorporating structural information. Certain computational and statistical problems arise that are unique to such high-dimensional, structured settings, the most interesting being the phenomenon of phase transitions. We propose theoretical and computational schemes to mitigate these problems. We illustrate our methods on two different graph structures: the linear chain and the regular graph of degree k. Finally, we use our methods to study a specific application in genomics: the modeling of transcription factor binding sites in DNA sequences. © 2010 American Statistical Association.
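A minimal sketch of what an Ising prior over inclusion indicators looks like on the linear-chain graph is given below, using one common parameterization, p(gamma) proportional to exp(a * sum_i gamma_i + b * sum_{(i,j) in E} gamma_i * gamma_j). The hyperparameters and the tiny example are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of an Ising prior on variable-inclusion indicators gamma in {0,1}^p
# for covariates linked by an undirected graph: a < 0 penalizes inclusions overall,
# b > 0 rewards including neighboring covariates together. Illustrative only.
import numpy as np

def ising_log_prior(gamma, edges, a=-2.0, b=1.0):
    """log p(gamma) up to a constant: a * sum_i gamma_i + b * sum_{(i,j) in E} gamma_i * gamma_j."""
    gamma = np.asarray(gamma)
    sparsity_term = a * gamma.sum()
    smoothness_term = b * sum(gamma[i] * gamma[j] for i, j in edges)
    return sparsity_term + smoothness_term

p = 6
chain_edges = [(i, i + 1) for i in range(p - 1)]   # linear-chain graph on 6 covariates

# A contiguous block of selected covariates is favored over a scattered selection
# with the same number of inclusions.
print(ising_log_prior([1, 1, 1, 0, 0, 0], chain_edges))   # -4.0
print(ising_log_prior([1, 0, 1, 0, 1, 0], chain_edges))   # -6.0
```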
Abstract:
This article describes advances in statistical computation for large-scale data analysis in structured Bayesian mixture models via graphics processing unit (GPU) programming. The developments are partly motivated by computational challenges arising in fitting models of increasing heterogeneity to increasingly large datasets. An example context concerns common biological studies using high-throughput technologies that generate many very large datasets and require increasingly high-dimensional mixture models with large numbers of mixture components. We outline important strategies and processes for GPU computation in Bayesian simulation and optimization approaches, give examples of the benefits of GPU implementations in terms of processing speed and scale-up in the ability to analyze large datasets, and provide a detailed, tutorial-style exposition that will benefit readers interested in developing GPU-based approaches in other statistical models. Novel, GPU-oriented approaches to modifying existing algorithms and software design can lead to vast speed-ups and, critically, enable statistical analyses that presently are not performed due to compute time limitations in traditional computational environments. Supplemental materials are provided with all source code, example data, and details that will enable readers to implement and explore the GPU approach in this mixture modeling context. © 2010 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
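The workhorse of such GPU implementations is the embarrassingly parallel evaluation of per-observation, per-component densities. The sketch below shows that step for a toy Gaussian mixture, with NumPy broadcasting standing in for the GPU kernel; the data, parameters, and shared isotropic variance are simplifying assumptions, not the paper's models or code.

```python
# Minimal sketch of the dominant data-parallel step in fitting a Gaussian mixture:
# evaluating, for every observation, its responsibility under every component.
# On a GPU each (observation, component) pair maps naturally to a thread; here
# NumPy broadcasting stands in for that parallelism. Data and parameters are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 100_000, 5, 8                       # observations, dimension, mixture components
X = rng.standard_normal((n, d))
means = rng.standard_normal((k, d))
log_weights = np.log(np.full(k, 1.0 / k))
var = 1.0                                     # shared isotropic variance, for simplicity

# Log-density of every observation under every component (n x k), computed at once.
sq_dist = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
log_dens = -0.5 * sq_dist / var - 0.5 * d * np.log(2 * np.pi * var)

# Normalized responsibilities via a stable log-sum-exp.
log_post = log_weights + log_dens
log_post -= log_post.max(axis=1, keepdims=True)
resp = np.exp(log_post)
resp /= resp.sum(axis=1, keepdims=True)
print(resp.shape, resp.sum(axis=1)[:3])       # rows sum to 1
```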
Abstract:
The Telescope Array is a detector of extensive air showers produced by ultrahigh-energy cosmic rays. The detector is located in Utah, USA. Construction has been completed, and full operation has been under way since March 2008. In this talk, the status of the observations and our prospects are described. © 2010 American Institute of Physics.
Abstract:
The expansion of a dense plasma through a more rarefied ionized medium is a phenomenon of interest in various physics environments, ranging from astrophysics to high energy density laser-matter laboratory experiments. Here this situation is modeled via a one-dimensional particle-in-cell simulation; a jump in the plasma density by a factor of 100 is introduced in the middle of an otherwise equally dense electron-proton plasma with uniform proton and electron temperatures of 10 eV and 1 keV, respectively. The diffusion of the dense plasma through the rarefied one triggers the onset of different nonlinear phenomena, such as a strong ion-acoustic shock wave and a rarefaction wave. Secondary structures are detected, some of which are driven by a drift instability of the rarefaction wave. Efficient proton acceleration occurs ahead of the shock, bringing the maximum proton velocity up to 60 times the initial ion thermal speed. (C) 2010 American Institute of Physics. [doi: 10.1063/1.3469762]
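The quoted temperatures set the characteristic speeds of the problem. As a quick consistency check using the textbook definitions (not values reported in the paper), the sketch below evaluates the ion-acoustic speed and the proton thermal speed and expresses the quoted maximum proton velocity in both units.

```python
# Minimal sketch of the characteristic speeds implied by the quoted temperatures,
# using the standard definitions c_s = sqrt(k_B*T_e/m_p) (ion-acoustic speed) and
# v_ti = sqrt(k_B*T_i/m_p) (proton thermal speed). Textbook estimates only.
import math

e = 1.602176634e-19      # J per eV
m_p = 1.67262192e-27     # proton mass, kg

T_e = 1000.0             # electron temperature, eV
T_i = 10.0               # proton temperature, eV

c_s = math.sqrt(T_e * e / m_p)          # ~3.1e5 m/s
v_ti = math.sqrt(T_i * e / m_p)         # ~3.1e4 m/s

print(f"ion-acoustic speed   c_s  = {c_s:.2e} m/s")
print(f"proton thermal speed v_ti = {v_ti:.2e} m/s")
# The quoted maximum proton velocity of 60*v_ti corresponds to about 6*c_s here,
# since T_e/T_i = 100 implies c_s/v_ti = 10.
print(f"60*v_ti = {60 * v_ti:.2e} m/s = {60 * v_ti / c_s:.1f} c_s")
```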
Abstract:
Dust ion acoustic solitons in an unmagnetized dusty plasma comprising cold dust particles, adiabatic fluid ions, and electrons satisfying a kappa distribution are investigated using both small amplitude and arbitrary amplitude techniques. Their existence domain is discussed in the parameter space of Mach number M and electron density fraction f over a wide range of values of kappa. For all kappa > 3/2, including the Maxwellian limit, negative dust supports solitons of both polarities over a range in f. In that region of parameter space, solitary structures of finite amplitude can be obtained even at the lowest Mach number, the acoustic speed, for all kappa. These cannot be found from small amplitude theories. This surprising behavior is investigated, and it is shown that f_c, the value of f at which the KdV coefficient A vanishes, plays a critical role. In the presence of positive dust, only positive potential solitons are found. (C) 2010 American Institute of Physics. [doi: 10.1063/1.3400229]
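The kappa-distributed electrons enter such models through their density response to the electrostatic potential. The sketch below evaluates the commonly used normalized expression and its convergence to the Boltzmann (Maxwellian) response as kappa grows; the potential and kappa values are illustrative, and the formula is the standard one rather than an expression quoted from this paper.

```python
# Minimal sketch of the kappa-distributed electron density response,
# n_e/n_0 = (1 - phi/(kappa - 3/2))**(-kappa + 1/2), with phi normalized by k_B*T_e/e,
# and its convergence to the Boltzmann response exp(phi) as kappa -> infinity.
import numpy as np

def kappa_density(phi, kappa):
    """Normalized electron density for a kappa distribution (valid for kappa > 3/2)."""
    return (1.0 - phi / (kappa - 1.5)) ** (-kappa + 0.5)

phi = 0.2                                   # normalized electrostatic potential (illustrative)
for kappa in (2.0, 3.0, 6.0, 20.0, 100.0):
    print(f"kappa = {kappa:6.1f}: n_e/n_0 = {kappa_density(phi, kappa):.4f}")
print(f"Maxwellian limit:        n_e/n_0 = {np.exp(phi):.4f}")
```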