866 results for Generalized Lebesgue Spaces
Abstract:
The generalized Langevin equation (GLE) has recently been suggested as a way to simulate the time evolution of classical solid and molecular systems undergoing general nonequilibrium processes. In this approach, a part of the whole system (an open system), which interacts and exchanges energy with its dissipative environment, is studied. Because the GLE is derived by projecting out the harmonic environment exactly, the coupling to it is realistic, while the equations of motion are non-Markovian. Although the GLE formalism has already found promising applications, e.g., in nanotribology and as a powerful thermostat for equilibration in classical molecular dynamics simulations, efficient algorithms to solve the GLE for realistic memory kernels are highly nontrivial, especially if the memory kernels decay nonexponentially. This is because one has to generate colored noise and account for the memory effects in a consistent manner. In this paper, we present a simple yet efficient algorithm for solving the GLE for practical memory kernels, and we demonstrate its capability for the exactly solvable case of a harmonic oscillator coupled to a Debye bath.
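For orientation, the GLE discussed in this abstract can be written in its standard one-dimensional form (generic notation, not necessarily that of the paper):

  \[
    m\,\ddot{q}(t) = -\frac{\partial V}{\partial q}
      - \int_{0}^{t} K(t-t')\,\dot{q}(t')\,\mathrm{d}t' + \eta(t),
    \qquad
    \langle \eta(t)\,\eta(t') \rangle = k_{\mathrm{B}} T \, K(|t-t'|),
  \]

where K is the memory kernel obtained by projecting out the harmonic bath and the colored noise \eta is tied to K by the fluctuation-dissipation theorem. Generating \eta consistently with a nonexponentially decaying K is precisely the difficulty the abstract refers to.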
Abstract:
This paper introduces hybrid address spaces as a fundamental design methodology for implementing scalable runtime systems on many-core architectures without hardware support for cache coherence. We use hybrid address spaces for an implementation of MapReduce, a programming model for large-scale data processing, and for the implementation of a remote memory access (RMA) model. Both implementations are available on the Intel SCC and are portable to similar architectures. We present the design and implementation of HyMR, a MapReduce runtime system in which the different stages and the synchronization operations between them alternate between a distributed memory address space and a shared memory address space to improve performance and scalability. We compare HyMR to a reference implementation and find that HyMR improves performance by a factor of 1.71× over a set of representative MapReduce benchmarks. We also compare HyMR with Phoenix++, a state-of-the-art implementation for systems with hardware-managed cache coherence, in terms of scalability and sustained-to-peak data processing bandwidth, where HyMR demonstrates improvements by factors of 3.1× and 3.2×, respectively. We further evaluate our hybrid remote memory access (HyRMA) programming model and assess its performance to be superior to that of message passing.
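The SCC-specific runtime is not reproduced here, but the idea of alternating stages between private (per-process) address spaces and a shared one can be illustrated with a minimal Python analogue using the standard multiprocessing module; the stage layout and names below are illustrative assumptions, not the HyMR API.

  # Minimal illustration (not the HyMR API): map stage in private address
  # spaces with explicit message passing, merge stage through shared state.
  from multiprocessing import Process, Queue, Manager
  from collections import Counter

  def map_stage(chunk, queue):
      # Each mapper works in its own address space and ships partial
      # counts over a queue -- the "distributed memory" phase.
      queue.put(Counter(chunk.split()))

  def merge_stage(partial, shared):
      # Partial results are merged into a shared dictionary -- the
      # "shared memory" phase.
      for word, n in partial.items():
          shared[word] = shared.get(word, 0) + n

  if __name__ == "__main__":
      chunks = ["a b a", "b c c", "a c"]
      queue = Queue()
      mappers = [Process(target=map_stage, args=(c, queue)) for c in chunks]
      for p in mappers:
          p.start()
      partials = [queue.get() for _ in chunks]
      for p in mappers:
          p.join()
      with Manager() as mgr:
          shared = mgr.dict()
          for partial in partials:
              merge_stage(partial, shared)
          print(dict(shared))   # {'a': 3, 'b': 2, 'c': 3}

The point of the sketch is only the alternation: message passing where coherence is unavailable, shared state where it is cheap.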
Abstract:
Increasingly, semiconductor manufacturers are exploring opportunities for virtual metrology (VM) enabled process monitoring and control as a means of reducing non-value-added metrology and achieving ever more demanding wafer fabrication tolerances. However, developing robust, reliable and interpretable VM models can be very challenging due to the highly correlated input space often associated with the underpinning data sets. A particularly pertinent example is etch rate prediction of plasma etch processes from multichannel optical emission spectroscopy data. This paper proposes a novel input-clustering based forward stepwise regression methodology for VM model building in such highly correlated input spaces. Max Separation Clustering (MSC) is employed as a pre-processing step to identify a reduced set of well-conditioned, representative variables that can then be used as inputs to state-of-the-art model building techniques such as Forward Selection Regression (FSR), Ridge Regression, LASSO and Forward Selection Ridge Regression (FSRR). The methodology is validated on a benchmark semiconductor plasma etch dataset and the results obtained are compared with those achieved when the state-of-the-art approaches are applied directly to the data without the MSC pre-processing step. Significant performance improvements are observed when MSC is combined with FSR (13%) and FSRR (8.5%), but not with Ridge Regression (-1%) or LASSO (-32%). The optimal VM results are obtained using the MSC-FSR and MSC-FSRR generated models.
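The MSC step itself is specific to the paper, but the forward stepwise selection it feeds can be sketched in a few lines; the greedy residual-sum-of-squares criterion below is a generic stand-in for the FSR/FSRR variants, not the authors' code.

  # Generic greedy forward selection sketch: at each step, add the
  # candidate variable that most reduces the residual sum of squares.
  import numpy as np

  def forward_select(X, y, max_vars=5):
      n, p = X.shape
      selected, remaining = [], list(range(p))
      for _ in range(min(max_vars, p)):
          best_j, best_rss = None, np.inf
          for j in remaining:
              cols = selected + [j]
              A = np.column_stack([np.ones(n), X[:, cols]])  # intercept + candidates
              coef, *_ = np.linalg.lstsq(A, y, rcond=None)
              rss = np.sum((y - A @ coef) ** 2)
              if rss < best_rss:
                  best_j, best_rss = j, rss
          selected.append(best_j)
          remaining.remove(best_j)
      return selected

  # Usage: X would hold the MSC cluster representatives, y the etch rate.
  rng = np.random.default_rng(0)
  X = rng.normal(size=(50, 10))
  y = 2.0 * X[:, 3] - X[:, 7] + 0.1 * rng.normal(size=50)
  print(forward_select(X, y, max_vars=2))   # typically [3, 7]

The ridge variant (FSRR) would replace the least-squares fit inside the loop with a penalized fit; the greedy outer loop is unchanged.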
Abstract:
On 26 December 2003, an Israeli activist was shot by the Israeli Army while he was participating in a demonstration organized by Anarchists Against the Wall (AAtW) in the West Bank. This was the first time Israeli soldiers had deliberately fired live bullets at a Jewish-Israeli activist. This paper is an attempt to understand the set of conditions, the enveloping frameworks, and the new discourses that have made this event, and the similar shootings that soon followed, possible. Situating the actions of AAtW within a much wider context of securitization—of identities, movements, and bodies—we examine strategies of resistance that are deployed in highly securitized public spaces. We claim that an unexpected matrix of identity, in which abnormality is configured as a security threat, renders the bodies of activists especially precarious. The paper thus provides an account of the new rationales of the security technologies and tactics that increasingly govern public spaces.
Forward Stepwise Ridge Regression (FSRR) based variable selection for highly correlated input spaces
Abstract:
We consider transmit antenna selection with receive generalized selection combining (TAS/GSC) for cognitive decode-and-forward (DF) relaying in Nakagami-m fading channels. In an effort to assess the performance, the probability density function and the cumulative distribution function of the end-to-end SNR are derived using the moment generating function, from which new exact closed-form expressions for the outage probability and the symbol error rate are derived. We then derive a new closed-form expression for the ergodic capacity. More importantly, by deriving the asymptotic expressions for the outage probability and the symbol error rate, as well as the high-SNR approximations of the ergodic capacity, we establish new design insights under two distinct constraint scenarios: 1) a proportional interference power constraint, and 2) a fixed interference power constraint. Several pivotal conclusions are reached. For the first scenario, the full diversity order of the outage probability and the symbol error rate is achieved, and the high-SNR slope of the ergodic capacity is 1/2. For the second scenario, the diversity order of the outage probability and the symbol error rate is zero with error floors, and the high-SNR slope of the ergodic capacity is zero with a capacity ceiling.
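For readers unfamiliar with the two figures of merit quoted above, the standard definitions (generic notation, not taken from the paper) are

  \[
    d \;=\; -\lim_{\rho \to \infty} \frac{\log P_{\mathrm{out}}(\rho)}{\log \rho},
    \qquad
    S_{\infty} \;=\; \lim_{\rho \to \infty} \frac{C(\rho)}{\log_{2} \rho},
  \]

where \rho is the transmit SNR, P_out(\rho) the outage probability, and C(\rho) the ergodic capacity. The contrasting results for the two constraint scenarios can be read directly off these definitions: an error floor corresponds to d = 0 and a capacity ceiling to S_\infty = 0.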
Abstract:
The generalized Langevin equation (GLE) method, as developed previously [L. Stella et al., Phys. Rev. B 89, 134303 (2014)], is used to calculate the dissipative dynamics of systems described at the atomic level. The GLE scheme goes beyond the commonly used bilinear coupling between the central system and the bath, and permits a realistic description of both the dissipative central system and its surrounding bath. We show how to obtain the vibrational properties of a realistic bath and how to convey them into an extended Langevin dynamics by mapping the bath vibrational properties onto a set of auxiliary variables. Our calculations for a model of a Lennard-Jones solid show that our GLE scheme provides a stable dynamics, with the dissipative/relaxation processes properly described. The total kinetic energy of the central system always thermalizes toward the expected bath temperature, with appropriate fluctuations around the mean value. More importantly, we obtain a velocity distribution for the individual atoms in the central system which follows the expected canonical distribution at the corresponding temperature. This confirms that both our GLE scheme and our mapping procedure onto an extended Langevin dynamics provide the correct thermostat. We also examine the velocity autocorrelation functions and compare our results with those of more conventional Langevin dynamics.
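The mapping onto auxiliary variables can be illustrated for the simplest case of a single exponential kernel, K(t) = (γ/τ) e^{-t/τ}, which is a textbook Markovian embedding rather than the authors' full fitting procedure; the potential, time step and parameter values below are arbitrary choices for the sketch.

  # Extended (Markovian-embedded) Langevin dynamics for an exponential
  # memory kernel K(t) = (gamma/tau) * exp(-t/tau); illustrative only.
  import numpy as np

  def embedded_gle(steps=200000, dt=1e-3, m=1.0, k=1.0,
                   gamma=1.0, tau=0.5, kT=1.0, seed=0):
      rng = np.random.default_rng(seed)
      q, v, s = 1.0, 0.0, 0.0          # position, velocity, auxiliary variable
      kinetic = np.empty(steps)
      for i in range(steps):
          force = -k * q + s            # harmonic potential + memory force
          v += dt * force / m
          q += dt * v
          # The auxiliary variable carries both the friction and the
          # colored noise, so the (q, v, s) dynamics is Markovian.
          s += dt * (-s / tau - (gamma / tau) * v) \
               + (np.sqrt(2.0 * gamma * kT * dt) / tau) * rng.normal()
          kinetic[i] = 0.5 * m * v * v
      return kinetic

  # Long-time average kinetic energy should approach kT/2 (equipartition).
  print(embedded_gle()[100000:].mean())

With the noise amplitude chosen this way, the effective colored noise acting on the system satisfies the fluctuation-dissipation relation for K, which is why the simple check above recovers the canonical kinetic energy.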
Abstract:
Credal networks generalize Bayesian networks by relaxing the requirement of precision of probabilities. Credal networks are considerably more expressive than Bayesian networks, but this makes belief updating NP-hard even on polytrees. We develop a new efficient algorithm for approximate belief updating in credal networks. The algorithm is based on an important representation result we prove for general credal networks: that any credal network can be equivalently reformulated as a credal network with binary variables; moreover, the transformation, which is considerably more complex than in the Bayesian case, can be implemented in polynomial time. The equivalent binary credal network is then updated by L2U, a loopy approximate algorithm for binary credal networks. Overall, we generalize L2U to non-binary credal networks, obtaining a scalable algorithm for the general case, which is approximate only because of its loopy nature. The accuracy of the inferences with respect to other state-of-the-art algorithms is evaluated by extensive numerical tests.
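To make the relaxation concrete: in a credal network each node carries a closed convex set (a credal set) of conditional distributions rather than a single one, and belief updating computes lower and upper posterior probabilities,

  \[
    \underline{P}(x \mid e) \;=\; \min_{P \in \mathcal{K}} P(x \mid e),
    \qquad
    \overline{P}(x \mid e) \;=\; \max_{P \in \mathcal{K}} P(x \mid e),
  \]

where \mathcal{K} denotes the joint credal set induced by the network (generic notation, not taken from the paper). Computing these bounds exactly is what becomes NP-hard even on polytrees.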
Abstract:
Credal nets generalize Bayesian nets by relaxing the requirement of precision of probabilities. Credal nets are considerably more expressive than Bayesian nets, but this makes belief updating NP-hard even on polytrees. We develop a new efficient algorithm for approximate belief updating in credal nets. The algorithm is based on an important representation result we prove for general credal nets: that any credal net can be equivalently reformulated as a credal net with binary variables; moreover, the transformation, which is considerably more complex than in the Bayesian case, can be implemented in polynomial time. The equivalent binary credal net is updated by L2U, a loopy approximate algorithm for binary credal nets. Thus, we generalize L2U to non-binary credal nets, obtaining an accurate and scalable algorithm for the general case, which is approximate only because of its loopy nature. The accuracy of the inferences is evaluated by empirical tests.
Outperformance in exchange-traded fund pricing deviations: Generalized control of data snooping bias
Abstract:
An investigation into exchange-traded fund (ETF) outperformance during the period 2008-2012 is undertaken utilizing a data set of 288 U.S.-traded securities. ETFs are tested for net asset value (NAV) premium, underlying index and market benchmark outperformance, with Sharpe, Treynor, and Sortino ratios employed as risk-adjusted performance measures. A key contribution is the application of an innovative generalized stepdown procedure in controlling for data snooping bias. We find that a large proportion of optimized replication and debt asset class ETFs display risk-adjusted premiums, with energy and precious metals focused funds outperforming the S&P 500 market benchmark.
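The three risk-adjusted measures named above have the standard textbook forms (generic definitions, not specific to this study):

  \[
    \mathrm{Sharpe} = \frac{R_p - R_f}{\sigma_p}, \qquad
    \mathrm{Treynor} = \frac{R_p - R_f}{\beta_p}, \qquad
    \mathrm{Sortino} = \frac{R_p - T}{\sigma_d},
  \]

where R_p is the portfolio (ETF) return, R_f the risk-free rate, \sigma_p the standard deviation of portfolio returns, \beta_p the portfolio beta against the market benchmark, T a target return (often taken as R_f or zero), and \sigma_d the downside deviation of returns below that target.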