984 results for Path dependence
Abstract:
Stationary solutions to the equations of nonlinear diffusive shock acceleration play a fundamental role in the theory of cosmic-ray acceleration. Their existence usually requires that a fraction of the accelerated particles be allowed to escape from the system. Because the scattering mean free path is thought to be an increasing function of energy, this condition is conventionally implemented as an upper cutoff in energy space: particles are then permitted to escape from any part of the system once their energy exceeds this limit. However, because accelerated particles are responsible for the substantial amplification of the ambient magnetic field in a region upstream of the shock front, we examine an alternative approach in which particles escape over a spatial boundary. We use a simple iterative scheme that constructs stationary numerical solutions to the coupled kinetic and hydrodynamic equations. For parameters appropriate for supernova remnants, we find stationary solutions with efficient acceleration when the escape boundary is placed at the point where growth and advection of strongly driven nonresonant waves are in balance. We also present the energy dependence of the distribution function close to the energy where it cuts off, a diagnostic that is in principle accessible to observation.
Abstract:
We present BDDT, a task-parallel runtime system that dynamically discovers and resolves dependencies among parallel tasks. BDDT allows the programmer to specify detailed task footprints on any memory address range, multidimensional array tile or dynamic region. BDDT uses a block-based dependence analysis with arbitrary granularity. The analysis is applicable to existing C programs without having to restructure object or array allocation, and provides flexibility in array layouts and tile dimensions.
We evaluate BDDT using a representative set of benchmarks, and we compare it to SMPSs (the equivalent runtime system in StarSs) and OpenMP. BDDT performs comparably to or better than SMPSs and is able to cope with task granularity as much as one order of magnitude finer than SMPSs. Compared to OpenMP, BDDT performs up to 3.9× better for benchmarks that benefit from dynamic dependence analysis. BDDT provides additional data annotations to bypass dependence analysis. Using these annotations, BDDT also outperforms OpenMP in benchmarks where dependence analysis does not discover additional parallelism, thanks to a more efficient implementation of the runtime system.
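The block-based dependence analysis the abstract describes can be illustrated with a minimal sketch (hypothetical, not BDDT's actual implementation): each task's footprint is mapped onto fixed-size blocks, and a task depends on any earlier task whose blocks it touches in a conflicting mode. All names (`Task`, `blocks`, `discover_dependences`) and the block size are assumptions for illustration.

```python
BLOCK = 64  # block granularity in bytes (arbitrary choice for this sketch)

def blocks(addr, size):
    """Map a byte range onto the set of block indices it touches."""
    return set(range(addr // BLOCK, (addr + size - 1) // BLOCK + 1))

class Task:
    def __init__(self, name, reads=(), writes=()):
        self.name = name
        self.reads = set().union(*(blocks(a, s) for a, s in reads)) if reads else set()
        self.writes = set().union(*(blocks(a, s) for a, s in writes)) if writes else set()

def discover_dependences(tasks):
    """Return (earlier, later) pairs where 'later' must wait for 'earlier'."""
    edges = []
    for i, t in enumerate(tasks):
        for u in tasks[:i]:
            raw = t.reads & u.writes   # read-after-write conflict
            waw = t.writes & u.writes  # write-after-write conflict
            war = t.writes & u.reads   # write-after-read conflict
            if raw or waw or war:
                edges.append((u.name, t.name))
    return edges

t1 = Task("producer", writes=[(0, 128)])
t2 = Task("consumer", reads=[(64, 64)])
t3 = Task("independent", writes=[(4096, 64)])
print(discover_dependences([t1, t2, t3]))  # [('producer', 'consumer')]
```

Because footprints are expressed as address ranges rather than whole objects, the same scheme applies unchanged to array tiles or dynamic regions, which is the flexibility the abstract highlights.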
Abstract:
This paper offers a contribution to contemporary studies of spatial planning. In particular, it problematises the relationship between neoliberal competitiveness and spatial planning. Neoliberal competitiveness is a hegemonic discourse in public policy as it (allegedly) provides the ‘path to economic nirvana’. However, commentators have critiqued its theoretical underpinnings and labelled it a ‘dangerous obsession’ for policy makers. Another set of literatures argues that spatial planning can be understood as a form of ‘neoliberal spatial governance’ and read in a ‘postpolitical’ framework that ‘privileges competitiveness’. Synthesising these debates this paper critically analyses the application and operationalisation of neoliberal competitiveness in Northern Ireland and Belfast. In focusing on this unique case study—a deeply divided society with a turbulent history—the paper takes the debate forward in arguing that rather than offering the ‘path to economic nirvana’ neoliberal competitiveness is a ‘postpolitical strategy’ and represents a ‘dangerous obsession’ for spatial planning.
Abstract:
Three studies tested the conditions under which people judge utilitarian harm to be authority dependent (i.e., whether its rightness or wrongness depends on the ruling of an authority). In Study 1, participants judged the rightness or wrongness of physical abuse when used as an interrogation method anticipated to yield useful information for preventing future terrorist attacks. The ruling of the military authority towards the harm was manipulated (prohibited vs. prescribed) and found to significantly influence judgments of the rightness or wrongness of inflicting harm. Study 2 established a boundary condition with regard to the influence of authority, which was eliminated when the utility of the harm was definitely obtained rather than forecasted. Finally, Study 3 replicated the findings of Studies 1-2 in a completely different context—an expert committee’s ruling about the harming of chimpanzees for biomedical research. These results are discussed as they inform ongoing debates regarding the role of authority in moderating judgments of complex and simple harm. © 2013 Elsevier B.V. All rights reserved.
Abstract:
Processor architectures have taken a turn towards many-core processors, which integrate multiple processing cores on a single chip to increase overall performance, and there are no signs that this trend will stop in the near future. Many-core processors are harder to program than multi-core and single-core processors due to the need to write parallel or concurrent programs with high degrees of parallelism. Moreover, many-cores have to operate in a mode of strong scaling because of memory bandwidth constraints. In strong scaling, increasingly fine-grained parallelism must be extracted in order to keep all processing cores busy.
Task dataflow programming models have a high potential to simplify parallel programming because they relieve the programmer of identifying precisely all inter-task dependences when writing programs. Instead, the task dataflow runtime system detects and enforces inter-task dependences during execution based on the description of the memory each task accesses. The runtime constructs a task dataflow graph that captures all tasks and their dependences. Tasks are scheduled to execute in parallel taking into account the dependences specified in the task graph.
Several papers report significant overheads for task dataflow systems, which severely limit the scalability and usability of such systems. In this paper we study efficient schemes to manage task graphs and analyze their scalability. We assume a programming model that supports input, output and in/out annotations on task arguments, as well as commutative in/out and reductions. We analyze the structure of task graphs and identify versions and generations as key concepts for efficient management of task graphs. We then present three schemes to manage task graphs building on graph representations, hypergraphs and lists. We also consider a fourth edge-less scheme that synchronizes tasks using integers. Analysis using micro-benchmarks shows that the graph representation is not always scalable and that the edge-less scheme introduces the least overhead in nearly all situations.
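The "edge-less" idea of synchronizing with integers rather than explicit edge structures can be sketched roughly as follows (a hypothetical illustration, not the paper's scheme; for simplicity every access is treated as exclusive). Each shared object carries two counters, resembling a ticket lock: tasks draw a ticket per object at submission time and become runnable when every object they touch has served the tasks ahead of them.

```python
class Obj:
    def __init__(self):
        self.next_ticket = 0  # ticket handed to the next task touching this object
        self.served = 0       # number of tasks on this object that have completed

class Task:
    def __init__(self, name, objs):
        self.name = name
        # Draw one ticket per accessed object, in submission order.
        self.tickets = []
        for o in objs:
            self.tickets.append((o, o.next_ticket))
            o.next_ticket += 1

    def ready(self):
        # Runnable once each object has served all tasks submitted before us.
        return all(o.served == t for o, t in self.tickets)

    def finish(self):
        for o, _ in self.tickets:
            o.served += 1

def run(tasks):
    """Serial simulation: repeatedly run any task whose tickets have come up."""
    order, pending = [], list(tasks)
    while pending:
        t = next(t for t in pending if t.ready())
        order.append(t.name)
        t.finish()
        pending.remove(t)
    return order

x, y = Obj(), Obj()
a, b, c = Task("a", [x]), Task("b", [x]), Task("c", [y])
print(run([b, c, a]))  # ['c', 'a', 'b'] — submission order on x is preserved
```

No edge list or graph is ever materialized: ordering is recovered entirely from the per-object integers, which is why such a scheme can stay cheap as the task count grows.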
Abstract:
The behaviour of syntactic foam is strongly dependent on temperature and strain rate. This research focuses on the behaviour of syntactic foam made of epoxy and glass microballoons in the glassy, transition and rubbery regions. Both epoxy and epoxy foam are investigated separately under tension and shear loadings in order to study the strain rate and temperature effects. The results indicate that the strength and strain to failure data can be collapsed onto master curves depending on temperature-reduced strain rate. The highest strain to failure occurs in the transition zone. The presence of glass microballoons reduces the strain to failure over the entire range considered, an effect that is particularly significant under tensile loading. However, as the microballoons increase the elastic modulus significantly in the rubbery zone but reduce it somewhat in the glassy zone, the effect on the strength is more complicated. Different failure mechanisms are identified over the temperature-frequency range considered. As the temperature-reduced strain rate is decreased, the failure mechanism changes from microballoon fracture to matrix fracture and debonding between the matrix and microballoons. © IMechE 2012.
Abstract:
Two models that can predict the voltage-dependent scattering from liquid crystal (LC)-based reflectarray cells are presented. The validity of both numerical techniques is demonstrated using measured results in the frequency range 94-110 GHz. The most rigorous approach models, for each voltage, the inhomogeneous and anisotropic permittivity of the LC as a stratified medium in the direction of the biasing field. This accounts for the different tilt angles of the LC molecules inside the cell, calculated from the solution of the elastic problem. The other model is based on an effective homogeneous permittivity tensor that corresponds to the average tilt angle along the longitudinal direction for each biasing voltage. In this model, convergence problems associated with the longitudinal inhomogeneity are avoided, and the computational efficiency is improved. Both models provide a correspondence between the reflection coefficient (losses and phase-shift) of the LC-based reflectarray cell and the value of the biasing voltage, which can be used to design beam-scanning reflectarrays. The accuracy and the efficiency of both models are also analyzed and discussed.
Abstract:
The objective of this work is an evaluation of quantitative measurements of piezoresponse force microscopy for nanoscale characterization of ferroelectric films. To this end, we investigate how the piezoresponse phase difference ΔΦ between c domains depends on the frequency ω of the applied ac field, chosen much lower than the cantilever's first resonance frequency. The main specimen under study was a 102 nm thick film of Pb(Zr₀.₂Ti₀.₈)O₃. For the sake of comparison, a 100 nm thick PbTiO₃ film was also used. From our measurements, we conclude a frequency-dependent behavior ΔΦ ∼ ω⁻¹, which can only be partially explained by the presence of adsorbates on the surface. © 2008 American Institute of Physics.
Abstract:
We have developed an instrument to study the behavior of the critical current density (J_c) in superconducting wires and tapes as a function of field (μ₀H), temperature (T), and axial applied strain (ε_a). The apparatus is an improvement of similar devices that have been successfully used in our institute for over a decade. It encompasses specific advantages such as a simple sample layout, a well defined and homogeneous strain application, the possibility of investigating large compressive strains and the option of simple temperature variation, while improving the main drawback in our previous systems by increasing the investigated sample length by approximately a factor of 10. The increase in length is achieved via a design change from a straight beam section to an initially curved beam, placed perpendicular to the applied field axis in the limited diameter of a high field magnet bore. This article describes in detail the mechanical design of the device and its calibrations. Additionally, initial J_c(ε_a) data, measured at liquid helium temperature, are presented for a bronze-processed and for a powder-in-tube Nb₃Sn superconducting wire. Comparisons are made with earlier characterizations, indicating consistent behavior of the instrument. The improved voltage resolution, resulting from the increased sample length, enables J_c determinations at an electric field criterion E_c = 10 μV/m, which is substantially lower than the criterion of E_c = 100 μV/m that was possible in our previous systems. © 2004 American Institute of Physics.
Abstract:
We present results for a variety of Monte Carlo annealing approaches, both classical and quantum, benchmarked against one another for the textbook optimization exercise of a simple one-dimensional double well. In classical (thermal) annealing, the dependence upon the move chosen in a Metropolis scheme is studied and correlated with the spectrum of the associated Markov transition matrix. In quantum annealing, the path integral Monte Carlo approach is found to yield nontrivial sampling difficulties associated with the tunneling between the two wells. The choice of fictitious quantum kinetic energy is also addressed. We find that a "relativistic" kinetic energy form, leading to a higher probability of long real-space jumps, can be considerably more effective than the standard nonrelativistic one.
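The classical (thermal) annealing setup the abstract studies can be illustrated with a minimal Metropolis sketch on a one-dimensional double well V(x) = (x² − 1)² (a hedged illustration under assumed parameters, not the paper's benchmark; the potential, schedule, and proposal width are all choices made here for the example). The proposal width is precisely the "move chosen in a Metropolis scheme" whose influence the paper correlates with the transition-matrix spectrum.

```python
import math
import random

def V(x):
    """Symmetric double well with minima at x = ±1 and a barrier at x = 0."""
    return (x * x - 1.0) ** 2

def anneal(steps=20000, t0=2.0, t_min=1e-3, width=0.5, seed=0):
    """Metropolis annealing: geometric cooling from t0 down to t_min."""
    rng = random.Random(seed)
    x = rng.uniform(-2.0, 2.0)
    t = t0
    cooling = (t_min / t0) ** (1.0 / steps)
    for _ in range(steps):
        y = x + rng.uniform(-width, width)  # the Metropolis "move"
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if V(y) <= V(x) or rng.random() < math.exp((V(x) - V(y)) / t):
            x = y
        t *= cooling
    return x

x = anneal()
print(x)  # |x| should end up near 1, i.e. in one of the two wells
```

With a small `width` the walker rarely crosses the barrier at x = 0 once the temperature drops, whereas long jumps cross it directly, which mirrors the paper's observation that a "relativistic" kinetic energy favoring long real-space jumps samples the two wells more effectively.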
Abstract:
The polarization dependence of laser-driven coherent synchrotron emission transmitted through thin foils is investigated experimentally. The harmonic generation process is seen to be almost completely suppressed for circular polarization, opening up the possibility of producing isolated attosecond pulses via polarization gating. Particle-in-cell simulations suggest that current laser pulses are capable of generating isolated attosecond pulses with high pulse energies.
Abstract:
Why did banking compliance fail so badly in the recent financial crisis and why, according to many, does it continue to do so? Rather than point to the lack of oversight of individuals in bank compliance roles, as many commentators do, in this paper I examine in depth the organizational context that surrounded people in such roles. I focus on those compliance personnel who did speak out about risky practices in their banks, who were forced to escalate the problem and 'whistle-blow' to external parties, and who were punished for doing so. Drawing on recent empirical data from a wider study, I argue that the concept of dependence corruption is useful in this setting, and that it can be extended to encompass interpersonal attachments. This, in turn, problematises the concept of dependence corruption because interpersonal attachments in organisational settings are inevitable. The paper engages with recent debates on whether institutional corruption is an appropriate lens for studying private-sector organisations by arguing for a focus on roles, rather than remaining at the level of institutional fields or individual organisations. Finally, the paper contributes to studies on banking compliance in the context of the recent crisis; without a deeper understanding of those who were forced to extremes to simply do their jobs, reform of the banking sector will prove difficult.