86 results for supramolecular architectures


Relevance:

20.00%

Publisher:

Abstract:

As the complexity of computing systems grows, reliability and energy are two crucial challenges that call for holistic solutions. In this paper, we investigate the interplay among concurrency, power dissipation, energy consumption and voltage-frequency scaling for a key numerical kernel for the solution of sparse linear systems. Concretely, we leverage a task-parallel implementation of the Conjugate Gradient method, equipped with a state-of-the-art preconditioner embedded in the ILUPACK software, and target a low-power multi-core processor from ARM. In addition, we perform a theoretical analysis of the impact of a technique like Near Threshold Voltage Computing (NTVC) from the points of view of increased hardware concurrency and error rate.
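
For context, the sketch below shows a generic preconditioned Conjugate Gradient iteration in NumPy/SciPy. It is a textbook formulation, not the task-parallel, ILUPACK-preconditioned implementation the abstract describes; the Jacobi (diagonal) preconditioner and the tridiagonal test matrix are stand-in assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags

def preconditioned_cg(A, b, M_inv, tol=1e-8, max_iter=1000):
    """Generic preconditioned Conjugate Gradient for a sparse SPD matrix A.

    A     : SciPy sparse SPD matrix
    b     : right-hand side vector
    M_inv : callable applying the preconditioner inverse, z = M^{-1} r
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, k

# Illustrative use with a Jacobi preconditioner as a stand-in for ILUPACK:
n = 100
A = csr_matrix(diags([2.0] * n)
               + diags([-1.0] * (n - 1), 1)
               + diags([-1.0] * (n - 1), -1))
b = np.ones(n)
d = A.diagonal()
x, iters = preconditioned_cg(A, b, lambda r: r / d)
```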

Relevance:

20.00%

Publisher:

Abstract:

DRAM technology faces density and power challenges in increasing capacity because of limitations of physical cell design. To overcome these limitations, system designers are exploring alternative solutions that combine DRAM and emerging NVRAM technologies. Previous work on heterogeneous memories focuses mainly on two system designs: PCache, a hierarchical, inclusive memory system, and HRank, a flat, non-inclusive memory system. We demonstrate that neither of these designs can universally achieve high performance and energy efficiency across a suite of HPC workloads. In this work, we investigate the impact of a number of multilevel memory designs on the performance, power, and energy consumption of applications. To achieve this goal and overcome the limited number of available tools for studying heterogeneous memories, we created HMsim, an infrastructure that enables n-level, heterogeneous memory studies by leveraging existing memory simulators. We then propose HpMC, a new memory controller design that combines the best aspects of existing management policies to improve performance and energy. Our energy-aware memory management system dynamically switches between PCache and HRank based on the temporal locality of applications. Our results show that HpMC reduces energy consumption by 13% to 45% compared to PCache and HRank, while providing the same bandwidth and higher capacity than a conventional DRAM system.
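
The abstract does not disclose HpMC's internal policy, but the general idea of switching a memory-management mode on measured temporal locality can be illustrated roughly. In the sketch below, the window size, the hit-rate threshold and the re-reference metric are purely illustrative assumptions, not HpMC's actual mechanism.

```python
from collections import deque

class LocalityDrivenController:
    """Illustrative sketch: toggle between a hierarchical, cache-like mode
    ('PCache') and a flat placement mode ('HRank') based on recent temporal
    locality. Window size and threshold are arbitrary assumptions."""

    def __init__(self, window=10_000, threshold=0.6):
        self.window = window
        self.threshold = threshold
        self.recent = deque(maxlen=window)   # recently touched page numbers
        self.hits = deque(maxlen=window)     # 1 if the access re-referenced a recent page
        self.mode = "PCache"

    def access(self, page):
        self.hits.append(1 if page in self.recent else 0)
        self.recent.append(page)
        if len(self.hits) == self.window:
            hit_rate = sum(self.hits) / self.window
            # High temporal locality favours the hierarchical (cache) design;
            # low locality favours the flat design.
            self.mode = "PCache" if hit_rate >= self.threshold else "HRank"
        return self.mode
```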

Relevance:

20.00%

Publisher:

Abstract:

The singlet excited state of the 4-aminonaphthalimide fluorophore in 1a and 1b directs electron transfer from amine groups that are intramolecular yet external to the fluorophore along only one of two available paths.

Relevance:

20.00%

Publisher:

Abstract:

The synthesis and photophysical and biological investigation of the fluorescent 1,8-naphthalimide-conjugated Tröger's bases 1-3 are described. These structures bind strongly to DNA in competitive media at pH 7.4, with concomitant modulation of their fluorescence emission. They also undergo rapid cellular uptake, localizing within the nucleus within a few hours, and are cytotoxic against the HL60 and K562 (chronic myeloid leukemia) cell lines.

Relevance:

20.00%

Publisher:

Abstract:

Large integer multiplication is a major performance bottleneck in fully homomorphic encryption (FHE) schemes over the integers. In this paper, two optimised multiplier architectures for large integer multiplication are proposed. The first of these is a low-latency hardware architecture of an integer-FFT multiplier. Secondly, the use of low Hamming weight (LHW) parameters is applied to create a novel hardware architecture for large integer multiplication in integer-based FHE schemes. The proposed architectures are implemented, verified and compared on the Xilinx Virtex-7 FPGA platform. Finally, the proposed implementations are employed to evaluate the large multiplication in the encryption step of FHE over the integers. The analysis shows a speed improvement factor of up to 26.2 for the low-latency design compared to the corresponding original integer-based FHE software implementation. When the proposed LHW architecture is combined with the low-latency integer-FFT accelerator to evaluate a single FHE encryption operation, the performance results show that a speed improvement by a factor of approximately 130 is possible.
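
As a software-level illustration of the integer-FFT idea (not the FPGA architectures evaluated in the paper), the sketch below multiplies large integers by convolving their 16-bit digit sequences with a floating-point FFT. The digit width and the use of double-precision FFTs are simplifying assumptions that limit the operand sizes for which the result stays exact; hardware designs use exact transforms instead.

```python
import numpy as np

def _digits(x: int, bits: int) -> list:
    """Split a non-negative integer into little-endian base-2^bits digits."""
    base = 1 << bits
    out = []
    while x:
        out.append(x % base)
        x //= base
    return out

def fft_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers via FFT-based convolution of their
    16-bit digits. Double-precision rounding limits reliable operand sizes."""
    if a == 0 or b == 0:
        return 0
    bits = 16
    da, db = _digits(a, bits), _digits(b, bits)
    n = 1
    while n < len(da) + len(db):        # pad so circular = linear convolution
        n *= 2
    fa = np.fft.rfft(np.array(da, dtype=np.float64), n)
    fb = np.fft.rfft(np.array(db, dtype=np.float64), n)
    conv = np.rint(np.fft.irfft(fa * fb, n)).astype(np.int64)
    result, carry = 0, 0
    for i, c in enumerate(conv):        # carry propagation over the digits
        total = int(c) + carry
        result += (total & 0xFFFF) << (bits * i)
        carry = total >> 16
    return result + (carry << (bits * len(conv)))

assert fft_multiply(123456789, 987654321) == 123456789 * 987654321
```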

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we investigate the impact of faulty memory bit-cells on the performance of LDPC and Turbo channel decoders based on realistic memory failure models. Our study investigates the inherent error resilience of such codes to potential memory faults affecting the decoding process. We develop two mitigation mechanisms that reduce the impact of memory faults rather than correcting every single error. We show that protecting only a few bit-cells is sufficient to deal with high defect rates. In addition, we show how the use of repair iterations specifically helps mitigate the impact of faults that occur inside the decoder itself.
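
The paper's failure models and mitigation mechanisms are not spelled out in the abstract, but the basic experiment of corrupting a decoder's stored soft values can be sketched as below. The LLR word width, the per-bit fault rate and the idea of exempting the most significant bit are illustrative assumptions, standing in for the paper's selective protection of a few bit-cells.

```python
import numpy as np

def inject_memory_faults(llrs, bit_error_rate, word_bits=6, protected_msb=False, rng=None):
    """Flip random bits in a quantised LLR memory to emulate faulty bit-cells.

    llrs           : integer array of stored LLR words
    bit_error_rate : per-bit-cell fault probability (illustrative)
    word_bits      : storage width of each LLR word (illustrative)
    protected_msb  : if True, leave the most significant bit untouched,
                     mimicking selective protection of critical bit-cells
    """
    rng = np.random.default_rng() if rng is None else rng
    faulty = llrs.copy()
    usable_bits = word_bits - 1 if protected_msb else word_bits
    for bit in range(usable_bits):
        flips = rng.random(llrs.shape) < bit_error_rate
        faulty = np.where(flips, faulty ^ (1 << bit), faulty)
    return faulty

# Example: 6-bit LLR words, 1e-3 per-cell fault rate, MSB protected.
llrs = np.random.default_rng(0).integers(0, 64, size=1024)
corrupted = inject_memory_faults(llrs, 1e-3, protected_msb=True)
```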

Relevance:

20.00%

Publisher:

Abstract:

In the reinsurance market, the risks that natural catastrophes pose to portfolios of properties must be quantified so that they can be priced and insurance offered. The analysis of such risks at a portfolio level requires a simulation of up to 800,000 trials with an average of 1,000 catastrophic events per trial. This is sufficient to capture the risk for a global multi-peril reinsurance portfolio covering perils including earthquake, hurricane, tornado, hail, severe thunderstorm, wind storm, storm surge, riverine flooding and wildfire. Such simulations are both computation and data intensive, making the application of high-performance computing techniques desirable.

In this paper, we explore the design and implementation of portfolio risk analysis on both multi-core and many-core computing platforms. Given a portfolio of property catastrophe insurance treaties, key risk measures, such as probable maximum loss, are computed by taking both primary and secondary uncertainties into account. Primary uncertainty is associated with whether or not an event occurs in a simulated year, while secondary uncertainty captures the uncertainty in the level of loss due to the use of simplified physical models and limitations in the available data. A combination of fast lookup structures, multi-threading and careful hand-tuning of numerical operations is required to achieve good performance. Experimental results are reported for multi-core processors and for systems using NVIDIA graphics processing units and Intel Xeon Phi many-core accelerators.
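
The overall structure of such a simulation (not the authors' hand-tuned multi-core/GPU implementation) can be sketched as a per-trial loop with a Bernoulli draw for event occurrence (primary uncertainty) and a sampled loss level given occurrence (secondary uncertainty), with the probable maximum loss read off as a return-period quantile of the annual losses. The event probabilities, the lognormal severity and the 1-in-250-year return period below are illustrative assumptions.

```python
import numpy as np

def simulate_portfolio_pml(n_trials=100_000, events_per_trial=1000,
                           occurrence_prob=0.3, return_period=250, seed=0):
    """Toy year-event-loss simulation.

    Primary uncertainty   : whether each catalogue event occurs in a trial year
                            (Bernoulli draw with probability `occurrence_prob`).
    Secondary uncertainty : size of the loss given occurrence (here a lognormal
                            draw; real models sample from event loss tables).
    PML is read off as the 1-in-`return_period` quantile of annual losses.
    All parameters are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    annual_losses = np.empty(n_trials)
    for t in range(n_trials):
        occurs = rng.random(events_per_trial) < occurrence_prob        # primary
        losses = rng.lognormal(mean=12.0, sigma=1.5,
                               size=events_per_trial)                  # secondary
        annual_losses[t] = np.sum(losses * occurs)
    return np.quantile(annual_losses, 1.0 - 1.0 / return_period)

print(f"Illustrative 1-in-250-year PML: {simulate_portfolio_pml(10_000):,.0f}")
```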

Relevance:

20.00%

Publisher:

Abstract:

They’re cheap. They’re in every settlement of significance in Britain, Ireland and elsewhere. We all use them but perhaps do not always admit to it. Especially if we are architects.

Over the past decades, the Aldi/Lidl low-cost supermarkets have escaped from middle Europe to take over large tracts of the English-speaking world, remaking them according to a formula of mass-produced sheds, buff-coloured cobble-lock car parks, logos in primary colours, bare shelves and eclectic special offers. The response within architectural discourse to this phenomenon has been largely one of indifference, and such places remain, perhaps reiterating Pevsner’s controversial insights into the bicycle shed, on the peripheries of what we might term architecture. This paper seeks to explore the spatial complexities of the discount supermarket and, in doing so, to open up a discussion on the architecture of cheapness. As a road-map, it takes former managing director Dieter Brandes’ treatise on the Aldi formula, Bare Essentials: the Aldi Way to Retailing, and investigates the strategies through which economic exigencies manifest themselves in a series of spatial tactics which involve building. Central to this is the idea of architecture as system rather than form and, in Aldi/Lidl’s case, as the result of a spatial network of flows. To understand the architecture of the supermarket, then, it is necessary to measure the times and spaces of supply across the scales of intersection between the global and the local.

Evaluating the energy, economy and precision of such systems challenges the liminal position of the commercial, the placeless and, especially, the cheap within architectural discourse. As is well known, architectures of mass production and prefabrication and their origins exercised modernist thinkers such as Sigfried Giedion and Walter Gropius in the early twentieth century and have undergone a resurgence in recent times. Meanwhile, the mapping of the hitherto overlooked forms and iconography of commerce in Learning from Las Vegas (1972) was extended by Rem Koolhaas et al. into an investigation of the technologies, systems and precedents of retail in the Harvard Design School Guide to Shopping, some thirty years later in 2001. While cheapness is obviously always a criterion for building, it is more difficult to find writings on architecture which explicitly celebrate it as a design virtue or, indeed, even iterate the word cheap. Walter Gropius’ essay ‘How can we build cheaper, better, more attractive houses?’ (1927), however, situates the cheap within the discussions, articulated by, amongst others, Karl Teige and Bruno Taut, surrounding the minimal dwelling and the moral benefits of absence in the 1920s and 30s.

In our contemporary age of heightened consumption, it is perhaps fitting that an architecture of bare essentials is defined in retail rather than in housing: a commercial existenzminimum where the Miesian paradox of ‘less is more’ is resold as a paradigm of ‘more for less’ in the ubiquitous yet overlooked architectures of the discount supermarket.