225 results for Software architectures


Relevance: 20.00%

Abstract:

In 1989, the Irish architectural practice O’Donnell + Tuomey were commissioned to build a temporary pavilion to represent Ireland at the 11 Cities/11 Nations exhibition at Leeuwarden in the Netherlands. Citing Peter Smithson, John Tuomey suggested that the pavilion, which drew inspiration from the forms and materials of the modern Irish barn, embodied an intention ‘not just to build but to communicate’. Its subsequent reassembly for the inauguration of the Irish Museum of Modern Art in the courtyard of the seventeenth-century Royal Hospital Kilmainham in Dublin in 1991 drew comparisons between the urban sophistication of this colonial building, its svelte new refit, and the rural expression of O’Donnell + Tuomey’s barn. It was, one critic recently noted, as if ‘a wedding had been crashed by a country cousin who had forgotten to clean his boots’.
It has been argued that temporary or ephemeral pieces of architecture, unburdened by the traditional constraints of firmitas or utilitas, have the ability to offer a concise distillation of meaning and intention. Approaching the qualities of rhetoric, such architectures share similarities with the monument and yet differ from it in fundamental ways. Their rapid construction in lightweight materials can allow for an almost instantaneous negotiation of the zeitgeist. And, unlike the monument, the space and form of these installations are designed from the outset to disappear.
This paper analyses the ephemeral architectures of Dublin in the modern period, contextualising their qualities and intentions as they manifest themselves across colonial, post-colonial and contemporary epochs. It finds their origins in the theatrical sets of the late eighteenth century and traces their movement into the semi-public sphere of the pleasure garden and, finally, into the theatre of the streets. It is here that temporary architecture in the city has been at its most potent, allowing the amplification or subversion of the meanings of much larger spaces. Historically, many of Dublin’s most conspicuous instances of ephemeral architecture have been realised as a means of articulating mass spectacle in political, religious or nationalistic events. And while much of this has sought to confirm dominant ideologies, it has also been possible to discern moments of opposition.
The contemporary period, however, has arguably witnessed a shift in ephemeral architectures from the explicit representation of ‘positive ideologies’ towards something more oblique or nebulous. This turn towards abstraction in form and space has rendered an especially communicative form of architecture particularly elusive. By examining continuities within the apparent disjuncture between historical and contemporary examples, this paper begins to unpick the language of recent ephemeral architecture in Dublin and to situate it within wider global trends in which political and economic imperatives are often simultaneously obscured and expressed in public space by a vocabulary of universality. As Jürgen Habermas has suggested, the contemporary value given to the transitory and the ephemeral ‘discloses a longing for an undefiled, immaculate and stable present’.

Relevance: 20.00%

Abstract:

This paper evaluates the viability of user-level software management of a hybrid DRAM/NVM main memory system. We propose an operating system (OS) and programming interface to place data from within the user application. We present a profiling tool to help programmers decide on the placement of application data in hybrid memory systems. Cycle-accurate simulation of modified applications confirms that our approach is more energy-efficient than state-of-the-art hardware or OS approaches at equivalent performance. Moreover, our results are validated on several candidate NVM technologies and a wide set of 14 benchmarks.
The key observation behind this work is that, for the workloads we evaluated, application objects are too short-lived to motivate migration. Utilizing this property significantly reduces the hardware complexity of hybrid memory systems.
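
As a concrete illustration of what such a programming interface might look like, the C sketch below places individual allocations in a DRAM or NVM tier via an explicit hint. The names hm_alloc, HM_DRAM and HM_NVM are hypothetical, not the paper's actual API, and the allocator here simply falls back to malloc so the sketch compiles and runs.

```c
/* Hypothetical user-level placement interface for a hybrid DRAM/NVM
 * main memory; a sketch in the spirit of the approach above, not the
 * paper's actual API. */
#include <stdlib.h>
#include <string.h>

typedef enum { HM_DRAM, HM_NVM } hm_kind;

/* Place an allocation in the requested memory tier. In a real system
 * this would call into an OS service (e.g. a placement flag on mmap);
 * here it falls back to malloc so the sketch is runnable. */
static void *hm_alloc(size_t size, hm_kind kind)
{
    (void)kind;  /* tier hint: consumed by the OS in a real system */
    return malloc(size);
}

int main(void)
{
    /* Short-lived, frequently written object: keep it in DRAM. */
    double *scratch = hm_alloc(4096 * sizeof(double), HM_DRAM);

    /* Long-lived, mostly read object: dense, low-power NVM will do. */
    char *corpus = hm_alloc(1 << 20, HM_NVM);

    memset(scratch, 0, 4096 * sizeof(double));
    memset(corpus, 0, 1 << 20);

    free(scratch);
    free(corpus);
    return 0;
}
```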

Relevance: 20.00%

Abstract:

Fully Homomorphic Encryption (FHE) is a recently developed cryptographic technique that allows computations to be performed on encrypted data. There are many interesting applications for this encryption method, especially within cloud computing; however, its computational complexity is such that it is not yet practical for real-time applications. This work proposes optimised hardware architectures for the encryption step of an integer-based FHE scheme with the aim of improving its practicality. A low-area design and a high-speed parallel design are proposed and implemented on a Xilinx Virtex-7 FPGA, targeting the available DSP slices, which offer high-speed multiplication and accumulation. Both use the Comba multiplication scheduling method to manage the large multiplications required with unevenly sized multiplicands and to minimise the number of read and write operations to RAM. Results show that speed-up factors of 3.6 and 10.4 can be achieved for the encryption step with medium-sized security parameters for the low-area and parallel designs respectively, compared to a benchmark software implementation on an Intel Core2 Duo E8400 platform running at 3 GHz.
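
For readers unfamiliar with the scheduling method named above, the following C sketch shows product-scanning (Comba) multiplication on 32-bit limbs: each output limb is produced by summing one column of partial products, so it is written to memory exactly once, which is the property that minimises RAM reads and writes for unevenly sized multiplicands. This is a generic software rendering of the well-known algorithm, not the paper's FPGA design.

```c
#include <stdint.h>
#include <stdio.h>

/* Comba (product-scanning) multiplication of an a_len-limb by a
 * b_len-limb number, 32-bit limbs, least-significant limb first.
 * Each result limb is one column of partial products, so each r[k]
 * is written exactly once. */
static void comba_mul(const uint32_t *a, int a_len,
                      const uint32_t *b, int b_len, uint32_t *r)
{
    uint64_t acc = 0;   /* low 64 bits of the running column sum   */
    uint64_t carry = 0; /* number of 64-bit wraps of acc (overflow) */
    for (int k = 0; k < a_len + b_len - 1; k++) {
        int i_lo = k < b_len ? 0 : k - b_len + 1;
        int i_hi = k < a_len ? k : a_len - 1;
        for (int i = i_lo; i <= i_hi; i++) {
            uint64_t p = (uint64_t)a[i] * b[k - i];
            acc += p;
            if (acc < p) carry++;   /* detect 64-bit wrap */
        }
        r[k] = (uint32_t)acc;       /* single write per output limb */
        acc = (acc >> 32) | (carry << 32);
        carry = 0;
    }
    r[a_len + b_len - 1] = (uint32_t)acc;
}

int main(void)
{
    uint32_t a[2] = { 0xffffffffu, 0xffffffffu };  /* 2^64 - 1 */
    uint32_t b[1] = { 2 };
    uint32_t r[3];
    comba_mul(a, 2, b, 1, r);
    printf("%08x %08x %08x\n", r[2], r[1], r[0]);  /* 00000001 ffffffff fffffffe */
    return 0;
}
```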

Relevance: 20.00%

Abstract:

We introduce a new parallel pattern derived from a specific application domain and show how it turns out to have applications beyond its domain of origin. The pool evolution pattern models the parallel evolution of a population subject to mutations, evolving in such a way that a given fitness function is optimized. The pattern has proven suitable for capturing and modelling the parallel patterns underpinning various evolutionary algorithms, as well as other parallel patterns typical of symbolic computation. In this paper we introduce the pattern, discuss its implementation on modern multi-/many-core architectures and, finally, present experimental results obtained with FastFlow and Erlang implementations to assess its feasibility and scalability.
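
A minimal sequential rendering of the pattern may help fix ideas: a population is repeatedly evolved by a mutation operator and filtered against a fitness function. The fitness and mutation functions below are illustrative assumptions; in the FastFlow and Erlang implementations it is the evolve step over the candidate set that runs in parallel.

```c
#include <stdlib.h>
#include <stdio.h>

/* Sequential sketch of the pool evolution pattern: evolve candidates
 * drawn from a pool, then filter the results back into the pool until
 * a termination condition holds (here, a fixed generation count). */
#define POP 64

static double fitness(double x) { return -(x - 3.0) * (x - 3.0); }
static double mutate(double x)  { return x + ((double)rand() / RAND_MAX - 0.5); }

int main(void)
{
    double pop[POP];
    for (int i = 0; i < POP; i++)
        pop[i] = (double)rand() / RAND_MAX * 10.0;

    for (int gen = 0; gen < 1000; gen++) {
        /* Evolve phase: in a parallel implementation this loop is
         * partitioned among workers operating on the shared pool. */
        for (int i = 0; i < POP; i++) {
            double child = mutate(pop[i]);
            /* Filter/merge phase: keep the fitter individual. */
            if (fitness(child) > fitness(pop[i]))
                pop[i] = child;
        }
    }

    double best = pop[0];
    for (int i = 1; i < POP; i++)
        if (fitness(pop[i]) > fitness(best)) best = pop[i];
    printf("best x = %f (optimum is 3.0)\n", best);
    return 0;
}
```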

Relevance: 20.00%

Abstract:

The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as the optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error-correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The exploitation of diverse target architectures is typically associated with developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modification or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, in order to introduce FPGAs as a potential platform for efficiently executing simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes, ranging from short/medium-length (e.g., 8,000-bit) codes to long (e.g., 64,800-bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, either the GPU or the FPGA may be the more suitable platform, each providing a different acceleration factor over conventional multicore CPUs.
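
The retargeting idea can be made concrete with a small host-side sketch: the same kernel source is dispatched to a CPU, GPU or FPGA simply by requesting a different OpenCL device type, with no change to the kernel itself. Error handling and the kernel build/launch steps are elided; device availability depends on the installed runtimes (for FPGAs, via a flow such as SOpenCL).

```c
#include <stdio.h>
#include <CL/cl.h>

/* Retargeting sketch: select the execution platform at run time by
 * device type, leaving the simulation kernel source untouched. */
int main(int argc, char **argv)
{
    cl_device_type type = CL_DEVICE_TYPE_CPU;
    if (argc > 1 && argv[1][0] == 'g') type = CL_DEVICE_TYPE_GPU;
    if (argc > 1 && argv[1][0] == 'a') type = CL_DEVICE_TYPE_ACCELERATOR; /* e.g. FPGA */

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    if (clGetDeviceIDs(platform, type, 1, &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no device of the requested type\n");
        return 1;
    }
    cl_int err;
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    /* ...build the single, unmodified OpenCL kernel and enqueue the
     * LDPC decoding simulation exactly as on any other target... */
    clReleaseContext(ctx);
    return 0;
}
```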

Relevance: 20.00%

Abstract:

The proposed multi-table lookup architecture provides high-performance, SDN-based packet classification in an OpenFlow v1.1+ switch. The objective of the demonstration is to show the functionality of the architecture deployed on the NetFPGA SUME platform.
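
Although the demonstrated hardware design is not described here, the shape of multi-table classification can be sketched in software: each table holds match/action entries, and an entry may "goto" a later table, forming a pipeline in the OpenFlow style. The field layout and table contents below are illustrative assumptions, not the demonstrated design.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy two-table OpenFlow-style pipeline: table 0 matches on the
 * destination IP and may send the packet on to table 1, which
 * matches on the destination port. First matching entry wins. */
enum action { ACT_DROP, ACT_FORWARD, ACT_GOTO };

struct entry {
    uint32_t match_ip, mask;   /* table 0: destination IP match   */
    uint16_t match_port;       /* table 1: port match (0 = any)   */
    enum action act;
    int arg;                   /* output port, or next table id   */
};

struct pkt { uint32_t dst_ip; uint16_t dst_port; };

static struct entry table0[] = {
    { 0x0a000000, 0xff000000, 0, ACT_GOTO, 1 },  /* 10.0.0.0/8 -> table 1 */
    { 0,          0,          0, ACT_DROP, 0 },  /* default: drop         */
};
static struct entry table1[] = {
    { 0, 0, 443, ACT_FORWARD, 2 },               /* HTTPS -> port 2 */
    { 0, 0, 0,   ACT_FORWARD, 1 },               /* else  -> port 1 */
};

static int classify(const struct pkt *p)
{
    for (unsigned i = 0; i < sizeof table0 / sizeof *table0; i++) {
        struct entry *e = &table0[i];
        if ((p->dst_ip & e->mask) != e->match_ip) continue;
        if (e->act != ACT_GOTO) return e->act == ACT_FORWARD ? e->arg : -1;
        for (unsigned j = 0; j < sizeof table1 / sizeof *table1; j++)
            if (table1[j].match_port == 0 || table1[j].match_port == p->dst_port)
                return table1[j].arg;
    }
    return -1;  /* -1 means drop */
}

int main(void)
{
    struct pkt p = { 0x0a010203, 443 };  /* 10.1.2.3:443 */
    printf("output port: %d\n", classify(&p));
    return 0;
}
```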

Relevance: 20.00%

Abstract:

The proposition of increased innovation in network applications and reduced cost for network operators has won the networking world over to the vision of Software-Defined Networking (SDN). With the excitement of holistic visibility across the network and the ability to program network devices, developers have rushed to present a range of new SDN-compliant hardware, software and services. However, amidst this frenzy of activity, one key element has only recently entered the debate: network security. In this article, security in SDN is surveyed, presenting advances in this area from both the research community and industry. The challenges of securing the network against a persistent attacker are discussed, and the holistic security architecture required for SDN is described. Future research directions that will be key to providing network security in SDN are identified.

Relevance: 20.00%

Abstract:

Energy efficiency is an essential requirement for all contemporary computing systems. We thus need tools to measure the energy consumption of computing systems and to understand how workloads affect it. Significant recent research effort has targeted direct power measurements on production computing systems using on-board sensors or external instruments. These direct methods have in turn guided studies of software techniques to reduce energy consumption via workload allocation and scaling. Unfortunately, direct energy measurements are hampered by the low power sampling frequency of power sensors. The coarse granularity of power sensing limits our understanding of how power is allocated in systems and our ability to optimize energy efficiency via workload allocation.
We present ALEA, a tool to measure power and energy consumption at the granularity of basic blocks, using a probabilistic approach. ALEA provides fine-grained energy profiling via statistical sampling, which overcomes the limitations of power sensing instruments. Compared to state-of-the-art energy measurement tools, ALEA provides finer granularity without sacrificing accuracy. ALEA achieves low-overhead energy measurements with mean error rates between 1.4% and 3.5% in 14 sequential and parallel benchmarks tested on both Intel and ARM platforms. The sampling method caps execution time overhead at approximately 1%, making ALEA suitable for online energy monitoring and optimization. Finally, ALEA is a user-space tool with a portable, machine-independent sampling method. We demonstrate two use cases of ALEA, in which we reduce the energy consumption of a k-means computational kernel by 37% and of an ocean modelling code by 33%, compared to high-performance execution baselines, by varying the power optimization strategy between basic blocks.
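
The sampling idea can be sketched in a few lines of C: a profiling timer periodically records which code region is executing together with an instantaneous power reading, and a region's energy estimate is its share of samples multiplied by total energy. The read_power_watts function below is a stand-in for a real sensor (e.g., RAPL), and the two hand-marked regions stand in for compiler-identified basic blocks; this is a sketch of the general technique, not ALEA itself.

```c
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

/* Probabilistic attribution sketch: SIGPROF fires periodically and
 * charges one sample, plus a power reading, to the active region.
 * A region's energy share is then samples[r] / total samples. */
#define NREGIONS 2
static volatile sig_atomic_t current_region;
static unsigned long samples[NREGIONS];
static double power_sum[NREGIONS];

/* Fake sensor standing in for RAPL or an external power meter. */
static double read_power_watts(void) { return 40.0 + current_region * 10.0; }

static void on_sample(int sig)
{
    (void)sig;
    samples[current_region]++;
    power_sum[current_region] += read_power_watts();
}

int main(void)
{
    signal(SIGPROF, on_sample);
    struct itimerval it = { { 0, 1000 }, { 0, 1000 } };  /* 1 ms period */
    setitimer(ITIMER_PROF, &it, NULL);

    volatile double x = 0;
    current_region = 0;                      /* "basic block" 0 */
    for (long i = 0; i < 200000000L; i++) x += i;
    current_region = 1;                      /* "basic block" 1 */
    for (long i = 0; i < 100000000L; i++) x *= 1.0000001;

    for (int r = 0; r < NREGIONS; r++)
        if (samples[r])
            printf("region %d: %lu samples, mean power %.1f W\n",
                   r, samples[r], power_sum[r] / samples[r]);
    return 0;
}
```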

Relevance: 20.00%

Abstract:

DRAM technology faces density and power challenges in increasing capacity because of the limitations of physical cell design. To overcome these limitations, system designers are exploring alternative solutions that combine DRAM and emerging NVRAM technologies. Previous work on heterogeneous memories focuses mainly on two system designs: PCache, a hierarchical, inclusive memory system, and HRank, a flat, non-inclusive memory system. We demonstrate that neither of these designs can universally achieve high performance and energy efficiency across a suite of HPC workloads. In this work, we investigate the impact of a number of multilevel memory designs on the performance, power and energy consumption of applications. To achieve this goal, and to overcome the limited number of tools available for studying heterogeneous memories, we created HMsim, an infrastructure that enables n-level, heterogeneous memory studies by leveraging existing memory simulators. We then propose HpMC, a new memory controller design that combines the best aspects of existing management policies to improve performance and energy. Our energy-aware memory management system dynamically switches between PCache and HRank based on the temporal locality of applications. Our results show that HpMC reduces energy consumption by 13% to 45% compared to PCache and HRank, while providing the same bandwidth and higher capacity than a conventional DRAM system.
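
The policy-switching idea can be illustrated with a toy controller: each epoch, a temporal-locality metric (here, a DRAM hit rate) decides whether to run the DRAM as an inclusive cache in front of NVRAM (PCache-style) or as a flat partition (HRank-style). The thresholds and the metric below are illustrative assumptions, not HpMC's actual mechanism.

```c
#include <stdio.h>

/* Epoch-based policy switch between an inclusive-cache mode and a
 * flat, non-inclusive mode, driven by a temporal-locality metric. */
enum mode { MODE_PCACHE, MODE_HRANK };

static enum mode next_mode(enum mode cur, double dram_hit_rate)
{
    /* High locality -> caching pays off; low locality -> inclusive
     * caching wastes energy on useless fills, so go flat. The gap
     * between thresholds adds hysteresis to avoid flapping. */
    if (cur == MODE_PCACHE && dram_hit_rate < 0.40) return MODE_HRANK;
    if (cur == MODE_HRANK  && dram_hit_rate > 0.60) return MODE_PCACHE;
    return cur;
}

int main(void)
{
    double epochs[] = { 0.80, 0.75, 0.35, 0.30, 0.65, 0.70 };
    enum mode m = MODE_PCACHE;
    for (unsigned e = 0; e < sizeof epochs / sizeof *epochs; e++) {
        m = next_mode(m, epochs[e]);
        printf("epoch %u: hit rate %.2f -> %s\n", e, epochs[e],
               m == MODE_PCACHE ? "PCache" : "HRank");
    }
    return 0;
}
```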

Relevance: 20.00%

Abstract:


This paper describes how urban agriculture differs from conventional agriculture not only in the way it engages with the technologies of growing, but also in the choice of crop and the way these crops are brought to market. The authors propose a new model for understanding these new relationships, analogous to a systems view of information technology: Hardware-Software-Interface.
The first component of the system is the hardware: the technological component of the agricultural system. Technology is often thought of as equipment, but its linguistic roots lie in the Greek ‘technē’, meaning ‘know-how’. Urban agriculture has to engage new technologies, ones that deal with a scale of operation and a context different from those of rural agriculture. Often the scale is very small and the soils are polluted. The technology could therefore be technical, such as aquaponic systems, or soil-based, such as allotments, window boxes or permaculture. The choice of method does not necessarily determine the crop produced or its efficiency. This is linked to the biotic component that is added to the hardware, which is seen as the ‘software’.
The software of the system comprises its ecological parts. These produce the crop, which may or may not be determined by the technology used. For example, a hydroponic system could produce a range of crops, or even fish or edible flowers. Software choice can be driven by ideological preferences, such as permaculture, where companion planting is used to reduce disease and pests, or by economic factors, such as the local market at a particular time of the year. The monetary value of the ‘software’ is determined by the market. Obviously, small, locally produced crops are unlikely to compete against intensive products produced globally; however, their value might be measured locally in different ways, and they might be sold on a different market. This leads to the final part of the analogy: the interface.
The interface is the link between the system and the consumer. In traditional agriculture, there is only a tenuous link between the producer of asparagus in Peru and the consumer in Europe; in fact, very little of the money spent by the consumer ever reaches the grower. Most of it is spent on refrigeration, transport and profit for agents and supermarket chains. Local or hyper-local agriculture needs to circumvent these systems and connect more directly to the consumer: this is the interface. In hyper-localised systems, effectiveness is often more important than efficiency, and direct links between producer and consumer create new economies.