67 results for Conveying machinery


Relevance:

10.00%

Publisher:

Abstract:

This paper explores the theme of exhibiting architectural research through a particular example, the development of the Irish pavilion for the 14th architectural biennale, Venice 2014. Responding to Rem Koolhaas’s call to investigate the international absorption of modernity, the Irish pavilion became a research project that engaged with the development of the architectures of infrastructure in Ireland in the twentieth and twenty-first centuries. Central to this proposition was that infrastructure is simultaneously a technological and cultural construct, one that for Ireland occupied a critical position in the building of a new, independent post-colonial nation state, after 1921.

Presupposing infrastructure as consisting of both visible and invisible networks, the idea of a matrix became a central conceptual and visual tool in the curatorial and design process for the exhibition and pavilion. To begin with, this was a two-dimensional grid used to identify and order what became described as a series of ten ‘infrastructural episodes’. These were determined chronologically across the decades between 1914 and 2014, and their spatial manifestations were articulated in terms of scale: micro, meso and macro. At this point ten academics were approached as researchers. Their purpose was twofold: to establish the broader narratives around which the infrastructures developed, and to scrutinise relevant archives for compelling visual material. Defining the meso scale as that of the building, the media unearthed were further filtered and edited according to a range of categories – filmic/image, territory, building detail, and model – which sought to communicate the relationship between the pieces of architecture and the larger systems to which they connect. New drawings realised by the design team further iterated these relationships, filling in gaps in the narrative by providing composite, strategic or detailed drawings.

Conceived as an open-ended and extendable matrix, the pavilion was influenced by a series of academic writings, curatorial practices, artworks and other installations including: Frederick Kiesler’s City in Space (1925), Edoardo Persico and Marcello Nizzoli’s Medaglia d’Oro room (1934), Sol LeWitt’s Incomplete Open Cubes (1974) and Rosalind Krauss’s seminal text ‘Grids’ (1979). A modular frame whose structural bays would each hold and present an ‘episode’, the pavilion became both a visual analogue of the unseen networks embodying infrastructural systems and a reflection on the predominance of framed structures within the buildings exhibited. Sharing the aspiration of adaptability of many of these schemes, its white-painted timber components are connected by easily dismantled steel fixings. These, together with its modularity, allow the structure to be taken down and subsequently re-erected in different iterations. The pavilion itself is, therefore, imagined as essentially provisional and – as with infrastructure – as having no fixed form. Presenting archives and other material over time, the transparent nature of the space allowed these to overlap visually, conveying the nested nature of infrastructural production. Pursuing a means to evoke the qualities of infrastructural space while conveying a historical narrative, the exhibition’s termination in the present is designed to provoke in the visitor a perceptual extension of the matrix to engage with the future.

Relevance:

10.00%

Publisher:

Abstract:

Scheduling jobs with deadlines, each of which defines the latest time by which a job must be completed, can be challenging on the cloud due to incurred costs and unpredictable performance. The problem is further complicated when there is not enough information to schedule a job so that its deadline is satisfied and its cost is minimised. In this paper, we present an approach to scheduling jobs with deadlines on the cloud when their performance is unknown before execution. By performing a sampling phase to collect the necessary information about those jobs, our approach delivers scheduling decisions within 10% of the cost and 16% of the violation rate of the ideal setting, which has complete knowledge of each job from the beginning. Our proposed algorithm also outperforms existing approaches that use a fixed amount of resources, reducing the violation cost by at least a factor of two.
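The sampling-then-scheduling idea described above can be sketched roughly as follows. This is a minimal illustration, not the paper's algorithm: the function names, the near-linear-speedup assumption and the fixed machine limit are all assumptions introduced here.

```python
def estimate_runtime(sample_times, total_tasks):
    """Extrapolate a job's total runtime (seconds) from timings of a
    sampled subset of its tasks, as collected in a sampling phase."""
    avg = sum(sample_times) / len(sample_times)
    return avg * total_tasks


def choose_machines(est_runtime, deadline, cost_per_machine_hour, max_machines=64):
    """Return (machines, cost) for the cheapest machine count whose projected
    finish time meets the deadline.

    Assumes near-linear speedup, which real jobs only approximate. If the
    deadline cannot be met, falls back to max_machines and the job will
    incur a deadline violation.
    """
    for n in range(1, max_machines + 1):
        finish = est_runtime / n
        if finish <= deadline:
            return n, n * (finish / 3600.0) * cost_per_machine_hour
    finish = est_runtime / max_machines
    return max_machines, max_machines * (finish / 3600.0) * cost_per_machine_hour
```

For example, a job whose sampled tasks ran in 2 s and 4 s and which has 100 tasks in total would be estimated at 300 s; with a 60 s deadline the sketch selects 5 machines.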

Relevance:

10.00%

Publisher:

Abstract:

Lattice-based cryptography has gained credence recently as a replacement for current public-key cryptosystems, due to its quantum-resilience, versatility, and relatively low key sizes. To date, encryption based on the learning with errors (LWE) problem has only been investigated from an ideal lattice standpoint, due to its computation and size efficiencies. However, a thorough investigation of standard lattices in practice has yet to be considered. Standard lattices may be preferred to ideal lattices due to their stronger security assumptions and less restrictive parameter selection process. In this paper, an area-optimised hardware architecture of a standard lattice-based cryptographic scheme is proposed. The design is implemented on an FPGA, and it is found that both encryption and decryption fit comfortably on a Spartan-6 FPGA. This is the first hardware architecture for standard lattice-based cryptography reported in the literature to date, and thus serves as a benchmark for future implementations.
Additionally, a revised discrete Gaussian sampler is proposed which is the fastest of its type to date, and which is also the first to investigate the cost savings of implementing with λ/2 bits of precision. Performance results are promising in comparison to the hardware designs of the equivalent ring-LWE scheme: in addition to providing a stronger security proof, the proposed design generates 1272 encryptions per second and 4395 decryptions per second.
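As background, the standard-LWE encryption underlying such designs can be illustrated with a toy, deliberately insecure Python sketch. The tiny parameters (n = 8, q = 257) and the use of Python's random module in place of a proper discrete Gaussian sampler are assumptions made here for readability; they do not reflect the paper's parameters or sampler.

```python
import random


def lwe_demo(bit, n=8, q=257, seed=0):
    """Encrypt and decrypt a single bit with a toy standard-LWE scheme."""
    rng = random.Random(seed)
    # Key generation: secret s, small error e, public key (A, b = A*s + e mod q).
    A = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]
    s = [rng.randrange(q) for _ in range(n)]
    e = [rng.choice([-1, 0, 1]) for _ in range(n)]  # stand-in for a Gaussian sampler
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(n)]
    # Encryption: binary randomness r; ciphertext (u, v) hides the bit near q/2.
    r = [rng.choice([0, 1]) for _ in range(n)]
    u = [sum(A[i][j] * r[i] for i in range(n)) % q for j in range(n)]  # A^T * r
    v = (sum(b[i] * r[i] for i in range(n)) + bit * (q // 2)) % q
    # Decryption: v - <s, u> = <e, r> + bit*(q//2) mod q; decide by distance to q/2.
    d = (v - sum(s[j] * u[j] for j in range(n))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0
```

Because the accumulated error `<e, r>` is bounded by n = 8, which is far below q/4 = 64, the toy decryption always recovers the bit.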

Relevance:

10.00%

Publisher:

Abstract:

This chapter explores the ghost story on television, and particularly the tensions between the medium and the genre. Television has long been seen as a nearly supernatural medium, an association that the very term 'medium' enhances. In particular, the intimacy of television and its domestic presence have led to it being considered a suitable and effective venue for the ghost story, while at the same time concerns have arisen over it being too effective at conveying horror into the home. The ghost story is thus one of the genres in which the tensions between the medium's aesthetic possibilities and the desire for censorship can be seen most clearly. As such, there is a recurring use of the ghost story in relation to different techniques of special effects and narrative on television, some more effective than others, and the presence of the ghost story on television waxes and wanes as different styles become more or less popular and different narrative forms, such as the single play, serial or series, become more or less dominant. Drawing on examples primarily from a British and US context, this chapter outlines the history of the ghost story on television and demonstrates how tensions in presentation, narrative and considerations of the viewer have influenced the many changes that have taken place within the genre.

Relevance:

10.00%

Publisher:

Abstract:

Exascale computation is the next target of high performance computing. In the push to create exascale computing platforms, simply increasing the number of hardware devices is not an acceptable option, given the limitations of power consumption, heat dissipation, and programming models designed for current hardware platforms. Instead, new hardware technologies, coupled with improved programming abstractions and more autonomous runtime systems, are required to achieve this goal. This position paper presents the design of a new runtime for a new heterogeneous hardware platform being developed to explore energy-efficient, high performance computing. By combining a number of different technologies, this framework will both simplify the programming of current and future HPC applications and automate the scheduling of data and computation across this new hardware platform. In particular, this work explores the use of FPGAs to achieve both the power and performance goals of exascale, and the use of the runtime to automatically effect dynamic configuration and reconfiguration of these platforms.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we propose a malware categorization method that models malware behaviour in terms of instructions using PageRank. PageRank computes ranks of web pages from the structure of their link graph; applied to malware analysis, it can likewise compute ranks of instructions that capture the structural information of the code. Our malware categorization method uses the computed ranks as features in machine learning algorithms. In the evaluation, we compare the effectiveness of different PageRank algorithms and also investigate bagging and boosting algorithms to improve categorization accuracy.
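A rough sketch of the idea, running PageRank over an instruction-transition graph and using the resulting ranks as a fixed-length feature vector, might look like the following. This is illustrative only; the opcode vocabulary, graph construction and PageRank parameters are assumptions, and the paper's method may differ.

```python
def pagerank_features(opcode_seq, vocab, damping=0.85, iters=50):
    """Compute per-opcode PageRank over the transition graph of an opcode
    sequence; the returned list (one rank per vocab entry) can serve as a
    feature vector for a classifier."""
    n = len(vocab)
    idx = {op: i for i, op in enumerate(vocab)}
    # Adjacency: an edge from each opcode to its successor in the sequence.
    out = [[] for _ in range(n)]
    for a, b in zip(opcode_seq, opcode_seq[1:]):
        out[idx[a]].append(idx[b])
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n  # teleportation term
        for i in range(n):
            if out[i]:
                share = damping * rank[i] / len(out[i])
                for j in out[i]:
                    new[j] += share
            else:
                # Dangling node: spread its rank uniformly.
                for j in range(n):
                    new[j] += damping * rank[i] / n
        rank = new
    return rank
```

Each iteration conserves total rank mass, so the output is a probability distribution over the vocabulary, which makes feature vectors comparable across binaries of different lengths.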