992 results for Multiplier-Less Architecture
Abstract:
It has often been assumed that the islands of Orkney were essentially treeless throughout much of the Holocene, with any ‘scrub’ woodland having been destroyed by Neolithic farming communities by around 3500 cal. BC. This apparently open, hyper-oceanic environment would presumably have provided quite marginal conditions for human settlement, yet Neolithic communities flourished and the islands contain some of the most spectacular remains of this period in north-west Europe. The study of new Orcadian pollen sequences, in conjunction with the synthesis of existing data, indicates that the timing of woodland decline was not synchronous across the archipelago, beginning in the Mesolithic, and that in some areas woodland persisted into the Bronze Age. There is also evidence to suggest that woodland communities in Orkney were more diverse, and therefore that a wider range of resources was available to Neolithic people, than has previously been assumed. Recent archaeological investigations have revealed evidence for timber buildings at early Neolithic settlement sites, suggesting that the predominance of stone architecture in Neolithic Orkney may not have been due to a lack of timber as has been supposed. Rather than simply reflecting adaptation to resource constraints, the reasons behind the shift from timber to stone construction are more complex and encompass social, cultural and environmental factors.
Abstract:
Approaches exploiting trait distribution extremes may be used to identify loci associated with common traits, but it is unknown whether these loci are generalizable to the broader population. In a genome-wide search for loci associated with the upper versus the lower 5th percentiles of body mass index, height and waist-to-hip ratio, as well as clinical classes of obesity, including up to 263,407 individuals of European ancestry, we identified 4 new loci (IGFBP4, H6PD, RSRC1 and PPP2R2A) influencing height detected in the distribution tails and 7 new loci (HNF4G, RPTOR, GNAT2, MRPS33P4, ADCY9, HS6ST3 and ZZZ3) for clinical classes of obesity. Further, we find a large overlap in genetic structure and in the distribution of variants between traits based on extremes and those based on the general population, and little etiological heterogeneity between obesity subgroups.
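As a conceptual illustration of the extremes design described above (a hedged sketch, not the study's analysis pipeline, which would use regression with ancestry covariates), the snippet below codes the upper and lower 5% tails of a trait as cases and controls and tests each variant for an allele-frequency difference; all function and variable names are assumptions.

```python
import numpy as np
from scipy.stats import chi2_contingency

def tail_association(trait, genotypes, q=0.05):
    """trait: (n,) phenotype; genotypes: (n, m) minor-allele counts in {0, 1, 2}."""
    lo, hi = np.quantile(trait, [q, 1 - q])
    upper, lower = trait >= hi, trait <= lo   # top/bottom tails as cases/controls
    pvals = []
    for g in genotypes.T:                     # one variant at a time
        # 2x2 table of minor vs major allele counts in each tail
        table = [[g[upper].sum(), 2 * upper.sum() - g[upper].sum()],
                 [g[lower].sum(), 2 * lower.sum() - g[lower].sum()]]
        pvals.append(chi2_contingency(table)[1])
    return np.array(pvals)
```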
Abstract:
Architecture Description Languages (ADLs) have emerged in recent years as tools for providing high-level descriptions of software systems in terms of their architectural elements and the relationships among them. Most current ADLs exhibit limitations that prevent their widespread use in industrial applications. In this paper, we discuss these limitations and introduce ALI, an ADL that has been developed to address them. The ALI language provides a rich and flexible syntax for describing component interfaces, architectural patterns and meta-information. Multiple graphical architectural views can then be derived from ALI's textual notation.
Abstract:
Software Product-Line Engineering has emerged in recent years as an important strategy for maximising reuse within the context of a family of related products. In current approaches to software product-lines, there is general agreement that the definition of a reference architecture for the product-line is an important step in the software engineering process. In this paper we introduce ADLARS, a new form of Architecture Description Language that places emphasis on the capture of architectural relationships. ADLARS is designed for use within a product-line engineering process. The language supports both the definition of architectural structure and of important architectural relationships. In particular, it supports the capture of relationships between product features, component and task architectures, interfaces and parameter requirements.
Abstract:
In this chapter Morrow talks of her return to Northern Ireland in 2003 and how her involvement in establishing a new school of architecture and a recent suite of interdisciplinary master's programmes has led her to consider the relationship between the post-conflict context, architectural practice and its education. She examines the consequences of not facing the effects of conflict; the impact on societal and architectural creativity; and the potential for live project pedagogy to evolve effective models of socio-spatial rehearsals. She concludes with some strategies for schools of architecture that wish to feed, and be fed by, their context. This is a personalized commentary that teeters somewhere between deep-seated frustration with a blindfolded profession and sustained belief in architectural education's potential to offer more than built solutions.
Abstract:
In the digital age, the hyperspace of virtual reality systems stands out as a new spatial concept creating a parallel realm to "real" space. Virtual reality influences one's experience of and interaction with architectural space. This "otherworld" invites criticism of the existing conception of space, time and body. Hyperspaces are relatively new to designers but not to filmmakers, and their cinematic representations help in comprehending the consequences of these new spaces. Visualisation of futuristic ideas on the big screen turns film into a medium for spatial experimentation. Creating a possible future, The Matrix (Andy and Larry Wachowski, 1999) takes the concept of hyperspace to a level not yet realised but imagined. With a critical gaze at the existing norms of architecture, the film creates new horizons in terms of space. In this context, this study introduces science fiction cinema as a discussion medium for understanding the potential of virtual reality systems for the architecture of the twenty-first century. As a "role model", cinema helps us better understand technological and spatial shifts. It acts as a vehicle for going beyond the spatial theories and designs of the twentieth century, and for defining the conception of space in contemporary architecture.
Abstract:
In The City of Collective Memory, urban historian M. Christine Boyer (1994) defines the image of a city as an abstracted concept, an imaginary (re)constructed form. This urban image is created from many aspects, one of which is the framed and edited views and experiences found in films situated in or about a particular city. In this study, to explore the collective memory of the city of Berlin from an architectural point of view, one film from each of the major historical periods of Berlin since the invention of cinema is examined: pre-WWI, the interwar period, the Nazi period, post-WWII, the Berlin Wall/Cold War era, and the reunification period. Memory-making in the city is studied by following the footsteps of the protagonists in the films, concluding that film-making and memory-making use similar processes, the editing of fragmented pieces of so-called reality, each to create its own reality.
Abstract:
Power dissipation and robustness to process variation pose conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor upsizing for parametric-delay variation tolerance can be detrimental for power dissipation. However, for a class of signal-processing systems, an effective tradeoff can be achieved between Vdd scaling, variation tolerance, and output quality. In this paper, we develop a novel low-power variation-tolerant algorithm/architecture for color interpolation that allows a graceful degradation in the peak signal-to-noise ratio (PSNR) under aggressive voltage scaling as well as extreme process variations. This feature is achieved by exploiting the fact that not all computations used in interpolating the pixel values contribute equally to PSNR improvement. In the presence of Vdd scaling and process variations, the architecture ensures that only the less important computations are affected by delay failures. We also propose a different sliding-window size from the conventional one, improving interpolation performance by a factor of two with negligible overhead. Simulation results show that, even at a scaled voltage of 77% of the nominal value, our design provides reasonable image PSNR with 40% power savings.
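The significance-based split can be pictured with a toy demosaicing kernel (a hedged sketch, not the paper's architecture: the base/correction decomposition below is illustrative). The base average is treated as the important computation, while the higher-order gradient correction is the first casualty of a delay failure under voltage over-scaling.

```python
import numpy as np

def interpolate_green(raw, y, x, refine=True):
    """Estimate a missing green sample at interior pixel (y, x) of a Bayer mosaic."""
    # Important computation: simple average of the four green neighbours.
    base = (raw[y-1, x] + raw[y+1, x] + raw[y, x-1] + raw[y, x+1]) / 4.0
    if not refine:              # emulate a correction dropped by a delay failure
        return base
    # Less important computation: Laplacian-style gradient correction from the
    # co-located channel; losing it degrades PSNR gracefully rather than fatally.
    corr = (4 * raw[y, x] - raw[y-2, x] - raw[y+2, x]
            - raw[y, x-2] - raw[y, x+2]) / 8.0
    return base + corr
```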
Abstract:
In this paper, we present a unified approach to the energy-efficient, variation-tolerant design of the Discrete Wavelet Transform (DWT) in the context of image-processing applications. Notably, most image-processing applications do not require exactly correct numerical outputs. We exploit this important feature and propose a design methodology for the DWT that exposes energy-quality tradeoffs at each level of the design hierarchy, from the algorithm level down to the architecture and circuit levels, by taking advantage of the limited perceptual ability of the Human Visual System. A unique feature of this design methodology is that it guarantees robustness under process variability and facilitates aggressive voltage over-scaling. Simulation results show significant energy savings (74%-83%) with minor degradation in output image quality, while averting the catastrophic failures that a conventional design suffers under process variations.
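A minimal sketch of the energy-quality lever described above, assuming a one-level Haar transform (the paper's filter choice and precision scheme are not specified here): approximation coefficients stay at full precision, while the perceptually less critical detail coefficients are coarsely quantized, standing in for a datapath run at scaled voltage.

```python
import numpy as np

def haar_dwt_1d(x):
    """One-level Haar DWT along the last axis (length must be even)."""
    a = (x[..., 0::2] + x[..., 1::2]) / 2.0   # approximation: protected path
    d = (x[..., 0::2] - x[..., 1::2]) / 2.0   # detail: quality-scalable path
    return a, d

def coarse_quantize(c, bits=4):
    """Emulate a low-precision (voltage-scaled) datapath for detail terms."""
    peak = float(np.max(np.abs(c))) or 1.0
    step = peak / (2 ** (bits - 1))
    return np.round(c / step) * step
```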
Abstract:
In this paper, we propose a novel finite impulse response (FIR) filter design methodology that reduces the number of operations, with the motivation of reducing power consumption and enhancing performance. The novelty of our approach lies in the generation of filter coefficients such that they conform to a given low-power architecture while meeting the given filter specifications. The proposed algorithm is formulated as a mixed integer linear programming problem that minimizes the Chebyshev error and synthesizes coefficients drawn from pre-specified alphabets. The new modified coefficients can be used for low-power VLSI implementation of vector scaling operations, such as FIR filtering, using a computation sharing multiplier (CSHM). Simulations in 0.25 µm technology show that the CSHM FIR filter architecture can achieve 55% power savings and a 34% speed improvement compared to carry-save multiplier (CSAM) based filters.
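The coefficient-alphabet constraint can be made concrete with a toy experiment. This is a hedged sketch only: the paper solves the synthesis jointly as an MILP, whereas the greedy per-tap rounding below merely illustrates the restricted coefficient space (here, sums of up to two signed powers of two, an assumed alphabet) that a computation-sharing multiplier exploits.

```python
import numpy as np
from itertools import product

# Assumed alphabet: sums of up to two signed powers of two, e.g. 0.5 + 0.0625.
ALPHABET = sorted({s1 * 2.0**p1 + s2 * 2.0**p2
                   for s1 in (-1, 0, 1) for s2 in (-1, 0, 1)
                   for p1, p2 in product(range(-8, 0), repeat=2)})

def snap_to_alphabet(h):
    """Round each ideal tap to the nearest alphabet value (greedy, not MILP)."""
    return np.array([min(ALPHABET, key=lambda a: abs(a - c)) for c in h])

def chebyshev_error(h_ideal, h_snapped, n_fft=1024):
    """Max deviation of the magnitude response caused by coefficient snapping."""
    H1 = np.abs(np.fft.rfft(h_ideal, n_fft))
    H2 = np.abs(np.fft.rfft(h_snapped, n_fft))
    return float(np.max(np.abs(H1 - H2)))
```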
Abstract:
Power dissipation and tolerance to process variations pose conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor up-sizing for process tolerance can be detrimental for power dissipation. However, for certain signal-processing systems, such as those used in color image processing, we note that effective trade-offs can be achieved between Vdd scaling, process tolerance and "output quality". In this paper we demonstrate how these trade-offs can be effectively utilized in the development of novel low-power, variation-tolerant architectures for color interpolation. The proposed architecture supports a graceful degradation in the PSNR (Peak Signal to Noise Ratio) under aggressive voltage scaling as well as extreme process variations in sub-70nm technologies. This is achieved by exploiting the fact that some computations are more important, contributing more to the PSNR improvement than others. The computations are mapped to the hardware in such a way that only the less important computations are affected by Vdd scaling and process variations. Simulation results show that even at a scaled voltage of 60% of the nominal Vdd value, our design provides reasonable image PSNR with 69% power savings.
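A back-of-envelope check (not from the paper) shows why voltage scaling dominates the savings: dynamic power grows quadratically with supply voltage, so running at 60% of nominal Vdd already cuts the dynamic component by roughly 64%; the reported 69% plausibly adds savings from the less important computations that are allowed to fail.

$$ P_{\mathrm{dyn}} \propto \alpha\, C\, V_{dd}^{2}\, f \quad\Longrightarrow\quad \frac{P(0.6\,V_{\mathrm{nom}})}{P(V_{\mathrm{nom}})} \approx 0.6^{2} = 0.36 $$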
Abstract:
In this paper, we propose a design paradigm for energy-efficient and variation-aware operation of next-generation multicore heterogeneous platforms. The main idea behind the proposed approach lies in the observation that not all operations are equally important in shaping the output quality of various applications and of the overall system. Based on this observation, we suggest that all levels of the software design stack, including the programming model, compiler, operating system (OS) and run-time system, should identify the critical tasks and ensure their correct operation by assigning them to dynamically adjusted reliable cores/units. Specifically, based on error rates and operating conditions identified by a sense-and-adapt (SeA) unit, the OS selects and sets the right mode of operation for the overall system. The run-time system identifies critical and less-critical tasks based on special directives and schedules them to the appropriate units, which are dynamically adjusted for highly accurate or approximate operation by tuning their voltage/frequency. Units that execute less significant operations can, if required, operate below the voltage needed for fully correct operation and thus consume less power, since such tasks, unlike the critical ones, need not always be exact. Such a scheme can lead to energy-efficient and reliable operation, while reducing the design cost and overheads of conventional circuit/micro-architecture-level techniques.
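The criticality-directed scheduling could look roughly like the sketch below. This is purely illustrative: the directive mechanism, the two core pools and all names are assumptions, not the paper's run-time API.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    critical: bool   # would come from the programmer's significance directive

def schedule(tasks, reliable_cores, approximate_cores):
    """Map critical tasks to nominal-voltage cores, the rest to V/f-scaled ones."""
    plan, r, a = [], 0, 0
    for t in tasks:
        if t.critical:
            plan.append((t.name, reliable_cores[r % len(reliable_cores)]))
            r += 1
        else:
            plan.append((t.name, approximate_cores[a % len(approximate_cores)]))
            a += 1
    return plan

# Hypothetical example: bitstream packing must be exact; motion estimation
# tolerates occasional errors, so it may run on an approximate core.
tasks = [Task("pack_bitstream", True), Task("motion_estimate", False)]
print(schedule(tasks, ["core0"], ["core2", "core3"]))
```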
Abstract:
In this paper, we propose a system-level design approach considering voltage over-scaling (VOS) that achieves error resiliency using unequal error protection of different computation elements, while incurring minor quality degradation. Depending on user specifications and the severity of process variations/channel noise, the degree of VOS in each block of the system is adaptively tuned to ensure minimum system power while providing "just the right" amount of quality and robustness. This is achieved by taking into consideration block-level interactions and ensuring that, under any change of operating conditions, only the "less crucial" computations, which contribute less to block/system output quality, are affected. The proposed approach applies unequal error protection to the various blocks of a system (logic and memory) and spans multiple layers of the design hierarchy (algorithm, architecture and circuit). When applied to a multimedia subsystem, the design methodology shows large power benefits (up to 69% improvement in power consumption) at reasonable image quality, while tolerating errors introduced due to VOS, process variations and channel noise.
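One way to picture the adaptive per-block tuning is the greedy loop below (a toy model under assumed abstractions: the quality score, step size and per-block cost fields are all illustrative, not the paper's controller).

```python
def tune_vos(blocks, quality_target, step=0.05):
    """Greedy sketch: lower Vdd on the least crucial blocks first.

    blocks: list of dicts with keys 'name', 'vdd', 'min_vdd' and
    'quality_cost', the estimated quality lost per VOS step.
    """
    quality = 100.0                      # illustrative output-quality score
    for b in sorted(blocks, key=lambda b: b["quality_cost"]):
        while b["vdd"] - step >= b["min_vdd"]:
            if quality - b["quality_cost"] < quality_target:
                return blocks, quality   # quality budget exhausted
            b["vdd"] -= step             # one more VOS step on this block
            quality -= b["quality_cost"]
    return blocks, quality
```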
Abstract:
A fully homomorphic encryption (FHE) scheme is envisioned as a key cryptographic tool in building a secure and reliable cloud computing environment, as it allows arbitrary evaluation of a ciphertext without revealing the plaintext. However, existing FHE implementations remain impractical due to very high time and resource costs. To the authors’ knowledge, this paper presents the first hardware implementation of a full encryption primitive for FHE over the integers using FPGA technology. A large-integer multiplier architecture utilising Integer-FFT multiplication is proposed, and a large-integer Barrett modular reduction module is designed incorporating the proposed multiplier. The encryption primitive used in the integer-based FHE scheme is designed employing the proposed multiplier and modular reduction modules. The designs are verified using the Xilinx Virtex-7 FPGA platform. Experimental results show that a speed improvement factor of up to 44 is achievable for the hardware implementation of the FHE encryption scheme when compared to its corresponding software implementation. Moreover, performance analysis shows further speed improvements of the integer-based FHE encryption primitives may still be possible, for example through further optimisations or by targeting an ASIC platform.
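For flavour, Barrett reduction itself is the textbook algorithm behind the paper's modular-reduction module: it replaces division by the modulus with multiplications and shifts using a precomputed constant. The Python below is a software sketch with arbitrary-precision integers, not the paper's fixed FPGA word widths.

```python
def barrett_setup(m):
    """Precompute k = bit length of m and mu = floor(4**k / m)."""
    k = m.bit_length()
    return k, (1 << (2 * k)) // m

def barrett_reduce(x, m, k, mu):
    """Compute x mod m for 0 <= x < 4**k using only shifts and multiplies."""
    q = ((x >> (k - 1)) * mu) >> (k + 1)   # underestimates x // m by at most 2
    r = x - q * m
    while r >= m:                          # hence at most two corrective steps
        r -= m
    return r

# Example: reduce a product of two operands smaller than the modulus.
m = (1 << 61) - 1
k, mu = barrett_setup(m)
a, b = 123456789123456789, 987654321987654321
assert barrett_reduce(a * b, m, k, mu) == (a * b) % m
```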
Abstract:
Tephrochronology, a key tool in the correlation of Quaternary sequences, relies on the extraction of tephra shards from sediments for visual identification and high-precision geochemical comparison. A prerequisite for the reliable correlation of tephra layers is that the geochemical composition of glass shards remains unaltered by natural processes (e.g. chemical exchange in the sedimentary environment) and/or by laboratory analytical procedures. However, natural glasses, particularly when in the form of small shards with a high surface-to-volume ratio, are prone to chemical alteration in both acidic and basic environments. Current techniques for the extraction of distal tephra from sediments involve the 'cleaning' of samples in precisely such environments and at elevated temperatures. The acid phase of the 'cleaning' process risks alteration of the geochemical signature of the shards, while the basic phase leads to considerable sample loss through dissolution of the silica network. Here, we illustrate the degree of alteration and loss to which distal tephras may be prone, and introduce a less destructive procedure for their extraction. This method is based on stepped heavy-liquid flotation and results in samples of sufficient quality for analysis while preserving their geochemical integrity. In trials, this method outperformed chemical extraction procedures in terms of the number of shards recovered and has resulted in the detection of new tephra layers with low shard concentrations. The implications of this study are highly significant because (i) the current database of distal tephra records and their corresponding geochemical signatures may require refinement and (ii) the record of distal tephras may be incomplete due to sample loss induced by corrosive laboratory procedures. It is therefore vital that less corrosive laboratory procedures are developed to make the detection and classification of distal glass tephra more secure.