Abstract:
Details are presented of the IRIS synthesis system for high-performance digital signal processing. This tool allows non-specialists to automatically derive VLSI circuit architectures from high-level, algorithmic representations, and provides a quick route to silicon implementation. The applicability of the system is demonstrated using the design example of a one-dimensional Discrete Cosine Transform circuit.
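The abstract does not reproduce the transform itself; for reference, a minimal sketch of the 1-D DCT-II (assumed here to be the variant the circuit implements; a real silicon design would use a fast, fixed-point factorisation rather than this naive O(N²) form) is:

```python
import math

def dct_1d(x):
    """Naive O(N^2) 1-D DCT-II of a real sequence, orthonormal scaling."""
    n = len(x)
    out = []
    for k in range(n):
        # Correlate the input with the k-th cosine basis function.
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out
```

A constant input concentrates all energy in the DC coefficient, which is the usual sanity check for any DCT implementation.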
Abstract:
The concept of space entered architectural history as late as 1893. Studies in art opened up the discussion, and it has been studied in various ways in architecture ever since. This article aims to instigate an additional reading of architectural history, one that is not supported by "isms" but based on the space theories of the 20th century. The objectives of the article are to bring the concept of space and its changing paradigms to the attention of architectural researchers, to introduce a conceptual framework to classify and clarify theories of space, and to enrich the discussion of 20th-century architecture through theories that go beyond styles. The introduction of space in architecture will revolve around subject-object relationships, three-dimensionality and the senses. Modern space will be discussed through concepts such as empathy, perception, abstraction, and geometry. A scientific approach will follow to study the concept of place through environment, event, behavior, and design methods. Finally, the research will look at contemporary approaches related to digitally supported space via concepts like reality-virtuality, mediated experience, and the relationship with machines.
Abstract:
Inter-component communication has always been of great importance in the design of software architectures, and connectors have been considered first-class entities in many approaches [1][2][3]. We present a novel architectural style that is derived from the well-established domain of computer networks. The style adopts the inter-component communication protocol in a novel way that allows large-scale software reuse. It mainly targets real-time, distributed, concurrent, and heterogeneous systems.
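The abstract does not detail the style's interface; purely as an illustration of treating a connector as a first-class entity that routes messages between components in a network-like way (all names here are hypothetical, not the authors' API), a toy sketch might look like:

```python
class Connector:
    """A first-class connector that routes messages between components,
    in the spirit of a network-style communication protocol."""

    def __init__(self):
        self._endpoints = {}

    def attach(self, name, handler):
        """Register a component's receive handler under an address."""
        self._endpoints[name] = handler

    def send(self, dest, payload):
        """Deliver a payload to the component attached at `dest`."""
        if dest not in self._endpoints:
            raise KeyError(f"no component attached as {dest!r}")
        return self._endpoints[dest](payload)

bus = Connector()
bus.attach("logger", lambda msg: f"logged: {msg}")
reply = bus.send("logger", "hello")
```

Because the connector, not the components, owns addressing and delivery, components can be reused unchanged across different deployments, which is the kind of reuse the style targets.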
Abstract:
The main aim of this study is to investigate the consequences of cross-cultural adjustment in an under-researched sample of British expatriates working on international Architectural, Engineering and Construction (AEC) assignments. Adjustment is the primary outcome of an expatriate assignment. According to Bhaskar-Srinivas et al. (2005) and Harrison et al. (2004), it is viewed as affecting other work-related outcomes which could eventually predict expatriate success. To address the scarcity of literature on expatriate management in the AEC sector, an exploratory design was adopted. Phase one comprised an extensive review of the extant literature, whereas phase two was a qualitative exploration from the British expatriates' perspective, in which seven unstructured interviews were carried out. Further, cognitive mapping analysis using Banxia Decision Explorer software was conducted to develop a theoretical framework and propose various hypotheses. The findings imply that British AEC firms could sustain their established competitive advantage in the global marketplace by acknowledging the complexity of international assignments, prioritising expatriate management, and offering well-rounded support to facilitate expatriate adjustment and ultimately achieve critical outcomes such as performance, assignment completion and job satisfaction.
Abstract:
Hardware designers and engineers typically need to explore a multi-parametric design space in order to find the best configuration for their designs, using simulations that can take weeks to months to complete. For example, designers of special-purpose chips need to explore parameters such as the optimal bitwidth and data representation. This is the case for the development of complex algorithms such as the Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options, ranging from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture to use can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, to map the simulations onto FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code has been used to target these three different platforms. We show that, depending on the design parameters to be explored in the simulation and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors. For example, although simulations can typically execute more than 3x faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
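The paper's decoder kernels are not reproduced in the abstract; as a toy stand-in, a hard-decision bit-flipping LDPC decoder (far simpler than the min-sum or belief-propagation decoders such simulations evaluate) shows the per-check and per-bit parallelism that maps naturally onto CPUs, GPUs and FPGAs:

```python
def bit_flip_decode(H, r, max_iters=10):
    """Hard-decision bit-flipping decoding.

    H is the parity-check matrix as a list of rows of 0/1 and r is the
    received hard-bit vector; returns the corrected codeword estimate.
    """
    c = list(r)
    n = len(c)
    for _ in range(max_iters):
        # Syndrome: which parity checks currently fail (one per row of H).
        syndrome = [sum(H[j][i] * c[i] for i in range(n)) % 2
                    for j in range(len(H))]
        if not any(syndrome):
            break  # all checks satisfied: valid codeword
        # Count failed checks touching each bit; flip the worst offenders.
        fails = [sum(syndrome[j] for j in range(len(H)) if H[j][i])
                 for i in range(n)]
        worst = max(fails)
        c = [b ^ 1 if f == worst else b for b, f in zip(c, fails)]
    return c
```

Each syndrome entry and each per-bit failure count is independent of the others, which is exactly the data parallelism an OpenCL kernel exploits on all three platforms.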
Abstract:
After an open competition, we were selected to commission, curate and design the Irish pavilion for the Venice Biennale 2014. Our proposal engaged with the role of infrastructure and architecture in the cultural development of the new Irish state, 1914-2014. This curatorial programme was realised in a demountable, open-matrix pavilion measuring 12 x 5 x 6 metres.

How modernity is absorbed into national cultures usually presupposes an attachment to previous conditions and a desire to reconcile the two. In an Irish context, owing to the processes of de-colonisation and political independence, this relationship is more complicated.

In 1914, Ireland was largely agricultural and lacked any significant industrial complex. The construction of new infrastructures after independence in 1921 became central to the cultural imagining of the new nation. The adoption of modernist architecture was perceived as a way to escape the colonial past. As the desire to reconcile cultural and technological aims developed, these infrastructures became both the physical manifestation and the concrete identity of the new nation, with architecture an essential element in this construct.

Technology and infrastructure are inherently cosmopolitan. Beginning with the Shannon hydro-electric facility at Ardnacrusha (1929), which involved the German firm Siemens-Schuckert, Ireland became a point of various intersections between imported international expertise and local need. By the turn of the last century, it had become one of the most globalised countries in the world, site of the European headquarters of multinationals such as Google and Microsoft. Climatically and economically expedient for the storing and harvesting of data, Ireland has subsequently become an important repository of digital information farmed in large, single-storey sheds absorbed into dispersed suburbs. In 2013, it became the preferred site for Intel to design and develop its new microprocessor board, the Galileo, a building block for the internet of things.

The story of the decades in between, of shifts made manifest in architecture and infrastructure, from the policies of economic protectionism to the embracing of the EU, is one of the influx of technologies and cultural references into a small country on the edges of Europe: Ireland as both launch-pad and testing ground for a series of aspects of designed modernity.
Abstract:
This essay investigates the changing dynamics of interaction and the paradigm of communication in the design studio. It analyses the practical implementation of interactive tools in architectural education that placed the diversity of students' cultural experiences, contextual awareness and individual interests as a crucial resource for design innovation and inquiry. Building on Brian Lawson's thesis on creativity in design thinking, this research project undertook a comprehensive investigation of students' satisfaction with their roles in the studio and the room for liberal thought they are given to elaborate a genuine approach to architectural matters. The cyclical development of the interactive learning strategy is explored through two different settings: first, the analysis considers architectural students' position as passive/active in the studio, in relation to their tutors' ideals; second, it reports on an empirical strategy of student-led workshops at British schools of architecture, during which students took the lead on their creative design agenda. The practical implementation of interactive learning tools proved influential in helping students to personalise their design direction and to build a sense of confidence and independence.
Abstract:
Purpose - The purpose is to unearth managerial representations of achieving competitive advantage in relation to architectural firms operating within the United Kingdom (UK).
Design/Methodology/Approach - A sequential qualitative methodology is applied, underpinned by nine managerial interviews in five architectural practices, all of which are analysed using computer-assisted qualitative data analysis software.
Findings - 108 representations are identified, with the highly rated concepts discussed in detail. The leading concepts include reputation, client satisfaction, fees and staff resources, among others.
Research Limitations/Implications - There are numerous studies on this subject; however, no research to date has documented managerial representations of achieving competitive advantage in the context of UK architectural firms.
Practical Implications - The need for architectural firms to develop a competitive advantage within their market sector is ever more apparent, particularly during times of increased competitiveness.
Originality/Value - This paper fills a gap in knowledge by contributing to the underlying research on competitive advantage, focusing on managerial representations specifically within UK practices. The findings are of relevance to architects in the UK and beyond, and may form the basis for identifying further research within the area.
Abstract:
Citizens' expectations of Information Technologies (ITs) are increasing as ITs have become an integral part of our society, serving all kinds of activities, whether professional, leisure, safety-critical applications or business. The limitations of traditional network designs in providing innovative and enhanced services have therefore motivated a consensus to integrate all services over packet-switching infrastructures, using the Internet Protocol, so as to leverage flexible control and economic benefits in Next Generation Networks (NGNs). However, the Internet is not capable of treating services differently, while each service has its own requirements (e.g., Quality of Service - QoS). The need for more evolved forms of communication has therefore driven radical changes in architectural and layering designs, which demand appropriate solutions for service admission and network resource control. This Thesis addresses QoS and network control issues, aiming to improve overall control performance in current and future networks which classify services into classes. The Thesis is divided into three parts. In the first part, we propose two resource over-reservation algorithms: Class-based bandwidth Over-Reservation (COR) and an Enhanced COR (ECOR). Over-reservation means reserving more bandwidth than a Class of Service (CoS) needs, so that the QoS reservation signalling rate is reduced. COR and ECOR allow over-reservation parameters for CoSs to be defined dynamically based on the resource conditions of network interfaces; they aim to reduce QoS signalling and the related overhead without incurring CoS starvation or waste of bandwidth. ECOR differs from COR in further optimizing the minimization of control overhead.
Further, we propose a centralized control mechanism called the Advanced Centralization Architecture (ACA), which uses a single stateful Control Decision Point (CDP) that maintains an up-to-date view of its underlying network topology and the related link resource statistics in real time to control the overall network. Notably, this Thesis uses multicast trees as the basis for session transport, not only for group communication purposes, but mainly to pin the packets of a session mapped to a tree so that they follow the desired tree. Our simulation results show a drastic reduction of QoS control signalling and the related overhead without QoS violation or waste of resources. Besides, we provide a generic-purpose analytical model to assess the impact of various parameters (e.g., link capacity, session dynamics, etc.) that generally challenge resource over-provisioning control. In the second part of this Thesis, we propose a decentralized control mechanism called Advanced Class-based resource OverpRovisioning (ACOR), which aims to achieve better scalability than the ACA approach. ACOR enables multiple CDPs, distributed at the network edge, to cooperate and exchange appropriate control data (e.g., tree and bandwidth usage information) such that each CDP is able to maintain good knowledge of the network topology and the related link resource statistics in real time. From a scalability perspective, ACOR cooperation is selective, meaning that control information is exchanged dynamically only among the CDPs which are concerned (correlated). Moreover, synchronization is carried out through our proposed concept of Virtual Over-Provisioned Resource (VOPR), which is a share of the over-reservations of each interface allotted to each tree that uses the interface. Thus, each CDP can process several session requests over a tree without requiring synchronization between the correlated CDPs as long as the VOPR of the tree is not exhausted.
Analytical and simulation results demonstrate that aggregate over-reservation control in decentralized scenarios keeps signalling low without QoS violations or waste of resources. We also introduce a control signalling protocol called the ACOR Protocol (ACOR-P) to support the centralization and decentralization designs in this Thesis. Further, we propose an Extended ACOR (E-ACOR), which aggregates the VOPR of all trees that originate at the same CDP, so that more session requests can be processed without synchronization when compared with ACOR. In addition, E-ACOR introduces a mechanism to efficiently track network congestion information, preventing unnecessary synchronization during congestion periods when VOPRs would be exhausted upon every session request. The performance evaluation, through analytical and simulation results, proves the superiority of E-ACOR in minimizing overall control signalling overhead while keeping all the advantages of ACOR, that is, without incurring QoS violations or waste of resources. The last part of this Thesis presents the Survivable ACOR (SACOR) proposal to support stable operation of the QoS and network control mechanisms in case of failures and recoveries (e.g., of links and nodes). The performance results show flexible survivability characterized by fast convergence time and differentiation of traffic re-routing under efficient resource utilization, i.e., without wasting bandwidth. In summary, the QoS and architectural control mechanisms proposed in this Thesis provide efficient and scalable support for key network control sub-systems (e.g., QoS and resource control, traffic engineering, multicasting, etc.), and thus allow overall network control performance to be optimized.
Resumo:
This study focuses on the efficacy of design studio as a form of teaching and learning, where traditional approaches can act to position the tutor as a defender of the knowledge community rather than a discourse guide for the student. The broad curriculum of architectural education with its divergent outcomes resulting from project based learning also makes it difficult to agree on what constitutes the fundamental elements of the curriculum. The research used an approach based on threshold concepts to assist in identifying and overcoming these shortcomings. Such approaches have been described as 'liminal': holding the learner in a supportive 'in-between' state where learning resources can be directed to that which is troublesome and conceptually difficult. The study involved the use of practices to identify troublesome knowledge in design studio and conceptualise blended learning as part of a liminal studio space.
Resumo:
Recent integrated circuit technologies have opened the possibility to design parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider extra metrics like performance and area efficiency, where the designer tries to design the architecture with the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to design a many-core architecture. Instead of doing the design space exploration of the many core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory and the memory hierarchy. To exemplify the approach we did a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles with the architectural parameters. Based on this equation a many-core architecture has been designed. The results obtained indicate that a 100 mm(2) integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double precision floating-point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71 %. 
Considering a 45 nm technology, a 100 mm(2) chip attains 833 GFLOPs which corresponds to 84 % of peak performance These figures are better than those obtained by previous many-core architectures, except for the area efficiency which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-cores architectures designed specifically to achieve high performance for matrix multiplication.
Resumo:
Layout planning is a process of sizing and placing rooms (e.g. in a house) while a t t empt ing to optimize various criteria. Often the r e are conflicting c r i t e r i a such as construction cost, minimizing the distance between r e l a t ed activities, and meeting the area requirements for these activities. The process of layout planning ha s mostly been done by hand, wi th a handful of a t t empt s to automa t e the process. Thi s thesis explores some of these pa s t a t t empt s and describes several new techniques for automa t ing the layout planning process using evolutionary computation. These techniques a r e inspired by the existing methods, while adding some of the i r own innovations. Additional experimenLs are done to t e s t the possibility of allowing polygonal exteriors wi th rectilinear interior walls. Several multi-objective approaches are used to evaluate and compare fitness. The evolutionary r epr e s ent a t ion and requirements specification used provide great flexibility in problem scope and depth and is worthy of considering in future layout and design a t t empt s . The system outlined in thi s thesis is capable of evolving a variety of floor plans conforming to functional and geometric specifications. Many of the resulting plans look reasonable even when compared to a professional floor plan. Additionally polygonal and multi-floor buildings were also generated.
Resumo:
En utilisant des approches qualitative and quantitative cette thèse démontre que les aspects intangibles des espaces architecturaux influencent le bien-être humain. Le but est de faire savoir que les espaces intérieurs ont un impact sur le bien-être et que l’architecture peut être considérée comme une solution pour satisfaire les besoins des usagers. Dans la première étude, l’approche qualitative est explorée en utilisant la narration pour identifier les aspects intangibles des espaces intérieurs qui affectent le bien-être. Une discussion s’articule autour du Modèle de Réponses Expérientielles des Humains (Model of Human Experiential Responses to Space) et de son importance comme outil pour déterrer les caractéristiques environnementales qui influencent le bien-être et qui peut être utile pour les professionnels du design. Les résultats démontrent que 43 catégories sont interprétées comme étant des aspects intangibles et servent de canevas pour trois autres études. Les résultats démontrent que certaines caractéristiques environnementales similaires dans les résidences et les bureaux augmentent le sentiment de satisfaction et de bien-être. Dans la deuxième étude, une approche quantitative est explorée en utilisant les neurosciences et l’architecture afin de mesurer comment les espaces architecturaux affectent le bien-être. Le concept de neuroscience / environnement / comportement est utilisé où huit corrélats neuroscientifiques (Zeisel 2006) sont investigués afin de mesurer les effets du cerveau sur les espaces architecturaux. Les résultats démontrent que l’environnement peut affecter l’humeur, le niveau d’attention et le niveau de stress chez les humains et peut également augmenter leur performance. Les deux études contribuent aux connaissances que les caractéristiques environnementales affectent l’humeur et le niveau de satisfaction de la même façon dans les espaces résidentiels et dans les espaces de bureaux. 
Un bon environnement qui énergise les employés peut affecter leur performance au travail de façon positive (Vischer 2005).