103 results for architectural computation
Abstract:
In this paper, we propose a design paradigm for energy-efficient and variation-aware operation of next-generation multicore heterogeneous platforms. The main idea behind the proposed approach lies in the observation that not all operations are equally important in shaping the output quality of various applications and of the overall system. Based on this observation, we suggest that all levels of the software design stack, including the programming model, compiler, operating system (OS) and run-time system, should identify the critical tasks and ensure their correct operation by assigning them to dynamically adjusted reliable cores/units. Specifically, based on error rates and operating conditions identified by a sense-and-adapt (SeA) unit, the OS selects and sets the right mode of operation for the overall system. The run-time system identifies the critical/less-critical tasks based on special directives and schedules them to the appropriate units, which are dynamically adjusted for highly accurate/approximate operation by tuning their voltage/frequency. Units that execute less significant operations can, if required, operate at voltages below what is needed for fully correct operation and thus consume less power, since such tasks, unlike the critical ones, do not always need to be exact. Such a scheme can lead to energy-efficient and reliable operation, while reducing the design cost and overheads of conventional circuit/micro-architecture level techniques.
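A rough sketch of how such significance-driven scheduling might look is given below; the core classes, operating-mode table and error-rate threshold are hypothetical illustrations, not the system described in the paper.

```python
# Minimal sketch of significance-aware task scheduling. It assumes a
# hypothetical platform with "reliable" cores (nominal voltage) and
# "approximate" cores (scaled-down voltage); the mode table and the
# error-rate threshold are illustrative values, not the paper's.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    critical: bool  # set via programmer directives in the paper's model

# Operating modes the OS could choose between, based on error rates
# reported by a sense-and-adapt (SeA) unit.
MODES = {
    "low_error":  {"approx_voltage": 0.70, "reliable_voltage": 0.90},
    "high_error": {"approx_voltage": 0.80, "reliable_voltage": 1.00},
}

def schedule(tasks, observed_error_rate):
    """Route critical tasks to reliable cores and the rest to
    voltage-scaled approximate cores, according to the selected mode."""
    mode = MODES["high_error" if observed_error_rate > 0.01 else "low_error"]
    plan = []
    for t in tasks:
        if t.critical:
            plan.append((t.name, "reliable_core", mode["reliable_voltage"]))
        else:
            plan.append((t.name, "approx_core", mode["approx_voltage"]))
    return plan

print(schedule([Task("header_decode", True), Task("pixel_filter", False)], 0.02))
```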
Abstract:
In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of observed data. Such distribution functions usually rely on additional hyperparameters, which need to be tuned in order to achieve optimum predictive performance; this operation can be efficiently performed in an Empirical Bayes fashion by maximizing the posterior marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that each evaluation is usually computationally intensive and scales poorly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We prove that the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
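To make the flavour of such identities concrete, the sketch below (ours, not the paper's code) handles only the simple case in which the kernel matrix is fixed and just an output scale and a noise variance are tuned: after a single O(N^3) eigendecomposition, each evaluation of the log marginal likelihood costs O(N). The paper's identities are more general, covering the gradient and Hessian as well.

```python
# Illustrative O(N) evaluation of the GP log marginal likelihood after a
# one-off O(N^3) eigendecomposition. Only the amplitude s and the noise
# variance s2n are treated as hyperparameters here; function names are ours.
import numpy as np

def precompute(K, y):
    """One-off cubic-cost work: eigendecompose K and rotate y into its eigenbasis."""
    lam, Q = np.linalg.eigh(K)      # K = Q diag(lam) Q^T
    return lam, Q.T @ y

def log_marginal_likelihood(lam, y_rot, s, s2n):
    """O(N) per call: s*K + s2n*I has eigenvalues s*lam + s2n (same eigenvectors)."""
    d = s * lam + s2n
    n = lam.size
    return (-0.5 * np.sum(y_rot**2 / d)
            - 0.5 * np.sum(np.log(d))
            - 0.5 * n * np.log(2.0 * np.pi))

# Tiny usage example with an RBF kernel on random 1-D inputs.
rng = np.random.default_rng(0)
x = rng.uniform(size=50)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)
y = rng.standard_normal(50)
lam, y_rot = precompute(K, y)
print(log_marginal_likelihood(lam, y_rot, s=1.0, s2n=0.1))
```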
Abstract:
Molecular logic-based computation is a broad umbrella covering molecular sensors at its simplest level and logic gate arrays involving steadily increasing levels of parallel and serial integration. The fluorescent PET (photoinduced electron transfer) switching principle remains a loyal servant of this entire field. Applications arise from the convenient operation of molecular information processors in very small spaces.
Abstract:
After an open competition, we were selected to commission, curate and design the Irish pavilion for the Venice Biennale 2014. Our proposal engaged with the role of infrastructure and architecture in the cultural development of the new Irish state, 1914-2014. This curatorial programme was realised in a demountable, open matrix pavilion measuring 12 x 5 x 6 metres.
How modernity is absorbed into national cultures usually presupposes an attachment to previous conditions and a desire to reconcile the two. In an Irish context, due to the processes of de-colonisation and political independence, this relationship is more complicated.
In 1914, Ireland was largely agricultural and lacked any significant industrial complex. The construction of new infrastructures after independence in 1921 became central to the cultural imagining of the new nation. The adoption of modernist architecture was perceived as a way to escape the colonial past. As the desire to reconcile cultural and technological aims developed, these infrastructures became both the physical manifestation and concrete identity of the new nation with architecture an essential element in this construct.
Technology and infrastructure are inherently cosmopolitan. Beginning with the Shannon hydro-electric facility at Ardnacrusha (1929), involving the German firm of Siemens-Schuckert, Ireland became a point of various intersections between imported international expertise and local need. By the turn of the twenty-first century, it had become one of the most globalised countries in the world, site of the European headquarters of multinationals such as Google and Microsoft. Climatically and economically expedient for the storing and harvesting of data, Ireland has subsequently become an important repository of digital information farmed in large, single-storey sheds absorbed into dispersed suburbs. In 2013, it became the preferred site for Intel to design and develop its new microprocessor board, the Galileo, a building block for the internet of things.
The story of the decades in between, of shifts made manifest in architecture and infrastructure, from the policies of economic protectionism to the embracing of the EU, is one of the influx of technologies and cultural references into a small country on the edges of Europe: Ireland as both a launch-pad and testing ground for a series of aspects of designed modernity.
Abstract:
It is an exciting era for molecular computation because molecular logic gates are being pushed in new directions. The use of sulfur rather than the commonplace nitrogen as the key receptor atom in metal ion sensors is one of these directions; plant cells coming within the jurisdiction of fluorescent molecular thermometers is another, combining photochromism with voltammetry for molecular electronics is yet another. Two-input logic gates benefit from old ideas such as rectifying bilayer electrodes, cyclodextrin-enhanced room-temperature phosphorescence, steric hindrance, the polymerase chain reaction, charge transfer absorption of donor–acceptor complexes and lectin–glycocluster interactions. Furthermore, the concept of photo-uncaging enables rational ways of concatenating logic gates. Computational concepts are also applied to potential cancer theranostics and to the selective monitoring of neurotransmitters in situ. Higher numbers of inputs are also accommodated with the concept of functional integration of gates, where complex input–output patterns are sought out and analysed. Molecular emulation of computational components such as demultiplexers and parity generators/checkers are achieved in related ways. Complexity of another order is tackled with molecular edge detection routines.
Abstract:
Motivated by the need for designing efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded degree) expander topology despite heavy {\em churn} (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of what nodes join and leave and at what time and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees with high probability the maintenance of a {\em constant} degree graph with {\em high expansion} even under {\em continuous high adversarial} churn. Our protocol can tolerate a churn rate of up to $O(n/\mathrm{poly}\log(n))$ per round (where $n$ is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only $O(\mathrm{poly}\log(n))$ overhead for topology maintenance: only polylogarithmic (in $n$) bits need to be processed and sent by each node per round, and any node's computation cost per round is also polylogarithmic. The given protocol is a fundamental ingredient for the design of efficient fully-distributed algorithms for solving fundamental distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks, and it enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.
Abstract:
Molecular logic-based computation continues to throw up new applications in sensing and switching, the newest of which is the edge detection of objects. The scope of this phenomenon is mapped out by the use of structure-activity relationships, where several structures of the molecules and of the objects are examined. The different angles and curvatures of the objects are followed with good fidelity in the visualized edges, even when the objects are in reverse video.
Abstract:
Purpose - The purpose is to unearth managerial representations of achieving competitive advantage in relation to architectural firms operating within the United Kingdom (UK).
Design/Methodology/Approach - A sequential qualitative methodology is applied, underpinned by nine managerial interviews in five architectural practices; all of which are analysed using computer assisted qualitative data analysis software.
Findings - 108 representations are identified, with highly rated concepts discussed in detail. The leading concepts include reputation, client satisfaction, fees and staff resources, among others.
Research Limitations/Implications - Numerous studies have been conducted on this subject; however, no research to date has documented managerial representations of achieving competitive advantage among architectural firms within the UK.
Practical Implications - The need for architectural firms to develop a competitive advantage within their market sector is ever more apparent, particularly during times of increased competitiveness.
Originality/Value - This paper fills a gap in knowledge by contributing to underlying research on the subject of competitive advantage, focusing specifically on managerial representations within UK practices. The findings are of relevance to architects both in the UK and beyond, and may form the basis for identifying further research within the area.
Abstract:
In Boolean games, agents try to reach a goal formulated as a Boolean formula. These games are attractive because of their compact representations. However, few methods are available to compute the solutions and they are either limited or do not take privacy or communication concerns into account. In this paper we propose the use of an algorithm related to reinforcement learning to address this problem. Our method is decentralized in the sense that agents try to achieve their goals without knowledge of the other agents’ goals. We prove that this is a sound method to compute a Pareto optimal pure Nash equilibrium for an interesting class of Boolean games. Experimental results are used to investigate the performance of the algorithm.
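For intuition, here is a toy, heavily simplified sketch of decentralised play in a two-agent Boolean game; the "satisfied-stay, unsatisfied-explore" rule below is a generic stand-in and not the paper's algorithm. Agent 1 controls p with goal p <-> q, agent 2 controls q with goal q, and neither sees the other's goal.

```python
# Toy decentralised play in a two-agent Boolean game (illustrative only).
import random

def goal1(p, q):
    return p == q   # agent 1 wants p <-> q

def goal2(p, q):
    return q        # agent 2 wants q

random.seed(0)
p, q = random.choice([True, False]), random.choice([True, False])
for step in range(100):
    eps = 0.1 if step < 50 else 0.0   # explore early, then settle down
    # Each agent keeps its assignment while its own goal holds,
    # otherwise (or with small probability eps) it flips its variable.
    if not goal1(p, q) or random.random() < eps:
        p = not p
    if not goal2(p, q) or random.random() < eps:
        q = not q

# The play settles on p = q = True, a pure Nash equilibrium in which
# both goals are satisfied (and hence Pareto optimal).
print(p, q, goal1(p, q), goal2(p, q))
```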
Abstract:
There is a broad consensus surrounding the ability of building information modelling (BIM) to positively impact a project by enabling greater collaboration. This paper aims to examine the development of BIM and how it can contribute to the ever more present and growing cold-formed steel (CFS) industry. This is achieved through a comprehensive literature review and four exploratory interviews with industry experts. Work has been carried out, for the first time, alongside one of the UK’s largest CFS Designer/Fabricators in conjunction with Northern Ireland’s leading Architectural and Town Planning Consultants in the identification and dissemination of information. The capabilities of BIM have been investigated through modelling of simple CFS structures in consultation with the project partners. By scrutinising the literature and associated interviews, the primary opportunities, as well as barriers, of BIM implementation have been investigated in the context of these companies. It is essential to develop greater understanding of the flexibility, adaptability and interoperability of BIM software as the UK construction industry faces a daunting challenge: fully collaborative 3D BIM as required by the UK Government under the “Government Construction Strategy” by 2016 in all public sector projects. This paper, and the wider study that it stems from, approaches the problem from a new angle, from sections of the construction industry that have not yet fully embedded BIM.
Abstract:
The goal of this contribution is to discuss local computation in credal networks — graphical models that can represent imprecise and indeterminate probability values. We analyze the inference problem in credal networks, discuss how inference algorithms can benefit from local computation, and suggest that local computation can be particularly important in approximate inference algorithms.
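As a minimal illustration of the kind of computation involved (our example, not taken from the paper), exact inference in a tiny two-node credal network A -> B with interval-valued probabilities can be carried out by enumerating the vertices of the locally specified credal sets; the combinatorial growth of this enumeration in larger networks is precisely why local computation and approximate algorithms matter.

```python
# Vertex enumeration for a two-node credal network A -> B (illustrative values).
from itertools import product

P_A1 = [0.3, 0.5]            # extreme points of the credal set for P(A=1)
P_B1_given_A1 = [0.6, 0.8]   # extreme points of P(B=1 | A=1)
P_B1_given_A0 = [0.1, 0.2]   # extreme points of P(B=1 | A=0)

values = [a * b1 + (1 - a) * b0                      # P(B=1) at this vertex
          for a, b1, b0 in product(P_A1, P_B1_given_A1, P_B1_given_A0)]

print(min(values), max(values))   # lower/upper probability of B=1: 0.25, 0.5
```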