991 results for Architectural Systems


Relevance: 30.00%

Publisher:

Abstract:

A high contrast ratio between windows and surrounding walls may cause office workers visual discomfort that can negatively affect their satisfaction and productivity. Consequently, occupants may try to adapt their working environment by closing blinds and/or turning on the lights to improve indoor visual comfort, which can reduce predicted energy savings. The hypothesis of this study is that reducing the luminance contrast ratio on the window wall will improve window appearance, potentially reducing visual discomfort and workers' interventions. This PhD research therefore proposes a simple strategy to diminish the luminance contrast on the window wall: increasing the luminance of the areas surrounding the windows using supplementary light-emitting diode (LED) systems. To test the hypothesis, the investigation will involve three experiments in different office layouts with various window types and orientations in Brisbane, Australia. It will assess user preferences for different luminance patterns in windowed offices featuring flexible, low-power LED lighting installations that allow multiple lighting design options on the window wall. Detailed luminance and illuminance measurements will be used to match quantitative lighting design assessment to user preferences.
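The core quantity here is the window-to-wall luminance contrast ratio. As a rough illustration only (the target ratio and luminance values below are hypothetical, not the thesis's design targets), the following sketch shows the arithmetic behind raising the surrounding wall's luminance with supplementary LEDs until a chosen target ratio is met:

```python
# A minimal sketch, assuming a simple ratio target; not the thesis's actual procedure.

def contrast_ratio(window_luminance: float, wall_luminance: float) -> float:
    """Window-to-wall luminance contrast ratio (both in cd/m^2)."""
    return window_luminance / wall_luminance

def required_led_boost(window_luminance: float, wall_luminance: float,
                       target_ratio: float) -> float:
    """Extra wall luminance (cd/m^2) the supplementary LEDs must contribute
    so that the contrast ratio falls to target_ratio. Returns 0 if the
    target is already met."""
    required_wall = window_luminance / target_ratio
    return max(0.0, required_wall - wall_luminance)

# Example: a 2000 cd/m^2 window against a 50 cd/m^2 wall gives a 40:1 ratio;
# reaching a hypothetical 20:1 target needs +50 cd/m^2 from the LEDs.
print(contrast_ratio(2000, 50))          # 40.0
print(required_led_boost(2000, 50, 20))  # 50.0
```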

Relevance: 30.00%

Publisher:

Abstract:

Mobile RFID services for the Internet of Things can be created by using RFID as an enabling technology in mobile devices. Humans, devices, and things are both the content providers and the users of these services. Mobile RFID services can either be provided on mobile devices as stand-alone services or be combined with end-to-end systems. When different service solution scenarios are considered, there is more than one possible architectural solution in the network, mobile, and back-end server areas. By combining the solutions judiciously, applying software architecture and engineering principles, a combined solution can be formulated for specific application use cases. This thesis illustrates these ideas and shows how generally the solutions can be used in real-world scenarios; a case study is used to add further evidence.

Relevance: 30.00%

Publisher:

Abstract:

The use of an instrumented impact test set-up to evaluate the influence of water ingress on the impact response of a carbon–epoxy (C–E) laminated composite system containing discontinuous buffer strips (BS) has been examined. The data on the BS-free C–E sample in dry conditions are used as a reference against the data derived from water-immersed samples. The work demonstrates the utility of an instrumented impact test set-up in characterising the response, first to the architectural difference introduced by the buffer strips and then to the presence of an additional phase, the water that has ingressed into the sample. The presence of water was found to enhance the energy absorption characteristics of the C–E system with BS insertions. It was also noticed that with an increasing number of BS layer insertions, the load–time plots displayed characteristic changes. The ductility indices (DI) were found to be lower for the water-immersed samples than for the dry ones.
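For readers unfamiliar with the ductility index: in instrumented impact testing it is commonly defined as the ratio of propagation energy (absorbed after peak load) to initiation energy (absorbed up to peak load). The sketch below computes it from a load–time trace under the crude assumption of constant tup velocity; it is illustrative, not the paper's own data reduction:

```python
# A hedged sketch: DI = Ep / Ei, with energy approximated as the integral of
# load * velocity, holding velocity constant (a rough assumption, reasonable
# only when tup deceleration during the event is small).
import numpy as np

def ductility_index(time_s, load_n, impact_velocity_ms):
    time_s = np.asarray(time_s, dtype=float)
    load_n = np.asarray(load_n, dtype=float)
    # Trapezoidal cumulative energy along the trace (J), starting at zero.
    energy = np.concatenate(([0.0], np.cumsum(
        0.5 * (load_n[1:] + load_n[:-1]) * np.diff(time_s) * impact_velocity_ms)))
    i_peak = int(np.argmax(load_n))
    e_init = energy[i_peak]            # initiation energy: up to peak load
    e_prop = energy[-1] - e_init       # propagation energy: after peak load
    return e_prop / e_init
```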

Relevance: 30.00%

Publisher:

Abstract:

A distributed system uses many servers to achieve increased service availability and fault tolerance. Balancing the load among these servers is an important task in achieving better performance. Various hardware- and software-based load balancing solutions are available; however, there is always an overhead on the servers and the load balancer as they communicate with each other and share their availability and current load status. The load balancer is busy listening to clients' requests and redirecting them, and it also needs to collect the servers' availability status frequently to keep itself up to date. The servers, in turn, are busy not only serving clients but also sharing their current load information with the load balancing algorithm. In this paper we propose and discuss the concept and system model for a software-based load balancer with Availability-Checker and Load Reporters (LB-ACLRs), which reduces the overhead on the servers and the load balancer. We describe the architectural components with their roles and responsibilities, and present a detailed analysis showing how the proposed Availability Checker significantly increases the performance of the system.
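To illustrate the idea (this is not the paper's implementation; the class names and the least-loaded selection policy are assumptions), the sketch below separates the request path from the status-gathering path: Load Reporters push load figures and an Availability Checker probes health in the background, so picking a server touches only cached state and never polls the servers per request.

```python
# An illustrative sketch of the LB-ACLR concept, assuming a least-loaded policy.
import threading
import time

class LoadBalancer:
    def __init__(self, servers):
        self.lock = threading.Lock()
        self.load = {s: 0.0 for s in servers}    # updated by Load Reporters
        self.alive = {s: True for s in servers}  # updated by Availability Checker

    def report_load(self, server, load):
        """Called by a server's Load Reporter with its current load figure."""
        with self.lock:
            self.load[server] = load

    def set_availability(self, server, is_alive):
        """Called by the Availability Checker after a health probe."""
        with self.lock:
            self.alive[server] = is_alive

    def pick_server(self):
        """Request path: choose the least-loaded available server using only
        cached state -- no per-request communication with the servers."""
        with self.lock:
            candidates = [s for s in self.load if self.alive[s]]
            return min(candidates, key=self.load.get) if candidates else None

def availability_checker(lb, servers, probe, interval=1.0):
    """Background loop: probe each server and update the balancer's view."""
    while True:
        for s in servers:
            lb.set_availability(s, probe(s))
        time.sleep(interval)
```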

Relevance: 30.00%

Publisher:

Abstract:

Despite the complexity of biological networks, we find that certain common architectures govern network structures. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of environmental uncertainty. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must obey these constraints. One such constraining architecture is autocatalysis, as seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints, and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic, since ribosomes must be used to make more ribosomal proteins, and the autocatalysis grows with the ribosomes' protein content. We show that this autocatalysis destabilizes the system, slows down its response, and constrains its performance. On a larger scale, transcriptional regulation of whole organisms also follows architectural constraints, visible in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law, while that of the yeast network follows an exponential distribution. We then explore previously proposed evolutionary models and show that neither the preferential linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria. However, in real biological systems, the generation of new nodes occurs through both duplication and horizontal gene transfer, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
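To make the last claim concrete, here is a toy growth model (all parameters hypothetical, not the paper's calibrated model) that mixes duplication-divergence with random attachment as a stand-in for horizontal gene transfer, then tabulates the resulting out-degree distribution:

```python
# A toy sketch of mixed duplication / horizontal-transfer network growth.
import random
from collections import Counter

def grow_network(n_nodes, p_duplicate=0.5, p_keep_edge=0.7, n_transfer_edges=2):
    edges = {0: set(), 1: {0}}        # tiny seed network: node 1 regulates node 0
    for new in range(2, n_nodes):
        if random.random() < p_duplicate:
            # Duplication-divergence: copy a random node's targets, dropping some.
            template = random.randrange(new)
            edges[new] = {t for t in edges[template] if random.random() < p_keep_edge}
        else:
            # Horizontal transfer: a new node wired to a few random existing targets.
            edges[new] = set(random.sample(range(new), min(n_transfer_edges, new)))
    return edges

net = grow_network(5000)
degree_counts = Counter(len(targets) for targets in net.values())
for degree, count in sorted(degree_counts.items())[:10]:
    print(degree, count)   # inspect the low end of the out-degree distribution
```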

Relevance: 30.00%

Publisher:

Abstract:

The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control and fall under three broad categories: controller synthesis, architecture design, and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop.

Our next set of results concerns controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.

Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem (see the sketch after this abstract).

Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control, and optimization in layered architectures.
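To sketch the flavour of the system identification contribution above: the separation can be posed as recovering a low-rank component by nuclear norm minimization. The formulation below is a schematic stand-in under assumed choices (a placeholder Frobenius penalty and weight `lam`), not the thesis's exact program:

```python
# A schematic sketch: split a measured response matrix H into a low-rank
# "global" component G and a residual "local" component L = H - G by
# penalizing the nuclear norm of G. The penalty structure is an assumption.
import cvxpy as cp
import numpy as np

def split_dynamics(H: np.ndarray, lam: float = 0.1):
    G = cp.Variable(H.shape)             # low-rank global component
    L = H - G                            # remainder attributed to local dynamics
    objective = cp.Minimize(cp.norm(G, "nuc") + lam * cp.norm(L, "fro"))
    cp.Problem(objective).solve()
    return G.value, H - G.value
```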

Relevance: 30.00%

Publisher:

Abstract:

Lee, M.H. and Rowland, J.J. (eds.), 1995, Intelligent Assembly Systems, 239 pp., World Scientific Series in Robotics and Intelligent Systems, Vol. 12, World Scientific, ISBN 981022494X.

Relevance: 30.00%

Publisher:

Abstract:

Traditionally, the Internet provides only a “best-effort” service, treating all packets going to the same destination equally. However, providing differentiated services to different users based on their quality requirements is an increasingly pressing requirement. For this, routers need the capability to distinguish and isolate traffic belonging to different flows; the ability to determine the flow each packet belongs to is called packet classification. Technology vendors are reluctant to support algorithmic solutions for classification due to their non-deterministic performance. Although CAMs (content-addressable memories) are favoured by technology vendors for their deterministic, high lookup rates, they suffer from high power dissipation and high silicon cost. This paper provides a new algorithmic-architectural solution for packet classification that mixes CAMs with algorithms that cut the classification space into smaller subspaces at multiple levels. The solution exploits the geometric distribution of rules in the classification space, and provides the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.
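As a rough sketch of the cutting half of such a hybrid (the two-field rule format, alternating cut policy, and thresholds below are illustrative, in the spirit of HiCuts-style schemes rather than the paper's exact algorithm): the space is recursively sliced until each leaf holds a small rule set, which in the paper's hybrid would be the part offloaded to a CAM.

```python
# A simplified multi-level-cutting sketch over a 2-D classification space.

LEAF_RULES = 4   # hypothetical threshold: max rules searched linearly at a leaf

def build(rules, space, depth=0, cuts=4):
    """rules: list of (lo0, hi0, lo1, hi1, action); space: (lo0, hi0, lo1, hi1)."""
    if len(rules) <= LEAF_RULES or depth > 8:
        return ("leaf", rules)
    d = depth % 2                                  # alternate the cut dimension
    lo, hi = space[2 * d], space[2 * d + 1]
    step = (hi - lo) / cuts
    children = []
    for i in range(cuts):
        sub = list(space)
        sub[2 * d], sub[2 * d + 1] = lo + i * step, lo + (i + 1) * step
        # Keep each rule whose range in dimension d overlaps this slice.
        kept = [r for r in rules
                if r[2 * d] < sub[2 * d + 1] and r[2 * d + 1] > sub[2 * d]]
        children.append(build(kept, tuple(sub), depth + 1, cuts))
    return ("node", d, lo, step, children)

def classify(tree, pkt):
    """pkt: (field0, field1); returns the first matching rule's action or None."""
    while tree[0] == "node":
        _, d, lo, step, children = tree
        idx = min(int((pkt[d] - lo) // step), len(children) - 1)
        tree = children[idx]
    for lo0, hi0, lo1, hi1, action in tree[1]:     # small linear search at leaf
        if lo0 <= pkt[0] <= hi0 and lo1 <= pkt[1] <= hi1:
            return action
    return None
```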

Relevance: 30.00%

Publisher:

Abstract:

“Temporal and Urban Peripheries” is the fall-semester project of the final year of the Architecture program at Izmir University of Economics in Turkey. With a critical and contextual look at the built environment, the studio focuses on the significance of incorporating history, urban design, and parametrics in architectural design. This short article presents the thrust of the studio, its basic concepts, the process, and the outcomes.

Relevance: 30.00%

Publisher:

Abstract:

Architects use cycle-by-cycle simulation to evaluate design choices and to understand tradeoffs and interactions among design parameters. Efficiently exploring exponential-size design spaces with many interacting parameters remains an open problem: the sheer number of experiments renders detailed simulation intractable. We attack this problem via an automated approach that builds accurate, confident predictive design-space models. We simulate sampled points, using the results to teach our models the function describing the relationships among design parameters. The models produce highly accurate performance estimates for other points in the space, can be queried to predict the performance impact of architectural changes, and are very fast compared to simulation, enabling efficient discovery of tradeoffs among parameters in different regions. We validate our approach via sensitivity studies on memory hierarchy and CPU design spaces: our models generally predict IPC with only 1-2% error and reduce the required simulation by two orders of magnitude. We also show the efficacy of our technique for exploring chip multiprocessor (CMP) design spaces: when trained on a 1% sample drawn from a CMP design space with 250K points and up to 55x performance swings among different system configurations, our models predict performance with only 4-5% error on average. Our approach combines with techniques that reduce the time per simulation, achieving net time savings of three to four orders of magnitude.
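The sample-then-predict workflow is straightforward to sketch. The abstract does not specify the paper's model class, so the off-the-shelf regressor below is purely a stand-in: simulate a small random sample of configurations, fit a model mapping design parameters to IPC, then estimate the remaining points cheaply.

```python
# A minimal sketch of design-space exploration via predictive modeling,
# with RandomForestRegressor as an assumed stand-in for the paper's learner.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def explore(design_points, simulate_ipc, sample_frac=0.01, seed=0):
    """design_points: (N, d) array of parameter vectors; simulate_ipc: the
    expensive cycle-accurate simulator, called only on the sampled points."""
    rng = np.random.default_rng(seed)
    n_sample = max(1, int(sample_frac * len(design_points)))
    idx = rng.choice(len(design_points), size=n_sample, replace=False)
    ipc_sampled = np.array([simulate_ipc(p) for p in design_points[idx]])
    model = RandomForestRegressor(n_estimators=200, random_state=seed)
    model.fit(design_points[idx], ipc_sampled)
    return model.predict(design_points)   # cheap estimates for all N points
```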

Relevance: 30.00%

Publisher:

Abstract:

In the digital age, the hyperspace of virtual reality systems stands out as a new spatial concept, creating a realm parallel to “real” space. Virtual reality influences one’s experience of, and interaction with, architectural space. This “otherworld” invites criticism of the existing conceptions of space, time, and body. Hyperspaces are relatively new to designers but not to filmmakers, and their cinematic representations help us comprehend the outcomes of these new spaces. Visualisation of futuristic ideas on the big screen turns film into a medium for spatial experimentation. Creating a possible future, The Matrix (Andy and Larry Wachowski, 1999) takes the concept of hyperspace to a level not yet realised but imagined. With a critical gaze at the existing norms of architecture, the film opens new horizons in terms of space. In this context, this study introduces science fiction cinema as a medium for discussing the potential of virtual reality systems for the architecture of the twenty-first century. As a “role model”, cinema helps us better understand technological and spatial shifts; it acts as a vehicle for going beyond the spatial theories and designs of the twentieth century and for defining the conception of space in contemporary architecture.

Relevance: 30.00%

Publisher:

Abstract:

The use of sustainability assessment methods in the UK is on the rise, anticipating the regulatory trajectory towards zero carbon by 2016. The indisputable influence of sustainability rating tools on UK building regulations makes it important to evaluate their effectiveness in achieving truly sustainable design without adversely affecting human health and wellbeing. This paper reviews the indoor air quality (IAQ) issues addressed by UK sustainability assessment tools, and the potential trade-offs between building energy conservation and IAQ. The barriers to effective adoption of IAQ strategies are investigated, with recommendations, suggestions, and future research needs. The review identified a fundamental lack of IAQ criteria in sustainability assessment tools aimed at the residential sector. The consideration of occupants' health and wellbeing should be paramount in any assessment scheme, and should not be overshadowed or obscured by the drive towards energy efficiency. A balance is essential.