897 results for Complex Design Space
Abstract:
Recent integrated circuit technologies have made it possible to design parallel architectures with hundreds of cores on a single chip. The design space of such architectures is huge, with many architectural options. Exploring it becomes even harder if, beyond raw performance and area, we also consider metrics such as performance efficiency and area efficiency, where the designer seeks the best sustainable performance and the best performance per unit of chip area. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space based on experimental execution results for a particular benchmark of algorithms, our approach is to analyze the algorithms formally with respect to the main architectural aspects and to determine how each aspect relates to the performance of the architecture when running an algorithm or set of algorithms. The aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and external memory, and the memory hierarchy. To illustrate the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and derived an equation relating the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results indicate that a 100 mm² integrated circuit implementing the proposed architecture in a 65 nm technology achieves 464 GFLOPs (double-precision floating point) with a memory bandwidth of 16 GB/s, corresponding to a performance efficiency of 71%. In a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, corresponding to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for area efficiency, which is limited by the lower memory bandwidth considered. They also surpass previous state-of-the-art many-core architectures designed specifically for high-performance matrix multiplication.
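The abstract does not reproduce the cycle-count equation itself; the sketch below illustrates the general shape of such an analytical model for blocked dense matrix multiplication, assuming compute and external-memory transfers overlap fully and that each core streams square tiles through its local memory (all parameter names and values are illustrative, not the paper's):

```python
import math

def matmul_cycles(n, cores, flops_per_cycle, local_mem_words, bw_words_per_cycle):
    """Estimate execution cycles for blocked n x n dense matrix multiplication.

    Hypothetical model in the spirit of the abstract: compute and external
    transfers are assumed to overlap perfectly, so the slower one dominates.
    """
    # Largest square tile such that three b x b tiles (A, B, C) fit locally.
    b = math.floor(math.sqrt(local_mem_words / 3))
    # Compute: 2*n^3 floating-point operations spread over all cores.
    compute = 2 * n**3 / (cores * flops_per_cycle)
    # External traffic of a blocked algorithm: ~2*n^3/b words streamed in,
    # plus reading/writing C once (~2*n^2 words).
    traffic = 2 * n**3 / b + 2 * n**2
    transfer = traffic / bw_words_per_cycle
    return max(compute, transfer)

# Example: 2048x2048 matmul on 64 cores, 2 flops/cycle/core,
# 32K words of local store, 4 words/cycle from external memory.
print(matmul_cycles(2048, 64, 2, 32768, 4))
```

With a model of this shape, performance efficiency is simply the ratio of the compute term to the returned total, which makes the bandwidth-bound regime explicit.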
Abstract:
A Work Project, presented as part of the requirements for the award of a Master's Degree in Management from the NOVA – School of Business and Economics.
Abstract:
We identify a number of meanings of "Open", as part of the motivating rationale for a social media space tuned for learning, called SocialLearn. We discuss why online social learning seems to be emerging so strongly at this point, explore features of social learning, and identify some of the dimensions that we believe characterize the social learning design space, before describing the emerging design concept and implementation.
Abstract:
Urban design, a dimension of urban planning practice, consists mainly in manipulating urban form through a targeted, structured process. The approaches are as numerous as they are diverse, but they can be grouped into two categories: those that treat urban form as an object embodying the formal organization of the city, and those that work on urban form in order to organize the city's human dynamics. Both types of approach support different planning processes that help develop urban form and the dynamics of places. Among these is the empirical perspective, which focuses on the urban experience at the pedestrian scale. The writings and theories on this subject are varied and relevant. This research examines how empirical prescriptions are transposed into the planning of a project that incorporates an urban design process. Through a case study, the Cité multimédia in Montreal, it seeks to understand more specifically how the master canvas of urban design, namely public space, is studied and reformulated, paying particular attention to the empirical dimension of the future development. What are the guidelines or components that allow the empirical dimension to unfold in the design of an urban project aimed at reformulating urban form?
Abstract:
This study examines how a design concept was interactionally produced in the talk-in-interaction between an architect and client representatives. The empirical analysis was informed by ethnomethodology and conversation analysis, observing the structures and patterns of talk that accomplished the actions and practices of design. Some differences were observed between the properties of the design concept and those of the design ideas considered during these conversations. The design concept proved significant for assessing why some moves in a design space were considered better than others. Its importance to these interactions raised more general questions about what a design concept is and how it can be described as an object type. These concerns were provisionally engaged with reference to studies of science, technology and society, and further study of the object properties of design concepts is suggested.
Abstract:
A mapping scheme is presented which takes quantum operators associated with bosonic degrees of freedom into complex phase-space integral kernel representatives. The procedure uses the Schrödinger squeezed state as the starting point for constructing the integral mapping kernel which, owing to its inherent structure, is suited to the description of second-quantized operators. The representatives of products and commutators of operators are written explicitly and reveal new details compared with the usual q-p phase-space description. The classical limit of the equations of motion for the canonical pair q-p is discussed in connection with the effect of squeezing on the cellular structure of the quantum phase space.
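The kernel itself is not reproduced in the abstract. For orientation, a generic correspondence of this type, written here with ordinary coherent states |z⟩ rather than the paper's squeezed states, has the structure:

```latex
% Generic operator-to-symbol correspondence on complex phase space,
% sketched with coherent states; the paper's kernel replaces |z> by
% squeezed states, which deforms the phase-space cell structure.
\begin{align}
  \hat A \;\longmapsto\; A(z,\bar z) &= \langle z \,\vert\, \hat A \,\vert\, z \rangle,
  \qquad \int \frac{d^2 z}{\pi}\, \vert z \rangle\langle z \vert = \hat 1, \\
  \langle z \vert \hat A \hat B \vert z \rangle
  &= \int \frac{d^2 w}{\pi}\,
     \langle z \vert \hat A \vert w \rangle \langle w \vert \hat B \vert z \rangle, \\
  \tfrac{1}{i\hbar}\,\big[\hat A,\hat B\big]
  &\;\xrightarrow{\ \hbar\to 0\ }\; \{A,B\}_{q,p}.
\end{align}
```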
Abstract:
Robots are needed to perform important field tasks such as hazardous material clean-up, nuclear site inspection, and space exploration. Unfortunately, their use is not widespread due to long development times and high costs. To make them practical, a modular design approach is proposed: prefabricated modules are rapidly assembled to give a low-cost system for a specific task. This paper describes the modular design problem for field robots and the application of a hierarchical selection process to solve it. A theoretical analysis and an example case study are presented. The theoretical analysis of the modular design problem revealed the large size of the search space and showed the advantages of approaching the design on several levels. The hierarchical selection process applies physical rules to reduce the search space to a computationally feasible size, and a genetic algorithm performs the final search in the greatly reduced space. The process is based on the observation that simple, physically based rules can eliminate large sections of the design space and thereby greatly simplify the search. The design process is applied to a duct inspection task, for which five candidate robots were developed. Two of these robots are evaluated using detailed physical simulation. It is shown that the more obvious solution is not able to complete the task, while the non-obvious asymmetric design developed by the process is successful.
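A minimal sketch of this two-level search, assuming a hypothetical catalogue of actuator modules and toy pruning rules (module data, rule thresholds, and the duct task parameters below are all illustrative, not from the paper):

```python
import random

# Hypothetical module catalogue: (name, mass_kg, max_payload_kg, cost)
MODULES = [("small", 1.0, 2.0, 10), ("medium", 2.5, 6.0, 25), ("large", 5.0, 15.0, 60)]

def passes_rules(config, duct_width=0.3, payload=1.5):
    """Physically based rules that prune the design space before the GA runs."""
    mass = sum(m for _, m, _, _ in config)
    lift = min(p for _, _, p, _ in config)   # the weakest module limits payload
    width = 0.05 * len(config)               # crude size estimate per module
    return lift >= payload and width <= duct_width and mass < 12

def fitness(config):
    """Cheaper feasible robots score higher; infeasible ones are rejected outright."""
    return -sum(c for _, _, _, c in config) if passes_rules(config) else -1e9

def ga(pop_size=30, generations=50, length=4):
    """Genetic algorithm over the rule-reduced space of module sequences."""
    pop = [[random.choice(MODULES) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                  # point mutation
                child[random.randrange(length)] = random.choice(MODULES)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(ga())
```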
Abstract:
In this paper I present the work I completed during a five-month work placement at CERN, the European Organisation for Nuclear Research, from March to July 2011. The placement took place in the EN Department (ENgineering Department), STI Group (Sources, Targets and Interactions), TCD Section (Targets, Collimators and Dumps), under the supervision of Dr Cesare Maglioni. The task I was given concerned all the beam stoppers in the PS Complex, in detail: general definition and requirements; creation of a digital archive; verification of the stoppers of the PS Complex; and design of the L4T.STP.1.
Abstract:
In the manufacture of solid dosage forms, granulation is a complex sub-process with high relevance for the quality of the pharmaceutical product. Fluid-bed granulation is a special granulation technique that combines the sub-processes of mixing, agglomeration and drying in a single apparatus. Precisely because it combines several process stages, this technique places particular demands on comprehensive process understanding. The consistent pursuit of the PAT approach, published as a guideline by the American regulatory authority (FDA) in 2004, laid the foundation for continuous process improvement through increased process understanding, higher quality and cost reduction. The present work addressed the optimization of the fluid-bed granulation processes of two process-sensitive drug formulations using PAT.
For the enalapril formulation, a low-dose, highly active drug formulation, it was found that finer atomization of the granulation liquid yields considerably larger granules. Increasing the MassRatio reduces the droplet size, which leads to larger granules. To produce enalapril granules with a desired D50 particle size between 100 and 140 µm, the MassRatio must be set at a high level; for a D50 between 80 and 120 µm, the MassRatio must be set at a low level. The investigations showed that the MassRatio is an important parameter and can be used to control the particle size of the enalapril granules, provided that all other process parameters are kept constant.
Examining the intersection plots makes it possible to determine suitable settings of the process parameters and input variables that lead to the desired granule and tablet properties. From the position and size of the intersection, the limits of the process parameters for producing the enalapril granules can be determined. If these limits, i.e. the "design space" of the process parameters, are respected, high product quality can be guaranteed.
To produce high-quality enalapril tablets with the chosen formulation, the enalapril granulation should be carried out with the following process parameters: a low spray rate, a high MassRatio, an inlet air temperature above 50 °C and an effective inlet air volume below 180 Nm³/h. If, on the other hand, a spray rate of 45 g/min and a medium MassRatio of 4.54 are set, the effective inlet air volume must be at least 200 Nm³/h and the inlet air temperature at least 60 °C to obtain predictably high tablet quality. Quality is built into the medicinal product during manufacture by keeping the process parameters of the enalapril granulation within the design space.
For the metformin formulation, a high-dose but low-activity drug formulation, it was found that the growth mechanism of the fines fraction of the metformin granules differs from that of the D50 and D90 size fractions. The growth mechanism of the granules depends on particle wetting by the sprayed liquid droplets and on the size ratio of particles to spray droplets. The influence of the MassRatio on the D10 particle size of the granules is negligibly small.
With the help of disturbance-variable investigations, the control efficiency of the process parameters was established for a low-dose (enalapril) and a high-dose (metformin) drug formulation, enabling extensive automation that reduces sources of error by compensating for disturbances. The result is a closed PAT approach across the entire process chain. The process parameters spray rate and inlet air volume proved most suitable; control via the inlet air temperature proved sluggish.
Furthermore, manufacturing processes for granules and tablets were developed for two process-sensitive active ingredients. The robustness of these processes against disturbances was demonstrated, fulfilling the prerequisites for real-time release in the spirit of PAT. Quality control does not take place at the end of the production chain; it is carried out during the process itself and is based on a better understanding of the product and the process. Moreover, the consistent pursuit of the PAT approach opened the way to continuous process improvement, higher quality and cost reduction, thereby achieving the holistic goal of the PAT concept.
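A minimal sketch of how the quoted design-space limits could be encoded as an operating-point check. The "high MassRatio" and "low spray rate" thresholds below are assumptions, since the abstract gives only qualitative levels for them:

```python
HIGH_MASS_RATIO = 5.0   # assumed threshold; the abstract only says "high"
LOW_SPRAY_RATE = 30.0   # g/min, assumed; the abstract only says "low"

def in_design_space(spray_rate, mass_ratio, inlet_temp_c, airflow_nm3_h):
    """Check an operating point against the two regions quoted in the abstract."""
    # Region 1: low spray rate, high MassRatio, T > 50 degC, airflow < 180 Nm3/h.
    region1 = (spray_rate <= LOW_SPRAY_RATE and mass_ratio >= HIGH_MASS_RATIO
               and inlet_temp_c > 50 and airflow_nm3_h < 180)
    # Region 2: spray rate 45 g/min and MassRatio 4.54 demand more drying air:
    # at least 200 Nm3/h and at least 60 degC.
    region2 = (abs(spray_rate - 45) < 1 and abs(mass_ratio - 4.54) < 0.1
               and inlet_temp_c >= 60 and airflow_nm3_h >= 200)
    return region1 or region2

print(in_design_space(25, 5.5, 55, 170))   # True: inside region 1
print(in_design_space(45, 4.54, 58, 210))  # False: temperature below 60 degC
```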
Abstract:
During the last few decades, unprecedented technological growth has been at the heart of embedded systems design, with Moore's Law as the leading factor of this trend. Today an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space exploration needed to find the best design has exploded, and hardware designers face the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they must cope with the huge complexity of both hardware and software systems. In this thesis two research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs; the second exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. The term virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application by hiding the complexity of the underlying hardware, and 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis focuses on virtualization techniques that aim to mitigate, and where possible overcome, some of the challenges introduced by the many-core design paradigm.
Abstract:
Nowadays the rise of non-recurring engineering (NRE) costs associated with complexity is becoming a major factor in SoC design, limiting both scaling opportunities and the flexibility advantages offered by the integration of complex computational units. The introduction of embedded programmable elements can be an appealing solution, able both to guarantee the desired flexibility and upgradability and to widen the SoC market. In particular, embedded FPGA (eFPGA) cores can provide bit-level optimization for applications which benefit from synthesis, at the cost of performance penalties and area overhead with respect to standard-cell ASIC implementations. In this scenario, this thesis proposes a design methodology for a synthesizable programmable device designed to be embedded in a SoC. A soft-core eFPGA is presented and analyzed in terms of the opportunities offered by a fully synthesizable approach, following an implementation flow based on standard-cell methodology. A key point of the proposed eFPGA template is that it adopts a Multi-Stage Switching Network (MSSN) as the foundation of the programmable interconnect, since an MSSN can be efficiently synthesized and optimized through a standard-cell-based implementation flow while ensuring an intrinsically congestion-free network topology. The flexibility potential of the eFPGA has been evaluated using different technology libraries (STMicroelectronics CMOS 65nm and BCD9s 0.11μm) through a design space exploration of area-speed-leakage tradeoffs, enabled by the full synthesizability of the template. Since the most relevant disadvantage of the adopted soft approach, compared to a hard core, is the increased performance overhead, the eFPGA analysis targets small area budgets. Configuration bitstream generation has been achieved through a custom CAD flow environment, enabling functional verification and performance evaluation through an application-aware analysis.
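The abstract does not specify the MSSN topology. As a rough illustration of why multi-stage interconnects scale better than flat crossbars under a standard-cell flow, the sketch below compares the switch count of a Beneš-style rearrangeably non-blocking network (one plausible MSSN instance, assumed here) with the crosspoint count of a full crossbar:

```python
import math

def benes_switch_count(n_ports):
    """2x2 switch count of a rearrangeably non-blocking Benes network over
    n_ports = 2^k endpoints: (2*log2(n) - 1) stages of n/2 switches each."""
    assert n_ports >= 2 and (n_ports & (n_ports - 1)) == 0, "power of two required"
    stages = 2 * int(math.log2(n_ports)) - 1
    return stages * n_ports // 2

def crossbar_crosspoints(n_ports):
    """Crosspoint count of a full crossbar, for comparison."""
    return n_ports * n_ports

for n in (16, 64, 256):
    print(n, benes_switch_count(n), crossbar_crosspoints(n))
```

For 256 endpoints this gives 1,920 2×2 switches against 65,536 crosspoints, which is the kind of scaling argument that makes a multi-stage network attractive as a synthesizable interconnect fabric.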
Digital signal processing and digital system design using discrete cosine transform [student course]
Abstract:
The discrete cosine transform (DCT) is an important functional block for image processing applications, and its implementation has traditionally been viewed as a specialized research task. We apply a micro-architecture-based methodology to the hardware implementation of an efficient DCT algorithm in a digital design course. Several circuit optimization and design space exploration techniques at the register-transfer and logic levels are introduced in class for generating the final design. The students not only learn how the algorithm can be implemented, but also gain insight into how other signal processing algorithms can be translated into hardware implementations. Since signal processing has very broad applications, the study and implementation of an extensively used signal processing algorithm in a digital design course significantly enhances the learning experience in both the digital signal processing and digital design areas.
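As a point of reference, here is a minimal software golden model of the kind a class might check a hardware DCT design against: the naive O(n²) one-dimensional DCT-II with orthonormal scaling (the course's specific algorithm, word lengths, and optimizations are not given in the abstract):

```python
import math

def dct_ii(x):
    """Reference (naive O(n^2)) one-dimensional DCT-II with orthonormal scaling."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n)) for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

# Example: the DCT of a constant signal concentrates all energy in bin 0.
print(dct_ii([1.0] * 8))
```

A fixed-point hardware implementation would typically be verified bit by bit against such a floating-point model within an agreed error tolerance.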
Abstract:
This dissertation discusses structural-electrostatic modeling techniques, genetic-algorithm-based optimization, and control design for electrostatic micro devices. First, an alternative modeling technique for electrostatic micro devices, the interpolated force model, is discussed. The method provides improved computational efficiency relative to a benchmark model, as well as improved accuracy for irregular electrode configurations relative to a common approximate model, the parallel-plate approximation model. For the configuration most similar to two parallel plates, expected to be the best case for the approximate model, both the parallel-plate approximation model and the interpolated force model maintained less than 2.2% error in static deflection compared to the benchmark model. For the configuration expected to be the worst case for the parallel-plate approximation model, the interpolated force model maintained less than 2.9% error in static deflection, while the parallel-plate approximation model was incapable of handling the configuration. Second, genetic-algorithm-based optimization is shown to improve the design of an electrostatic micro sensor. The design space is enlarged from published design spaces to include the configuration of both sensing and actuation electrodes, material distribution, actuation voltage, and other geometric dimensions. For a small population, the design was improved by approximately a factor of 6 over 15 generations to a fitness value of 3.2 fF. For a larger population seeded with the best configurations of the previous optimization, the design was improved by another 7% in 5 generations to a fitness value of 3.0 fF. Third, a learning control algorithm is presented that reduces the closing time of a radio-frequency microelectromechanical systems (RF MEMS) switch by minimizing bounce while maintaining robustness to fabrication variability. Electrostatic actuation of the plate causes pull-in with high impact velocities, which are difficult to control due to parameter variations from part to part. A single-degree-of-freedom model was used to design a learning control algorithm that shapes the actuation voltage based on the open/closed state of the switch. Experiments on 3 test switches show that after 5-10 iterations, the learning algorithm lands the switch with an impact velocity not exceeding 0.2 m/s, eliminating bounce.
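The interpolated force model itself is not given in the abstract. For context, here is a sketch of the parallel-plate baseline it is compared against, including the classical static-deflection and pull-in relations (the geometry and stiffness values in the example are illustrative, not the dissertation's devices):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pp_force(voltage, gap, deflection, area):
    """Electrostatic force under the parallel-plate approximation."""
    return EPS0 * area * voltage**2 / (2 * (gap - deflection) ** 2)

def static_deflection(voltage, gap, area, k, tol=1e-12):
    """Solve k*x = F(x) by bisection; valid below pull-in (x < gap/3)."""
    lo, hi = 0.0, gap / 3
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if k * mid < pp_force(voltage, gap, mid, area):
            lo = mid  # spring force still below electrostatic force
        else:
            hi = mid
    return (lo + hi) / 2

def pull_in_voltage(gap, area, k):
    """Classical pull-in voltage of a parallel-plate actuator."""
    return math.sqrt(8 * k * gap**3 / (27 * EPS0 * area))

# Example: 100 um x 100 um plate, 2 um gap, k = 5 N/m (illustrative values).
area, gap, k = (100e-6) ** 2, 2e-6, 5.0
print(pull_in_voltage(gap, area, k))
print(static_deflection(10.0, gap, area, k))
```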
Abstract:
Projects in the area of architectural design and urban planning typically engage several architects as well as experts from other professions. While the design and review meetings thus often involve a large number of cooperating participants, the actual design is still done by the individuals in the time between those meetings using desktop PCs and CAD applications. A real collaborative approach to architectural design and urban planning is often limited to early paper-based sketches. In order to overcome these limitations, we designed and realized the ARTHUR system, an Augmented Reality (AR) enhanced round table to support complex design and planning decisions for architects. While AR has been applied to this area earlier, our approach does not try to replace the use of CAD systems but rather integrates them seamlessly into the collaborative AR environment. The approach is enhanced by intuitive interaction mechanisms that can be easily configured for different application scenarios.
Abstract:
In recent decades, university-class small satellites have created many opportunities for space research and professional training while responding to constrained budgets. This work focuses on developing a simple and rapid structural sizing tool that reflects the main objectives of a low-cost university-class microsatellite project. In satellite projects, the structure subsystem is one of the most influential subsystems, driving both the cost and the acceptance of the final design. In the first steps of such projects there is no confirmed data regarding the launch vehicle, and in some cases not even for the satellite payload. For these reasons, simple sizing tools developed at the conceptual design phase are useful for gaining an overview of the effect of different variables before entering the complex calculations of the detailed design phases. In this study, after developing a simple analytical model of the satellite structure subsystem, a design space is evaluated within practical boundaries given the mass and dimension constraints of such projects. The results give initial insight for establishing system-level structural sizing.
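A minimal sketch of the kind of first-cut stiffness sizing such a tool might perform, assuming the satellite is idealized as a thin-walled cylinder with its mass lumped at the free end; the frequency requirements, dimensions, and material below are illustrative, not from the thesis:

```python
import math

def first_frequencies(mass_kg, length_m, radius_m, thickness_m, e_modulus_pa):
    """First axial and lateral natural frequencies of a satellite idealized as
    a thin-walled cantilevered cylinder with a lumped tip mass."""
    area = 2 * math.pi * radius_m * thickness_m    # shell cross-section area
    inertia = math.pi * radius_m**3 * thickness_m  # thin-shell area moment
    f_axial = (1 / (2 * math.pi)) * math.sqrt(area * e_modulus_pa / (mass_kg * length_m))
    f_lateral = (1 / (2 * math.pi)) * math.sqrt(
        3 * e_modulus_pa * inertia / (mass_kg * length_m**3))
    return f_axial, f_lateral

def min_thickness(mass_kg, length_m, radius_m, e_modulus_pa, f_ax_req, f_lat_req):
    """Smallest wall thickness meeting hypothetical launcher stiffness limits."""
    t = 0.2e-3
    while True:
        fa, fl = first_frequencies(mass_kg, length_m, radius_m, t, e_modulus_pa)
        if fa >= f_ax_req and fl >= f_lat_req:
            return t
        t += 0.1e-3

# Example: 50 kg microsatellite, 0.6 m tall, 0.2 m radius, aluminium (E = 70 GPa),
# hypothetical launcher requirements of 25 Hz axial / 10 Hz lateral.
print(min_thickness(50, 0.6, 0.2, 70e9, 25, 10))
```

Sweeping such a model over mass and dimension ranges is one simple way to map the design space described in the abstract before any launch vehicle is confirmed.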