855 results for Distributed Array


Relevance:

20.00%

Publisher:

Abstract:

Mixed-signal and analog design on a pre-diffused array is a challenging task, given that the digital array is a linear matrix arrangement of minimum-length transistors. Overcoming this drawback requires a specific discipline for designing analog circuits on such arrays. An important novel technique proposed here is the use of TAT (Trapezoidal Association of Transistors) composite transistors on a semi-custom Sea-Of-Transistors (SOT) array. The TAT arrangement and its advantages are extensively analyzed and demonstrated, with simulation and measurement comparisons against equivalent single transistors. Basic analog cells were also designed in full-custom and TAT versions in 1.0 µm and 0.5 µm digital CMOS technologies. Most of the circuits were prototyped both in full-custom form and TAT-based on pre-diffused SOT arrays. An innovative demonstration of the TAT technique is the design and implementation of a mixed-signal system, i.e., a fully differential 2nd-order Sigma-Delta analog-to-digital (A/D) modulator, fabricated in both full-custom and SOT-array methodologies in 0.5 µm CMOS technology from the MOSIS foundry. Three test chips were designed and fabricated in 0.5 µm. Two of them are IC chips containing the full-custom and SOT-array versions of the 2nd-order Sigma-Delta A/D modulator. The third IC contains transistor structures (TAT and single) and analog cells placed side by side, block components (comparator and folded-cascode OTA) of the Sigma-Delta modulator.
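For reference, composite-transistor sizing rests on the standard first-order (long-channel) equivalences for associations of identical unit transistors of width $W$ and length $L$:

$$\left(\frac{W}{L}\right)_{n\ \text{in series}} = \frac{W}{nL}, \qquad \left(\frac{W}{L}\right)_{n\ \text{in parallel}} = \frac{nW}{L}.$$

A trapezoidal association chains parallel groups of decreasing width in series to approximate intermediate aspect ratios; these textbook relations are only the starting point, and the exact equivalence of a TAT to a single transistor is what the work above analyzes and measures.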

Relevance:

20.00%

Publisher:

Abstract:

The work described in this thesis aims to support the distributed design of integrated systems, with specific attention to the need for collaborative interaction among designers. Particular emphasis was given to issues that were only marginally considered in previous approaches, such as the abstraction of the distribution of design automation resources over the network, the possibility of both synchronous and asynchronous interaction among designers, and the support for extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such a structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based on the joint application of two identically named technologies: CAD Frameworks and object-oriented frameworks. The former concept was coined in the late 1980s within the electronic design automation community and comprises a layered software environment that aims to support CAD tool developers, CAD administrators/integrators, and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model for building extensible and reusable object-oriented software subsystems. In this work, we propose an object-oriented framework that includes extensible sets of design data primitives and design tool building blocks. This object-oriented framework is included within a CAD Framework, where it plays important roles in typical CAD Framework services such as design data representation and management, versioning, user interfaces, design management, and tool integration. The implemented CAD Framework, named Cave2, follows the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the object-oriented framework foundations allowed a series of improvements that were not available in previous approaches:
- Object-oriented frameworks are extensible by design, so the same holds for the implemented sets of design data primitives and design tool building blocks. Both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and such extensions and adaptations still inherit the architectural and functional aspects implemented in the object-oriented framework foundation.
- The design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows different visualization strategies for a given design data set, giving collaborating parties the flexibility to choose individual visualization settings.
- The control of consistency between semantics and visualization, a particularly important issue in a design environment with multiple views of a single design, is also included in the foundations of the object-oriented framework. This mechanism is generic enough to be used by further extensions of the design data model, as it is based on the inversion of control between view and semantics: the view receives the user input and propagates the event to the semantic model, which evaluates whether a state change is possible and, if so, triggers the change of state of both semantics and view (sketched in code at the end of this abstract). Our approach took advantage of this inversion of control and added a layer between semantics and view to account for multi-view consistency.
- To optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his/her respective design views. The information about each interaction is encapsulated inside an event object, which may be propagated to the design semantics, and thus to other possible views, according to the consistency policy in use. Furthermore, the use of event pools allows a late synchronization between view and semantics in case of unavailability of a network connection between them.
- The use of proxy objects significantly raised the abstraction of the integration of design automation resources, as both remote and local tools and services are accessed through method calls on a local object. The connection to remote tools and services using a look-up protocol also completely abstracts the network location of such resources, allowing resource addition and removal during runtime.
- The implemented CAD Framework is completely based on Java technology, relying on the Java Virtual Machine as the layer that grants independence between the CAD Framework and the operating system.
All these improvements contributed to a higher abstraction of the distribution of design automation resources and also introduced a new paradigm for the remote interaction between designers. The resulting CAD Framework is able to support fine-grained collaboration based on events, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment. This can increase group awareness and allow a richer transfer of experiences among designers, significantly improving the collaboration potential when compared to previously proposed file-based or record-based approaches. Three case studies were conducted to validate the proposed approach, each focusing on a subset of the contributions of this thesis. The first uses the proxy-based resource distribution architecture to implement a prototyping platform using reconfigurable hardware modules. The second extends the foundations of the implemented object-oriented framework to support interface-based design; such extensions, design representation primitives and tool blocks, are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study regards the integration of multimedia metadata into the design data model, a possibility explored in the frame of an online educational and training platform.
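A minimal sketch of the inversion of control described above, with hypothetical class and method names rather than Cave2's actual API: the view never applies user input locally; it wraps the input in an event object and forwards it to the semantic model, which validates the change and, if legal, updates itself and refreshes every registered view.

```python
# Sketch of view/semantics inversion of control (hypothetical names).

class SemanticModel:
    def __init__(self):
        self.views = []
        self.state = {}

    def attach(self, view):
        self.views.append(view)

    def handle(self, event):
        if self.is_legal(event):                  # evaluate the state change
            self.state[event["target"]] = event["value"]
            for view in self.views:               # multi-view consistency
                view.refresh(event)

    def is_legal(self, event):
        return event.get("value") is not None     # placeholder policy

class View:
    def __init__(self, name, model):
        self.name, self.model = name, model
        model.attach(self)

    def on_user_input(self, target, value):
        # The event is propagated to the semantics, not applied locally.
        self.model.handle({"target": target, "value": value,
                           "source": self.name})

    def refresh(self, event):
        print(f"{self.name} redraws {event['target']} = {event['value']}")

model = SemanticModel()
schematic, layout = View("schematic", model), View("layout", model)
schematic.on_user_input("net42", "renamed_clk")   # both views refresh
```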

Relevance:

20.00%

Publisher:

Abstract:

Decision-making models need to reflect aspects of human psychology. With this goal, this work builds on the Sparse Distributed Memory (SDM), a psychologically and neuroscientifically plausible model of human memory published by Pentti Kanerva in 1988. Kanerva's model has a critical point: a memory item below this point is quickly retrieved, while items beyond the critical point are not. Kanerva calculated this point for a special case with a particular set of fixed parameters. In this work we extend the knowledge of this critical point through computer simulations and analyze the behavior of this "Critical Distance" under different scenarios: different dimensions; different numbers of items stored in memory; and different numbers of times the item was stored. We also derive a function that, when minimized, determines the value of the "Critical Distance" according to the state of the memory. A secondary goal of this work is to present the SDM in a simple and intuitive way, so that researchers from other fields can envision how it may help them understand and solve their own problems.
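A minimal sketch of an SDM, to make the critical-distance experiments above concrete; the dimension, number of hard locations, activation radius, and noise level below are illustrative choices, not the parameter sets studied in the work.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256        # word length (dimension)
M = 2000       # number of hard locations
R = 112        # activation radius (Hamming distance)

addresses = rng.integers(0, 2, size=(M, N))   # fixed random hard locations
counters = np.zeros((M, N), dtype=int)        # N signed counters per location

def activated(addr):
    """Mask of hard locations within Hamming radius R of addr."""
    return np.count_nonzero(addresses != addr, axis=1) <= R

def write(addr, word):
    """Add +1/-1 to the counters of every activated location."""
    counters[activated(addr)] += 2 * word - 1  # bit 1 -> +1, bit 0 -> -1

def read(addr):
    """Majority rule over the counters of the activated locations."""
    return (counters[activated(addr)].sum(axis=0) > 0).astype(int)

# Store random items autoassociatively, then probe with a noisy cue:
items = rng.integers(0, 2, size=(100, N))
for w in items:
    write(w, w)

cue = items[0].copy()
cue[rng.choice(N, size=50, replace=False)] ^= 1   # flip 50 of 256 bits
for _ in range(10):
    cue = read(cue)                               # iterated reading
print("recovered:", np.array_equal(cue, items[0]))
```

Iterated reading converges back to the stored item when the cue lies closer than the critical distance, and fails to converge beyond it, which is exactly the boundary the simulations above map out.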

Relevance:

20.00%

Publisher:

Abstract:

As enterprises grow and the need to share information across departments and business areas becomes more critical, companies are turning to integration to provide a method for interconnecting heterogeneous, distributed and autonomous systems. Whether the sales application needs to interface with the inventory application or the procurement application needs to connect to an auction site, it seems that any application can be made better by integrating it with other applications. Integration between applications can face several difficulties due to the fact that applications may not have been designed and implemented with integration in mind. Regarding integration issues, two-tier software systems, composed of the database tier and the "front-end" tier (interface), have shown some limitations. As a solution to overcome the two-tier limitations, three-tier systems were proposed in the literature. By adding a middle tier (referred to as middleware) between the database tier and the "front-end" tier (or simply the application), three main benefits emerge. The first benefit is that the division of software systems into three tiers enables increased integration capabilities with other systems. The second is that modifications to the individual tiers may be carried out without necessarily affecting the other tiers and integrated systems, and the third, a consequence of the others, is that fewer maintenance tasks are required in the software system and in all integrated systems. Concerning software development in three tiers, this dissertation focuses on two emerging technologies, the Semantic Web and Service-Oriented Architecture, combined with middleware. Blending these two technologies with middleware resulted in the development of the Swoat framework (Service and Semantic Web Oriented ArchiTecture), which leads to the following four synergistic advantages: (1) it allows the creation of loosely coupled systems, decoupling the database from "front-end" tiers and therefore reducing maintenance; (2) the database schema is transparent to "front-end" tiers, which are aware only of the information model (or domain model) that describes what data is accessible; (3) integration with other heterogeneous systems is enabled through services provided by the middleware; (4) the service request by the "front-end" tier focuses on 'what' data is needed and not on 'where' and 'how' it is obtained, thereby reducing application development time.
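A minimal sketch of the "what, not where/how" idea behind such a middleware tier; the class, table and concept names are hypothetical, and plain SQL stands in for Swoat's actual Semantic Web services. The front end asks for a domain-model concept, and only the middleware knows which schema objects back it.

```python
import sqlite3

class Middleware:
    """Translates domain-model requests ('what') into SQL ('where'/'how')."""
    # Hypothetical mapping: domain concept -> (table, columns).
    SCHEMA_MAP = {"Customer": ("tbl_cust_v2", ["cust_id", "cust_name"])}

    def __init__(self, conn):
        self.conn = conn

    def fetch(self, concept):
        table, cols = self.SCHEMA_MAP[concept]
        rows = self.conn.execute(f"SELECT {', '.join(cols)} FROM {table}")
        # Results are exposed under domain-model names, not schema names.
        return [dict(zip(cols, r)) for r in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_cust_v2 (cust_id INTEGER, cust_name TEXT)")
conn.execute("INSERT INTO tbl_cust_v2 VALUES (1, 'Alice')")

mw = Middleware(conn)
print(mw.fetch("Customer"))   # the front end never sees the table name
```

Because the front end depends only on the domain model, the underlying schema can be renamed or restructured by changing the middleware mapping alone, which is the maintenance benefit claimed above.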

Relevance:

20.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

20.00%

Publisher:

Abstract:

In the present work, we report the use of bacterial colonies to optimize the macroarray technique. The devised system is significantly cheaper than other methods available to detect large-scale differential gene expression. Recombinant Escherichia coli clones containing plasmid-encoded copies of 4,608 individual expressed sequence tags (ESTs) were robotically spotted onto nylon membranes, which were incubated for 6 and 12 h to allow the bacteria to grow and, consequently, amplify the cloned ESTs. The membranes were then hybridized with a beta-lactamase gene-specific probe from the recombinant plasmid and subsequently phosphorimaged to quantify the microbial cells. Variance analysis demonstrated that the spot hybridization signal intensity was similar for 3,954 ESTs (85.8%) after 6 h of bacterial growth. Membranes spotted with bacterial colonies grown for 12 h had 4,017 ESTs (87.2%) with comparable signal intensity, but the signal-to-noise ratio was fivefold higher. Taken together, the results of this study indicate that it is possible to investigate large-scale gene expression using macroarrays based on bacterial colonies grown on membranes for 6 h.

Relevance:

20.00%

Publisher:

Abstract:

This study presents the implementation and embedding of an Artificial Neural Network (ANN) in hardware, in a programmable device such as a field-programmable gate array (FPGA). The work explored different implementations, described in VHDL, of multilayer perceptron ANNs. Despite the parallelism inherent to ANNs, software implementations are at a disadvantage because of the sequential nature of Von Neumann architectures. As an alternative, a hardware implementation makes it possible to exploit all the parallelism implicit in this model. There is currently increasing use of FPGAs as a platform for implementing neural networks in hardware, exploiting their high processing power, low cost, ease of programming and circuit reconfigurability, which allows the network to adapt to different applications. In this context, the aim is to develop arrays of neural networks in hardware with a flexible architecture, in which it is possible to add or remove neurons and, above all, modify the network topology, in order to enable a modular fixed-point-arithmetic network on an FPGA. Five VHDL descriptions were synthesized: two for neurons with one or two inputs, and three for different ANN architectures. The architecture descriptions are highly modular, easily allowing the number of neurons to be increased or decreased. As a result, several complete neural networks were implemented on an FPGA, in fixed-point arithmetic, with high-capacity parallel processing.
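A minimal sketch of the fixed-point multiply-accumulate that such a hardware neuron performs; the Q4.12 format and the cheap piecewise-linear activation are illustrative assumptions, not the parameters of the architectures above.

```python
# Fixed-point neuron sketch: integer MAC as an FPGA DSP slice would do it.

FRAC_BITS = 12                      # assumed Q4.12 fixed-point format
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    return int(round(x * SCALE))

def neuron(inputs, weights, bias):
    acc = to_fixed(bias)
    for x, w in zip(inputs, weights):
        # Integer multiply, then shift to drop the extra fractional bits.
        acc += (to_fixed(x) * to_fixed(w)) >> FRAC_BITS
    # Hard sigmoid: a piecewise-linear activation common in hardware,
    # cheaper than a true sigmoid (which needs a lookup table).
    acc = max(0, min(SCALE, acc // 2 + SCALE // 2))
    return acc / SCALE

print(neuron([0.5, -0.25], [0.8, 0.4], 0.1))
```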

Relevance:

20.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

20.00%

Publisher:

Abstract:

This paper presents, through a set of guidelines, how to apply the conservative distributed simulation paradigm (the CMB protocol) to develop efficient applications. Using these guidelines, even a user with little experience in distributed simulation and computer architecture can obtain good performance in distributed simulations that use conservative synchronization protocols for parallel processes. The set of guidelines focuses on a specific application domain, the performance evaluation of computer systems, considering models with coarse granularity and few logical processes, running over two platforms: parallel (a high-performance communication environment) and distributed (a low-performance communication environment).
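A minimal sketch of the CMB null-message rule such guidelines build on; names and the printed trace are illustrative, not the paper's code. A logical process may safely execute an event only if its timestamp does not exceed the minimum clock of all input channels, and null messages carry lookahead promises to avoid deadlock.

```python
import heapq

class LogicalProcess:
    def __init__(self, name, lookahead):
        self.name = name
        self.lookahead = lookahead
        self.clock = 0.0
        self.events = []          # local future-event list (min-heap)
        self.channel_clocks = {}  # last timestamp seen on each input channel

    def receive(self, src, timestamp, payload=None):
        self.channel_clocks[src] = timestamp
        if payload is not None:   # null messages carry no payload
            heapq.heappush(self.events, (timestamp, payload))

    def safe_time(self):
        return min(self.channel_clocks.values(), default=float("inf"))

    def step(self, neighbors):
        # Execute every event proven safe by the channel clocks.
        while self.events and self.events[0][0] <= self.safe_time():
            self.clock, payload = heapq.heappop(self.events)
            print(f"{self.name} executes {payload} at t={self.clock}")
        # Null message: promise nothing earlier than clock + lookahead.
        for nb in neighbors:
            nb.receive(self.name, self.clock + self.lookahead)

a, b = LogicalProcess("A", 1.0), LogicalProcess("B", 1.0)
b.receive("A", 2.0, "job")   # a real event from A
a.receive("B", 0.0)          # null message bootstraps A's channel clock
b.step([a]); a.step([b])
```

The coarse granularity and small number of logical processes mentioned above matter precisely because each null message costs a communication, which is far more expensive on the distributed platform than on the parallel one.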

Relevance:

20.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a new approach to developing Field Programmable Analog Arrays (FPAAs) that avoids an excessive number of programming elements in the signal path, thus enhancing performance. The paper also introduces a novel FPAA architecture devoid of the conventional switching and connection modules. The proposed FPAA is based on simple current-mode sub-circuits. A straightforward methodology has been employed for programming the Configurable Analog Cell (CAC). The current-mode approach enables the FPAA presented here to operate over almost three decades of frequency. We have demonstrated the feasibility of the FPAA by implementing several signal processing functions.

Relevance:

20.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

20.00%

Publisher:

Abstract:

Energy policies and technological progress in the development of wind turbines have made wind power the fastest-growing renewable power source worldwide. The inherent variability of this resource requires special attention when analyzing the impacts of high penetration on the distribution network. A time-series steady-state analysis is proposed that assesses technical issues such as energy export, losses, and short-circuit levels. A multiobjective programming approach based on the nondominated sorting genetic algorithm (NSGA) is applied in order to find configurations that maximize the integration of distributed wind power generation (DWPG) while satisfying voltage and thermal limits. The approach has been applied to a medium-voltage distribution network considering hourly demand and wind profiles for part of the U.K. The Pareto optimal solutions obtained highlight the drawbacks of using a single demand and generation scenario, and indicate the importance of appropriate substation voltage settings for maximizing the connection of DWPG.
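A minimal sketch of the nondominated sorting step at the core of NSGA; the candidate objective values are made-up illustrations, and the real objectives above come from the time-series network analysis rather than fixed numbers.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives minimized here)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_fronts(points):
    """Peel off successive Pareto fronts, best first."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Illustrative (negative energy export, losses) pairs for candidate
# DWPG configurations; export is negated so both axes are minimized.
candidates = [(-10, 5.0), (-8, 3.0), (-10, 4.0), (-6, 2.5), (-8, 4.5)]
for i, front in enumerate(nondominated_fronts(candidates), 1):
    print(f"front {i}: {front}")
```

The first front is the Pareto set reported to the planner; NSGA then biases selection toward earlier fronts while a diversity measure spreads solutions along each front.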

Relevance:

20.00%

Publisher:

Abstract:

In this paper an efficient algorithm for probabilistic analysis of unbalanced three-phase weakly meshed distribution systems is presented. The algorithm uses the Two-Point Estimate Method to calculate the probabilistic behavior of the system's random variables. Additionally, the deterministic analysis of the state variables is performed by means of a Compensation-Based Radial Load Flow (CBRLF), which efficiently exploits the topological characteristics of the network. To deal with distributed generation, a strategy to incorporate a simplified generator model into the CBRLF is proposed: depending on the type of control and the generator operating conditions, the node with distributed generation can be modeled as either a PV or a PQ node. To validate the efficiency of the proposed algorithm, the IEEE 37-bus test system is used, and the probabilistic results are compared with those obtained using the Monte Carlo method.
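For reference, in its simplest zero-skewness form (the general scheme also shifts the evaluation points by the input skewness, and the paper's exact variant may differ), the two-point estimate method replaces random sampling with two deterministic load-flow runs per random variable. For $m$ input random variables $p_1, \dots, p_m$ with means $\mu_{p_k}$ and standard deviations $\sigma_{p_k}$, each output $y$ of the deterministic load flow $F$ is estimated from $2m$ evaluations:

$$p_{k,i} = \mu_{p_k} \pm \sqrt{m}\,\sigma_{p_k}, \qquad w_{k,i} = \frac{1}{2m}, \qquad i = 1, 2,$$

$$E[y^j] \approx \sum_{k=1}^{m} \sum_{i=1}^{2} w_{k,i} \left( F(\mu_{p_1}, \dots, p_{k,i}, \dots, \mu_{p_m}) \right)^j,$$

so that the output mean is $\mu_y = E[y]$ and the standard deviation is $\sigma_y = \sqrt{E[y^2] - \mu_y^2}$. This is why the method is so much cheaper than the Monte Carlo benchmark: $2m$ load flows instead of thousands of samples.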

Relevance:

20.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)