995 results for Chip-tool interfaces
Abstract:
This Master's thesis discusses the development of the design and testing environment for mobile phone user interface software at Nokia Mobile Phones. Two software modules were added to the environment to assist simulation and version control. With the visualization tool, the phone's behavior can be traced in the design diagrams as state transitions, while the comparison application shows the differences between diagrams graphically. The developed applications improve the user interface design process by making error detection, optimization, and version control more efficient. The benefits of the visualization tool are significant, because the behavior of user interface applications can be observed in the design diagrams during real-time simulation, so errors can be located immediately. In addition, the tool can be used when optimizing the diagrams, reducing application size and memory requirements. The graphical comparison tool benefits concurrent software development: differences between versions of design diagrams can be seen directly in the diagram instead of through manual comparison. Both tools were successfully deployed at NMP at the beginning of 2001.
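The abstract does not describe the comparison tool's internals, but its core idea can be sketched: if each diagram version is reduced to a set of state transitions, the graphical diff amounts to set differences. The `diff_statecharts` function, states, and events below are hypothetical illustrations, not the NMP tools themselves.

```python
# Minimal sketch of a statechart diff, assuming each design diagram
# can be reduced to a set of (source, event, target) transitions.
# The states and events are invented examples.

def diff_statecharts(old, new):
    """Return the transitions removed from `old` and added in `new`."""
    return old - new, new - old

v1 = {("Idle", "KeyPress", "Menu"), ("Menu", "Back", "Idle")}
v2 = {("Idle", "KeyPress", "Menu"), ("Menu", "Select", "Call")}

removed, added = diff_statecharts(v1, v2)
print("removed:", removed)  # {('Menu', 'Back', 'Idle')}
print("added:", added)      # {('Menu', 'Select', 'Call')}
```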
Abstract:
Rapid technological development and the competitive pressure brought by internationalization force companies into continuous business process development. Changes in organizational structures and company processes have become common measures. One of the most visible operational reforms has been the adoption of Enterprise Resource Planning (ERP) systems. The structure and development of an ERP system usually cause the greatest difficulties when attempting to build an information system environment that makes business processes transparent. In this study, the alignment of business processes with ERP processes was carried out using so-called ERP change tools. These change tools are organized within companies' information system environments, and they can be used to fix technical problems as well as to change the processes themselves. The empirical part of the study used a case study method at the process development department of Kone Oyj. The goal of the study was to improve the processes of the ERP change tools in order to integrate and harmonize business processes with the ERP system. To meet this goal, process management concepts were used to find improvement proposals for the change tool processes. These concepts mean studying and exploiting the process map, the process activities, and the process costs; they also include describing models for the continuous improvement and reengineering of business processes. A description of the ERP environment, as the second part of the theory, provides a foundation for the use of the change tool processes. As results of the study, it can be stated that the research area is very complex and difficult. Little theory has been written about ERP systems, apart from studies carried out by companies themselves. Nevertheless, improvement proposals and characteristics of a so-called optimal process model were found for the processes examined in the study.
Abstract:
Nanotechnology can be viewed as a powerful tool, capable of shaping the chemistry of atoms and molecules, converting them into exciting nanosized and nanostructured materials, devices and machines. However, in pursuing this task, an exceptional ability is required to deal with the complex inter- and multidisciplinary approaches imposed by the nanoscale. A new research organization framework, capable of promoting cooperative interactions in many complementary areas, including industry, is needed. In this sense, interesting examples are the nanotechnology networks and millennium institutes recently created in Brazil. The highlights and weaknesses of such cooperative research networks are discussed, in addition to relevant nanotechnology themes focusing on the special needs and resources of developing nations.
Abstract:
As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high-performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on performance and power consumption in on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router allowing a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs via efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level. Moreover, in order to increase memory parallelism and provide compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated in the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have been emerging as a viable candidate to achieve better performance and package density compared to traditional 2D ICs. In addition, combining the benefits of the 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are introduced to mitigate the inter-layer footprint and power dissipation on each layer with a small performance penalty.
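The abstract does not spell out the routing algorithms themselves; the sketch below only illustrates the general idea of congestion-aware output selection in a 2D mesh: among the minimal (distance-reducing) output directions, forward toward the neighbor reporting the lowest congestion. The congestion metric, its values, and the tie-breaking are assumptions, and a real design would also need a deadlock-avoidance restriction (e.g., a turn model), which is omitted here.

```python
# Sketch of congestion-aware minimal adaptive routing in a 2D mesh.
# `congestion` maps a neighboring router's coordinates to an assumed
# congestion measure (e.g., buffer occupancy); values are invented.

def route(current, dest, congestion):
    x, y = current
    dx, dy = dest[0] - x, dest[1] - y
    candidates = []
    if dx:
        candidates.append((x + (1 if dx > 0 else -1), y))
    if dy:
        candidates.append((x, y + (1 if dy > 0 else -1)))
    if not candidates:
        return None  # packet has arrived
    # Output selection: prefer the least congested minimal direction.
    return min(candidates, key=lambda n: congestion.get(n, 0))

pos, dest, hops = (0, 0), (2, 2), []
congestion = {(1, 0): 5, (0, 1): 1, (1, 1): 2, (2, 1): 0, (1, 2): 3}
while pos != dest:
    pos = route(pos, dest, congestion)
    hops.append(pos)
print(hops)  # path bends around the congested (1, 0) router
```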
Abstract:
The rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated on a single chip. An emerging challenge is the implementation of reliable and efficient interconnection between these cores as well as other components in the systems. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to congestion and system control, for instance. Additionally, faults can cause problems in multiprocessor systems. These faults can be transient faults or permanent manufacturing faults, or they can appear due to aging. To solve the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes, and it should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long. Therefore, the system should be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems. The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
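As a minimal sketch of the clustering idea, assuming each cluster head aggregates load reports only from nearby routers (avoiding long-distance traffic to a central controller), a cluster might grow toward loaded regions and shrink in quiet ones. The thresholds and the grow/shrink rule below are assumptions, not the thesis's actual policy.

```python
# Sketch of dynamically clustered monitoring in a mesh: a cluster
# head adjusts its cluster based on locally reported router loads.
# Load values, thresholds, and the adjustment rule are invented.

def cluster_load(loads, members):
    return sum(loads[m] for m in members) / len(members)

def adjust_cluster(loads, members, neighbors, hi=0.8, lo=0.3):
    """Grow toward the busiest neighbor when average load is high,
    drop the quietest member when the region is idle."""
    avg = cluster_load(loads, members)
    if avg > hi and neighbors:
        members = members | {max(neighbors, key=lambda r: loads[r])}
    elif avg < lo and len(members) > 1:
        members = members - {min(members, key=lambda r: loads[r])}
    return members

loads = {(0, 0): 0.9, (0, 1): 0.85, (1, 0): 0.7, (1, 1): 0.2}
members = {(0, 0), (0, 1)}
neighbors = {(1, 0), (1, 1)}
print(adjust_cluster(loads, members, neighbors))  # grows to (1, 0)
```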
Abstract:
With the growth of new technologies, using online tools has become part of everyday life. This has a strong impact on researchers, as the data obtained from various experiments needs to be analyzed, and knowledge of programming has become mandatory even for pure biologists. Hence, VTT developed a new tool, R Executables (REX), a web application designed to provide a graphical interface for biological data functions such as image analysis, gene expression data analysis, plotting, and disease and control studies, which employs R functions to produce results. REX provides an interactive application in which biologists can directly enter values and run the required analysis with a single click. The program processes the given data in the background and returns results rapidly. Due to the growth of data and the load on the server, the interface had developed problems concerning time consumption, a poor GUI, data storage issues, security, a minimally interactive user experience, and crashes with large amounts of data. This thesis describes the methods by which these problems were resolved, making REX a better application for the future. The old REX was developed using Python Django; the new implementation uses Vaadin, a Java framework for developing web applications whose programming model is essentially Java with rich new components. Vaadin provides better security, better speed, and a good, interactive interface. In this thesis, a subset of REX's functionality was selected, including IST bulk plotting and image segmentation, and implemented using Vaadin. 662 lines of code were written by the author, with Vaadin as the front-end handler while the R language was used for back-end data retrieval, computation, and plotting. The application is structured so that further functionality can be migrated with ease from the old REX. Future development is focused on including high-throughput screening functions along with gene expression database handling.
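The thesis code itself is not shown here, and its front end is Java/Vaadin; purely as a language-neutral sketch of the front-end/back-end split described above, the snippet below shells out to the standard `Rscript` command line to render a plot. It assumes R is installed and `Rscript` is on the PATH; the plotting expression is a placeholder, not a REX function.

```python
# Sketch of a web back end delegating plotting to R via Rscript,
# mirroring the front-end/back-end split described in the abstract.
import os
import subprocess
import tempfile

def run_r_plot(values, out_png):
    """Render a simple line plot of `values` to `out_png` using R."""
    data = ",".join(str(v) for v in values)
    r_code = f'png("{out_png}"); plot(c({data}), type="l"); dev.off()'
    # Any failure in the R code surfaces as a non-zero exit status.
    subprocess.run(["Rscript", "-e", r_code], check=True)

# Forward slashes keep the path valid inside the R string literal.
out = os.path.join(tempfile.gettempdir(), "rex_demo.png").replace("\\", "/")
run_r_plot([1.0, 2.5, 2.0, 3.5], out)
print("plot written to", out)
```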
Abstract:
Transcription factors are specialized proteins that play an important role in various biological processes such as differentiation, the cell cycle, and tumorigenesis. They regulate gene transcription by binding to specific DNA sequences (cis-regulatory elements). The identification of these elements is a crucial step in understanding gene regulatory networks. With the advent of high-throughput sequencing technologies, the identification of all functional elements in genomes, including genes and cis-regulatory elements, has advanced considerably. While the number of genes in different species has been estimated, information about the elements that control and orchestrate the regulation of these genes remains poorly defined. Thanks to ChIP-chip and ChIP-sequencing techniques, it is possible to identify all the regions of the genome that are bound by a transcription factor of interest. Several computational approaches have been developed to predict transcription factor binding sites. These approaches fall into two main categories: enumerative and probabilistic algorithms. However, several studies have shown that these approaches generate high rates of false negatives and false positives, which makes the interpretation of the results, and consequently their experimental validation, difficult. In this thesis, we pursued two objectives. The first was to develop a new approach for the discovery of transcription factor binding sites (SAMD-ChIP) adapted to ChIP-chip and ChIP-sequencing data. Our approach implements a hybrid algorithm that combines the enumerative and probabilistic strategies in order to exploit the strengths of each. Compared to existing motif discovery tools, our approach demonstrated its performance on simulated datasets and on ChIP-chip and ChIP-sequencing datasets. SAMD-ChIP also has the advantage of exploiting the distributional properties of transcription factor binding sites around the center of the bound regions, limiting prediction to motifs that are enriched within a fixed-length window around the center of these regions. Transcription factors rarely act alone. They often form complexes that interact with DNA to regulate their target genes. These interactions involve transcription factors whose DNA binding sites are located close to one another or are mediated by chromatin loops. Our second objective was to exploit the spatial proximity of transcription factor binding sites in ChIP-chip and ChIP-sequencing regions to develop an approach for predicting composite motifs (motifs composed of two sites separated by a fixed-size spacer). We tested this module by predicting the co-localization of the two ERE half-sites that form the ERE site bound by the estrogen receptor ERα. This module was incorporated into our motif discovery tool SAMD-ChIP.
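As a toy illustration of the enumerative half of such a hybrid strategy, and of restricting the search to a fixed-length window around the region centers, the following counts k-mer occurrences near the center of each bound region. The sequences and parameters are invented; SAMD-ChIP's actual scoring and probabilistic refinement are not reproduced.

```python
# Toy enumerative motif scan: count k-mers within a fixed-length
# window around the center of each ChIP region, echoing the
# center-enrichment idea described in the abstract.
from collections import Counter

def central_kmers(regions, k=4, window=10):
    counts = Counter()
    for seq in regions:
        mid = len(seq) // 2
        sub = seq[max(0, mid - window // 2): mid + window // 2]
        for i in range(len(sub) - k + 1):
            counts[sub[i:i + k]] += 1
    return counts

regions = [  # invented sequences sharing a centered CACGTG-like core
    "ACGTGGTCACGTGACCAGTT",
    "TTGACCGGTCACGTGAGGCA",
    "GGTCACGTGACCTTAACGTA",
]
for kmer, n in central_kmers(regions).most_common(3):
    print(kmer, n)  # the shared core's k-mers dominate the counts
```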
Abstract:
A Web-based tool developed to automatically correct relational database schemas is presented. This tool has been integrated into a more general e-learning platform and is used to reinforce teaching and learning in database courses. The platform assigns each student a set of database problems selected from a common repository. The student has to design a relational database schema and enter it into the system through a user-friendly interface specifically designed for this purpose. The correction tool checks the design and shows any detected errors. The student then has the chance to correct them and submit a new solution. These steps can be repeated as many times as required until a correct solution is obtained. Currently, this system is being used in different introductory database courses at the University of Girona with very promising results.
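The paper's actual correction engine is not described here; as a minimal sketch of the check-and-resubmit loop, one can compare a student's schema against a reference solution and report differences. The schema representation below is a hypothetical choice, and a real checker must also accept legitimate alternative designs (renamings, equivalent decompositions), which this toy does not.

```python
# Minimal sketch of automatic schema checking: compare a student's
# relational schema against a reference and report the differences.

def check_schema(student, reference):
    errors = []
    for table, ref in reference.items():
        if table not in student:
            errors.append(f"missing table: {table}")
            continue
        sub = student[table]
        for col in ref["columns"] - sub["columns"]:
            errors.append(f"{table}: missing column {col}")
        if sub["pk"] != ref["pk"]:
            errors.append(f"{table}: primary key should be {ref['pk']}")
    return errors

reference = {"Student": {"columns": {"id", "name", "email"}, "pk": "id"}}
attempt = {"Student": {"columns": {"id", "name"}, "pk": "name"}}
for e in check_schema(attempt, reference):
    print(e)  # the student corrects these and submits again
```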
Abstract:
The Virtual Lightbox for Museums and Archives (VLMA) is a tool for collecting and reusing, in a structured fashion, the online contents of museum and archive datasets. It is not restricted to datasets with visual components, although VLMA includes a lightbox service that enables comparison and manipulation of visual information. With VLMA, one can browse and search collections, construct personal collections, annotate them, export these collections to XML or Impress (OpenOffice) presentation format, and share collections with other VLMA users. VLMA was piloted as an e-learning tool as part of JISC's e-Learning focus in its first phase (2004-2005); in its second phase (2005-2006) it incorporated new partner collections while improving and expanding interfaces and services. This paper concerns its development as a research and teaching tool, especially for teachers using museum collections, and discusses the recent development of VLMA.
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for reducing design complexity, but brings new challenges for the test of the final circuit. Access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the problems that need to be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and the integration of test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expansion of concerns makes an efficient, yet fine-grained, search in the huge design space of a reuse-based environment possible. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes the reuse of the on-chip network for the test of the cores embedded in systems that use this communication platform. A power-aware test scheduling algorithm aiming at exploiting the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated considering a number of system configurations, such as different positions of the cores in the network, power consumption constraints, and the number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, whereas area and pin overhead are strongly minimized. In this manuscript, the main problems of the test of core-based systems are first identified and the current solutions are discussed. The problems tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the recently released ITC'02 SoC Test Benchmarks and further compared to other test planning methods from the literature. This comparison confirms the efficiency of the proposed methods.
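The thesis's scheduling model is far richer, but the core of power-aware test scheduling can be sketched as a greedy list scheduler: at each step, start the longest pending core test that still fits under the power ceiling. The per-core (time, power) figures below are invented, and the sketch assumes each test alone fits the budget.

```python
# Greedy sketch of power-constrained test scheduling: run core tests
# in parallel as long as total power stays under the budget.

def schedule(tests, power_budget):
    # Longest tests first; each entry is (core, (length, power)).
    pending = sorted(tests.items(), key=lambda kv: -kv[1][0])
    t, running, order = 0, [], []
    while pending or running:
        if running:
            # Advance time to the earliest finishing test and retire it.
            t = min(running)[0]
            running = [r for r in running if r[0] > t]
        used = sum(p for _, p in running)
        for core, (length, power) in pending[:]:
            if used + power <= power_budget:
                running.append((t + length, power))
                used += power
                order.append((core, t))
                pending.remove((core, (length, power)))
    return order

tests = {"cpu": (50, 4), "dsp": (30, 3), "mem": (40, 5), "io": (10, 2)}
print(schedule(tests, power_budget=8))
# [('cpu', 0), ('dsp', 0), ('io', 30), ('mem', 50)]
```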
Abstract:
Considering the constant technological developments in the aeronautical, space, automotive, shipbuilding, nuclear and petrochemical fields, among others, materials with high mechanical strength at high temperatures have been increasingly used. Among the materials that meet the mechanical strength and corrosion requirements at temperatures around 815 °C is the nickel-base alloy Pyromet 31V (SAE HEV8). This alloy is commonly applied in the manufacture of exhaust valves for high-power diesel engines, where high resistance to sulphidation and corrosion and good creep resistance are required. However, due to its high mechanical strength and low thermal conductivity, its machinability is difficult, creating major challenges in analyzing the best combinations of machining parameters and cutting tools. Its low thermal conductivity results in a concentration of heat at high temperatures at the workpiece-tool and tool-chip interfaces, consequently accelerating tool wear and increasing production costs. This work studied the machinability of the hot-rolled Pyromet 31V alloy, with hardness between 41.5 and 42.5 HRC, using coated and uncoated carbide tools. The nickel-base alloy used consists essentially of 56.5% Ni, 22.5% Cr, 2.2% Ti, 0.04% C, 1.2% Al, 0.85% Nb, and the balance iron. By turning this alloy we were able to analyze the wear mechanisms acting on the tools and evaluate the roughness produced under the cutting parameters used. The tests were performed on a CNC lathe using the coated carbide tool TNMG 160408-23 class 1005 (ISO S15) and the uncoated tool TNMG 160408-23 class H13A (ISO S15). Cutting fluid was applied abundantly; cutting speeds were fixed at 75 and 90 m/min, feed rates ranged over 0.12, 0.15, 0.18 and 0.21 mm/rev, and the depth of cut was 0.8 mm. Comparing uncoated and coated tools, the former achieved a machined length of just 30% of that reached by the latter. The coated tool obtained its best result, for both 75 and 90 m/min, at a feed rate of 0.15 mm/rev, unlike the uncoated tool, which obtained its best results at 0.12 mm/rev.
Abstract:
As hardware and software technologies advance, the development models of computational systems are also changing. New methodologies for user interface specification are being created based on user interface description languages (UIDLs). UIDLs provide a precise description in a language with a higher level of abstraction, independent of how the interface will be implemented. A major problem is that, even with these methodologies, there is still a big distance between a UIDL and its design, that is, between the abstract and the concrete. The tool BRIDGE (Interface Design Generator Environment) was created with the intention of being a bridge between a specification language (the Interactive Message Modeling Language, IMML) and its implementation in Java, linking the abstract (specification) to the concrete (implementation). IMML is a model-based language that allows the designer to work at distinct abstraction levels, each model being a distinct abstraction level. IMML is an XML language that builds on the concepts of Semiotic Engineering, which treats the computational system, with its user interface and interface elements, as a metacommunicative artifact: these elements must transmit a message to the user about which task is to be performed and how to reach that goal. With BRIDGE, we intend to provide extensive support for the design task, user interface prototyping being the greatest of them. BRIDGE makes design easier and more intuitive, starting from an interface specification language.
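IMML's actual schema is not reproduced in the abstract, so the XML below is a made-up stand-in; the sketch only shows the kind of specification-to-widget mapping a bridge tool performs when turning an abstract UIDL description into concrete Java UI elements.

```python
# Sketch of mapping an abstract XML UI specification to concrete
# widgets, in the spirit of BRIDGE/IMML. The element names and the
# widget table are hypothetical, not the real IMML vocabulary.
import xml.etree.ElementTree as ET

SPEC = """
<interface>
  <command label="Save file"/>
  <input name="filename" type="text"/>
</interface>
"""

WIDGETS = {"command": "JButton", "input": "JTextField"}

def generate(spec_xml):
    root = ET.fromstring(spec_xml)
    for elem in root:
        widget = WIDGETS.get(elem.tag, "JPanel")  # default container
        props = " ".join(f'{k}="{v}"' for k, v in elem.attrib.items())
        print(f"{widget}: {props}")

generate(SPEC)
# JButton: label="Save file"
# JTextField: name="filename" type="text"
```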
Abstract:
The increased capacity to integrate transistors has permitted the development of complete systems, with several components, on a single chip; these are called SoCs (Systems-on-Chip). However, the interconnection subsystem can limit the scalability of SoCs, as buses do, or can be an ad hoc solution, like a bus hierarchy. Thus, the ideal interconnection subsystem for SoCs is the Network-on-Chip (NoC). NoCs permit simultaneous point-to-point channels between components and can be reused in other projects. However, NoCs can raise the design complexity, the chip area, and the dissipated power. Thus, it is necessary either to modify the way they are used or to change the development paradigm. Hence, a NoC-based system is proposed in which applications are described as packets and executed in each router between source and destination, without traditional processors. To execute applications regardless of the number of instructions and of the NoC dimensions, the spiral complement algorithm was developed, which finds further destinations until all instructions have been executed. The objective, therefore, is to study the feasibility of developing this system, called the IPNoSys system. In this study, a cycle-accurate simulation tool was developed in SystemC to simulate the system; applications were implemented in a packet description language also developed for this study. Through the simulation tool, several results were obtained to evaluate the system's performance. The methodology used to describe an application consists of transforming the high-level application into a data-flow graph, which becomes one or more packets. This methodology was used for three applications: a counter, DCT-2D, and float add. The counter was used to evaluate a deadlock solution and to execute a parallel application. The DCT was used for comparison against the STORM platform. Finally, the float add aimed to evaluate the efficiency of a software routine performing an unimplemented hardware instruction. The simulation results confirm the feasibility of developing the IPNoSys system. They show that it is possible to execute applications described as packets, sequentially or in parallel, without interruptions caused by deadlock, and also that the execution time of IPNoSys is better than that of the STORM platform.
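The spiral complement algorithm itself is not specified in the abstract; the sketch below only illustrates the underlying idea of a packet that carries its own instructions and is partially executed at each router along its path. The ALU operations and the simple XY path are illustrative assumptions, not the IPNoSys design.

```python
# Sketch of executing a packet's instructions in the routers along
# its path. If instructions outnumber hops, a real system like
# IPNoSys would pick a further destination and continue; this toy
# assumes the path is long enough.

def xy_path(src, dst):
    x, y = src
    while (x, y) != dst:  # X-first, then Y, yielding each hop
        if x != dst[0]:
            x += 1 if dst[0] > x else -1
        else:
            y += 1 if dst[1] > y else -1
        yield (x, y)

def execute_packet(src, dst, instructions, acc=0):
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    todo = list(instructions)
    for router in xy_path(src, dst):
        if todo:  # each router executes one pending instruction
            op, val = todo.pop(0)
            acc = ops[op](acc, val)
            print(f"router {router}: {op} {val} -> acc={acc}")
    return acc

# (0 + 2 + 3) * 4, computed hop by hop between (0, 0) and (2, 1)
print(execute_packet((0, 0), (2, 1), [("add", 2), ("add", 3), ("mul", 4)]))
```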
Abstract:
This study analyzes the cultural, political, and organizational interfaces of the "Caminhos do Frio Rota Cultural" Project in the context of tourism regionalization in the Brejo region of Paraíba. It presents the characterization, routing, and inventory of the six municipalities in the Project, as well as the identification of the cultural elements used for tourism in the routing of the region, the investigation of its political and organizational articulation, and the verification of the participation of each producing agent in the resulting development of tourism in the Brejo of Paraíba. This is a qualitative, descriptive, and exploratory study, which uses the interpretive paradigm to analyze the environment in which the regionalization of tourism in the Brejo of Paraíba occurs and the social actors involved in this process, in order to pursue the development of the region through culture and tourism. Data were collected on site in the six participating municipalities through interviews with managers, the community, government agencies, and the tourist trade, and through direct observation. With the data analysis it was possible to establish the state of production and of cultural and tourist development in the Brejo region, where culture has become a developmental tool within the tourism industry due to its potential for innovation. It was possible to confirm the region's undisputed vocation for cultural tourism, since other projects are being developed using cultural resources, with a strong influence on regional tourism policies. The main finding was that regional development has triggered a refunctionalization and reappropriation of space, rebuilding a new territorial organization through the development of regional management autonomy, a capacity for collective ownership and use of the economic surplus, a spontaneous process of social inclusion, tourist awareness and mobilization (even if initial and timid), an appreciation of natural and cultural assets by all stakeholders, and especially the identification of the population with its region and its culture. Achieving regional development requires not only economic growth but above all the promotion of endogenous social factors, such as changes in social and cultural values and the integration of social actors in the process. Finally, taking into account the definitions of sustainability, it cannot be said that the development model seen in the Brejo of Paraíba is sustainable; rather, it is a model of regional development based on the unique characteristics of each municipality, which together create a regional identity. The expected results have been met, and thus the viability of developing the region through cultural tourism was demonstrated.
Abstract:
This paper presents specific cutting energy measurements as a function of the cutting speed and tool cutting edge geometry. The experimental work was carried out on a vertical CNC machining center with 7,500 rpm spindle rotation and 7.5 kW power. Hardened ASTM H13 steel (50 HRC) was machined at conventional cutting speed and at high-speed cutting (HSC). TiN-coated carbides with seven different chip breaker geometries were applied in dry tests. A special milling tool holder with only one cutting edge was developed, and the machining forces needed to calculate the specific cutting energy were recorded using a piezoelectric 4-component dynamometer. Workpiece roughness and the chip formation process were also evaluated. The results showed that the specific cutting energy decreased 15.5% when the cutting speed was increased by up to 700%. An increase of 1° in the tool chip breaker chamfer angle led to a reduction in the specific cutting energy of about 13.7% at HSC and 28.6% at conventional cutting speed. Furthermore, the workpiece roughness values evaluated in all test conditions were very low, close to those of typical grinding operations (~0.20 μm). Probable adiabatic shear occurred in chip segmentation at HSC.
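For reference, the quantity reported here is cutting power per unit volume of material removed; with the force measured by the dynamometer it can be written as the standard relation below (a textbook definition, not a formula quoted from the paper):

```latex
% Specific cutting energy: cutting power P_c over material removal
% rate Q_w, with P_c expressed via the principal cutting force F_c
% (from the dynamometer) and the cutting speed v_c.
u_s \;=\; \frac{P_c}{Q_w} \;=\; \frac{F_c \, v_c}{Q_w}
\qquad \left[\mathrm{J/mm^3}\right]
```

For a fixed removal rate, a lower measured force at higher cutting speed translates directly into the lower specific cutting energy the paper reports.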